All Domains
1587 entities found
.env environment file
DataLens Platform deployment uses .env file for configuring environment variables including secrets and URLs.
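For illustration, a minimal .env might look like the following; the variable names are assumptions (DUCKDB_PATH mirrors the analytics.db path listed elsewhere in this catalog, and REDIS_URL reflects RQ's Redis backend), not the platform's actual keys:

```
# Hypothetical example; actual variable names and values may differ
DATABASE_URL=postgresql://user:secret@localhost:5432/datalens
DUCKDB_PATH=/home/ops/datalens/data/duckdb/analytics.db
REDIS_URL=redis://localhost:6379/0
```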
/api/projects.py
/api/projects.py ProjectCreate model
Defines the data structure for creating new projects in the platform, facilitating API validation and input handling in project setup.
/api/projects.py RecommendationsRequest model
Models the structure for requests related to analysis recommendations, aiding API validation for insights generation.
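A framework-free sketch of what these two request models might look like. The platform likely uses Pydantic with FastAPI; the field names and validation rules here are assumptions for illustration only:

```python
from dataclasses import dataclass


@dataclass
class ProjectCreate:
    """Hypothetical shape of the project-creation payload."""
    name: str
    description: str = ""

    def __post_init__(self):
        # Basic input validation, as the API layer would enforce
        if not self.name.strip():
            raise ValueError("project name must not be empty")


@dataclass
class RecommendationsRequest:
    """Hypothetical shape of an analysis-recommendations request."""
    project_id: int
    max_results: int = 5

    def __post_init__(self):
        if self.max_results < 1:
            raise ValueError("max_results must be positive")
```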
/api/v1/discovery endpoints
The Data Discovery feature provides the /api/v1/discovery endpoints for search, consolidate, and preview functionality, which together support intelligent consolidation.
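An in-memory sketch of what the three discovery operations could do; the table catalog, function names, and return shapes here are assumptions, and the real endpoints presumably query the project catalog:

```python
# Hypothetical catalog of tables and their columns
TABLES = {
    "salaries_2023": ["employee", "salary", "department"],
    "budget_2023": ["department", "budget", "year"],
}


def search(term):
    """Return table names whose name or columns mention the term."""
    term = term.lower()
    return [name for name, cols in TABLES.items()
            if term in name.lower() or any(term in c.lower() for c in cols)]


def preview(name, limit=5):
    """Return the metadata a preview endpoint might expose."""
    return {"table": name, "columns": TABLES[name], "limit": limit}


def consolidate(names):
    """Return the shared columns a consolidation could join on."""
    shared = set(TABLES[names[0]])
    for n in names[1:]:
        shared &= set(TABLES[n])
    return sorted(shared)
```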
/api/v1/files/list endpoint
The /api/v1/files/list endpoint originally made synchronous LLM calls, causing timeout issues; these were resolved by delegating to progress tracking and async summary generation.
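The fix described above follows a common pattern: the endpoint returns immediately while a background job does the slow work and reports progress. A minimal stdlib sketch of that pattern, with illustrative names rather than the real API:

```python
import threading


class ProgressTracker:
    """Thread-safe counter a progress endpoint could read from."""

    def __init__(self, total):
        self._lock = threading.Lock()
        self.total = total
        self.done = 0

    def advance(self):
        with self._lock:
            self.done += 1

    def status(self):
        with self._lock:
            return {"done": self.done, "total": self.total,
                    "complete": self.done >= self.total}


def generate_summaries(files, tracker):
    """Stand-in for the async LLM summary job."""
    for f in files:
        # ... call the LLM for file f here ...
        tracker.advance()
```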
/api/v1/progress
The /api/v1/progress endpoint provides the progress tracking that, together with async summary generation, resolved the timeout issues caused by the /api/v1/files/list endpoint's original synchronous LLM calls.
/api/v1/projects/{project_id}/files endpoint
The Frontend uses the /api/v1/projects/{project_id}/files endpoint to retrieve all project files along with their AI summaries.
/app/storage/
/ask-stream endpoint
The Frontend uses the /ask-stream endpoint to stream query responses, and the Backend API uses it to handle long-running queries with streaming responses, preventing clients from perceiving HTTP timeouts.
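The streaming idea can be sketched as a generator that yields partial results, keeping the connection active so clients never perceive a timeout; the chunking scheme here is illustrative, and the real endpoint's framing (e.g. server-sent events) may differ:

```python
def stream_answer(answer, chunk_size=8):
    """Yield the answer in small chunks, as a streaming response would."""
    for i in range(0, len(answer), chunk_size):
        yield answer[i:i + chunk_size]
```

In FastAPI, a generator like this would typically be wrapped in a StreamingResponse so each chunk is flushed to the client as it is produced.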
/backend/app/api/files.py
The RQ job queue is used and managed by /backend/app/api/files.py for async AI summary generation.
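RQ itself backs jobs with Redis and separate worker processes; this stdlib sketch only illustrates the enqueue/worker pattern the files API relies on, with a fake summary in place of the LLM call:

```python
import queue
import threading

jobs = queue.Queue()
results = {}


def worker():
    """Drain the queue until a None sentinel arrives."""
    while True:
        file_id = jobs.get()
        if file_id is None:
            break
        # Stand-in for the real AI summary generation
        results[file_id] = f"summary for {file_id}"


t = threading.Thread(target=worker)
t.start()
for fid in ["f1", "f2"]:
    jobs.put(fid)  # analogous to enqueueing an RQ job
jobs.put(None)
t.join()
```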
/backend/app/models/models.py
Backend data models file; no updates needed. The models directory contains backend/app/models/models.py and backend/app/models/database.py for ORM models and database connections.
/e2e-tests/project-goal.spec.ts
/home/ops/datalens mount (read-only)
/home/ops/datalens/agents/
/home/ops/datalens/data/duckdb/analytics.db
/home/ops/datalens/venv/
/list endpoint
/progress/status endpoint
/tables endpoint
Provides REST API access to database tables for querying and analysis over outbound HTTP.
/validate endpoint
Validation endpoint tests data processing and analysis workflow correctness; currently in development.
100% local platform
The DataLens Project is a 100% local platform with zero cloud API costs and full data privacy.
132 files
The reset-and-reextract.py script performs a full reset and re-extraction of the 132-file SVGV dataset, queuing 132 extraction jobs that the RQ worker processes. The monitor-extraction.sh script tracks extraction progress for the 132 files, including queue size and extracted/pending counts. The project_14 schema in PostgreSQL stores the extracted data, and its file_uploads data entity tracks each file's processing status. The 132 SVGV files map to DuckDB tables that store the extracted budget data, while PostgreSQL owns the metadata for the 132 files in the project catalog.
4-factor relevance score
The Discovery Service uses a 4-factor relevance score to rank tables by matching criteria.
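The four factors are not named in this catalog; the ones used below (name match, column match, table size, recency) and their weights are assumptions chosen purely to illustrate the weighted-score idea:

```python
def relevance_score(table, query):
    """Hypothetical weighted 4-factor relevance score in [0, 1]."""
    q = query.lower()
    name_match = 1.0 if q in table["name"].lower() else 0.0
    col_match = sum(q in c.lower() for c in table["columns"]) / max(len(table["columns"]), 1)
    size = min(table["rows"] / 10_000, 1.0)
    recency = 1.0 if table["recent"] else 0.0
    # Illustrative weights; the real service's weights are unknown
    return 0.4 * name_match + 0.3 * col_match + 0.2 * size + 0.1 * recency
```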
4b52f44 commit
5 standard schemas (Salary, Health, Financial, Budget, Geographic)
Defines 5 standard schemas for common data types, supporting standardized analysis and cross-file joins; implementation is in progress.
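The five schema names come from the entry above; the required columns per schema and the matching logic below are assumptions sketching how a file's columns might be mapped to a standard schema for cross-file joins:

```python
STANDARD_SCHEMAS = {
    "Salary": {"employee", "salary"},
    "Health": {"patient", "diagnosis"},
    "Financial": {"account", "amount"},
    "Budget": {"department", "budget", "year"},
    "Geographic": {"region", "latitude", "longitude"},
}


def match_schema(columns):
    """Return the first standard schema whose required columns are present."""
    for name, required in STANDARD_SCHEMAS.items():
        if required <= set(columns):
            return name
    return None
```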
5-turn dialogue
@MoltBot
The DataLens agent (main agent) relies on @MoltBot as the infrastructure coordinator for requests, change approvals, and port coordination.
@sveltejs/adapter-auto
The npm dev dependency @sveltejs/adapter-auto depends on svelte in the frontend project.
@sveltejs/adapter-node
The npm dev dependency @sveltejs/adapter-node depends on svelte in the frontend project.