Integrations
DELETE /findings/{finding_id}
Dismisses a finding via DELETE /findings/{finding_id}. Together with the update endpoint, it manages the lifecycle of findings in the agent backend.
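A minimal sketch of what such a dismiss endpoint could look like in FastAPI; the in-memory store and 204/404 behavior are assumptions for illustration, not the actual backend implementation.

```python
from fastapi import APIRouter, HTTPException

router = APIRouter()

# In-memory stand-in for the real persistence layer (assumption).
_findings: dict[int, dict] = {1: {"summary": "Revenue dipped in Q3"}}

@router.delete("/findings/{finding_id}", status_code=204)
def dismiss_finding(finding_id: int) -> None:
    """Dismiss (delete) a single finding by id."""
    if _findings.pop(finding_id, None) is None:
        raise HTTPException(status_code=404, detail="Finding not found")
```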
DELETE /{file_id}
File deletion endpoint in backend/api/files.py, status unspecified.
DELETE /{project_id}
Delete project endpoint, status unspecified.
Discovery API endpoints
APIs registered for Discovery feature: /discovery, /tables, /validate, enabling data discovery, review, and validation workflows.
Discovery API router
The Discovery API router is registered in the backend app's main.py to expose the discovery endpoints. DiscoveryFlow depends on it for backend data and functionality, and it integrates with the DataLens backend system; Coolify auto-deploy automates updates to its deployment.
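A sketch of how the router registration in main.py might look, assuming the /discovery, /tables, and /validate routes listed above; the prefix, module layout, and placeholder bodies are assumptions.

```python
from fastapi import APIRouter, FastAPI

discovery_router = APIRouter(prefix="/discovery", tags=["discovery"])

@discovery_router.get("/tables")
def list_tables() -> list[str]:
    """Return discovered tables for review (placeholder body)."""
    return ["sales", "surveys"]

@discovery_router.post("/validate")
def validate_table(name: str) -> dict:
    """Mark a discovered table as validated (placeholder body)."""
    return {"table": name, "status": "validated"}

app = FastAPI()
app.include_router(discovery_router)  # registration step described above
```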
Discovery backend service
The Discovery backend service implements the Discovery API endpoints. The UI components depend on the Discovery backend service to function correctly.
DISCOVERY_IMPLEMENTATION.md
Guide detailing the development and integration of the discovery system.
docker restart
The backend container may require a manual restart via the docker restart command to clear deployment issues that block query execution after thinking messages.
docker-compose
The Docker Compose configuration orchestrates deployment of the DataLens Platform: the Backend API service, the frontend service, and the RQ extraction queue on Redis for job processing. docker-compose.coolify.yml is the Compose file used in deployment, and Coolify relies on the configuration for service orchestration. A misconfiguration of the RQ extraction queue on Redis previously caused job-processing issues. If Coolify is stuck, the backend container can be deployed manually via docker-compose on theo.
docs/architecture/QUICK_WINS_IMPLEMENTATION.md
Contains implementation plans for rapid deployment of improvements like schema reduction, keyword expansion, and retrial logic.
DS-STAR Agent API
DS-STAR AI cataloging is integrated via the DS-STAR Agent API, which runs on elin and is proxied on theo. The backend integrates with it for AI cataloging, autonomous extraction, Text-to-SQL, and RAG. The deployment comprises 12 DS-STAR agents providing the AI cataloging functionality.
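A hedged client sketch for calling the proxied API from the backend; the proxy URL, port, and /catalog route are invented for illustration and are not the documented DS-STAR endpoints.

```python
import httpx

DSSTAR_BASE_URL = "http://theo:8400"  # hypothetical proxy address on theo

def catalog_file(path: str) -> dict:
    """Ask the DS-STAR Agent API to catalog a file and return its metadata."""
    resp = httpx.post(f"{DSSTAR_BASE_URL}/catalog", json={"path": path}, timeout=60)
    resp.raise_for_status()
    return resp.json()
```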
DS-STAR agents
DS-STAR agents are located at /home/ops/datalens/agents/ on elin and are used by the DataLens backend for AI tasks via subprocess integration. They include components such as FileAnalyzer, PlannerAgent, VerifierAgent, and RouterAgent, supporting autonomous data extraction and analysis within the platform. The agents run in a virtual environment at /home/ops/datalens/, and the backend API communicates with them for cataloging, extraction, and query processing.
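A minimal sketch of the subprocess integration pattern described above; the venv interpreter path, argument convention, and JSON-on-stdout contract are assumptions.

```python
import json
import subprocess

AGENTS_DIR = "/home/ops/datalens/agents"
VENV_PYTHON = "/home/ops/datalens/bin/python"  # venv interpreter (path assumed)

def run_agent(script: str, payload: dict) -> dict:
    """Invoke a DS-STAR agent script and parse its JSON result from stdout."""
    proc = subprocess.run(
        [VENV_PYTHON, f"{AGENTS_DIR}/{script}", json.dumps(payload)],
        capture_output=True, text=True, check=True, timeout=300,
    )
    return json.loads(proc.stdout)

# e.g. run_agent("file_analyzer.py", {"file": "/data/uploads/report.xlsx"})
```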
DS-STAR FileAnalyzer
The DataLens Platform integrates the DS-STAR FileAnalyzer (file_analyzer.py, part of the DS-STAR Intelligence system) for automatic AI cataloging: the FastAPI backend invokes it immediately after a file is uploaded, so the integration depends on the file upload feature. DSStarOrchestrator also depends on the FileAnalyzer for initial data analysis.
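An illustrative upload hook showing the catalog-on-upload flow; the route, destination path, and analyze_file helper are hypothetical stand-ins for the real FileAnalyzer invocation.

```python
from fastapi import APIRouter, BackgroundTasks, UploadFile

router = APIRouter()

def analyze_file(path: str) -> None:
    """Placeholder for the DS-STAR FileAnalyzer invocation (assumption)."""
    ...

@router.post("/files")
async def upload_file(file: UploadFile, tasks: BackgroundTasks) -> dict:
    dest = f"/tmp/{file.filename}"
    with open(dest, "wb") as out:
        out.write(await file.read())
    tasks.add_task(analyze_file, dest)  # catalog immediately after upload
    return {"filename": file.filename, "cataloging": "scheduled"}
```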
DSStarService HTTP client
DuckDB
DuckDB is used by DataLens for local structured data storage and querying of extracted analysis files; it serves as the unified database for extracted data and is central to analysis. The CSV Extractor loads validated and cleaned CSV files, the Excel Extractor loads normalized Excel data, and the PDF Extractor loads tables extracted from PDFs, while SQLAgent executes generated SQL queries against the result. DuckDB stores tables such as sales, surveys, and test data, including the 473 extracted budget tables queried by the OpenClaw Skill API and the Project 14 tables extracted from SVGV files. Qdrant indexes embeddings generated from text chunks stored in DuckDB, enabling semantic search in the platform. The LangChain framework is used alongside DuckDB for data pipeline management, and RAPIDS cuDF is optionally used to accelerate large dataframes. The platform is migrating to PostgreSQL to eliminate DuckDB's write-lock and improve concurrency.
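A minimal DuckDB usage sketch along the lines described above; the database file name and the sales table schema are assumptions chosen to match the kinds of tables listed.

```python
import duckdb

con = duckdb.connect("datalens.duckdb")  # hypothetical database file
con.execute("CREATE TABLE IF NOT EXISTS sales (region TEXT, amount DOUBLE)")
con.execute("INSERT INTO sales VALUES ('north', 120.5), ('south', 98.0)")

# The same interface SQLAgent would use to run generated SQL.
rows = con.execute("SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall()
print(rows)
```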
DuckDB integration
The FastAPI backend integrates with DuckDB for data extraction storage and querying.
DuckDB tables
The Phase 2 strategy depends on DuckDB tables loaded by processing Excel files. The text_chunks physical table stores semantic chunks produced by the Docling extraction system for querying and analysis. The 132 SVGV files map to DuckDB tables that store the extracted budget data, which the Backend API queries.
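A sketch of querying the text_chunks table for semantic chunks; the column names (chunk_id, content) are assumptions about the Docling output schema, not its documented layout.

```python
import duckdb

con = duckdb.connect("datalens.duckdb", read_only=True)
chunks = con.execute(
    "SELECT chunk_id, content FROM text_chunks WHERE content ILIKE ? LIMIT 10",
    ["%budget%"],  # parameter binding keeps the search term out of the SQL string
).fetchall()
```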
ElinSkillClient
RingfencedSkills uses ElinSkillClient to execute constrained (ringfenced) skill operations on elin for the DataLens agent.
extract_file_job
A batch job that extracts data from files such as DOCX, PPTX, PDF, and Excel and triggers GPU extraction workflows; no recurrence schedule or failure consequences are documented. The RQ extraction queue on Redis triggers extract_file_job(file_id), and the RQ worker executes it with an extended timeout for large files to produce extraction results and summaries. DOCX and PPTX extraction runs via SSH to elin using Docling, producing JSON results.
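An enqueue sketch for the RQ extraction queue; the queue name, the dotted job path, and the one-hour timeout are assumptions standing in for the actual configuration.

```python
from redis import Redis
from rq import Queue

queue = Queue("extraction", connection=Redis(host="redis"))

def enqueue_extraction(file_id: int):
    """Schedule extract_file_job with a generous timeout for large files."""
    return queue.enqueue("app.jobs.extract_file_job", file_id, job_timeout=3600)
```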
Extraction API Endpoint
The Extraction API processes SVGV files, extracting data tables and writing them to DuckDB. The Backend API exposes the extraction endpoints, and RQ worker processes call them to run an extraction job for each SVGV file.
FastAPI OpenAPI Spec
The OpenAPI specification generated by the FastAPI backend, defining the REST API for integration and documentation.
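FastAPI generates this spec automatically and serves it at /openapi.json; a short sketch of dumping it to disk, where the app title and health route are illustrative only.

```python
import json
from fastapi import FastAPI

app = FastAPI(title="DataLens Backend")  # title is illustrative

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

with open("openapi.json", "w") as fh:
    json.dump(app.openapi(), fh, indent=2)  # same document served at /openapi.json
```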
Files API Endpoints
The Files API supports AI Summary Generation by exposing necessary endpoints and data fields for storage and retrieval of AI summaries.
FINDINGS_INTEGRATION_GUIDE.md
FINDINGS_VISUALIZATION_PLAN.md
Progress plan for findings visualization components; includes feature set, design, and integration status.
FindingsGenerator
IronClaw Agent and question_router.py use FindingsGenerator to generate findings from query results, and the generated findings are included in the API response. Finding is the data structure FindingsGenerator creates to represent an individual analytical finding. FindingsGenerator requires support for the PostgreSQL Decimal type to process numeric data correctly; logging will be added at the start of generate_findings to monitor numeric-column detection and diagnose failure points.
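A sketch of Decimal-aware finding generation; the Finding fields, the coercion rule, and the example threshold are assumptions about the behavior described above, not the real generator.

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass
class Finding:
    column: str
    statement: str
    value: float

def generate_findings(rows: list[dict]) -> list[Finding]:
    """Coerce PostgreSQL Decimal values to float before numeric analysis."""
    findings = []
    for row in rows:
        for col, val in row.items():
            if isinstance(val, Decimal):
                val = float(val)  # Decimal would otherwise break numeric checks
            if isinstance(val, (int, float)) and val > 1_000_000:
                findings.append(Finding(col, f"{col} is unusually large", val))
    return findings
```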
frontend service
The frontend service uses the Backend API endpoint, configured via the PUBLIC_API_URL environment variable, to communicate with the backend.
frontend/src/lib/api.ts
Frontend API definitions, no additional details provided.
generate-goal API endpoint
The AI-assisted goal generation capability is implemented by the generate-goal API endpoint in backend/app/api/projects.py, which integrates with Claude to expand a brief description into a detailed project goal.
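A hedged sketch of the goal-expansion call; the model id, route, and prompt wording are assumptions, not the code in backend/app/api/projects.py.

```python
from anthropic import Anthropic
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()
client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

class GoalRequest(BaseModel):
    description: str

@router.post("/projects/generate-goal")
def generate_goal(req: GoalRequest) -> dict:
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # model choice is an assumption
        max_tokens=500,
        messages=[{"role": "user",
                   "content": f"Expand into a detailed project goal: {req.description}"}],
    )
    return {"goal": msg.content[0].text}
```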
GET /
GET /analysis/history
Retrieves query history for data analysis projects; the endpoint is defined in backend/app/api/analysis.py.
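An illustrative shape for the history endpoint; the optional project_id filter and record fields are assumptions, with an in-memory list standing in for real storage.

```python
from fastapi import APIRouter

router = APIRouter(prefix="/analysis")

# In-memory stand-in for the persisted query history (assumption).
_history = [{"project_id": 14, "query": "SELECT * FROM sales LIMIT 5"}]

@router.get("/history")
def get_history(project_id: int | None = None) -> list[dict]:
    """Return query history, optionally filtered by project."""
    if project_id is None:
        return _history
    return [h for h in _history if h["project_id"] == project_id]
```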