DataFusion Mechanism
Proposed process for combining data from multiple tables into consolidated views, enhancing query relevance.
DataLens Agent Mode
DataLens Agent Mode uses IronClaw as the AI framework for autonomous data analysis. It runs as a sidecar Docker service, supports persistent sessions with real-time communication over WebSocket, and was designed for GDPR compliance by choosing the secure, Rust-based IronClaw over OpenClaw.
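A minimal sketch of how a client might hold a persistent Agent Mode session over WebSocket; the endpoint URL, message shape, and the use of the `websockets` package are assumptions, not the actual IronClaw sidecar protocol.

```python
# Hypothetical client for a persistent Agent Mode session over WebSocket.
# The URL, message format, and the `websockets` package choice are assumptions.
import asyncio
import json

import websockets


async def agent_session(question: str) -> None:
    # The sidecar address, port, and path are placeholders.
    async with websockets.connect("ws://localhost:8765/agent") as ws:
        await ws.send(json.dumps({"type": "question", "text": question}))
        # Stream incremental results until the (assumed) "done" message arrives.
        async for raw in ws:
            msg = json.loads(raw)
            print(msg.get("text", ""))
            if msg.get("type") == "done":
                break


if __name__ == "__main__":
    asyncio.run(agent_session("What were total sales in Q3?"))
```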
DataLens Fresh Test Project 8
Test project with 8 files uploaded, extracted, and validated with functional core processes; summaries pending.
DataLens Master Implementation Plan
The plan outlines Epics for DS-STAR intelligence, Text-to-SQL, Document RAG, and the DataLens Skill, detailing components such as extraction, GPU infrastructure, and AI models like Qwen2.5-Coder-14B-AWQ, integrated with Qdrant, DuckDB, and LlamaIndex for autonomous data analysis and system orchestration. The DS-STAR portion includes a Planner component that creates extraction plans, Extractor Agents that extract and structure the various data types, a Verifier agent that checks data quality after extraction, and a Router agent that manages fixes and extensions to the extraction plan. The implementation uses DuckDB as the unified database for storing extracted data; after SQL execution, results are visualized as part of the pipeline, and a RAGAgent optionally searches unstructured text during analysis. The plan uses vLLM for large language model inference on the elin GPU and requires a Python environment set up with all dependencies.
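A skeleton of the Planner → Extractor → Verifier → Router loop with DuckDB as the unified store, as described in the plan; the class names, retry logic, and CSV-only extraction are illustrative, not the plan's actual implementation.

```python
# Illustrative skeleton of the Planner -> Extractor -> Verifier -> Router loop.
# Only DuckDB as the unified store comes from the plan; the rest is a sketch.
import duckdb


class Planner:
    def plan(self, files: list[str]) -> list[dict]:
        # One extraction step per file; a real planner would be LLM-driven (vLLM on elin).
        return [{"file": f, "table": f.rsplit(".", 1)[0]} for f in files]


class Extractor:
    def run(self, step: dict, con: duckdb.DuckDBPyConnection) -> None:
        # Load the file into DuckDB as a table (CSV shown for simplicity).
        con.execute(
            f"CREATE OR REPLACE TABLE {step['table']} AS "
            f"SELECT * FROM read_csv_auto('{step['file']}')"
        )


class Verifier:
    def ok(self, step: dict, con: duckdb.DuckDBPyConnection) -> bool:
        # Minimal quality check: the extracted table must not be empty.
        rows = con.execute(f"SELECT count(*) FROM {step['table']}").fetchone()[0]
        return rows > 0


class Router:
    def fix(self, step: dict) -> dict:
        # A real router would amend or extend the extraction plan; here we just retry.
        return step


def run_pipeline(files: list[str]) -> None:
    con = duckdb.connect("datalens.duckdb")
    planner, extractor, verifier, router = Planner(), Extractor(), Verifier(), Router()
    for step in planner.plan(files):
        extractor.run(step, con)
        if not verifier.ok(step, con):
            extractor.run(router.fix(step), con)
```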
DataLens Phase 2 GPU-First Implementation
DataLens Session
Holistic validation and implementation of DataLens features including data discovery, language support, schema consolidation, and production deployment with full system testing across multiple phases. DataLens Session 2026-03-10 contains detailed status and progress about the IronClaw Agent Feature.
DataLens Skill
The DataLens Skill epic encompasses a Telegram bot for user interaction, using the OpenClaw framework to manage conversation state, upload files, and execute skills; it integrates with backend services and OpenClaw memory to provide a conversational data-querying environment. The DataLens Orchestrator on theo handles user requests and coordinates elin agents, using the DataLens Skill running on theo to handle Telegram user interactions, with the Telegram UI as the front-end communication channel. The DataLens Master Implementation Plan includes a DataLens Skill development phase, which involves creating an OpenClaw skill following best practices, building the Telegram bot, a Telegram upload handler that interfaces with elin services, a query handler for natural-language questions, and conversation memory to maintain context across dialogs.
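A minimal sketch of the Telegram-facing part of the skill (upload handler, query handler, per-chat memory), written against python-telegram-bot for brevity; the real skill runs inside the OpenClaw framework, and the commented-out `forward_to_elin` / `answer_question` calls are placeholders for the actual backend integration.

```python
# Sketch only: python-telegram-bot stands in for the OpenClaw skill runtime.
import os

from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters


async def handle_upload(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Download the uploaded document and hand it to the (hypothetical) elin upload service.
    tg_file = await update.message.document.get_file()
    path = await tg_file.download_to_drive()
    await update.message.reply_text(
        f"Received {update.message.document.file_name}, processing..."
    )
    # forward_to_elin(path)  # placeholder for the real upload handler


async def handle_question(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Simple per-chat history as a stand-in for OpenClaw conversation memory.
    history = context.chat_data.setdefault("history", [])
    history.append(update.message.text)
    # answer = answer_question(update.message.text, history)  # placeholder backend call
    await update.message.reply_text("(answer from the DataLens backend would go here)")


def main() -> None:
    app = ApplicationBuilder().token(os.environ["TELEGRAM_BOT_TOKEN"]).build()
    app.add_handler(MessageHandler(filters.Document.ALL, handle_upload))
    app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_question))
    app.run_polling()


if __name__ == "__main__":
    main()
```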
DataLens Subagent
DataLens SVGV Budget analysis system
DataLensAgentMemory
DataLensVannaAgent uses DataLensAgentMemory backed by Qdrant for semantic search and memory. DataLensAgentMemory is backed by QdrantService to provide vector-based agent memory.
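A sketch of what a Qdrant-backed agent memory could look like: store question/SQL pairs as vectors and recall the most similar ones later. The collection name, vector size, and the `embed()` helper are assumptions; only the use of Qdrant for semantic memory comes from the description above.

```python
# Illustrative Qdrant-backed agent memory; embed() is a placeholder for a real model.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams


def embed(text: str) -> list[float]:
    # Placeholder embedding; the real system would call an embedding model.
    return [float(ord(c) % 7) for c in text[:384].ljust(384)]


class AgentMemory:
    def __init__(self, url: str = "http://localhost:6333", collection: str = "datalens_memory"):
        self.client = QdrantClient(url=url)
        self.collection = collection
        if not self.client.collection_exists(collection):
            self.client.create_collection(
                collection, vectors_config=VectorParams(size=384, distance=Distance.COSINE)
            )

    def remember(self, point_id: int, question: str, sql: str) -> None:
        self.client.upsert(
            self.collection,
            points=[PointStruct(id=point_id, vector=embed(question),
                                payload={"question": question, "sql": sql})],
        )

    def recall(self, question: str, limit: int = 3) -> list[dict]:
        hits = self.client.query_points(self.collection, query=embed(question), limit=limit).points
        return [h.payload for h in hits]
```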
DataLensPostgresRunner
DataLensPostgresRunner wraps PgDataService to enforce project-scoped schema isolation for SQL execution. DataLensVannaAgent uses DataLensPostgresRunner to run SQL queries with project schema isolation.
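A sketch of project-scoped SQL execution: every query runs with `search_path` pinned to the project's schema so one project cannot read another's tables. psycopg2 and the DSN handling are assumptions standing in for PgDataService.

```python
# Illustrative project-scoped SQL runner; psycopg2 stands in for PgDataService.
import psycopg2
from psycopg2 import sql


class PostgresRunner:
    def __init__(self, dsn: str, project_schema: str):
        self.dsn = dsn
        self.project_schema = project_schema

    def run(self, query: str) -> list[tuple]:
        with psycopg2.connect(self.dsn) as conn, conn.cursor() as cur:
            # Pin the search_path to the project's schema for isolation.
            cur.execute(
                sql.SQL("SET search_path TO {}").format(sql.Identifier(self.project_schema))
            )
            cur.execute(query)
            return cur.fetchall()


# runner = PostgresRunner("postgresql://datalens:***@localhost/datalens", "project_42")
# rows = runner.run("SELECT * FROM sales LIMIT 10")
```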
DataLensVannaAgent
DataLensVannaAgent uses DataLensAgentMemory backed by Qdrant for semantic search and memory, and DataLensPostgresRunner to run SQL queries with project-specific schema isolation.
DB_PASSWORD environment variable
DataLens Platform requires setting a strong DB_PASSWORD environment variable to secure the PostgreSQL database connection.
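A small sketch of how the backend might consume that variable when building the connection string; everything except the DB_PASSWORD name itself (user, host, database names) is illustrative.

```python
# Read DB_PASSWORD from the environment instead of hard-coding it.
import os

password = os.environ["DB_PASSWORD"]  # fail fast if the variable is missing
dsn = (
    f"postgresql://{os.environ.get('DB_USER', 'datalens')}:{password}"
    f"@{os.environ.get('DB_HOST', 'localhost')}:5432/{os.environ.get('DB_NAME', 'datalens')}"
)
```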
debug mode
decimal import Decimal
Commit 507c94b adds support for decimal types by importing Decimal and updating numeric column detection.
Decimal type support
Decimal type support contributes to fixing SQL syntax errors by enabling proper support for decimal types in queries. Commit 507c94b adds Decimal type support by importing Decimal from the decimal module and updating the numeric column detection logic.
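An illustration of the kind of change described: treat decimal.Decimal as a numeric type when detecting numeric columns, so queries over such columns get proper numeric SQL. This is a reconstruction of the idea, not the code from commit 507c94b.

```python
# Numeric column detection that recognizes Decimal values (illustrative).
from decimal import Decimal

NUMERIC_TYPES = (int, float, Decimal)


def is_numeric_column(values: list) -> bool:
    non_null = [v for v in values if v is not None]
    return bool(non_null) and all(isinstance(v, NUMERIC_TYPES) for v in non_null)


assert is_numeric_column([Decimal("19.99"), Decimal("5.00"), None])
assert not is_numeric_column(["19.99", "5.00"])
```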
Decision to revert to last known good commit and deploy
The decision to revert to the last known good commit and deploy applies to the DataLens OpenClaw Integration as a strategy to resolve recent issues; the commit identification and deployment process is part of this decision.
deployment
DataLens OpenClaw Integration is preparing deployment by reverting code and pushing to the master branch; deployment is expected to follow automatically via Coolify.
DevOps
DataLens agent interacts with DevOps processes and personnel for deployment and infrastructure coordination.
discover-insights
DataLens Agent Mode incorporates the discover-insights skill to perform statistical profiling and anomaly detection.
DiscoverInsightsSkill
DiscoverInsightsSkill uses SkillResult when performing statistical profiling and anomaly detection.
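A sketch of statistical profiling plus simple z-score anomaly detection returning a SkillResult-like structure; the SkillResult fields and the 3-sigma threshold are assumptions, only "statistical profiling and anomaly detection" comes from the descriptions above.

```python
# Illustrative profiling + anomaly detection returning a SkillResult-like object.
import statistics
from dataclasses import dataclass, field


@dataclass
class SkillResult:
    summary: dict
    anomalies: list = field(default_factory=list)


def discover_insights(column: str, values: list[float]) -> SkillResult:
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    summary = {"column": column, "count": len(values), "mean": mean,
               "stdev": stdev, "min": min(values), "max": max(values)}
    anomalies = []
    if stdev > 0:
        # Flag values more than 3 standard deviations from the mean.
        anomalies = [v for v in values if abs(v - mean) / stdev > 3]
    return SkillResult(summary=summary, anomalies=anomalies)


data = [12.0, 14.5, 13.2, 11.8, 12.9, 13.7] * 5 + [250.0]
result = discover_insights("order_total", data)
print(result.summary["mean"], result.anomalies)  # 250.0 is flagged as an outlier
```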
Discovery feature
An integrated system component within DataLens that automates table ranking, join discovery, and consolidation, dramatically improving query success rate and transparency, with full functionality validated through the latest end-to-end tests. The Discovery feature is validated by Playwright E2E tests to ensure functionality. Frontend tests validate the Discovery feature's user interface and experience. Backend tests validate the Discovery feature's API endpoints and server-side logic.
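A rough sketch of the two core Discovery steps named above: rank tables by how well their names and columns match the question, and propose joins on shared column names. The scoring and the `_id` heuristic are illustrative, not the production algorithm.

```python
# Illustrative table ranking and join discovery over simple schema dictionaries.
from itertools import combinations


def rank_tables(question: str, schemas: dict[str, list[str]]) -> list[tuple[str, int]]:
    words = set(question.lower().split())
    scores = []
    for table, columns in schemas.items():
        tokens = set(table.lower().split("_")) | {c.lower() for c in columns}
        scores.append((table, len(words & tokens)))
    return sorted(scores, key=lambda s: s[1], reverse=True)


def discover_joins(schemas: dict[str, list[str]]) -> list[tuple[str, str, str]]:
    joins = []
    for (t1, c1), (t2, c2) in combinations(schemas.items(), 2):
        for col in set(c1) & set(c2):
            if col.endswith("_id"):  # naive join-key heuristic
                joins.append((t1, t2, col))
    return joins


schemas = {
    "orders": ["order_id", "customer_id", "total"],
    "customers": ["customer_id", "name", "region"],
    "products": ["product_id", "name", "price"],
}
print(rank_tables("total orders per region", schemas))
print(discover_joins(schemas))
```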
Discovery latency
Expected to be less than 10 seconds during normal operation.
Discovery service
The Data Discovery feature uses the Discovery service. DiscoveryService generates a ConsolidationRecommendation for intelligent, question-driven table consolidation. DSStarService integrates with DiscoveryService as a client for the DS-STAR Agent API on elin.
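A sketch of how a backend service might call the DS-STAR Agent API on elin as an HTTP client; the host, port, route, and payload shape are placeholders, only the client/server relationship comes from the description above.

```python
# Hypothetical HTTP client for the DS-STAR Agent API; URL and route are placeholders.
import httpx

ELIN_DSSTAR_URL = "http://elin:8100"  # placeholder base URL


def request_consolidation(question: str, project_id: str) -> dict:
    resp = httpx.post(
        f"{ELIN_DSSTAR_URL}/consolidation-recommendation",  # hypothetical route
        json={"question": question, "project_id": project_id},
        timeout=30.0,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. ranked tables, join plan, consolidated view definition
```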
Discovery success rate
Discovery → Analysis flow
Established process involving data discovery, consolidation, and analysis driven by a comprehensive workflow.
Docker deployment
The Docker deployment uses docker-compose.yml to orchestrate the stack. It includes the Frontend, the Backend, PostgreSQL, and Redis, uses health checks to monitor service health, configures volumes for data persistence and networking for container communication, and ships a Coolify-ready configuration.
Docling extraction on elin GPU
Docling extraction on the elin GPU is the core process the Docling extraction system uses for GPU-based document processing. Docling extraction runs mandatorily on the elin GPU server for DOCX and PPTX extraction, with no fallback. The Install_docling_elin.sh script installs Docling 2.75.0 and its dependencies on the elin GPU server for this mandatory extraction. Docling-based extraction leverages the RTX 4000 SFF Ada 20GB GPU available on elin for DOCX and PPTX file processing.
Docling extraction quality
Docling extraction quality is validated by the implemented Docling extraction system in the GPU-first document extraction pipeline.
Docling extraction system
The Docling extraction system was built as part of the DataLens Phase 2 GPU-first document extraction system, with Docling extraction on the elin GPU as its core process and as the mandatory method for DOCX and PPTX extraction in the GPU-first pipeline; Docling extraction quality is validated within that pipeline. Business rules enforce semantic chunking for section/slide-based document processing, embedding of extracted tables as JSON within semantic chunks, and rich metadata including hierarchy, confidence, and provenance for DS-STAR reasoning. The system uses Ollama embeddings (nomic-embed-text) to generate vector embeddings for semantic search and reasoning, with EmbeddingService producing GPU-accelerated embeddings for semantic chunk vectors. The DocxExtractor and PptxExtractor are part of the system for DOCX and PPTX documents using GPU extraction. The RQ worker depends on the Docling extraction system for processing DOCX/PPTX extraction jobs, with no fallback on failure. The DuckDB text_chunks physical table stores semantic chunks produced by the system for querying and analysis, and the Deploy_gpu_extractors.sh script installs and configures the Docling extraction system and related GPU-first extraction components.
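A minimal sketch of the extraction path described above: convert a DOCX/PPTX with Docling and produce semantic chunks ready for embedding and DuckDB storage. The exact chunk metadata fields and the downstream embedding/DuckDB calls are omitted; GPU acceleration is whatever Docling picks up on the elin host.

```python
# Illustrative Docling conversion and chunking; metadata and storage steps are stubs.
from docling.document_converter import DocumentConverter
from docling.chunking import HybridChunker


def extract_semantic_chunks(path: str) -> list[dict]:
    converter = DocumentConverter()
    doc = converter.convert(path).document
    chunker = HybridChunker()
    chunks = []
    for i, chunk in enumerate(chunker.chunk(doc)):
        chunks.append({
            "chunk_index": i,
            "source": path,
            "text": chunk.text,
            # Rich metadata (hierarchy, confidence, provenance) would be added here
            # before embedding with nomic-embed-text and inserting into the DuckDB
            # text_chunks table.
        })
    return chunks


# chunks = extract_semantic_chunks("quarterly_report.docx")
```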