Integrations
222 entities found
IronClawClient
IronClaw Agent uses IronClawClient to connect and send messages to the remote IronClaw service. IronClawClient integrates with the IronClaw Gateway API, hosted on the external system elin, for remote operation; the theo backend uses it for remote agent operations when the IRONCLAW_MODE=remote environment variable is set. IronClawClient depends on OpenClawHttpClient for streaming communication with the OpenClaw Gateway, and TextToSQLService depends on IronClawClient for agent logic involving SQL generation. backend/app/services/ironclaw_client.py implements the IronClawClient used for remote agent communication via the IronClaw Gateway API, including sending and receiving agent messages. The Coolify backend is configured to use IronClawClient in remote mode with the appropriate environment variables. The agent.py event_generator() function invokes get_ironclaw_client() to obtain the appropriate agent client for message processing. IronClawClient uses IronClawSkills definitions for agent skill execution via the IronClaw Gateway. OpenClawHttpClient serves as a WebSocket-based alternative to the HTTP-based IronClawClient for streaming agent responses.
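Based on the description above, get_ironclaw_client() plausibly selects between the remote and local clients from the IRONCLAW_MODE environment variable. A minimal sketch, assuming that selection logic (the class bodies and the exact check are illustrative, not the actual implementation):

```python
import os


class IronClawClient:
    """Remote client: would talk to the IronClaw Gateway API (stub here)."""


class LocalAgentClient:
    """Local fallback: runs SkillExecutor in-process (stub here)."""


def get_ironclaw_client():
    # Remote mode only when IRONCLAW_MODE=remote is explicitly set;
    # any missing or other value falls back to local execution.
    if os.environ.get("IRONCLAW_MODE") == "remote":
        return IronClawClient()
    return LocalAgentClient()
```

This matches the documented failure mode: if IRONCLAW_MODE is absent, the factory silently returns the local client.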
Julius AI
lib/elin.py wrapper
The lib/elin.py wrapper in the DataLens Project uses OpenClaw node pairing for remote invocation. The theo directory contains the lib/elin.py communication helper for DataLens.
Live Backend
IronClaw Agent Findings Visualization uses the Live Backend to process and visualize data. The Live Backend processes Qwen3 multi-block XML responses for SQL extraction. The Live Backend supports PostgreSQL Decimal type for findings generation.
LlamaIndex
Document RAG integrates with LlamaIndex for document chunking, embedding generation, indexing, and retrieval, using Qdrant for semantic document indexing. LlamaIndex generates embeddings using the bge-large-en-v1.5 model. RAGAgent uses LlamaIndex to chunk PDFs and generate embeddings.
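Chunking before embedding, as LlamaIndex does for Document RAG, splits text into overlapping windows so retrieval can return passages rather than whole PDFs. A stdlib sketch of the idea (the sizes are illustrative; LlamaIndex's own splitters are token-aware, not character-based):

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    # Overlapping character windows: each chunk shares `overlap`
    # characters with the previous one so sentences split at a
    # boundary still appear intact in at least one chunk.
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks
```

Each chunk would then be embedded (here, with bge-large-en-v1.5) and upserted into Qdrant for semantic search.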
LocalAgentClient
LocalAgentClient is used by the IronClaw Agent feature to maintain session state across API calls. It instantiates SkillExecutor internally; LocalAgentClient.send_message uses that executor to process messages locally, executing and streaming query results via an async generator, with no dependency on the IronClaw service. IronClaw Agent falls back to LocalAgentClient when the IronClaw environment variables are missing (for example, when IRONCLAW_MODE is unset in the theo backend), which results in local message processing and an inability to complete queries successfully.
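The send_message-as-async-generator pattern described above can be sketched as follows; the SkillExecutor body and the streamed chunk shapes are stand-ins, not the real implementation:

```python
import asyncio


class SkillExecutor:
    """Stand-in for the real executor; streams result chunks."""

    async def execute(self, message: str):
        for chunk in (f"received: {message}", "done"):
            yield chunk


class LocalAgentClient:
    def __init__(self):
        # The executor is created internally: no IronClaw service needed.
        self._executor = SkillExecutor()

    async def send_message(self, message: str):
        # Re-yield chunks as the executor produces them, so callers
        # (e.g. an SSE event generator) can stream them to the client.
        async for chunk in self._executor.execute(message):
            yield chunk


async def collect(message: str) -> list[str]:
    client = LocalAgentClient()
    return [chunk async for chunk in client.send_message(message)]
```

Consuming the generator with `async for` is what lets agent.py's event_generator() forward tokens without buffering the whole response.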
Memory: DataLens SVGV Analysis Timeout Issue
Analysis requests time out because ANTHROPIC_API_KEY is missing from the Coolify environment, preventing Claude from responding. Solution: add the API key as an environment variable, then redeploy.
NDJSON response
An NDJSON response streaming endpoint was implemented for real-time query processing, mitigating timeout issues during long-running DataLens analyses.
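NDJSON frames each event as one JSON object per line, so clients can parse partial results before a long analysis finishes. A stdlib sketch of the framing (the event shapes are illustrative; a real endpoint would wrap such a generator in a streaming response, e.g. FastAPI's StreamingResponse):

```python
import json
from typing import Iterable, Iterator


def ndjson_stream(events: Iterable[dict]) -> Iterator[str]:
    # One JSON object per line; the trailing newline terminates each
    # record so the client can parse frames as they arrive.
    for event in events:
        yield json.dumps(event) + "\n"


def parse_ndjson(body: str) -> list[dict]:
    # Client-side inverse: split on newlines, skip blanks, parse each.
    return [json.loads(line) for line in body.splitlines() if line]
```

Because each line is independently parseable, a timeout mid-stream still leaves the client with every event received so far.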
nomic-embed-text model
TableEmbeddingIndex uses the nomic-embed-text model to compute semantic embeddings for table names and schemas. QdrantService uses the nomic-embed-text model via Ollama API to embed texts into 768D vectors.
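Ollama's embeddings endpoint (POST /api/embeddings) takes a model name and a prompt and returns an embedding array; for nomic-embed-text that array has 768 dimensions. A hedged sketch of the request/response handling using only the stdlib (the base URL and helper names are illustrative, not from the codebase):

```python
import json
import urllib.request


def build_embedding_request(text: str, base_url: str = "http://localhost:11434"):
    # Ollama expects {"model": ..., "prompt": ...} on /api/embeddings.
    payload = {"model": "nomic-embed-text", "prompt": text}
    return urllib.request.Request(
        f"{base_url}/api/embeddings",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def parse_embedding_response(raw: bytes) -> list[float]:
    # Ollama responds with {"embedding": [...]}; nomic-embed-text
    # should yield exactly 768 floats, so treat anything else as an error.
    vector = json.loads(raw)["embedding"]
    if len(vector) != 768:
        raise ValueError(f"expected 768 dims, got {len(vector)}")
    return vector
```

QdrantService would send one such request per text and store the resulting 768D vectors for semantic search over table names and schemas.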
OPENAPI_CLIENTS.md
OpenBB Platform
DataLens Development incorporates architectural lessons learned from the OpenBB Platform, adopting its patterns for the extractor pipeline and API client generation.
OpenClaw
DataLens Skill follows OpenClaw skill best practices and uses OpenClaw for skill handling and integration with backend services. An OpenClaw node is a component or server within the OpenClaw system. DataLens Agent Mode chose IronClaw over OpenClaw: OpenClaw's security vulnerabilities disqualified it for personal data workloads. The OpenClaw Gateway integrates with the Anthropic API for language model processing and requires the Anthropic API key to authenticate calls to Claude for query processing. The Backend container forwards user analysis requests to the OpenClaw Gateway, and Agent Chat integrates with the Gateway to process user queries with Claude. OpenClawHttpClient in the backend integrates with the OpenClaw Gateway for WebSocket streaming communication, and OpenClaw Gateway health monitors the status of the Gateway service. The DataLens OpenClaw Integration was negatively affected when the system returned empty responses; reverting to the last known good commit and redeploying was the strategy adopted to resolve those issues. The Architecture supports and integrates with the DataLens OpenClaw Integration to provide a solid system structure. Users interact with OpenClaw as the agent orchestration platform to ask Danish budget questions.
OpenClaw client HTTP service
Deployed as a ringfenced HTTP API on elin at port 3100; supports streaming token responses, uses API key authentication, runs in a secured environment, and replaces the previous SSH-based connections.
OpenClaw Gateway URL
OpenClaw Gateway WebSocket
OpenClaw HTTP API
OpenClaw HTTP streaming capability is part of the OpenClaw HTTP API hosted at elin:3100 to enable real-time token streaming for the user interface. theo backend integrates with OpenClaw HTTP API via HTTP streaming calls to handle agent conversational queries and responses. OpenClaw HTTP API is a core component of the OpenClaw infrastructure providing agent communication and streaming services. OpenClaw HTTP API is the network interface of the OpenClaw Gateway running on elin for agent communication. theo Backend communicates with OpenClaw HTTP API for streaming agent sessions and ringfenced skill executions.
OpenClaw HTTP streaming
OpenClaw HTTP streaming is part of the OpenClaw HTTP API hosted at elin:3100; it delivers real-time token streaming to the user interface for an interactive experience.
OpenClaw node pairing
DataLens Project integrates OpenClaw node pairing to connect theo and elin via SSH tunnel.
OpenClaw Skill API (elin:8002)
theo Backend communicates via HTTP with OpenClaw Skill API at elin:8002 to execute ringfenced SQL queries securely. OpenClaw Skill API depends on DuckDB that holds the 473 extracted budget tables for fast in-memory querying. OpenClaw Skill API depends on PostgreSQL which stores metadata for the 132 budget files. The DataLens Skill capability uses the OpenClaw skill framework to manage conversation state and skill execution.
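"Ringfenced" execution implies the Skill API only permits read-only queries against the DuckDB budget tables. The blurb doesn't say how that is enforced; one common approach is a statement guard checked before execution. A sketch under that assumption (this is not the actual guard, and a production check would use a real SQL parser):

```python
import re

# Keywords that could mutate state or touch the filesystem.
_WRITE_KEYWORDS = re.compile(
    r"\b(insert|update|delete|drop|alter|create|attach|copy|pragma)\b",
    re.IGNORECASE,
)


def is_ringfenced(sql: str) -> bool:
    # Allow exactly one SELECT/WITH statement and nothing that writes.
    statements = [s for s in sql.split(";") if s.strip()]
    if len(statements) != 1:
        return False
    if not re.match(r"\s*(select|with)\b", statements[0], re.IGNORECASE):
        return False
    return not _WRITE_KEYWORDS.search(statements[0])
```

Queries passing the guard would then run against the in-memory DuckDB holding the 473 extracted budget tables.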
OpenClaw WebSocket streaming
Oracle
Organizations API Endpoint
PATCH /api/v1/auth/me API endpoint
The User Language Preference capability uses the PATCH /api/v1/auth/me API endpoint to update the user's language setting.
PATCH /findings/{finding_id}
Update a finding (pin/unpin, tag) via PATCH /findings/{finding_id}. Listing a project's findings, updating, and deleting individual findings are related operations in the agent backend's finding lifecycle.
PATCH /me
Getting and updating the current user's information are related operations in auth.py.
PATCH /sessions/{session_id}/model
API to switch the LLM backend for a session via PATCH /sessions/{session_id}/model. Within the agent backend, message sending and model switching operate on the same session resource.
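A request to this endpoint plausibly carries the new model name in a JSON body; the "model" field name and bearer-token header below are assumptions, since the blurb does not specify the payload. A stdlib sketch of building such a request:

```python
import json
import urllib.request


def build_model_switch_request(base_url: str, session_id: str,
                               model: str, token: str):
    # PATCH /sessions/{session_id}/model with a JSON body.
    # The "model" field name is assumed, not confirmed by API docs.
    return urllib.request.Request(
        f"{base_url}/sessions/{session_id}/model",
        data=json.dumps({"model": model}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="PATCH",
    )
```

The same session_id would be used by the message-sending endpoint, which is why the two operations share a session resource.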
Platform Backend
The Platform Backend uses a multi-tenant PostgreSQL schema to support data modeling and authentication. The Platform Backend implements bearer token authentication for session management and user access. The Platform Backend integrates the DS-STAR FileAnalyzer for automatic cataloging of uploaded files. The Platform Backend extracts data into DuckDB tables for analysis and query execution. The Platform Backend implements a natural language query endpoint to generate and execute SQL queries from user questions. The Platform Backend is built as a FastAPI app exposing API endpoints for auth, projects, files, extraction, and analysis. The Platform Backend uses DS-STAR subprocess calls to implement cataloging, extraction, and SQL generation features. Ollama provides the qwen3-coder-next and nomic-embed-text models integrated into the Platform Backend for AI-powered text-to-SQL and document embedding. Qdrant is integrated into the Platform Backend for storage and retrieval of document vectors supporting Document RAG. The Platform Backend depends on Redis 7 for future background job management and caching, although it is not yet implemented. The Docker Compose Stack includes the Platform Backend service as one of its containers alongside PostgreSQL and Redis. Coolify is the deployment platform intended to host the Platform Backend service and its supporting infrastructure. The Platform Backend depends on DS-STAR FileAnalyzer to perform automatic cataloging immediately after file upload.
POST /
Creation and listing endpoints exist in projects.py to create new projects and list existing projects.
POST /analysis/ask
API endpoint for submitting questions to DataLens; it triggers question routing, classification, SQL generation, execution, and response synthesis. The file backend/app/api/analysis.py contains the /analysis/ask endpoint.
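The pipeline named above (routing, classification, SQL generation, execution, synthesis) can be sketched as a chain of steps. Every helper below is a stand-in stub to show the shape of the flow, not the actual implementation (the real steps presumably call the LLM and DuckDB):

```python
def classify(question: str) -> str:
    return "sql"  # stub: real routing/classification is LLM-driven


def generate_sql(question: str) -> str:
    return "SELECT 1 AS answer"  # stub for LLM-generated SQL


def execute(sql: str) -> list[dict]:
    return [{"answer": 1}]  # stub for (DuckDB) execution


def synthesize(question: str, rows: list[dict]) -> dict:
    return {"question": question, "rows": rows,
            "summary": f"{len(rows)} row(s)"}


def ask(question: str) -> dict:
    # Mirrors the described /analysis/ask flow:
    # route/classify -> generate SQL -> execute -> synthesize.
    route = classify(question)
    sql = generate_sql(question)
    rows = execute(sql)
    result = synthesize(question, rows)
    result["route"] = route
    return result
```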
POST /analysis/query
Legacy SQL query API endpoint for data analysis in DataLens.