Architecture
tailwind-merge
The npm dev dependency tailwind-merge depends on tailwindcss as part of the frontend tooling.
TEE credential vault
IronClaw relies on a TEE (trusted execution environment) credential vault as a security constraint.
Test-friendly paths
TET pattern
Architectural pattern for semantic boundary detection using Python libraries (python-docx, python-pptx), embedding tables as JSON to better preserve document structure.
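A minimal sketch of the idea behind the TET pattern: document elements (paragraphs and tables, as python-docx/python-pptx extraction would yield them) are flattened into one chunk stream, with each table serialized as JSON so its row/column structure survives chunking. The function and marker names here are illustrative assumptions, not the project's actual code.

```python
import json

def embed_elements(elements):
    """Turn a mixed list of paragraphs and tables into text chunks.

    `elements` is a list of ("para", str) or ("table", list-of-rows) tuples,
    standing in for objects extracted with python-docx / python-pptx.
    """
    chunks = []
    for kind, payload in elements:
        if kind == "para":
            chunks.append(payload)
        elif kind == "table":
            # Embed the table as a JSON blob so its structure is preserved
            # through chunking (hypothetical "TABLE_JSON:" marker).
            chunks.append("TABLE_JSON:" + json.dumps({"rows": payload}))
    return chunks

elements = [
    ("para", "Quarterly results follow."),
    ("table", [["Q1", "100"], ["Q2", "120"]]),
]
chunks = embed_elements(elements)
```

A downstream consumer can detect the marker prefix and rehydrate the table with `json.loads`, rather than guessing structure from whitespace.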
text chunk storage
Text chunk storage persists extracted text chunks, including DOCX and PPTX chunks, into the doc_text_chunks table of the DuckDB database.
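A rough sketch of what persisting chunks into a doc_text_chunks table could look like. The real implementation uses DuckDB; this sketch substitutes the standard library's sqlite3 (whose DB-API usage is similar) purely so it runs without dependencies, and the column names are assumptions.

```python
import sqlite3  # stand-in for duckdb in this illustration only

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE doc_text_chunks (
        doc_id    TEXT,
        chunk_idx INTEGER,
        source    TEXT,   -- e.g. 'docx' or 'pptx' (assumed column)
        content   TEXT
    )
""")

def store_chunks(conn, doc_id, source, chunks):
    # Batch-insert each chunk with its position in the document.
    conn.executemany(
        "INSERT INTO doc_text_chunks VALUES (?, ?, ?, ?)",
        [(doc_id, i, source, text) for i, text in enumerate(chunks)],
    )

store_chunks(conn, "report-1", "docx", ["Intro text.", "Body text."])
rows = conn.execute(
    "SELECT chunk_idx, content FROM doc_text_chunks ORDER BY chunk_idx"
).fetchall()
```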
TextToSQLService
TextToSQLService converts natural language questions into SQL queries using LLMs: it builds prompts with schema information, calls the model, and parses the response for execution. It integrates with the Ollama LLM by calling its generate API with the qwen3-coder-next model. QuestionRouter depends on TextToSQLService to convert natural language queries to SQL on the structured path, and TextToSQLService likewise enables QueryDataSkill to do the same. It also depends on the IronClawClient integration for agent logic involving SQL generation, and DataLens Development uses it through its Ollama connection. The service is defined in backend/app/services/text_to_sql.py; its core entry point is the generate_sql method.
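A hedged sketch of the generate_sql flow described above: build a prompt from schema info, POST it to Ollama's /api/generate endpoint with the qwen3-coder-next model, and parse the SQL out of the response. The helper names, prompt wording, and fence-stripping logic are assumptions, not the project's actual code.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_prompt(schema: str, question: str) -> str:
    # Assumed prompt shape: schema context plus the user's question.
    return (
        "You are a SQL generator. Given this schema:\n"
        f"{schema}\n"
        f"Write one SQL query answering: {question}\n"
        "Return only SQL."
    )

def parse_sql(raw: str) -> str:
    """Strip markdown code fences the model may wrap around the query."""
    text = raw.strip()
    if text.startswith("```"):
        lines = [l for l in text.splitlines() if not l.startswith("```")]
        text = "\n".join(lines).strip()
    return text

def generate_sql(schema: str, question: str) -> str:
    payload = json.dumps({
        "model": "qwen3-coder-next",
        "prompt": build_prompt(schema, question),
        "stream": False,  # ask Ollama for a single JSON response
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:  # requires a running Ollama
        return parse_sql(json.load(resp)["response"])
```

The prompt-building and response-parsing steps can be exercised without a running model, which keeps the service testable.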
Tinfoil private inference
transformers
Utilized alongside PyTorch for NLP tasks within the GPU-based document extraction pipeline.
TypeScript
The Frontend uses TypeScript as a core technology for type safety.
TypeScript client
Unstract
UserResolver
For enterprise multi-tenant scaling, DataLens requires adding a UserResolver component that extracts user identity from authentication and enforces multi-tenant permissions, similar to Vanna.
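Since UserResolver is only proposed, here is one possible shape for it: extract an identity from an authentication header and map it to tenant-scoped permissions. Every name, the token format, and the static permission map are illustrative assumptions, not an existing DataLens API.

```python
from dataclasses import dataclass

@dataclass
class ResolvedUser:
    user_id: str
    tenant_id: str
    permissions: frozenset

# Assumed static permission map; a real deployment would query an
# identity provider or a permissions table instead.
TENANT_PERMISSIONS = {
    "acme": frozenset({"query", "upload"}),
    "globex": frozenset({"query"}),
}

class UserResolver:
    def resolve(self, auth_header: str) -> ResolvedUser:
        # Assumed token format "Bearer <tenant>:<user>", for illustration
        # only; production code would verify a signed token (e.g. a JWT).
        if not auth_header.startswith("Bearer "):
            raise ValueError("missing bearer token")
        tenant_id, _, user_id = auth_header[len("Bearer "):].partition(":")
        perms = TENANT_PERMISSIONS.get(tenant_id, frozenset())
        return ResolvedUser(user_id, tenant_id, perms)

user = UserResolver().resolve("Bearer acme:alice")
```

Downstream query handlers would then check `user.permissions` before executing tenant-scoped SQL.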
uvicorn module
The API endpoint is likely affected by a uvicorn module cache causing a KeyError: 0 bug. FastAPI depends on Uvicorn for serving the application.
vanna
Vanna is a lightweight AI platform used for autonomous data extraction, planning, verification, and iterative refinement, running locally with Ollama models such as qwen3-coder-next. It supports fast, zero-cost inference on shared GPU hardware and depends on qdrant-client for vector database integration.
Verifier
The plan includes a Verifier agent to check data quality after extraction.
VerifierAgent
The DS-STAR Intelligence capability includes the VerifierAgent component, an LLM-judge responsible for quality assessment: it checks data completeness, accuracy, and consistency after extraction. The DS-STAR Orchestrator (DSStarOrchestrator) incorporates the VerifierAgent in its iterative refinement loop, using it for quality scoring of extraction results. The Router Agent uses outputs from the Verifier Agent to decide extraction plan adjustments.
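The verifier-in-the-loop flow above can be sketched as follows: a judge scores an extraction on completeness, accuracy, and consistency, and the orchestrator keeps refining until all scores pass a threshold. The judge here is a deterministic stub standing in for the LLM call; the field names, threshold, and refinement step are all assumptions.

```python
def stub_judge(extraction: dict) -> dict:
    """Stand-in for the VerifierAgent's LLM-judge call."""
    required = {"title", "total"}  # assumed required fields
    completeness = len(required & extraction.keys()) / len(required)
    return {"completeness": completeness, "accuracy": 1.0, "consistency": 1.0}

def refine(extraction: dict) -> dict:
    """Stand-in for a re-extraction step that fills one missing field."""
    fixed = dict(extraction)
    fixed.setdefault("total", 0)
    return fixed

def run_pipeline(extraction: dict, threshold: float = 0.9, max_iters: int = 3):
    # Iterative refinement loop: score, and if any dimension falls below
    # the threshold, adjust the plan and re-extract.
    for _ in range(max_iters):
        scores = stub_judge(extraction)
        if min(scores.values()) >= threshold:
            return extraction, scores
        extraction = refine(extraction)
    return extraction, scores

result, scores = run_pipeline({"title": "Q1 report"})
```

Capping the loop with `max_iters` keeps a persistently low-scoring extraction from looping forever, which any such refinement loop needs.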
virtual environment
vite
npm dependency vite@^7.3.1, used in the frontend; license type unspecified, with no approval or risk assessment noted.
vLLM
GPU Infrastructure requires deployment of vLLM with the Qwen2.5-Coder-14B-AWQ model. The implementation plan uses vLLM for large language model inference on the elin GPU host.
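A small sketch of how a client might address that deployment. vLLM serves an OpenAI-compatible HTTP API; the host name "elin" comes from the plan above, while the port, endpoint choice, and prompt are illustrative assumptions.

```python
import json

# 8000 is vLLM's default serving port; adjust to the actual deployment.
VLLM_URL = "http://elin:8000/v1/completions"

def build_request(prompt: str, max_tokens: int = 256) -> bytes:
    # OpenAI-compatible completion payload, as accepted by vLLM's server.
    return json.dumps({
        "model": "Qwen2.5-Coder-14B-AWQ",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.0,  # deterministic output for SQL/code generation
    }).encode()

body = build_request("Write SQL totalling revenue per quarter:")
payload = json.loads(body)
```

The payload can be POSTed to `VLLM_URL` with any HTTP client; building and inspecting it separately keeps the request logic testable without a GPU.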
Volumes
The Docker deployment configures volumes for persistent storage of data.
WASM sandboxing
IronClaw incorporates WASM sandboxing as a technical constraint for tool isolation.