Project: datalens
81 entity types
ThirdPartyComponentArchitecture

Ollama

DataLens uses Ollama (model ollama/qwen3-coder-next) on the elin host for local LLM inference, covering both embeddings and query synthesis. It is integrated with TextToSQLService and vector search, and it requires Ollama to be running on elin (176.9.90.154) with port 11434 accessible. Ollama is deployed on a GPU for high-performance AI tasks; it is critical to the Text-to-SQL and document-embedding pipelines and is part of the Arctic-Text2SQL setup. In addition, IronClaw Gateway uses Ollama with the qwen3-coder-next model as a fallback for local inference when Anthropic Claude is not available.
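A minimal sketch of how a service such as TextToSQLService might talk to the Ollama instance on elin. The host, port, and model name come from the notes above; /api/generate and /api/embeddings are Ollama's standard REST routes. The function names and payload helpers are illustrative assumptions, not the actual DataLens code.

```python
# Sketch: calling the Ollama HTTP API on elin for generation and embeddings.
# Host/port/model are taken from the deployment notes; everything else
# (function names, timeouts) is a hypothetical illustration.
import json
import urllib.request

OLLAMA_URL = "http://176.9.90.154:11434"  # elin, port 11434 must be reachable
MODEL = "qwen3-coder-next"                # model served by Ollama on elin

def build_generate_payload(prompt: str) -> dict:
    """Request body for POST /api/generate (non-streaming)."""
    return {"model": MODEL, "prompt": prompt, "stream": False}

def generate(prompt: str, timeout: float = 60.0) -> str:
    """Ask the local model for a completion, e.g. SQL synthesis."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_generate_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)["response"]

def embed(text: str, timeout: float = 60.0) -> list:
    """Embedding vector for document/vector search via POST /api/embeddings."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/embeddings",
        data=json.dumps({"model": MODEL, "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)["embedding"]
```

The same client shape would serve the IronClaw Gateway fallback path: try the Anthropic API first, and on failure route the prompt through generate() instead.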