elin
DataLens uses the shared GPU box named elin for agentic data analysis workloads. elin runs Docling on an RTX 4000 Ada 20GB GPU for document extraction, and its SSH and network setup exposes the Ollama and Qdrant APIs. The platform depends on elin for GPU inference, document extraction, and DS-STAR agents; it supports large document processing, though handling of very large PDFs has only been partially tested. elin hosts the Ollama LLM server, including the SQLCoder-7B and Arctic-Text2SQL-R1-7B models, which are accessible by theo. In the hybrid deployment architecture, the backend and all Python and AI workloads run on elin.
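As a sketch of how a client might reach the Ollama server on elin, the snippet below builds a request against Ollama's standard `/api/generate` endpoint. The hostname `elin`, the default Ollama port 11434, and the model tag `sqlcoder:7b` are assumptions for illustration; the actual tags registered on elin may differ.

```python
import json
import urllib.request

# Assumed host and Ollama default port; adjust to the real network setup.
OLLAMA_URL = "http://elin:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON response instead of a stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical model tag for the SQLCoder-7B model hosted on elin.
req = build_request("sqlcoder:7b", "Total sales by region as SQL")
# urllib.request.urlopen(req) would send the request when elin is reachable.
```

Keeping the payload construction separate from the network call makes it easy to unit-test the client without elin being reachable.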