🧠 Enterprise Local RAG: Private AI Knowledge Base (Docker + Llama 3)
Stop sending your sensitive engineering data to the cloud. This project provides a production-grade, 100% offline RAG (Retrieval-Augmented Generation) architecture. It allows you to chat with your proprietary documents (PDF, TXT, Markdown) using a local LLM, ensuring absolute data privacy.
🏗️ System Architecture
This system is designed with a microservices architecture, fully containerized using Docker Compose for one-click deployment.
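A one-click Compose deployment of this kind of stack could look like the sketch below. This is an illustrative assumption, not the project's shipped file: the service names, ports, and volume paths are hypothetical.

```yaml
# Illustrative docker-compose.yml sketch -- service names, ports,
# and volume paths are assumptions, not the project's actual config.
services:
  ollama:
    image: ollama/ollama          # serves Llama 3 8B + mxbai-embed-large
    ports:
      - "11434:11434"
    volumes:
      - ollama_models:/root/.ollama
  chromadb:
    image: chromadb/chroma        # persistent local vector store
    ports:
      - "8000:8000"
    volumes:
      - chroma_data:/chroma/chroma
  app:
    build: .                      # Streamlit RAG frontend/backend
    ports:
      - "8501:8501"
    depends_on:
      - ollama
      - chromadb

volumes:
  ollama_models:
  chroma_data:
```

Named volumes keep pulled models and the vector index on the host, so nothing leaves the machine between restarts.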
🛠️ Tech Stack
- **LLM Inference:** Ollama (running Meta Llama 3 8B)
- **Embeddings:** mxbai-embed-large (state-of-the-art retrieval performance)
- **Vector Database:** ChromaDB (persistent local storage)
- **Backend/Frontend:** Python + Streamlit (optimized for RAG workflows)
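To show how the retrieval half of this stack fits together, here is a minimal, self-contained sketch of vector search by cosine similarity. The toy three-dimensional vectors stand in for real mxbai-embed-large embeddings, and the document names are hypothetical; in the actual system, ChromaDB performs this lookup over persisted embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for mxbai-embed-large output;
# file names are illustrative only.
store = {
    "deploy.md": [0.9, 0.1, 0.0],
    "auth.md":   [0.1, 0.8, 0.3],
    "api.md":    [0.0, 0.2, 0.9],
}

def retrieve(query_vec, store, top_k=1):
    """Return the top_k document names most similar to the query vector."""
    ranked = sorted(store.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# A query vector close to deploy.md's embedding retrieves that document.
print(retrieve([0.85, 0.15, 0.05], store))  # ['deploy.md']
```

In the full pipeline, the retrieved chunks are concatenated into the prompt sent to Llama 3, which is what lets the model answer from your private documents.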