# MCP-based AI Research Assistant (RAG + LangChain + Claude)
## What it does

An AI agent that retrieves documents, assembles context, and answers queries using an MCP architecture with RAG (Retrieval-Augmented Generation).
## Tech stack
- LangChain
- Claude / Ollama-compatible models
- Vector DB: Chroma (example; configurable to Pinecone, Milvus, etc.)
- MCP (Model Context Protocol) for multi-tool orchestration
## Features
- RAG-based retrieval pipeline
- Multi-tool agent (indexing, retrieval, LLM reasoning, tool calls)
- API integrations for internal data sources
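The retrieval pipeline above can be sketched in miniature. This is an illustrative, dependency-free stand-in for the real LangChain + Chroma stack: the corpus, the bag-of-words "embedding", and the prompt format are all placeholders, and a production pipeline would use a real embedding model and vector store.

```python
from collections import Counter
import math

# Toy corpus standing in for indexed company documents (illustrative only).
DOCS = [
    "Employees accrue 20 vacation days per year.",
    "The VPN requires multi-factor authentication.",
    "Expense reports are due by the 5th of each month.",
]

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding'; a real pipeline would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    # The retrieved context is what would be passed to the LLM as grounding.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\nQuestion: {query}"
```

The key design point is the same as in the full system: retrieval happens before generation, so the model answers from retrieved context rather than from parametric memory alone.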
## Demo

See /app/demo_output.md for an example run showing Input → Retrieved documents → Final AI response. Screenshots or short GIFs, if available, are in the presentation/ folder.
## How to run (quick)

- Create a virtual environment and install the requirements:

```shell
python -m venv .venv
.venv\Scripts\activate      # Windows
source .venv/bin/activate   # macOS / Linux
pip install -r requirements.txt
```
- Configure environment variables for your model and vector DB (examples):

```shell
export OPENAI_API_KEY=...
export CLAUDE_API_KEY=...
# Windows PowerShell:
$env:CLAUDE_API_KEY = '...'
```
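Inside the application, those variables can be read with `os.environ`. A minimal sketch follows; the `VECTOR_DB_PATH` and `OLLAMA_URL` names (and their defaults) are hypothetical examples, not settings this repo necessarily defines:

```python
import os

def load_settings() -> dict:
    """Collect configuration from environment variables.

    The API key names mirror the examples above; the remaining keys
    (VECTOR_DB_PATH, OLLAMA_URL) are illustrative and depend on your setup.
    """
    return {
        "claude_api_key": os.environ.get("CLAUDE_API_KEY", ""),
        "openai_api_key": os.environ.get("OPENAI_API_KEY", ""),
        "vector_db_path": os.environ.get("VECTOR_DB_PATH", "./chroma_db"),
        "ollama_url": os.environ.get("OLLAMA_URL", "http://localhost:11434"),
    }
```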
- Run the RAG pipeline or the MCP server components (examples):

```shell
python -m rag_pipeline.run    # pipeline entry (if present)
python -m mcp_server.server   # MCP server (if present)
```
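The core idea of the MCP server component — exposing named tools (indexing, retrieval, etc.) that a model can invoke — can be sketched as a plain registry plus a JSON dispatcher. This is a conceptual illustration only: a real server would use the official MCP SDK and its transport layer, and the tool names and handlers below are hypothetical.

```python
import json

# Registry mapping tool names to handler functions.
TOOLS = {}

def tool(name):
    """Decorator that registers a function as a named tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("index_document")
def index_document(text: str) -> str:
    # Placeholder: a real handler would chunk and embed the text.
    return f"indexed {len(text.split())} tokens"

@tool("retrieve")
def retrieve(query: str) -> str:
    # Placeholder: a real handler would query the vector store.
    return f"top documents for: {query}"

def handle(request_json: str) -> str:
    """Route a JSON request like {"tool": ..., "args": {...}} to its handler."""
    req = json.loads(request_json)
    result = TOOLS[req["tool"]](**req["args"])
    return json.dumps({"result": result})
```

This dispatch-by-name pattern is what lets the agent treat indexing, retrieval, and LLM reasoning as interchangeable tools behind one protocol.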
## Notes

- This repo has been reorganized to focus on a single concrete use case: a Company Knowledgebase AI. Legacy course material was archived under /legacy_course.
- If you want the legacy numbered course folders removed or migrated into /legacy_course, confirm and I will move them.