Example RAG application using Chroma and Ollama.

A RAG application that answers mobile-phone questions using Amazon product reviews. It converts reviews from a CSV file into vector embeddings with Ollama's mxbai-embed-large model and stores them in a Chroma vector database for semantic search. All processing runs locally, with no external API calls.

LangChain orchestrates the pipeline: data processing, embedding generation, vector storage, retrieval, and LLM generation. The vector database persists to disk, so reviews are indexed once and reused for fast retrieval on later runs (see the ingestion sketch below). At query time, the system retrieves the top 3 reviews by semantic similarity (see the retrieval sketch) and passes them as context to Ollama's llama3.2, which generates the answer (see the generation sketch).

Users interact via a command-line interface, asking questions like "What's the best phone for photography?"; the system finds relevant reviews and generates a context-aware answer. Built with Python 3.12+, the project demonstrates a complete RAG workflow using open-source, local AI tools.
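A minimal sketch of the ingestion step, assuming LangChain's `langchain_ollama` and `langchain_chroma` integration packages. The CSV filename, column names, collection name, and persistence path are illustrative assumptions, not taken from the project:

```python
# Sketch: embed reviews from a CSV and store them in a persistent Chroma DB.
# CSV name/columns, collection name, and DB path are assumptions.
import os

import pandas as pd
from langchain_chroma import Chroma
from langchain_core.documents import Document
from langchain_ollama import OllamaEmbeddings

DB_PATH = "./chroma_db"    # assumed on-disk location of the vector store
CSV_PATH = "reviews.csv"   # assumed input file

embeddings = OllamaEmbeddings(model="mxbai-embed-large")

# If the directory already exists, the reviews were indexed on a previous
# run and the on-disk index is reused as-is.
first_run = not os.path.exists(DB_PATH)

vector_store = Chroma(
    collection_name="phone_reviews",   # assumed collection name
    persist_directory=DB_PATH,
    embedding_function=embeddings,
)

if first_run:
    df = pd.read_csv(CSV_PATH)
    documents = [
        Document(
            page_content=f"{row['Title']} {row['Review']}",  # assumed columns
            metadata={"rating": row["Rating"], "date": row["Date"]},
        )
        for _, row in df.iterrows()
    ]
    ids = [str(i) for i in range(len(documents))]
    vector_store.add_documents(documents=documents, ids=ids)
```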
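Retrieval is then a thin layer over the same store. A sketch continuing from the ingestion code above (it reuses `vector_store`); `k=3` matches the top-3 behavior described here:

```python
# Sketch: fetch the 3 reviews most semantically similar to the question.
# Continues from the ingestion sketch above (reuses `vector_store`).
retriever = vector_store.as_retriever(search_kwargs={"k": 3})

docs = retriever.invoke("What's the best phone for photography?")
for doc in docs:
    print(doc.page_content)
```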
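Answer generation and the command-line loop could then look like the following sketch; the prompt wording and variable names are assumptions:

```python
# Sketch: generate an answer from retrieved reviews with llama3.2,
# inside a simple command-line loop. Prompt text is an assumption.
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3.2")

prompt = ChatPromptTemplate.from_template(
    "You are an expert on mobile phones.\n"
    "Here are some relevant customer reviews:\n{reviews}\n\n"
    "Question: {question}"
)
chain = prompt | model   # LCEL: the rendered prompt feeds the model

while True:
    question = input("\nAsk a question (q to quit): ")
    if question.strip().lower() == "q":
        break
    # `retriever` comes from the retrieval sketch above.
    reviews = "\n\n".join(d.page_content for d in retriever.invoke(question))
    print(chain.invoke({"reviews": reviews, "question": question}))
```

Both models must be available locally before the first run (`ollama pull mxbai-embed-large` and `ollama pull llama3.2`).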