RAG Application

This is a Retrieval-Augmented Generation (RAG) application for movie history, built with LangChain, OpenAI, and Pinecone.

Features

  • Chat (multi-turn; preserves conversation history)
  • Question Answering (QA) (single-turn, no history)

Setup

  1. Create a virtual environment:

    python -m venv venv
  2. Activate the virtual environment:

    • On Windows:
      venv\Scripts\activate
    • On macOS and Linux:
      source venv/bin/activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Set up environment variables:

    Create a .env file in the root directory of your project and add the following variables:

    OPENAI_API_KEY=""
    PINECONE_API_KEY=""
    PINECONE_INDEX_NAME="movie-history"
    FEATURE_NAME="chat"  # or "qa"
    

    FEATURE_NAME can be either chat or qa. The difference is that chat preserves conversation history across turns, while qa answers each question independently (see the sketch below).
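
A minimal sketch of how these variables might be read at startup, assuming the project uses python-dotenv. The variable names match the .env above; the loading code itself is an assumption, not the repo's actual implementation:

    import os
    from dotenv import load_dotenv

    load_dotenv()  # read the .env file into the process environment

    openai_key = os.environ["OPENAI_API_KEY"]
    pinecone_key = os.environ["PINECONE_API_KEY"]
    index_name = os.getenv("PINECONE_INDEX_NAME", "movie-history")
    feature = os.getenv("FEATURE_NAME", "chat")  # "chat" keeps history, "qa" is stateless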

Load Data into Pinecone VectorDB

  1. Load the data into Pinecone:

    python ./src/vector_store/load_data.py
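
The general shape of a loading script for a LangChain + Pinecone setup is sketched below. This is an assumption about what src/vector_store/load_data.py typically does, not its actual contents, and data/movies.txt is a placeholder path:

    import os
    from dotenv import load_dotenv
    from langchain_community.document_loaders import TextLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from langchain_openai import OpenAIEmbeddings
    from langchain_pinecone import PineconeVectorStore

    load_dotenv()

    # Load the movie-history documents and split them into chunks.
    docs = TextLoader("data/movies.txt").load()  # placeholder source file
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=1000, chunk_overlap=100
    ).split_documents(docs)

    # Embed each chunk with OpenAI and upsert the vectors into the Pinecone index.
    PineconeVectorStore.from_documents(
        chunks,
        embedding=OpenAIEmbeddings(),
        index_name=os.getenv("PINECONE_INDEX_NAME", "movie-history"),
    )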

Run the Project

  1. Start the project with the Gradio UI:

    python app.py
  • The UI is served at http://127.0.0.1:7860 by default.
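
A hedged sketch of how app.py might wire the Pinecone retriever to a Gradio chat UI. The chain type, model name, and handler below are assumptions rather than the repo's actual code, and this shows the chat feature, which feeds the conversation history back into the chain:

    import os
    from dotenv import load_dotenv
    import gradio as gr
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings
    from langchain_pinecone import PineconeVectorStore
    from langchain.chains import ConversationalRetrievalChain

    load_dotenv()

    # Retrieve movie-history chunks from the existing Pinecone index.
    retriever = PineconeVectorStore(
        index_name=os.getenv("PINECONE_INDEX_NAME", "movie-history"),
        embedding=OpenAIEmbeddings(),
    ).as_retriever()

    chain = ConversationalRetrievalChain.from_llm(
        llm=ChatOpenAI(model="gpt-4o-mini"),  # model name is an assumption
        retriever=retriever,
    )

    def respond(message, history):
        # With Gradio's tuple-style ChatInterface history (the Gradio 4.x
        # default), each entry is a (user, assistant) pair; passing it back
        # to the chain gives follow-up questions their context.
        result = chain.invoke(
            {"question": message, "chat_history": [tuple(pair) for pair in history]}
        )
        return result["answer"]

    gr.ChatInterface(respond).launch()  # serves on http://127.0.0.1:7860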
