---
license: mit
title: Freeekyyy-chatBot
sdk: streamlit
sdk_version: 1.44.1
---

# 🤖 Freeekyyy ChatBot

**Freeekyyy** is an *over-the-top*, emotional AI chatbot that FREAKS OUT (in Markdown!) on any topic you provide. It uses [LangChain](https://github.com/langchain-ai/langchain) + [OpenRouter](https://openrouter.ai) to generate expressive, explosive Markdown responses — perfect for dramatic, chaotic, and wildly informative output.

> 🔥 Now powered by a **RAG (Retrieval-Augmented Generation) pipeline** that answers using your own PDFs and documents!

Check it out live 👉 [MKCL/Freeekyyy-chatBot on Hugging Face 🤯](https://huggingface.co/spaces/MKCL/Freeekyyy-chatBot)

---

## 🧠 How It Works

- Uses LangChain's `ChatPromptTemplate` to inject emotional few-shot prompts.
- Connects to **DeepSeek-R1-Zero** via [OpenRouter](https://openrouter.ai).
- Uses **vector search** (via `ChromaDB`) and **Hugging Face embeddings** for document retrieval (RAG).
- Outputs responses in beautiful **Markdown (.md)** format.
- Runs as a **Streamlit app** or a **FastAPI backend**.

---

## 🔍 Retrieval-Augmented Generation (RAG)

The chatbot includes a smart document processing pipeline:

1. **Document Ingestion**: Parses your uploaded PDF files.
2. **Chunking**: Splits them into overlapping text chunks.
3. **Embeddings**: Generates vector embeddings using `BAAI/bge-small-en`.
4. **Vector Store**: Stores the chunks in `ChromaDB`.
5. **Context Injection**: Relevant chunks are inserted into the LLM prompt for context-aware responses!
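The five steps above can be illustrated with a dependency-free toy sketch. Everything here is a stand-in: the real app parses PDFs, embeds chunks with `BAAI/bge-small-en`, and ranks them in `ChromaDB` — this sketch swaps in bag-of-words "embeddings" and brute-force cosine similarity just to show the shape of the pipeline.

```python
import re
from collections import Counter
from math import sqrt

def chunk_text(text, chunk_size=200, overlap=50):
    """Step 2 (toy): split text into overlapping chunks."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Step 3 (toy): a bag-of-words vector instead of BAAI/bge-small-en."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=4):
    """Steps 4-5 (toy): rank chunks by similarity; ChromaDB does this for real."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Step 5: inject the retrieved context into the prompt
document = (
    "Volcanoes are openings in a planet's crust. "
    "Lava is molten rock expelled during an eruption. "
    "Streamlit renders Markdown with st.markdown."
)
chunks = chunk_text(document, chunk_size=60, overlap=15)
query = "What is lava?"
context = "\n\n".join(retrieve(query, chunks, k=2))
prompt = f"You are an emotional assistant.\n\nContext:\n{context}\n\nQuestion: {query}"
```

The real pipeline follows the same flow, with LangChain loaders and splitters handling steps 1–2 and the vector store handling retrieval.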
---

## 🖥️ Streamlit Integration

To display Markdown output in Streamlit:

```python
import streamlit as st

# Assuming `md_output` contains your model's response
st.markdown(md_output, unsafe_allow_html=True)
```

---

## 🚀 Installation

### Option 1: Using `uv`

```bash
uv pip install -r requirements.txt
```

### Option 2: Using regular pip

```bash
pip install -r requirements.txt
```

---

## 📦 Requirements

```
langchain
langchain-community
langchain-openai
openai
chromadb
python-dotenv
huggingface_hub
sentence-transformers
streamlit
uvicorn
fastapi
```

---

## 🛠️ Environment Variables

Create a `.env` file in the root directory:

```
OPENROUTER_API_KEY=your_openrouter_key_here
HUGGINGFACE_API_KEY=your_huggingface_key_here
```

---

## 🧪 Example Prompt Structure

```python
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You're an extremely emotional AI. Always freak out in Markdown."),
    ("user", "Topic: Volcanoes")
])
```

---

## 🔗 RAG Query with Vector Search

```python
# Sample retrieval pipeline
relevant_chunks = db.similarity_search(query, k=4)
context = "\n\n".join([doc.page_content for doc in relevant_chunks])

final_prompt = f"""
You are an emotional assistant. Respond dramatically using Markdown.

Context:
{context}

Question: {query}
"""
```

---

## 🧑‍💻 Want to Use as an API?

Run your backend like this:

```bash
uvicorn main:app --reload
```

---

## 📎 License

MIT — go freak out and teach some AI emotions! 🤯❤️🔥