MKCL committed · commit c6c15e2 (verified) · 1 parent: 76eccc3

Update README.md
---
title: Freeekyyy ChatBot
emoji: 🍃
colorFrom: purple
colorTo: green
sdk: streamlit
sdk_version: 1.44.1
app_file: app.py
pinned: false
short_description: A chat bot that freaks out on every small thing
---
# 🤖 Freeekyyy ChatBot

**Freeekyyy** is an *over-the-top*, emotional AI chatbot that FREAKS OUT (in Markdown!) about any topic you provide.
It uses [LangChain](https://github.com/langchain-ai/langchain) + [OpenRouter](https://openrouter.ai) to generate expressive, explosive Markdown responses: perfect for dramatic, chaotic, and wildly informative outputs.

> 🔥 Now powered by a **RAG (Retrieval-Augmented Generation) pipeline** that answers using your own PDFs and documents!

Check it out live 👉 [MKCL/Freeekyyy-chatBot on Hugging Face 🤯](https://huggingface.co/spaces/MKCL/Freeekyyy-chatBot)

---

## 🧠 How It Works

- Uses `LangChain`'s `ChatPromptTemplate` to inject emotional few-shot prompts.
- Connects to **DeepSeek-R1-Zero** via [OpenRouter](https://openrouter.ai).
- Uses **vector search** (via `ChromaDB`) and **HuggingFace embeddings** for document retrieval (RAG).
- Outputs responses in **Markdown (`.md`)** format.
- Works as a **Streamlit app** or a **FastAPI backend**.
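OpenRouter exposes an OpenAI-compatible API, so the request behind that connection can be sketched with the standard library alone. The endpoint is OpenRouter's documented chat-completions URL; the model slug and the `build_request` helper are illustrative assumptions, since the app itself likely goes through LangChain:

```python
import json
import os
import urllib.request

# OpenAI-compatible chat-completions endpoint exposed by OpenRouter.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(topic: str) -> urllib.request.Request:
    """Build (but do not send) a chat request asking the model to freak out."""
    payload = {
        # Model slug is an assumption; check OpenRouter's model list.
        "model": "deepseek/deepseek-r1-zero:free",
        "messages": [
            {"role": "system", "content": "You're an extremely emotional AI. Always freak out in Markdown."},
            {"role": "user", "content": f"Topic: {topic}"},
        ],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Volcanoes")
```

Sending `req` with `urllib.request.urlopen` (or swapping in LangChain's OpenAI-compatible chat model pointed at the same base URL) returns the dramatic Markdown response.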

---

## 🔍 Retrieval-Augmented Generation (RAG)

The chatbot now includes a smart document-processing pipeline:

1. **Document Ingestion**: parses your uploaded PDF files.
2. **Chunking**: splits them into overlapping text chunks.
3. **Embeddings**: generates vector embeddings using `BAAI/bge-small-en`.
4. **Vector Store**: stores the chunks in `ChromaDB`.
5. **Context Injection**: relevant chunks are inserted into the LLM prompt for context-aware responses!
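The chunking step above can be sketched as a simple character splitter with overlap; the sizes here are illustrative, and the real pipeline presumably uses a LangChain text splitter:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks so context isn't lost at boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    # Start a new chunk every `step` characters; each chunk spans `chunk_size`.
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

text = "".join(str(i % 10) for i in range(1200))
chunks = chunk_text(text, chunk_size=500, overlap=100)
```

The 100-character overlap means the tail of each chunk reappears at the head of the next, so a sentence cut at a boundary is still fully present in at least one chunk.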

---

## 🖥️ Streamlit Integration

To display Markdown output in Streamlit:

```python
import streamlit as st

# Assuming `md_output` contains your model's response
st.markdown(md_output, unsafe_allow_html=True)
```

---

## 🚀 Installation

### Option 1: Using `uv`
```bash
uv pip install -r requirements.txt
```

### Option 2: Using regular `pip`
```bash
pip install -r requirements.txt
```

---

## 📦 Requirements

```
langchain
langchain-community
langchain-openai
openai
chromadb
python-dotenv
huggingface_hub
sentence-transformers
streamlit
uvicorn
fastapi
```

---

## 🛠️ Environment Variables

Create a `.env` file in the root directory:

```
OPENROUTER_API_KEY=your_openrouter_key_here
HUGGINGFACE_API_KEY=your_huggingface_key_here
```
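Since `python-dotenv` is in the requirements, the app presumably calls `load_dotenv()` at startup to pull these into the environment. For illustration, a stdlib-only sketch of what that loading amounts to (the `load_env_file` helper is hypothetical, not part of the app):

```python
import os
import tempfile

def load_env_file(path: str) -> dict[str, str]:
    """Parse simple KEY=value lines from a .env file into os.environ."""
    loaded = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            # Skip blank lines, comments, and anything without a KEY=value shape.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
    os.environ.update(loaded)
    return loaded

# Demo on a throwaway file.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as tmp:
    tmp.write("OPENROUTER_API_KEY=your_openrouter_key_here\n# a comment\n")
    env_path = tmp.name
env = load_env_file(env_path)
```

In the real app, `from dotenv import load_dotenv; load_dotenv()` does the same job with proper quoting and edge-case handling.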

---

## 🧪 Example Prompt Structure

```python
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You're an extremely emotional AI. Always freak out in Markdown."),
    ("user", "Topic: Volcanoes"),
])
```

---

## 🔗 RAG Query with Vector Search

```python
# Sample retrieval pipeline: `db` is the ChromaDB vector store built during ingestion.
relevant_chunks = db.similarity_search(query, k=4)
context = "\n\n".join([doc.page_content for doc in relevant_chunks])

final_prompt = f"""
You are an emotional assistant. Respond dramatically using Markdown.

Context:
{context}

Question:
{query}
"""
```
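Under the hood, `similarity_search` ranks the stored chunks by vector similarity between their embeddings and the query embedding. A toy illustration using cosine similarity, with hand-made 2-D vectors standing in for real `bge-small-en` embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": the query vector should match the volcano chunk, not the cooking one.
query_vec = [0.9, 0.1]
chunk_vecs = {
    "Volcanoes erupt when magma rises.": [0.8, 0.2],
    "Fold the egg whites gently.": [0.1, 0.9],
}
best_chunk = max(chunk_vecs, key=lambda text: cosine_similarity(query_vec, chunk_vecs[text]))
```

ChromaDB does the same ranking over hundreds of dimensions with an approximate nearest-neighbor index rather than a brute-force `max`.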

---

## 🧑‍💻 Want to Use It as an API?

Run the backend like this:

```bash
uvicorn main:app --reload
```

---

## 📎 License

MIT. Go freak out and teach some AI emotions! 🤯❤️🔥