---
title: DocTalk - Chat With PDF
emoji: 📄💬
colorFrom: indigo
colorTo: pink
sdk: streamlit
sdk_version: "1.35.0"
app_file: app.py
pinned: false
---

# 📄💬 DocTalk - Chat With PDF

An intelligent, completely free-to-run PDF chat application powered by Google's Gemma-2-2b-it model. Optimized for CPU usage on Hugging Face Spaces.

## ✨ Features

### 🤖 **Core Engine**

* **Model:** Google Gemma-2-2B-IT (Instruction Tuned)
* **Architecture:** Runs entirely locally on CPU (no GPU required)
* **Performance:** Optimized with FAISS for instant vector retrieval

### 🎯 **Key Capabilities**

* ⚡ **CPU Optimized** - Runs smoothly on the Hugging Face free tier
* 📤 **Easy Upload** - Simple sidebar PDF upload
* 🧠 **Smart Context** - Uses `all-MiniLM-L6-v2` for precise semantic search
* 💬 **Memory** - Maintains chat history within the session
* 🔐 **Secure** - Handles Hugging Face tokens via environment secrets

## 🚀 How to Use

### 1. Set Up Authentication

* This app requires a **Hugging Face Access Token** (read permission) to download the Gemma model.
* **For users:** Enter your token in the app sidebar if prompted (or set it in Space secrets).
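For a local run, one common way to supply the token is through the environment. The token value below is a placeholder, and the settings-page path is an assumption about the current Hugging Face UI; on a Space, add the token as a secret in the Space settings rather than hard-coding it:

```shell
# Placeholder token: create your own at huggingface.co/settings/tokens
export HF_TOKEN="hf_xxxxxxxxxxxxxxxxxxxx"

# Or log in once and let huggingface_hub cache the token on disk:
huggingface-cli login
```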

### 2. Upload Your PDF

* Navigate to the sidebar
* Click "Browse files" to upload your PDF document
* Click **"🚀 Process Document"**

### 3. Start Chatting!

* Wait for the "✅ Ready to chat!" notification
* Type your question in the chat input at the bottom
* Receive concise, context-aware answers from Gemma-2

## 🛠️ Technical Stack

* **Frontend**: Streamlit
* **LLM**: google/gemma-2-2b-it
* **Embeddings**: sentence-transformers/all-MiniLM-L6-v2
* **Vector Store**: FAISS (Facebook AI Similarity Search)
* **PDF Processing**: PyPDFLoader
* **Orchestration**: LangChain

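The stack above implements a standard retrieve-then-generate loop: embed the PDF chunks, find the chunks nearest the question, and hand them to the LLM. A rough, dependency-free sketch of just the retrieval step follows; the toy bag-of-words vectors stand in for MiniLM embeddings, the brute-force cosine search stands in for FAISS, and `embed`, `retrieve`, and the sample chunks are all illustrative, not the app's actual code:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for all-MiniLM-L6-v2."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query (FAISS does this at scale)."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

chunks = [
    "The invoice total is 420 euros, due on March 1.",
    "Shipping is handled by a third-party courier.",
    "Payment terms: the invoice must be settled within 30 days.",
]
print(retrieve(chunks, "when is the invoice due", k=1)[0])
# -> "The invoice total is 420 euros, due on March 1."
```

In the real app the retrieved chunks are then prepended to the user's question as context for Gemma-2, which is what keeps answers grounded in the uploaded document.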
## 📦 Installation (Local)

To run this app on your own machine:

```bash
git clone <your-repo>
cd <your-repo>
pip install -r requirements.txt
streamlit run app.py
```

**Live demo:** https://huggingface.co/spaces/ChiragKaushikCK/Chat_with_PDF

## 📊 Features Breakdown

**FAISS Vector Search**
Replaces heavy database lookups with lightweight, in-memory similarity search. Ensures responses are strictly grounded in your uploaded document.

**Pre-loaded Models**
The embedding models are cached (`@st.cache_resource`) to ensure the app feels snappy after the initial cold start.

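The app itself decorates its model-loading function with Streamlit's `@st.cache_resource`; the effect, that the expensive load runs once per process and every rerun reuses the same object, can be sketched with the standard library's memoization. The `load_model` function and its return value here are purely illustrative:

```python
from functools import cache

@cache  # stands in for Streamlit's @st.cache_resource: the body runs only once
def load_model() -> dict:
    # In the real app this is where the embedding model would be downloaded.
    print("cold start: loading the embedding model...")
    return {"model_name": "sentence-transformers/all-MiniLM-L6-v2"}

first = load_model()   # triggers the expensive load (prints the message)
second = load_model()  # served from cache, same object, no message
assert first is second
```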
**Gemma-2-2B-IT**
Google's latest lightweight open model:

* Instruction-tuned for better Q&A performance compared to base models.
* Small enough (~2.6B params) to fit in standard RAM.

## ⚠️ Limitations

* **Speed:** Since this runs on CPU, generating long answers may take a few seconds.
* **Memory:** Designed for standard PDFs. Extremely large files (500+ pages) might hit RAM limits on free tiers.
* **Session:** Chat history is cleared if the page is refreshed.

## 🤝 Contributing

Contributions are welcome! Please feel free to submit issues or pull requests to improve the UI or add new features.

## 📝 License

MIT License

## 🔗 Links

* [Google Gemma Models](https://ai.google.dev/gemma)
* [LangChain Documentation](https://python.langchain.com/)
* [Streamlit](https://streamlit.io/)

<div align="center">Made with ❤️ using Streamlit and the Gemma model, by Tannu Yadav</div>