Update README.md

README.md CHANGED
@@ -9,27 +9,30 @@ license: mit

Removed:

- Ollama AI Assistant
- This Hugging Face Space hosts an AI assistant powered by Ollama, FastAPI, and Streamlit.
- How
- The Docker container starts, running Ollama, FastAPI, and Streamlit concurrently.
- Ollama
- Simply type your query into the text box and click "Get Response" to interact with the AI assistant.
|
|
|
|
| 9 |
---
|
| 10 |
|
| 11 |
|
| 12 |
+
# π€ Ollama AI Assistant

This project hosts a lightweight AI assistant powered by **Ollama**, **FastAPI**, and **Streamlit**, all bundled in a single Docker environment.

## Overview

- **Ollama** – Runs and serves the LLM model.
- **FastAPI** – Handles backend API requests to interact with the model.
- **Streamlit** – Provides a user-friendly web UI.
- **Docker** – Runs everything in isolated and reproducible containers.

---

## How It Works

1. **Ollama** loads the LLM model: `krishna_choudhary/lightweight_chatbot`.
2. **FastAPI** provides an API backend (running on internal port `7860`) for prompt-response communication.
3. **Streamlit UI** (exposed on port `8501`) lets users enter prompts and receive responses.
4. The UI interacts with FastAPI, which in turn queries the LLM via Ollama (see the sketch below).
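
A minimal sketch of the FastAPI relay this flow describes, assuming Ollama's HTTP API is reachable inside the container on its default port `11434`; the `/ask` route, request field names, and file name are illustrative assumptions, not the project's actual code:

```python
# app.py - hypothetical FastAPI relay between the Streamlit UI and Ollama.
# Run with: uvicorn app:app --host 0.0.0.0 --port 7860
import requests
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

OLLAMA_URL = "http://localhost:11434/api/generate"   # Ollama's default local endpoint
MODEL_NAME = "krishna_choudhary/lightweight_chatbot"  # model named in this README

app = FastAPI()

class Prompt(BaseModel):
    prompt: str

@app.post("/ask")  # illustrative route name
def ask(body: Prompt) -> dict:
    """Forward the user's prompt to Ollama and return the generated text."""
    try:
        resp = requests.post(
            OLLAMA_URL,
            json={"model": MODEL_NAME, "prompt": body.prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
    except requests.RequestException as exc:
        raise HTTPException(status_code=502, detail=str(exc))
    # Ollama's non-streaming /api/generate reply carries the text in "response"
    return {"response": resp.json().get("response", "")}
```

With a backend like this listening on port `7860`, the UI only needs to POST JSON such as `{"prompt": "Hello"}` to the route and read back the `response` field.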

---

## User Interface

By default, the **Streamlit UI** is the primary interface and launches at:
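
The interface amounts to a text box plus a "Get Response" button, as the earlier README put it. A minimal Streamlit sketch of that interaction; the backend URL and `/ask` route mirror the hypothetical FastAPI sketch above and are assumptions:

```python
# ui.py - hypothetical Streamlit front end for the FastAPI backend.
# Run with: streamlit run ui.py --server.port 8501
import requests
import streamlit as st

BACKEND_URL = "http://localhost:7860/ask"  # assumed FastAPI route from the sketch above

st.title("Ollama AI Assistant")

prompt = st.text_area("Enter your query")  # text box for the user's prompt

if st.button("Get Response"):
    if not prompt.strip():
        st.warning("Please enter a query first.")
    else:
        with st.spinner("Waiting for the model..."):
            try:
                resp = requests.post(BACKEND_URL, json={"prompt": prompt}, timeout=120)
                resp.raise_for_status()
                st.write(resp.json().get("response", ""))
            except requests.RequestException as exc:
                st.error(f"Backend request failed: {exc}")
```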