sdk: docker
pinned: false
license: mit
---

This Hugging Face Space hosts an AI assistant powered by Ollama, FastAPI, and Streamlit.

- Ollama: the large language model (LLM) inference engine.
- FastAPI: provides a robust backend API for interacting with the LLM.
- Streamlit: offers an intuitive and interactive web-based user interface.

### How it Works
The Docker container starts, running Ollama, FastAPI, and Streamlit concurrently.
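A launcher along these lines could start the three services concurrently (a sketch only: the module names `app:app` and `ui.py` are hypothetical, and the Space's actual entrypoint may differ; the ports match those described below):

```python
import subprocess

# Hypothetical service commands; only the ports (7860 for FastAPI,
# 8501 for Streamlit) are taken from this README.
COMMANDS = [
    ["ollama", "serve"],
    ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"],
    ["streamlit", "run", "ui.py", "--server.port", "8501"],
]

def launch_all(commands):
    """Start each service as a concurrent child process and return the handles."""
    return [subprocess.Popen(cmd) for cmd in commands]

def main():
    procs = launch_all(COMMANDS)
    for p in procs:
        p.wait()  # block so the container stays alive while the services run
```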
Ollama pulls and serves the `krishna_choudhary/lightweight_chatbot` model.
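Once the model is pulled, the backend can query it through Ollama's local HTTP API (by default on port `11434`). A minimal sketch, using only the standard library — the Space's actual backend code may differ:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local API

def build_request(prompt: str) -> dict:
    # Payload shape follows Ollama's /api/generate endpoint.
    return {
        "model": "krishna_choudhary/lightweight_chatbot",
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON response, not a stream
    }

def generate(prompt: str) -> str:
    # Assumes the Ollama server is already running inside the container.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```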
The Streamlit UI (exposed on port `8501`) communicates with the FastAPI backend (running internally on port `7860`) to send user prompts and receive AI responses.
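On the Streamlit side, the round trip to the backend might look like this (a sketch: the `/chat` endpoint path and the `{"prompt": ..., "response": ...}` payload shape are assumptions, not taken from the Space's code):

```python
import json
import urllib.request

BACKEND_URL = "http://localhost:7860/chat"  # FastAPI runs internally on 7860

def parse_reply(raw: bytes) -> str:
    """Extract the reply text from the backend's JSON body (assumed shape)."""
    return json.loads(raw)["response"]

def ask_backend(prompt: str, url: str = BACKEND_URL) -> str:
    """Send the user's prompt to the FastAPI backend and return the AI reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_reply(resp.read())
```

The Streamlit button handler would call `ask_backend` with the text-box contents and render the returned string.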
### Get Started
Simply type your query into the text box and click "Get Response" to interact with the AI assistant.
Built with Docker, Ollama, FastAPI, and Streamlit.