Update README.md
---
title: Ollama AI Assistant
emoji: 🤖
colorFrom: purple
colorTo: blue
sdk: docker
app_port: 8501
pinned: false
license: mit
---

This Hugging Face Space hosts an AI assistant powered by Ollama, FastAPI, and Streamlit.

Ollama: The large language model (LLM) inference engine.

FastAPI: Provides a robust backend API for interacting with the LLM.

Streamlit: Offers an intuitive and interactive web-based user interface.

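To make the division of labor concrete: a backend in this stack typically forwards each prompt to Ollama's local REST API (`POST /api/generate` on port `11434`, Ollama's default). The sketch below is stdlib-only and illustrative; the function names are hypothetical, but the endpoint and JSON fields follow Ollama's documented generate API, and the model name comes from this README.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local API

def build_ollama_request(
    prompt: str,
    model: str = "krishna_choudhary/AI_Assistant_Chatbot",
) -> bytes:
    """Encode the JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def generate(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    req = request.Request(
        OLLAMA_URL,
        data=build_ollama_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # A non-streaming generate call returns one JSON object with a
        # "response" field holding the full completion.
        return json.load(resp)["response"]
```
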
### How it Works

The Docker container starts, running Ollama, FastAPI, and Streamlit concurrently.

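Running three services concurrently in one container is usually handled by a small entrypoint script. The sketch below is a hypothetical deployment fragment, not this Space's actual entrypoint; the file and module names (`start.sh`, `backend:app`, `ui.py`) are assumptions.

```shell
#!/bin/sh
# Illustrative start.sh: launch all three services in one container.
ollama serve &                                        # Ollama inference engine
sleep 5                                               # crude wait for the Ollama API
ollama pull krishna_choudhary/AI_Assistant_Chatbot    # fetch the model
uvicorn backend:app --host 0.0.0.0 --port 7860 &      # FastAPI backend, internal port
exec streamlit run ui.py --server.port 8501 --server.address 0.0.0.0  # UI on app_port
```
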
Ollama pulls and serves the `krishna_choudhary/AI_Assistant_Chatbot` model.

The Streamlit UI (exposed on port `8501`) communicates with the FastAPI backend (running internally on port `7860`) to send user prompts and receive AI responses.

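The UI-to-backend round trip described above can be sketched with the standard library alone. This README does not document the backend's routes, so the `/generate` path and the `prompt`/`response` JSON fields here are assumptions chosen for illustration.

```python
import json
from urllib import request

BACKEND_URL = "http://localhost:7860/generate"  # hypothetical FastAPI route

def build_prompt_payload(prompt: str) -> bytes:
    """Encode a user prompt as the JSON body sent to the FastAPI backend."""
    return json.dumps({"prompt": prompt}).encode("utf-8")

def parse_response_body(body: bytes) -> str:
    """Extract the model's reply from the backend's JSON response."""
    return json.loads(body.decode("utf-8"))["response"]

def ask_backend(prompt: str) -> str:
    """What the Streamlit UI does on each submit: POST the prompt, return the reply."""
    req = request.Request(
        BACKEND_URL,
        data=build_prompt_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return parse_response_body(resp.read())
```
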
### Get Started

Simply type your query into the text box and click "Get Response" to interact with the AI assistant.

Built with Docker, Ollama, FastAPI, and Streamlit.