---
title: ThinkPad
emoji: 🚀
colorFrom: green
colorTo: green
sdk: gradio
sdk_version: 6.0.1
app_file: app.py
pinned: false
hf_oauth: true
hf_oauth_scopes:
- inference-api
license: mit
---
# ThinkPad 🪐
A simple, interactive chatbot with thinking mode.
**Project built with [JumpLander](https://jumplander.org)**
---
## Overview
ThinkPad is an interactive chatbot that uses a Hugging Face Inference API model to generate natural responses.
It features a "thinking" mode that simulates a more human-like conversation flow.
- **Thinking Mode**: Delays response for a short time to simulate thinking.
- **Safe & Lightweight**: Runs entirely using Hugging Face Inference API, no heavy GPU needed.
- **Easy to Configure**: Set the model and token in Hugging Face Spaces Variables & Secrets.
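The thinking-mode delay can be sketched as below. This is an illustrative example, not the actual `app.py` code; `respond` and `generate_reply` are hypothetical helper names, and the real reply would come from the Inference API:

```python
import time

def generate_reply(message: str) -> str:
    # Placeholder standing in for the real Hugging Face Inference API call.
    return f"Echo: {message}"

def respond(message: str, thinking_mode: bool, delay: float = 1.5) -> str:
    """Return a reply, optionally pausing first to simulate 'thinking'."""
    if thinking_mode:
        time.sleep(delay)  # short pause before the bot "replies"
    return generate_reply(message)
```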
---
## How to Use
1. Enter your message in the chat box.
2. Toggle "Thinking mode" if you want a small delay before the bot replies.
3. The bot will generate a response using the selected HF model.
---
## Configuration
Set the following **Variables & Secrets** in your Hugging Face Space:
| Name | Type | Description |
|-----------------|---------|-------------|
| HF_API_TOKEN | Secret | Your Hugging Face API token |
| GPT_MODEL_ID | Variable | The model ID to use for chat (e.g., `HuggingFaceH4/zephyr-7b-beta`) |
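Inside the Space, these values are exposed as environment variables. A minimal sketch of reading them (the fallback model ID and the `load_config` helper are illustrative assumptions, not the actual `app.py` code):

```python
import os

def load_config(env=os.environ):
    """Read the Space's config from environment variables.

    HF_API_TOKEN is a secret (None if not configured);
    GPT_MODEL_ID falls back to an assumed default model ID.
    """
    token = env.get("HF_API_TOKEN")
    model_id = env.get("GPT_MODEL_ID", "HuggingFaceH4/zephyr-7b-beta")
    return token, model_id
```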
---
## Example
```python
# Example Python usage outside the Gradio app
from huggingface_hub import InferenceClient
client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta", token="YOUR_HF_API_TOKEN")
response = client.text_generation("Hello, how are you?")
print(response)
```
---
## Credits
This project is powered by Hugging Face and written with ❤️ by JumpLander.
---
An example chatbot using [Gradio](https://gradio.app), [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/index), and the [Hugging Face Inference API](https://huggingface.co/docs/api-inference/index).