---
title: ThinkPad
emoji: 🚀
colorFrom: green
colorTo: green
sdk: gradio
sdk_version: 6.0.1
app_file: app.py
pinned: false
hf_oauth: true
hf_oauth_scopes:
  - inference-api
license: mit
---

# ThinkPad 🚀

A simple, interactive chatbot with thinking mode.

Project built with JumpLander


## Overview

ThinkPad is an interactive chatbot that uses a Hugging Face Inference API model to generate natural responses.
It features a "thinking" mode to simulate a more human-like conversation flow.

- **Thinking Mode:** delays the response for a short time to simulate thinking.
- **Safe & Lightweight:** runs entirely on the Hugging Face Inference API; no heavy GPU needed.
- **Easy to Configure:** set the model and token in the Space's Variables & Secrets.

## How to Use

  1. Enter your message in the chat box.
  2. Toggle "Thinking mode" if you want a small delay before the bot replies.
  3. The bot will generate a response using the selected HF model.
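The flow above can be sketched as a small helper function (a hypothetical sketch, not the Space's actual code; the `generate` callable and `thinking_delay` parameter are assumptions for illustration):

```python
import time

def respond(message, generate, thinking_mode=False, thinking_delay=1.0):
    """Reply to a message, optionally pausing first to simulate 'thinking'.

    `generate` is any callable mapping a prompt string to a reply string,
    e.g. a wrapper around InferenceClient.text_generation.
    """
    if thinking_mode:
        time.sleep(thinking_delay)  # short artificial pause before replying
    return generate(message)

# Usage with a stub in place of a real Inference API client:
reply = respond("Hello!", lambda p: f"Echo: {p}", thinking_mode=True, thinking_delay=0.1)
print(reply)  # Echo: Hello!
```

Injecting the model call as a callable keeps the thinking-mode logic testable without a network connection.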

## Configuration

Set the following Variables & Secrets in your Hugging Face Space:

| Name | Type | Description |
|------|------|-------------|
| `HF_API_TOKEN` | Secret | Your Hugging Face API token |
| `GPT_MODEL_ID` | Variable | The model ID to use for chat (e.g., `HuggingFaceH4/zephyr-7b-beta`) |
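Secrets and Variables set in a Space are exposed to the app as environment variables, so app code can read them like this (a minimal sketch; the fallback model ID is only an illustration):

```python
import os

# Secrets and Variables configured in the Space appear as environment variables.
HF_API_TOKEN = os.environ.get("HF_API_TOKEN")  # Secret: your API token
GPT_MODEL_ID = os.environ.get("GPT_MODEL_ID", "HuggingFaceH4/zephyr-7b-beta")  # Variable: model ID

if not HF_API_TOKEN:
    # Without a token, calls to the Inference API will be rejected.
    print("Warning: HF_API_TOKEN is not set; Inference API calls will fail.")
```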

## Example

```python
# Example Python usage (if you want to use the model outside Gradio)
from huggingface_hub import InferenceClient

client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta", token="YOUR_HF_API_TOKEN")
response = client.text_generation("Hello, how are you?")
print(response)
```

## Credits
This project is powered by Hugging Face and written with ❤️ by JumpLander.
