---
title: DeepSeek-R1 WebGPU
emoji: 🧠🐳
colorFrom: red
colorTo: blue
sdk: docker
pinned: false
app_port: 7860
license: apache-2.0
short_description: Small reasoning model running in browser
thumbnail: >-
  https://huggingface.co/spaces/webml-community/deepseek-r1-webgpu/resolve/main/banner.png
---

# DeepSeek-R1 WebGPU

Next-generation reasoning model running entirely in your browser using WebGPU acceleration.

## System Requirements

- WebGPU-compatible browser
- 6GB+ RAM recommended
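
Before loading the model, you can check whether the current browser exposes WebGPU at all. A minimal detection sketch (the helper name `supportsWebGPU` is ours, not part of the app; it takes a navigator-like object so it can also be exercised outside the browser):

```javascript
// Minimal WebGPU feature check. In page code you would pass the
// global `navigator`; accepting it as a parameter keeps the helper
// testable in non-browser environments.
function supportsWebGPU(nav) {
  return Boolean(nav && "gpu" in nav && nav.gpu);
}

// In the browser: supportsWebGPU(navigator)
```

A fuller check would also await `navigator.gpu.requestAdapter()` and confirm it resolves to a non-null adapter, since `navigator.gpu` can be present on systems where no suitable GPU is available.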

## Features

- 🚀 Runs entirely in browser using WebGPU acceleration
- 🧠 1.5B parameter reasoning model
- 💭 Shows step-by-step reasoning process
- 🔢 LaTeX math support
- 🌙 Dark mode support

## Getting Started

Follow the steps below to set up and run the application.

### 1. Clone the Repository

Clone the examples repository from GitHub:

```bash
git clone https://github.com/huggingface/transformers.js-examples.git
```

### 2. Navigate to the Project Directory

Change your working directory to the `deepseek-r1-webgpu` folder:

```bash
cd transformers.js-examples/deepseek-r1-webgpu
```

### 3. Install Dependencies

Install the necessary dependencies using npm:

```bash
npm i
```

### 4. Run the Development Server

Start the development server:

```bash
npm run dev
```

The application should now be running locally. Open your browser and go to `http://localhost:5173` to see it in action.
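
Under the hood, the app uses transformers.js to run the model on the GPU. A rough sketch of how the model might be loaded (the model id and the `device`/`dtype` options here are assumptions based on the transformers.js v3 API — check the project's worker source for the exact code):

```javascript
// Hypothetical sketch of model loading with transformers.js.
// MODEL_ID and MODEL_OPTIONS are illustrative assumptions.
const MODEL_ID = "onnx-community/DeepSeek-R1-Distill-Qwen-1.5B-ONNX";
const MODEL_OPTIONS = {
  device: "webgpu", // run inference on the GPU via WebGPU
  dtype: "q4f16",   // quantized weights to fit in browser memory
};

async function loadGenerator() {
  // Dynamic import keeps this file parseable without the package installed.
  const { pipeline } = await import("@huggingface/transformers");
  return pipeline("text-generation", MODEL_ID, MODEL_OPTIONS);
}
```

In the actual app this runs inside a Web Worker so that model download and inference do not block the UI thread.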

## Usage

1. Click "Load model" to download and initialize the model.
2. Wait for model initialization to complete (~1-2 minutes).
3. Type your question or select an example.
4. View the model's step-by-step reasoning.
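
DeepSeek-R1 emits its chain of thought between `<think>` and `</think>` tags before giving the final answer. A small sketch of how an app could separate the two for display (the helper name `splitReasoning` is ours; the actual UI may do this differently):

```javascript
// Split a DeepSeek-R1 style response into its reasoning and final answer.
// The model wraps its chain of thought in <think>...</think> tags.
function splitReasoning(text) {
  const match = text.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) return { reasoning: "", answer: text.trim() };
  const answer = text.slice(match.index + match[0].length).trim();
  return { reasoning: match[1].trim(), answer };
}
```

Streaming UIs typically do this incrementally, but the same tag boundary marks where the reasoning ends and the answer begins.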

## Browser Compatibility

- ✅ Chrome

## Troubleshooting

- **WebGPU Not Available**: Ensure you're using a compatible browser with WebGPU enabled.
- **Out of Memory**: Try closing other browser tabs to free up GPU memory.
- **Model Loading Fails**: Check your internet connection and try reloading.

## License & Attribution