---
title: Chat with Llama 4, Gemini 2.5, Qwen2.5, Mistral and Deepseek-R1 (reasoning) augmented LLMs
emoji: π
colorFrom: indigo
colorTo: purple
sdk: docker
pinned: true
license: apache-2.0
models:
- deepseek-ai/DeepSeek-R1
- deepseek-ai/DeepSeek-R1-Distill-Llama-70B
- mistralai/Mistral-Small-24B-Instruct-2501
- mistralai/Mistral-Large-Instruct-2411
- mistralai/Mistral-Nemo-Instruct-2407
- meta-llama/Llama-4-Scout-17B-16E-Instruct
short_description: Supercharge your favourite LLMs with reasoning capabilities
---

# Reasoner4All

Reasoner4All is a simple demo that lets you supercharge your favourite LLMs by integrating advanced reasoning capabilities from the latest open-source thinking models. Whether you're working with OpenAI, Mistral, Groq, or Gemini, it seamlessly enhances their decision-making with an additional reasoning layer. Leverage models like DeepSeek R1 Distill LLaMA-70B to improve your AI's problem-solving skills and contextual understanding!

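The reasoning layer described above can be sketched as a two-stage pipeline: a reasoning model thinks out loud first, and its trace is injected into the answering model's prompt. This is an illustrative sketch only (the function and the prompt wording are assumptions, not the Space's actual implementation, which goes through AiCore); the two callables stand in for real provider clients.

```python
def augment_with_reasoning(reasoner, responder, question):
    """Two-stage call: the reasoner thinks first, the responder answers
    with that reasoning available as extra context.

    `reasoner` and `responder` are any callables mapping a prompt string
    to a completion string (e.g. thin wrappers around Groq / OpenAI /
    Gemini clients).
    """
    # Stage 1: ask the reasoning model (e.g. DeepSeek R1 Distill) to think.
    thinking = reasoner(f"Think step by step about: {question}")
    # Stage 2: hand the reasoning trace to the main model alongside the question.
    prompt = (
        f"Reasoning from a helper model:\n<think>{thinking}</think>\n\n"
        f"Using it only where helpful, answer: {question}"
    )
    return responder(prompt)

# Usage with stub "models" standing in for real provider clients:
reasoner = lambda p: "2 + 2 equals 4 because of basic arithmetic."
responder = lambda p: "The answer is 4." if "4" in p else "Unsure."
print(augment_with_reasoning(reasoner, responder, "What is 2 + 2?"))
```

The key design point is that the responder never needs to be a reasoning model itself; it only receives the helper's trace as context, which is what lets any provider's chat model be "augmented".
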
## Useful Links π

- AiCore [Repo](https://github.com/BrunoV21/AiCore): my custom wrapper around multiple providers, which unifies streaming, JSON outputs, and reasoning integration.

## Features

**Supports Multiple LLM Providers**
- OpenAI
- Mistral
- Groq (including DeepSeek R1 Distill LLaMA-70B for reasoning)
- Gemini
- and more!

**Reasoner Integration**
- Uses an additional (reasoning) LLM to enhance the main model's reasoning capabilities
- Support for DeepSeek R1 Distill LLaMA-70B (Groq-hosted) and DeepSeek R1 (Nvidia-hosted)

**Conversation History**
- For context, the conversation history (up to the most recent 4096 tokens) is preserved and passed to the LLMs

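That sliding-window behaviour can be sketched as follows. The 4096-token budget comes from the source; the whitespace "tokenizer" and the message shape are simplifying assumptions (a real app would count tokens with the provider's tokenizer):

```python
def trim_history(messages, max_tokens=4096):
    """Keep the most recent messages whose combined (approximate) token
    count fits the budget; older messages are dropped first."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        cost = len(msg["content"].split())  # crude whitespace "token" count
        if used + cost > max_tokens:
            break                           # budget exhausted: drop the rest
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

history = [
    {"role": "user", "content": "word " * 3000},       # oldest, gets dropped
    {"role": "assistant", "content": "word " * 2000},
    {"role": "user", "content": "latest question"},    # newest, always kept
]
print(len(trim_history(history)))  # → 2: only the newest two messages fit
```

Trimming from the oldest side keeps the most recent turns intact, which is usually what matters for follow-up questions.
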
## User Interface

**Settings**
- Provider and Model selection
- Reasoner Option and Model selection
- Support for System Prompt and Reasoner System Prompt specification

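One plausible way the two system prompts could be wired into a request (illustrative only; the function and message layout are assumptions, not AiCore's actual API):

```python
def build_messages(system_prompt, reasoner_trace, history, user_msg):
    """Assemble a chat request: the user's system prompt first, then the
    reasoner's trace as additional system context, then prior turns."""
    messages = [{"role": "system", "content": system_prompt}]
    if reasoner_trace:  # the reasoner is optional (the "Reasoner Option")
        messages.append({
            "role": "system",
            "content": f"Reasoning trace:\n{reasoner_trace}",
        })
    messages.extend(history)
    messages.append({"role": "user", "content": user_msg})
    return messages

msgs = build_messages("You are helpful.", "Step 1: ...", [], "Hi!")
print([m["role"] for m in msgs])  # → ['system', 'system', 'user']
```
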
**Profiles**
- Reasoner4All uses a set of pre-set API keys for demo purposes
- The OpenAI profile lets you connect to your own models, with support for supplying an API key if required *(don't worry, I am not logging your keys anywhere, but you can check the source code of this Space to be sure :)*

**Chat**
- Reasoning steps appear inside a collapsible step that can be expanded to visualize the thought process