---
title: LLMates
emoji: 🤖
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: "5.31.0"
app_file: app.py
pinned: false
---

# 🤖 LLMates – Chat with Custom AI Personas

LLMates is a minimal, modular chatbot application that lets you switch between different AI assistant personas, powered by OpenRouter's language models. OpenAI and local models via Ollama are also supported as alternative backends.

## 🧠 Why I Built This

LLMates is the first step in my return to serious, grounded building in the AI space.

I’m just starting out again. This isn’t the most impressive version of me or the most ambitious project I could build — in fact, it’s pretty simple. But that’s the point. I worked as a software engineer until 2023, and getting back into the groove — especially in today’s AI/ML world — has felt completely different.

With AI pair programmers and new tooling everywhere, things move fast. Sometimes too fast. When code “just works,” I catch myself thinking I didn’t really build it. It feels more like *vibe coding* than actual engineering. And that leads to impostor syndrome.

So I decided to reset.

LLMates is my first intentionally small project — built using an AI-first development workflow in Windsurf (after experimenting with VS Code + Copilot and PyCharm). The idea was to slow down, ask why things work, and take back a sense of control over my code — and to learn how to work *with* an AI pair programmer, not just let it code for me.

With this project, I wanted to:
- Get back to core engineering habits
- Learn how modern LLM tooling actually fits together — APIs, UI, deployment
- Practice working with AI pair programmers in a more intentional way
- Start small and build toward deeper, lower-level projects with confidence

> “Anything worth doing is worth doing poorly.”

This is my way of showing up, learning by doing, and giving myself permission to start small.

This marks **Week 1** of a weekly cadence of AI/ML projects — not just to ship, but to *really understand* what I build.

## 🚀 Features

- Multiple AI personas with distinct personalities and expertise
- Primary support for OpenRouter's wide range of models (including GPT-4, Claude, and more)
- Simple and intuitive Gradio-based web interface
- Easy configuration through environment variables
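
Since OpenRouter exposes an OpenAI-compatible chat-completions API, a request to it can be assembled as a plain OpenAI-style payload. Below is a minimal sketch of that idea; the function name and prompt text are illustrative, not taken from the app's source:

```python
import json

# OpenRouter's OpenAI-compatible endpoint (requests are POSTed here with
# an "Authorization: Bearer <OPENROUTER_API_KEY>" header).
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(model, system_prompt, history, user_message, temperature=0.7):
    """Assemble an OpenAI-style chat-completions payload for OpenRouter."""
    messages = [{"role": "system", "content": system_prompt}]
    messages += history  # prior [{"role": ..., "content": ...}] turns
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "messages": messages, "temperature": temperature}

payload = build_chat_request(
    "meta-llama/llama-3-70b-instruct",
    "You are an encouraging motivational coach.",
    [],
    "I want to restart my coding habit.",
)
print(json.dumps(payload, indent=2))
```

The same payload shape works against the OpenAI API directly, which is what makes swapping backends cheap.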

## 💡 Available Personas

- **Python Tutor**: Get help with Python programming concepts and debugging
- **Regex Helper**: Expert assistance with regular expressions
- **Motivational Coach**: Encouraging and inspiring conversations (default)
- **Startup Advisor**: Practical advice for startups and entrepreneurship
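
A persona here is essentially a named system prompt. The sketch below shows one plausible way to model that — the prompt wording and helper name are hypothetical, not copied from the app:

```python
# Hypothetical sketch: each persona maps to a system prompt.
PERSONAS = {
    "Python Tutor": "You are a patient Python tutor. Explain concepts step by step and help debug code.",
    "Regex Helper": "You are an expert in regular expressions. Explain patterns and suggest fixes.",
    "Motivational Coach": "You are an encouraging motivational coach. Keep responses upbeat and actionable.",
    "Startup Advisor": "You give practical, grounded advice on startups and entrepreneurship.",
}
DEFAULT_PERSONA = "Motivational Coach"

def system_prompt_for(persona_name):
    """Look up a persona's system prompt, falling back to the default."""
    return PERSONAS.get(persona_name, PERSONAS[DEFAULT_PERSONA])

print(system_prompt_for("Regex Helper"))
```

Switching personas then just means swapping the system message at the start of the conversation.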

## 🛠️ Installation

1. Clone the repository:
   ```bash
   git clone https://github.com/sanchitv7/llmates.git
   cd llmates
   ```

2. Install the required dependencies:
   ```bash
   pip install -r requirements.txt
   ```

3. Create a `.env` file and configure your settings (see the Configuration section below).

## ⚙️ Configuration

Copy the example `.env` file and update it with your preferred settings:

```env
# Choose one of the following backends

## OpenRouter
USE_OPENROUTER=true
OPENROUTER_API_KEY=your_openrouter_api_key
OPENROUTER_MODEL=meta-llama/llama-3-70b-instruct  # or openai/gpt-4, anthropic/claude-3-opus, google/gemini-pro, etc.

## Ollama
# USE_OLLAMA=false
# OLLAMA_MODEL=llama3

## OpenAI
# OPENAI_API_KEY=your_openai_api_key
# OPENAI_MODEL=gpt-4o

# Application Settings
DEFAULT_PERSONA="Motivational Coach"
TEMPERATURE=0.7
MAX_TURNS=10
```
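
As a rough sketch of how these settings might be consumed (the app presumably loads `.env` into the environment via python-dotenv first; the helper names and defaults below are illustrative):

```python
import os

def load_settings():
    """Read app settings from the environment, with sensible fallbacks."""
    return {
        "use_openrouter": os.getenv("USE_OPENROUTER", "false").lower() == "true",
        "model": os.getenv("OPENROUTER_MODEL", "meta-llama/llama-3-70b-instruct"),
        "default_persona": os.getenv("DEFAULT_PERSONA", "Motivational Coach"),
        "temperature": float(os.getenv("TEMPERATURE", "0.7")),
        "max_turns": int(os.getenv("MAX_TURNS", "10")),
    }

def trim_history(history, max_turns):
    """Keep only the most recent max_turns (user, assistant) message pairs."""
    return history[-2 * max_turns:]

os.environ["TEMPERATURE"] = "0.5"  # simulate a value set in .env
settings = load_settings()
print(settings["temperature"])
```

`MAX_TURNS` caps how much conversation history is sent with each request, which keeps token usage bounded.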

## 🚀 Running the Application

Start the application with:

```bash
python app.py
```

The application will start a local web server, and you can access it in your browser at `http://localhost:7860`.

## 🛠️ Tech Stack

- **UI**: Gradio
- **Primary Backend**: OpenRouter (with support for 100+ models)
- **Alternative Backends**: OpenAI API, Ollama
- **Language**: Python 3.8+
- **Configuration**: Environment variables via python-dotenv
- **Dependencies**: openai, gradio, python-dotenv, requests