---
title: PIPS Demo
colorFrom: blue
colorTo: green
sdk: gradio
app_file: src/pips/gradio_app.py
pinned: true
---

# PIPS: Python Iterative Problem Solving

**PIPS** (Python Iterative Problem Solving) is a library for iterative code generation and refinement using Large Language Models (LLMs). It provides both a programmatic API and a web interface for solving complex problems through iterative reasoning and code execution.

Paper: [arXiv:2510.22849](https://arxiv.org/abs/2510.22849)

## Installation

### From PyPI (when available)

```bash
pip install pips-solver
```

### From Source

```bash
git clone <repository-url>
cd pips
pip install -e .
```

### With Optional Dependencies

```bash
# For web interface
pip install pips-solver[web]

# For development
pip install pips-solver[dev]

# All optional dependencies
pip install pips-solver[all]
```

## Quick Start

### 1. Command Line Interface

Start the web interface:

```bash
pips
# or
python -m pips

# Custom host and port
pips --host 127.0.0.1 --port 5000 --debug
```

### 2. Programmatic Usage

```python
from pips import PIPSSolver, get_model
from pips.utils import RawInput

# Initialize a model
model = get_model("gpt-4o", api_key="your-openai-api-key")

# Create solver
solver = PIPSSolver(
    model=model,
    max_iterations=8,
    temperature=0.0
)

# Solve a problem
problem = RawInput(
    text_input="What is the sum of the first 10 prime numbers?",
    image_input=None
)

# Chain-of-thought solving
answer, logs = solver.solve_chain_of_thought(problem)
print(f"Answer: {answer}")

# Code-based solving
answer, logs = solver.solve_with_code(problem)
print(f"Answer: {answer}")
```

### 3. Streaming Usage

```python
def on_token(token, iteration, model_name):
    print(f"Token: {token}", end="", flush=True)

def on_step(step, message, **kwargs):
    print(f"Step {step}: {message}")

callbacks = {
    "on_llm_streaming_token": on_token,
    "on_step_update": on_step
}

# Solve with streaming
answer, logs = solver.solve_with_code(
    problem,
    stream=True,
    callbacks=callbacks
)
```

## Supported Models

### OpenAI Models

- GPT-4o, GPT-4o-mini
- GPT-4, GPT-4-turbo
- GPT-3.5-turbo
- o1-preview, o1-mini
- o3-mini (when available)

### Anthropic Models

- Claude-3.5-sonnet
- Claude-3-opus, Claude-3-sonnet, Claude-3-haiku
- Claude-2.1, Claude-2.0

### Google Models

- Gemini-2.0-flash-exp
- Gemini-1.5-pro, Gemini-1.5-flash
- Gemini-1.0-pro

## API Reference

### PIPSSolver

The main solver class for iterative problem solving.

```python
PIPSSolver(
    model: LLMModel,
    max_iterations: int = 8,
    temperature: float = 0.0,
    max_tokens: int = 4096,
    top_p: float = 1.0
)
```

#### Methods

- `solve_chain_of_thought(sample, stream=False, callbacks=None)`: Solve using chain-of-thought reasoning
- `solve_with_code(sample, stream=False, callbacks=None)`: Solve using iterative code generation

### Model Factory

```python
from pips import get_model

# Get a model instance
model = get_model(model_name, api_key=None)
```

### Utilities

```python
from PIL import Image  # Pillow; only needed for image input
from pips.utils import RawInput, img2base64, base642img

# Create input with text and optional image
input_data = RawInput(
    text_input="Your question here",
    image_input=Image.open("image.jpg")  # Optional
)
```

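The `img2base64` / `base642img` helpers are not demonstrated above. Assuming they follow the usual image-to-base64 convention, a rough stdlib-only sketch of the byte-level part looks like this (function names here are illustrative, not the library's, and the PIL decode/encode step is omitted):

```python
import base64

def encode_image_bytes(raw: bytes) -> str:
    """Encode raw image bytes as a base64 string (roughly what img2base64 does)."""
    return base64.b64encode(raw).decode("ascii")

def decode_image_bytes(encoded: str) -> bytes:
    """Invert encode_image_bytes (roughly what base642img does, minus PIL decoding)."""
    return base64.b64decode(encoded)

# Round-trip check on stand-in image data (PNG magic bytes)
payload = b"\x89PNG\r\n\x1a\n"
assert decode_image_bytes(encode_image_bytes(payload)) == payload
```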
## Configuration

### Environment Variables

Set your API keys as environment variables:

```bash
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
```

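A common pattern (and presumably what `get_model` does when `api_key` is omitted — an assumption, not confirmed here) is to prefer an explicitly passed key and fall back to the environment. A hypothetical helper, not part of the pips API:

```python
import os

def resolve_api_key(explicit_key=None, env_var="OPENAI_API_KEY"):
    """Prefer an explicitly passed key; otherwise fall back to the environment.
    Hypothetical helper for illustration -- not part of the pips API."""
    key = explicit_key or os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"No API key found: pass api_key or set {env_var}")
    return key
```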
### Web Interface Settings

The web interface allows you to configure:

- Model selection
- API keys
- Solving mode (chain-of-thought vs. code)
- Temperature, max tokens, iterations
- Code execution timeout

## Examples

### Mathematical Problem

```python
problem = RawInput(
    text_input="Find the derivative of f(x) = x^3 + 2x^2 - 5x + 1",
    image_input=None
)
answer, logs = solver.solve_with_code(problem)
```

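For a problem like this, `solve_with_code` has the model write and execute a short program. The generated code might resemble this sketch (purely illustrative — the actual output depends on the model):

```python
def polynomial_derivative(coeffs):
    """Differentiate a polynomial given coefficients in descending-degree order."""
    degree = len(coeffs) - 1
    return [c * (degree - i) for i, c in enumerate(coeffs[:-1])]

# f(x) = x^3 + 2x^2 - 5x + 1  ->  f'(x) = 3x^2 + 4x - 5
print(polynomial_derivative([1, 2, -5, 1]))  # [3, 4, -5]
```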
### Image-Based Problem

```python
from PIL import Image

image = Image.open("chart.png")
problem = RawInput(
    text_input="What is the trend shown in this chart?",
    image_input=image
)
answer, logs = solver.solve_chain_of_thought(problem)
```

### Multi-Step Reasoning

```python
problem = RawInput(
    text_input="""
    A company has 3 departments with 10, 15, and 20 employees respectively.
    If they want to form a committee with 2 people from each department,
    how many different committees are possible?
    """,
    image_input=None
)
answer, logs = solver.solve_with_code(problem)
```

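The committee count follows from independent choices per department, so code the solver generates for this problem would likely reduce to a product of binomial coefficients, along these lines (illustrative sketch):

```python
from math import comb

# Choose 2 from each department independently, then multiply the counts
committees = comb(10, 2) * comb(15, 2) * comb(20, 2)
print(committees)  # 45 * 105 * 190 = 897750
```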
## Web Interface

The web interface provides:

- **Problem Input**: Text area with optional image upload
- **Model Selection**: Choose from available LLM providers
- **Settings Panel**: Configure solving parameters
- **Real-time Streaming**: Watch the AI solve problems step by step
- **Chat History**: Review previous solutions
- **Export Options**: Download chat logs and solutions

## Session Management

PIPS includes comprehensive session management:

### Automatic Session Loading

- **First Launch**: Automatically loads curated example sessions demonstrating PIPS capabilities
- **Persistent Storage**: All sessions are saved in browser localStorage and persist across visits
- **Smart Cleanup**: Incomplete or invalid sessions are removed automatically

### Import/Export Sessions

- **Bulk Export**: Export all sessions as a JSON file via the "Export" button
- **Individual Export**: Download a single session using the download icon next to it
- **Import Sessions**: Import previously exported session files via the "Import" button
- **Duplicate Detection**: Duplicate sessions are detected and handled automatically during import

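How duplicates are resolved is not specified here. One plausible policy, sketched in Python for illustration (the actual UI logic lives in the browser), keeps whichever copy was used most recently:

```python
def merge_sessions(existing: dict, imported: dict) -> dict:
    """Merge imported sessions into existing ones, keeping the copy with the
    newer lastUsed timestamp when the same session id appears in both.
    Hypothetical sketch of a duplicate-handling policy, not the app's code."""
    merged = dict(existing)
    for sid, session in imported.items():
        current = merged.get(sid)
        # ISO-8601 timestamps compare correctly as plain strings
        if current is None or session["lastUsed"] > current["lastUsed"]:
            merged[sid] = session
    return merged
```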
### Session Format

Sessions are exported in a portable JSON format:

```json
{
  "exportDate": "2024-01-15T10:00:00.000Z",
  "sessions": {
    "session_id": {
      "id": "session_id",
      "title": "Session title",
      "problemText": "Original problem description",
      "image": "base64_image_data_or_null",
      "createdAt": "2024-01-15T09:00:00.000Z",
      "lastUsed": "2024-01-15T09:15:00.000Z",
      "status": "completed|interrupted|solving|active",
      "chatHistory": [
        {
          "sender": "PIPS|AI Assistant|User",
          "content": "Message content",
          "iteration": "Iteration 1",
          "timestamp": "2024-01-15T09:01:00.000Z"
        }
      ]
    }
  }
}
```

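Reading such an export back in Python is straightforward. The sketch below also drops entries missing required fields, in the spirit of the "smart cleanup" described above (the exact required-field set is assumed from the example, not documented):

```python
import json

REQUIRED_FIELDS = {"id", "title", "problemText", "createdAt",
                   "lastUsed", "status", "chatHistory"}

def load_session_export(payload: str) -> dict:
    """Parse an exported session file, keeping only well-formed entries."""
    data = json.loads(payload)
    return {
        sid: session
        for sid, session in data.get("sessions", {}).items()
        if REQUIRED_FIELDS.issubset(session)
    }
```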
### Session States

- **Active**: New sessions where users can input problems
- **Solving**: Sessions currently being processed by PIPS
- **Completed**: Successfully finished sessions (read-only)
- **Interrupted**: Sessions stopped by the user or by an error (read-only)

## Development

### Set Up a Development Environment

```bash
git clone <repository-url>
cd pips
pip install -e .[dev]
```

### Running Tests

```bash
pytest
pytest --cov=pips  # With coverage
```

### Linting and Formatting

```bash
black pips/
isort pips/
flake8 pips/
mypy pips/
```

## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## License

This project is licensed under the MIT License; see the LICENSE file for details.

## Acknowledgments

- OpenAI for GPT models
- Anthropic for Claude models
- Google for Gemini models
- The Flask and SocketIO communities