Update README.md
README.md
CHANGED
@@ -1,140 +1,10 @@

The system implements multiple specialized agents, each with a specific role in the scientific research process:

1. **Base Agent** - Parent class for all agents, handles core LLM interactions
2. **Generation Agent** - Generates initial scientific hypotheses from multiple perspectives
3. **Reflection Agent** - Acts as a peer reviewer, critically evaluating hypotheses
4. **Ranking Agent** - Compares and scores hypotheses using tournament-style evaluation
5. **Evolution Agent** - Refines and improves promising hypotheses
6. **Proximity Agent** - Ensures hypotheses remain relevant to research goals
7. **Meta-Review Agent** - Synthesizes research findings into comprehensive reports
8. **Supervisor Agent** - Coordinates the entire multi-agent system workflow
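
The hierarchy above can be sketched as a small class family. This is an illustrative sketch only: the class names, method names, and defaults below are assumptions, not the repository's actual API.

```python
# Illustrative sketch of the agent hierarchy described above.
# All names and defaults here are hypothetical, not the repository's actual API.

class BaseAgent:
    """Parent class for all agents; wraps core LLM interactions."""

    def __init__(self, model="gpt-4o", temperature=0.4):
        self.model = model
        self.temperature = temperature

    def call_llm(self, prompt):
        # Placeholder for the real LLM call (e.g., via the OpenAI client).
        raise NotImplementedError


class GenerationAgent(BaseAgent):
    """Generates initial hypotheses for a research goal."""

    def generate(self, research_goal):
        return [self.call_llm(f"Propose a hypothesis for: {research_goal}")]


class ReflectionAgent(BaseAgent):
    """Acts as a peer reviewer, critiquing a single hypothesis."""

    def review(self, hypothesis):
        return self.call_llm(f"Critically review: {hypothesis}")
```

The remaining agents (Ranking, Evolution, Proximity, Meta-Review, Supervisor) would follow the same pattern, each subclassing `BaseAgent` with its own task-specific method.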

## Getting Started

### Prerequisites

- Python 3.8 or higher
- OpenAI API key

### Installation

1. Clone the repository:

```bash
git clone https://github.com/yourusername/ai-coscientist.git
cd ai-coscientist
```

2. Create and activate a virtual environment:

```bash
python -m venv venv
# On Windows
venv\Scripts\activate
# On macOS/Linux
source venv/bin/activate
```

3. Install dependencies:

```bash
pip install -r requirements.txt
```

4. Create a `.env` file in the project root directory and add your OpenAI API key:

```
OPENAI_API_KEY=your_api_key_here
```
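
At runtime the application reads this key from the environment. A minimal sketch of how such a key is typically picked up is shown below; the actual loading code in this repository may differ (for example, it may populate `os.environ` from `.env` via a library such as python-dotenv before this point).

```python
import os

# Read the API key from the environment. A .env loader (an assumption about
# this repo, e.g. python-dotenv) would populate os.environ from the file first.
api_key = os.environ.get("OPENAI_API_KEY", "")
if not api_key:
    print("Warning: OPENAI_API_KEY is not set; add it to your .env file")
```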

### Usage

#### Command Line Interface

Run the system from the command line:

```bash
python -m src.main --goal "To investigate the relationship between microbiome diversity and autoimmune disorders in urban populations" --iterations 3 --output ./results
```

Options:

- `--goal` or `-g`: Research goal to pursue
- `--model` or `-m`: LLM model to use (default: GPT-4o)
- `--temp` or `-t`: Temperature for LLM generation (default: 0.4)
- `--iterations` or `-i`: Number of refinement iterations (default: 3)
- `--output` or `-o`: Output directory for results
- `--verbose` or `-v`: Enable verbose logging

#### Python API

```python
from src import AICoScientist

# Initialize the system
acs = AICoScientist()

# Run the full workflow
results = acs.run_full_workflow(
    research_goal="To investigate the relationship between microbiome diversity and autoimmune disorders in urban populations",
    iterations=3,
    output_dir="./results"
)

# Access results
top_hypothesis = results["hypotheses"][0]["hypothesis"]
print(f"Top hypothesis: {top_hypothesis}")
print(f"Executive summary: {results['report']['executive_summary']}")
```

## Output



## Project Structure

```
ACS - AI CoScientist/
├── .env               # Environment variables (create this file)
├── requirements.txt   # Project dependencies
├── README.md          # Project documentation
└── src/               # Source code
    ├── __init__.py    # Package initialization
    ├── main.py        # Main application entry point
    ├── agents/        # Agent implementations
    ├── tools/         # Tool implementations
    ├── utils/         # Utility functions
    └── config/        # Configuration files
```

## 🔧 Configuration

The system's behavior can be configured through parameters in `src/config/config.py` or by passing a custom configuration dictionary to the `AICoScientist` constructor.

Key configuration options:

- `AGENT_DEFAULT_MODEL`: Default LLM model to use (e.g., "gpt-4o-2024-05-13")
- `AGENT_DEFAULT_TEMPERATURE`: Default temperature for LLM generation
- `MAX_ITERATIONS`: Maximum number of iterations for hypothesis refinement
- `MAX_TOKENS`: Maximum number of tokens for LLM responses
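
A custom configuration might look like the sketch below. The key names mirror the options listed above, but the exact dictionary shape and the constructor keyword are assumptions; check `src/config/config.py` for the authoritative definitions.

```python
# Hypothetical configuration override; key names mirror the options above.
# The `config=` keyword on AICoScientist is an assumption -- verify in the source.
custom_config = {
    "AGENT_DEFAULT_MODEL": "gpt-4o-2024-05-13",  # model used by all agents
    "AGENT_DEFAULT_TEMPERATURE": 0.2,            # lower = more deterministic output
    "MAX_ITERATIONS": 5,                         # allow extra refinement cycles
    "MAX_TOKENS": 4096,                          # cap the length of LLM responses
}

# acs = AICoScientist(config=custom_config)
```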

## Features

- **Multi-Agent Collaboration**: Specialized agents working together to generate and refine scientific hypotheses
- **Iterative Refinement**: Hypotheses are continuously improved through multiple refinement cycles
- **Quality Evaluation**: Rigorous evaluation of hypotheses for scientific validity and relevance
- **Comprehensive Reporting**: Detailed research reports with executive summaries and ranked hypotheses
- **Tool Integration**: Scientific search, reasoning, and citation tools to enhance the research process

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgements

- Inspired by the paper "Towards an AI Co-Scientist"
- Built with OpenAI's LLM technologies
- Leverages the LangChain framework

---
title: AI Co-Scientist
emoji: 🔬
colorFrom: blue
colorTo: green
sdk: gradio # Or streamlit if run_ui.py uses Streamlit
app_file: run_ui.py # Tells Spaces to run this file
pinned: false
# python_version: 3.9 # Optional: specify if needed; the repo requires Python 3.8+
---