OnyxlMunkey committed on
Commit cddcab0 · 1 Parent(s): c504ae7

Initial Docker Space setup for AutoGPT

Files changed (4)
  1. Dockerfile +29 -0
  2. README.md +32 -6
  3. app.py +116 -0
  4. requirements.txt +12 -0
Dockerfile ADDED
@@ -0,0 +1,29 @@
+ # Dockerfile for AutoGPT Hugging Face Space
+ FROM python:3.11-slim
+
+ # Set working directory
+ WORKDIR /app
+
+ # Install system dependencies
+ RUN apt-get update && apt-get install -y \
+     git \
+     curl \
+     build-essential \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Copy requirements first for better layer caching
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy application code
+ COPY . .
+
+ # Expose port (Hugging Face Spaces use port 7860 by default)
+ EXPOSE 7860
+
+ # Set environment variables
+ ENV PYTHONUNBUFFERED=1
+ ENV PORT=7860
+
+ # Run the application
+ CMD ["python", "app.py"]
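With this Dockerfile the Space can be exercised locally before pushing; a minimal sketch, assuming Docker is installed and using the illustrative image tag `autogpt-space`:

```shell
# Build the image from the repository root (tag name is illustrative)
docker build -t autogpt-space .

# Run it, publishing the port the Space expects (7860)
docker run --rm -p 7860:7860 autogpt-space
```

The app should then be reachable at http://localhost:7860, matching the `EXPOSE`/`PORT` settings above.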
README.md CHANGED
@@ -1,11 +1,37 @@
  ---
- title: Autogpt Space
- emoji: 👀
- colorFrom: indigo
- colorTo: red
+ title: AutoGPT Space
+ emoji: 🤖
+ colorFrom: blue
+ colorTo: purple
  sdk: docker
  pinned: false
- short_description: An autonomous AI agent powered by AutoGPT
+ license: mit
+ app_port: 7860
  ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+
+ # AutoGPT Space
+
+ An autonomous AI agent powered by AutoGPT, deployed as a Hugging Face Space using Docker.
+
+ ## Features
+
+ - 🤖 Autonomous task execution
+ - 🔄 Iterative problem-solving
+ - 🎛️ Configurable parameters
+ - 🌐 Web-based interface
+
+ ## Usage
+
+ 1. Enter a task description in the text box
+ 2. Adjust max iterations and temperature if needed
+ 3. Click "Run AutoGPT" to start execution
+ 4. View results in the output panel
+
+ ## Configuration
+
+ - **Max Iterations**: Controls how many steps AutoGPT will take
+ - **Temperature**: Controls the randomness of LLM responses
+
+ ## License
+
+ MIT License
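The two configuration knobs correspond to the slider ranges in `app.py` (1-20 iterations, 0.0-1.0 temperature). A small, purely illustrative helper (not part of the committed code) shows how callers bypassing the UI could clamp values to the same ranges:

```python
def clamp_params(max_iterations: int, temperature: float) -> tuple[int, float]:
    """Clamp caller-supplied values to the ranges the UI sliders expose.

    The bounds (1-20 iterations, 0.0-1.0 temperature) mirror the
    gr.Slider settings in app.py; this helper itself is a sketch.
    """
    max_iterations = max(1, min(20, int(max_iterations)))
    temperature = max(0.0, min(1.0, float(temperature)))
    return max_iterations, temperature
```

Values inside the ranges pass through unchanged; out-of-range values are pinned to the nearest bound.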
app.py ADDED
@@ -0,0 +1,116 @@
+ """
+ AutoGPT Hugging Face Space Entry Point
+ Main application file for the Docker Space
+ """
+ import os
+
+ import gradio as gr
+
+ # Initialize AutoGPT components here
+ # This is a placeholder - adjust based on your actual AutoGPT implementation
+
+ def autogpt_interface(
+     task: str,
+     max_iterations: int = 5,
+     temperature: float = 0.7
+ ) -> str:
+     """
+     Main AutoGPT interface function
+
+     Args:
+         task: The task description for AutoGPT
+         max_iterations: Maximum number of iterations
+         temperature: Temperature for LLM generation
+
+     Returns:
+         Result string from AutoGPT execution
+     """
+     try:
+         # Placeholder for AutoGPT execution
+         # Replace this with actual AutoGPT integration
+         result = f"""
+ AutoGPT Task Execution:
+ Task: {task}
+ Max Iterations: {max_iterations}
+ Temperature: {temperature}
+
+ [AutoGPT execution would happen here]
+ This is a placeholder implementation.
+ """
+         return result.strip()
+     except Exception as e:
+         return f"Error: {e}"
+
+ # Create Gradio interface
+ def create_interface():
+     """Create and return the Gradio interface"""
+     with gr.Blocks(title="AutoGPT Space", theme=gr.themes.Soft()) as demo:
+         gr.Markdown(
+             """
+             # 🤖 AutoGPT Space
+
+             An autonomous AI agent powered by AutoGPT. Enter a task and let AutoGPT work on it autonomously.
+             """
+         )
+
+         with gr.Row():
+             with gr.Column():
+                 task_input = gr.Textbox(
+                     label="Task Description",
+                     placeholder="Enter the task you want AutoGPT to accomplish...",
+                     lines=3
+                 )
+
+                 with gr.Row():
+                     max_iter = gr.Slider(
+                         minimum=1,
+                         maximum=20,
+                         value=5,
+                         step=1,
+                         label="Max Iterations"
+                     )
+                     temperature = gr.Slider(
+                         minimum=0.0,
+                         maximum=1.0,
+                         value=0.7,
+                         step=0.1,
+                         label="Temperature"
+                     )
+
+                 submit_btn = gr.Button("Run AutoGPT", variant="primary")
+
+             with gr.Column():
+                 output = gr.Textbox(
+                     label="Result",
+                     lines=10,
+                     interactive=False
+                 )
+
+         submit_btn.click(
+             fn=autogpt_interface,
+             inputs=[task_input, max_iter, temperature],
+             outputs=output
+         )
+
+         gr.Examples(
+             examples=[
+                 ["Research and summarize the latest developments in AI"],
+                 ["Create a Python script to analyze stock market data"],
+                 ["Write a blog post about sustainable energy solutions"]
+             ],
+             inputs=task_input
+         )
+
+     return demo
+
+ if __name__ == "__main__":
+     # Get port from environment (Hugging Face Spaces sets this)
+     port = int(os.environ.get("PORT", 7860))
+
+     # Launch the interface
+     demo = create_interface()
+     demo.launch(
+         server_name="0.0.0.0",
+         server_port=port,
+         share=False
+     )
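Since `autogpt_interface` is still a placeholder, one way to prepare it for a real backend is dependency injection. The sketch below is an assumption, not AutoGPT's actual API: the `backend` callable, its signature, and the `DONE` stop marker are all illustrative.

```python
from typing import Callable, Optional

def run_task(
    task: str,
    max_iterations: int = 5,
    temperature: float = 0.7,
    backend: Optional[Callable[[str, float], str]] = None,
) -> str:
    """Run up to max_iterations agent steps via an injected LLM backend.

    `backend` is a hypothetical stand-in for the real AutoGPT/LLM call;
    when absent, the function degrades to placeholder output, mirroring
    the Space's current behaviour.
    """
    if backend is None:
        return f"[placeholder] Task: {task} (no backend configured)"
    try:
        steps = []
        for i in range(max_iterations):
            reply = backend(task, temperature)
            steps.append(f"Step {i + 1}: {reply}")
            if "DONE" in reply:  # illustrative stop condition
                break
        return "\n".join(steps)
    except Exception as e:
        return f"Error: {e}"
```

A fake backend (e.g. `lambda task, temp: "DONE"`) makes the loop unit-testable without an API key, which is hard to do when the LLM call is hard-coded inside the Gradio handler.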
requirements.txt ADDED
@@ -0,0 +1,12 @@
+ # Core dependencies
+ gradio>=4.0.0
+ python-dotenv>=1.0.0
+
+ # AutoGPT dependencies (adjust based on actual requirements)
+ # openai>=1.0.0
+ # langchain>=0.1.0
+ # tiktoken>=0.5.0
+
+ # Additional utilities
+ requests>=2.31.0
+ pydantic>=2.0.0