Zekai-Chen committed
Commit 364742e · verified · 1 Parent(s): 5592f72

Update README.md

Files changed (1): README.md (+122 −61)
README.md CHANGED
@@ -1,110 +1,171 @@
  ---
- title: AI Agent Systems Competition
- emoji: 🏢
  colorFrom: blue
- colorTo: green
  sdk: gradio
  pinned: true
- sdk_version: 6.5.1
  ---

- # AI Agent Systems Competition (Founding Team Track)

- We are running a builder-focused competition to identify exceptional engineers who can design and ship **reliable AI agent systems**.
- Top performers will be invited to join our founding team / execution center.

  ---

- ## 🎯 What to build

- Create an **AI agent system** that solves a real business execution workflow.

- Choose one track (or propose your own):

- ### Track A — Sales Follow-Through Agent
- An agent that turns inbound/outbound leads into scheduled calls and updated CRM states (with follow-ups, retries, and escalation).

  ### Track B — Recruiting Coordination Agent
- An agent that coordinates candidates + interviewers across scheduling, reminders, status updates, and handoff to humans when needed.

- ### Track C — Open Track (Execution Agent)
- Any execution workflow that has clear ownership + measurable outcomes (support escalation, onboarding, ops tracking, etc.).

  ---

- ## Submission requirements (must have)

- Submit **one** entry including:

- 1) **Code repo (required)**
- - GitHub or Hugging Face repo link
- - Must include: `README`, setup steps, and system architecture description

- 2) **Demo video (required, 3–5 minutes)**
- Your video must show:
- - A live run of the agent/system
- - A walkthrough of system components
- - At least **one failure case** and how your system handles it

- 3) **Short write-up (required, 1–2 pages OR README section)**
- Include:
- - Problem definition & assumptions
- - Architecture diagram (can be simple)
- - Reliability & failure handling strategy
- - Metrics you track (and why)

  ---

- ## 🧪 Evaluation rubric

- We evaluate systems based on:

- - **Reliability & failure handling** (timeouts, retries, fallbacks, human handoff)
- - **Execution logic & ownership** (clear workflow/state, no “prompt-only” demo)
- - **Real-world usability** (deployable mindset, clear interfaces, readable code)
- - **Outcome focus** (metrics, success criteria, measurable impact)

- > Flashy UI is not required. We care about systems that can run.

  ---

- ## 🗓️ Timeline

- - Registration deadline: **[YYYY-MM-DD]**
- - Submission deadline: **[YYYY-MM-DD]**
- - Final demos / interviews: **[YYYY-MM-DD]**
- - Winner announcements: **[YYYY-MM-DD]**

- (All times: Pacific Time)

  ---

- ## 📮 How to submit

- Fill out the submission form:

- - **Submission Form:** [PASTE YOUR GOOGLE FORM LINK HERE]

- What you will submit in the form:
- - Name / email
- - Track
- - Repo link (GitHub or HF)
- - Demo video link (YouTube / Drive / Loom)
- - Optional: short note (what you’re proud of / what’s incomplete)

  ---

- ## 📌 Rules

- - Individuals or teams (up to **[N]** people)
- - You may use AI tools during development
- - Submissions must be original work (you can use open-source libraries with proper attribution)
- - We may showcase your demo publicly (only with your permission — specify in the form)

  ---

- ## 🙌 Contact

- Questions? Post in **Discussions** or email: **[your-email]**

- Good luck — we’re excited to see what you build.

  ---
+ title: Do You Want to Build AI Agents That Run Real Businesses?
+ emoji: 🚀
  colorFrom: blue
+ colorTo: purple
  sdk: gradio
  pinned: true
  ---

+ # AI Agent Execution Competition Participation Guide

+ **Duration:** 3-Week Build Phase + 2-Week Review Phase
+ **Location:** Virtual
+ **Host:** The Chinese University of Hong Kong, Shenzhen

  ---

+ ## Overview

+ The AI Agent Execution Competition is a hands-on systems challenge for builders who want to design and ship **AI agents that execute real business workflows**.

+ This competition emphasizes execution, reliability, and ownership — not flashy prototypes.

+ Participants will design and demonstrate AI agent systems that solve operational problems such as sales follow-through, recruiting coordination, or other measurable execution workflows.
+
+ There are no monetary prizes.
+
+ **Outstanding participants may be invited to join our AI systems team and collaborate on real-world agent infrastructure projects.**
+
+ This competition is about identifying builders who can turn AI into operating systems for real businesses.
+
+ ---
+
+ ## Competition Tracks
+
+ Participants may choose one track:
+
+ ### Track A — Sales Execution Agent
+
+ Build an agent that manages sales workflows:
+
+ - lead follow-through
+ - communication coordination
+ - scheduling
+ - pipeline tracking
+ - escalation handling
+
+ Goal: maximize execution reliability and workflow completion.
+
+ ---
 
52
  ### Track B — Recruiting Coordination Agent
 
53
 
54
+ Build an agent that coordinates hiring workflows:
55
+
56
+ - interview scheduling
57
+ - candidate communication
58
+ - reminders and follow-ups
59
+ - status tracking
60
+ - failure recovery
61
+
62
+ Goal: reduce operational friction and improve workflow efficiency.
63
 
64
  ---
65
 
66
+ ### Track C Open Execution Agent
67
 
68
+ Build an agent that solves any measurable execution workflow:
69
 
70
+ - customer support coordination
71
+ - onboarding automation
72
+ - operations tracking
73
+ - internal workflow management
74
 
75
+ Participants must clearly define the problem and success metrics.
 
 
 
 
76
 
77
+ ---
+
+ ## Schedule
+
+ **Competition Timeline**
+
+ - Weeks 1–3 — Build Phase
+ - Weeks 4–5 — Review Phase
+ - End of Week 5 — Winners Announced
+
+ Exact calendar dates will be posted in Discussions.
+
+ ---
+
+ ## Submission Requirements
+
+ Each participant must submit:
+
+ ### 1. System Report (Required)
+
+ A concise report (1–3 pages) describing:
+
+ - problem definition
+ - architecture design
+ - execution workflow
+ - reliability strategy
+ - evaluation metrics

  ---

+ ### 2. Demo Video (Required, 3–5 minutes)

+ The video must demonstrate:

+ - a live agent run
+ - a system walkthrough
+ - at least one failure case and the recovery behavior

+ ---
+
+ ## Evaluation Criteria
+
+ Submissions are evaluated on:
+
+ - reliability and failure handling
+ - execution logic and ownership modeling
+ - real-world usability
+ - outcome focus and measurable impact
+ - clarity of system design
+
+ This competition rewards systems that **operate reliably**, not just impressive demos.
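To make the reliability criteria concrete, here is a minimal, purely illustrative sketch of the retry/fallback/human-handoff pattern the rubric refers to. It is not a required design or API; `run_with_recovery` and every name in it are hypothetical:

```python
import time

def run_with_recovery(step, fallback=None, retries=3, base_delay=1.0):
    """Run one workflow step with retries, an optional fallback,
    and escalation to a human when automation is exhausted.
    Illustrative only -- all names here are hypothetical."""
    for attempt in range(retries):
        try:
            return step()
        except Exception:
            # transient failure: back off exponentially, then retry
            time.sleep(base_delay * 2 ** attempt)
    if fallback is not None:
        try:
            return fallback()  # degraded path, e.g. a simpler template
        except Exception:
            pass
    # automation exhausted: hand the task to a human with context
    return {"status": "escalated", "reason": "automation exhausted"}
```

A real submission would go further (timeouts, idempotency, logging), but an explicit failure path like this is what distinguishes an execution system from a happy-path-only demo.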
 
129
  ---
130
 
131
+ ## Participation Rules
132
 
133
+ - Individual participation only
134
+ - Participants may use AI tools during development
135
+ - Submissions must be original work
136
+ - External libraries and frameworks are allowed
137
+ - GPU resources are optional and not required
138
 
139
+ Joining this Hugging Face organization constitutes participation.
140
 
141
  ---
142
 
143
+ ## How to Submit
144
 
145
+ Create a Discussion post titled:
146
 
147
+ **“Competition Submission [Your Name]”**
148
 
149
+ Include:
150
+
151
+ - demo video link
152
+ - PDF report link
153
+ - short summary of your system
154
+
155
+ Submissions are reviewed manually by the evaluation team.
156
 
157
  ---
158
 
159
+ ## Community & Support
160
 
161
+ Use the Discussions tab for announcements, questions, and technical discussion.
 
 
 
162
 
163
  ---
164
 
165
+ ## Final Note
166
+
167
+ This is not a showcase competition for prototypes.
168
 
169
+ It is a search for builders who can design AI systems that execute real workflows.
170
 
171
+ We are excited to see what you build.