Dr-P committed on
Commit 4e6f9ac · verified · 1 parent: 4a5281b

Upload app (5).py

Files changed (1): app (5).py (+1347, −0)

app (5).py (added)
import os
import json
import textwrap
from datetime import datetime
from typing import Any, Dict, List, Tuple

import gradio as gr

try:
    from anthropic import Anthropic
except ImportError:
    Anthropic = None


APP_TITLE = "Business-to-Technical AI On-Ramp — Adaptive Technical Coach"
CS50_EMBED_URL = "https://www.youtube.com/embed/JP7ITIXGpHk"
DEFAULT_MODEL = os.getenv("ANTHROPIC_MODEL", "claude-sonnet-4-6")
DEFAULT_BASE_URL = os.getenv("ANTHROPIC_BASE_URL", "https://api.anthropic.com")
SERVER_API_KEY = os.getenv("ANTHROPIC_API_KEY", "")
ANTHROPIC_VERSION = os.getenv("ANTHROPIC_VERSION", "2023-06-01")


def md(text: str) -> str:
    return textwrap.dedent(text).strip()


def now_string() -> str:
    return datetime.now().strftime("%Y-%m-%d %H:%M")


def default_profile() -> Dict[str, Any]:
    return {
        "name": "Learner",
        "goal": "Become technically fluent enough to understand, scope, and discuss AI/software systems confidently.",
        "background": "Business / product / operations",
        "hours_per_week": 4,
        "track": "Business-to-Technical Foundations",
        "completed_lessons": [],
        "weak_areas": [],
        "notes": [],
        "usage_count": 0,
        "last_visit": "First visit",
        "favorite_topics": [],
    }
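The `md` helper above pairs `textwrap.dedent` with `strip`, which is what lets the lesson strings below be indented to match the surrounding code. A quick standalone sketch of that behavior (the sample text is illustrative):

```python
import textwrap

def md(text: str) -> str:
    # Remove the common leading indentation, then trim the blank first/last lines.
    return textwrap.dedent(text).strip()

lesson = md("""
    # Heading
    Body line.
""")
print(lesson)  # "# Heading\nBody line."
```

`dedent` ignores whitespace-only lines when computing the common margin, so the blank line after the opening `"""` does not defeat it.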
CURRICULUM: Dict[str, Dict[str, Any]] = {
    "Orientation": {
        "level": "Foundational",
        "objective": "Build the mental model for how software, data, and AI systems fit together.",
        "lessons": {
            "What software engineering actually is": md("""
                # What software engineering actually is

                **Plain-English view**
                Software engineering is the discipline of building software so that it works, stays understandable,
                can be improved safely, and does not collapse the moment requirements change.

                **Business analogy**
                It is the difference between a one-off spreadsheet heroics effort and a repeatable, documented,
                quality-controlled operating process.

                **Technical view**
                Software engineering is not only writing code. It also includes:
                - structuring files and modules
                - version control
                - testing
                - debugging
                - interfaces and APIs
                - deployment
                - maintenance and iteration

                **What beginners usually misunderstand**
                1. They think code = software.
                2. They underestimate maintenance.
                3. They do not realize the importance of environment consistency, interfaces, and testing.

                **What to internalize**
                Good software is code plus process plus clarity plus reliability.

                **Mini exercise**
                Take a tool you use at work and describe:
                - what inputs it needs
                - what outputs it creates
                - what could break
                - who maintains it
            """),
            "How an AI product is layered": md("""
                # How an AI product is layered

                A useful AI product usually has multiple layers:
                1. **User interface layer** — where the person interacts with the app.
                2. **Application logic layer** — rules, routing, and business logic.
                3. **Model layer** — ML model, LLM, or rules engine.
                4. **Data layer** — files, databases, APIs, vector stores, logs.
                5. **Deployment layer** — hosting, containers, CI/CD, observability.

                **Why this matters**
                A lot of business-side learners think the model is the whole product.
                It almost never is.

                **Key takeaway**
                A product is a system, not a single model call.
            """),
            "Rules vs ML vs LLM": md("""
                # Rules vs ML vs LLM

                **Rules engine**
                Use this when the logic is explicit and deterministic.
                Example: "If customer is platinum and amount < threshold, route to fast lane."

                **Classical ML**
                Use this when you have structured data and want to predict a label, class, or number.
                Example: churn prediction, fraud scoring, forecasting.

                **LLM workflow**
                Use this when the task is language-heavy or knowledge-heavy.
                Example: summarization, extraction, search, drafting, document Q&A.

                **Practical lesson**
                Not every business problem is an LLM problem.
                Many are better solved with rules first.
            """),
        },
    },
    "Python & Programming": {
        "level": "Foundational",
        "objective": "Become comfortable reading code and understanding basic logic.",
        "lessons": {
            "Reading Python without panic": md("""
                # Reading Python without panic

                When reading Python, scan in this order:
                1. Imports
                2. Functions
                3. Inputs
                4. Outputs
                5. Control flow
                6. Main app wiring

                **Common building blocks**
                - variables store values
                - functions package behavior
                - conditionals choose among branches
                - loops repeat work
                - dictionaries map keys to values
                - lists store sequences

                **Best first habit**
                Do not try to understand every character.
                First answer: what problem is this script solving?
            """),
            "Functions, inputs, outputs, and control flow": md("""
                # Functions, inputs, outputs, and control flow

                A function is a reusable unit of logic.

                ```python
                def classify_budget(amount):
                    if amount < 1000:
                        return "small"
                    elif amount < 10000:
                        return "medium"
                    return "large"
                ```

                **How to read it**
                - input: `amount`
                - internal logic: compare against thresholds
                - output: a category string

                **Why this matters**
                Many production systems are still mostly functions wrapped inside APIs or user interfaces.
            """),
            "Files, modules, and project structure": md("""
                # Files, modules, and project structure

                Beginners often put everything in one file. That works for day 1, but scales badly.

                **Typical small project structure**
                ```text
                project/
                ├── app.py
                ├── requirements.txt
                ├── README.md
                ├── utils.py
                └── tests/
                ```

                **Mental model**
                - `app.py` = main entry point
                - `utils.py` = helper logic
                - `README.md` = explanation for humans
                - `requirements.txt` = dependencies
                - `tests/` = checks that reduce breakage
            """),
        },
    },
    "APIs & Data Exchange": {
        "level": "Core",
        "objective": "Understand how software systems talk to each other.",
        "lessons": {
            "What an API is": md("""
                # What an API is

                An API is a contract for software-to-software communication.

                **Simple model**
                - client sends request
                - server receives request
                - server returns response

                **Typical API concepts**
                - endpoint
                - method / verb
                - authentication
                - request body
                - response body
                - status code

                **Why this matters**
                In production, models are often exposed behind APIs rather than being run manually inside notebooks.
            """),
            "JSON, GET, POST, and status codes": md("""
                # JSON, GET, POST, and status codes

                **JSON**
                A text format that represents structured data.

                Example:
                ```json
                {
                  "customer": "Acme",
                  "priority": "high",
                  "amount": 2500
                }
                ```

                **HTTP verbs**
                - `GET` = read
                - `POST` = create or trigger
                - `PUT/PATCH` = update
                - `DELETE` = remove

                **Status codes**
                - `200` = success
                - `400` = client error / bad request
                - `401` = unauthorized
                - `404` = not found
                - `500` = server error
            """),
            "FastAPI in plain English": md("""
                # FastAPI in plain English

                FastAPI is a Python framework for turning Python functions into web endpoints.

                **Why people like it**
                - fast to develop
                - strongly typed
                - clean request/response validation
                - auto-generated docs

                **Conceptual pattern**
                Python function -> API endpoint -> request validation -> response
            """),
        },
    },
    "Deployment & Platform": {
        "level": "Core",
        "objective": "Understand how apps move from local code to a live service.",
        "lessons": {
            "Git and GitHub": md("""
                # Git and GitHub

                Git is version control. GitHub is a hosted platform for repositories and collaboration.

                **Why it matters**
                - keeps history
                - supports branching
                - enables pull requests
                - acts as the trigger point for automation

                **Mental model**
                Git is the memory of your project.
            """),
            "Docker and containers": md("""
                # Docker and containers

                Docker helps package an app with its environment so it runs more consistently.

                **Important distinction**
                - image = blueprint
                - container = running instance

                **Why it matters**
                It reduces the classic problem: "it works on my machine."
            """),
            "CI/CD and GitHub Actions": md("""
                # CI/CD and GitHub Actions

                CI/CD automates checks and sometimes deployments when code changes.

                **Typical flow**
                push code -> run tests -> build app -> optionally deploy

                **Why this matters for a business-side learner**
                It helps you understand why engineering teams care about pipelines before release.
            """),
            "Kubernetes without the hype": md("""
                # Kubernetes without the hype

                Kubernetes is a system for orchestrating containers at scale.

                **Important advice**
                Do not start here.
                Understand scripts, APIs, Git, and containers first.

                **Words to recognize**
                - pod
                - deployment
                - service
                - ingress
                - secret
            """),
        },
    },
    "AI Product Architecture": {
        "level": "Advanced beginner",
        "objective": "Connect business problems to realistic technical architectures.",
        "lessons": {
            "From idea to architecture": md("""
                # From idea to architecture

                A useful first-pass architecture answer should name:
                - user
                - trigger
                - inputs
                - transformation steps
                - model / logic layer
                - outputs
                - storage/logging
                - deployment target

                **Practical rule**
                If you cannot explain the system in boxes and arrows, the scope is still too fuzzy.
            """),
            "MLOps at a high level": md("""
                # MLOps at a high level

                MLOps is the operational discipline around machine learning systems.

                It includes:
                - data versioning
                - experiment tracking
                - deployment
                - monitoring
                - retraining / iteration

                **Key distinction**
                A model notebook is not a production ML system.
            """),
            "LLM apps, RAG, agents, and MCP": md("""
                # LLM apps, RAG, agents, and MCP

                **LLM app**
                Uses a language model for reasoning or generation.

                **RAG**
                Retrieval-augmented generation combines retrieval of relevant context with generation.

                **Agent**
                A workflow where the model can select from tools or take multiple action steps.

                **MCP**
                A standard for exposing tools, resources, and prompts to AI applications.
            """),
        },
    },
}
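The "JSON, GET, POST, and status codes" lesson shows a small JSON object, and the `json` module is already imported at the top of the file. A minimal round-trip sketch using the lesson's own example payload:

```python
import json

# The dict literal mirrors the lesson's example object.
payload = {"customer": "Acme", "priority": "high", "amount": 2500}

# Serialize to a JSON string, then parse it back into a Python dict.
text = json.dumps(payload)
restored = json.loads(text)

print(text)
print(restored == payload)  # True
```

`dumps` and `loads` are the two calls a business-side reader will see most often when an app exchanges data with an API.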


REFERENCE_LIBRARY = md("""
    # Free references

    ## Programming and software foundations
    - CS50 Python: https://cs50.harvard.edu/python
    - CS50 YouTube channel: https://www.youtube.com/cs50
    - Pro Git: https://git-scm.com/book/en/v2

    ## AI and product building
    - Hugging Face Learn: https://huggingface.co/learn
    - Hugging Face LLM Course: https://huggingface.co/learn/llm-course/chapter1/1
    - Full Stack Deep Learning: https://fullstackdeeplearning.com/

    ## Developer docs
    - FastAPI: https://fastapi.tiangolo.com/
    - Requests: https://requests.readthedocs.io/
    - Docker Get Started: https://docs.docker.com/get-started/
    - GitHub Actions: https://docs.github.com/actions
    - Kubernetes docs: https://kubernetes.io/docs/home/
    - Model Context Protocol: https://modelcontextprotocol.io/

    ## Hugging Face / Gradio docs
    - Spaces Overview: https://huggingface.co/docs/hub/spaces-overview
    - Spaces Config Reference: https://huggingface.co/docs/hub/spaces-config-reference
    - Gradio Docs: https://www.gradio.app/docs
    - Gradio ChatInterface: https://www.gradio.app/docs/gradio/chatinterface
    - Gradio State in Blocks: https://www.gradio.app/guides/state-in-blocks
""")


QUIZ_BANK: Dict[str, Dict[str, Any]] = {
    "API Fundamentals": {
        "questions": [
            {
                "prompt": "What is the best plain-English description of an API?",
                "choices": [
                    "A machine learning model registry",
                    "A contract for software-to-software communication",
                    "A file format for Docker images",
                ],
                "answer": "A contract for software-to-software communication",
                "explanation": "APIs define how one system asks another for data or actions.",
            },
            {
                "prompt": "Which HTTP method is most associated with creating or triggering work?",
                "choices": ["GET", "POST", "DELETE"],
                "answer": "POST",
                "explanation": "POST is commonly used to submit data or create work on the server.",
            },
            {
                "prompt": "What does a 404 status code usually mean?",
                "choices": ["Unauthorized", "Server exploded", "Resource not found"],
                "answer": "Resource not found",
                "explanation": "404 indicates that the requested endpoint or resource could not be found.",
            },
        ],
    },
    "Deployment Basics": {
        "questions": [
            {
                "prompt": "What is a Docker image?",
                "choices": [
                    "A blueprint used to create a running container",
                    "A screenshot of a deployed app",
                    "A Git branch used for production",
                ],
                "answer": "A blueprint used to create a running container",
                "explanation": "An image is the packaged recipe; a container is the running instance.",
            },
            {
                "prompt": "What is the main purpose of CI/CD?",
                "choices": [
                    "To automate build, test, and deployment workflows",
                    "To compress videos for production",
                    "To store passwords in a repository",
                ],
                "answer": "To automate build, test, and deployment workflows",
                "explanation": "CI/CD reduces manual release friction and catches issues earlier.",
            },
            {
                "prompt": "What is the best beginner advice about Kubernetes?",
                "choices": [
                    "Start there before learning Git",
                    "Ignore containers and skip straight to clusters",
                    "Learn scripts, APIs, Git, and containers first",
                ],
                "answer": "Learn scripts, APIs, Git, and containers first",
                "explanation": "Kubernetes makes much more sense after foundational deployment concepts.",
            },
        ],
    },
}
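The quiz bank leans on HTTP status-code vocabulary. The standard library's `http.HTTPStatus` enum (not used by the app itself; shown purely as an illustration) confirms the canonical phrases for the codes the lessons and quizzes mention:

```python
from http import HTTPStatus

# Print the standard reason phrase for each code covered in the lessons.
for code in (200, 400, 401, 404, 500):
    status = HTTPStatus(code)
    print(code, status.phrase)
```

This prints `200 OK`, `400 Bad Request`, `401 Unauthorized`, `404 Not Found`, and `500 Internal Server Error`, matching the "Status codes" list in the JSON lesson.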


CODE_LABS: Dict[str, Dict[str, str]] = {
    "Read a simple Python function": {
        "code": md("""
            def classify_ticket(priority, customer_tier):
                if priority == "critical":
                    return "Escalate immediately"
                if customer_tier == "enterprise":
                    return "Fast-track review"
                return "Normal queue"
        """),
        "walkthrough": md("""
            ## Walkthrough

            This function accepts two inputs:
            - `priority`
            - `customer_tier`

            It applies **ordered decision logic**:
            1. If priority is critical, it immediately escalates.
            2. Otherwise, if the customer is enterprise, it fast-tracks.
            3. Otherwise, it uses the normal queue.

            **Why this matters**
            This is a simple rules engine. Not every automation problem needs ML.
        """),
    },
    "Read a tiny API example": {
        "code": md("""
            from fastapi import FastAPI

            app = FastAPI()

            @app.get("/health")
            def health():
                return {"status": "ok"}
        """),
        "walkthrough": md("""
            ## Walkthrough

            - `FastAPI()` creates the application.
            - `@app.get("/health")` defines an endpoint.
            - When that endpoint is called, the function returns JSON.

            **Why health checks exist**
            Operations teams want a lightweight way to ask: "Is this service alive?"
        """),
    },
    "Read a tiny CI workflow": {
        "code": md("""
            name: ci
            on: [push, pull_request]

            jobs:
              test:
                runs-on: ubuntu-latest
                steps:
                  - uses: actions/checkout@v4
                  - uses: actions/setup-python@v5
                    with:
                      python-version: '3.10'
                  - run: python -m py_compile app.py
        """),
        "walkthrough": md("""
            ## Walkthrough

            This workflow says:
            - run on push or pull request
            - create a job named `test`
            - use a GitHub-hosted Ubuntu machine
            - check out the repo
            - install Python 3.10
            - syntax-check `app.py`

            **Why it matters**
            This is a quality gate. Even a basic workflow helps catch obvious errors before release.
        """),
    },
}
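The first code lab's function can be exercised directly. Reproducing it here with spot checks of the ordered decision logic the walkthrough describes:

```python
def classify_ticket(priority, customer_tier):
    # Ordered rules: critical wins first, then enterprise, then the default queue.
    if priority == "critical":
        return "Escalate immediately"
    if customer_tier == "enterprise":
        return "Fast-track review"
    return "Normal queue"

print(classify_ticket("critical", "basic"))   # Escalate immediately
print(classify_ticket("low", "enterprise"))   # Fast-track review
print(classify_ticket("low", "basic"))        # Normal queue
```

Note that a critical ticket from a basic-tier customer still escalates: rule order, not rule count, determines the outcome.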


def all_lessons() -> List[str]:
    values = []
    for module in CURRICULUM.values():
        values.extend(module["lessons"].keys())
    return values


def build_dashboard(profile: Dict[str, Any]) -> str:
    total_lessons = sum(len(module["lessons"]) for module in CURRICULUM.values())
    completed = profile.get("completed_lessons", [])
    weak_areas = profile.get("weak_areas", [])
    notes = profile.get("notes", [])[-5:]

    next_lesson = None
    for module in CURRICULUM.values():
        for lesson_name in module["lessons"].keys():
            if lesson_name not in completed:
                next_lesson = lesson_name
                break
        if next_lesson:
            break

    note_lines = "\n".join([f"- {n}" for n in notes]) if notes else "- No notes saved yet"
    weak_lines = "\n".join([f"- {w}" for w in weak_areas]) if weak_areas else "- No weak areas flagged yet"

    pct = round((len(completed) / total_lessons) * 100, 1) if total_lessons else 0.0

    return md(f"""
        # Learning Dashboard

        **Learner:** {profile.get('name', 'Learner')}

        **Current track:** {profile.get('track', 'Business-to-Technical Foundations')}

        **Goal:** {profile.get('goal', '')}

        **Background:** {profile.get('background', '')}

        **Hours per week:** {profile.get('hours_per_week', 4)}

        **Progress:** {len(completed)}/{total_lessons} lessons marked complete ({pct}%)

        **Recommended next lesson:** {next_lesson or 'You completed everything currently loaded. Expand the curriculum next.'}

        **Last visit:** {profile.get('last_visit', 'Unknown')}

        **Usage count:** {profile.get('usage_count', 0)}

        ## Weak areas to revisit
        {weak_lines}

        ## Recent saved notes
        {note_lines}
    """)


def load_profile(profile_state: Dict[str, Any]):
    profile = profile_state or default_profile()
    profile["usage_count"] = int(profile.get("usage_count", 0)) + 1
    profile["last_visit"] = now_string()
    completed = profile.get("completed_lessons", [])
    return (
        profile,
        build_dashboard(profile),
        profile.get("name", "Learner"),
        profile.get("goal", ""),
        profile.get("background", "Business / product / operations"),
        profile.get("hours_per_week", 4),
        profile.get("track", "Business-to-Technical Foundations"),
        completed,
    )


def save_profile(name: str, goal: str, background: str, hours: int, track: str, completed: List[str], profile_state: Dict[str, Any]):
    profile = profile_state or default_profile()
    profile.update({
        "name": name or "Learner",
        "goal": goal or profile.get("goal", ""),
        "background": background or profile.get("background", ""),
        "hours_per_week": int(hours),
        "track": track,
        "completed_lessons": sorted(set(completed or [])),
        "last_visit": now_string(),
    })
    return profile, build_dashboard(profile)


def add_note(note_text: str, profile_state: Dict[str, Any]):
    profile = profile_state or default_profile()
    clean = (note_text or "").strip()
    if clean:
        profile.setdefault("notes", []).append(f"{now_string()} — {clean}")
    return profile, build_dashboard(profile), ""
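`add_note` relies on `dict.setdefault`, which keeps the function safe even when an older profile dict has no `notes` key yet. A minimal sketch of the pattern (the sample data is illustrative):

```python
profile = {"name": "Learner"}  # hypothetical profile with no "notes" key yet

# setdefault returns the existing list if the key is present,
# or inserts the default ([]) and returns it otherwise.
profile.setdefault("notes", []).append("first note")
profile.setdefault("notes", []).append("second note")

print(profile["notes"])  # ['first note', 'second note']
```

The second call appends to the same list rather than replacing it, which is why repeated `add_note` calls accumulate notes instead of overwriting them.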


def render_module_summary(module_name: str) -> str:
    module = CURRICULUM[module_name]
    lesson_lines = "\n".join([f"- {lesson}" for lesson in module["lessons"].keys()])
    return md(f"""
        # {module_name}

        **Level:** {module['level']}

        **Objective:** {module['objective']}

        ## Lessons
        {lesson_lines}
    """)


def render_lesson(module_name: str, lesson_name: str) -> str:
    return CURRICULUM[module_name]["lessons"][lesson_name]


def get_lesson_choices(module_name: str):
    lessons = list(CURRICULUM[module_name]["lessons"].keys())
    return gr.Dropdown(choices=lessons, value=lessons[0])


def generate_30_day_plan(track: str, hours: int, background: str, goal: str) -> str:
    hours = int(hours)
    pace_note = (
        "Keep the scope intentionally small: one concept block plus one hands-on interaction per week."
        if hours <= 4
        else "You have enough time to combine concept learning with small implementation exercises every week."
    )

    return md(f"""
        # Personalized 30-Day Plan

        **Track:** {track}

        **Background:** {background}

        **Weekly time budget:** {hours} hours

        **Primary goal:** {goal}

        **Pacing note:** {pace_note}

        ## Week 1 — Mental models
        - Learn the difference between rules, ML, and LLM workflows.
        - Read the Orientation module in this app.
        - Watch the embedded CS50 introduction video.
        - Write a one-page summary of how an AI product is layered.

        ## Week 2 — Read code without panic
        - Work through the Python & Programming module.
        - Use the Code Lab tab to explain one snippet in your own words.
        - Learn what functions, conditionals, dictionaries, and files do.

        ## Week 3 — APIs and services
        - Learn JSON, GET, POST, and status codes.
        - Read the FastAPI lesson.
        - Explain what `/health` and `/predict` would mean in a product meeting.

        ## Week 4 — Deployment thinking
        - Learn Git, Docker, and CI/CD at a conceptual level.
        - Use the Architecture Coach tab with 2-3 business problems.
        - End by writing your own architecture summary for one realistic product idea.

        ## Success criteria after 30 days
        1. You can explain what an API is in plain English and technical language.
        2. You can describe the difference between a script, a service, a container, and deployment.
        3. You can match a business problem to a first-pass technical approach.
        4. You can read a small app repo without feeling overwhelmed.
    """)


def architecture_coach(problem: str, data_readiness: str, risk_level: str, horizon: str, delivery_target: str) -> str:
    text = (problem or "").lower()

    if any(word in text for word in ["summarize", "extract", "search", "document", "chat", "email"]):
        approach = "LLM or retrieval-style workflow"
        why = "The problem looks language-heavy and context-heavy."
        stack = "Python + Gradio or FastAPI + model API + retrieval layer if needed"
    elif any(word in text for word in ["predict", "forecast", "classify", "churn", "fraud", "score"]):
        approach = "Supervised ML prototype"
        why = "The problem appears to need structured prediction."
        stack = "Python + pandas + baseline ML + FastAPI or notebook-to-service progression"
    elif any(word in text for word in ["route", "approve", "quote", "policy", "form", "workflow", "if ", "then"]):
        approach = "Rules engine / workflow automation first"
        why = "The logic sounds structured enough that deterministic rules may beat AI initially."
        stack = "Python decision rules + UI + audit logging"
    else:
        approach = "Process mapping first, AI second"
        why = "The problem statement is still broad or underspecified."
        stack = "Map workflow, define inputs and outputs, then choose rules/ML/LLM"

    data_note = {
        "No clean data": "Expect a painful data-preparation phase before promising ML performance.",
        "Some spreadsheets / exports": "Good enough for a prototype if the fields are interpretable.",
        "Clean structured data": "Strong starting position for scoped prototyping.",
        "Mostly documents / text": "Natural fit for extraction, retrieval, summarization, or chat workflows.",
    }[data_readiness]

    risk_note = {
        "Low": "Move fast with feedback loops.",
        "Medium": "Add explicit review checkpoints and lightweight validation.",
        "High / regulated": "Bias hard toward auditability, human review, traceability, and narrow scope.",
    }[risk_level]

    target_note = {
        "Clickable demo": "Gradio is an excellent first destination.",
        "Internal service": "FastAPI plus authentication and logging becomes more relevant.",
        "Production-minded prototype": "Think in terms of APIs, containers, tests, and deployment discipline.",
    }[delivery_target]

    return md(f"""
        # Architecture recommendation

        **Recommended first approach:** {approach}

        **Why:** {why}

        **Suggested starter stack:** {stack}

        **Data note:** {data_note}

        **Risk note:** {risk_note}

        **Timeline:** {horizon}

        **Delivery target note:** {target_note}

        ## First three actions
        1. Write the exact business decision the tool should improve.
        2. Gather 10-20 realistic examples.
        3. Define what a good output looks like before adding more complexity.
    """)
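`architecture_coach` routes on keyword matches, and the core pattern is `any(word in text for word in …)` applied to a lowercased problem statement. A stripped-down sketch of that triage logic (category labels abbreviated; the keyword lists mirror the function above):

```python
def triage(problem: str) -> str:
    # Lowercase once, then test keyword groups in priority order.
    text = (problem or "").lower()
    if any(w in text for w in ["summarize", "extract", "search", "document", "chat", "email"]):
        return "LLM / retrieval workflow"
    if any(w in text for w in ["predict", "forecast", "classify", "churn", "fraud", "score"]):
        return "Supervised ML prototype"
    if any(w in text for w in ["route", "approve", "quote", "policy", "form", "workflow"]):
        return "Rules engine first"
    return "Process mapping first"

print(triage("Summarize inbound email threads"))  # LLM / retrieval workflow
print(triage("Predict churn for Q3 accounts"))    # Supervised ML prototype
print(triage(""))                                 # Process mapping first
```

Because the groups are tested in order, a problem mentioning both "email" and "predict" lands in the first matching bucket; substring matching is crude but good enough for a first-pass recommendation.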
786
+
787
+
788
+ def render_code_lab(lab_name: str) -> Tuple[str, str]:
789
+ lab = CODE_LABS[lab_name]
790
+ return lab["code"], lab["walkthrough"]
791
+
792
+
793
+ def get_quiz_ui(quiz_name: str):
794
+ questions = QUIZ_BANK[quiz_name]["questions"]
795
+ updates = []
796
+ for q in questions:
797
+ updates.append(gr.Radio(choices=q["choices"], label=q["prompt"], value=None))
798
+ return updates
799
+
800
+
801
+ def grade_quiz(quiz_name: str, q1: str, q2: str, q3: str, profile_state: Dict[str, Any]):
802
+ profile = profile_state or default_profile()
803
+ questions = QUIZ_BANK[quiz_name]["questions"]
804
+ answers = [q1, q2, q3]
805
+ score = 0
806
+ feedback = []
807
+
808
+ for idx, (question, user_answer) in enumerate(zip(questions, answers), start=1):
809
+ correct = question["answer"]
810
+ if user_answer == correct:
811
+ score += 1
812
+ feedback.append(f"✅ Q{idx}: Correct — {question['explanation']}")
813
+ else:
814
+ feedback.append(f"❌ Q{idx}: Correct answer = **{correct}** — {question['explanation']}")
815
+
816
+ if score < len(questions):
817
+ profile.setdefault("weak_areas", []).append(quiz_name)
818
+ profile["weak_areas"] = sorted(set(profile["weak_areas"]))
819
+
820
+ result = md("\n".join([f"- {line}" for line in feedback]))
821
+ summary = md(f"""
822
+ # Quiz score: {score}/{len(questions)}
823
+
824
+ {result}
825
+ """)
826
+ return profile, summary, build_dashboard(profile)
827
+
828
+
829
+ TUTOR_SYSTEM_PROMPT = md("""
830
+ You are an adaptive technical tutor for a business-to-technical AI learning app.
831
+ Your job is to teach clearly, patiently, and concretely.
832
+
833
+ Priorities:
834
+ 1. Explain in plain English first.
835
+ 2. Then give a more technical version.
836
+ 3. Use business analogies when helpful.
837
+ 4. Never assume the learner already knows software engineering vocabulary.
838
+ 5. Keep answers practical, structured, and confidence-building.
839
+ 6. When asked about tools, explain when to use them and when not to.
840
+ 7. When asked about architecture, map the problem to rules vs ML vs LLM thinking.
841
+ 8. Encourage the learner to practice with a tiny next step.
842
+ """)
843
+
844
+
845
def local_tutor_response(message: str, mode: str, current_lesson: str, track: str) -> str:
    text = (message or "").lower()

    # .get with a default keeps this safe if an unexpected mode value arrives.
    intro = {
        "Beginner-friendly": "I'll explain this in plain English first.",
        "Business analogy": "I'll frame this like a business process and operating model discussion.",
        "Technical transition": "I'll bridge from business understanding into technical implementation language.",
        "Quiz me": "I'll answer, then end with a short self-check question.",
    }.get(mode, "I'll explain this in plain English first.")

    if any(k in text for k in ["api", "endpoint", "json", "post", "get"]):
        body = md(f"""
{intro}

**API explanation**
An API is a structured contract for one software system to talk to another.

**Plain-English version**
Think of it like a standardized intake form between teams.
If the form is filled out correctly, the receiving side knows what action to take.

**Technical version**
A client sends an HTTP request to an endpoint using verbs like GET or POST.
The server validates the input, performs logic, and returns a response, often in JSON.

**Why it matters**
In real AI products, models are usually wrapped behind APIs so other apps can call them safely.

**Tiny next step**
Open the lesson **What an API is** or **JSON, GET, POST, and status codes** in this app and compare the vocabulary.
""")
    elif any(k in text for k in ["docker", "container", "deployment"]):
        body = md(f"""
{intro}

**Docker explanation**
Docker packages an app with the environment it needs so it runs more consistently.

**Plain-English version**
Instead of handing someone only the recipe, you also ship the kitchen setup.

**Technical version**
A Docker image is the packaged blueprint. A container is the running instance created from that image.

**Why it matters**
Deployment becomes easier when the runtime is standardized.

**Tiny next step**
Read the **Docker and containers** lesson, then explain in one paragraph why 'works on my machine' is a deployment problem.
""")
    elif any(k in text for k in ["kubernetes", "k8s"]):
        body = md(f"""
{intro}

**Kubernetes explanation**
Kubernetes manages containers across a larger environment.

**Most important advice**
Do not start with Kubernetes.
Start with understanding scripts, services, Git, and containers first.

**Technical version**
Kubernetes helps manage scaling, rollout, service discovery, and resilience for containerized apps.

**Tiny next step**
Learn what a service, container, and health check are before diving deeper.
""")
    elif any(k in text for k in ["mcp", "agent", "rag", "llm"]):
        body = md(f"""
{intro}

**LLM / RAG / agents / MCP explanation**
These all sit in the family of AI application design, but they are not the same thing.

- **LLM app**: uses a language model directly.
- **RAG**: retrieves context first, then generates.
- **Agent**: can select tools or take multiple action steps.
- **MCP**: a standard way to expose tools, resources, and prompts to AI applications.

**Tiny next step**
Read the lesson **LLM apps, RAG, agents, and MCP** and then ask me to compare two of those directly.
""")
    else:
        body = md(f"""
{intro}

I can help with this, but I want to structure it so it actually builds skill.

**Current track:** {track}
**Current lesson context:** {current_lesson}

**A good way to think about your question**
1. What is the business problem?
2. What are the inputs and outputs?
3. Is the first solution rules-based, ML-based, or LLM-based?
4. What layer are we talking about: UI, API, model, data, or deployment?

Send me the question again with one of these forms:
- "Explain X in plain English"
- "Compare X vs Y"
- "Turn this business problem into an architecture"
- "Quiz me on X"
- "Explain this code"
""")

    if mode == "Quiz me":
        body += "\n\n**Self-check:** In one sentence, what practical problem does this concept solve?"

    return body


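The keyword routing in `local_tutor_response` is plain substring matching, so topic order matters and very short keys can over-match. This isolated sketch (a simplified stand-in, not the app's function) shows both behaviors:

```python
# Simplified stand-in for the substring routing in local_tutor_response.
# The first bucket with any matching keyword wins, so ordering matters,
# and short keys like "get" can match unrelated messages.
ROUTES = [
    ("api", ["api", "endpoint", "json", "post", "get"]),
    ("docker", ["docker", "container", "deployment"]),
    ("kubernetes", ["kubernetes", "k8s"]),
    ("ai_apps", ["mcp", "agent", "rag", "llm"]),
]

def route(message: str) -> str:
    text = (message or "").lower()
    for topic, keywords in ROUTES:
        if any(k in text for k in keywords):
            return topic
    return "fallback"

print(route("What is a Docker container?"))  # → docker
print(route("Let's get together"))           # → api  ("get" over-matches)
print(route("Explain variables"))            # → fallback
```

Word-boundary matching (for example with a regex) would remove the over-match, at the cost of missing compound forms like "endpoints".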
def call_anthropic_chat(
    message: str,
    history: List[Dict[str, Any]],
    mode: str,
    current_lesson: str,
    track: str,
    runtime_key: str,
    model_name: str,
    base_url: str,
) -> str:
    api_key = (runtime_key or "").strip() or SERVER_API_KEY
    model = (model_name or "").strip() or DEFAULT_MODEL
    endpoint_base = (base_url or "").strip() or DEFAULT_BASE_URL

    if not api_key or not model or Anthropic is None:
        return local_tutor_response(message, mode, current_lesson, track)

    system_prompt = (
        TUTOR_SYSTEM_PROMPT
        + f"\n\nTutor mode: {mode}. Current lesson: {current_lesson}. Current track: {track}."
    )

    messages = []
    normalized_history = history or []
    for item in normalized_history[-8:]:
        if isinstance(item, dict):
            role = item.get("role", "user")
            content = item.get("content", "")
            if role in {"user", "assistant"} and str(content).strip():
                messages.append({"role": role, "content": str(content)})
        elif isinstance(item, (list, tuple)) and len(item) == 2:
            user_msg, assistant_msg = item
            if user_msg:
                messages.append({"role": "user", "content": str(user_msg)})
            if assistant_msg:
                messages.append({"role": "assistant", "content": str(assistant_msg)})

    messages.append({"role": "user", "content": message})

    try:
        client_kwargs = {"api_key": api_key}
        if endpoint_base and endpoint_base != "https://api.anthropic.com":
            client_kwargs["base_url"] = endpoint_base.rstrip("/")
        client = Anthropic(**client_kwargs)
        response = client.messages.create(
            model=model,
            max_tokens=900,
            system=system_prompt,
            messages=messages,
        )
        parts = []
        for block in getattr(response, "content", []) or []:
            txt = getattr(block, "text", None)
            if txt:
                parts.append(txt)
        if parts:
            return "\n\n".join(parts)
        return local_tutor_response(message, mode, current_lesson, track)
    except Exception as exc:
        return md(f"""
I could not reach the live Anthropic API, so I am falling back to the built-in tutor.

**Error summary:** `{type(exc).__name__}: {exc}`

---

{local_tutor_response(message, mode, current_lesson, track)}
""")


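The history-normalization loop in `call_anthropic_chat` accepts both Gradio's messages-style dicts and legacy `(user, assistant)` tuples. This self-contained sketch mirrors that logic so it can be tested without the Anthropic SDK:

```python
# Mirrors the history normalization in call_anthropic_chat: accepts both
# messages-style dicts and legacy (user, assistant) tuples, keeps only the
# last 8 items, and drops empty content and non-chat roles.
from typing import Any, Dict, List

def normalize_history(history: List[Any]) -> List[Dict[str, str]]:
    messages: List[Dict[str, str]] = []
    for item in (history or [])[-8:]:
        if isinstance(item, dict):
            role = item.get("role", "user")
            content = item.get("content", "")
            if role in {"user", "assistant"} and str(content).strip():
                messages.append({"role": role, "content": str(content)})
        elif isinstance(item, (list, tuple)) and len(item) == 2:
            user_msg, assistant_msg = item
            if user_msg:
                messages.append({"role": "user", "content": str(user_msg)})
            if assistant_msg:
                messages.append({"role": "assistant", "content": str(assistant_msg)})
    return messages

mixed = [
    {"role": "user", "content": "What is an API?"},
    {"role": "assistant", "content": "A structured contract..."},
    {"role": "system", "content": "ignored"},            # non-chat role dropped
    ("What about Docker?", "It packages the runtime."),  # legacy tuple split in two
]
print(normalize_history(mixed))
```

Handling both shapes matters because `gr.ChatInterface` has used tuple-style history in older Gradio versions and dict-style messages in newer ones.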
def test_anthropic_connection(api_key_state: str, model_name: str, base_url: str) -> str:
    api_key = (api_key_state or "").strip() or SERVER_API_KEY
    model = (model_name or "").strip() or DEFAULT_MODEL
    endpoint_base = (base_url or "").strip() or DEFAULT_BASE_URL

    if Anthropic is None:
        return "❌ The `anthropic` Python package is not installed. Add it to requirements.txt and rebuild the Space."
    if not api_key:
        return "❌ No API key detected. Add `ANTHROPIC_API_KEY` in Space Settings → Secrets or save a browser key in the app."
    if not model:
        return "❌ No model name detected. Set `ANTHROPIC_MODEL` or type one in the app."

    try:
        client_kwargs = {"api_key": api_key}
        if endpoint_base and endpoint_base != "https://api.anthropic.com":
            client_kwargs["base_url"] = endpoint_base.rstrip("/")
        client = Anthropic(**client_kwargs)
        response = client.messages.create(
            model=model,
            max_tokens=32,
            system="You are a connectivity test. Reply with exactly: LIVE_CONNECTION_OK",
            messages=[{"role": "user", "content": "ping"}],
        )
        text_parts = [getattr(block, "text", "") for block in getattr(response, "content", []) or []]
        reply = " ".join([x for x in text_parts if x]).strip() or "(empty text response)"
        return md(f"""
✅ **Live Anthropic connection succeeded**

- **Model:** `{model}`
- **Base URL:** `{endpoint_base}`
- **Reply preview:** `{reply}`
""")
    except Exception as exc:
        return md(f"""
❌ **Live Anthropic connection failed**

- **Model:** `{model}`
- **Base URL:** `{endpoint_base}`
- **Error:** `{type(exc).__name__}: {exc}`

Check whether the key is a direct Anthropic API key, whether the model name is valid for your account, and whether you accidentally set a non-default base URL.
""")


def save_user_api_key(user_api_key: str, api_key_state: str, model_name: str):
    stored = (user_api_key or api_key_state or "").strip()
    if stored:
        save_msg = "✅ API key stored in this browser state for this app."
    else:
        save_msg = "No user API key stored. The app will use the Space secret if configured, otherwise the built-in tutor."
    return stored, save_msg, app_status_message(stored, model_name)


def app_status_message(api_key_state: str, model_name: str):
    if api_key_state.strip() or SERVER_API_KEY:
        source = "browser-supplied key" if api_key_state.strip() else "Space secret ANTHROPIC_API_KEY"
        model = model_name.strip() or DEFAULT_MODEL or "(set ANTHROPIC_MODEL or enter a model)"
        return md(f"""
✅ **AI tutor live mode should be available**
- Key source: **{source}**
- Model: `{model}`
- Provider: `Anthropic Messages API` via the official Python SDK
- Use the **Test live connection** button below to verify the key and model directly.
""")
    return md("""
❌ **Local tutor mode only**
No live API key is currently available.

To enable the live tutor, either:
1. set `ANTHROPIC_API_KEY` in Hugging Face Space Secrets, or
2. paste a temporary key below and save it into browser storage.
""")


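The precedence used throughout (a browser-supplied key wins, then the Space secret, then nothing) is `or`-chaining on stripped strings. A tiny stand-in, separate from the app's functions:

```python
# Stand-in for the key-resolution precedence used in call_anthropic_chat and
# test_anthropic_connection: a non-blank runtime value wins, otherwise fall
# back to the server-side secret (which may itself be empty).
def resolve_key(runtime_key, server_key: str) -> str:
    return (runtime_key or "").strip() or server_key

print(resolve_key("  sk-user  ", "sk-secret"))  # → sk-user
print(resolve_key("", "sk-secret"))             # → sk-secret
print(resolve_key(None, ""))                    # → (empty: local tutor mode)
```

The `(runtime_key or "")` guard matters because Gradio state values can be `None`, and `None.strip()` would raise.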
INTRO_HTML = f"""
<div style="padding: 12px 0 4px 0;">
  <div style="background: linear-gradient(135deg, #0f172a, #1e3a8a); color: white; border-radius: 18px; padding: 22px;">
    <h1 style="margin-top: 0;">{APP_TITLE}</h1>
    <p style="font-size: 16px; line-height: 1.6;">
      This app is designed to help a business-side or early-transition learner become substantially more technical over time.
      It combines a structured curriculum, architecture coaching, quizzes, code reading, and an optional live AI tutor.
    </p>
    <p style="font-size: 15px; line-height: 1.6; margin-bottom: 0;">
      Start with the video below, then use the Dashboard and Curriculum tabs to build momentum.
    </p>
  </div>
</div>
<div style="margin-top: 14px; border-radius: 16px; overflow: hidden; border: 1px solid #dbe4ff;">
  <iframe
    width="100%"
    height="480"
    src="{CS50_EMBED_URL}"
    title="CS50P Lecture 0"
    frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
    allowfullscreen>
  </iframe>
</div>
<p style="margin-top: 8px; font-size: 14px; color: #475569;">
  Embedded intro: CS50P Lecture 0. If the video does not load in your browser, open it directly on YouTube.
</p>
"""


CUSTOM_CSS = """
.gradio-container {max-width: 1300px !important;}
#hero-note {font-size: 0.95rem; color: #475569;}
.section-card {border: 1px solid #e2e8f0; border-radius: 16px; padding: 14px; background: white;}
"""


with gr.Blocks(
    title=APP_TITLE,
    theme=gr.themes.Soft(),
    css=CUSTOM_CSS,
    delete_cache=(3600, 3600),
) as demo:
    # Browser-persisted state for this learner.
    profile_store = gr.BrowserState(default_profile())
    api_key_store = gr.BrowserState("")

    gr.HTML(INTRO_HTML)

    with gr.Row():
        dashboard_md = gr.Markdown()
        tutor_status_md = gr.Markdown(value=app_status_message("", DEFAULT_MODEL))

    with gr.Tab("Dashboard & Profile"):
        with gr.Row():
            with gr.Column(scale=1):
                learner_name = gr.Textbox(label="Learner name", placeholder="Example: Jordan")
                learning_goal = gr.Textbox(
                    label="Main goal",
                    lines=3,
                    placeholder="Example: I want to become technically fluent enough to understand AI product architecture and talk to engineers confidently.",
                )
                background = gr.Textbox(label="Current background", placeholder="Example: business / product / operations")
                hours_per_week = gr.Slider(1, 15, value=4, step=1, label="Hours per week available")
                track = gr.Dropdown(
                    choices=[
                        "Business-to-Technical Foundations",
                        "AI Product Manager Transition",
                        "Business Analyst to Technical Builder",
                        "Future ML/AI Operator",
                    ],
                    value="Business-to-Technical Foundations",
                    label="Learning track",
                )
                completed_lessons = gr.CheckboxGroup(choices=all_lessons(), label="Completed lessons")
                save_profile_btn = gr.Button("Save profile & refresh dashboard", variant="primary")

            with gr.Column(scale=1):
                notes_box = gr.Textbox(
                    label="Saved notes",
                    placeholder="Type a learning reflection, a confusing term, or a project idea here.",
                    lines=5,
                )
                save_note_btn = gr.Button("Save note")
                gr.Markdown(md("""
### How to use this app well
1. Save your profile.
2. Work through one module at a time.
3. Mark lessons complete as you go.
4. Use the tutor only after attempting your own explanation.
5. Revisit weak areas flagged by quizzes.
"""))

    with gr.Tab("Curriculum"):
        first_module = list(CURRICULUM.keys())[0]
        first_lesson = list(CURRICULUM[first_module]["lessons"].keys())[0]
        with gr.Row():
            with gr.Column(scale=1):
                module_name = gr.Dropdown(choices=list(CURRICULUM.keys()), value=first_module, label="Module")
                module_summary = gr.Markdown(value=render_module_summary(first_module))
            with gr.Column(scale=2):
                lesson_name = gr.Dropdown(
                    choices=list(CURRICULUM[first_module]["lessons"].keys()),
                    value=first_lesson,
                    label="Lesson",
                )
                lesson_content = gr.Markdown(value=render_lesson(first_module, first_lesson))

    with gr.Tab("30-Day Plan"):
        plan_btn = gr.Button("Generate personalized 30-day plan", variant="primary")
        plan_md = gr.Markdown()

    with gr.Tab("Architecture Coach"):
        problem_statement = gr.Textbox(
            lines=6,
            label="Describe the business problem",
            placeholder="Example: We want to help account managers summarize large customer email threads and draft the next recommended response.",
        )
        with gr.Row():
            data_readiness = gr.Radio(
                ["No clean data", "Some spreadsheets / exports", "Clean structured data", "Mostly documents / text"],
                value="Mostly documents / text",
                label="Data situation",
            )
            risk_level = gr.Radio(["Low", "Medium", "High / regulated"], value="Medium", label="Risk level")
            time_horizon = gr.Radio(["2 weeks", "1 month", "2+ months"], value="1 month", label="Timeline")
            delivery_target = gr.Radio(
                ["Clickable demo", "Internal service", "Production-minded prototype"],
                value="Clickable demo",
                label="Delivery target",
            )
        architecture_btn = gr.Button("Generate architecture recommendation", variant="primary")
        architecture_md = gr.Markdown()

    with gr.Tab("AI Tutor"):
        gr.Markdown(md("""
Use this tab as an adaptive coach.

- If a Space secret named `ANTHROPIC_API_KEY` is configured, the app can use it.
- You can also paste your own API key below and store it in browser state for this app.
- If no key is configured, the tutor still works using built-in teaching logic.
"""))
        with gr.Row():
            tutor_mode = gr.Radio(
                ["Beginner-friendly", "Business analogy", "Technical transition", "Quiz me"],
                value="Beginner-friendly",
                label="Tutor mode",
            )
            current_lesson = gr.Dropdown(choices=all_lessons(), value=all_lessons()[0], label="Current lesson context")
        with gr.Row():
            user_api_key = gr.Textbox(label="Optional user API key", type="password", placeholder="Paste only if you want this browser session to use your own key")
            model_name = gr.Textbox(label="Model name", value=DEFAULT_MODEL, placeholder="Enter a Claude model name or set ANTHROPIC_MODEL in Space Secrets/Variables")
            base_url = gr.Textbox(label="API base URL", value=DEFAULT_BASE_URL)
        save_key_btn = gr.Button("Save API key for this app in this browser")
        save_key_status = gr.Markdown()

        def tutor_fn(message, history, mode, lesson, track_value, runtime_key, model_value, base_value):
            return call_anthropic_chat(message, history, mode, lesson, track_value, runtime_key, model_value, base_value)

        chatbot = gr.ChatInterface(
            fn=tutor_fn,
            additional_inputs=[tutor_mode, current_lesson, track, api_key_store, model_name, base_url],
            save_history=True,
            fill_height=True,
        )
        test_connection_btn = gr.Button("Test live connection")
        test_connection_out = gr.Markdown()

    with gr.Tab("Code Lab"):
        first_lab = list(CODE_LABS.keys())[0]
        with gr.Row():
            code_lab_name = gr.Dropdown(choices=list(CODE_LABS.keys()), value=first_lab, label="Choose a code lab")
        with gr.Row():
            code_view = gr.Code(value=CODE_LABS[first_lab]["code"], language="python", label="Code or config snippet")
            code_walkthrough = gr.Markdown(value=CODE_LABS[first_lab]["walkthrough"])

    with gr.Tab("Quiz & Review"):
        first_quiz = list(QUIZ_BANK.keys())[0]
        first_questions = QUIZ_BANK[first_quiz]["questions"]
        quiz_name = gr.Dropdown(choices=list(QUIZ_BANK.keys()), value=first_quiz, label="Quiz set")
        quiz_q1 = gr.Radio(choices=first_questions[0]["choices"], label=first_questions[0]["prompt"])
        quiz_q2 = gr.Radio(choices=first_questions[1]["choices"], label=first_questions[1]["prompt"])
        quiz_q3 = gr.Radio(choices=first_questions[2]["choices"], label=first_questions[2]["prompt"])
        grade_btn = gr.Button("Grade quiz", variant="primary")
        quiz_result = gr.Markdown()

    with gr.Tab("References & Setup"):
        gr.Markdown(REFERENCE_LIBRARY)
        gr.Markdown(md("""
## Hugging Face Space setup tips
- Put API keys in **Space Settings -> Secrets**, not in the repo.
- For this app, the expected secret names are:
  - `ANTHROPIC_API_KEY`
  - `ANTHROPIC_MODEL` (optional; can also be a public variable if non-sensitive)
  - `ANTHROPIC_BASE_URL` (optional if using a non-default endpoint)
- The app will also work without any live model key because it includes a built-in local tutor.
"""))


    # Load profile from browser state when the app loads.
    demo.load(
        load_profile,
        inputs=[profile_store],
        outputs=[profile_store, dashboard_md, learner_name, learning_goal, background, hours_per_week, track, completed_lessons],
    )
    demo.load(app_status_message, inputs=[api_key_store, model_name], outputs=[tutor_status_md])

    # Save profile.
    save_profile_btn.click(
        save_profile,
        inputs=[learner_name, learning_goal, background, hours_per_week, track, completed_lessons, profile_store],
        outputs=[profile_store, dashboard_md],
    )

    # Save note.
    save_note_btn.click(
        add_note,
        inputs=[notes_box, profile_store],
        outputs=[profile_store, dashboard_md, notes_box],
    )

    # Curriculum updates.
    def update_module_ui(m: str):
        lessons = list(CURRICULUM[m]["lessons"].keys())
        return (
            render_module_summary(m),
            gr.Dropdown(choices=lessons, value=lessons[0]),
            render_lesson(m, lessons[0]),
        )

    module_name.change(update_module_ui, inputs=[module_name], outputs=[module_summary, lesson_name, lesson_content])
    lesson_name.change(render_lesson, inputs=[module_name, lesson_name], outputs=[lesson_content])

    # Personalized plan.
    plan_btn.click(generate_30_day_plan, inputs=[track, hours_per_week, background, learning_goal], outputs=[plan_md])

    # Architecture coach.
    architecture_btn.click(
        architecture_coach,
        inputs=[problem_statement, data_readiness, risk_level, time_horizon, delivery_target],
        outputs=[architecture_md],
    )

    # Save the user API key into browser state.
    save_key_btn.click(
        save_user_api_key,
        inputs=[user_api_key, api_key_store, model_name],
        outputs=[api_key_store, save_key_status, tutor_status_md],
    )
    model_name.change(app_status_message, inputs=[api_key_store, model_name], outputs=[tutor_status_md])
    test_connection_btn.click(
        test_anthropic_connection,
        inputs=[api_key_store, model_name, base_url],
        outputs=[test_connection_out],
    )

    # Code lab update.
    code_lab_name.change(render_code_lab, inputs=[code_lab_name], outputs=[code_view, code_walkthrough])

    # Quiz updates and grading.
    def update_quiz_ui(name: str):
        questions = QUIZ_BANK[name]["questions"]
        return (
            gr.Radio(choices=questions[0]["choices"], label=questions[0]["prompt"], value=None),
            gr.Radio(choices=questions[1]["choices"], label=questions[1]["prompt"], value=None),
            gr.Radio(choices=questions[2]["choices"], label=questions[2]["prompt"], value=None),
        )

    quiz_name.change(update_quiz_ui, inputs=[quiz_name], outputs=[quiz_q1, quiz_q2, quiz_q3])
    grade_btn.click(
        grade_quiz,
        inputs=[quiz_name, quiz_q1, quiz_q2, quiz_q3, profile_store],
        outputs=[profile_store, quiz_result, dashboard_md],
    )


demo.launch()