Welcome to the hands-on section! Based on the slideshow we just watched, choose an experiment below to test how computers "see" the world.
Remember the slide about skeletons?
In this experiment, the AI will try to find your joints (like elbows, knees, and shoulders) and connect them with lines.
Try this: Move your hands and turn your face. Does the AI follow your movements?
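Curious what "connecting joints with lines" looks like in code? Here is a tiny, hand-made sketch. The keypoint shape (a name plus x/y position) mirrors what pose models such as MoveNet return, but all the numbers below are invented for illustration:

```javascript
// Sketch: turning detected joints ("keypoints") into skeleton lines.
// The data here is made up; a real pose model fills it in each frame.
const keypoints = [
  { name: "left_shoulder", x: 120, y: 90 },
  { name: "left_elbow",    x: 140, y: 150 },
  { name: "left_wrist",    x: 135, y: 210 },
];

// Pairs of joints the skeleton should connect with a line.
const connections = [
  ["left_shoulder", "left_elbow"],
  ["left_elbow", "left_wrist"],
];

// Build the list of line segments to draw over the video frame.
function skeletonLines(keypoints, connections) {
  const byName = Object.fromEntries(keypoints.map(k => [k.name, k]));
  return connections
    .filter(([a, b]) => byName[a] && byName[b])
    .map(([a, b]) => ({ from: byName[a], to: byName[b] }));
}

console.log(skeletonLines(keypoints, connections).length); // 2 segments
```

Because the AI re-detects the keypoints on every frame, redrawing these lines each time is what makes the skeleton "follow" you.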
How does a self-driving car see?
This model is trained to recognize thousands of everyday items. It draws a box around what it sees and gives it a label.
Try this: Hold up a pen, a water bottle, or your shoe. Can it guess them all correctly?
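If you're wondering why the AI sometimes stays quiet instead of guessing, here is the idea in miniature. Object detectors (like the coco-ssd model in TensorFlow.js) return a label, a confidence score between 0 and 1, and a box; only confident guesses get drawn. The detections below are made up for illustration:

```javascript
// Sketch: invented detector output plus a confidence threshold.
// A real model produces the same shape: class, score, and box.
const detections = [
  { class: "bottle",     score: 0.91, bbox: [40, 20, 60, 180] },
  { class: "cell phone", score: 0.34, bbox: [200, 110, 50, 90] },
];

// Only draw boxes the model is reasonably sure about.
function confidentDetections(detections, threshold = 0.5) {
  return detections.filter(d => d.score >= threshold);
}

console.log(confidentDetections(detections).map(d => d.class)); // [ 'bottle' ]
```

Try lowering the threshold in your head: at 0.3, the shaky "cell phone" guess would get drawn too, boxes and all.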
Can computers understand feelings?
This AI looks at the geometry of your face (how much your mouth curves or your eyebrows lift) to guess your emotion.
Try this: Make a super happy face, then a sad face. See if the "Confidence Score" changes!
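To show the "geometry" idea, here is a toy smile score built from just three made-up mouth points. Real emotion models use many more landmarks, but the trick is the same: in image coordinates y grows downward, so a smile pulls the mouth corners *up* to smaller y values than the mouth centre:

```javascript
// Sketch: a toy "smile score" from face geometry (invented landmarks).
// Positive score → corners above the centre → probably smiling.
function smileScore(leftCorner, rightCorner, center) {
  const cornerY = (leftCorner.y + rightCorner.y) / 2;
  return center.y - cornerY;
}

const happy = smileScore({ x: 80, y: 200 }, { x: 140, y: 200 }, { x: 110, y: 212 });
const sad   = smileScore({ x: 80, y: 220 }, { x: 140, y: 220 }, { x: 110, y: 212 });
console.log(happy > 0, sad > 0); // true false
```

A bigger gap between the corners and the centre gives a bigger score, which is one reason exaggerated faces move the Confidence Score more than subtle ones.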
Can a computer actually "see" movement?
This project watches your webcam and compares each frame to the one before it.
When something moves, even a tiny bit, the computer highlights the change so you can spot the motion glowing on the screen.
Try this: Wave your hand slowly, then quickly. Notice how the motion pattern changes!
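The frame-comparing trick fits in a few lines. Below, each "frame" is a tiny flat array of grayscale pixel values (0 to 255); a real webcam frame is just a much bigger version of the same thing:

```javascript
// Sketch: frame differencing on two tiny grayscale "frames".
// A pixel counts as "moved" if it changed by more than the threshold.
function changedPixels(prev, curr, threshold = 25) {
  let changed = 0;
  for (let i = 0; i < curr.length; i++) {
    if (Math.abs(curr[i] - prev[i]) > threshold) changed++;
  }
  return changed;
}

const frameA = [10, 10, 10, 10];
const frameB = [10, 200, 10, 180]; // two pixels changed a lot
console.log(changedPixels(frameA, frameB)); // 2
```

A fast wave changes more pixels between consecutive frames than a slow one, which is exactly the difference in the glowing pattern you see on screen.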
How can a computer tell what's in a picture?
This project uses a pre-trained AI model that can look at an image (or even your webcam) and point out objects it recognizes. It doesn't just guess the object; it also shows *where* it is in the picture by drawing a box around it.
Try this: Hold up different objects (like a pencil, cup, or book) and see
which ones the AI can detect!
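One small detail makes the boxes line up: models usually run on a shrunken copy of the image, so the box they return has to be scaled back up before it is drawn. The sizes and box values below are invented for illustration:

```javascript
// Sketch: scaling a detection box from the model's input size back to
// the size of the video on screen (all numbers invented).
function scaleBox([x, y, w, h], modelSize, displaySize) {
  const sx = displaySize.width / modelSize.width;
  const sy = displaySize.height / modelSize.height;
  return [x * sx, y * sy, w * sx, h * sy];
}

// A box found on a 320×240 model input, drawn on a 640×480 video.
console.log(scaleBox([30, 40, 100, 60],
                     { width: 320, height: 240 },
                     { width: 640, height: 480 })); // [ 60, 80, 200, 120 ]
```

Without this step, every box would huddle in the top-left corner of the video at half size.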
How about making your own AI model that detects the objects you care about?
You can try this experiment to build your own custom model using TensorFlow.js and Teachable Machine.
Tip: Try choosing 2 objects around you (like a water bottle, pencil, pair of headphones, or even yourself) and gather at least 30 images of each from different angles. The more variety you include, the smarter your model becomes.
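To see why more varied examples help, here is the training idea boiled down to a toy nearest-centroid classifier. Teachable Machine is far more capable (it reuses a pretrained network's features), but the intuition is similar: average your examples for each class, then match a new image to the closest average. The two-number "feature vectors" below are invented stand-ins for real image features:

```javascript
// Sketch: a toy nearest-centroid classifier (invented feature vectors).
// Average each class's examples, then pick the closest average.
function centroid(vectors) {
  const sum = vectors.reduce((a, v) => a.map((x, i) => x + v[i]));
  return sum.map(x => x / vectors.length);
}

function classify(sample, classes) {
  let best = null, bestDist = Infinity;
  for (const [label, examples] of Object.entries(classes)) {
    const c = centroid(examples);
    const dist = Math.hypot(...c.map((x, i) => x - sample[i]));
    if (dist < bestDist) { bestDist = dist; best = label; }
  }
  return best;
}

const classes = {
  bottle:     [[0.9, 0.1], [0.8, 0.2]],
  headphones: [[0.1, 0.9], [0.2, 0.8]],
};
console.log(classify([0.85, 0.15], classes)); // bottle
```

With only a few near-identical examples, each class's average is fragile; 30+ images from different angles spread the examples out, so new views of your object still land closest to the right class.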