What it’s about
Build a cyber:bot project that uses face recognition to decide when to let someone “pass.” The HUSKYLENS AI camera identifies faces it has previously learned, and the cyber:bot performs a specific motion sequence whenever it sees a recognized individual. This kind of interaction mirrors access-control concepts used in robotics and automated systems, where a device responds only to approved or known inputs.
Face recognition is widely used in modern technology—from unlocking a smartphone to managing access to secure rooms or equipment. In robotics, combining computer vision with movement allows machines to interact intelligently with people and their environment. In this tutorial, you will load a saved face-recognition model from the HUSKYLENS, watch how it identifies different trained face IDs, and control the cyber:bot robot’s maneuvers in real time based on what the camera sees. The result is a simple but powerful demonstration of how AI can guide robotic behavior.
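The decision at the heart of this project can be sketched in plain Python before any hardware is involved. The sketch below is a hypothetical illustration only: `KNOWN_IDS` and the maneuver names are placeholder assumptions, standing in for the face IDs trained into the HUSKYLENS and the cyber:bot drive routines you will use later in the tutorial.

```python
# Hypothetical sketch of the Robot Gatekeeper decision logic.
# KNOWN_IDS stands in for the face IDs trained into model slot 1;
# the returned strings stand in for real cyber:bot motion sequences.

KNOWN_IDS = {1, 2, 3}  # placeholder: IDs the HUSKYLENS has learned

def gatekeeper(face_id):
    """Return the maneuver for a detected face ID."""
    if face_id in KNOWN_IDS:
        return "open_gate"   # recognized face: perform the "pass" maneuver
    return "stay_closed"     # unknown face: hold position

# Only trained IDs trigger the pass maneuver:
print(gatekeeper(2))   # → open_gate
print(gatekeeper(9))   # → stay_closed
```

On the robot, the `face_id` value would come from the HUSKYLENS over I2C each time through the main loop, and the returned choice would select which motion sequence to run.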
What you need to get started
These activities assume that you have already completed the earlier cyber:bot tutorials in sequence. Make sure you have finished through the Navigation and Circuits lessons before beginning this one.
Before running the script for this app, you must train the HUSKYLENS to recognize several faces and save the results using the Logo + A method. Follow the steps in Remember Training Data with a microSD Card (Python) to store the learned faces in model slot 1 on the HUSKYLENS microSD card.
For these activities, you will need:
- A fully assembled and tested cyber:bot robot
- A HUSKYLENS AI camera with its I2C cable and microSD card
- At least three trained face samples stored in model slot 1
After you finish
Once you complete this tutorial, you will be ready to explore more advanced AI-guided navigation activities. You may choose to expand the Robot Gatekeeper into a full access-control demonstration, experiment with other HUSKYLENS algorithms such as color or tag recognition, or combine face detection with line following or obstacle avoidance for more complex robotic behaviors.