Members of TIPL (Cathy Lewin, Nic Whitton, Sarah McNicol and James Duggan) met with Keeley Crockett, David Mclean and Annabel Latham from the School of Computing, Maths and Digital Technology to discuss what we could do with Nao.
Nao is an autonomous, programmable humanoid robot developed by Aldebaran Robotics. Here are some of the things Nao can do:
- Enhanced audio and visual capabilities
- Natural motion reflexes
- Fully programmable, open and autonomous
- Easy to use and understand, helping to achieve better project results and improve learning effectiveness
- Attractive and motivating, quickly capturing an audience's attention

In more detail, his vision and audio capabilities include:
- Camera
- Higher sensitivity in VGA for low-light perception.
- Image processing at up to 30 images/second in HD resolution.
- A great capacity to sense his environment: he can move his head 239° horizontally and 68° vertically, and his camera sees 61° horizontally and 47° vertically.
- Object Recognition
- NAO can recognize a large number of objects. Once an object has been saved, he can recognize it and say what it is the next time he sees it.
- Face Detection and Recognition
- This is one of the best-known features for interaction: NAO can detect and learn a face so that he recognizes it next time.
- Text to Speech
- NAO can speak in up to nine languages.
- Automatic Speech Recognition
- Speech recognition is at the heart of intuitive human-robot interaction. Nuance provides stable and powerful speech recognition: NAO can now hear you from 2 metres away and recognize a complete sentence, or just a few words within it. The result is more fluid and natural conversations.
- Sound Detection and Localization
- Our environment is full of sounds that NAO, like us, can detect and localize in space, thanks to the microphones around his head.
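Since Nao is fully programmable, a flavour of what driving him looks like might help spark ideas. Below is a minimal sketch using Aldebaran's NAOqi Python SDK (`ALProxy`); the robot's IP address is a placeholder, and the code assumes a reachable robot with the SDK installed.

```python
# Hedged sketch of driving NAO's text-to-speech and speech recognition
# via the NAOqi Python SDK. ROBOT_IP is a hypothetical placeholder.

ROBOT_IP = "192.168.1.10"  # replace with your robot's actual address
PORT = 9559                # default NAOqi port

def greet_and_listen(ip=ROBOT_IP, port=PORT):
    # Import here so the sketch can be read/loaded without the SDK present.
    from naoqi import ALProxy

    # Make NAO speak a phrase aloud.
    tts = ALProxy("ALTextToSpeech", ip, port)
    tts.say("Hello, I am NAO.")

    # Set up speech recognition for a small vocabulary
    # (second argument disables word spotting).
    asr = ALProxy("ALSpeechRecognition", ip, port)
    asr.setLanguage("English")
    asr.setVocabulary(["yes", "no", "hello"], False)

# Usage (with a robot on the network):
#   greet_and_listen("192.168.1.10")
```

Recognized words would then be read from NAO's memory (the `WordRecognized` event in `ALMemory`); a full example is beyond this sketch.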
Or, if you prefer videos:
Also, he can dance:
So the question is: what education research can we do with him? Any ideas?