PhD Candidate, Speech Media Processing Group (Okuno Lab)
Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University
E-mail : angelica[at]kuis.kyoto-u.ac.jp
LinkedIn: Angelica Lim
My Master's thesis, Design and Implementation of Emotions for Humanoid Robots based on the Modality-independent DESIRE Model (pdf, 31 MB), discusses how we can generate and analyze emotions in the same way, whether in voice, music, or movement. The model is based on four parameters: speed, intensity, regularity, and extent (SIRE). The thesis shows how this approach can add emotional colour to humanoids such as HRP-2 and NAO, and potentially to any other robot.
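To give a feel for the idea, here is a minimal sketch of a modality-independent SIRE parameterization. The structure (four parameters) comes from the thesis summary above, but the specific value ranges, emotion values, and the gesture mapping are my own illustrative assumptions, not the thesis implementation:

```python
from dataclasses import dataclass

@dataclass
class SIRE:
    """Modality-independent emotion parameters: speed, intensity,
    regularity, extent. Normalized here to [0, 1] for illustration."""
    speed: float
    intensity: float
    regularity: float
    extent: float

# Hypothetical example values -- not taken from the thesis.
EMOTIONS = {
    "happiness": SIRE(speed=0.8, intensity=0.7, regularity=0.6, extent=0.8),
    "sadness":   SIRE(speed=0.2, intensity=0.3, regularity=0.7, extent=0.2),
}

def to_gesture(p: SIRE, base_duration_s: float = 2.0, base_amplitude: float = 1.0):
    """Map SIRE onto simple gesture parameters: higher speed shortens the
    motion, larger extent widens its amplitude. Purely illustrative."""
    duration = base_duration_s * (1.5 - p.speed)   # faster emotion -> shorter motion
    amplitude = base_amplitude * p.extent          # extent scales movement range
    return {"duration_s": round(duration, 2), "amplitude": round(amplitude, 2)}

print(to_gesture(EMOTIONS["sadness"]))
```

Because the same four numbers could drive a voice synthesizer or a music performance instead of a gesture, only the final mapping function would change per modality.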
Angelica Lim, Tetsuya Ogata, Hiroshi G. Okuno: Towards expressive musical robots: A cross-modal framework for emotional gesture, voice and music, EURASIP Journal on Audio, Speech, and Music Processing, 2012:3, Published: 17 January 2012. doi:10.1186/1687-4722-2012-3
Angelica Lim, Takeshi Mizumoto, Tetsuya Ogata, Hiroshi G. Okuno: A musical robot that synchronizes with a co-player using non-verbal cues, Advanced Robotics, Special Issue on Cutting Edge of Robotics in Japan, Vol.26 (2012), pp.363-381. doi:10.1163/156855311X614626
Angelica Lim, Hiroshi G. Okuno: Using speech data to recognize emotion in human gait, Proceedings of the Third Workshop on Human Behavior and Understanding, IEEE/RSJ-2012 Workshop (acceptance rate 42%), A.A. Salah et al. (Eds): HBU 2012, Lecture Notes in Computer Science, Vol.7559, pp.52-64, Springer, Algarve, Portugal, October 7, 2012.
Angelica Lim, Tetsuya Ogata, Hiroshi G. Okuno: Converting emotional voice to motion for robot telepresence, Proceedings of IEEE-RAS International Conference on Humanoid Robots (Humanoids 2011), accepted as oral (acceptance rate 17.4% = 28/190), Bled, Slovenia, Oct. 26-28, 2011.
Angelica Lim, Takeshi Mizumoto, Takuma Otsuka, Tatsuhiko Itohara, Kazuhiro Nakadai, Tetsuya Ogata, Hiroshi G. Okuno: More cowbell! A musical ensemble with the NAO thereminist, Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS-2011), IROS 2011 Standard Platform Demo, San Francisco, 25-30 Sep. 2011.
Angelica Lim, Takeshi Mizumoto, Louis-Kenzo Cahier, Takuma Otsuka, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno: Robot Musical Accompaniment: Integrating Audio and Visual Cues for Real-time Synchronization with a Human Flutist (Invited paper), Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS-2010), IEEE, RSJ, Taipei, Oct. 2010. NTF Award for Entertainment Robots and Systems (1/832 papers)
Angelica Lim, Takeshi Mizumoto, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno: Programming by Playing and Approaches for Expressive Robot Performances, Proceedings of IEEE/RSJ-2010 Workshop on Robots and Musical Expression, Oct. 2010, Taipei, Taiwan.
Takeshi Mizumoto, Angelica Lim, Takuma Otsuka, Kazuhiro Nakadai, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno: Integration of flutist gesture recognition and beat tracking for human-robot ensemble, Proceedings of IEEE/RSJ-2010 Workshop on Robots and Musical Expression, Oct. 2010, Taipei, Taiwan.
Converting emotional voice to motion for robot telepresence
NAO robot plays the Theme from Star Trek
NAO plays Hey Jude while listening to the beat
A Theremin-playing, Opera-singing Robot Accompanist
Programming by Playing: A Music Robot with Expression
Visual Cue Test with Vocaloid
In this video, I perform Lakmé's Flute Duet with Prima Vocaloid. I start and stop Vocaloid using a vision-based flute cue recognition algorithm. In other words, I control when Prima starts and stops by moving my flute! A short description can be found here.