- Aug. 21st, 2014, The 11th Tutorial on Open Source Robot Audition Software HARK will be held at Kyoto Univ., Kyoto, Japan CFP
- Dec. 20th, 2013, Sample files for HARK 2.0 and corresponding HARK Cookbook parts are updated. HARK Document
- Dec. 5th, 2013, The 10th HARK Tutorial will be held at Waseda University, Japan. CFP
- Dec. 5th, 2013, "HARK 2.0" released.
- Oct. 20th, 2013, HARK was featured in Bob Igo's talk "Open Senses: Open Source Robotics Software for non-Robotic Uses" (URL, slides)
- Oct. 2nd, 2013, HARK 1.9.9 released.
- Oct. 2nd, 2013, The 9th HARK Tutorial will be held at LAAS, Toulouse, France. CFP
- Mar. 19th, 2013, The 8th Tutorial on Open Source Robot Audition Software HARK will be held at Kyoto Univ., Kyoto, Japan http://winnie.kuis.kyoto-u.ac.jp/HARK/8th-HARK-tutorial-CFP.pdf
- Mar. 15th, 2013, HARK 1.2.0 released.
- Jun. 22nd 2012, HARK 1.1.1 released for Ubuntu 12.04 Precise.
- Mar. 21st 2012, documents for HARK 1.1.0 released.
- Mar. 9th, 2012, The 7th Tutorial on Open Source Robot Audition Software HARK will be held at Nagoya Institute of Technology, Nagoya, Japan http://winnie.kuis.kyoto-u.ac.jp/HARK/7th-HARK-tutorial-CFP.pdf
- Feb. 29th 2012, HARK 1.1.0 released.
- Feb. 29th 2012, The 6th Tutorial on Open Source Robot Audition Software HARK will be held at UPMC, Paris, France.
- Nov. 25th 2010, HARK 1.0.0 released.
HRI-JP Audition for Robots with Kyoto University (HARK)
HARK is open-source robot audition software consisting of modules for sound source localization, sound source separation, and automatic speech recognition of the separated speech signals. It works with any robot and any microphone configuration.
Since a robot with ears may be deployed in a variety of acoustic environments, a robot audition system should be easy to adapt to each of them. HARK provides a set of modules that can be recombined for different environments on the open-source middleware FlowDesigner, which also reduces the overhead of data transfer between modules.
HARK has been open source since April 2008. Implementations of HARK with MUSIC-based sound source localization, GHDSS-based sound source separation, and Missing-Feature-Theory-based automatic speech recognition (ASR) on several robots, such as HRP-2, SIG, SIG2, and Robovie R2, can recognize three simultaneous utterances in real time.
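To illustrate the MUSIC principle underlying HARK's localization, here is a minimal NumPy sketch of narrowband MUSIC direction-of-arrival estimation. This is not HARK's implementation; the function name, array geometry, and simulated signal are all made up for illustration.

```python
# Minimal narrowband MUSIC DOA sketch (illustrative, NOT HARK code).
import numpy as np

def music_spectrum(X, n_sources, mic_pos, freq, c=343.0,
                   angles=np.linspace(-90, 90, 181)):
    """X: (n_mics, n_snapshots) complex STFT snapshots at one frequency bin.
    mic_pos: microphone positions (m) along a linear array."""
    R = X @ X.conj().T / X.shape[1]            # spatial correlation matrix
    w, V = np.linalg.eigh(R)                    # eigenvalues ascending
    En = V[:, :X.shape[0] - n_sources]          # noise subspace
    p = np.empty(len(angles))
    for i, th in enumerate(np.deg2rad(angles)):
        a = np.exp(-2j * np.pi * freq * mic_pos * np.sin(th) / c)  # steering vector
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)      # MUSIC pseudospectrum
    return angles, p

# Usage: simulate one source at 30 degrees on a 4-mic array, 5 cm spacing.
rng = np.random.default_rng(0)
mics = np.arange(4) * 0.05
f = 2000.0
a_true = np.exp(-2j * np.pi * f * mics * np.sin(np.deg2rad(30)) / 343.0)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
noise = 0.01 * (rng.standard_normal((4, 200)) + 1j * rng.standard_normal((4, 200)))
X = np.outer(a_true, s) + noise
angles, p = music_spectrum(X, 1, mics, f)
print(angles[np.argmax(p)])   # peak should be near 30 degrees
```

HARK's module additionally handles broadband signals, measured transfer functions for arbitrary array geometries, and tracking over time; this sketch only shows the subspace idea for a single frequency bin.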
HARK consists of many modules for robot audition, each implemented as a FlowDesigner module; some are based on ManyEars, which provides microphone array processing for sound source localization, tracking, and separation. A typical HARK processing pipeline consists of the following stages:
- Audio Signal Input
- Sound Source Localization
- Sound Source Separation
- Acoustic Feature Extraction
- Automatic Missing Feature Mask Generation
- Speech Recognition Client
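The "Automatic Missing Feature Mask Generation" stage marks unreliable time-frequency bins so the recognizer can ignore them. The sketch below shows one common masking heuristic, a binary SNR threshold; the function name, threshold, and noise estimate are assumptions for illustration, not HARK's actual algorithm.

```python
# Illustrative binary missing-feature mask (NOT HARK's implementation):
# a time-frequency bin is "reliable" (1.0) when its estimated SNR is at
# or above a threshold, "unreliable" (0.0) otherwise.
import numpy as np

def binary_mfm(power_spec, noise_floor, snr_thresh_db=0.0):
    """power_spec, noise_floor: arrays of per-bin power estimates."""
    snr_db = 10.0 * np.log10(power_spec / np.maximum(noise_floor, 1e-12))
    return (snr_db >= snr_thresh_db).astype(float)

# Usage: 2 frequency bins x 2 frames against a flat noise floor.
spec = np.array([[10.0, 0.5],
                 [2.0, 0.1]])
noise = np.ones((2, 2))
mask = binary_mfm(spec, noise)
print(mask)   # bins with power below the noise floor are masked out
```

A soft (continuous-valued) mask is also possible and is often more robust; the Missing-Feature-Theory-based recognizer then weights each spectral feature by its mask value during decoding.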
Installation instructions for HARK are available here.
Additional packages are also available:
- HARK-ROS Installation Instructions
- HARK-BINAURAL Installation Instructions
- HARK-MUSIC Installation Instructions
See SupportedHardware for the microphone arrays supported by HARK.
- Dataflow-oriented GUI programming environment (middleware) FlowDesigner
- Another robot audition project for FlowDesigner ManyEars
- Open-Source Large Vocabulary CSR Engine Julius
- Linux distribution Ubuntu
- Advanced Linux Sound Architecture ALSA
- Hidden Markov Toolkit HTK
- The RESPITE CASA Toolkit Project CTK