Citation: International Journal on Smart Sensing and Intelligent Systems, Vol. 7, Issue 4, pp. 1908-1921. DOI: https://doi.org/10.21307/ijssis-2017-739
License: CC BY-NC-ND 4.0
Received: 15 October 2014 / Accepted: 12 November 2014 / Published Online: 1 December 2014
Mixed sounds from multiple sound sources can be separated using a microphone array sensor and signal processing. We believe that promoting interest in this technique can lead to significant future developments in science and technology. To investigate this technique, we designed a language game for children called “KIKIWAKE 3D” that uses a sound-source-separation system to arouse children’s interest in the technology. However, the microphone array sensor used in previous research had a limited scope for separating sounds. We therefore developed a spherical microphone array sensor with three-dimensional directivity designed for this game. In this paper, we report an evaluation of this microphone array sensor’s suitability for the game by separating the sound level and using
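The core signal-processing principle behind this kind of microphone-array separation can be illustrated with delay-and-sum beamforming. The following is a minimal sketch only; the array geometry, sample rate, and function names are illustrative assumptions and do not reflect the actual spherical array or algorithm used in this work.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_direction, fs, c=343.0):
    """Steer a microphone array toward `look_direction` (a unit vector).

    signals:       (n_mics, n_samples) time-domain recordings
    mic_positions: (n_mics, 3) microphone coordinates in metres
    fs:            sample rate in Hz; c: speed of sound in m/s
    """
    n_mics, n_samples = signals.shape
    # Plane-wave model: each microphone's delay is the projection of its
    # position onto the look direction, divided by the speed of sound.
    delays = mic_positions @ look_direction / c      # seconds
    delays -= delays.min()                           # make all delays non-negative
    shifts = np.round(delays * fs).astype(int)       # nearest whole samples
    out = np.zeros(n_samples)
    for sig, s in zip(signals, shifts):
        out[: n_samples - s] += sig[s:]              # time-align, then sum
    return out / n_mics                              # average across microphones
```

Signals arriving from the look direction add coherently after alignment, while sounds from other directions add out of phase and are attenuated, which is how the array acquires directivity.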