Bongjun Kim

Audio/Speech/ML Research | Solventum
Ph.D. in Computer Science

CV: PDF
Email: bongjun [at] u.northwestern.edu
YouTube | Scholar | ResearchGate | GitHub | LinkedIn | Twitter


(interactive) Machine Learning | Audio Signal Processing | HCI

I am currently an AI researcher at Solventum. I completed my PhD in computer science at Northwestern University as a member of the Interactive Audio Lab (Advisor: Bryan Pardo).

My research interests include machine learning, audio signal processing (e.g., sound event recognition), intelligent interactive systems, multimedia information retrieval, and human-in-the-loop interfaces. I enjoy working on musical interfaces and interactive media art. I also make music.


News

11/04/2022:   I gave a talk about intelligent user interfaces for sound search at the Applied AI Conference in St. Paul, MN.

3/30/2021:   I was invited to the Conversations on Applied AI podcast to talk about my research on AI and audio. The episode is up now: link

12/03/2020:   I gave a talk about my PhD research at the AppliedAI meetup (YouTube video)

04/28/2020:   I’ve successfully defended my PhD dissertation (Title: Sound Event Annotation and Detection with Less Human Effort)

04/14/2020:   I’ve released an audio embedding model, M-VGGish, which was used in my recent work: Link

10/20/2019:   I am attending a full week of audio events in New York: WASPAA, SANE, and DCASE. I am giving a talk at WASPAA and presenting a poster at SANE. Here is the paper I am presenting: pdf

9/4/2019:   I gave a talk about my work, “Self-supervised Attention Model for Weakly Labeled Audio Event Classification” at EUSIPCO 2019 in A Coruña, Spain.

8/29/2019:   My co-authored journal paper, “Learning to Build Natural Audio Production Interfaces,” was published in Arts

8/24/2019:   My co-authored paper, “Classifying non-speech vocals: Deep vs Signal Processing Representations,” has been accepted to DCASE 2019

7/15/2019:   My paper, “Sound Event Detection Using Point-labeled Data,” has been accepted to WASPAA 2019

7/01/2019:  My DCASE submission (Task 5) placed 3rd out of 22 competing systems (2nd in the team rankings). Read more about the challenge and the results: click.

6/27/2019:   I am giving a talk about “A Human-in-the-loop system for labeling sound events in audio recordings” at Midwest Music and Audio Day 2019 (MMAD) at Indiana University, Bloomington, USA.

6/03/2019:   My paper, “Self-supervised Attention Model for Weakly Labeled Audio Event Classification,” has been accepted to EUSIPCO 2019

5/11/2019:   I am presenting my work, “Improving Content-based Audio Retrieval by Vocal Imitation Feedback,” at ICASSP 2019 in Brighton, UK.

3/16/2019:   I am giving a talk about my work, “A Human-in-the-loop System for Sound Event Detection and Annotation” at IUI 2019 in Los Angeles, USA.

11/19/2018:  My sound classification model placed 3rd (out of 23 competing systems) in the Making Sense of Sounds Data Challenge, 2018

11/19/2018:   I am presenting my work, “Vocal Imitation Set: a dataset of vocally imitated sound events using the AudioSet ontology” at DCASE 2018 in Surrey, UK.