Bongjun Kim

Ph.D. Candidate, Computer Science
Northwestern University

CV: PDF (Oct. 2019)
Email: bongjun [at] u.northwestern.edu
Scholar | ResearchGate | GitHub | LinkedIn | Twitter


(interactive) Machine Learning | Audio Signal Processing | HCI

Bongjun Kim is a Ph.D. candidate in the Department of Computer Science at Northwestern University, working in the Interactive Audio Lab with Prof. Bryan Pardo. His current research interests include sound event detection, human-in-the-loop interfaces for audio annotation, interactive machine learning, and multimedia information retrieval. He also enjoys working on musical interfaces and interactive media art.


News

10/20/2019:   I am attending a full week of audio events in New York: WASPAA, SANE, and DCASE. I am giving a talk at WASPAA and a poster presentation at SANE. Here is the paper I am presenting: pdf

9/4/2019:   I gave a talk about my work, “Self-supervised Attention Model for Weakly Labeled Audio Event Classification” at EUSIPCO 2019 in A Coruña, Spain.

8/29/2019:   My co-authored journal paper, “Learning to Build Natural Audio Production Interfaces,” was published in Arts.

8/24/2019:   My co-authored paper, “Classifying non-speech vocals: Deep vs Signal Processing Representations,” has been accepted to DCASE 2019.

7/15/2019:   My paper, “Sound Event Detection Using Point-labeled Data,” has been accepted to WASPAA 2019.

7/01/2019:  My DCASE submission (task 5) placed 3rd out of 22 competing systems (2nd in the team rankings). Read more about the challenge and the results: click.

6/27/2019:   I am giving a talk about “A Human-in-the-loop System for Labeling Sound Events in Audio Recordings” at Midwest Music and Audio Day 2019 (MMAD) at Indiana University, Bloomington, USA.

6/03/2019:   My paper, “Self-supervised Attention Model for Weakly Labeled Audio Event Classification,” has been accepted to EUSIPCO 2019.

5/11/2019:   I am presenting my work, “Improving Content-based Audio Retrieval by Vocal Imitation Feedback,” at ICASSP 2019 in Brighton, UK.

3/16/2019:   I am giving a talk about my work, “A Human-in-the-loop System for Sound Event Detection and Annotation,” at IUI 2019 in Los Angeles, USA.

11/19/2018:  My sound classification model placed 3rd (out of 23 competing systems) in the Making Sense of Sounds Data Challenge 2018.

11/19/2018:   I am presenting my work, “Vocal Imitation Set: a dataset of vocally imitated sound events using the AudioSet ontology,” at DCASE 2018 in Surrey, UK.