GuessTheMusic: Song Identification from Electroencephalography response

Source
ACM International Conference Proceeding Series
Date Issued
2020-01-02
Author(s)
Sonawane, Dhananjay
Miyapuram, Krishna Prasad  
Bharatesh, R. S.
Lomas, Derek J.
DOI
10.1145/3430984.3431023
Abstract
The music signal comprises different features such as rhythm, timbre, melody, and harmony. Its impact on the human brain has been an exciting research topic for the past several decades. Electroencephalography (EEG) enables the non-invasive measurement of brain activity. Leveraging recent advancements in deep learning, we propose a novel approach for song identification from EEG responses using a Convolutional Neural Network (CNN). We recorded EEG signals from a group of 20 participants while they listened to a set of 12 song clips, each approximately 2 minutes long, presented in random order. The repeating nature of music is captured by a data-slicing approach that treats brain signals of 1-second duration as representative of each song clip. More specifically, we predict the song corresponding to one second of EEG data given as input, rather than a complete two-minute response. We also discuss pre-processing steps for handling the large dimensionality of the dataset, along with various CNN architectures. For all experiments, we include each participant's EEG response to each song in both the training and test data. We obtained 84.96% accuracy with a 0.3 train-test split ratio. Moreover, our model gave commendable results compared to chance-level probability when trained on only 10% of the total dataset. The observed performance supports the notion that listening to a song creates specific patterns in the brain, and that these patterns vary from person to person.
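The abstract's core idea is to slice each two-minute EEG response into 1-second windows and classify each window by song with a CNN. The sketch below illustrates that slicing-plus-classification pipeline only; the channel count, sampling rate, and network layout are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of the 1-second data-slicing idea described in the abstract.
# Assumed (not from the paper): 128 EEG channels, 250 Hz sampling rate, and the
# toy CNN layout below; the authors' preprocessing and architecture differ.
import numpy as np
import torch
import torch.nn as nn

N_CHANNELS = 128      # assumed electrode count
FS = 250              # assumed sampling rate (Hz)
N_SONGS = 12          # 12 song clips, as in the paper
WINDOW = FS * 1       # 1-second slices, as described in the abstract

def slice_response(eeg, song_label, window=WINDOW):
    """Cut one (channels, samples) song response into 1-second windows,
    each labelled with the song it came from."""
    n_windows = eeg.shape[1] // window
    segments = [eeg[:, i * window:(i + 1) * window] for i in range(n_windows)]
    return np.stack(segments), np.full(n_windows, song_label)

class SongCNN(nn.Module):
    """Small 1-D CNN over (channels, time) slices; purely illustrative."""
    def __init__(self, n_channels=N_CHANNELS, n_classes=N_SONGS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):            # x: (batch, channels, samples)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)    # one logit per song

# Example: one synthetic 2-minute response sliced into 120 one-second examples.
eeg = np.random.randn(N_CHANNELS, FS * 120).astype(np.float32)
segments, labels = slice_response(eeg, song_label=3)
logits = SongCNN()(torch.from_numpy(segments))
print(segments.shape, logits.shape)  # (120, 128, 250) (120, 12)
```

Each 1-second window becomes an independent training example carrying its song's label, which is what lets a classifier be trained and evaluated on far shorter inputs than the full two-minute listening response.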
Publication link
https://arxiv.org/pdf/2009.08793
URI
http://repository.iitgn.ac.in/handle/IITG2025/25697
Subjects
brain signals | classification | CNN | EEG | frequency following response | music | neural entrainment