
EMO-MUSIC (Emotion based Music player)

EasyChair Preprint 2463

5 pages. Date: January 26, 2020

Abstract

Everyone wants to listen to music that suits their individual taste, most often based on their current mood. The average person spends a considerable amount of time listening to music, and music has a strong effect on brain activity. Users routinely face the task of manually browsing their music library to create a playlist that matches their mood. This project addresses that task efficiently by generating a music playlist based on the user's current mood. Existing algorithms for this purpose are comparatively slow, less accurate, and sometimes even require additional hardware such as EEG devices or other sensors. Facial expression is the easiest and most ancient way of conveying a person's emotions, feelings, and ongoing mood. This model extracts facial expressions in real time and identifies the user's mood. In this project we use a Haar cascade classifier to extract facial features; based on the extracted features, we use the Cohn-Kanade dataset to identify the user's emotion. If the detected emotion is neutral, the background is analyzed instead and music is played according to it. For example, if gym equipment is detected in the captured image of the background, the algorithm automatically creates a workout song playlist.
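The selection flow described in the abstract (emotion drives the playlist, with a background-scene fallback for a neutral face) can be sketched in plain Python. Every name below (choose_playlist, EMOTION_PLAYLISTS, SCENE_PLAYLISTS, the playlist labels) is illustrative and assumed, not from the paper; the real system would feed this logic with labels produced by the Haar cascade / Cohn-Kanade pipeline.

```python
# Hypothetical sketch of the mood-to-playlist mapping from the abstract.
# The emotion and scene labels would come from the detection pipeline.

EMOTION_PLAYLISTS = {
    "happy": "upbeat_hits",
    "sad": "calm_acoustic",
    "angry": "heavy_rock",
    "surprise": "party_mix",
}

# Background scenes used when the detected emotion is neutral,
# e.g. the gym-equipment example from the abstract.
SCENE_PLAYLISTS = {
    "gym_equipment": "workout_songs",
    "office_desk": "focus_instrumentals",
}

DEFAULT_PLAYLIST = "everyday_mix"


def choose_playlist(emotion, scene=None):
    """Pick a playlist from the detected emotion; for a neutral face,
    fall back to the detected background scene."""
    if emotion == "neutral":
        return SCENE_PLAYLISTS.get(scene, DEFAULT_PLAYLIST)
    return EMOTION_PLAYLISTS.get(emotion, DEFAULT_PLAYLIST)


print(choose_playlist("happy"))                     # upbeat_hits
print(choose_playlist("neutral", "gym_equipment"))  # workout_songs
```

The fallback-on-neutral branch mirrors the abstract's design choice: only when the face gives no usable mood signal does the system spend effort classifying the background.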

Keyphrases: Haar cascade classifier, OpenCV, Python

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:2463,
  author    = {Sarvesh Pal and Ankit Mishra and Hridaypratap Mourya and Supriya Dicholkar},
  title     = {EMO-MUSIC (Emotion based Music player)},
  howpublished = {EasyChair Preprint 2463},
  year      = {EasyChair, 2020}}