Depression Detection using speech as Input Signal
by
Aniket Waghela, Prinkle Singharia, Bhavya Haria, Bhakti Sonawane
2020
Abstract
<em>In this paper we propose a method for computer-based depression detection. It focuses on two aspects: gathering conversations and isolating only the patient's audio, and building a deep learning model for automatic depression detection. A Convolutional Neural Network (CNN) classifier is used to find patterns in the audio characteristics of depressed patients. Training is carried out on the DAIC-WOZ dataset from USC's Institute for Creative Technologies, which was released as part of the AVEC (Audio/Visual Emotion Challenge) 2016. The dataset has a class imbalance, as it contains more non-depressed than depressed participants. To remove this imbalance, we introduce random sampling before model training.</em>
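The class-balancing step mentioned in the abstract can be sketched as random undersampling of the majority (non-depressed) class. The paper does not specify its exact sampling procedure, so the helper below is a hypothetical illustration using only the Python standard library:

```python
import random

def undersample(samples, labels, seed=0):
    """Randomly undersample every class down to the size of the
    smallest class, so the training set is balanced.
    Hypothetical helper; the paper's exact procedure is not given."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    # Size of the smallest class determines how many samples to keep.
    n = min(len(items) for items in by_class.values())
    balanced = []
    for y, items in by_class.items():
        for s in rng.sample(items, n):
            balanced.append((s, y))
    rng.shuffle(balanced)
    return balanced

# Toy example: 6 non-depressed (label 0) vs 2 depressed (label 1) clips.
clips = ["a", "b", "c", "d", "e", "f", "g", "h"]
labels = [0, 0, 0, 0, 0, 0, 1, 1]
balanced = undersample(clips, labels)
```

After this step both classes contribute equally to CNN training, which prevents the classifier from simply predicting the majority class.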
Archived Files and Locations: application/pdf, 640.1 kB — zenodo.org (repository), web.archive.org (webarchive)