
dc.contributor.author	Simić, Nikola
dc.contributor.author	Suzić, Siniša
dc.contributor.author	Nosek, Tijana
dc.contributor.author	Vujović, Mia
dc.contributor.author	Perić, Zoran
dc.contributor.author	Savić, Milan
dc.contributor.author	Delić, Vlado
dc.date.accessioned	2023-04-10T12:13:45Z
dc.date.available	2023-04-10T12:13:45Z
dc.date.issued	2022-03-16
dc.identifier.citation	III44006	en_US
dc.identifier.uri	https://platon.pr.ac.rs/handle/123456789/1181
dc.description.abstract	Speaker recognition is an important classification task that can be solved using several approaches. Although building a speaker recognition model on a closed set of speakers under neutral speaking conditions is a well-researched task with solutions that provide excellent performance, the classification accuracy of such models decreases significantly when they are applied to emotional speech or in the presence of interference. Furthermore, deep models may require a large number of parameters, so constrained solutions are desirable for implementation on edge devices in Internet of Things systems for real-time detection. The aim of this paper is to propose a simple, constrained convolutional neural network for speaker recognition and to examine its robustness in emotional speech conditions. We examine three quantization methods for developing a constrained network: 8-bit floating-point (float8) format, ternary scalar quantization, and binary scalar quantization. The results are demonstrated on the recently recorded SEAC dataset.	en_US
dc.language.iso	en_US	en_US
dc.publisher	Molecular Diversity Preservation International	en_US
dc.title	Speaker Recognition Using Constrained Convolutional Neural Networks in Emotional Speech	en_US
dc.title.alternative	Entropy	en_US
dc.type	clanak-u-casopisu (journal article)	en_US
dc.description.version	publishedVersion	en_US
dc.identifier.doi	https://doi.org/10.3390/e24030414
dc.identifier.issn	1099-4300
dc.citation.volume	24
dc.citation.issue	3
dc.subject.keywords	speaker recognition	en_US
dc.subject.keywords	convolutional neural network	en_US
dc.subject.keywords	quantization	en_US
dc.subject.keywords	emotional speech	en_US
dc.type.mCategory	M22	en_US
dc.type.mCategory	openAccess	en_US
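The abstract above names two low-bit quantization schemes for constraining the convolutional network: binary and ternary scalar quantization of the weights (alongside a float8 format). The snippet below is a minimal, illustrative sketch of how such scalar weight quantizers are commonly implemented, not the authors' exact formulation; the mean-absolute-value scaling and the 0.7 * mean(|w|) threshold are generic assumptions chosen only for the example.

import numpy as np

def binarize(w):
    # Binary scalar quantization: map each weight to +alpha or -alpha,
    # with alpha taken as the mean absolute weight (an assumed, common scaling choice).
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

def ternarize(w, delta_factor=0.7):
    # Ternary scalar quantization: map weights to {-alpha, 0, +alpha}.
    # The threshold delta_factor * mean(|w|) is an assumed heuristic, not taken from the paper.
    delta = delta_factor * np.abs(w).mean()
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

# Example: quantize a random 3x3 convolution kernel with 16 output channels.
kernel = np.random.randn(3, 3, 1, 16).astype(np.float32)
w_bin = binarize(kernel)   # values in {-alpha, +alpha}
w_ter = ternarize(kernel)  # values in {-alpha, 0, +alpha}

Replacing full-precision weights with two or three representation levels in this way is what keeps the network small enough for the edge-device deployment scenario motivated in the abstract.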

