Charles Sturt University Research Output Home
Augmented audio data in improving speech emotion classification tasks
Nusrat J. Shoumy, Li Minn Ang, D. M. Motiur Rahaman, Tanveer Zia, Kah Phooi Seng, Sabira Khatun
Computing, Mathematics and Engineering
Advanced Network Research Group (ANRG)
Cyber Security Research Group (CSRG)
Machine Vision and Digital Health (MaViDH) Research Group
Data Science and Engineering Research Unit
Universiti Malaysia Pahang
University of the Sunshine Coast
University of New South Wales
Research output: Book chapter/Published conference paper › Conference paper › peer-review
5 Citations (Scopus)
Fingerprint
Dive into the research topics of 'Augmented audio data in improving speech emotion classification tasks'. Together they form a unique fingerprint.
Keyphrases
Audio Data 100%
Audio Signal 50%
Augmentation Approach 50%
Augmentation Strategy 50%
Augmented Audio 100%
Classification Accuracy 50%
Classification Model 100%
Classification Task 100%
Classifier Training 50%
Convolutional Neural Network 50%
Deep Neural Network Classifier 50%
Emotion Classification 50%
High Performance 50%
Higher Classification 50%
K-nearest 50%
Main Concepts 50%
Naïve Bayes 50%
Performance Accuracy 50%
SAVEE 50%
Speech Emotion Classification 100%
Speech Signal 100%
Voice Emotion Recognition 50%
Computer Science
Classification Accuracy 50%
Classification Models 100%
Classification Task 100%
Convolutional Neural Network 100%
Large Data Set 50%
Naïve Bayes 50%