Parameter reduction in deep learning and classification
Date
2020-02-20
Authors
Browne, David
Publisher
University College Cork
Abstract
The goal of this thesis is to develop methods that reduce model and problem complexity in classification tasks. Whether the task uses traditional or deep learning, decreasing complexity greatly improves efficiency and also regularizes the models. In traditional machine learning, high dimensionality can cause models to over-fit the training data and hence generalize poorly, while in deep learning, neural networks have been shown to achieve state-of-the-art results, especially in image recognition, yet in their current state they cannot be easily deployed on memory-restricted Internet-of-Things devices.
Although much work has been carried out on dimensionality reduction, the first part of our work focuses on using dominancy between features with the aim of selecting a relevant subset of informative features. We propose three variations, with different benefits, including fast filter feature selection and a hybrid filter-wrapper approach. In the second part, dedicated to deep learning, our work focuses on pruning methods to extract a much more efficient neural network.
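The two families of techniques named above can be sketched generically. The snippet below is an illustrative stand-in, not the thesis's actual dominancy-based criteria or pruning method: it shows a simple correlation-based filter for feature selection and magnitude-based weight pruning, both in plain NumPy.

```python
import numpy as np

def filter_select(X, y, k):
    """Filter feature selection sketch: rank features by absolute
    Pearson correlation with the label and keep the top-k indices.
    (A generic filter score; the thesis uses dominancy-based criteria.)"""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    scores = np.abs(Xc.T @ yc) / np.where(denom == 0, 1.0, denom)
    return np.argsort(scores)[::-1][:k]

def magnitude_prune(W, sparsity):
    """Pruning sketch: zero out the smallest-magnitude weights so that
    roughly `sparsity` of the entries become zero."""
    thresh = np.quantile(np.abs(W).ravel(), sparsity)
    return np.where(np.abs(W) < thresh, 0.0, W)

# Toy data: feature 0 tracks the label, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 100).astype(float)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),
                     rng.standard_normal(100)])
print(filter_select(X, y, 1))          # the informative feature ranks first
print(magnitude_prune(rng.standard_normal((4, 4)), 0.5))
```

In a real pipeline the filter score would feed a wrapper stage (the hybrid variant), and pruning would be interleaved with fine-tuning rather than applied once.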
We show that our proposed techniques outperform previous state-of-the-art methods across the different classification areas on a number of benchmark datasets, using various classifiers and neural networks.
Keywords
Classification, AlexNet, VGG16, CIFAR, Deep learning, Feature selection, Network pruning, Supervised learning, Unsupervised Kmeans, Machine learning, Random forest, Support vector machines, Image recognition, Microarray data, Gene selection, Credit scoring, High-dimensional data
Citation
Browne, D. 2020. Parameter reduction in deep learning and classification. PhD Thesis, University College Cork.