Applying Singular Value Decomposition (SVD) to CNNs: A Path Toward Lightweight and Efficient Architecture


Khawla Hussein Ali

Abstract

This paper investigates whether the Singular Value Decomposition (SVD) technique can reduce the computational workload of Convolutional Neural Networks (CNNs) without compromising image classification performance. Conventional model-compression methods are often unsuitable for deployment because they are costly to train, difficult to apply, or require additional resources. We applied a low-rank approximation to the CNN weights using SVD. ResNet-50 was evaluated on CIFAR-10, and EfficientNet-B0 on ImageNet; the assessment used accuracy, precision, recall, and F1-score as metrics. At a compression ratio of 0.5, SVD reduced the parameter count of ResNet-50 on CIFAR-10 by 41.7% while maintaining 95.4% accuracy. With aggressive compression at a ratio of 0.1, the model achieved an 88.9% reduction in parameters, but accuracy dropped to 72.9%. Inference speed was essentially unchanged (approximately 1.0x), and the model size remained approximately 89.96 MB. The uncompressed model achieved 10.4% accuracy before training. Compared with other techniques, SVD offers a straightforward yet practical route to improving CNN efficiency on small datasets without significantly compromising accuracy. These results demonstrate that SVD-based compression is a promising approach for deploying CNNs on resource-constrained systems: it retains nearly full accuracy with fewer parameters and has minimal impact on inference speed.
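The low-rank approximation described above can be sketched as follows. This is a minimal illustration of truncating the SVD of a single weight matrix at a given compression ratio; the layer shapes, the `svd_compress` helper, and the per-layer ratio schedule are assumptions for demonstration, not the paper's exact implementation.

```python
import numpy as np

def svd_compress(W, ratio):
    """Keep a fraction `ratio` of the singular values of W.

    Returns two low-rank factors A (m x k) and B (k x n) whose
    product approximates W, replacing m*n parameters with k*(m+n).
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    k = max(1, int(ratio * len(s)))   # number of singular values retained
    A = U[:, :k] * s[:k]              # fold singular values into the left factor
    B = Vt[:k, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))   # stand-in for a flattened conv/FC weight
A, B = svd_compress(W, ratio=0.5)     # the paper's 0.5 compression level
W_hat = A @ B                         # low-rank reconstruction used at inference

orig_params = W.size
new_params = A.size + B.size
print(f"params: {orig_params} -> {new_params} "
      f"({1 - new_params / orig_params:.1%} reduction)")
```

At inference time the single matrix multiply by `W` is replaced by two smaller multiplies by `B` and then `A`, which is where the parameter and compute savings come from when `k` is small relative to the original dimensions.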
