| [18] | Kodak dataset | Adaptive block size selection and DCT-SVD hybrid | DCT, SVD, and adaptive processing | High compression and good quality | Complexity in hybridization | Adaptive hybridization |
| [19] | UCID dataset | Wavelet transform | Wavelet transform | Multiresolution representation | Effective only for certain image types | Improved wavelet selection |
| [20] | CALTECH dataset | Huffman coding | Huffman coding (see the Huffman sketch after this table) | No quality loss | Limited compression ratio | Enhanced entropy coding |
| [21] | ImageNet dataset | DCT-based compression | Discrete cosine transform | Established standard | Lossy compression | Improved quantization |
| [22] | Custom dataset | Iterated function system | Fractal encoding | Good compression | Iteration limits | Adaptive fractal generation |
| [23] | MNIST dataset | DCT-DWT hybrid | DCT and DWT | Multifrequency representation | High computational cost | Improved parallel processing |
| [24] | COCO dataset | Singular value decomposition | Singular value decomposition (see the SVD sketch after this table) | Noise robustness | Information loss from singular value truncation | Adaptive truncation threshold |
| [25] | CIFAR-10 dataset | Neural network-based approach | Neural networks | Adaptive learning | Training complexity | Improved model architecture |
| [26] | ImageNet dataset | Contextual analysis | Contextual processing | Improved quality | Complexity | Efficient context modeling |
| [27] | Medical images | Adaptive block size selection and transform coding | DCT and Huffman coding | Lossless compression | Limited to medical images | Improved coding strategies |
| [28] | Custom dataset | Vector quantization | Vector quantization | High compression ratios | Information loss | Enhanced vector codebooks |
| [29] | COCO dataset | Adaptive processing based on content | DCT and adaptive strategies | Improved quality and efficient compression | Complexity in content analysis | Enhanced adaptive strategies |
| [30] | ImageNet dataset | Pyramid-based compression | Pyramid transform | Multiresolution representation | Complexity | Optimized pyramid levels |
| [31] | Kodak dataset | Progressive compression approach | DCT and SVD | Stepwise quality enhancement | Progressive transmission complexity | Improved transmission order |
| [32] | CALTECH dataset | Block-based processing and Huffman coding | Block processing and Huffman coding | Balanced quality compression | Block artifacts | Enhanced block processing |
| [33] | ImageNet dataset | Simultaneous compression and encryption | DCT and encryption techniques | Secure compression | Increased complexity | Improved encryption algorithms |
| [34] | Custom dataset | Arithmetic coding | Arithmetic coding | High compression and lossless compression | Complexity | Enhanced probability modeling |
| [35] | CIFAR-10 dataset | DCT–neural network hybrid | DCT and neural networks | Adaptive compression and improved quality | Training complexity | Enhanced training strategies |
| [36] | COCO dataset | Wavelet transform | Wavelet transform | Multifrequency representation | Complexity | Enhanced transform selection |
| [37] | Custom dataset | Contextual Huffman coding | Contextual analysis and Huffman coding | Improved compression | Complexity | Enhanced context modeling |
| [38] | ImageNet dataset | Multiresolution encoding | Discrete wavelet transform | Progressive quality and multiresolution | Complexity | Adaptive wavelet selection |
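Several of the entries above, e.g. [18], [24], and [31], rely on truncating the singular value decomposition to discard low-energy components of an image or of its DCT coefficient blocks. The following is a minimal sketch of rank-k SVD truncation on a single grayscale array; it is written against NumPy, the rank `k` and the random test image are illustrative choices, and it does not reproduce the implementation of any cited work.

```python
import numpy as np

def svd_compress(image: np.ndarray, k: int):
    """Rank-k approximation of a 2-D grayscale image via truncated SVD."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    # Keep only the k largest singular values and their vectors.
    approx = (U[:, :k] * s[:k]) @ Vt[:k, :]
    # Nominal storage: k columns of U, k rows of Vt, and k singular values.
    m, n = image.shape
    ratio = (m * n) / (k * (m + n + 1))
    return approx, ratio

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((256, 256))          # stand-in for a real test image
    rec, ratio = svd_compress(img, k=32)
    err = np.linalg.norm(img - rec) / np.linalg.norm(img)
    print(f"compression ratio ~{ratio:.1f}:1, relative reconstruction error {err:.3f}")
```

Lowering `k` raises the compression ratio at the cost of reconstruction error, which is the trade-off the adaptive truncation thresholds proposed in [24] aim to balance.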
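Huffman coding likewise recurs in several of the surveyed pipelines ([20], [27], [32], [37]), typically as the final entropy-coding stage after transformation and quantization. The sketch below builds a Huffman code table for a byte string using only Python's standard library; the byte-level symbol alphabet and the sample input are assumptions for illustration rather than details of the cited methods.

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Build a Huffman code table (symbol -> bit string) for the bytes in `data`."""
    freq = Counter(data)
    # Each heap entry: (subtree frequency, tie-breaker, {symbol: partial code}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                        # degenerate single-symbol input
        return {sym: "0" for sym in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)       # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}        # left branch gets 0
        merged.update({s: "1" + c for s, c in t2.items()})  # right branch gets 1
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

if __name__ == "__main__":
    sample = b"abracadabra"
    codes = huffman_codes(sample)
    encoded = "".join(codes[b] for b in sample)
    print(codes)
    print(f"{len(encoded)} bits vs {8 * len(sample)} bits uncompressed")
```

Because code lengths follow symbol frequencies, the achievable compression ratio is bounded by the source entropy, which is the limitation noted for [20] and the motivation for the contextual modeling explored in [37].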