Classification is an important problem in a wide variety of applications, which makes it an open-ended forum for researchers across disciplines. In this paper, two approaches are proposed, both focused on the Nearest Regularized Subspace (NRS) classifier: (1) reducing the computational complexity of the NRS classifier, termed the Conditional Fast Nearest Regularized Subspace (CFNRS) classifier, and (2) incorporating a dissimilarity feature, termed the Conditional Dissimilarity-based Nearest Regularized Subspace (CDNRS) classifier. In the first approach, the result of a simple k-NN classifier is used as a condition for evaluating the NRS classifier. In the second approach, an intra-feature dissimilarity measure is used to create a pair of dictionaries containing a conditional binary matrix and a dissimilarity feature matrix. Each conditional binary matrix is a collection of distinct subspaces representing their respective classes. Incoming data are classified with the help of the dictionary, and the classification performance is validated against other state-of-the-art approaches. As Hyper-Spectral (HS) data exhibit strong correlation among spectral signatures, several HS datasets are included to validate our approach. The experimental study reveals the pros and cons of different classifiers along with the effectiveness of the proposed approach. The proposed conditional classifier is found to compete with other standard classifiers in terms of accuracy and computational complexity, particularly where features carry some form of sequence information. The suitability of the proposed classifier is also verified for both binary and multi-class classification problems. In addition to the experimental validation, the statistical significance of the proposed conditional measure is assessed against other standard state-of-the-art methods, and the proposed measure outperforms the others.
The experimental results reveal the importance of dissimilarity features for classification and also suggest incorporating a simple classifier, such as k-NN, before any other classifier. This is mainly meant to reduce the computational complexity of the selected classifier without compromising accuracy.
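The conditional idea described above can be illustrated with a minimal sketch: a cheap k-NN vote is run first, and the expensive classifier (e.g. NRS) is invoked only when the vote is not unanimous. This is an illustrative assumption about the gating rule, not the paper's exact CFNRS procedure; the function names (`knn_gate`, `conditional_classify`) and the unanimity criterion are hypothetical.

```python
import numpy as np

def knn_gate(X_train, y_train, x, k=3):
    """Plain k-NN vote on sample x.

    Returns (label, confident), where confident is True when all k
    nearest neighbours agree, so the expensive classifier can be
    skipped for this sample. (Hypothetical gating rule, for
    illustration only.)
    """
    d = np.linalg.norm(X_train - x, axis=1)      # Euclidean distances
    nn_labels = y_train[np.argsort(d)[:k]]       # labels of k nearest
    labels, counts = np.unique(nn_labels, return_counts=True)
    return labels[np.argmax(counts)], counts.max() == k

def conditional_classify(X_train, y_train, x, expensive_clf, k=3):
    """Run the cheap k-NN gate first; fall back to the expensive
    classifier only when the neighbourhood vote is ambiguous."""
    label, confident = knn_gate(X_train, y_train, x, k)
    if confident:
        return label
    return expensive_clf(x)
```

With this gate, samples deep inside a class cluster never reach the expensive classifier, which is the source of the computational savings claimed for the conditional scheme.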