ELCVIA Electronic Letters on Computer Vision and Image Analysis https://elcvia.cvc.uab.cat/ Electronic Journal on Computer Vision and Image Analysis en-US Authors who publish with this journal agree to the following terms:<br /><ol type="a"><li>Authors retain copyright.</li><li>The texts published in this journal are – unless indicated otherwise – covered by the Creative Commons Spain <a href="http://creativecommons.org/licenses/by-nc-nd/4.0">Attribution-NonCommercial-NoDerivatives 4.0</a> licence. You may copy, distribute, transmit and adapt the work, provided you attribute it (authorship, journal name, publisher) in the manner specified by the author(s) or licensor(s). The full text of the licence can be consulted here: <a href="http://creativecommons.org/licenses/by-nc-nd/4.0">http://creativecommons.org/licenses/by-nc-nd/4.0</a>.</li><li>Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</li><li>Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See <a href="http://opcit.eprints.org/oacitation-biblio.html" target="_new">The Effect of Open Access</a>).</li></ol> elcvia@cvc.uab.cat (Electronic Letters on Computer Vision and Image Analysis) elcvia@cvc.uab.cat (ELCVIA) Tue, 01 Jun 2021 14:19:10 +0200 OJS 3.2.1.4 http://blogs.law.harvard.edu/tech/rss 60 Modelling and Analysis of Facial Expressions Using Optical Flow Derived Divergence and Curl Templates https://elcvia.cvc.uab.cat/article/view/v20-n2-anthwal Facial expressions are an integral part of non-verbal paralinguistic communication, as they provide cues
that are significant in perceiving one’s emotional state. The assessment of emotions through expressions is an active research domain in computer vision owing to its potential applications across many fields. In this work, an approach is presented in which facial expressions are modelled and analyzed with dense-optical-flow-derived divergence and curl templates that embody the ideal motion pattern of facial features during the unfolding of an expression on the face. Two classification schemes, based on a multi-class support vector machine and the k-nearest-neighbour rule, are employed for evaluation. Promising results, obtained from a comparative analysis of the proposed approach with state-of-the-art techniques on the Extended Cohn-Kanade database and with human cognition and the pre-trained Microsoft face application programming interface on the Karolinska Directed Emotional Faces database, validate the efficiency of the approach. Shivangi Anthwal Copyright (c) 2021 Shivangi Anthwal https://creativecommons.org/licenses/by-nc-nd/4.0 https://elcvia.cvc.uab.cat/article/view/v20-n2-anthwal Tue, 01 Jun 2021 00:00:00 +0200 Accuracy improvement of the inSAR quality-guided phase unwrapping based on a modified PDV map. https://elcvia.cvc.uab.cat/article/view/1220 <p class="AbstractBodytext"><span lang="EN-GB">In this paper, an accuracy improvement of the quality-guided phase unwrapping algorithm is proposed. Our proposal is based on a modified phase derivative variance (PDV) map, which provides more details on local variations, especially for important patterns such as fringes and edges; hence, distorted regions may be re-unwrapped according to this new, more reliable PDV. The proposed improvement is effective not only in accuracy but also in running time: the obtained results show that our proposal runs faster than a skillful optimization-based algorithm.
To demonstrate its effectiveness, experimental tests are carried out on simulated and real data, and the comparison is made under several relevant criteria.</span></p> Tarek Bentahar Copyright (c) 2021 Tarek Bentahar https://creativecommons.org/licenses/by-nc-nd/4.0 https://elcvia.cvc.uab.cat/article/view/1220 Wed, 18 Aug 2021 00:00:00 +0200 Underwater Acoustic Image Denoising Using Stationary Wavelet Transform and Various Shrinkage Functions https://elcvia.cvc.uab.cat/article/view/1360 <p>Underwater acoustic images are captured by sonar technology, which uses sound as its source. Noise in these images arises during acquisition; it is often multiplicative in nature and seriously degrades the visual quality of the images. Image denoising techniques that remove such noise generally employ linear or non-linear filters. In this paper, a wavelet-based denoising method is used to reduce the noise in the images. Each image is decomposed with the Stationary Wavelet Transform (SWT) into low- and high-frequency components. Shrinkage functions such as VisuShrink and SureShrink are used to select the threshold that removes undesirable signals from the low-frequency component, while high-frequency components such as edges and corners are retained. The inverse SWT is then used to reconstruct the denoised image by combining the modified low-frequency components with the high-frequency components.
The performance measure, Peak Signal-to-Noise Ratio (PSNR), is obtained for various wavelets, such as Haar, Daubechies and Coiflet, and for different thresholding methods.</p> Priyadharsini Ravisankar Copyright (c) 2021 Priyadharsini Ravisankar https://creativecommons.org/licenses/by-nc-nd/4.0 https://elcvia.cvc.uab.cat/article/view/1360 Tue, 14 Sep 2021 00:00:00 +0200 An Efficient BoF Representation for Object Classification https://elcvia.cvc.uab.cat/article/view/1403 The Bag-of-Features (BoF) approach has proved to yield better performance in patch-based object classification systems owing to its simplicity. However, the very large number of patch-based descriptors (such as scale-invariant feature transform and speeded-up robust features) extracted from images to create a BoF vector often leads to a huge computational cost and increased storage requirements. This paper demonstrates a two-stage approach to creating a discriminative and compact BoF representation for object classification. As a preprocessing stage to the codebook construction, ambiguous patch-based descriptors are eliminated using an entropy-based, one-pass feature selection approach, so that only high-quality descriptors are retained. As a post-processing stage, codewords that are not activated often enough across images are eliminated from the initially constructed codebook on the basis of statistical measures. Finally, each patch-based descriptor of an image is assigned to the closest codeword to create a histogram representation. A one-versus-all support vector machine is applied to classify the histogram representation. The proposed methods are evaluated on benchmark image datasets. Test results show that the proposed methods make the codebook more discriminative and compact in moderate-sized visual object classification tasks.
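The final step described in the BoF abstract above — assigning each patch descriptor to its closest codeword and accumulating a histogram — can be sketched as follows. This is a minimal NumPy illustration under our own naming conventions, not the authors' code:

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Build a Bag-of-Features histogram: assign every local descriptor
    to its nearest codeword (Euclidean distance) and L1-normalise the
    resulting codeword counts."""
    # Pairwise squared distances, shape (n_descriptors, n_codewords).
    dists = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assignments = dists.argmin(axis=1)           # hard assignment to codewords
    counts = np.bincount(assignments, minlength=codebook.shape[0])
    return counts / max(counts.sum(), 1)         # L1-normalised histogram
```

The resulting fixed-length histogram is the representation on which a one-versus-all support vector machine, as mentioned in the abstract, would be trained.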
Veerapathirapillai Vinoharan, Amirthalingam Ramanan Copyright (c) 2021 Veerapathirapillai Vinoharan, Amirthalingam Ramanan https://creativecommons.org/licenses/by-nc-nd/4.0 https://elcvia.cvc.uab.cat/article/view/1403 Thu, 16 Dec 2021 00:00:00 +0100 Deep Learning Based Models for Offline Gurmukhi Handwritten Character and Numeral Recognition https://elcvia.cvc.uab.cat/article/view/1282 <p class="Els-Affiliation">Over the last few years, several researchers have worked on handwritten character recognition and have proposed various techniques to improve the recognition performance for Indic and non-Indic scripts. Here, a Deep Convolutional Neural Network is proposed that learns deep features for offline Gurmukhi handwritten character and numeral recognition (HCNR). The proposed network works efficiently for training as well as testing and exhibits good recognition performance. Two primary datasets, comprising offline handwritten Gurmukhi characters and Gurmukhi numerals, have been employed in the present work. The testing accuracies achieved using the proposed network are 98.5% for characters and 98.6% for numerals.</p> Manoj Kumar Mahto, Karamjit Bhatia, Rajendra Kumar Sharma Copyright (c) 2022 Manoj Kumar Mahto, Karamjit Bhatia, Rajendra Kumar Sharma https://creativecommons.org/licenses/by-nc-nd/4.0 https://elcvia.cvc.uab.cat/article/view/1282 Tue, 18 Jan 2022 00:00:00 +0100
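The abstract above does not specify the network architecture, but the basic building blocks of any such convolutional network — convolution, a ReLU non-linearity, and max pooling — can be illustrated with a short NumPy sketch. This is a didactic toy under our own naming, not the authors' model:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation of a single-channel image."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with each image patch.
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    """Rectified linear unit, applied element-wise."""
    return np.maximum(x, 0.0)

def maxpool2(x):
    """Non-overlapping 2x2 max pooling (assumes even spatial dimensions)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

A recognition network of the kind described stacks many such convolution/ReLU/pooling layers and ends in a fully connected softmax classifier over the character or numeral classes.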