Author: David Berga; Xose R. Fernandez-Vidal; Xavier Otazu; V. Leboran; Xose M. Pardo
Title: Psychophysical evaluation of individual low-level feature influences on visual attention
Type: Journal Article
Year: 2019; Publication: Vision Research; Abbreviated Journal: VR
Volume: 154; Pages: 60-79
Keywords: Visual attention; Psychophysics; Saliency; Task; Context; Contrast; Center bias; Low-level; Synthetic; Dataset
Abstract: In this study we analyze the eye movement behavior elicited by low-level feature distinctiveness, using a dataset of synthetically generated image patterns. The design of the visual stimuli was inspired by those used in previous psychophysical experiments, namely free-viewing and visual search tasks, giving a total of 15 stimulus types divided according to the task and the feature to be analyzed. Our interest is to analyze the influence of low-level feature contrast between a salient region and the remaining distractors, characterizing fixation localization and the reaction time of landing inside the salient region. Eye-tracking data were collected from 34 participants viewing a 230-image dataset. Results show that saliency is predominantly and distinctively influenced by: 1. feature type, 2. feature contrast, 3. temporality of fixations, 4. task difficulty and 5. center bias. This experimentation proposes a new psychophysical basis for saliency model evaluation using synthetic images.
Notes: NEUROBIT; 600.128; 600.120
Call Number: Admin @ si @ BFO2019a; Serial: 3274
 

 
Author: David Geronimo; Angel Sappa; Antonio Lopez; Daniel Ponsa
Title: Pedestrian Detection Using AdaBoost Learning of Features and Vehicle Pitch Estimation
Type: Miscellaneous
Year: 2006; Publication: 6th IASTED International Conference on Visualization, Imaging and Image Processing; Abbreviated Journal: VIIP
Pages: 400-405
Keywords: ADAS; pedestrian detection; AdaBoost learning; pitch estimation; Haar wavelets; edge orientation histograms
Abstract: In this paper we propose a combination of different Haar filter sets and Edge Orientation Histograms (EOH) in order to learn a model for pedestrian detection. As we will show, the addition of EOH yields better ROCs than using Haar filters alone. A model consisting of discriminant features, selected by AdaBoost, is then applied to pedestrian-sized image windows in order to perform the classification. Additionally, taking into account the final application, a driver assistance system with real-time requirements, we propose a novel stereo-based camera pitch estimation to reduce the number of explored windows. With this approach the system can work on urban roads, as illustrated by current results.
Address: Palma de Mallorca (Spain)
Notes: ADAS
Call Number: ADAS @ adas @ GSL2006; Serial: 672
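The record above selects discriminant features with AdaBoost. A minimal sketch of that idea is discrete AdaBoost over decision stumps, where each boosting round picks the single feature/threshold with the lowest weighted error; the toy 2-D data and stump learner below are illustrative assumptions, not the paper's Haar/EOH features.

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=5):
    """Discrete AdaBoost with decision stumps. Each round selects the
    (feature, threshold, polarity) stump minimizing weighted error,
    which acts as a feature-selection step. Labels y are in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # per-sample weights
    ensemble = []                        # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol)
        err, j, thr, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)    # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)    # stump vote weight
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)           # re-weight hard samples
        w /= w.sum()
        ensemble.append((j, thr, pol, alpha))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of all selected stumps."""
    score = np.zeros(len(X))
    for j, thr, pol, alpha in ensemble:
        score += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.sign(score)
```

In a detector, the same loop would run over pools of Haar and EOH feature responses instead of raw coordinates.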
 

 
Author: Petia Radeva; Jordi Vitria; Fernando Vilariño; Panagiota Spyridonos; Fernando Azpiroz; Juan Malagelada; Fosca de Iorio; Anna Accarino
Title: Cascade analysis for intestinal contraction detection
Type: Patent
Year: 2009; Publication: US 2009/0284589 A1; Abbreviated Journal: USPO
Pages: 1-25
Abstract: A method and system for cascade analysis for intestinal contraction detection is provided, based on features extracted from image frames captured in vivo. The method and system also relate to the detection of turbid liquids in intestinal tracts, to the automatic detection of video image frames taken in the gastrointestinal tract whose field of view is obstructed by turbid media and, more particularly, to the extraction of image data obstructed by turbid media.
Corporate Author: US Patent Office
Publisher: US Patent Office
Notes: MILAB; OR; MV; SIAI
Call Number: IAM @ iam @ RVV2009; Serial: 1700
 

 
Author: Simone Balocco; Carlo Gatta; Oriol Pujol; J. Mauri; Petia Radeva
Title: SRBF: Speckle Reducing Bilateral Filtering
Type: Journal Article
Year: 2010; Publication: Ultrasound in Medicine and Biology; Abbreviated Journal: UMB
Volume: 36; Issue: 8; Pages: 1353-1363
Abstract: Speckle noise negatively affects medical ultrasound image shape interpretation and boundary detection. Speckle removal filters are widely used to selectively remove speckle noise without destroying important image features, so as to enhance object boundaries. In this article, a fully automatic bilateral filter tailored to ultrasound images is proposed. The edge preservation property is obtained by embedding noise statistics in the filter framework. Consequently, the filter is able to tackle the multiplicative noise behavior, modulating the smoothing strength with respect to local statistics. In silico experiments clearly showed that the speckle reducing bilateral filter (SRBF) has superior performance to most state-of-the-art filtering methods. The filter is tested on 50 in vivo US images and its influence on a segmentation task is quantified. The results on SRBF-filtered data sets show superior performance compared with oriented anisotropic diffusion filtered images. This improvement is due to the adaptive support of SRBF and the embedded noise statistics, which yield a more homogeneous smoothing. SRBF is a fully automatic, fast and flexible algorithm potentially suitable for a wide range of speckle noise sizes and different medical applications (IVUS, B-mode, 3-D matrix array US).
Notes: MILAB; HUPBA
Call Number: BCNPCL @ bcnpcl @ BGP2010; Serial: 1314
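SRBF builds on the classic bilateral filter, which can be sketched as below: each output pixel is a weighted mean of its neighbours, with weights falling off both with spatial distance and with intensity difference, so edges are preserved. The Gaussian kernels and fixed parameters here are generic assumptions; the paper's contribution, replacing the fixed range kernel with one driven by local speckle statistics, is not reproduced.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.2):
    """Classic bilateral filter (edge-preserving smoothing).
    sigma_s controls the spatial kernel, sigma_r the range (intensity)
    kernel; SRBF would adapt the latter to local noise statistics."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))   # spatial kernel
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))  # range kernel
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()     # normalized weighted mean
    return out
```

On a step edge the range kernel suppresses cross-edge neighbours, so the step survives while flat regions are smoothed.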
 

 
Author: Marina Alberti; Simone Balocco; Xavier Carrillo; Josefina Mauri; Petia Radeva
Title: Automatic non-rigid temporal alignment of IVUS sequences: method and quantitative validation
Type: Journal Article
Year: 2013; Publication: Ultrasound in Medicine and Biology; Abbreviated Journal: UMB
Volume: 39; Issue: 9; Pages: 1698-1712
Keywords: Intravascular ultrasound; Dynamic time warping; Non-rigid alignment; Sequence matching; Partial overlapping strategy
Abstract: Clinical studies on atherosclerosis regression/progression performed by intravascular ultrasound analysis would benefit from accurate alignment of sequences of the same patient before and after clinical interventions and at follow-up. In this article, a methodology for the automatic alignment of intravascular ultrasound sequences based on the dynamic time warping technique is proposed. The non-rigid alignment is adapted to the specific task by applying it to multidimensional signals describing the morphologic content of the vessel. Moreover, dynamic time warping is embedded into a framework comprising a strategy to address partial overlapping between acquisitions and a term that regularizes non-physiologic temporal compression/expansion of the sequences. Extensive validation is performed on both synthetic and in vivo data. The proposed method reaches alignment errors of approximately 0.43 mm for pairs of sequences acquired during the same intervention phase and 0.77 mm for pairs of sequences acquired at successive intervention stages.
Notes: MILAB
Call Number: Admin @ si @ ABC2013; Serial: 2313
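The core dynamic-time-warping recurrence behind this alignment can be sketched as follows. The paper's multidimensional morphologic signals are replaced here by plain 1-D sequences, and its partial-overlap strategy and regularization term are omitted; this is only the basic DP.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping: minimal cumulative cost of aligning
    sequence a with sequence b, allowing non-rigid temporal
    compression/expansion via the three DP transitions."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # a advances (compression of b)
                                 D[i, j - 1],      # b advances (expansion of b)
                                 D[i - 1, j - 1])  # both advance (match)
    return D[n, m]
```

Backtracking through `D` would recover the frame-to-frame correspondence; the paper additionally penalizes long runs of the non-diagonal moves to keep the warping physiologically plausible.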
 

 
Author: G. Zahnd; Simone Balocco; A. Serusclat; P. Moulin; M. Orkisz; D. Vray
Title: Progressive attenuation of the longitudinal kinetics in the common carotid artery: preliminary in vivo assessment
Type: Journal Article
Year: 2015; Publication: Ultrasound in Medicine and Biology; Abbreviated Journal: UMB
Volume: 41; Issue: 1; Pages: 339-345
Keywords: Arterial stiffness; Atherosclerosis; Common carotid artery; Longitudinal kinetics; Motion tracking; Ultrasound imaging
Abstract: Longitudinal kinetics (LOKI) of the arterial wall consists of the shearing motion of the intima-media complex over the adventitia layer, in the direction parallel to the blood flow, during the cardiac cycle. The aim of this study was to investigate the local variability of LOKI amplitude along the length of the vessel. Using a previously validated motion-estimation framework, 35 in vivo longitudinal B-mode ultrasound cine loops of healthy common carotid arteries were analyzed. Results indicated that LOKI amplitude is progressively attenuated along the length of the artery: it is larger in regions located on the proximal side of the image (i.e., toward the heart) and smaller in regions located on the distal side of the image (i.e., toward the head), with an average attenuation coefficient of -2.5 ± 2.0%/mm. Reported for the first time in this study, this phenomenon is likely to be of great importance in improving the understanding of atherosclerosis mechanisms, and it has the potential to be a novel index of arterial stiffness.
Notes: MILAB
Call Number: Admin @ si @ ZBS2014; Serial: 2556
 

 
Author: David Vazquez; Antonio Lopez
Title: Intrusion Classification in Intelligent Video Surveillance Systems
Type: Report
Year: 2008; Publication: Estudis d'Enginyeria Superior en Informática; Abbreviated Journal: UAB
Keywords: Human detection; Car detection; Intrusion detection
Abstract: An intelligent video surveillance (IVS) system is a camera-based installation able to process in real time the images coming from the cameras. The aim is to automatically warn about different events of interest at the moment they happen. The Daview system by Davantis is a commercial example of an IVS system. The problems addressed by any IVS system, including Daview, are so challenging that no IVS system is perfect, and they therefore need continuous improvement. Accordingly, this project studies different approaches to improving current Daview performance; in particular, we focus on improving its classification core. We present an in-depth study of the state of the art on IVS systems, as well as of how Daview works. Based on that knowledge, we propose four ways of improving Daview's classification capabilities: improving existing classifiers; improving the combination of existing classifiers; creating new classifiers; and creating new classifier-based architectures. Our main contribution has been the incorporation of state-of-the-art feature selection and machine learning techniques for the classification tasks, a viewpoint not fully addressed in the current Daview system. A comprehensive quantitative evaluation shows that one of our proposals clearly outperforms the overall performance of the current Daview system. In particular, the classification core that we finally propose consists of an AdaBoost one-against-all architecture that uses appearance and motion features already present in the current Daview system.
Address: Bellaterra, Spain
Conference: PFC
Notes: ADAS
Call Number: ADAS @ adas @ VL2008a; Serial: 1670
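The one-against-all architecture chosen above can be sketched generically: train one binary scorer per class (intruder type) and predict the class whose scorer gives the highest response. The trivial class-centroid scorer below is a stand-in assumption for the report's AdaBoost classifiers, kept only so the wrapper is runnable.

```python
import numpy as np

def centroid_scorer(X, t):
    """Illustrative binary learner (NOT the report's AdaBoost): scores a
    sample higher the closer it lies to the positive-class centroid.
    t holds labels in {+1, -1}."""
    mu_pos = X[t == 1].mean(axis=0)
    mu_neg = X[t == -1].mean(axis=0)
    return lambda Z: (((Z - mu_neg) ** 2).sum(axis=1)
                      - ((Z - mu_pos) ** 2).sum(axis=1))

def train_one_vs_all(X, y, train_binary):
    """One-against-all: one binary scorer per class, trained on
    this-class-vs-rest relabelings of the data."""
    classes = np.unique(y)
    scorers = [train_binary(X, np.where(y == c, 1, -1)) for c in classes]
    return classes, scorers

def predict_one_vs_all(model, X):
    """Predict the class whose scorer responds most strongly."""
    classes, scorers = model
    scores = np.column_stack([s(X) for s in scorers])
    return classes[np.argmax(scores, axis=1)]
```

Swapping `centroid_scorer` for a boosted classifier returning a real-valued margin gives exactly the AdaBoost one-against-all scheme the report describes.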
 

 
Author: Shifeng Zhang; Ajian Liu; Jun Wan; Yanyan Liang; Guogong Guo; Sergio Escalera; Hugo Jair Escalante; Stan Z. Li
Title: CASIA-SURF: A Dataset and Benchmark for Large-scale Multi-modal Face Anti-spoofing
Type: Journal Article
Year: 2020; Publication: IEEE Transactions on Biometrics, Behavior, and Identity Science; Abbreviated Journal: TTBIS
Volume: 2; Issue: 2; Pages: 182-193
Abstract: Face anti-spoofing is essential to protect face recognition systems from security breaches. Much of the progress in recent years has been made possible by the availability of face anti-spoofing benchmark datasets. However, existing face anti-spoofing benchmarks have a limited number of subjects (≤170) and modalities (≤2), which hinders the further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and modalities. Specifically, it consists of 1,000 subjects with 21,000 videos, and each sample has 3 modalities (i.e., RGB, Depth and IR). We also provide comprehensive evaluation metrics, diverse evaluation protocols, training/validation/testing subsets and a measurement tool, establishing a new benchmark for face anti-spoofing. Moreover, we present a novel multi-modal multi-scale fusion method as a strong baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality across different scales. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at https://sites.google.com/qq.com/face-anti-spoofing/welcome/challengecvpr2019?authuser=0
Notes: HuPBA; no proj
Call Number: Admin @ si @ ZLW2020; Serial: 3412
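The per-modality channel re-weighting mentioned in the baseline can be sketched in squeeze-and-excitation style: pool each channel to a descriptor, map descriptors to per-channel gates, and scale the channels by those gates. This is a simplified assumption about the mechanism; the paper's fusion operates across modalities and scales, and a learned bottleneck would replace the direct sigmoid used here.

```python
import numpy as np

def reweight_channels(feats):
    """Channel re-weighting sketch for one modality's feature map.
    feats: (C, H, W). Each channel is globally average-pooled to a
    descriptor, gated through a sigmoid, and rescaled by its gate, so
    informative channels are emphasized and weak ones suppressed."""
    desc = feats.mean(axis=(1, 2))           # squeeze: per-channel descriptor
    gates = 1.0 / (1.0 + np.exp(-desc))      # excitation: sigmoid gate
    return feats * gates[:, None, None]      # scale each channel
```

In the multi-modal setting, the gated maps of RGB, Depth and IR would then be concatenated (per scale) before the fused prediction.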
 

 
Author: David Masip; Agata Lapedriza; Jordi Vitria
Title: Boosted Online Learning for Face Recognition
Type: Journal Article
Year: 2009; Publication: IEEE Transactions on Systems, Man and Cybernetics part B; Abbreviated Journal: TSMCB
Volume: 39; Issue: 2; Pages: 530-538
ISSN: 1083-4419
Abstract: Face recognition applications commonly suffer from three main drawbacks: a reduced training set, information lying in high-dimensional subspaces, and the need to incorporate new people to recognize. In the recent literature, extending a face classifier to include new people in the model has been addressed using online feature extraction techniques, the most successful approaches being extensions of principal component analysis or linear discriminant analysis. In the current paper, a new online boosting algorithm is introduced: a face recognition method that extends a boosting-based classifier by adding new classes, while avoiding the need to retrain the classifier each time a new person joins the system. The classifier is learned using the multitask learning principle, where multiple verification tasks are trained together sharing the same feature space. The new classes are added taking advantage of the previously learned structure, so that adding new classes is not computationally demanding. The proposal has been experimentally validated on two facial data sets by comparing our approach with the current state-of-the-art techniques. The results show that the proposed online boosting algorithm fares better in terms of final accuracy. In addition, the global performance does not decrease drastically even when the number of classes of the base problem is multiplied by eight.
Notes: OR; MV
Call Number: BCNPCL @ bcnpcl @ MLV2009; Serial: 1155
 

 
Author: Fadi Dornaika; Bogdan Raducanu
Title: Three-Dimensional Face Pose Detection and Tracking Using Monocular Videos: Tool and Application
Type: Journal Article
Year: 2009; Publication: IEEE Transactions on Systems, Man and Cybernetics part B; Abbreviated Journal: TSMCB
Volume: 39; Issue: 4; Pages: 935-944
Abstract: Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low-quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods, initialization and tracking, to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation, such as telepresence, virtual reality and video games, can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
Notes: OR; MV
Call Number: BCNPCL @ bcnpcl @ DoR2009a; Serial: 1218