|
Agata Lapedriza. (2009). Multitask Learning Techniques for Automatic Face Classification (Jordi Vitria, & David Masip, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Automatic face classification is currently a popular research area in Computer Vision. It involves several subproblems, such as subject recognition, gender classification or subject verification.
Current automatic face classification systems need a large amount of training data to learn a task robustly. However, collecting labeled data is usually difficult. For this reason, research on methods that can learn from small training sets is essential.
This dependency on abundant training data is not so evident in human learning. We are able to learn from a very small number of examples because we additionally use prior knowledge when learning a new task. For example, we frequently find patterns and analogies in other domains and reuse them in new situations, or exploit training data from other experiences.
In computer science, Multitask Learning is a Machine Learning approach that studies this idea of knowledge transfer among different tasks in order to overcome the effects of the small sample size problem.
This thesis explores, proposes and tests several Multitask Learning methods developed specifically for face classification. Moreover, it presents two further contributions that deal with the small sample size problem outside the Multitask Learning context. The first is a method to extract external face features, to be used as an additional information source in automatic face classification problems. The second is an empirical study of the most suitable face image resolution for automatic subject recognition.
|
|
|
Agata Lapedriza. (2005). Face Classification using External Face Features.
|
|
|
Agata Lapedriza, David Masip, & D. Sanchez. (2014). Emotions Classification using Facial Action Units Recognition. In 17th International Conference of the Catalan Association for Artificial Intelligence (Vol. 269, pp. 55–64).
Abstract: In this work we build a system for automatic emotion classification from image sequences. We analyze subtle changes in facial expressions by detecting a subset of 12 representative facial action units (AUs). We then classify emotions based on the output of these AU classifiers, i.e. the presence or absence of AUs. We base the AU classification on a set of spatio-temporal geometric and appearance features for facial representation, fusing them within the emotion classifier. A decision tree is trained for emotion classification, making the resulting model easy to interpret by capturing the combinations of AU activations that lead to a particular emotion. On the Cohn-Kanade database, the proposed system classifies 7 emotions with a mean accuracy of nearly 90%, attaining recognition accuracy similar to that of non-interpretable models that are not based on AU detection.
|
|
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2008). Subject Recognition Using a New Approach for Feature Extraction. In 3rd International Conference on Computer Vision Theory and Applications (Vol. 2, pp. 61–66).
|
|
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2008). On the Use of Independent Tasks for Face Recognition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 1–6).
|
|
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2007). A Hierarchical Approach for Multi-task Logistic Regression. In J. Marti et al. (Eds.), 3rd Iberian Conference on Pattern Recognition and Image Analysis (Vol. 4478, pp. 258–265). LNCS.
|
|
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2006). Face Verification using External Features.
|
|
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2006). On the Use of External Face Features for Identity Verification. Journal of Multimedia, 1(4), 11–20.
Abstract: In general, automatic face classification applications capture images in natural environments. In these cases, performance is affected by variations in facial images related to illumination, pose, occlusion or expression. Most existing face classification systems use only the information from the internal features, composed of the eyes, nose and mouth, since these are more difficult to imitate. Nevertheless, many applications not related to security are being developed nowadays, and in these cases the information located in the head, chin or ear zones (the external features) can be useful to improve current accuracies. However, the lack of a natural alignment in these areas makes it difficult to extract these features with classic bottom-up methods. In this paper, we propose a complete scheme based on a top-down reconstruction algorithm to extract the external features of face images. To test our system we have performed face verification experiments on public databases, given that identity verification is a general task with many real-life applications. We have considered uniformly illuminated images, images with occlusions and images with strong local changes in illumination, and the results show that the information contributed by the external features can be useful for verification purposes, especially when faces are partially occluded.
Keywords: Face Verification, Computer Vision, Machine Learning
|
|
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2005). Are external face features useful for automatic face classification?
|
|
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2005). The contribution of external features to face recognition. In Pattern Recognition and Image Analysis (IbPRIA 2005) (Vol. 3523, pp. 537–544). LNCS.
|
|
|
Agata Lapedriza, Jaume Garcia, Ernest Valveny, Robert Benavente, Miquel Ferrer, & Gemma Sanchez. (2008). A project-based learning experience in the field of computing.
|
|
|
Agata Lapedriza, & Jordi Vitria. (2005). Experimental Study of the Usefulness of External Face Features for Face Classification. In Artificial Intelligence Research and Development (pp. 99–106). IOS Press.
|
|
|
Agata Lapedriza, Santiago Segui, David Masip, & Jordi Vitria. (2008). A Sparse Bayesian Approach for Joint Feature Selection and Classifier Learning. Pattern Analysis and Applications, Special Issue on Non-Parametric Distance-Based Classification Techniques and Their Applications, 299–308.
|
|
|
Agnes Borras. (2009). Contributions to the Content-Based Image Retrieval Using Pictorial Queries (Josep Llados, Ed.). Ph.D. thesis, Ediciones Graficas Rey, Bellaterra.
Abstract: The broad access to digital cameras, personal computers and the Internet has led to the generation of large volumes of data in digital form. To make effective use of this huge amount of data, we need automatic tools that allow the retrieval of relevant information. Image data is a particular type of information that requires specific techniques of description and indexing. The computer vision field that studies this kind of technique is called Content-Based Image Retrieval (CBIR). Instead of using text-based descriptions, a CBIR system relies on properties inherent in the images themselves. Hence, the feature-based description provides a universal means of image expression, in contrast to the more than 6000 languages spoken in the world.
Nowadays, CBIR is a dynamic research focus that has produced important applications for many professional groups. The potential fields of application can be as diverse as the medical domain, crime prevention, protection of intellectual property, journalism, graphic design, web search, preservation of cultural heritage, etc.
The definition of the user's role is a key point in the development of a CBIR application. The user is in charge of formulating the queries from which the images are retrieved. We have centered our attention on image retrieval techniques that use queries based on pictorial information. We have identified a taxonomy composed of four main query paradigms: query-by-selection, query-by-iconic-composition, query-by-sketch and query-by-paint. Each of these paradigms allows a different degree of user expressivity. From a simple image selection to a complete painting of the query, the user takes control of the input to the CBIR system.
Throughout the chapters of this thesis we have analyzed the influence that each query paradigm imposes on the internal operations of a CBIR system. Moreover, we have proposed a set of contributions that we have exemplified in the context of a final application.
|
|
|
Agnes Borras. (2002). High-Level Clothes Description Based on Colour-Texture Features.
|
|