Publicacions CVC
Abstract
We present the results of recent challenges in Automated Computer Vision (AutoCV, renamed here for clarity AutoCV1 and AutoCV2, 2019), which are part of a series of challenges on Automated Deep Learning (AutoDL). These two competitions aimed to find fully automated solutions for classification tasks in computer vision, with an emphasis on any-time performance. The first competition was limited to image classification, while the second included both images and videos. Our design required participants to submit their code to a challenge platform for blind testing on five datasets, covering both training and testing, without any human intervention whatsoever. Winning solutions adopted deep learning techniques based on already published architectures, such as AutoAugment, MobileNet, and ResNet, to reach state-of-the-art performance within the time budget of the challenge (only 20 minutes of GPU time). The novel contributions include strategies to deliver good preliminary results at any time during the learning process, so that a method can be stopped early and still perform well. This feature is key to the adoption of such techniques by data analysts who want to obtain preliminary results on large datasets quickly and to speed up the development process. The soundness of our design was verified in several respects: (1) little overfitting to the online leaderboard, which provided feedback on 5 development datasets, was observed compared to the final blind testing on the 5 (separate) final test datasets, suggesting that winning solutions might generalize to other computer vision classification tasks; (2) error bars on the winners' performance allow us to say with confidence that they performed significantly better than the baseline solutions we provided; (3) the ranking of participants according to the any-time metric we designed, namely the Area under the Learning Curve, differed from that of the fixed-time metric, i.e. AUC at the end of the fixed time budget. We released all winning solutions under open-source licenses. At the end of the AutoDL challenge series, all challenge data will be made publicly available, providing a collection of uniformly formatted datasets that can serve to conduct further research, particularly on meta-learning.
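The abstract's any-time metric, the Area under the Learning Curve, rewards methods that reach good scores early within the time budget rather than only at the end. The following is a minimal illustrative sketch of such a metric, not the challenge's exact protocol: the normalization of time, the linear interpolation between checkpoints, and the carrying of the last score to the end of the budget are all assumptions made here for illustration.

```python
def area_under_learning_curve(timestamps, scores, time_budget):
    """Area under a learning curve of (time, score) checkpoints.

    timestamps  -- seconds at which the method produced a scored prediction
    scores      -- evaluation score (e.g. AUC) at each timestamp
    time_budget -- total time allowed, e.g. 20 min of GPU time = 1200 s

    Time is normalized to [0, 1]; the curve starts at score 0 and the
    last score is carried to the end of the budget (an assumption).
    """
    points = [(0.0, 0.0)]
    for t, s in zip(timestamps, scores):
        t_clipped = min(max(t, 0.0), time_budget)
        points.append((t_clipped / time_budget, s))
    points.append((1.0, scores[-1]))  # hold the final score until the budget ends
    # Trapezoidal integration over the normalized-time axis.
    area = 0.0
    for (t0, s0), (t1, s1) in zip(points, points[1:]):
        area += (t1 - t0) * (s0 + s1) / 2.0
    return area
```

Under this sketch, a method that reaches a given score early in the budget earns a larger area than one that reaches the same score near the deadline, which is exactly why, as the abstract notes, the any-time ranking can differ from the fixed-time ranking.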