Editorial: Learning With Fewer Labels in Computer Vision
Liu, Li; Hospedales, Timothy; LeCun, Yann; Long, Mingsheng; Luo, Jiebo; Ouyang, Wanli; Pietikäinen, Matti; Tuytelaars, Tinne (2024-02-06)
L. Liu et al., "Editorial: Learning With Fewer Labels in Computer Vision," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 3, pp. 1319-1326, March 2024, doi: 10.1109/TPAMI.2023.3341723
https://rightsstatements.org/vocab/InC/1.0/
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:oulu-202404102636
Abstract
Undoubtedly, Deep Neural Networks (DNNs), from AlexNet to ResNet to the Transformer, have sparked revolutionary advances across diverse computer vision tasks. The scale of DNNs has grown exponentially with the rapid development of computational resources. Despite this tremendous success, DNNs, and especially the recent foundation models, typically depend on massive amounts of training data to achieve high performance, and they are brittle in that their performance can degrade severely under small changes in their operating environment. Collecting massive-scale training datasets is generally costly or even infeasible, as in certain fields only very limited examples, or none at all, can be gathered. Moreover, collecting, labeling, and vetting massive amounts of practical training data is difficult and expensive: it requires the painstaking effort of experienced human annotators or domain experts, and in many cases it is prohibitively costly or impossible for reasons such as privacy, safety, or ethics.
Collections
- Open access [37647]