Algorithm and Data Imbalance: Implementing the New IIL Benchmarks

Table of Links

- Abstract and 1 Introduction
- Related works
- Problem setting
- Methodology
- 4.1. Decision boundary-aware distillation
- 4.2. Knowledge consolidation
- Experimental results and 5.1. Experiment Setup
- 5.2. Comparison with SOTA methods
- 5.3. Ablation study
- Conclusion and future work, and References

Supplementary Material

- Details of the theoretical analysis on KCEMA mechanism in IIL
- Algorithm overview
- Dataset details
- Implementation details
- Visualization of dusted input images
- More experimental results

8. Algorithm overview

The whole process of the proposed method is illustrated in Algorithm 1. Code will be released to the public upon publication. (An illustrative, unofficial sketch of this training loop is appended at the end of this article.)

9. Dataset details

Corresponding to the definition of our new incremental instance learning (IIL) task, we reorganize the public datasets CIFAR-100 [11] and ImageNet-100 [24] to establish the benchmark for this research topic.

(Figures showing the per-class sample distributions of the base and incremental datasets are omitted here.)

As can be seen, the base dataset has a more balanced sample distribution: the largest class, class 6, has 279 images, and the smallest class, class 29, has 215 images. Incremental dataset “D5” has the most imbalanced sample counts across classes, with at most 40 images in class 58 and as few as 12 images in class 76. That is, the highest imbalance ratio is 3.33 : 1. (A small sketch of how such a split can be reproduced is appended at the end of this article.)

:::info
Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Chengjie Wang, Tencent Youtu Lab.
:::

:::info
This paper is available on arXiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.
:::
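
Since Algorithm 1 itself is not reproduced in this excerpt, the following is a minimal, unofficial sketch of what one incremental phase could look like, assuming only what the section titles state: the new data is learned with a distillation term against the frozen previous model (Section 4.1, decision boundary-aware distillation, simplified here to a plain KL term), and the deployed model is obtained by EMA-style knowledge consolidation (Section 4.2 / KCEMA). All names such as `iil_phase`, `db_aware_distill_loss`, `ema_consolidate`, `alpha`, and `momentum` are hypothetical and are not the authors' released code.

```python
# Unofficial sketch of one IIL phase (Algorithm 1 is not shown in this excerpt).
import copy
import torch
import torch.nn.functional as F


def db_aware_distill_loss(new_logits, old_logits, temperature=2.0):
    # Placeholder distillation term: KL divergence between softened predictions.
    # The paper's decision boundary-aware weighting is NOT reproduced here.
    p_old = F.softmax(old_logits / temperature, dim=1)
    log_p_new = F.log_softmax(new_logits / temperature, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * temperature ** 2


@torch.no_grad()
def ema_consolidate(teacher, student, momentum=0.999):
    # Knowledge consolidation sketched as an exponential moving average of weights.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)


def iil_phase(old_model, new_loader, lr=0.01, alpha=1.0, momentum=0.999, device="cpu"):
    """Run one incremental phase on new data (e.g. D1 ... D5) and return the teacher."""
    old_model = old_model.to(device).eval()       # frozen model from the previous phase
    student = copy.deepcopy(old_model).train()    # trainable copy of the old model
    teacher = copy.deepcopy(old_model).eval()     # consolidated model to be deployed
    optimizer = torch.optim.SGD(student.parameters(), lr=lr)

    for images, labels in new_loader:
        images, labels = images.to(device), labels.to(device)
        logits = student(images)
        with torch.no_grad():
            old_logits = old_model(images)
        loss = F.cross_entropy(logits, labels) + alpha * db_aware_distill_loss(logits, old_logits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        ema_consolidate(teacher, student, momentum)  # fold new knowledge into the teacher
    return teacher
```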
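
The reported 3.33 : 1 ratio in Section 9 follows directly from the per-class extremes of D5 (40 / 12 ≈ 3.33). As a purely illustrative aid, the snippet below shows one way a base set and several imbalanced incremental sets could be drawn from CIFAR-100-style labels and how the imbalance ratio is measured; the sampling procedure, helper names (`split_incremental`, `imbalance_ratio`), and per-class bounds are assumptions, not the released benchmark script.

```python
# Hypothetical illustration of imbalanced incremental splits; not the authors' script.
import random
from collections import Counter, defaultdict


def split_incremental(labels, base_per_class=(215, 279), inc_per_class=(12, 40),
                      num_inc=5, seed=0):
    """Group sample indices by class, draw a roughly balanced base set,
    then draw `num_inc` incremental sets with random (imbalanced) per-class sizes."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)

    base, incs = [], [[] for _ in range(num_inc)]
    for y, idxs in by_class.items():
        rng.shuffle(idxs)
        n_base = rng.randint(*base_per_class)
        base.extend(idxs[:n_base])
        rest = idxs[n_base:]
        for d in incs:
            n = min(rng.randint(*inc_per_class), len(rest))
            d.extend(rest[:n])
            rest = rest[n:]
    return base, incs


def imbalance_ratio(labels, subset):
    # Ratio between the largest and smallest per-class sample counts in a subset.
    counts = Counter(labels[i] for i in subset)
    return max(counts.values()) / min(counts.values())


if __name__ == "__main__":
    # Fake CIFAR-100-like labels: 100 classes x 500 training images each.
    labels = [c for c in range(100) for _ in range(500)]
    base, incs = split_incremental(labels)
    print("base imbalance ratio:", round(imbalance_ratio(labels, base), 2))
    print("D5 imbalance ratio:  ", round(imbalance_ratio(labels, incs[-1]), 2))
    print("reported D5 extremes give", round(40 / 12, 2), ": 1")  # 3.33 : 1
```

With the assumed per-class bounds of 12 to 40 samples, running the script prints a D5 ratio close to the reported 3.33 : 1 upper bound, while the base split stays much closer to balanced.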