Table of Links

Abstract and 1. Introduction

2 Concepts in Pretraining Data and Quantifying Frequency

3 Comparing Pretraining Frequency & “Zero-Shot” Performance and 3.1 Experimental Setup

3.2 Result: Pretraining Frequency is Predictive of “Zero-Shot” Performance

4 Stress-Testing the Concept Frequency-Performance Scaling Trend and 4.1 Controlling for Similar Samples in Pretraining and Downstream Data

4.2 Testing Generalization to Purely Synthetic Concept and Data Distributions

5 Additional Insights from Pretraining Concept Frequencies

6 Testing the Tail: Let It Wag!

7 Related Work

8 Conclusions and Open Problems, Acknowledgements, and References

Part I: Appendix

A. Concept Frequency is Predictive of Performance Across Prompting Strategies

B. Concept Frequency is Predictive of Performance Across Retrieval Metrics

C. Concept Frequency is Predictive of Performance for T2I Models

D. Concept Frequency is Predictive of Performance across Concepts only from Image and Text Domains

E. Experimental Details

F. Why and How Do We Use RAM++?

G. Details about Misalignment Degree Results

H. T2I Models: Evaluation

I. Classification Results: Let It Wag!

5 Additional Insights from Pretraining Concept Frequencies

We now present notable observations concerning the distribution of downstream concept frequencies across the text, image, and text-image matched modalities of pretraining datasets.

Finding 1: Pretraining Datasets Exhibit a Long-Tailed Concept Distribution. Our analysis in Fig. 5 reveals an extremely long-tailed distribution of concept frequencies in pretraining datasets, with over two-thirds of concepts occurring at almost negligible frequencies relative to the size of the datasets. Our observations support the findings of past work that has noted the long-tailed distribution of large-scale language datasets [25, 88, 136]. Given the log-linear trend we observed, this distribution translates directly into disparities in downstream performance.

Finding 2: Misalignment Between Concepts in Image-Text Pairs. We investigated the alignment of concepts within paired pretraining image-text data, where perfect alignment would mean that every image-text pair contains the same concepts in both the image and its caption. Previous studies have qualitatively discussed the problem of misalignment in large image-text datasets [75, 124, 76]. Our analysis lets us quantify this misalignment degree: for each image-text pair in the pretraining dataset, we find the concepts matched to the image and to the text caption independently. If the independent image and text hits share no concept, we count that pair as misaligned (the detailed algorithm is provided in Appx. G; a minimal sketch appears at the end of this section). Tab. 3 shows the high degree of misalignment in all image-text pairs. To the best of our knowledge, this is the first attempt to explicitly quantify the degree of misalignment in pretraining image-text datasets. We release the precise misaligned image-text pairs in the pretraining datasets to enable better data curation.

Finding 3: Concept Frequencies Across Datasets Are Correlated. Despite vast differences in the size (ranging from 3M to 400M samples) and curation strategies of the datasets analyzed, we discovered a surprisingly high correlation in concept frequencies across them, as presented in Tab. 4 (a sketch of how such a correlation can be computed follows below). This consistency suggests that the internet, as the common source of these datasets, naturally exhibits a long-tailed distribution, influencing any dataset derived from it to also display similar long-tailed behavior. This result inspired the “Let It Wag!” dataset.
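To make the tail measurement in Finding 1 concrete, here is a minimal sketch of how one could compute the fraction of concepts sitting in the negligible-frequency tail. The `rel_threshold` cutoff and the function name are illustrative assumptions, not the paper's definition of "negligible."

```python
import numpy as np

def tail_fraction(concept_counts: dict, dataset_size: int,
                  rel_threshold: float = 1e-6) -> float:
    """Fraction of concepts whose pretraining frequency is negligible
    relative to the dataset size (cutoff is a hypothetical choice)."""
    counts = np.array(list(concept_counts.values()))
    # A concept is "in the tail" if it appears in fewer than
    # rel_threshold * dataset_size pretraining samples.
    return float(np.mean(counts < rel_threshold * dataset_size))
```

Per Finding 1, a measurement of this kind yields a fraction above two-thirds for the pretraining datasets analyzed.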
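The misalignment check in Finding 2 reduces to a per-pair set-intersection test. Below is a minimal sketch, assuming the image-side and text-side concept hits have already been matched independently (e.g., via RAM++ for images, per Appx. F); the names here are illustrative, and Appx. G gives the exact algorithm.

```python
def misalignment_degree(image_hits: list, text_hits: list) -> float:
    """Fraction of image-text pairs counted as misaligned.

    image_hits[i] / text_hits[i]: sets of concepts independently
    matched to the image / caption of the i-th pretraining pair.
    """
    assert len(image_hits) == len(text_hits)
    misaligned = sum(
        1 for img_concepts, txt_concepts in zip(image_hits, text_hits)
        if not (img_concepts & txt_concepts)  # no shared concept
    )
    return misaligned / len(image_hits)
```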
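Finding 3's cross-dataset consistency can be checked by correlating two datasets' concept-frequency tables over their shared concept vocabulary. A sketch, assuming Spearman rank correlation as the measure (Tab. 4 reports the paper's actual numbers):

```python
from scipy.stats import spearmanr

def frequency_correlation(freq_a: dict, freq_b: dict) -> float:
    """Rank correlation between two concept-frequency tables,
    computed over the concepts present in both datasets."""
    shared = sorted(set(freq_a) & set(freq_b))
    rho, _ = spearmanr([freq_a[c] for c in shared],
                       [freq_b[c] for c in shared])
    return rho
```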
:::info
Authors:

(1) Vishaal Udandarao, Tübingen AI Center, University of Tübingen, University of Cambridge, and equal contribution;

(2) Ameya Prabhu, Tübingen AI Center, University of Tübingen, University of Oxford, and equal contribution;

(3) Adhiraj Ghosh, Tübingen AI Center, University of Tübingen;

(4) Yash Sharma, Tübingen AI Center, University of Tübingen;

(5) Philip H.S. Torr, University of Oxford;

(6) Adel Bibi, University of Oxford;

(7) Samuel Albanie, University of Cambridge and equal advising, order decided by a coin flip;

(8) Matthias Bethge, Tübingen AI Center, University of Tübingen and equal advising, order decided by a coin flip.
:::

:::info
This paper is available on arXiv under a CC BY 4.0 DEED license.
:::