Meta has introduced Omnilingual Automatic Speech Recognition (ASR), an AI model capable of automatically recognizing more than 1,600 languages. The model was released on November 10, 2025, and is immediately available to developers worldwide through Meta's GitHub repository and the Hugging Face platform. Built with 7 billion parameters, it is open source under the Apache 2.0 license.

Of these languages, 78% can be transcribed with a character error rate below 10%. To train Omnilingual ASR, Meta used data from 249 high-resource languages, 881 medium-resource languages, and 546 low-resource languages. For context, this means communities speaking languages such as Urdu, Punjabi, Sindhi, or Pashto can now access AI transcription tools that previously worked well only for English and a few dozen major languages.

Meta claims the model could be expanded to support up to 5,400 languages worldwide, vastly surpassing OpenAI's Whisper, which supports only 99. Its zero-shot learning feature lets users add new languages with just a few paired audio-text examples, removing the need for expensive retraining (see the sketch below). Additionally, Omnilingual ASR supports 500 languages previously unsupported by any automatic speech recognition system, and it can identify the language being spoken while generating a transcript at the same time.

Meta has also released the complete training dataset, covering 350 underserved languages, under a CC-BY 4.0 license, allowing researchers to verify and build upon the work.
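To make the zero-shot flow concrete, here is a minimal Python sketch of what conditioning the model on a new language with a few audio-text pairs could look like. The `omnilingual_asr` package, the `ASRPipeline` class, the model identifier, and every method name below are placeholders for illustration, not the repository's confirmed API; check Meta's GitHub repo for the actual entry points.

```python
# Hypothetical sketch of Omnilingual ASR's zero-shot in-context flow.
# NOTE: `omnilingual_asr`, `ASRPipeline`, and the model id below are
# illustrative placeholders; consult the facebookresearch GitHub
# repository for the real package and API.
from omnilingual_asr import ASRPipeline  # assumed package/class name

# Load the 7B open-source checkpoint (Apache 2.0).
asr = ASRPipeline.from_pretrained("facebook/omnilingual-asr-7b")  # assumed id

# A handful of paired (audio, transcript) examples conditions the model
# on a language it was never trained on -- no retraining required.
context_examples = [
    ("samples/greeting.wav", "reference transcript of the greeting"),
    ("samples/weather.wav", "reference transcript about the weather"),
]

# Transcribe a new utterance in the same language, conditioned on the
# in-context examples supplied above.
result = asr.transcribe("samples/new_utterance.wav", examples=context_examples)
print(result.text)
```

The key design point is that the examples travel with the request rather than with the weights: adding a language is a data-preparation task, not a training run.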
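Because the corpus is published under CC-BY 4.0, researchers can in principle pull it with the Hugging Face `datasets` library. The sketch below assumes a dataset id of `facebook/omnilingual-asr-corpus` and a per-language configuration name, both of which should be verified on the Hub before use.

```python
# Sketch: streaming a slice of the CC-BY 4.0 training corpus from the
# Hugging Face Hub. The dataset id and the language configuration name
# ("lig_Latn" here) are assumptions -- look them up on the Hub first.
from datasets import load_dataset

corpus = load_dataset(
    "facebook/omnilingual-asr-corpus",  # assumed dataset id
    "lig_Latn",                         # assumed per-language config
    split="train",
    streaming=True,  # iterate without downloading the full corpus
)

# Inspect the first few rows of the language's audio-transcript pairs.
for i, row in enumerate(corpus):
    print(row["text"])  # field name assumed; check the dataset card
    if i == 2:
        break
```

Streaming mode is worth the extra argument here: a 1,600-language speech corpus is far too large to download casually, and iterating lazily lets a researcher audit individual languages before committing to a full fetch.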