Broadcom and a company called CAMB.AI are teaming up to bring on-device audio translation to a chipset. This would allow devices that use the SoC to handle translation, dubbing and audio description tasks without having to dip into the cloud. In other words, it could massively improve accessibility for consumers.

The companies promise ultra-low latency and enhanced privacy, since all processing stays local to the user's device. Wireless bandwidth use should also be drastically reduced.

As for the audio description piece, there's a demo video of the tool being used on a clip from the film Ratatouille. The AI can be heard describing the scene in various languages, with a written translation appearing on-screen. This looks incredibly useful, particularly for those with vision issues.

There is a major caveat, though: the demo is a tightly controlled clip with plenty of edits. We have no idea how this tech will perform in a real-world scenario, or how accurate its descriptions will be. It does, however, feature a voice model that's already being used by organizations like NASCAR, Comcast and Eurovision.

The companies boast that this will enable "on-device translation in over 150 languages." We don't know when these chips will begin showing up in TVs and other gadgets. The tech is in the testing phase for now, so it's gonna be a while.

Broadcom also recently teamed up with OpenAI to help the latter manufacture its own chips.