DeepSeek warns of ‘jailbreak’ risks for its open-source models

DeepSeek has revealed details about the risks posed by its artificial intelligence models for the first time, noting that open-sourced models are particularly susceptible to being “jailbroken” by malicious actors.

The Hangzhou-based start-up said it evaluated its models using industry benchmarks as well as its own tests in a peer-reviewed article published in the academic journal Nature.

American AI companies often publicise research about the risks of their rapidly improving models and have...