DeepSeek releases ‘sparse attention’ model that cuts API costs in half

Researchers at DeepSeek have released a new experimental model designed to dramatically lower inference costs in long-context operations.