Degraded Gemini 2.5 Pro experience in Copilot


Oct 2, 22:33 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Oct 2, 22:26 UTC
Update - We have confirmed that the fix for the lower token input limit for Gemini 2.5 Pro is in place and are currently testing our previous higher limit to verify that customers will experience no further impact.

Oct 2, 17:13 UTC
Update - The underlying issue behind the lower token limits for Gemini 2.5 Pro has been identified and a fix is in progress. We will update again once we have tested and confirmed that the fix is correct and globally deployed.

Oct 2, 02:52 UTC
Update - We are continuing to work with our provider to resolve the issue where some Copilot requests using Gemini 2.5 Pro return an error indicating a bad request due to exceeding the input limit size.

Oct 1, 18:16 UTC
Update - We are continuing to investigate and test solutions internally while working with our model provider on a deeper investigation into the cause. We will update again when we have identified a mitigation.

Oct 1, 17:37 UTC
Update - We are testing other internal mitigations so that we can return to the higher maximum input length. We are still working with our upstream model provider to understand the contributing factors behind this sudden decrease in input limits.

Oct 1, 16:49 UTC
Update - We are experiencing a service regression for the Gemini 2.5 Pro model in Copilot Chat, VS Code, and other Copilot products. The maximum input length of Gemini 2.5 Pro prompts has been decreased, so long prompts or large context windows may result in errors. This is due to an issue with an upstream model provider, and we are working with them to resolve it. Other models are available and working as expected.

Oct 1, 16:43 UTC
Investigating - We are investigating reports of degraded performance for Copilot.
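
For integrators affected while an incident like this is active, a client-side fallback can serve as a stopgap: if a request to Gemini 2.5 Pro is rejected with a bad-request error for exceeding the input limit, retry with a different model, as the updates above note that other models were unaffected. The sketch below is illustrative only; sendChatRequest, the ChatRequest/ChatResponse shapes, and the fallback model name are hypothetical placeholders, not part of any Copilot API.

```typescript
// Hypothetical sketch: sendChatRequest and the request/response shapes
// are placeholders, not a real Copilot API.
interface ChatRequest {
  model: string;
  prompt: string;
}

interface ChatResponse {
  ok: boolean;
  status: number;
  text: string;
}

// Placeholder transport; substitute your actual client call.
declare function sendChatRequest(req: ChatRequest): Promise<ChatResponse>;

// Assumption: any model not affected by the incident.
const FALLBACK_MODEL = "fallback-model";

async function chatWithFallback(prompt: string): Promise<string> {
  const primary = await sendChatRequest({ model: "gemini-2.5-pro", prompt });
  if (primary.ok) return primary.text;

  // During this incident, a 400 may indicate the temporarily lowered
  // input limit; retry on another model rather than failing outright.
  if (primary.status === 400) {
    const fallback = await sendChatRequest({ model: FALLBACK_MODEL, prompt });
    if (fallback.ok) return fallback.text;
  }
  throw new Error(`Request failed with status ${primary.status}`);
}
```

Where switching models is undesirable, an alternative under the same assumptions is to trim the prompt (for example, dropping the oldest conversation turns) and retry on the same model.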