54. GPT-5: Performance Degrades Seriously Once Context Is Around 50% Full
Just look at the screenshot: this is clear evidence of degraded performance.
The model in use at the time should have been GPT-5.1. It is actually quite good at writing code, but its flaw is that performance drops very noticeably even while 58% of the context window still remains.
So my recent practice is: once only about 70% of the context remains, I run /compact right away to compress the context, so the model always stays in its best state.
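The threshold logic behind this habit can be sketched as a tiny helper. Everything here is hypothetical (the function name, the token counts, and the assumption that you can read current usage from your tool); the 70%-remaining cutoff mirrors the practice described above:

```python
# Hypothetical sketch: decide when it is time to run /compact,
# based on how much of the context window is already used.
COMPACT_THRESHOLD = 0.30  # compact once more than 30% of the window is used,
                          # i.e. when roughly 70% or less still remains

def should_compact(used_tokens: int, context_window: int) -> bool:
    """Return True once context usage crosses the compaction threshold."""
    return used_tokens / context_window > COMPACT_THRESHOLD

# Example: a 200k-token window with 70k tokens already used (35% used)
print(should_compact(70_000, 200_000))   # True  -> time to /compact
print(should_compact(40_000, 200_000))   # False -> 20% used, keep going
```

The exact cutoff is a judgment call; the point is to compact well before the window fills, not when it is nearly exhausted.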
When using other models, watch out for the same problem of long context causing performance degradation.
What counts as performance degradation? I have observed the following situations:
- Claude and Gemini: both normally behave consistently, so if the model stops following the rules in the rules file, something is wrong. For example, my rules file clearly says to communicate with me in Simplified Chinese, yet once the context gets too long the model suddenly starts speaking English. Gemini is especially prone to this. I forget the exact percentage, but the phenomenon shows the model starting to forget important early messages, which is a telltale sign of performance degradation.
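For reference, the kind of rule involved is trivially simple. A minimal hypothetical rules-file entry (the filename, e.g. CLAUDE.md or GEMINI.md, and the exact wording are illustrative, not my actual file) might look like:

```markdown
<!-- Hypothetical rules-file excerpt; wording is illustrative -->
## Communication
- Always communicate with me in Simplified Chinese.
- Do not switch to English unless explicitly asked.
```

When the model ignores a rule this explicit, the cause is rarely the rule itself; it is the rule sliding out of effective attention as the context grows.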