News

Claude Sonnet 4 has been upgraded, and it can now remember up to 1 million tokens of context, but only when it's used via API ...
In August 2025, SonarSource released its latest State of Code study, The Coding Personalities of Leading LLMs – A State of Code Report. This research goes beyond accuracy scores, examining how large ...
Anthropic has expanded the capabilities of its Claude Sonnet 4 AI model to handle up to one million tokens of context, five ...
Discover how Qwen 3 Code offers 2,000 free AI coding runs daily, making advanced programming tools accessible to all ...
Anthropic has expanded Claude Sonnet 4’s context window to 1 million tokens, matching OpenAI’s GPT-4.1 and enhancing its ability to process large code bases and document sets in one request.
Anthropic’s Claude Sonnet 4 now supports a 1 million token context window, enabling AI to process entire codebases and complex documents in a single request—redefining software development and ...
A new report today from code quality testing startup SonarSource SA is warning that while the latest large language models ...
Anthropic upgrades Claude Sonnet 4 to a 1M token context window and adds memory, enabling full codebase analysis, long ...
The new context window is available today within the Anthropic API for certain customers — like those with Tier 4 and custom ...
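For developers who want to try the expanded window through that API, a minimal sketch of a long-context request via the Anthropic Python SDK might look like the following. The exact model identifier (claude-sonnet-4-20250514) and the long-context beta flag (context-1m-2025-08-07) are assumptions here and should be checked against Anthropic's current API documentation and your account's access tier.

```python
import anthropic

# The SDK reads ANTHROPIC_API_KEY from the environment by default.
client = anthropic.Anthropic()

# Assumed identifiers: the Sonnet 4 model ID and the 1M-context beta flag
# may differ; confirm both in Anthropic's docs before relying on them.
response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",
    betas=["context-1m-2025-08-07"],
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            # In practice this is where a very large prompt would go,
            # e.g. an entire codebase or document set concatenated into one request.
            "content": "Summarize the attached repository contents.",
        }
    ],
)

print(response.content[0].text)
```

The only difference from an ordinary Messages API call is the beta flag requesting the larger window; without it, requests are assumed to fall back to the standard context limit.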
But despite the differences, all models excel at making errors and shouldn't be trusted. Generative AI coding models have common strengths and weaknesses, but express those characteristics differently ...
GPT-5, a new release from OpenAI, is the latest product to suggest that progress on large language models has stalled.
Anthropic has upgraded Claude Sonnet 4 with a 1M token context window, competing with OpenAI's GPT-5 and Meta's Llama 4.