Opus 4.7 released today

I was speculating about how good the next Opus release would be, and once again it has surpassed expectations. Nothing even comes close. I pointed it at an already well-rounded Opus 4.6 code-base and it pulled it apart, refactored it, dropped a good couple of thousand lines and replaced them with a couple of hundred. No wasted time with errors (one shot): totally clean, higher performance, more optimisations, and less memory consumption during execution of some incredibly complex scripts. All in all, it annihilated Opus 4.6 when it comes to JavaScript. I'm sure they'll tune it even more now it is in the wild... Same price per input/output as 4.6. I mentioned previously that Sonnet 4.6 was holding its own against Opus 4.6, but the gap is now quite different. I'm sure Sonnet 4.7 can't be far behind, though. Where is this all going to end up?

1 Like

Us out of a job, I guess. Glad I'm retired!

5 Likes

I have been using Claude Sonnet 4.6 and getting pretty good results. I'm trying to let Wappler use Claude Opus 4.7, but I can't seem to find where to enable this in GitHub Copilot. Previously you could choose which models you wanted Wappler to access, but now they seem to have moved it. Do you have a screenshot of where that is turned on now?

Thanks

Baub

In my case, 4.7 magically appeared:

Microsoft is enforcing some limits on Copilot as they are having trouble handling the load,

and Opus 4.7 is now only available to Pro+ subscribers.

Btw, I also tested Opus 4.7, and it is OK for creative/design work (although too expensive at 7.5x requests), but for programming work it still produces overly complex solutions, and GPT 5.4 is still the best.

2 Likes

Well, that only took them a few days to totally ruin Opus 4.7. I'm now getting better results from MiniMax 2.7, and if that doesn't do too well, then GLM 5.1 or Kimi 2.6... Despite maybe having to amend a prompt once or twice, it's still 10x the savings over Anthropic/OpenAI. I'm just going to have a session with my old friend DeepSeek to see how it has improved in their latest V4 release. I've gone right off Anthropic right now. Their rate limiting is a total joke, and we're paying 200 dollars a month for the Max plan... It must be awful on Copilot, etc.?

3 Likes

For me, Kimi 2.5/2.6 stands out as the most powerful and complete model available right now. The price-to-quality ratio is unbeatable, and it is an absolute beast when it comes to programming and technical logic.

When I need high efficiency and a cost-effective solution, MiniMax m2.7 is my go-to, particularly for medium-complexity tasks using OpenClaw. It handles that middle ground remarkably well without driving up costs.

Finally, whenever the priority shifts toward rock-solid stability and logical coherence, I usually stick with Qwen 3.5/3.6 to ensure everything remains consistent and reliable.

1 Like