lepuski 18 hours ago [-]
I can't see why anyone still chooses Claude. Codex outperforms it in most respects, and its quotas are about ten times larger. A $100 Codex plan gets me through the whole week with 6–12 hours of coding per day.
jjice 18 hours ago [-]
I've found GPT 5.5 pretty solid, but I keep getting impressed by Opus. It's tracked down some insane stuff while I look away during a meeting. 5.5 is way closer to Anthropic than previous OpenAI models, IMO.
These things are so tricky because everyone has a seemingly conflicting experience. Part of the fun I guess!
SatvikBeri 18 hours ago [-]
I've never actually run into the issues that people talk about online, like Claude suddenly getting dumb or running out of usage. So there's just not a lot of incentive for me to shop around. I've used Amp a bit, and it's quite nice, but a bit more expensive without the subsidized subscription.
raincole 17 hours ago [-]
It has always been like this. We actually know that the model performance has been mostly steady[0], but you cannot beat the notion of "evil companies secretly serving us worse models." The meme value is too strong.
Hmm, today's pass rate rose to 73% - interesting, are they A/B-testing some new model? This is too high for Opus 4.7.
gardnr 17 hours ago [-]
Are you using Opus? Sonnet remains as useful as it was, while Opus's efficacy and token burn rate have worsened over the last 4 months.
fny 17 hours ago [-]
I'm using Opus on xhigh 10+ hours a day, and I've only reached 80% of weekly limits when doing massive ports or refactors. I haven't once hit hourly limits, and I've used Claude very, very aggressively. I guess it's a pain point for power users.
josephg 14 hours ago [-]
I sometimes run multiple claudes at the same time, with each terminal working on a different task. I have 2 going right now.
It's very easy to burn through your quota if you work like that. Especially on high / xhigh.
plufz 11 hours ago [-]
I used to be mostly at high/xhigh but now at medium I think it actually performs quite well both on results and token usage.
SatvikBeri 14 hours ago [-]
Yes, I've pretty much used Opus exclusively for the last year, except for a brief period when Sonnet was ahead.
mbreese 17 hours ago [-]
When do you use it the most? I've noticed that it most often starts to degrade between 10 and 5 US East Coast time. Late at night I have the fewest issues, but without fail, if I'm trying to do anything complex during the day, Claude gets loopy.
SatvikBeri 14 hours ago [-]
9-5 Pacific Time
dboreham 18 hours ago [-]
Same here. Works every time. Never ran into usage limits either.
elahieh 18 hours ago [-]
One reason might be that Claude Opus 4.7 thinking benchmarks better on Arena Coding at https://arena.ai/leaderboard/text/coding ... hopefully that effectively assesses correctness. It doesn't account for reliability though.
hansvm 18 hours ago [-]
Claude is the only AI coding tool I've found worth a damn. Without it I'd just do everything by hand save for a few bash scripts or whatever.
arcanemachiner 17 hours ago [-]
Have you tried other harnesses, such as OpenCode?
hansvm 17 hours ago [-]
Yeah, harness quality matters too, but the underlying model capabilities are night and day.
xboxnolifes 16 hours ago [-]
I certainly get more usage before cutoff from GPT 5.5, but the output I get from Opus 4.7 is way better. It just sucks that I get 2 good "long running" prompts on Opus 4.7 before my daily quota is met on the $20 subscription.
Thaxll 18 hours ago [-]
I think it's impossible to say that Codex x.y.z is better than Sonnet x.y.z. I've used many "high"-end models and they're all just good.
kylemaxwell 18 hours ago [-]
Corporate policies and agreements. In large corporations, using external non-approved models with proprietary source code is a good way to have significant career issues.
SeanAnderson 18 hours ago [-]
You get a discount for paying for a full year on Teams and Enterprise can involve contractual obligations. It's a lot of effort to get buy-in to change providers and to shift an entire organization. The winds change frequently in this space and the pain needs to get to a certain level before it's worth rolling the dice.
taspeotis 18 hours ago [-]
Claude Max 20x gives me unlimited (for my level of usage) Opus 4.7 - how much money do I have to pay OpenAI for that?
arcanemachiner 17 hours ago [-]
Based on the experience of people using the $20 Claude Pro subscription and exhausting their quotas in a matter of minutes, the answer to your question is probably "less". (I would guess that the $100 plan would do the trick.)
taspeotis 17 hours ago [-]
Okay so how much less will I have to pay OpenAI for unlimited Opus 4.7?
CompoundEyes 17 hours ago [-]
In my org the teams doing agent engineering at scale are all on Codex using gpt-5.5. By scale I mean fully agent authored code workflows with long running / multi hour plans.
etchalon 17 hours ago [-]
I'd rather not give money to Sam Altman.
beering 14 hours ago [-]
with Anthropic you’re giving money to Elon Musk. Seems like a pick-your-billionaire world we’re in now
etchalon 3 hours ago [-]
I can't choose what the people I give money to do with it, just who I choose to give money to.
I refuse to accept the lazy cynicism of "nothing matters at all".
atraac 10 hours ago [-]
But a $100 Claude subscription also easily gets me an entire week of coding 6-8 hours a day? What on earth do you do to run out of limits on Max? Do you vibe-code multiple new codebases every day for a living? The benefit of Claude is also not gaslighting me every time I tell it it's wrong.
wahnfrieden 14 hours ago [-]
Claude is (per benchmarks) much worse at instruction following, but is more charming and deceptive and anthropomorphized by default (in name and image), leading to productivity assessment psychosis
squirrellous 18 hours ago [-]
Corporate reasons. AWS hasn't opened codex models to everyone yet.
echelon 18 hours ago [-]
Claude is significantly better at Rust in my experience, and Rust is my favorite language to emit from LLMs.
Opus 4.7 + Rust is a killer combo.
yieldcrv 18 hours ago [-]
because my shard isn’t erroring
I use Codex when Claude Code is down, and I only began using Claude when ChatGPT was down
yes codex is very fast, I go back to Claude for now
nothinkjustai 18 hours ago [-]
Because of marketing and vibes mostly.
Heck I prefer DeepSeek to both of those.
josephg 18 hours ago [-]
Wow, I'm really surprised. I tried DeepSeek (their best model, through the official API). It's extremely cheap, but it's clearly not as good at programming as Opus 4.7. It seems nowhere near as good at making high-level design choices. DeepSeek also seems to get stuck in whack-a-mole fixing loops much more than Opus. I stopped it at one point and asked Opus to solve the problem it was trying to solve, and it saw the solution immediately.
I was running deepseek through claude's code agent harness. Maybe it works better through a different tool?
zmmmmm 18 hours ago [-]
I've given V4 Pro some curly things and I was impressed at how it figured them out. I agree high level design is not its forte. But it sat in a loop and dogmatically debugged a crazy dependency issue to come to the right answer over the course of 15 minutes which impressed me.
nothinkjustai 15 hours ago [-]
Idk, I don’t vibe code so even the flash model is great for generating code for myself. I tend to do the planning and design myself though.
Harness also matters, and also provider. I was using openrouter and switched to the Deepseek api and suddenly all the tool call issues I was having resolved themselves. Flash is so damn fast at doing stuff like generating boilerplate I can’t go back to the bigger slower models.
esafak 18 hours ago [-]
You tried v4?
codybontecou 17 hours ago [-]
I tried to like it, but it eventually got stuck in a near-infinite loop trying to debug an extra curly bracket in an iOS app.
That and the lack of image-read support surprised me. I'm a big fan of feeding screenshots into my llm and that killed it for me.
josephg 18 hours ago [-]
Yeah, v4.
I would have been much more impressed with v4 about 6 months ago. But I've been spoiled by opus 4.7. Deepseek isn't at the same level.
mcv 9 hours ago [-]
I feel you. I'd prefer to stick entirely with local open source models. I tried using Aider and Qwen last week, and while it's still impressive what it can do with just local resources and entirely for free, its error rate is too high, and it's clearly not remotely in the same league as Claude Code.
zmmmmm 18 hours ago [-]
interestingly I had the same experience, and weirdly it's in part because it is clearly less intelligent. It's more of a mechanistic tool just doing what I ask (but still very smart and very competent about it) and less trying to win a nobel prize with each answer. Turns out I actually like that.
gopalv 19 hours ago [-]
Sonnet is also throwing overloaded error.
My systems are hitting exponential-backoff retries, so this might not get better, since the retries overload things again.
I can see a weird spike in my cache hit-rate a few minutes before, so this might actually be some extra caching they have thrown in.
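The retry storm described above is a classic thundering-herd failure mode: synchronized clients all retry at the same moments and re-overload the service. A minimal sketch of the usual client-side mitigation, capped exponential backoff with full jitter (the function and response shapes here are illustrative assumptions, not Anthropic's SDK API):

```python
import random
import time

def call_with_backoff(request, max_retries=5, base=1.0, cap=60.0):
    """Retry 'overloaded_error' responses with capped exponential
    backoff plus full jitter, so a fleet of clients spreads its
    retries out instead of hammering the service in lockstep."""
    for attempt in range(max_retries):
        response = request()
        if response.get("type") != "error":
            return response
        if response["error"]["type"] != "overloaded_error":
            # Non-retryable error: surface it immediately.
            raise RuntimeError(response["error"]["message"])
        # Full jitter: sleep a random amount in [0, min(cap, base * 2**attempt)]
        time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
    raise RuntimeError("still overloaded after retries")
```

Full jitter (random delay up to the exponential bound, rather than a deterministic doubling) is what prevents the synchronized waves the comment describes.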
textlapse 17 hours ago [-]
Say what you will about Sam Altman, but at least he engages with his user base and acts on user feedback.
Dario and co seem to be on some elevated pedestal - us mere mortals are beneath them - and they have this scattershot devrel where each engineer has their own X way of communicating to the public often at odds with each other.
I loved Sonnet and Opus fwiw but not anymore.
SilverElfin 17 hours ago [-]
Plus I can’t really trust someone who emphasizes ethics and then partners with Elon to buy compute from a potentially illegal natural gas powered datacenter.
so, all those CEOs moving all those remaining engineers to be dependent on a cloud service, to the extent that there's no local development capability, are gonna apologize, right
claaams 18 hours ago [-]
in a year or two when AI tool costs go from 5M per year to 15M per year...even then, maybe not.
FergusArgyll 18 hours ago [-]
I thought the deal with xai was supposed to solve this? Is this basically the adding lanes paradox?
josephg 18 hours ago [-]
You're assuming the elevated error rates are due to the system being overloaded. We have no evidence that's actually the case. It's much more likely due to a simple misconfiguration or a failing router or something.
cr125rider 15 hours ago [-]
The incredible infrastructure required to coordinate warehouses worth of compute actually seems pretty tricky. They’re worth more money than god so they get 0 leniency, but it does seem hard.
imperio59 17 hours ago [-]
I love Claude but I hate waiting a minute or two for any inference to start. I hope they can get their xAI capacity online ASAP and that it helps!
> {'type': 'error', 'error': {'details': None, 'type': 'overloaded_error', 'message': 'Overloaded'}, 'request_id': 'req_ ...

[0]: https://marginlab.ai/trackers/claude-code/