Edinburgh Airport is also down, suspending all flights after an "IT issue with our air traffic control provider". Not sure if this is coincidental, but the timing is rather suspicious!
Not a safety-critical system but I know passenger information screens in at least some airports are just full-screen browsers displaying a SaaS-hosted webpage.
Annoyingly I wanted to fly a parcel from Edinburgh up to Stornoway, but it's looking like I'd be quicker driving the seven hours up to the ferry terminal myself.
In the chain of events that led to Cloudflare's largest ever outage, code they'd rewritten from C to Rust was a significant factor. There were, of course, other factors that meant the Rust-related problem was not mitigated.
They expected a maximum config file size, but an upstream error meant it was much larger than normal. Their Rust code parsed a fraction of the config, then called ".unwrap()" on the resulting error and panicked, crashing the entire program.
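For anyone who hasn't touched Rust: `.unwrap()` means "give me the value, or abort the process if there isn't one." Here's a minimal sketch of the pattern being described, assuming a made-up feature-file format and limit (this is not Cloudflare's actual code):

```rust
// Illustrative sketch only -- not Cloudflare's actual code. The feature-file
// format, the MAX_FEATURES limit, and the fallback are made up for the example.

const MAX_FEATURES: usize = 200; // assumed hard limit baked into the consumer

fn parse_features(input: &str) -> Result<Vec<String>, String> {
    let features: Vec<String> = input
        .lines()
        .map(str::trim)
        .filter(|l| !l.is_empty())
        .map(String::from)
        .collect();

    // The failure mode described above: the upstream file is bigger than the
    // hard-coded assumption, so parsing returns an Err instead of a value.
    if features.len() > MAX_FEATURES {
        return Err(format!(
            "feature file has {} entries, limit is {}",
            features.len(),
            MAX_FEATURES
        ));
    }
    Ok(features)
}

fn main() {
    let oversized_input = "some_feature\n".repeat(MAX_FEATURES + 1);

    // The pattern that takes the whole process down: panic on any Err.
    // let features = parse_features(&oversized_input).unwrap();

    // The pattern that keeps serving traffic: log the failure and fall back
    // to the last known-good config instead of crashing.
    let features = match parse_features(&oversized_input) {
        Ok(f) => f,
        Err(e) => {
            eprintln!("config refresh failed, keeping previous config: {e}");
            Vec::new() // stand-in for the previously loaded config
        }
    };
    println!("serving with {} features", features.len());
}
```

The commented-out line is the crash-everything version; the `match` is the boring alternative that logs the error and keeps serving with the previous config. Rust makes the error impossible to ignore, but it doesn't choose the failure mode for you.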
This validated a number of things that programmers say in response to Rust advocates who relentlessly badger people in pursuit of mindshare and adoption:
* memory errors are not the only category of errors, or of security flaws. A language claiming a magic bullet for one thing might nonetheless be worse at another thing.
* there is no guarantee that if you write in <latest hyped language> your code will have fewer errors. If anything, you'll add new errors during the rewrite
* Rust has footguns like any other language. If it gains common adoption, there will be doofus programmers using it too, just like the other languages. What will the errors of Rust doofuses look like, compared to C, C++, C#, Java, JavaScript, Python, Ruby, etc. doofuses?
* availability is orthogonal to security. There is huge interest in remaining secure, but if your design is "and it remains secure because it stops as soon as there's an error", have you considered what negative effects a widespread outage would cause?
I'm not the person you are replying to, but as with everything in technology, you just find the latest (or most public) change and then fire your blame-cannon at it.
Excel crashed? Must be that new WiFi they installed!
Cloudflare was crowing that their services were better because “We write a lot of Rust, and we’ve gotten pretty good at it.”
The last outage was in fact partially due to a Rust panic because of some sloppy code.
Yes, these complex systems are way more complex than just which language they use. But Cloudflare is the one who made the oversimplified claim that using Rust would necessarily make their systems better. It’s not so simple.
“haha rust is bad” or something; it's a silly take. These things are rarely, if ever, due to programming language choice; they come down to complicated interactions between different systems.
That blog post made it to the front page of HN and my site did not go down. Nor did any DDoS network take the site out even though I also challenged them last time by commenting that I would be okay with a DDoS. I would figure out a way around it.
In general, marketing often works via fear; that's why Cloudflare has those blog posts talking about the "largest botnet ever". Advertising for medicine, for example, also often works via fear: "take this or you die", essentially.
Cloudflare is widely used because it's the easiest way to run a website for free or expose local services to the internet. I think for most Cloudflare users, the DDoS protection is not the main reason they're using it.
Yes, marketing often works via fear. And decision making in organizations often works through blame shifting and diffusion of accountability. So organizations will just stick with centralization and Cloudflare, AWS, Microsoft et al. regardless of technical concerns.
The old guard has left because they were too much of an expense in this cost-cutting age... Without mentors, crap creeps in, and now we are seeing what happens when people who don't know how things work are in charge...
They might go on hiring freezes more often, cancel a role, and in some cases pass on someone asking too much... But I don't think many companies are actively out trawling for "cheap and dumb".
You'll find some, but not Cloudflare, AWS and Google.
The website you're using right now is hosted from a single location without any kind of CDN, so unless you happen to live next door by coincidence, you seem to be managing. Not bundling 40MB of JavaScript or doing 50 round trips to load a page goes a long way.
What is "high latency" nowadays? If people wouldn't bundle 30mb into every html page it wouldn't be needed.
Also, Cloudflare is needed because of DDoS and abuse from rogue actors, which are mostly located in specific regions. Residential IP ranges in democratic countries are not the ones causing the issues.
That stupid Cloudflare check page often adds orders of magnitude more latency than a few thousand miles of cable would. Also, most applications and websites are not that sensitive to latency anyway, at least when done properly.
> A change made to how Cloudflare's Web Application Firewall parses requests caused Cloudflare's network to be unavailable for several minutes this morning. This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today.
I always suspected RSC was actually a secret Facebook plan to sabotage the React ecosystem now that their competitors all use it to some degree. Now I’m convinced.
They’re a global company that offshores with location based pay and utilizes H1Bs. I think that’s the first thing to look at. You get what you pay for.
Stop trying to devalue labor. Not much sympathy when you’re obviously cutting corners.
Just because someone is on an H1B visa doesn't mean they know less. It's a bit rich to blame this on foreign workers even though nothing is known about who or what caused this outage.
Knowledge + tech skills are not the only factors that lead to subpar outcomes in these scenarios. In my experience the thing that causes the most problems with H1Bs is weak English and the related communication issues.
In my experience, the communication problems stem from the Americans who expect perfect English from everyone else. English is spoken across the entire business world between people for whom it is not their first language. The accents and broken English are epic in many organizations, yet they work through it and get things done together.
If you work harder at taking the burden upon yourself to understand others, you might be surprised how well people can learn to communicate despite differing backgrounds.
The problem with H1B is that these people are effectively prisoners. The market is not so hot right now even for those who have leverage, but combine it with the visa system and you get this "gotta do the needful" attitude to please the bosses, rushing broken fixes to production.
I see this directly on my team. The h1bs get bullied by their boss (it's a split team, I work with him but don't report to him) and they don't say anything because he could effectively have them deported. At least 2 of them have kids here and perhaps the others do. So not only does it incentivize the bully to do it, but it traps them to just take it for their family. I openly talk shit back to him because he can't deport me.
In the 80s, a "series" of fires broke out and destroyed many homes and businesses in England, all of which having a print of a painting known as 'The Crying Boy'. The painting has ever since been rumoured to be haunted.
Obviously, 'The Crying Boy' was not the cause of the fires; it was just that most homes in 1980s England had that print, as it was a popular one, and people found a pattern where there wasn't one.
Correlation, causation, yadda yadda. They already explained that it was some React Server Components-related update. Sure, it could also have been done with some AI assist, but we don't know.
These companies also don't vibe code (which would involve just prompting without editing code yourself, at least that's the most common definition).
I really hope news like this won't be followed by comments like these (not a criticism of you personally) until the AI hype dies down a bit. It's getting really tiresome to read the same oversimplified takes every time there's an outage involving a centralized entity such as Cloudflare, instead of talking about the elephant in the room: their attempt at MITMing the majority of internet users.
This ignores all the companies that publicly embraced vibe coding and did NOT have outages. Not a huge fan of vibe coding, but let's keep the populism to minimum here.
I host my company's website on Cloudflare Pages using Cloudflare's DNS. I don't want to move to 100% self-hosting, but I would like to have a self-hosted backup. Has anyone solved this?
Having a self-hosted “backup” that is ready to go at any time means having a self-hosted server that’s always on, basically. There are lots of cheap colo or VM options out there. But the problem is going to be dealing with an outage… how do you switch DNS over when your DNS provider is down?
Well, one way is to use a different DNS provider than either of your hosting options.
You can see this is getting complicated. Might be better to take the downtime.
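For what it's worth, if you do go down that road, the failover mechanism itself can be fairly dumb. A minimal sketch, assuming DNS lives at a third provider with a short TTL; the hostname, port, and thresholds below are placeholders, and the actual record update is provider-specific, so it's just a stub:

```rust
// Illustrative failover-probe sketch; every constant here is a placeholder.
use std::net::{TcpStream, ToSocketAddrs};
use std::thread;
use std::time::Duration;

const PRIMARY: &str = "example.com:443"; // the Cloudflare-hosted front end (placeholder)
const PROBE_TIMEOUT: Duration = Duration::from_secs(3);
const FAILURES_BEFORE_FAILOVER: u32 = 3; // avoid flapping on a single blip

fn primary_is_up() -> bool {
    // Plain TCP connect; a real probe would also check that the HTTP
    // response looks sane, not just that the port answers.
    match PRIMARY.to_socket_addrs() {
        Ok(addrs) => addrs
            .into_iter()
            .any(|addr| TcpStream::connect_timeout(&addr, PROBE_TIMEOUT).is_ok()),
        Err(_) => false, // DNS itself failed to resolve
    }
}

fn point_dns_at_backup() {
    // Stub: call your DNS provider's API here to swap the A/CNAME record
    // over to the self-hosted box. This only works if DNS is hosted
    // somewhere other than the provider that is currently down.
    eprintln!("failover: would repoint DNS at the self-hosted backup now");
}

fn main() {
    let mut consecutive_failures = 0;
    loop {
        if primary_is_up() {
            consecutive_failures = 0;
        } else {
            consecutive_failures += 1;
            if consecutive_failures >= FAILURES_BEFORE_FAILOVER {
                point_dns_at_backup();
                break;
            }
        }
        thread::sleep(Duration::from_secs(30));
    }
}
```

The fiddly parts are the failback direction (noticing the primary is healthy again and pointing DNS back) and keeping the TTL short enough to matter, which is why "just take the downtime" is not an unreasonable answer.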
But if I had to make a real recommendation: I'm not aware of any time in the last decade when a static site deployed on AWS S3/CloudFront would actually have been unavailable.
Not sure if this is related, but has anyone seen their allowance used up unexpectedly fast? Had Claude Code Web showing service disruption warnings, and all of a sudden I'm at 92% usage.
I'm on the pro plan, only using Sonnet and Haiku. I almost never hit the 5-hour limit, let alone in less than 2 hours.
downdetectorsdowndetector.com does not load the results as part of the HTML, nor does it do any API requests to retrieve the status. Instead, the obfuscated javascript code contains a `generateMockStatus()` function that has parts like `responseTimeMs: randomInt(...)` and a hardcoded `status: up` / `httpStatus: 200`. I didn't reverse-engineer the entire script, but based on it incorrectly showing downdetector.com as being up today, I'm pretty sure that downdetectorsdowndetector.com is just faking the results.
downdetectorsdowndetectorsdowndetector.com and downdetectorsdowndetectorsdowndetectorsdowndetector.com seem like they might be legit. One has the results in the HTML, the other fetches some JSON from a backend (`status4.php`).
Instead of figuring out a novel way of distributing content statefully, with security and redundancy in mind, we have created the centralised monstrosity that we call the modern web. ¯\_(ツ)_/¯
How are these clowns deploying stuff on a Friday, it is unbelievable to me. It is not even funny any more. It seems Cloudflare is held together by marketing only. They should stop all of these stupid initiatives and keep their stack simple.
And I'm 100% sure the management responsible for this is already fueling up the Ferraris to drive to their beach house. All of us make them rich and they keep on enshittifying their product out of pure hubris.
> How are these clowns deploying stuff on a Friday, it is unbelievable to me
I have stopped fighting this battle at work. Despite Friday being one of the most important days of the week for our customers, people still push out the latest commit 10 minutes before they leave in the afternoon. Going on a weekend trip home to your family? No problem, just deploy and be offline for hours while you are traveling...
The response was that my way of thinking is "old school". Modern development is "fail fast", and CI/CD with good tests and rollback fixes everything. Being afraid of deploys is "so 2010s"... The problem is that our tests don't cover everything, not all deploys can be rolled back quickly, and the person who knows what their commit actually does is unavailable!
We have had multiple issues with late-afternoon deploys, but somehow we keep doing this. Funnily enough, I have noticed a pattern: many devs only do this a few times, because of the massive backlash from customers while they are fixing the bug. So gradually they learn to deploy at less busy times. The problem is that not enough of them have learned this lesson, or they are too invested in their point of view to change. It seems that some individuals learn the hard way, but the organization has not learned, or is reluctant to push for a change due to office politics.
If you are a monopoly, there is no incentive to do anything well. You've saturated the market, the incentive is to cut costs.
In fact, there are incentives for public failures: they'll help the politicians that you bought sell the legislation that you wrote explaining how national security requires that the taxpayer write a check to your stockholders/owners in return for nothing.
Yes, but a hotfix was already in place. They chose to deploy the "proper fix" this morning, and obviously it went wrong. Also, they didn't do a phased rollout, so it impacted their high-value customers such as Shopify as well as Claude, causing significant damage. Their procedures are not good.
"A change made to how Cloudflare's Web Application Firewall parses requests caused Cloudflare's network to be unavailable for several minutes this morning. This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components."
The bug has been known for several days, and the hotfix was already in place. So they worked on the "final fix" and chose to deploy it on a Friday morning.
If Crunchyroll is down for 30 minutes it's nbd, because you know they'll be back. If the pirate sites are down for any duration, it can be very stressful, because they can be gone for good.