To some extent this reminds me of the situation around aerosols - we banned them, and fixed the ozone layer! Huzzah!
…except people tend to forget about such things. At least in that case no one blamed the Montreal Protocol.
I know it’s a much bigger issue, but I wish climate change could also be tackled similarly. Changes do appear to be happening, albeit slowly.
hummusandsushi 1 days ago [-]
The name for this effect appears to be the "Preparedness Paradox" [1]. The effect where people attribute the minimal or reduced harm due to some problem to the problem not having been that serious, instead of attributing it to the essential work performed by other people to prevent the worst disaster.
A very funny depiction of this effect is in the hilarious episode "Charlie Work" of It's Always Sunny in Philadelphia where the titular Charlie Kelly works tirelessly to have their bar pass health inspection, only to have the other main characters shrug and say "But we always pass health inspection".
See also: Absence of airline terror attacks in the US since 9/11/01.
[1]: https://en.wikipedia.org/wiki/Preparedness_paradox
AnthonyMouse 1 days ago [-]
Nope, there is only one single thing that has prevented that since 9/11, everything else is security theater and BS.
On 9/10/2001, the assumption was that if someone was trying to hijack a plane, they were planning to ransom the passengers as hostages, so you should let them take over the plane so the hijackers don't hurt the passengers who fight back.
As of 9/11, the assumption is that they want to fly the plane into a building and kill everyone on it. So if someone tries to hijack a plane, everyone on board knows to fight them to the death with suitcases and shoelaces, and now you can't hijack a plane anymore, so there is no point in trying.
noduerme 20 hours ago [-]
I was just talking about the "preparedness paradox" that the parent was referring to. Whatever reason you attribute it to, heightened security or more vigilant passengers, the fact that it hasn't happened means we can't prove that it wouldn't happen in the absence of either of those things. Both arguments are counterfactual, so people begin to take it for granted that "no further work was required to prevent fill in the blank from happening."
smegger001 1 days ago [-]
And, you know, locking the cockpit door now. Hard to hijack a plane when you can't access the controls.
AnthonyMouse 1 days ago [-]
Eh, that's probably more of a trade off. The door is typically going to be unlocked during the flight anyway so the pilots can get their meals and use the head, and then you're creating a risk that a hijacker gets into the cockpit and locks the door behind them.
Whereas it's already pretty hard for a hijacker to access the controls after the passengers kill them.
BoorishBears 1 days ago [-]
I agree 9/11 bred a lot of security theater, but I also don't buy the "one single thing" stopping terrorists from hijacking a plane is fear of a battle with passengers.
Pre 9/11 an attempted hijacking would "just" be a harrowing tale (because of your own reasoning). Post 9/11 even an unsuccessful attempt would create an overwhelming wave of fear, panic, and paranoia for the very same reason you argue people would be willing to fight to the death.
We just had two terrorist attacks this week and it's already falling off the news cycle: I don't think that'd be the case if the public had been given flashbacks to 9/11.
AnthonyMouse 1 days ago [-]
> Post 9/11 even an unsuccessful attempt would create an overwhelming wave of fear, panic, and paranoia for the very same reason you argue people would be willing to fight to the death.
There were unsuccessful attempts. Shoe bomber etc. -- passengers and flight staff stopped them.
Delphiza 1 days ago [-]
What is often missed in Y2K mitigation is just how much software was discarded in favour of something more modern.
I had a happy customer who was using software that I built in the early 90s. I was asked to 'Y2K certify' it, which I couldn't do for free, so they had to ditch it before Y2K. Even though I had used proper datetimes, there may have been a couple of places where there were problems, such as printed reports or displaying dates on the screen. I certainly couldn't underwrite that it would work without reviewing it extensively.
Apart from COBOL/mainframe projects to fix bugs, a significant part of Y2K preparation was throwing out whatever cobbled-together software businesses had and replacing it with SAP. Indeed, a large part of the SAP sales pitch in the mid-nineties was 'Y2K ready'. The number of SAP licences sold in the mid-to-late nineties is useful data on Y2K mitigation that is often overlooked.
It was highly likely that various applications within any particular business had Y2K problems, as with my little application. If they didn't, you had to get the original vendor to certify them as 'Y2K ready'. For many people involved, it was quicker, cheaper, easier, and less risky to replace everything with SAP rather than review and fix it all.
I am convinced that ERP (SAP, Oracle) and CRM (remember Siebel?) were given major boosts in adoption due to Y2K alone.
axegon_ 1 days ago [-]
To be honest, it is still a problem to this day in some areas. A few years back I was making a gift which involved a ton of electronics and embedded hardware. One of the components I had lying around was a real-time clock which refused to go beyond 31.12.1999, no matter what I did. It turned out that (in 2021, IIRC) there were still RTC modules on the market that had this problem. But it did not affect me all that much, since the date only had to be displayed on screen, so I did the hacky-patchy thing of trimming the first two bytes from the string and prepending "20". I bet there are tons of software and hardware out there that use dirty hacks like that to "fix" it.
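A minimal sketch of the display-only workaround described above (the DD.MM.YYYY format and the function name are illustrative assumptions, not details from the actual project):

```python
def fix_rtc_year(date_str: str) -> str:
    # Work around an RTC stuck in 19xx: drop the "19" century digits
    # from the displayed year and prepend "20". This patches only the
    # string shown on screen, not the clock's internal counter.
    day_month, year = date_str.rsplit(".", 1)
    return f"{day_month}.20{year[2:]}"

print(fix_rtc_year("05.01.1921"))  # -> 05.01.2021
```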
noduerme 1 days ago [-]
It's lucky that leap years come every 4 years and that 2000, being divisible by 400, actually was one. Otherwise that wouldn't have worked?
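The coincidence noduerme is pointing at can be checked directly: a cheap RTC that only implements the "divisible by 4" rule inserts a 29 February in 1900, and the real Gregorian calendar has one in 2000, so the prepend-"20" hack stays day-accurate across the leap day. A sketch (`naive_is_leap` is a hypothetical name for what such hardware typically does):

```python
import calendar

def naive_is_leap(year: int) -> bool:
    # The simplistic rule cheap RTC hardware tends to implement.
    return year % 4 == 0

assert naive_is_leap(1900)        # naive hardware gives 1900 a Feb 29...
assert calendar.isleap(2000)      # ...and the real 2000 has one too
assert not calendar.isleap(1900)  # even though 1900 actually did not
```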
I wonder if part of the "no big deal" view comes from the difference between tech companies as they're most visible now - 'move fast and break things' - versus all the other industries that rely on computers.
It's easy to imagine a lot of tech companies today taking a 'fix on failure' approach, because they're built on the idea that you deploy changes quickly, so they can accept a higher risk of faults. It's harder to make changes in banks, airlines & power stations, thus they're more risk averse, and far more likely to invest resources in the kind of effort the author describes to find & fix issues in advance.
sebstefan 1 days ago [-]
I hope I'll be retired when its big brother MAX_INT day comes on January 19th 2038 :-)
ChrisArchitect 1 days ago [-]
As of today,
2038 Epochalypse countdown:
13 years, 11 days, 16 hours, 28 minutes
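The date comes straight from signed 32-bit time_t arithmetic: the counter overflows 2^31 - 1 seconds after the Unix epoch. A quick check:

```python
from datetime import datetime, timedelta, timezone

INT32_MAX = 2**31 - 1  # 2147483647, the largest signed 32-bit time_t value

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
rollover = epoch + timedelta(seconds=INT32_MAX)
print(rollover.isoformat())  # -> 2038-01-19T03:14:07+00:00
```

One second later, a 32-bit time_t wraps to -2^31, i.e. back to 13 December 1901.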
IshKebab 1 days ago [-]
I don't think anyone disputes that it was a real problem. It just wasn't the problem it was hyped up to be (and no, not because all the bugs were fixed).
That's obviously mainly the fault of the media trying to make the story more exciting, as they always do.
richardw 1 days ago [-]
“On 8 February 1999, while testing Y2K compliance in a computer system monitoring nuclear core rods at Peach Bottom Nuclear Generating Station, Pennsylvania, instead of resetting the time on the external computer meant to simulate the date rollover a technician accidentally changed the time on the operation systems computer. This computer had not yet been upgraded, and the date change caused all the computers at the station to crash. It took approximately seven hours to restore all normal functions, during which time workers had to use obsolete manual equipment to monitor plant operations.”
https://en.m.wikipedia.org/wiki/Year_2000_problem
We do not have a full list of problems that were averted. The only way you can state it was overblown is if you know what would have happened if we hadn’t fixed all those systems. There were definitely countries that didn't do much and had very few issues, but that doesn't prove that, e.g., the far more computerised West wouldn't have had them.
The only rational course was to check every system whose failure would cost more than checking it. It was insurance against a problem that had unknown levels of downside until the work was done.
IshKebab 1 days ago [-]
Yeah exactly. If that's the worst thing you can find that happened then it falls faaar short of what the media were saying.
richardw 1 days ago [-]
That’s not how logic works. You can’t prove something wouldn’t have been impactful by pointing at the few ill effects that remained after the downside had been reduced. The point is the downside was unknowable beforehand, and we don’t have a full list of all the fixes and what would have happened if they weren’t made.
One nuclear issue having occurred in what should have been a safe and controlled context demonstrates that there could have easily been 100 nuclear issues, all in one night. This is what happened because the system wasn’t patched yet. Extrapolate to banks, payment networks, shipping, defence.
Professionals don’t YOLO this shit. Many smart people were around making rational decisions.
Delphiza 1 days ago [-]
By what definition of 'hype'? It was absolutely a significant problem that needed to be 'hyped' in order to get people involved and businesses to spend money. I was involved in the periphery of Y2K, and saw significant budgets being released as a result of the hype. The OP's point is that because those budgets were spent (on the right things, hopefully), the failures did not occur, which means that some hype was necessary.
IshKebab 1 days ago [-]
Maybe you don't remember but the news at the time was talking about it as if it was going to be a nuclear apocalypse or something.
It's annoyingly hard to search for old newspapers & TV news (does anyone know of a good free site for that?). The first thing I found was this Y2K survival kit:
https://www.etsy.com/uk/listing/1824776914/vintage-y2k-survi...
> Futurists predict that riots will sweep through cities all around the world after Y2K knocks out electricity and destroys banking and commerce systems.
That's the sort of nonsense that was "promised", and to a large extent, exploited by Y2K consultants.
So when people say "it wasn't a problem", they aren't saying "you didn't have to do some IT work to make sure your phone system didn't go down" or whatever. They mean there were no planes falling out of the sky, trains crashing, riots & looting, etc.
pdejager 1 days ago [-]
The year2000(dot)com site was the clearing house for many of the non-hype articles from IT folks.
The Wayback Machine has it archived. I’d suggest looking at the copies of the site towards the end of 1999. A good example of such an article was ‘A Crock of Clocks’.
If you wanted to hear my take on y2k 20 years later? Check the podcast: y2k an autobiography
Cheers
zamalek 1 days ago [-]
One of the critics in the post (Anthony Finkelstein) very much claimed that it wasn't a real problem, it sounds like he might have had an audience too.
ZiiS 1 days ago [-]
Had you heard of him prior to this post though?
ZiiS 1 days ago [-]
There are always plenty of companies that don't believe a disaster will happen, even with ample evidence. The fact that none of their systems caused significant harm strongly implies others over invested.
dspillett 1 days ago [-]
> The fact that none of their systems caused significant harm strongly implies others over invested.
Or that their systems were not truly that significant…
One of the things people see, in hindsight, as overinvestment is large-scale investigations (codebase reviews, archeology of documentation for embedded hardware, etc.) that found little wrong. Once it is known that for [whatever system] there were only a couple of problems that were easy to fix, it feels like the effort was wasted: people stop seeing the risk of the unknown once they know.
Also, all that time/money investment wasn't just in the latter part of 1999, many systems were updated before then because they were tracking due dates months or years in advance. Furthermore there were a lot of things fixed (or replaced) that weren't fully Y2K issues but things that would crop up later because offsets had been used on two-digit years (49/50 was a common roll-over point instead of 99/00, but that was far from universal, with the matter being compounded by systems that talked to each other).
Some money was wasted. But I suspect, at least from a risk management perspective of knowing that things were going to be fine or mitigations were in place, that most of it would be considered well spent.
Hopefully we've learned the true lessons (no, I know we haven't!): don't take shortcuts for temporary gain in production, and don't assume systems you put in place now will be gone in a decade or more so you don't need to design for longer.
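The two-digit windowing dspillett describes (a 49/50 pivot instead of 99/00) can be sketched as follows; the pivot value and function name are illustrative, since real systems varied:

```python
def expand_two_digit_year(yy: int, pivot: int = 50) -> int:
    # Window a two-digit year: values below the pivot are taken as 20xx,
    # values at or above it as 19xx. Systems with different pivots that
    # exchanged dates with each other compounded the problem.
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_two_digit_year(49))  # -> 2049
print(expand_two_digit_year(50))  # -> 1950
```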
petesergeant 1 days ago [-]
I shared this mostly because I really enjoyed reading about the process there
Rendered at 20:04:05 GMT+0000 (UTC) with Wasmer Edge.