If companies practiced data minimisation, and end-to-end encrypted the customer data they don't need to see, fewer of these breaches would happen because there would be little worth breaking in for. But intelligence agencies insist on having access to innocent citizens' conversations.
beezlebroxxxxxx 6 days ago [-]
> But intelligence agencies insist on having access to innocent citizens' conversations.
That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.
Until the data breaches lead to serious $$$ impact for the company, the impact of these breaches will simply be waved off and pushed down to users. ("Sorry, we didn't protect your stuff at all. But, here's some credit monitoring!") Even in the profession of software development and engineering, very few people actually take data security seriously. There's lots of talk in the industry, but also lots of pisspoor practices when it comes to actually implementing the tech in a business.
Diggsey 6 days ago [-]
Companies already pay for cyber insurance because they don't want to take on this risk themselves.
In principle the insurance company then dictates security requirements back to the company in order to keep the premiums manageable.
However, in practice the insurance company has no deep understanding of the company and so the security requirements are blunt and ineffective at preventing breaches. They are very effective at covering the asses of the decision makers though... "we tried: look we implemented all these policies and bought this security software and installed it on our machines! Nobody could possibly have prevented such an advanced attack that bypassed all these precautions!"
Another problem is that often the IT at large enterprises is functionally incompetent. Even when the individual people are smart and incentivised (which is no guarantee) the entire department is steeped in legacy ways of doing things and caught between petty power struggles of executives. You can't fix that with financial incentives because most of these companies would go bankrupt before figuring out how to change.
I don't see things improving unless someone spoon-feeds these companies solutions to these problems in a low risk (ie. nobody's going to get fired over implementing them) way.
ta_1138 6 days ago [-]
The typical IT department in a large corporation is way too big to have reasonable visibility into what it manages. There's no way to build reasonable controls that work out when you have 50K programmers on staff. It's purely a matter of size.
Often the end result is having just enough red tape to turn a 2 week project into an 8 month project, and yet not enough as to make sure it's impossible for someone to, say, build a data lake into a new cloud for some reports that just happen to have names, addresses and emails. Too big to manage.
anyonecancode 5 days ago [-]
Which gets back to the original point, that the real answer is to minimize how much data is held in the first place. Controls will always be insufficient to prevent breaches. Companies and organizations should keep less data, keep it for less time, and try harder to avoid collecting PII in the first place.
gregw2 4 days ago [-]
I don't disagree with you but as someone who has thought a moderate amount about data security at a "bigco", I will point out something I haven't seen people really talk about...
Audit trails (of who did/saw what in a system) and PII-reduction (so you don't know who did what) are fundamentally at odds.
Assuming you are already handling "sensitive PII" (SSNs, payroll, HIPAA, credit card numbers) appropriately, which constitutes security best practice: PII reduction or audit trails?
tsimionescu 5 days ago [-]
Let's say the CEO agrees with you and is horrified by any amount of unnecessary data being stored.
How would they then enforce this in a large company with 50k programmers? This was what the previous post was discussing.
Not to mention, a lot of this data is necessary. If you're invoicing, you need to store your customers' names and many other kinds of sensitive data; in fact, you are legally required to do so.
josephg 5 days ago [-]
Culture change. The CEO can push for top down culture change to get people to care about this stuff. Make it their job to care. Engage their passion to care.
It’s not easy, but it can move the needle over time.
thayne 5 days ago [-]
That is easier said than done. To achieve it, effectively every employee that has any relation to data needs to be constantly vigilant about keeping PII to a minimum, and properly secured.
It is often much easier to use an email address or an SSN when a randomly generated id, or even a hash of the original data, would work fine.
I'm not saying that we shouldn't put more effort into reducing the amount of data kept, but it isn't as simple as just saying "collect less data".
And sometimes you can't avoid keeping PII.
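On the "random id or hash" point, a hedged sketch (the names and key handling here are illustrative, not any particular company's scheme): a keyed pseudonym is safer than a plain hash, since a bare SHA-256 of an SSN or email is trivially brute-forceable over the small input space.

```python
import hashlib
import hmac
import secrets

# In practice this key would come from a secrets manager, not be generated
# inline; it must be kept out of the data store that holds the pseudonyms.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    """Stable, non-reversible token usable as a join key instead of raw PII."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Same input always maps to the same token, so it still works for joins/dedup,
# but without the key an attacker can't enumerate SSNs to reverse it.
token = pseudonymize("alice@example.com")
assert token == pseudonymize("alice@example.com")
assert token != pseudonymize("bob@example.com")
```

The point is that the engineering cost of this over storing the raw identifier is nearly zero; the hard part is the organizational discipline to do it everywhere.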
thwarted 6 days ago [-]
There's another side to it, which you allude to with the credit-monitoring giveaways that follow data breaches. The whole reason the data is valuable is account takeover and identity theft, because identity verification uses publicly available information (largely public, or at least discoverable, even without breaches). But no one wants to put in the effort to do appropriate identity verification, and consumers don't want to be bothered with stricter identity verification hoops and delays---they'll just go to a competitor who isn't as strict.
So we could make the PII less valuable by not using it for things that attract fraudsters.
ganoushoreilly 6 days ago [-]
Hell in this instance, just replacing non EOL equipment that had known vulnerabilities would have gone a long way. We're talking routing infrastructure with implants designed years ago, still vulnerable and shuffling data internally.
Dalewyn 6 days ago [-]
The "problem" is no one cares and certainly doesn't want to pay for the costs, especially the end users. That EOL equipment still works; there are next to no practical problems for the vast vast vast vast vast vast vast majority of people. You cannot convince them that this is a problem (for them) worth spending (their) money on.
Even during the best of times people simply do not give a fuck about privacy.
Honestly, if there is a problem at all I would say it's the uselessness of the Intelligence Community when actually posed with an espionage attack on our national security. FBI and CISA's response has been "Can't do; don't use." and I haven't heard a peep from the CIA or NSA.
danudey 6 days ago [-]
Until companies are held liable for security failures they could have and should have prevented, there's no incentive for anyone to do anything. As long as the cost of replacing hardware, securing software, and hiring experienced professionals to manage everything is higher than the cost of suffering a data breach, companies aren't going to do anything.
I've seen the same thing at previous jobs; I had a lot to do and knew of a lot of security issues that could potentially cause us problems, but management wasn't willing to give me any more resources (like hiring someone else), despite increasing my workload and responsibilities for no extra pay. Surprise: one of our game's beta testers discovered a misconfigured firewall and a default password and got access to one of our backend MySQL servers. Thankfully they reported it to us right away, but... geez.
dstroot 6 days ago [-]
>The "problem" is no one cares and certainly doesn't want to pay for the costs, especially the end users
Well, I care. I’d pay a premium to a telco that prioritized security and privacy. But they are all terrible: hoovering up data, selling it indiscriminately, and not protecting it. If they all suck, then the default is to use the cheapest.
It’s definitely why I use Apple devices because I can buy directly from Apple and they don’t allow carriers to install their “junkware”.
khana 5 days ago [-]
[dead]
thayne 5 days ago [-]
That EOL equipment probably shouldn't be EOL though. Part of the blame should go to equipment makers that didn't bother to send out updates to fix the vulnerability in still functional equipment.
thayne 5 days ago [-]
Another issue is lack of education/training/awareness among developers.
A BS in CS has maybe one class on security, and then maybe employees have a yearly hour-long seminar on security to remind them to think about security. That isn't enough. And the security team and engineers that put the effort into learning more about security and privacy often aren't enough to guard against every possible problem.
mystified5016 6 days ago [-]
But AT&T and their 42,690 partners say they value my privacy :(
scrose 6 days ago [-]
They do value your privacy! They just don’t like to share how many cents it’s worth to them.
int_19h 6 days ago [-]
Apple seems to be willing to spend money on this kinda stuff. But the reason why they do this is because it allows them to differentiate their offering from the others, with privacy being part of the "luxury package", so to speak. That is - their incentive to do so is tied to it not being the norm.
Spooky23 6 days ago [-]
Apple and Google care about this because they handle more customer data and require more customer trust than most companies.
People were shitting a brick over a pretty minor change in photo and location processing at Apple. That’s because they don’t screw up like this.
int_19h 4 days ago [-]
The point is that Apple specifically goes out of its way to avoid having customer data in the first place.
(Google, on the other hand, is the opposite.)
But, as far as I can tell, the only reason why Apple does this is because privacy these days can be sold as a premium, luxury feature.
oooyay 6 days ago [-]
I work in internal tools development, aka platform engineering, and this is interesting:
> That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.
Frankly, any company that says they're a technology or software business should be building these kinds of systems. They can grab FOSS implementations and build on top, or hire people who build these kinds of systems from the ground up. There are plenty of people in platform engineering in the US who could use those jobs. There's zero excuse other than that they don't want to spend the money to protect their customers' data.
stackskipton 6 days ago [-]
This is not a tools problem, this is incentive and political problem.
Telecoms will not get fined for this breach, or at least not fined an amount that is meaningful, so they are not going to care.
oooyay 5 days ago [-]
I'm not sure why it's either/or to you. Seems to me like we're talking about the same problem but stated from two different perspectives.
Politics has historically incentivized job creation.
stackskipton 5 days ago [-]
Because you came in acting like an Internal Developer Platform would be a fix for their problems when it won't be. In fact, I doubt the lack of an IDP is their problem.
As an SRE, I'm just over everyone running around acting like another tool is going to solve the problem. It's not; incentives need to exist for people not to be completely terrible at their jobs.
Also, I guess I should admit, I have strong aversion to IDPs. They always become some grue that eats me.
oooyay 4 days ago [-]
An IDP is not a secrets management tool or vice versa. IDPs are more like connectors of your internal tools/platforms. Their key metrics have more to do with toil reduction and velocity, but they can certainly solve the kinds of problems that lead to a company thinking they need a group of people focusing solely on reliability.
> Also, I guess I should admit, I have strong aversion to IDPs. They always become some grue that eats me.
I am an SRE. I stopped using that title professionally some time ago and started focusing on what makes companies reach for SRE when the skill set is the same as a platform engineer's.
Apple argued for years that a mandatory encryption-bypassing, privacy-bypassing backdoor for the government could be used by malicious entities, and the government insisted it was all fine, don't worry. Now we're seeing those mandatory encryption-bypassing, privacy-bypassing government backdoors being used by malicious entities, and suddenly the FBI is suggesting everyone use end-to-end encrypted apps because of the fiasco they caused.
But don't worry, as soon as this catastrophe is over we'll be back to encryption is bad, security is bad, give us an easy way to get all your data or the bad guys win.
matthewdgreen 5 days ago [-]
The story is a little longer than this. A bunch of folks from academia and industry have been fighting the inclusion of wiretapping mandates within encrypted communications systems. The fight goes back to the Clipper chip. These folks made the argument that something like Salt Typhoon was inevitable if key escrow systems were mandated. It was a very difficult claim to make at the time, because there wasn’t much precedent for it - electronic espionage was barely in its infancy at the time, and the idea that our information systems might be systematically picked open by sophisticated foreign actors was just some crazy idea that ivory tower eggheads cooked up.
I have to admire those pioneers for seeing this and being right about it. I also admire them for influencing companies like Apple (in some cases by working there and designing things like iMessage, which is basically PGP for texts.) It doesn’t fix a damn thing when it comes to the traditional telecom providers, but it does mean we now have backup systems that aren’t immediately owned.
jrexilius 6 days ago [-]
That's not exactly true. The FCC's 911 rules and other government laws require the telcos to have access to location data and to record calls/texts for warrants. The problem is both regulatory and commercial. It is unrealistic to expect either the general public or the government to accept real privacy for mobile phones. People want LE/firefighters to respond when they call 911. Most people want organized crime and other egregious crimes to be caught and prosecuted, etc. etc.
salawat 6 days ago [-]
Nonsense. I kindly informed my teenage niece that all her communications on her phone should be considered public, explained the nature of Lawful Interception, and laid out the tradeoffs she was opted into for the sake of Law Enforcement's convenience.
She was not amused or empathetic to their plight in the slightest. Population of at least 2 I guess.
jrexilius 6 days ago [-]
Make that population of 3. I'm not a fan either. But I'm also realistic. I treat the phone as what it is: malicious spyware. But I realize that most people want the convenience and the safety (of sorts) of dialing 911 and getting the right dispatch.
fastily 6 days ago [-]
If law enforcement actually did their jobs, this would be more understandable. I don’t know about you or others’ experiences, but when I’ve called the police to report a crime (e.g. someone casually smashing car windows at 3 in the afternoon and stealing anything that isn’t bolted down), they never show up and usually just tell me to file a police report, which of course never gets actioned. Seems pretty obvious to me that weakening encryption/opsec to “let the good guys in” is total nonsense and that there are blatant ulterior motives at play. To be clear, I’m a strong proponent of good security practices and end-to-end encryption.
nullityrofl 6 days ago [-]
There's not nearly enough public information to discern whether or not this had anything to do with stored PII or lawful interception. All we know is that they geolocated subscribers.
The SS7 protocol provides the ability to determine which RNC/MMC a phone is paired with at any given time; it's fundamental to how the network functions. A sufficiently sophisticated adversary, with sufficient access to telephony hardware, could simply issue those protocol instructions to determine the location.
nozzlegear 6 days ago [-]
> and end-to-end encrypted their customers' data
Somewhat of a tangent: does anyone have any resources on designing/implementing E2E encryption for an app where users have shared "team" data? I understand the basics of how it works when there's just one user involved, but I'm hoping to learn more about how shared-data scenarios (e.g. shared E2E group chats like Facebook Messenger) are implemented.
Exactly the kind of thing I was looking for, thank you! And thanks for the tip about "double ratchet protocol," that helps a ton.
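For anyone else curious: the "sender keys" pattern that group messengers are generally described as using can be sketched very roughly like this. This is toy code under stated assumptions: the SHA-256 keystream XOR stands in for real authenticated encryption, and the pairwise secrets stand in for actual double-ratchet sessions.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric 'cipher' (XOR with a hash keystream). NOT real crypto."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class Member:
    def __init__(self, name: str):
        self.name = name
        self.pairwise = {}    # peer name -> shared secret (set up out of band)
        self.group_keys = {}  # sender name -> that sender's group key

    def share_group_key(self, members):
        """Generate a group key and wrap it for each member pairwise."""
        gk = secrets.token_bytes(32)
        self.group_keys[self.name] = gk
        return {m.name: keystream_xor(self.pairwise[m.name], gk) for m in members}

# Alice and Bob already share a pairwise secret (in reality: a ratchet session).
alice, bob = Member("alice"), Member("bob")
alice.pairwise["bob"] = bob.pairwise["alice"] = secrets.token_bytes(32)

# Alice distributes her group key; Bob unwraps his copy.
wrapped = alice.share_group_key([bob])
bob.group_keys["alice"] = keystream_xor(bob.pairwise["alice"], wrapped["bob"])

# Now one encryption of a message reaches the whole group.
msg = b"standup at 10"
ct = keystream_xor(alice.group_keys["alice"], msg)
assert keystream_xor(bob.group_keys["alice"], ct) == msg
```

The design win is that a message to an N-person group is encrypted once under the sender's group key, rather than N times pairwise; only the (small) key material is sent pairwise, and it gets rotated when membership changes.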
causal 6 days ago [-]
Intelligence agencies may use that data, but there are plenty of financial incentives to keep that data regardless. Mining user data is a big business.
api 6 days ago [-]
The best solution to privacy is serious liability for losses of private customer data.
Leak or lose a customer's location tracking data? That'll be $10,000 per data point per customer please.
It would convert this stuff from an asset into a liability.
dylan604 6 days ago [-]
All of these claims for serious fines, yet no indication of where the fine is to be paid. Fines mean the gov't is getting the money, yet the person whose data was lost still gets nothing. Why does the person that was actually harmed get nothing while the gov't, who did nothing, gets everything?
oxide 6 days ago [-]
Even better - give the customer the $10k.
nyc_data_geek1 6 days ago [-]
This exactly. Data ought to be viewed as fissile material: potentially very powerful, but extremely risky to store for long periods. Imposing severe penalties is the only way to get there, as the current slap on the wrist plus ID theft/credit monitoring offer is an absurd slap in the face to consumers, who are inundated with new and better scams from better-equipped scammers every day.
The current state is clearly broken and unsustainable, but good luck getting any significant penalties through legislation with a far-right government.
Terr_ 6 days ago [-]
Yeah, take an externality, make it priceable, and then "the market" and amoral corporations will start reacting.
Same principle as fines for hard-to-localize pollution.
magic_smoke_ee 5 days ago [-]
Corporations' motivations rarely coincide with deep, consistent systems strategy, and largely operate reactively and in a manner where individuals get favorable performance reviews for adding profitable features or saving costs.
roenxi 5 days ago [-]
They are appropriately motivated in this case, carriers would surely rather have no idea whatsoever about the data they are carrying. The default incentive is they'd really rather avoid being part of any compliance regimes or law enforcement actions because that sort of thing is expensive, fiddly and carries a high risk of public outcry.
If they had the option, the telecommunication companies would love to encrypt traffic and obscure it so much that they have no plausible way of figuring out what is going on. Then they can take customer money and throw their hands up in honest confusion when anyone wants them to moderate their customers' behaviour.
They don't because that would be super-illegal. The police and intelligence services demand that they snoop, log, and avoid data-minimisation techniques. It is entirely a question of regulatory demand and time before these sorts of breaches happen; if the US government demands the data, then sooner or later the Chinese government will get a copy too. I assume that is a trade-off the US government is happy to make.
2OEH8eoCRo0 6 days ago [-]
While I agree, isn't this a degree of victim blaming? They were hacked by a state actor and every thread ignores the elephant in the room.
immibis 6 days ago [-]
They had a backdoor. Someone used the backdoor. You stick your hand in a running lawnmower and it gets chopped off. Nobody is surprised.
Who put the backdoor there? The US government did.
2OEH8eoCRo0 5 days ago [-]
No.
A telecommunications carrier may comply with CALEA in different ways:
The carrier may develop its own compliance solution for its unique network.
The carrier may purchase a compliance solution from vendors, including the manufacturers of the equipment it is using to provide service.
The carrier may purchase a compliance solution from a trusted third party (TTP).
CALEA is a mandate from the U.S. Government to backdoor all telecom infrastructure for U.S. LE and intelligence purposes.
peutetre 5 days ago [-]
> But intelligence agencies insist on having access to innocent citizens' conversations.
Intelligence agencies also stockpile software vulnerabilities that they don't report to the vendor because they want to exploit the security flaw themselves.
We'll never have a secure internet when it's being constantly and systematically undermined.
ndsipa_pomu 5 days ago [-]
Yes, but spies are going to spy, so we should focus on getting software built to have security by design and not just keep out-sourcing to the cheapest programmers who don't even know what SQL injection is.
Currently, with proprietary software, there's an incentive for companies to not even acknowledge bugs and it costs them money to fix issues, so they often rely on security through obscurity which is not much of a solution.
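On the SQL injection point, the whole bug and its fix fit in a few lines (stdlib sqlite3 here; the table and input are made up for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

evil = "' OR '1'='1"

# Vulnerable: string concatenation lets attacker input rewrite the query.
rows = db.execute("SELECT name FROM users WHERE name = '" + evil + "'").fetchall()
assert len(rows) == 2  # the injected OR clause matched every row

# Safe: a placeholder keeps the input as data, never as SQL.
rows = db.execute("SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
assert rows == []  # no user is literally named "' OR '1'='1"
```

It's a solved problem in every mainstream database driver, which is what makes its continued prevalence an incentives story rather than a tooling one.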
nixosbestos 6 days ago [-]
Meanwhile US banks, Venmo, PayPal, etc all insist on using "real" phone numbers as verification.
Funny that Venmo won't let me use a voip number, but I signed up for Tello, activated an eSIM while abroad and was immediately able to receive an SMS and sign-up. For the high barrier cost of $5. Wow, such security. Bravo folks.
fnordpiglet 6 days ago [-]
These stem from a requirement to know you as a person in some verifiable way. These are legal and regulatory requirements but the laws and requirements are there to ensure finserv can meaningfully contain criminal activity - fraud, theft, money laundering, black market, terrorism financing, etc. It turns out by far the most effective measure is simply knowing who the principals are in any transaction.
Some companies have much lower thresholds for their KYC, but end up being facilitators of crime and draw scrutiny over time by both their more regulated partners and their governments.
I’d note that the US is relatively lax in these requirements compared to Singapore, Canada, Japan, and increasingly the EU. In many jurisdictions you need to prove liveness, do photo verification, and sometimes do video interviews with an agent showing your documents.
photonthug 6 days ago [-]
> know you as a person in some verifiable way .. the laws and requirements are there to ensure .. knowing who the principals are in any transaction.
Except that person you’re responding to explains succinctly how this is security theater that accomplishes little and ultimately is just a thinly veiled tactic for harassing users / coercive data collection. And the person above that is commenting that unnecessary data collection is just an incentive for hackers.
Comments like this just feel like apologism for bad policies, at best. Does anyone really think that people need to be scrutinized because most money laundering is small transactions from individuals, or, is it still huge transactions from huge customers that banks want to protect?
nixosbestos 3 days ago [-]
Let me make it even more clear. I registered from [South American country]. Called from a US VoIP number. Told them I was in [US State]. They called my bluff. I clarified exactly what I was doing and they immediately approved the line. Took less than a minute.
fnordpiglet 2 days ago [-]
I’m not sure I claimed simple phone number collection requirements are necessarily good policy or that they’re effective. I did note that other regimes have more draconian but more effective measures. I was explaining the provenance of such requirements - and that the base motivation is KYC. Having been in the industry for a long time, from small fintechs to massive institutions, I’ve never seen any place that’s intentionally harassing or coercive - in fact the pressure is towards minimizing requirements and easing onboarding/KYC as much as they can get away with. However, this also turns into a farcical underinvestment in UX, because management often believes that by ignoring the function and turning the thumbscrews on their KYC teams they can somehow make it better rather than worse - worse to the extent of appearing harassing and coercive, or of exposing legit users to fraud and hacking.
The issue, though, boils down to governments not wanting the financial infrastructure in their jurisdiction to allow unfettered crime. I’ve never seen a single government (granted, I’ve never seen what happens in extremely oppressive regimes, as we don’t generally do business there due to sanctions controls) that actively collects KYC data outside of large transactions; the regulations exist to ensure a minimum baseline of KYC so the companies themselves can comply and reduce their own losses and instability, since someone is often liable in fraud, and in money laundering or sanctions evasion some institution is subject to fines for facilitation.
But to be frank, I think very little of what’s done is materially successful against competent criminals, and the consequence of being caught is usually just being blocked until they find a way around. To that end it’s not so much security theatre as compliance theatre. On the other hand, it does act as a high-pass filter, as most fraud and financial crime is NOT competent. By and large, retail finserv is a minimization effort, not a prevention effort.
The regulations that are effective at prevention are usually so restrictive and so difficult to implement that they’re absurd for both the finserv to implement and for the participants to get through the hurdles.
I don’t know that there are any perfect solutions, and what exists is generally dumb, but the intentions are at the core well intended. It’s foolish, though, to look at something as complex as financial infrastructure and wave it away as harassment and coercion rather than well-intentioned incompetence.
codedokode 6 days ago [-]
A phone number is not an identity document, and you can rent a number cheaply on the black market. Also, there should be no verification for small amounts of money. We can use cash anonymously, so why can't we transfer money anonymously?
Andrex 6 days ago [-]
> In many jurisdictions you need to prove liveliness, do photo verification, sometimes video interviews with an agent showing your documents.
When vtuber-esque deepfakes become trivial for the average person, I wonder what the next stage in this cat-and-mouse becomes. DNA-verification USB dongles?
krapht 6 days ago [-]
Why go straight to dystopia when notaries public exist?
lazide 6 days ago [-]
Online notaries have been a thing for awhile now. Don’t worry, we can still have dystopias with Notaries.
Electricniko 6 days ago [-]
The DNA-collecting businesses have already been hacked.
vkou 6 days ago [-]
Maybe you could just, you know, show up to a bank branch? Like people have done for centuries?
brendoelfrendo 6 days ago [-]
Physical businesses? The horror! Won't someone think of the fintechs?
BenjiWiebe 6 days ago [-]
Or what if I live in a rural area and have very few local branch banks available?
I actually had an issue with this and ended up sending a notarized letter by snail mail, since I didn't feel like making a special 1hr each way trip during business hours to the closest branch.
vkou 6 days ago [-]
> Or what if I live in a rural area and have very few local branch banks available?
Then you have to be ready to accept that there are advantages and disadvantages to your choice of where you live, and that is one of the latter.
There's a reason rural property is so cheap. It comes with a lot of disadvantages and inconveniences and costs that city-dwellers don't need to pay.
datavirtue 6 days ago [-]
City taxes are a never ending bitch.
op00to 6 days ago [-]
There is no right to not be inconvenienced by living in a remote area in any country I’m aware of.
spookie 5 days ago [-]
If a country does not strive to make good use of all its land and attempt to better the lives of its people why are there wars? Clearly they're fine with their top 3 cities. /s
Seriously, you see this in any country of any size. Remote may just mean 300km/186mi from the coast. Politicians go where the votes are, of course, but this just means disregarding rural areas is a self-fulfilling prophecy. The more you do it, the more remote they become.
afh1 6 days ago [-]
You can, at the same time, verify a person's identity upon opening the account, as you mentioned with documents, and use a TOTP MFA instead of SIM-based authentication. If regulators require SIM-based authn, then it's just bad policy, which should come to no one's surprise when it comes to government regulation. Finally, KYC is for the IRS. The illusion of safety makes a good selling point, though.
_DeadFred_ 6 days ago [-]
US regulators don't normally specify down to 'require SIM-based authn'. Instead they give vague directives that companies have to determine their own implementation for meeting. And the implementation needs to be blessed by corporate AND insurance company lawyers, which too often ends up meaning those lawyers dictate the implementation.
terribleperson 6 days ago [-]
My google voice number is unlikely to be stolen from me, but instead I have to use a 'real' phone number that could be compromised by handing cash to an employee at a store.
One time a company retroactively blocked VOIP numbers, which was really stupid.
Krasnol 6 days ago [-]
> My google voice number is unlikely to be stolen from me
I'd say that with Google, chances are that they just stop offering the service.
MetaWhirledPeas 6 days ago [-]
When Google Voice was brand new I snagged me a number. (Since lost because I did not respond to a prompt to keep it alive, or something?) I wonder if they anticipated the cost of keeping those around for decades. Managing someone's personal phone number is a solemn commitment that you can't just drop willy-nilly.
thfuran 6 days ago [-]
The only solemn commitment Google has is to the bottom line.
MetaWhirledPeas 6 days ago [-]
Aren't they still supporting old Google Voice numbers though? I don't see how they could be making any money on that.
Uvix 5 days ago [-]
Only US domestic calls are free, international calls have a per-minute charge.
thfuran 6 days ago [-]
That's one of their older services. I assume they really like the data they get from it.
kyrra 6 days ago [-]
This is why I like Google Fi. It is much harder to do an account takeover of a Google Fi number compared to most telcos. The attacker would have to take over the Google account, which seems to be harder to do.
lokar 6 days ago [-]
I agree, and also use Fi
But, I worry about what happens if I somehow get locked out of the account…
jopsen 6 days ago [-]
Verifying people after account loss/compromise is hard.
So which would you prefer:
(A) A low-level customer service representative can restore your access, but said representative is arguably susceptible to social engineering and other human weaknesses.
(B) Your account can be protected by a physical 2FA key (YubiKey), but in the case of a lost or compromised account, the recovery process is hard to navigate and may not yield successful recovery.
In the case of (A) you have little security. In the case of (B) you can do a LOT to prevent account loss, but if bad things happen (whether your fault or not) you are locked out by default.
From a privacy point of view, I'm not sure that (B) is such a bad option.
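For what it's worth, the middle ground most services land on between (A) and (B) is a software second factor like TOTP (RFC 6238): recoverable if you back up the seed, unrecoverable if you don't. A minimal sketch using only the Python standard library (function names are mine):

```python
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter,
    then dynamic truncation to a short decimal code."""
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(key, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP over the current 30-second time window."""
    now = time.time() if at is None else at
    return hotp(key, int(now // step), digits)
```

The RFC 6238 test vectors (e.g. T=59 with the ASCII key `12345678901234567890` yielding the 8-digit code `94287082`) make this easy to check against a real authenticator app.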
e40 5 days ago [-]
You can mitigate (B) by using your own domain with Google Fi and the basic workspace account. That way, if you are locked out you can switch providers taking your domain with you.
jopsen 4 days ago [-]
You still lose stored data, your phone number, etc.
But you could make the argument you should do backup of cloud services, the same way you do backup of hard drives.
e40 3 days ago [-]
True, but my Google Fi is attached to a free gmail account (because there is NO way to attach it to a Workspace account!!).
For my Workspace account, I backup with Google Takeout every 2 months to Backblaze B2. I also sync (with rclone) My Drive to a local directory, which is weekly uploaded to B2.
9cb14c1ec0 6 days ago [-]
We need both, clearly advertised for what they are, and then everyone can make their own risk calculus.
edoceo 6 days ago [-]
Just post on socials (that you can still access) about being locked out and then hope for the best?
lokar 6 days ago [-]
Well, for now, I still have former co-workers there who can help, but that won’t last forever.
kyrra 6 days ago [-]
For the most part, the "have a friend at Google" doesn't help anymore. They even tell us googlers to use the external process when our account gets locked.
bushbaba 6 days ago [-]
Because that real phone number is tied to an IMEI number, which can be used to track your historical and real-time location from telco data.
betaby 6 days ago [-]
And yet it is 'impossible' for the police to recover a stolen iPhone.
ceejayoz 6 days ago [-]
It’s entirely possible. They just don’t care.
dboreham 6 days ago [-]
Unrelated. Tracking data is server-side, not secret to the phone.
danlugo92 6 days ago [-]
WhatsApp just retroactively blocked Google Voice numbers recently.
rwmj 6 days ago [-]
That's nothing to do with security, just Meta wanting to know everything about you / being annoyed that another company has that data instead of them.
immibis 6 days ago [-]
Security of shareholder value!
BenjiWiebe 6 days ago [-]
Knock on wood, mine still works. Please, any Whatsapp/Meta engineers, don't go specifically disable mine now that you read this comment.
einsteinx2 6 days ago [-]
How recently?
mjevans 6 days ago [-]
Blanket denial is the issue.
What's needed is a PROCESS for verifying that the number isn't used for fraud, and then allowing its use. I don't know, maybe the fact that I've been a customer for YEARS, use that number, and have successfully done thousands of dollars in transactions over the platform without any abnormal issue?
fasteo 6 days ago [-]
Does Tello require KYC? That is, is the eSIM linked to an actual identity?
At least in Europe (PSD2), that's the key to accepting a phone number as a 2FA method.
rsync 6 days ago [-]
No KYC with Tello or USMobile.
All of my 2FA Mules[1] are USMobile SIMs attached to pseudonyms which were created out of thin air.
It helps a lot to run your own mail servers and have a few pseudonym domains that are used for only these purposes.
I bought a Tello eSIM to use for my Rabbit R1. I'm in the USA and was not required to provide any KYC; I received a (213) LA area code number. I recommend Tello so far.
BenjiWiebe 6 days ago [-]
Another cool thing that some companies do: refuse to deal with me because the family business account is in my dad's name, despite me knowing all the correct information to pretend to be my dad.
Like, the only reason I don't answer the phone and say "this is <Dad's name>", is because I'm honest. You'll never keep a bad guy out that already knows all the information that you ask for - he'll just lie and claim to be the business/account owner.
codedokode 6 days ago [-]
Technically they might be right, because your father might not trust you to access the account, so you need some kind of written permission.
> he'll just lie and claim to be the business/account owner.
He can lie, but he doesn't have another person's passport to prove his lies.
luckylion 6 days ago [-]
That written permission is worthless unless notarized & verified (which isn't going to happen for ordinary things) because you can just write it yourself.
And you don't need a passport. I've never met a company that will require full KYC-level video-identification with you on every call. You say that you're you (it doesn't matter whether you actually are you), you give them the secret code and they're happy.
toast0 6 days ago [-]
> For the high barrier cost of $5. Wow, such security. Bravo folks.
$5 is at least 5x the cost of a voip number. I'm not a bank, but if I'm spending money to verify you control a number, I feel better when you (or someone else) has spent $5 on the number than if it was $1 or less.
rsync 6 days ago [-]
"... but if I'm spending money to verify you control a number, I feel better when you (or someone else) has spent $5 ..."
This is exactly it.
All of these auth mechanisms that tie back to "real" phone numbers and other aspects of "real identity" are not for you - they are not for your security.
These companies have a brutal, unrelenting scam/spam problem that they have no idea how to solve and so the best they can do is just throw sand in the gears.
So, when twilio (for instance) refuses to let you 2FA with anything other than tracing back to a real mobile SIM[1] (how ironic ...) it is not to help you - it is designed to slow down abusers.
[1] The "authy" workflow is still backstopped by a mobile SIM.
zahlman 6 days ago [-]
>All of these auth mechanisms that tie back to "real" phone numbers and other aspects of "real identity" are not for you - they are not for your security.
>These companies have a brutal, unrelenting scam/spam problem that they have no idea how to solve and so the best they can do is just throw sand in the gears.
Sure does a great job for all the various online social media places that ostensibly have nothing to do with transacting money, still want my phone number, and still get overrun with spam and (promotion of) scams....
toast0 5 days ago [-]
It's a whole bunch of tradeoffs; requiring a working, non-voip phone number does raise the cost for abusers, but it's not enough to make spam unprofitable.
Requiring a deposit would be more direct, but administration of deposits would be a lot of work, and you have an uphill battle to convince users to pay anything, and even if they want to pay, accepting money is hard. And then after all that, some abusers will use your service to check their stolen credit cards.
Basically it comes down to: the cost of acceptable levels of fraud < the cost of eliminating all fraud.
There are processes that would more or less eliminate all fraud, but they are such a pain in the ass that we just deal with the fraud instead.
nixosbestos 3 days ago [-]
Okay. So let me just pay an "application fee" or some such instead of making me jump through hoops.
I don't care. I know it's a numbers game. I know they don't care about me. But companies absolutely lose my business because of this bullshit.
lazide 6 days ago [-]
Also, that is clearly a workaround that took some research to do, i.e. you're probably in the top 1% of the population from a 'figuring out workarounds' perspective.
VoIP is so well known (and automated) that, even at $0.10 a number, it would be an order of magnitude easier to abuse.
Banks are always slow and behind the times, because they are risk averse. That has pros and cons.
somat 6 days ago [-]
It makes me think of Linux distros.
There are the ones that closely follow software updates, and you get to complain that things are breaking all the time.
And there are the stable distros, where you get to complain about how old and out of date everything is.
iszomer 6 days ago [-]
I still have about $15 of international calling credit on a GV number I hardly use anymore with no option of transferring or using that balance on a different platform like Google's Play store.
mikeweiss 6 days ago [-]
Can we talk about how Venmo doesn't even let you login from abroad... And their app doesn't provide a decent error message it just 403s.
blackeyeblitzar 6 days ago [-]
The problem is that VOIP numbers, from companies like Bandwidth, are frequently used to perform various frauds. So many financial services ban them because the KYC for real numbers is much better.
silisili 6 days ago [-]
I have more bank and credit accounts than the average person, probably: 5 bank accounts, and 8 credit accounts I can remember as active off the top of my head.
Every single one works with GVoice, except Venmo. Chase, Cap1, Fidelity, etc. Not small players.
So while I think you make a fair enough argument for sure, it doesn't seem to be the case when nobody else does it, and makes Venmo seem like a pain in the arse.
BenjiWiebe 6 days ago [-]
My GVoice number works with Chase, Citi, Discover, AMEX, Capital One. It does not work with Wells Fargo, despite Wells Fargo allowing you to sign up with it. Took a notarized snail-mail letter to fix that one.
zmgsabst 6 days ago [-]
In practice, these companies get a phone number I possess for 1-3 months on a travel SIM rather than the VOIP number I’ve steadily maintained for two decades and by which the US feds know me (because they don’t care).
axus 6 days ago [-]
Don't all financial institutions need some real identification with physical address to sign up? Phone numbers / email addresses should be for communication, not tracking.
mellow-lake-day 6 days ago [-]
KYC = know your customer?
FergusArgyll 6 days ago [-]
Yes, and AML = Anti Money Laundering
tempodox 6 days ago [-]
Yes.
dr_dshiv 6 days ago [-]
It has nothing to do with Kentucky’s Yummiest Chicken, if that’s what you were thinking.
immibis 6 days ago [-]
Because VOIP requires a verified Google account and phone number, while traditional numbers can, uh, be purchased anonymously at the corner store.
atonse 6 days ago [-]
Depends on which country. In places like India that’s not possible. Your cell phone number becomes a de facto identity so they require all kinds of identity documents to get a SIM.
taneliv 6 days ago [-]
So there's a cottage industry of middlemen on the streets who will set you up with a SIM card, or a travel ticket, or whatever, for people who don't have identity documents. (Or in some cases don't want to reveal their identity, but I reckon this is less typical.) Sure, you pay extra for the service; the middleman takes 10%, 30%, or 500%, and the identity is then with that person, or their fraudulent papers. I don't know how it works in detail.
int_19h 6 days ago [-]
It's usually the other way around - first countries introduce laws that require ID to buy a cell phone ("because criminals"), and then the phone number starts getting used as a de facto identity.
atonse 2 days ago [-]
Ah yeah, good point. That makes more sense.
baobun 6 days ago [-]
> while traditional numbers can, uh, be purchased anonymously at the corner store.
That is a closing window, and the case in fewer and fewer places. It won't be long until most people would need to fly across the globe or get involved with organised crime to pull that off...
freeopinion 6 days ago [-]
You keep using that word. I do not think it means what you think it means.
cookiengineer 6 days ago [-]
The same level of security that shitter's checkmark introduced. All checkmark accounts are fake, and the ones without are real people, I guess?
The idea that scammers don't have digital money lying around just waiting to be spent on something is so absurdly out of touch with how everything in cyber works.
disqard 6 days ago [-]
Corporations are "people".
Corporations "eat" money.
Entities that can feed a corporation, are treated as peers, i.e. "people".
Thus, on shitter, if you can pay, you are a person (and get a blue checkmark).
withinboredom 6 days ago [-]
Oh, nice allusion. If corporations eat money and you're not paying, i.e. it's a free service, you are prey.
immibis 6 days ago [-]
You aren't even the product. You're the raw material.
freeqaz 6 days ago [-]
I work in security and this surprised me to see. Not that these companies got hacked, but the scope of the attack being simultaneous. Coordinated. Popping multiple companies at the same time says something about the goals the PRC has.
It risks a lot of "noise" to do it this way. Why not just bribe employees to listen in on high profile targets? Why try to hit them all and create a top level response at the Presidential level?
This feels optics-driven and political. I'm not sure what it means, but it's interesting to ponder on. Attacking infrastructure is definitely the modern "cold war" of our era.
mike_d 6 days ago [-]
This is a total yawn, and the norm. It looks coordinated because the team who focuses specifically on telecoms had their tools burned. Pick pretty much any sector of interest and the intelligence services of the top 50 countries all have a team dedicated to hacking it. The majority of them are successful.
Sadly even most people in security are woefully unaware of the scope and scale of these operations, even within the networks they are responsible for.
The "noise" here was not from the attacker. They don't want to get caught. But sometimes mistakes happen.
0xbadcafebee 6 days ago [-]
Interestingly, some of those teams dedicated to hacking are either private sector or a branch that nobody has heard of. I once interviewed for a company whose pitch to me was basically "we get indemnity to hack foreign telcos" and "we develop ways to spy that nobody has thought of". That was 20 years ago
hooo 6 days ago [-]
What do those companies look like externally? Are they publically known?
0xbadcafebee 6 days ago [-]
Some are specialized, some are diversified. Definitely public, I believe they all have to be listed on fedgov's contractor list? Some are obvious weapons contractors, some aren't (like extensions of big-name universities). If you see job listings for weapons development, cyber ops, secret-clearance software dev, cryptography, etc, that's a clue.
0xbadcafebee 6 days ago [-]
It probably wasn't a simultaneous attack, they probably penetrated over a long period of time. The defenders just found them all simultaneously (you find one, you go looking for the others)
> Why not just bribe employees to listen in on high profile targets?
Developing assets is complicated and difficult, attacking SS7 remotely is trivial, especially if you have multiple targets to surveil
metalman 6 days ago [-]
Given the noise about Huawei and spy cranes, it would be interesting to know if the "attacks" were against any and all telecom equipment, or just the Chinese stuff, not that I think it would make any difference.
The daylight (heh heh!) trolling for telecom and power cables is most definitely a (he ha!) signal aimed at Western politicians.
Another one is that while there are claims of North Korea taking crypto, no identifiable victim has stood up.
Western politicians are attempting to redirect the whole world's economy based on saving us from the very things that are happening just now. So it does seem more than coincidental.
immibis 6 days ago [-]
Aren't they attacks against the US government mandated backdoors in all equipment?
buildbot 6 days ago [-]
I think this is the perfect time to do something like this, in the midst of a presidential transition. Regardless of the outgoing and incoming politics, things will be more chaotic. While it won't be unnoticed, it's going to be down the lists of things to deal with probably, and possibly forgotten.
marcosdumay 6 days ago [-]
The most incompetent crook is the first one to get caught.
There's a huge selection bias factored into what attacks make the news.
alexpotato 6 days ago [-]
Incompetence is just one dimension on odds of being caught.
You could be an incredibly competent and highly motivated crook and bad luck in the form of an intern looking at logs or a cleaning lady spotting you entering a building could take you down.
Zigurd 6 days ago [-]
I can't confirm it, because the descriptions of the hack are unclear, but as more network operators say they've been hacked, it becomes more and more likely the Chinese got in by attacking lawful intercept. This could happen in various ways: bribe or blackmail someone in law enforcement with access to a lawful intercept management system (LIMS), a supply-chain attack on an LIMS vendor, hacking the authentication between networks and LIMS, etc.
If it is an LI attack the answer to which networks are compromised is: All of them that support automated LI.
That's a nasty attack because LI is designed to not be easily detectable because of worries about network operators knowing who is being tapped.
foobiekr 6 days ago [-]
More likely they got access and then snooped any of the many insecure protocols used to manage network devices.
Anyone who has ever worked in networking will understand what I mean.
The networking industry is comically bad. They use SSH but never, ever verify host keys; they use agent forwarding; they use protocols like RADIUS and SNMP, which are completely insecure once you pop a single box, along with the almost-always-global shared secret. Likewise the other protocols.
Do they use secure boot in a meaningful way? Do they verify the file system? I have news for you if you think yes.
It’s kind of a joke how bad the situation is.
Twenty years ago someone discovered you could inject forged TCP resets to blow up BGP sessions. What did the networking industry do? Did they institute BGP over TLS? They did not. Instead they added TCP MD5 hashing (RFC 2385, from 1998: https://datatracker.ietf.org/doc/html/rfc2385) using a shared secret, because no one in networking could dream of using PKI. Still true today. If deployed at all, which it usually isn't.
If you want to understand the networking industry, consider only this: instead of acknowledging how dumb the situation is and just using TLS, we instead got TCP-AO in 2010 (https://datatracker.ietf.org/doc/html/rfc5925), which is almost as dumb as 2385 and just as bad in actual deployment, because they kept the same deployment model (the shared tuple). Not all vendors that "support" 5925 support the whole RFC.
As an aside this situation is well known. People have talked about it for literal decades. The vendors have shown little to no interest in making security better except point fixes for the kind of dumb shit they get caught on. Very few security researchers look at networking gear or only look at low end junk that doesn’t really matter.
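To make the RFC 2385 point concrete: the "signature" is just an MD5 over the segment with the shared secret appended at the end, so anyone holding the secret can forge valid-looking segments. A rough sketch in Python (function name and field handling are mine; per the RFC, TCP options are excluded from the digest):

```python
import hashlib
import socket
import struct

def tcp_md5_digest(src_ip, dst_ip, tcp_header, payload, key):
    """RFC 2385 TCP MD5 option: MD5 over the IPv4 pseudo-header, the
    fixed TCP header with the checksum zeroed (options excluded per
    the RFC), the segment data, and finally the shared key itself."""
    seg_len = len(tcp_header) + len(payload)
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack(">BBH", 0, socket.IPPROTO_TCP, seg_len))
    hdr = tcp_header[:16] + b"\x00\x00" + tcp_header[18:]  # zero the checksum field
    return hashlib.md5(pseudo + hdr + payload + key).digest()
```

Because the key is simply concatenated into the digest, compromising any one router yields the ability to forge segments for every session sharing that key; there is no per-node keying and no way to trace a leak.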
immibis 6 days ago [-]
FWIW PKI tends to mean a central point of failure. Some Russian organizations can't get TLS certificates because of sanctions.
foobiekr 6 days ago [-]
PKI here does not mean a global CA. You can run your own CAs (and should).
immibis 5 days ago [-]
Since only two parties are involved, why not use the easier pre-shared key system in that case?
foobiekr 5 days ago [-]
For the many reasons I listed. Pre-shared keys are almost always global, and you can't do forensics to find the leak.
CartwheelLinux 6 days ago [-]
You can cross trust and establish alternative trust paths in PKIs
trod1234 6 days ago [-]
That reasoning is dubious.
They aren't saying that more have been hacked; they are saying that more have been discovered to be related to that hack. Any adversary at this level would be monitoring the news, and would take appropriate actions (for gain) or roll up the network rather than allow reverse engineering of IOCs.
More than likely this was not an LI-based attack; rather, they don't know for sure how the attackers got in. Nearly all of the guidance is standard cybersecurity best practice for monitoring, visibility, and lowering attack surface, with few exceptions (in the CISA guidance).
The major changes appear to be the requirements to no longer use TFTP, and the referral to the manufacturer for source of truth hashes (which have not necessarily been provided in the past). A firmware based attack for egress/ingress seems very likely.
For reference, TFTP servers are what send out the ISP configuration to the endpoints in their network, the modems (customers), and that includes firmware images (which have no AAA). Additionally, as far as I know, the hardware involved lacks the ability to properly audit changes to these devices (by design), TR-47 is rarely used appropriately, and the related encryption is required by law to be backward compatible with known-broken encryption. There was a good conference talk on this a few years ago, at Cyphercon 6.
The particular emphasis on TLS 1.3 (while now standard practice) suggests that connections may be getting downgraded, with the hardware/firmware at the CPE bridge transparently MITMing public sites over earlier TLS versions, if that is the case (it's a commonly needed capability).
The emphasis on using specific DH groups may point to breaks in the key exchange for groups not known to be broken (but which are), which may or may not be a factor as well.
If the adversary can control, and insert malicious code into traffic on-the-fly targeting sensitive individuals who have access already, they can easily use information that passes through to break into highly sensitive systems.
The alternative theory, while fringe, is that maybe they've come up with a way to break Feistel networks (in terms of cryptographic breaks).
A while back the NSA said they had a breakthrough in cryptography. If that breakthrough was related to attacks on Feistel network structures (which many modern ciphers are built on), that might explain another way in (although this is arguably wild speculation at this point). Nearly every computer has a backdoor co-processor built in, in the form of TrustZone, the Management Engine, or AMD's PSP. It's largely secured only by crypto, without proper audit trails.
It presents low-hanging, concentrated fruit in almost every computation platform on earth, and by design it's largely not auditable or visible. Food for thought.
A quantum computer breaks a single signing key for said systems, acting like a golden-key backdoor to everything. All the eggs in one basket. Not out of the realm of possibility at the nation-state level. No visibility means no perception, no ability to react, and no way to isolate the issues except indirectly.
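On the TFTP point above: the protocol (RFC 1350) has no room for credentials at all. A read request is just a 2-byte opcode plus two NUL-terminated strings, as this sketch shows (the filename is a made-up example):

```python
import struct

def tftp_rrq(filename, mode="octet"):
    """Build a TFTP read request (RFC 1350): opcode 1 (RRQ) followed
    by the filename and transfer mode as NUL-terminated strings.
    The packet format has no field for credentials of any kind."""
    return struct.pack(">H", 1) + filename.encode() + b"\x00" + mode.encode() + b"\x00"
```

Anyone who can reach the server and guess (or sniff) a config or firmware filename can request it; that is the whole handshake.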
foobiekr 6 days ago [-]
You don't need to bring up quantum computers. Almost all protocols in the networking industry basically run on a shared secret that is service-global. Pop any box at all and you have the world, for any traffic you can capture.
The problem with the shared-secret model isn't just that it can be stolen; it's that it is globally shared within a provider network. You can't root it in a hardware device. You can't do forensics to see from which node it was stolen.
We are talking about an industry where they still connect console servers, often to serial terminal aggregators that sit on the internal network alongside the management Ethernet ports, with dumb, guessable passwords, often the same one on every box, that all their bottom-tier overseas contractors know.
It’s just sad.
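As one illustration of the service-global shared secret problem: in RADIUS (RFC 2865), the Response Authenticator is just an MD5 over the packet header, the request authenticator, the attributes, and the one shared secret. Nothing in it identifies which box produced it. A hedged sketch (the values in the usage below are hypothetical):

```python
import hashlib
import struct

def response_authenticator(code, pkt_id, request_auth, attributes, secret):
    """RFC 2865 Response Authenticator:
    MD5(Code + ID + Length + RequestAuth + Attributes + Secret).
    The secret is the only thing authenticating the server, and it is
    typically shared across every NAS in the deployment."""
    length = 20 + len(attributes)  # RADIUS fixed header is 20 bytes
    header = struct.pack(">BBH", code, pkt_id, length)
    return hashlib.md5(header + request_auth + attributes + secret).digest()
```

With the one secret in hand, an attacker on any popped box can mint an Access-Accept (code 2) that every client in the network will trust, and there is no way to tell afterwards which node leaked it.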
trod1234 6 days ago [-]
> You don't need to bring up quantum computers. Almost all protocols in the networking ...
It's true that those protocols basically run on shared secrets, but those areas all have some visibility, with auditing and monitoring.
Crack a root or signing key at the co-processor level, though, and you can effectively warp and control what anyone sees or does, with almost no forensics possible.
It fundamentally gives a malevolent entity the ability to alter what you see on the fly, with no defense possible. Such is the problem with embedded vulnerabilities; it's just like that Newag train thing.
Antitrust and bricking for monopolistic benefit is far more newsworthy than, say, embedding a remote radio-controlled off switch with no plausible cover, one that can brick the trains as they move harvests, foodstuffs, or military equipment.
It's corruption, not national security. Would many believe it's the latter over the former when it does both?
It is sad that our societal systems have become so brittle that they cannot check or effectively stop the structural defects and destructive influences within themselves.
ChrisArchitect 6 days ago [-]
Some related prior discussion:
PRC Targeting of Commercial Telecommunications Infrastructure
Wasn't it a couple of years ago that the intelligence community was arguing for backdoor mandates, and now the FBI recommends Signal for safe chats? Such a farce. Hopefully the new admin goes through their emails and text messages over the last 4 years. Privacy for me, not for thee, I suppose…
Animats 7 days ago [-]
"...implies that the attack wasn't against the broadband providers directly, but against one of the intermediary companies that sit between the government CALEA requests and the broadband providers"
Yup. The attack hit the CALEA backdoor via a wiretapping outsourcing company.
Which one?
Who else is in that business? There aren't that many wiretapping outsourcing companies.
Verisign used to be in this business but apparently no longer is.
supriyo-biswas 6 days ago [-]
Thank you for posting this. The search term "calea solutions"[1] also brings up some relevant material, such as networking companies advising how to set up interception, and an old note from the DoJ[2] grumbling about low adoption in 2004 and interesting tidbits about how the government sponsored the costs for its implementation.
Where does "...implies that the attack wasn't against the broadband providers directly, but against one of the intermediary companies that sit between the government CALEA requests and the broadband providers" come from? From Schneier?
Because if you go to the actual reporting, in the WSJ for example, it doesn't imply that the attack was against TTP providers. Also, TTP providers are optional.
Wiretap systems sit on the telecom provider side, and they comprise a bunch of different, in many cases ordinary, networking equipment that can be easily misconfigured.
TTPs (aka the companies listed above) are optional and usually used by companies that don't have their own legal department to process warrants, or don't want to deal with the fine details of intercepts.
bn-l 6 days ago [-]
> wiretapping outsourcing company
Is it a great idea to give all that info to India as well?
llamaimperative 7 days ago [-]
Nothing contradictory (in philosophy), really: they said American law enforcement should be able to break encryption when they have warrants and they now say Chinese spies should not be able to.
This is obviously technically impossible, but the desire for that end state makes a ton of sense from the IC’s perspective.
hunter2_ 6 days ago [-]
That something can simultaneously be impossible and sensible is peculiar. It almost suggests that the technique has merely not yet been figured out.
Secrets fail unsafe. Maybe an alternative doesn't.
btilly 6 days ago [-]
It is sensible that people would want the impossible. It isn't sensible to try to mandate it.
Government keeps trying to mandate it in various ways. With predictably bad results.
tzs 6 days ago [-]
How is it obviously technically impossible?
btilly 6 days ago [-]
Whatever method is available to American law enforcement is eventually going to become available to Chinese spies. The record of keeping this kind of secret is abysmal. If by no other means, then by social engineering the same access that local police departments were supposed to have.
Salt Typhoon - which this discussion is about - is an example. Tools for tracking people that were supposed to be for our side, turn out to also be used by the Chinese. Plus the act of creating partial security often creates new security holes that can be exploited in unexpected ways.
Either you build things to be secure, or you have to assume that it will someday be broken. There is no in between.
llamaimperative 5 days ago [-]
Something either has X degree of security (for everyone) or it does not.
petesergeant 6 days ago [-]
The FBI has a weird mandate in that it's both counter-espionage and counter-crime, and those are two quite different missions. Unsurprising to know that counter-espionage want great encryption, and counter-crime want backdoorable encryption.
rat87 6 days ago [-]
You want the new anti democratic/authoritarian administration to look through the FBIs emails to find something to frame them for? You sure that's wise? Even if they don't respect privacy like they should?
JTbane 6 days ago [-]
It seems like every few years law enforcement puts out statements about how good encryption is for criminals, and then they have to walk it back as data breaches happen.
kube-system 6 days ago [-]
Sometimes you're on offense, sometimes you're on defense. The government does both.
snypher 7 days ago [-]
It doesn't take much to read between the lines on those two statements. Feds have access to Signal if they want it, but are using it as filter paper against most attacks against the public etc.
tptacek 7 days ago [-]
The "feds" do not have access to Signal, except by CNE attacks against individual phones. Signal's security does not rely on you trusting the Signal organization.
snypher 7 days ago [-]
It's ok for someone to believe that, but I don't believe that. Unfortunately there is no practical way to verify it either.
What are you talking about? Signal is open source, and its cryptographic security is trivially verifiable. If you don't trust the nonprofit behind it for whatever reason, you can simply compile it yourself.
viraptor 7 days ago [-]
> and its cryptographic security is trivially verifiable
That's going quite far. Even with all the details of it documented and open, there's a relatively small number of people who can actually verify that both the implementation is correct and the design is safe. Even though I can understand how it works, I wouldn't claim I can verify it in any meaningful way.
tptacek 7 days ago [-]
Multiple teams have done formal verification of the Signal Protocol, which won the Levchin Prize at Real World Crypto in 2017.
viraptor 7 days ago [-]
Sure, there are teams who have done it. But it's not trivial. The fact that there's a prize for it shows it's not trivial. If I chose a random developer, it's close to guaranteed they wouldn't be able to reproduce that. The chances go to zero for a random Signal user.
Alternatively: it's trivial for people sufficiently experienced with cryptography. And that's a tiny pool of people overall.
tptacek 6 days ago [-]
The idea isn't that you do formal verification of the protocol every time you run it. It suffices for the protocol to be formally verified once, and then just to run that one protocol. If you thought otherwise, you might as well stop trusting AES and ChaCha20.
greyface- 6 days ago [-]
It is possible for the core protocol to be tightly secure, while a bug in a peripheral area of the software leads to total compromise. Weakest link, etc. One-time formal verification is only sufficient in a very narrow sense.
tptacek 6 days ago [-]
It is also possible for a state-level adversary to simply hijack your phone, whatever it is, and moot everything Signal does to protect your communications. Cryptographically speaking, though, Signal is more or less the most trustworthy thing we have.
chasil 6 days ago [-]
Just look at PuTTY and P-521 keys.
Or go back to Dual_EC_DRBG.
Unless DJB has blessed it, I'll pass.
tptacek 6 days ago [-]
What do those two issues have to do with each other?
chasil 6 days ago [-]
These were showstopper bugs that betrayed anything they touched.
Avoiding this is obviously a huge effort.
tptacek 6 days ago [-]
Dual EC was a "showstopper bug"?
er4hn 6 days ago [-]
It did stop OpenSSL whenever you tried to use it in production mode ;)
warkdarrior 7 days ago [-]
If you compile it yourself, can you still connect to the Signal servers?
But the client is designed to not trust the server, that's why encryption is end-to-end. So does it matter?
greyface- 6 days ago [-]
In some sense, no - the protocol protects the contents of your messages. In another sense, yes - a compromised server is much easier to collect metadata from.
tptacek 6 days ago [-]
Metadata, yes. Of course, the protocols, and thus all the inconveniences of the Signal app people constantly complain about, are designed to minimize that metadata. But: yes. Contents of messages, though? No.
greyface- 6 days ago [-]
If Signal, the service, was designed to minimize metadata collection, then why is it so insistent on verifying each user's connection to an E.164 telephone number at registration? Even now, when we have usernames, they require us to prove a phone number which they pinky-swear they won't tell anyone. Necessary privacy tradeoff for spam prevention, they say. This isn't metadata minimization, and telephone number is a uniquely compromising piece of metadata for all but the most paranoid of users who use unique burner numbers for everything.
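For reference, an E.164 number is a "+" followed by a country code and at most 15 digits in total; a minimal syntactic check (the actual registration flow also requires SMS or voice verification, which this doesn't capture):

```python
import re

# E.164: leading "+", country code starting 1-9, at most 15 digits total
E164 = re.compile(r"\+[1-9]\d{1,14}")

def is_e164(number: str) -> bool:
    return E164.fullmatch(number) is not None

assert is_e164("+14155550123")
assert not is_e164("4155550123")   # missing "+"
assert not is_e164("+0123456789")  # country codes never start with 0
```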
tptacek 6 days ago [-]
This is the most-frequently-asked question about Signal, it has a clear answer, the answer is privacy-preserving, and you can read it over and over and over again by typing "Signal" into the search bar at the bottom of this page.
greyface- 6 days ago [-]
The answer is not privacy-preserving for any sense of the word "privacy" that includes non-disclosure of a user's phone number as a legitimate privacy interest. Your threat model is valid for you, but it is not universal.
tptacek 6 days ago [-]
The question you posed, how Signal's identifier system minimizes metadata, has a clear answer. I'm not interested in comparative threat modeling, but rather addressing the specific objection you raised. I believe we've now disposed of it.
greyface- 6 days ago [-]
I don't believe there has been any such disposition in this thread. There have been vague assertions that it's been asked and answered elsewhere. Meanwhile, the Signal source code, and experience with the software, clearly demonstrates that a phone number is required to be proven for registration, and is persisted server-side for anti-spam, account recovery, and (as of a few months ago, optional) contact discovery purposes.
tapoxi 6 days ago [-]
Yes. There's also libraries that do this, like libsignal.
ghostpepper 7 days ago [-]
It’s not practically open source though - how many people actually build it themselves and sideload it onto their Android/iPhone?
How much effort would it be for the US government to force Google to ship a different APK from everyone else to a single individual?
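One partial check against targeted delivery is comparing the digest of the APK you actually received with what others received (or with a reproducible build). A sketch, assuming you've pulled the APK off the device to a local file — the filename is illustrative:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so a large APK need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against a digest computed by a peer or a reproducible build;
# a mismatch means you did not receive the same bytes they did.
# assert sha256_of("Signal.apk") == digest_reported_by_peer
```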
tptacek 6 days ago [-]
I don't know, a lot? They could with the same amount of effort just get Google to ship a backdoored operating system. Or the chipset manufacturer to embed a hardware vulnerability.
gertop 6 days ago [-]
"Here's a court order, you must serve this tainted APK we built to the user at this email"
VS
"You must backdoor the operating system used on billions of devices. Nobody can know about it but we somehow made it a law that you must obey."
Come on, that's not the same amount of effort at all.
tptacek 6 days ago [-]
Looks like exactly the same amount of effort to me?
nprateem 6 days ago [-]
Effort maybe but not likelihood of discovery
devops99 6 days ago [-]
The cryptography is not where Signal is vulnerable. What Signal is running on, as in operating system and/or hardware that runs other embedded software on "hidden cores", is how the private keys can be taken.
Anything you can buy retail will for sure fuck you the user over.
Intermernet 6 days ago [-]
Retail hardware actually has a better track record at the moment than bespoke, closed market devices. ANOM was a trap and most closed encryption schemes are hideously buggy. You're actually better off with Android and signal. If we had open baseband it would be better, but we don't, so it's not.
Perfect security isn't possible. See "reflections on trusting trust".
devops99 6 days ago [-]
Bespoke but-not-really-bespoke closed-market devices made by the right people are very secure, but they are not sold to the profane (you).
> ANOM was a trap
Yes, ANOM was intended to be a trap.
> and most closed encryption schemes are hideously buggy
Yes they are. Hence some of us use open encryption schemes on our closed-market devices.
> You're actually better off with Android and signal.
I am better off with closed-market devices than I am with any retail device.
> If we had open baseband it would be better
And the ability to audit what is loaded on the handset, and the ability to reflash, etc. In the real-world all we have so far is punting this problem over to another compute board.
> Perfect security isn't possible.
Perhaps, but I was not after "perfect security", I was just after "security" and no retail device will ever give me that, but a closed-market device already has.
Oh, so none of this has anything to do with Signal. Ok!
devops99 6 days ago [-]
In theory, "none of this has anything to do with Signal", and you are correct; but back over here in reality: Signal runs on these systems.
Hence the security afforded by Signal is very weak in-practice and questionable at best.
fragmede 6 days ago [-]
> Unfortunately there is no practical way to verify it either.
Discuss an exceedingly clear assassination plot against the President exclusively over Signal with yourself, between a phone that's traceable back to you and a burner that isn't. If the Secret Service pays you a visit, and that's the only way they could have come by it, then you have your answer.
hunter2_ 6 days ago [-]
I think the bar for paying such a visit would be infinitely high (they would find a way to defend in a more clandestine manner) to keep the ruse going.
nprateem 6 days ago [-]
Let us know how that goes
buckle8017 6 days ago [-]
Signal's servers have access to your profile, settings, contacts, and block list if the PIN you select has low security.
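The weak-PIN worry is easy to quantify: a 4-digit PIN has only 10,000 candidates, so anyone who exfiltrates the stretched verifier can enumerate them offline — unless hardware (the enclaves behind Signal's Secure Value Recovery) rate-limits guesses. A toy demonstration, with PBKDF2 at a deliberately low iteration count standing in for the real KDF:

```python
import hashlib

def stretch(pin: str, salt: bytes) -> bytes:
    # PBKDF2 as a stand-in for the real KDF; the iteration count is
    # kept deliberately low so the demo runs quickly
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100)

salt = b"per-user-salt"
target = stretch("4821", salt)  # the verifier an attacker might exfiltrate

# Offline enumeration of all 10,000 four-digit PINs. Key stretching
# slows this down but cannot stop it for so small a keyspace.
recovered = next(
    f"{i:04d}" for i in range(10_000) if stretch(f"{i:04d}", salt) == target
)
assert recovered == "4821"
```

An alphanumeric passphrase pushes the keyspace far beyond what enumeration can cover; that's the entire difference between a "low security" and "high security" PIN here.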
tptacek 6 days ago [-]
Which is to say: in the worst-case plausible failure model for Signal, they get the same metadata access as all the other messengers do. OK!
daneel_w 6 days ago [-]
Not all other messengers require a mobile phone number in order to get access, meaning not all other messengers have a view of users' social networks - some of them are anonymous, and Signal is not. It's a fundamental difference. But we've been here before.
immibis 5 days ago [-]
They kill people based on metadata - they told us so. They don't need the rest.
We believe that all of the vulnerabilities we discovered have been mitigated by Threema's recent patches. This means that, at this time, the security issues we found no longer pose any threat to Threema customers, including OnPrem instances that have been kept up-to-date. On the other hand, some of the vulnerabilities we discovered may have been present in Threema for a long time.
tptacek 6 days ago [-]
For what it's worth, and obviously I could have been clearer about this: what's interesting about that link is the description of Threema's design, not the specific vulnerabilities the team found.
devops99 6 days ago [-]
[flagged]
int0x29 6 days ago [-]
While the statements are contradictory, I wouldn't take it as a sign of some vast conspiracy. I would just take it as a sign they are stuck needing to give out some kind of guidance to prevent foreign access. While they are a domestic police service, they are also a counterintelligence service and thus need to provide some guidance there.
2OEH8eoCRo0 6 days ago [-]
Telcos need a way to comply with court orders. That's it.
rsingel 6 days ago [-]
No, the feds require CALEA-backdoors. Absent CALEA, a telecom could say we don't have the data or the capability
s5300 7 days ago [-]
The US military has, at least privately, switched away from any Signal usage within the past few months – it’s undoubtedly compromised in some way. If the FBI is recommending it, it’s for exploitative purposes & a false premise of safety.
blackeyeblitzar 6 days ago [-]
So what’s the alternative
impossiblefork 6 days ago [-]
Completely avoiding sensitive communication over mobile phones.
edm0nd 6 days ago [-]
Session, Matrix, Tox perhaps
glaucon 6 days ago [-]
I know nothing about this field so I went looking for those product names.
Thanks, that does seem more plausible than the one I found.
immibis 5 days ago [-]
SimpleX?
s5300 6 days ago [-]
[dead]
devops99 6 days ago [-]
[flagged]
rsingel 6 days ago [-]
What a load of horseshit.
Yeah, if a nation-state thinks you are a bad enough actor, they might use a high power way to get at you. See Pegasus, for instance
But those exploits are rare, expensive and can be blown.
No one has ever said Signal is perfect security.
But it is damn good. Your SMSes aren't sitting in plaintext on your mobile ISPs network. You aren't going to have them intercepted by a fake mobile tower. And if you and your recipients use disappearing messages, good luck to any prosecutor trying to get them off a device.
And as for Apple sending a fake update? Might could happen but 1) Apple fought this once and 2) it'd be hard to do in any widespread way without being detected
Saying Signal protects you from fuck all is not just wrong, it's irresponsible AF.
It's like saying that locks, firewalls, alarm systems, curtains and network monitoring don't work because some people know how to defeat them.
Signal is a great security upgrade for almost anyone. I love seeing more people use it.
Normalizing encryption is great.
devops99 6 days ago [-]
[flagged]
jesseendahl 6 days ago [-]
The amount of one-off work this would take is quite high, so the amount of motivation for a company like Apple to say “No, you can’t legally compel us to allocate engineering resources to this” is also quite high.
My point is that they (and other tech companies) would be highly incentivized against implementing something like a malicious update targeting a single device/user based purely on capitalistic motivations, rather than philosophical/ethical ones.
devops99 6 days ago [-]
I wish I could agree with you but the real world doesn't work this way. Companies that don't play ball get broken up with anti-trust. Or what happened to the CEO (former CEO) of Qwest happens.
The "infrastructure" for the targeted updates is implemented by compartmentalized teams, who will be comprised of the clearance community, and the "external" people who work with them are a part of the clearance community.
jesseendahl 6 days ago [-]
>I wish I could agree with you but the real world doesn't work this way.
The real world does work this way. Businesses make business decisions based on bottom-line impact, and businesses generally push back very strongly against governments whenever a government asks them to do things that will cause them to make less money and/or waste money.
>The "infrastructure" for the targeted updates is implemented by compartmentalized teams, who will be comprised of the clearance community, and the "external" people who work with them are a part of the clearance community.
I agree that would be how it would work if it actually happened, but I think you overestimate the appetite (and even ability) of big tech to have any desire to do this kind of thing.
If you are implying that there are teams within big tech companies who secretly do this kind of thing, even against the wishes of other engineering teams (including security engineering teams) within the company... well that seems like a recipe for getting some of the company's most talented and highly paid security engineers incredibly pissed off if they ever find out — and it's very likely they would eventually find out, because it would be extremely difficult to hide this kind of thing over time.
devops99 6 days ago [-]
How about you tell the former CEO of Qwest, or William Binney, or Jacob Applebaum how it is you are so sure you think the world works. I implore you, respectfully, to consider what they have told the world and give some time, on top of the time you have probably already given this topic -- give some extra time to this topic, after seeing what they have shared with us.
> and businesses generally push back very strongly against governments whenever a government asks them to do things that will cause them to make less money and/or waste money.
Did Facebook and Twitter do this when the federal government told them to censor?
What did Mike Benz' interview with Tucker (whether you dislike or like Tucker is neither here nor there so let's not get distracted by that) in February of this year (2024) reveal to all of us?
> of big tech to have any desire to do this kind of thing.
Apple is and always will be subservient to NSA, CIA, and the State Department. If you believe today -- after taking a moment to really, truly, seriously think about it -- that it is the other way around, you have a very special kind of stunted personal development.
> (including security engineering teams) within the company...
I respectfully implore you to look into the publicly available information about how many people at Facebook, Google, Twitter (pre-Musk), and Apple have NSA or other "glowie" backgrounds.
> well that seems like a recipe for getting some of the company's most talented and highly paid security engineers incredibly pissed off
You are correct here.
> if they ever find out
They won't, not unless they already have the appropriate clearance, and once they do they will take those secrets to the grave, or else -- unless they can make it to Moscow instead of a black site operated on foreign soil.
> and it's very likely they would eventually find out
Provided they can get into parts of buildings, buildings that aren't even on the same campus, that they aren't authorized to get into, which will never happen. So..
6 days ago [-]
Intermernet 6 days ago [-]
Do you have any alternatives that you think are better?
devops99 6 days ago [-]
If your core value/goal is security, then it is paramount to eliminate the potential for software that is not device owner-controlled to undermine Signal's security by stealing private keys. This means 1.] no closed source OS and 2.] no closed source software somewhere else on the system-on-chip either in the form of a closed source bootloader or "baseband firmware" or other "firmware" running on "hidden cores". This disqualifies all iPhones, Pixel handhelds, and Samsung handhelds.
If something else such as some kind of "user convenience" supersedes the core value/goal of security, then the below is not for you. But rich people and the economically less advantaged alike can all have this solution, one way or another.
The only way to achieve the goal is to run a modified "libre" (zero binary blobs) branch of GrapheneOS on a compute board that can load and run Linux without requiring any binary blobs to do so itself. This rules out any compute board (that I am aware of) that has a 5G radio on it. We could use a 5G radio as a WWAN card, but these all require closed firmware and we don't really have a way to protect a host system from them.
So, running another separate compute board that does have a 5G radio on it is necessary.
Another way to achieve the goal is a system-on-chip that is 100% libre and trustworthy, but good luck with that. Maybe the Librem 5 is (honest speculation) a viable candidate.
The secure boot problem is also not easy to solve. One way is a read-only SD card, but this has limitations. Another way is, and you might have guessed: another compute board. This isn't an uncommon pattern already, see F-Secure's Armory MK II device (which has a wireless chip on it that can be removed via heatgun).
Since we are already running more than one compute board, we can use additional separate compute boards for encryption and decryption, if we want to.
To interface with the board that runs GrapheneOS, a USB touchscreen that is smartphone-sized can be used. The culmination of all of this is a very small backpack containing the compute boards, battery, maybe antenna, etc.
Very rich people and their families already have these kinds of solutions. Other people who are rich in other ways (hacker's mind and motivation) also already have these kinds of solutions.
mindslight 6 days ago [-]
You're trying to step away from the mainstream thought for some very real reasons. But you have not developed good models to guide you once you're out in the woods.
devops99 6 days ago [-]
I want to emphasize, I do want you to tell me how my approach is wrong, because if you can successfully do that then everybody wins. But if all you can contribute is this model is wrong but without specifying meaningful substance as to how then all you're really doing is sharting with your keyboard.
So, please do enlighten us to what you think the flaws are, or point out actual flaws, or some major gap. At the very least you'll be able to highlight what floating working assumptions I didn't manage to preempt. And, I'll honestly appreciate it a lot.
devops99 6 days ago [-]
The "mainstream thought" is more concerned about what Steam games they can play than real-world InfoSec.
My "mainstream" is those who attend DEF CON, Blackhat, CCC, or FOSDEM, and also possess technical competency.
Ask anyone worth their salt if they can and should trust binary blobs.
Ask anyone worth their salt if they can and should trust retail system-on-chip that remain effectively undocumented, sans before the hardware source files hit the ODM.
If you believe "just trust me bro" regarding the kernel space binary blobs shipped with GrapheneOS or elsewhere on the system-on-chip is in the category of a "good model", those of us seeking more than "just trust me bro" tier security are not your audience.
But the actual hackers in the world agree 1,000% with my mindset.
> But you have not developed good models to guide you once you're out in the woods.
I have GrapheneOS with zero binary blobs, and a solution based on compartmentalization that has been demonstrated to work in production even for the most user-iest of users.
If you believe you have something to contribute to improve it, please do.
mindslight 6 days ago [-]
I tried to keep my message brief and non-specific in hopes that you wouldn't jump on me. But alas.
The type of "model" I'm talking about are threat models that create practical security for yourself, without them "proving too much" and making you fall into the trap of designing bespoke solutions that solve the one problem you're focused on while creating many more.
> If you believe "just trust me bro" regarding the kernel space binary blobs shipped with GrapheneOS or elsewhere on the system-on-chip is in the category of a "good model"
No - but I believe it's the best security I'm currently able to achieve with a device that fits in my pocket, has long battery life, is in frequent contact with cell towers, and is inherently meant to communicate with other people running similar devices.
> I have GrapheneOS with zero binary blobs, and a solution based on compartmentalization that has been demonstrated to work in production even for the most user-iest of users.
If you think you've got a better approach for secure hardware to run Android on, then please by all means share! I would love to see it. So far, your allusions fit the all-too-common pattern of security through obscurity.
For perspective, my main desktop/server is an Asus KGPE with zero blobs in the main processor domain. I just don't see the point of fixating on this for the mobile ecosystem dumpster fire - over there mitigating mass surveillance is the best one can hope for. If you think you're a specific target of state/corpo attackers, then to me the current best answer is "don't trust your phone".
devops99 6 days ago [-]
A solution in production today I am aware of is new to some (and very not new to others) and relatively still somewhat novel, but it is not really bespoke anymore, this pattern specifically was standardized many years ago. I am also aware of relatively young hackers implementing these things themselves.
An in-kernel binary blob, and untrustworthy binary blobs running on other "hidden cores", very similar to the Intel ME/AMT situation, is a problem that many acknowledge is very serious. Perhaps I am among few who attempted to solve this problem, and did solve this problem, but I am not at all in a minority who view it as a very serious, and intolerable problem. Anyone worth their salt does view the problem as intolerable, the difference on our side is we did something about it.
> the one problem you're focused on while creating many more.
What "many more" problems do you speculate have been created with this approach? This is what I was hoping you could contribute, but I don't see this in your response. I wish I could write that I am disappointed.
> So far, your allusions fit the all-too-common pattern of security through obscurity.
Eliminating binary blobs that run in kernel space on the compute board where the user's messages are decrypted and displayed is not "security through obscurity", this is a hard technical difference and is not obscurity.
I am rather disappointed that you spent the time to respond, yet either did not read my previous post, or did not comprehend it, or just didn't consider the implications ; I believe the third is the case.
As you implicitly acknowledge yourself with your ASUS KGPE with zero blobs, we CAN NOT run binary blobs in kernel space, or binary blobs in an Intel ME/AMT equivalent situation, and have a system we can -- if we are being honest with ourselves -- trust to be secure.
> I believe it's the best security I'm currently able to achieve with
Our "device" does not fit in the pocket, we don't at this time have the means to fit it in a pocket, so we did not attempt this. Users who value the pocket experience over an ipso facto secure device are not our audience (and we don't respect such users). What we have does have better battery life than a device intended for a pocket, better radio connectivity to cell towers, and is inherently meant to also communicate with the lowest common denominator.
What our device also does, is provide a fair playing field for open source software to achieve meaningful security with others who also acted on a better value system by making a choice to do so.
Yes, the network effect is small, but when the other users are your spouse, or your children, or your best friend, or the other board members of a corporation you oversee, or members of your congressional staff, or a journalist, the network effect although not quantitatively meaningful is qualitatively extremely meaningful.
> If you think you're a specific target of state/corpo attackers, then current best answer is "don't trust your phone".
Thank you for acknowledging the problem we solved, you arrived at the same answer we had already arrived at: do not trust the system-on-chip running binary blobs on hidden cores with a binary blob bootloader and also binary blobs in kernel.
> but I believe it's the best security I'm currently able to achieve
My camp, fortunately, has different capabilities.
> then please by all means share! I would love to see it.
I expect someone will be able to do this in a proper way this coming year. At this time I post what I can post here because I would like to see others who have the wherewithal -- which is a matter of willpower, not economic status -- do this.
As of today, the general software developer / IT admin but-not-actually-a-hacker crowd has no idea how fucked it really is.
mindslight 6 days ago [-]
You still have not described your answer in concrete terms, yet you continue to boast about it. This is the crux of the problem.
Piecing it together - it sounds like a larger piece of kit, the main application processor running deblobbed Graphene, with the radios isolated out over USB. Sure, that's always been possible... but what's the draw? Once you're larger than the fits-in-pocket form factor, your comparables include a straightforward deblobbable laptop with WWAN that can just run a libre OS that wasn't created by a surveillance company.
But sure maybe you're aimed at Graphene enthusiasts who are focused on its additional security features despite its adversarial lineage. But why not come right out and say that? Instead of focusing on the positive value, you're basically just shitting on everything else.
Then furthermore, this whole thing started with you condemning Signal itself [0]. If you're solving the treacherous hardware/firmware problem, then what the heck are you using as a messaging program if it's not Signal or similar? Which is why I'm talking about the worries of bespoke solutions...
[0] personally I don't really use Signal because the whole mobile-first trust-Google teetering-on-the-edge-of-proprietary thing has always left a bad taste in my mouth, and practically it's just unwieldy to tie myself to a program that's stuck on the phone I leave by my front door. But it's hard to argue that it isn't secure within the context it's carved out for itself.
devops99 6 days ago [-]
> Once you're larger than the fits-in-pocket form factor, your comparables include a straightforward deblobbable laptop
To add some further clarity, some people use our solution at music festivals, the kinds of music festivals where people camp outdoors for a few days at a time.
Try "texting" your dad (who also has the same secure mobile solution), texting your girlfriend (who also has the same solution), and your buddy you met at another camp two days ago, while waiting to be served a drink while you're also half-way tripping balls. Not happening on a fuckin' laptop, brah.
devops99 6 days ago [-]
> Once you're larger than the fits-in-pocket form factor, your comparables include a straightforward deblobbable laptop
A laptop is NOT a comparable user experience to something someone can hold in their hand while on foot:
a USB touchscreen that is smartphone-sized can be used. The culmination of all of this is a very small backpack
> maybe you're aimed at Graphene enthusiasts
Very rich people and their families already have these kinds of solutions. Other people who are rich in other ways (hacker's mind and motivation) also already have these kinds of solutions.
that has been demonstrated to work in production even for the most user-iest of users.
> with the radios isolated out over USB. Sure, that's always been possible...
And some people actually went ahead and did it. The core idea was not my original idea, it had been done already in one form or another (though not as refined as ours) quite long before. All my camp did was package it so that non-technical people could have something that "just works". Many of the users of these solutions are not technical at all.
A combination of USB and ethernet. In some of these setups the "radio" is a retail Android device that is connected to ethernet via USB.
> Then furthermore, this whole thing started with you condemning Signal itself
Nothing I wrote condemns Signal, but simply confronts the hard reality that Signal does not protect users because by virtue of the platforms Signal runs on de facto, Signal can not protect users. Signal can protect users on my camp's devices however, as was already explained here https://news.ycombinator.com/item?id=42556652
> because the whole mobile-first trust-Google teetering-on-the-edge-of-proprietary thing has always left a bad taste in my mouth
I appreciate that you landed on the some of the same answers that I and others near me did. The key difference is we went ahead and acted on these concerns.
> You still have not described your answer in concrete terms
I feel I have shared more than enough that a thinking person can put 2 and 2 together. I also already wrote "I expect someone will be able to [release this information] in a proper way this coming year." here https://news.ycombinator.com/item?id=42560339
Your limits within reasonably expected reading comprehension have exhausted my available patience. That said, relative to the rest of the world, we likely have more in common than not.
edits: fixed some grammar
mindslight 6 days ago [-]
Scattering tidbits around in different comments while including dodgy unsubstantiated appeals to authority like "Very rich people and their families already have these kinds of solutions" does not make for a compelling argument.
It sounds like you have something real, that solves a real problem while adding its own drawbacks, that works for your requirements. Focus on the specific value proposition, including the specific technical details in technical forums. Otherwise, you just sound like a crank. And the security field has a long history of cranks arguing against mainstream advice to sound edgy and authoritative (eg what you said regarding Signal) while then pushing their own bespoke solutions that survive through lack of scrutiny.
devops99 6 days ago [-]
> Scattering tidbits around
What I am showing you is I already answered your question, you fail at reading comprehension, or you fail at comprehending the very concepts themselves. Probably the latter.
> that could solve a real problem, while adding its own drawbacks.
It already solved a real problem. I have asked you repeatedly to specify a real-world drawback other than the physical profile (which the users find tolerable), you have not done this successfully.
> Focus on the specific value proposition
We already did this, and delivered.
> arguing against mainstream advice
Mainstream advice in the security world is, to consider a device secure:
- do not run binary blobs in kernel space
- do not run binary blobs in higher-privileged cores on the board
> (eg what you said regarding Signal)
The concept that something running in userspace can not protect users when 1.] the host OS is already compromised (binary blobs in kernel space) and 2.] underlying "hardware" is already compromised (via firmware on higher privileged cores, similar to Intel ME/AMT) is EXTREMELY MAINSTREAM.
> appeals to authority like "Very rich people and their families already have these kinds of solutions" does not make for a compelling argument.
Very rich people and their families already have these kinds of solutions. Other people who are rich in other ways (hacker's mind and motivation) also already have these kinds of solutions.
The authority that I did appeal to, ultimately, are Systems Administrators and relatively novice hackers equipped to prepare these solutions for themselves.
> their own bespoke solutions
The pattern was standardized over a decade ago. Our own implementation is already standardized with enough units in production that it's not bespoke anymore.
> that survive through lack of scrutiny.
If you were capable of implementing this solution on your own, which you have already effectively admitted you are not, then scrutiny from someone like yourself would be worth more than two rat shits, but you can not, so it is not.
At this point, you are clearly a midwit intelligent enough to comprehend what I have posted, but you still continue to post utter garbage. And ultimately I perceive you as a moderately mentally ill fucking moron.
Intermernet 6 days ago [-]
You keep saying you have a device that solves the problem, but you don't provide any actual details above what everyone already knows. You keep insulting people when they call you out on it. Either show your hand or be more humble. The other options don't make you look good, trustworthy or competent at all.
The people (other than me) in this thread have provable track records talking about this field. They're asking for more details and you just keep insulting them.
devops99 6 days ago [-]
> but you don't provide any actual details above what everyone already knows.
What we did was put de-blobbed GrapheneOS on a compute board, put secure boot on another compute board, punt the radio onto a separate compute board, add a battery, and manage it all with a management board in a small backpack, with a USB touchscreen for user interface.
Then we productized it for select groups of people.
But, it's really not that complicated. Like it's really not. Many people have built these kinds of things before.
If you want to try to tell me that mindslight has a "provable track record" talking about this field, I have a very, very hard time believing that, because -- and I'm being honest here, as any reasonable person will also conclude -- the responses he has posted here are really fuckin' stupid.
devops99 6 days ago [-]
At a later time I will directly link you to the next-generation builds that we do have permission (the previous ones were not my corp's) to make public-facing in 2025, as I already wrote in another comment. However, your overall reply is kind of dumb. So, if you feel you are entitled to demand a "finished product", build it yourself.
And, yes, I will continue to look down my nose at you as someone who is grossly inferior to me.
Intermernet 6 days ago [-]
As someone grossly inferior to you I welcome your constructive feedback and I hope that many others adopt your attitude and social behaviour. Only in this way can we truly improve our world.
All hail devops99 and may the platforms that you build be favoured by your subjects, as unworthy as they surely are.
devops99 6 days ago [-]
Good Intermernet, very good Intermernet, keep up the good behavior and the key to your chastity cage shall be unlocked come May 2026.
Intermernet 5 days ago [-]
I suggest you re-read the entire thread and try to see your replies from a perspective other than your own. I don't think you realise how terrible you look here. It's not just the general cringiness of overconfident youth, it's the doubling down on false superiority, the stench of antisocial tendencies, and the immature claims of success without the slightest hint of any evidence.
I'd be pleasantly surprised, and believe you'd achieved a modicum of self awareness, if you just deleted everything you posted here. But I fear that would be out of character...
devops99 5 days ago [-]
[dead]
mindslight 6 days ago [-]
Get some help, seriously.
devops99 6 days ago [-]
If everything you posted in this thread wasn't so demonstrably and pathetically stupid, I might be able to take you somewhat seriously. But it is, so I can't.
mindslight 5 days ago [-]
What do you get out of attacking everyone who engages with you?
Sorry to be the one to break it to you, but your description isn't that technically interesting - no aspects of getting Graphene running on the devboard, or other difficulties integrating the parts. The idea of separating out the baseband isn't really novel either. A decade ago I gave a shot at using a mifi+tablet to move in that direction, and to see how far I could get without a proper voice plan. (I eventually got bored and moved on). You're not sitting on some super special idea here, and this vague passive voice "existence proof" style of writing is cagey and tedious to read. Which is probably how I ended up skipping over some actual details.
But do you know what is very interesting? That you've found a niche where the backpack form factor isn't a huge drawback, as well as group(s) of people who actually appreciate the threat model enough to keep spending extra effort doing a nonstandard thing. Those are all social factors that could actually sustain this type of device, rather than merely being passing curiosities that users eventually move on from. Basically it needs to be easy for people to piece together such a setup while mindlessly following a guide, as well as point other curious people to a description of it - the polar opposite of the trash elitist attitude you're pushing. (eg what specific dev boards straightforwardly run Graphene? I don't see any listed on the website)
And so if you actually care about widespread communications security rather than just being some combative wanker on a message board, please please please try to level up your wisdom for your next sockpuppet nym.
devops99 5 days ago [-]
> of people who actually appreciate the threat model enough to keep spending extra effort
The "product" is already successful. Some spent effort, others spent money.
Those who did the latter include people with defense contractor or other government backgrounds, ""conservative"" (aka normal people) moms who were censored on Facebook and Twitter as early as 2019 and had enough pattern recognition to know the unlawful censorship reached all the way up into the federal government, journalists, and some in the category of politician.
Think of what Tucker Carlson shared with the public: "the NSA got into my Signal account, which I didn't know they could do". I don't expect our solution to stand up to the NSA, but unlike a retail device, the starting point of the digital playing field on my camp's solution doesn't make digital intrusion a cakewalk for "glowies" the way retail devices do. Glowies have to work significantly harder to compromise what we have.
Some of the "Instagram famous" gen Z stereotypical "hot girls" who are computer illiterate and generally aloof (vapid on the surface) were immediately willing to tolerate the overhead of "touchscreen cabled to a backpack" when they were told "when you do a call with mom or dad, that call does actually stay protected". Trashy aka "low socioeconomic status" people don't give a shit about family privacy/autonomy, but these people do give a shit about it.
All the aforementioned categories of users have already suffered abuse, anticipate being abused, or simply have enough dignity in their life that they're not going to just give it away like typical retards do; they are not going to "eventually move on" from "this computer I carry on my person every day is not designed for me to get fucked over" and then downgrade to a retail device that is by design (in one way or another) positioned to fuck them over. Sans a "burner" device for some specific narrow purpose (Instagram presence) that has had its internal mic gutted and has hardware shutters on its cameras.
The technical concept is what I am allowed to post about so that's what I did. As I already wrote earlier (and also then later cited I had written), something cohesive will be posted later this year, and if the person I expect to do it doesn't then I'll do it myself. Or, one of the other existing players in the space will, or someone else entirely (and I'd be perfectly happy with that).
.
> You're not sitting on some super special idea here
I appreciate you acknowledging this point, a point that I had emphasized, and I feel I had done so rather clearly, several times above. Many Qubes users have been doing this since 2018.
The essential thing my camp did that was "special" was package it professionally in a way that "normie" users can succeed with it out-of-the-box.
Like with any specific operating system and hardware combination there are implementation specific bugs here and there, but nothing major.
.
> how far I could get without a proper voice plan.
Some use "2FA mule", like this https://kozubik.com/items/2famule/ ; though we advise to physically remove the microphone of the 2FA mule and presume any WiFi/Bluetooth traffic from it is hostile.
Those who need PSTN (legacy phone network) voice or 911 can use another device for that.
No one using our mini-backpack is missing out on any functionality they actually need.
.
> eg what specific dev boards straightforwardly run Graphene? I don't see any listed on the website
I do appreciate you bothering to look. I actually do. There are boards that can run with zero blobs, and they are intended for production use as sold; so long as they can run a Linux kernel and have a GPU that Android can use, they can run GrapheneOS.
Our solution is not supported by nor known to the GrapheneOS project; we have our own branch and CI/CD and all that.
.
> Which is probably how I ended up skipping over some actual details.
Yeah, the performance (or lack thereof) of your reading comprehension has been rather noticeable.
.
> the polar opposite of the trash elitist attitude you're pushing.
Okay but no matter what happens, I will always get more money and more pussy than you.
mindslight 4 days ago [-]
You would get even further if you could learn to avoid tripping over your own ego.
4 days ago [-]
devops99 4 days ago [-]
Those are big words for someone who openly admits to running binary blobs in the kernel of the device carried on-person.
jmward01 6 days ago [-]
This is why we need device to device encryption on top of all the security that a telco has. There is no excuse for any connection I make being unencrypted at any point except the receiver.
mike_d 6 days ago [-]
While you aren't wrong about needing end-to-end encryption, that would not have helped here. What China was after was metadata (who is communicating with whom), which is a completely different problem to solve.
whimsicalism 6 days ago [-]
the articles i saw said they could record phone calls at will
trollied 6 days ago [-]
Yes, but not by man-in-the-middle attacks between the device and the network. There are systems internal to the provider that let you listen to any call.
immibis 6 days ago [-]
Because the US government forces them to have these systems and to not encrypt the calls. There should be more attention on the fact that, essentially, the US government hacked US telecoms for China's benefit.
bilbo0s 6 days ago [-]
Let's not overstate it. The US government hacks telecom for the benefit of the US government. Now having said that, as someone above mentioned, the intelligence agencies of the top 50 national governments are obviously all keen to use those hacks for their own benefit. And the flip side of that is that the US government is very interested in stopping these other national governments from succeeding.
Clearly, the counter-intel part of the US government effort has been less successful than the surveillance and intelligence gathering effort. But that doesn't mean that the US government wants all those other nations to be able to gather data from these systems. Our government wants nothing more than to be the only national government capable of gathering data from these systems.
sneak 6 days ago [-]
Make your phone calls with Signal and you don't have this problem. So far the US government isn't forcing anyone to use unencrypted calling.
int_19h 6 days ago [-]
The hard part is making all the other people you need to regularly talk to use Signal.
sneak 5 days ago [-]
If getting people to download free apps and sign in to them with their phone number were hard, most of HN wouldn’t exist.
int_19h 4 days ago [-]
Getting people to download free apps is easy.
Getting them to actually use them is hard, especially when the whole point of the app is to communicate with other people, and literally none of the people they regularly communicate with other than yourself use (or even know about) Signal.
0xbadcafebee 6 days ago [-]
Since the 80s you can spy on anyone's calls using the telco's standard maintenance features. You dial up a number, you then dial another number, and you're basically patched in to the second number, can listen in on any current call. There was a different system required by the government for taps, but linemen have their own method so they can diagnose issues. At least that used to be the case through the 2010s.
Stupidity and banality is a far greater threat than conspiracy.
mike_hearn 6 days ago [-]
Well obviously there is a good excuse, that users do not want to and cannot generally deal with key management. Even dealing with phone numbers is a hassle, and now you want to add a public key on top? One which cannot easily be written down, and is presumably tied to the handset so if you lose and replace your phone you stop being able to receive all phone calls until you manually somehow distribute your new key to everyone else?
End to end encryption has proven to be unworkable in every context it's been tried. There are no end-to-end encrypted systems in the world today that have any use, and in fact the term has been repurposed by the tech industry to mean pseudo encrypted, where the encryption is done using software that is also controlled by the adversary, making it meaningless. But as nobody was doing real end-to-end encryption anyway, the engineers behind that decision can perhaps be forgiven for it.
btown 6 days ago [-]
> pseudo encrypted, where the encryption is done using software that is also controlled by the adversary
I'd say there's a very real use for this, though, which is that with mobile applications it's more complicated to compromise a software deployment chain than it is to compromise a server-side system. If you're a state-level attacker and you want to coordinate a deployment of listening capabilities on Signal, say, you need to persistently compromise Signal's software supply chain and/or build systems, and do so in advance of other attacks you might want to coordinate with, because you need to wait for an entire App Store review cycle for your code to propagate to devices. The moment someone notices (say, a security researcher MITM'ing themselves) that traffic doesn't match the Signal protocol, your existence has been revealed. Whereas for the telcos in question, it seems it was possible to just compromise a server-side system to gain persistent listening capabilities, which could happen silently.
Now, this can and should be a lot better, if, say, the Signal app was built not by Signal but by Apple and Google themselves, on build servers that provably create and release reproducible builds straight from a GitHub commit. It would remove the ability for Signal to be compromised in a non-community-auditable way. But even without this, it's a nontrivial amount of defense-in-depth.
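The community-auditable check btown describes reduces to a simple property: two independent builds of the same commit must produce byte-identical artifacts, so anyone can compare hashes. A minimal sketch of that comparison (the function names here are mine, not any real build tool's):

```python
import hashlib

def sha256_of(path):
    """Hash a build artifact incrementally so large files don't fill memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(artifact_a, artifact_b):
    """A reproducible build passes iff independent builds hash identically."""
    return sha256_of(artifact_a) == sha256_of(artifact_b)
```

If a vendor's published artifact fails this check against a community rebuild of the tagged commit, that mismatch is itself the alarm signal — which is the defense-in-depth btown is pointing at.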
mike_hearn 5 days ago [-]
Yes it's not useless and can help mitigate insider threats, but that isn't how it's presented by the messaging companies.
knallfrosch 6 days ago [-]
You can just force Google/Apple to roll out compromised versions to selected users and force them to keep their mouth shut about it.
fn-mote 6 days ago [-]
Your comment concerns the situation where the state level attacker is the US.
As the article points out, there are many other adversaries to be concerned about. Protecting against them would be good. Don’t give up so quickly.
Aside - not the main point —>
I actually do not know if we are at the level of “forced speech” in the US. Publishing hacked apps would fall under that category. Forced silence is something and less powerful. Still bad, obviously.
supertrope 6 days ago [-]
Apple Facetime is painless enough. It can't mitigate targeted government espionage but it raises the bar from mass collection of plaintext.
Hilift 6 days ago [-]
The US Treasury just announced they had an incursion by Chinese threat actors. Their "cyber security vendor" had a remote access key compromised, enabling the attackers access to endpoints within Treasury.
BlueTemplar 6 days ago [-]
AFAIK this would not be news for EU telecoms : they are being operated by Chinese companies, so those have permanent access to nearly everything anyway.
Well at least American telecoms are fighting them. The European MO is to not only let themselves be conquered, but they actually pay China to do it. Thankfully American online services are on Europe's side, and work harder than anyone to protect their communications. These services don't even charge Europe anything, and Europe rewards them with billions of dollars of fines for doing it. Europe also defaced our websites in an effort to tax the attention economy, and removed legal protections for open source developers.
topspin 6 days ago [-]
> fighting them
That's amusing. I'll grant that US companies haven't outright surrendered, and are still at least permitted to engage in lip service on the issue. But actual "fighting"? That would mean a tech world that looks very different than what we have today, and would fatally conflict with no end of "interests" in the US.
__m 6 days ago [-]
> American online services are on Europe's side, and work harder than anyone to protect their communications
Yeah sure, except giving the NSA access and complying with the CLOUD Act.
est 6 days ago [-]
> capability to geolocate millions of individuals
I guess Starlink could easily geolocate every 4G/5G phone IMEI with its huge direct-to-cell antennas
mike_hearn 6 days ago [-]
Modern mobile phone protocols do not expose your IMEI unencrypted; they have a multi-step process in which temporary identifiers are used to identify the device to most of the network. So this is not necessarily the case.
yapyap 6 days ago [-]
even with SS7 ?
mike_hearn 5 days ago [-]
Happy new year!
SS7 only gets into the picture after the handset has connected to the home network, from what I understand (n.b. not a telco engineer). The IMEI is exposed to the network, but only to your network and only after the handset sets up an encrypted and authenticated connection with it.
5G uses a thing called a GUTI to identify handsets, not an IMEI. Think of it like a GUTI being a temporary IPv6 address allocated for a few hours by DHCP, and the IMEI being like a browser cookie. IMEI is exposed to your home network and networks you roam onto, but merely being in range of a tower doesn't expose it, and it's never transmitted in the clear over the air.
Also, within a network most of the components don't get access to the IMEI either.
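As a rough illustration of the scheme described above — this is a toy model, not the actual 3GPP attach procedure — the permanent identity stays inside the core network, while the air interface only ever carries a short-lived random identifier that can be reallocated:

```python
import secrets

class CoreNetwork:
    """Toy model of GUTI-style temporary identifiers: the permanent
    subscriber identity is only held server-side; over the air, the
    handset is addressed by a random token the network can rotate."""

    def __init__(self):
        self._guti_to_imsi = {}

    def attach(self, imsi):
        # Allocate a fresh random temporary ID for this subscriber.
        guti = secrets.token_hex(8)
        self._guti_to_imsi[guti] = imsi
        return guti

    def reallocate(self, old_guti):
        # Rotate the temporary ID; the old one stops resolving.
        imsi = self._guti_to_imsi.pop(old_guti)
        return self.attach(imsi)

    def resolve(self, guti):
        # Only the core network can map the token back to the identity.
        return self._guti_to_imsi[guti]
```

The point of the model: a passive listener near a tower only ever sees `guti` values, and once the network rotates them, old observations cannot be linked to the subscriber without the core's private mapping.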
betaby 6 days ago [-]
The last time I saw SS7 in production was about a decade ago. Which operator uses SS7 today?
immibis 6 days ago [-]
All? But it's something internal to networks and between networks, not between a network and a user device, so I don't see the relevance to IMEI catchers which intercept the radio link.
Answer delayed by hours due to HN rate limiting.
betaby 6 days ago [-]
> All?
None? As I said I have not seen SS7 for a decade+ in USA/Canada.
IMEI catchers have nothing to do with SS7.
amiga386 6 days ago [-]
You may not have seen it, but do you care to explain this Veritasium video from 3 months ago, where they specifically gain (not entirely legal) access to the SS7 network to hack Linus Sebastian's phone?
The Signaling System 7 (SS7) and Diameter protocols play a critical role in U.S. telecommunications infrastructure supporting fixed and mobile service providers in processing and routing calls and text messages between networks, enabling interconnection between fixed and mobile networks, and providing call session information such as Caller ID and billing data for circuit switched infrastructure. Over the last several years, numerous reports have called attention to security vulnerabilities present within SS7 networks and suggest that attackers target SS7 to obtain subscribers’ location information.
This is dated March 2024. It's talking about the very thing you say you haven't seen for more than a decade. To me, it sounds like that thing (the SS7 network) is alive and well in the USA, and the federal government is concerned about its lax security allowing spies to discover phone users' location information - the very topic we're discussing.
It sounds like you're talking mince.
betaby 5 days ago [-]
The key word here is '_and_'. Yes, I have not seen SS7 in a decade. On the other hand, Diameter is widely used everywhere.
amiga386 5 days ago [-]
You just sound like an unreliable witness.
If your claim is that there is literally no SS7 in US and Canadian telephone networks, then that is straight-up wrong. It exists in every network that still supports 2G/3G wireless protocols and classic PSTN standards. It was replaced in 4G/5G and SIP, but that requires your operator only supports those protocols and doesn't continue to support the old protocols. If it does, it will still have SS7 signalling and will still be susceptible to attacks (though it is free to run its own security to block them).
If your claim is that you haven't seen SS7 in a decade, then sure, maybe you haven't. But given there is actual, ongoing spying, impersonation, etc., that can be demonstrated in North America in 2024, and everyone involved says "it's due to SS7", and you're out here saying it's-so-rare-you-haven't-seen-in-a-decade, then what exactly is happening? What are the hackers using then, when the experts say they're exploiting SS7, if you insist it's not there?
The attack demonstrated on Linus's channel, while it IS about SS7, I doubt had an SS7 interface in the USA/Canada. Important details were left out of that demo, while some hints were given. SS7 is definitely a thing in some countries though.
The attack demonstrated on Linus's channel is not a direct one, but rather trickery, in a way similar to the domains 'apple.com' and 'аррle.com'.
amiga386 3 days ago [-]
I directly ask you: do you think there is at least one SS7 network in the USA or Canada, yes or no?
If you claim there are no SS7 networks in the USA or Canada, please explain:
1) why the FCC believes they exist and need to be secured, as per their March 2024 note
2) what the UMTS networks, still operational in Canada, are using for messaging (note the 2025 dates in https://en.wikipedia.org/wiki/3G#Phase-out for Canada; 2G/3G is still alive and well there. And I note that most of the 3G phase out in the USA was in 2022, not in 2014 which is what they'd have to be for you to not have seen SS7 for a decade)
3) what the POTS networks, still operational in the USA and Canada, are using for messaging (noting that FCC 19-72 only removes the requirement on ILECs to provide UME Analog Loops to CLECs, and does not require them to shut down POTS networks entirely by August 2022. For example, AT&T only plans to have transitioned 50% of its POTS network by 2025)
Oh wow! I wonder how well it works in a crowded urban environment as opposed to the less crowded areas their examples of poachers and illegal fishing vessels operate in?
reversethread 5 days ago [-]
Poachers and illegal fishing vessels are better PR than foreign dissidents. :9
The federal government wouldn't pay hundreds of millions of dollars[0] to catch one or two fishing boats.
> "We detect no activity by nation-state actors in our networks at this time," an AT&T spokesperson said.
Sounds like the root of the issue.
ram_rattle 5 days ago [-]
I was working at a telecom research company where the director looked at me in disbelief when I told him hacks can actually happen in telecom; his eyes went wide when I showed him a few small hacks. Wonder what he is thinking now, lol
lowbloodsugar 5 days ago [-]
If I’ve learned anything about security it’s that once someone has admin access there’s no way your system is clean. It might look that way, but the system is lying to you, and even if you clean that part up there’s backdoors and Trojans just waiting in firmware, boot loaders, network stacks, backups, everything. Like does your system have any “workarounds” or can you wipe everything and redeploy? Guarantee it’s the former. Ok well then how do you know this bespoke thing is what was originally written by that guy five years ago.
outside1234 6 days ago [-]
How is this not an act of war? If they sent people physically over to do this it would be, so how is this different?
yehbit 6 days ago [-]
Better security is smaller nodes or value and more of them. But it’s more profitable to say screw others security and monopolize everything
jmclnx 6 days ago [-]
>This public-private effort aims to put in place minimum cybersecurity
Nice, we do not want the CEOs of these telcos to have to give up their bonuses. So we force them to do just the bare minimum. Isn't capitalism great.
votepaunchy 6 days ago [-]
Minimum is not “bare minimum”. The alternative to minimum requirements is no requirements.
gertop 6 days ago [-]
Not allowing foreign entities to spy on their customers feels like the bare minimum to me.
JumpCrisscross 6 days ago [-]
> So we force them to do the just bare minimum. Isn't capitalism great
This has nothing to do with capitalism. The Soviet Union wasn’t a paragon of information security.
lenerdenator 6 days ago [-]
It does, at least with respect to how the US does capitalism.
The goal is to make the number at the bottom of the piece of paper bigger by a large enough margin in the next ninety days. If you can prove that there's the imminent risk of a specific cyberattack in the next 90 days and that it will have an adverse impact on getting that number bigger, fine, company leadership will pay attention, but that's rarely the case. Most cyberattacks are obviously clandestine in nature, and by the time they're found, the move isn't to harden infrastructure against known unknowns, but to reduce legal exposure and financial liability for leaving infrastructure unsecured. It's cheaper, and makes the number at the bottom of the piece of paper bigger.
gruez 6 days ago [-]
>The goal is to make the number at the bottom of the piece of paper bigger by a large enough margin in the next ninety days. If you can prove that there's the imminent risk of a specific cyberattack in the next 90 days and that it will have an adverse impact on getting that number bigger, fine, company leadership will pay attention, but that's rarely the case.
1. Capitalists seem pretty content with money losing ventures for far more than "the next ninety days", as long as they think it'll bring them future profits. Amazon and Uber are famous examples.
2. You think the government (or whatever the capitalism alternative is) aren't under the same pressure? Unless we live in a post scarcity economy, there's always going to be a beancounter looking at the balance sheet and scrutinizing expenses.
keybored 6 days ago [-]
I’m pretty sure that the Soviet Union was state capitalist.
gruez 6 days ago [-]
"true communism has never been tried"
immibis 3 days ago [-]
Well it hasn't. No one knows exactly what communism is, but they're pretty sure it's not a dictatorship.
keybored 6 days ago [-]
My guy/gal, state capitalism as a transition towards socialism and then to communism was an explicit Marxist policy by the Soviet Union. Hence that state (of state capitalism) was a part of the big-C Communism of the Soviet Union.
Sometimes thought-terminating quips are not enough.
codedokode 6 days ago [-]
Imagine if the calls were E2E encrypted, phone accounts were anonymous, there were no identifiers like IMEI, and phone companies didn't detect and record geolocation... this attack would be much harder.
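The E2E-encrypted-calls part of that wishlist boils down to sealing each voice packet under a key only the two handsets hold, so the carrier relays opaque blobs. A toy sketch of the encrypt-then-MAC shape (a SHA-256 counter-mode keystream plus HMAC — an illustration only, not a production cipher; real systems use vetted AEAD constructions):

```python
import hashlib
import hmac
import itertools
import os

def keystream(key, nonce, length):
    # Toy counter-mode keystream derived from SHA-256; illustration only.
    out = b""
    for ctr in itertools.count():
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return out[:length]

def seal(key, plaintext):
    # Encrypt with a fresh nonce, then MAC nonce+ciphertext (encrypt-then-MAC).
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(key, blob):
    # Verify the tag in constant time before decrypting anything.
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))
```

An intermediary (the telco, or an intruder inside it) who relays `seal()` output learns only packet sizes and timing — the metadata problem raised elsewhere in this thread still stands, but the content is out of reach.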
outside1234 6 days ago [-]
How is this not an act of war?
ggm 6 days ago [-]
This feels like the perfect time for two outcomes: Ripley's solution, and deploy clean slate IPv6.
gorgoiler 6 days ago [-]
Can you elaborate? The first I assume is “take off and nuke the site from orbit”, per Aliens (1986). What are you advocating for with IPv6? Increasing the enumeration space for IP addresses from 32 bits to /64 prefixes?
ggm 6 days ago [-]
I'm really just advocating for a drop in replacement. You wouldn't redeploy the addressing architecture you have, instead disrupt the surface the salt gets into. If you did a drop in why not go the whole hog and make it a 6 fabric?
daneel_w 6 days ago [-]
But, a drop-in replacement of what? SS7? Diameter? Chinese cellular base stations from Huawei etc.? The collective telco IT infra and the shoddy security practices (or lack thereof)?
ggm 6 days ago [-]
"Yes"
immibis 5 days ago [-]
The addressing architecture isn't the problem though?
But if you think it is, I encourage you to run Yggdrasil.
ggm 5 days ago [-]
No, it's true the addressing architecture isn't the problem. The uncertainty fear and doubt over all the deployed equipment is the problem. Hence "take off and nuke it from orbit" -do a complete replacement. IFF you buy that, why would you replace like (v4) with like? Why not replace with auto addressed self deploying v6 (for instance) which some people have been advocating for at scale, rather than a bundle of assumptions in dhcpv6 static assignment? Sure a lot of things would be static, but you could simplify the NMS burdens massively and reduce your routing complexity in the IGP.
It's not a totally silly suggestion and it's not totally sensible either. Light-hearted. I doubt any exec in any telco outside of Jio or maybe Comcast would go there. Amongst other things, they'd destroy a lot of capital value doing the Ripley. Well... the liberated v4 sale replaces some of that, until the price crashes..
GenerocUsername 6 days ago [-]
So this is obviously the intelligence agencies cleaning data before Trump takes control right
andy_ppp 6 days ago [-]
War with China is starting to seem increasingly likely, we need to seriously prepare our industry now to manufacture things again and stop giving them our technology.
The NSA/CIA need to start making systems more secure by default and stop thinking spying on their own populations is a top priority.
jamesmotherway 6 days ago [-]
China-nexus threat actors tend to be focused on espionage, including intellectual property theft. "Prepositioning" is a more recent observation, but it doesn't mean a war is inevitable. While it would be useful in that scenario, in others it may act only as a deterrent. Everyone should hope a war does not occur.
The NSA and CIA are neither able nor authorized to defend all privately-owned critical infrastructure. While concerns about agency oversight are warranted, I can assure you that spying on the population is not their top priority. It's abundantly clear that foreign threats aren't confined to their own geographies and networks. That can't be addressed without having the capability to look inward.
Secure by Design is an initiative led by CISA, which frequently shares guidance and threat reporting from the NSA and their partners. Unfortunately, they also can't unilaterally secure the private sector overnight.
These are difficult problems. Critical infrastructure owners and operators need to rise to the challenge we face.
notyourwork 6 days ago [-]
The NSA/CIA need to start paying higher salaries to encourage more talent to go into the government sector. I remember in undergrad we had an NSA recruiter come talk to our computer science class. After the discussion, I was able to chat them up on the side and they mentioned salary being the hardest problem with recruiting top talent. Big tech pays too much and government not enough. Where would you go when you graduate?
2OEH8eoCRo0 6 days ago [-]
Do they pay too little or have big tech monopolies distorted the market with their firehoses of cash? Bit of both?
marcosdumay 6 days ago [-]
[flagged]
_bin_ 6 days ago [-]
your understanding is flawed. the us government exists with the sole purpose of safeguarding American interests and liberty. if collecting chinese geolocation data helped that, i’d be okay with us doing it. of course, china is objectively worse: we haven’t actually done this to anyone’s knowledge, and she built her whole economy off lying, cheating, and stealing.
i think you’re mistaking a sentiment of “china is dangerous, her interests specifically contradict ours, and we must permanently cripple her power before she gets worse” for “china is in violation of muh international norm #627!!”
Krasnol 6 days ago [-]
What war?
The digital one has been running for quite a while, and there won't be a real one. China has nothing to gain from starting one. I mean seriously... why would you shoot your customer?
rickydroll 6 days ago [-]
> I mean seriously...why would you shoot your customer?
It depends on your goal. If it is strictly a commercial relationship, “shooting your customer” could be advantageous for preserving a revenue stream. Customer lock-in could be seen as a form of “shooting your customer”.
If your goal is political, "shooting your customer" may enable a regime change that is friendlier to you. We have done this multiple times in the Middle East, Central America, and South America.
lenerdenator 6 days ago [-]
The difference is, China has more have-nots than the US has people. The US is the main source of value creation for China. If Xi wants to not have a coup and be beh... I mean, if Xi wants to guarantee the future prosperity of the PRC, he needs to raise those have-nots out of poverty and the way to do that is by selling stuff to Americans and stealing their IP, not creating a shooting war with a country that has enough nuclear weapons to make this planet uninhabitable to intelligent life for centuries.
The US has done what it has done in the regions you list because they're already unstable (particularly the Middle East) and have no way of striking decisive blows against US territory.
kiba 6 days ago [-]
The way to do that is to actually have stronger consumption in China, not antagonize the US.
c217w 5 days ago [-]
[dead]
Arudtommy 5 days ago [-]
[dead]
devops99 6 days ago [-]
We can never trust them again.
We must implement as LAW that a SIM card can provide and only provide a Zero Knowledge Proof of "this SIM is valid for this cellular/data plan up to a specific date".
If they want to track us all the time, whatever, if they can't keep that data safe from the Chinese Communist Party, then they aren't competent enough to have it.
rsync 6 days ago [-]
"We must implement as LAW that a SIM card can provide and only provide a Zero Knowledge Proof ..."
Now is a good time to remind everyone that a SIM card is a full blown computer with CPU, RAM and NV storage.
Further, your carrier can upload and execute code on your SIM card without your knowledge or the knowledge of the higher level application processor functions of your telephone.
deadso 6 days ago [-]
Is there any sandboxing to prevent access from the SIM card computer to information on your phone? And if so, absent some (admittedly not very unlikely) 0-day allowing sandbox escape, what would a malicious SIM program be able to do?
immibis 5 days ago [-]
Yes, the card is a peripheral device to the phone - a hardware security key. It can't steal all your data for the same reason your Yubikey can't.
Answer delayed by hours due to HN rate limiting.
devops99 5 days ago [-]
Basically this.
And, hopefully your USB stack, or your phone's equivalent to SIM interface, doesn't have vulnerabilities that the small computer that is the SIM card could exploit.
Operating systems that center their efforts on protecting high-risk users, like Qubes, dedicate a whole copy of Linux running in a Xen VM to interfacing with USB devices.
It'd be great if more information were available on how devices like Google's Pixel devices harden the interface for SIM cards.
mfkp 6 days ago [-]
Luckily e-sims are becoming more common.
immibis 5 days ago [-]
Unluckily because they can only be issued by registered and licensed members of the GSM alliance, IIRC.
hooverd 6 days ago [-]
I can't believe the CPC would do this- add a backdoor to American technology for American agencies.
devops99 6 days ago [-]
but that would be illegal and therefore impossibru /s
gruez 6 days ago [-]
>and only provide a Zero Knowledge Proof of "this SIM is valid for this cellular/data plan up to a specific date".
How do you implement bandwidth quotas with this?
Nevermark 5 days ago [-]
With a zero knowledge proof of the service type. With client side managed and generated temporary IDs.
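A full zero-knowledge SIM credential is beyond a comment, but blind signatures give the flavor of "prove the plan is valid without linking the proof back to the subscriber". A toy RSA blind-signature sketch (tiny illustrative primes and an unpadded hash; a real system would use large keys, proper padding such as RSA-FDH, and an actual ZKP or anonymous-credential scheme):

```python
import hashlib
import secrets
from math import gcd

# Toy RSA parameters -- illustration only
p, q = 10007, 10009
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)  # carrier's private signing exponent

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Subscriber: blind a token that encodes only "plan valid through a date"
token = b"plan-valid-through:2025-12"
m = h(token)
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# Carrier signs the blinded value without ever seeing the token
blind_sig = pow(blinded, d, n)

# Subscriber unblinds: sig = m^d mod n, unlinkable to the signing session
sig = (blind_sig * pow(r, -1, n)) % n

# Network later verifies plan validity without learning which subscriber it was
assert pow(sig, e, n) == m
```

The carrier can verify the token is one it issued, but cannot correlate the verification with the original signing request, which is the property the "only prove the plan is valid" proposal needs.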
Giorgi 6 days ago [-]
[flagged]
bru3s 6 days ago [-]
[flagged]
webdoodle 6 days ago [-]
They need to release all the metadata for Jeffrey Epstein et al. Clearly the U.S. government isn't going to after 20 years of lies and deceit.
AndyMcConachie 6 days ago [-]
The people involved in this have all the reason to blame China or Chinese backed groups for this, but has there been any actual evidence released that confirms this? Attribution is notoriously difficult and the only thing the public has to go on is the word of people involved.
Yet when one reads these articles it's just, "China, China, China!!!"
Anyone have a link to actual evidence?
nextworddev 6 days ago [-]
Usually if North Korea or Russia did it, they say North Korea or Russia did it.
GordonS 6 days ago [-]
Honestly, it feels like they just pick a nation based on the current narrative. They already have plenty to bash Russia with regarding the Ukraine war, and they need to keep sinophobia alive and kicking, hence China.
Plainly I have no real evidence for this, other than the constant lack of evidence for their claims, and the doubts that are cast within the infosec community when data is available.
nextworddev 6 days ago [-]
Since OP asked for evidence, maybe we should ask for the evidence that backs your hypothesis that bad reporting about China = unsubstantiated sinophobia
INGSOCIALITE 6 days ago [-]
we've always been at war with eurasia
GordonS 6 days ago [-]
Unfortunately much of the West seems to have mistaken 1984 for a manual, rather than a cautionary work of fiction.
michaelt 6 days ago [-]
Many times in the past, a piece of malware developed by one group has been co-opted by another group. You see a virus like Stuxnet or Mirai that's working well, you just replace the payload, or switch the command-and-control code over to yourself. Then you launch an attack, but the weapon has someone else's fingerprints all over it.
As such, even if Xi Jinping himself had stood up at the UN and claimed responsibility for a particular Windows kernel-mode rootkit, that still wouldn't be incontrovertible evidence.
oldpersonintx 6 days ago [-]
[dead]
I don't see things improving unless someone spoon-feeds these companies solutions to these problems in a low risk (ie. nobody's going to get fired over implementing them) way.
Often the end result is having just enough red tape to turn a 2 week project into an 8 month project, and yet not enough as to make sure it's impossible for someone to, say, build a data lake into a new cloud for some reports that just happen to have names, addresses and emails. Too big to manage.
Audit trails (of who did/saw what in a system) and PII-reduction (so you don't know who did what) are fundamentally at odds.
Assuming you are already handling "sensitive PII" (SSNs, payroll, HIPAA, credit card numbers) appropriately, which constitutes security best practice: PII-reduction or audit-reduction?
How would they then enforce this in a large company with 50k programmers? This was what the previous post was discussing.
Not to mention, a lot of this data is necessary. If you're invoicing, you need to store the names and many other kinds of sensitive data of your customers, you are legally required to do so.
It’s not easy, but it can move the needle over time.
It is often much easier to reach for an email address or an SSN when a randomly generated id, or even a hash of the original data, would work fine.
I'm not saying that we shouldn't put more effort into reducing the amount of data kept, but it isn't as simple as just saying "collect less data".
And sometimes you can't avoid keeping PII.
So we could make the PII less valuable by not using it for things that attract fraudsters.
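One low-effort way to make stored identifiers less valuable is keyed pseudonymization. A plain hash of an SSN is brute-forceable (only ~10^9 candidates), so key the hash with a secret "pepper" held outside the analytics database. A sketch (names are illustrative):

```python
import hashlib
import hmac
import secrets

# Secret pepper -- in practice fetched from a secrets manager,
# never stored alongside the pseudonymized data.
PEPPER = secrets.token_bytes(32)

def pseudonymize(pii: str) -> str:
    # HMAC-SHA256 gives a stable, unguessable stand-in for the raw value.
    return hmac.new(PEPPER, pii.encode(), hashlib.sha256).hexdigest()

# Stable ids preserve joins across tables without exposing the raw value
a = pseudonymize("alice@example.com")
assert a == pseudonymize("alice@example.com")
assert a != pseudonymize("bob@example.com")
```

A breach of the analytics store then leaks opaque tokens rather than emails or SSNs, while internal reporting and deduplication keep working.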
Even during the best of times people simply do not give a fuck about privacy.
Honestly, if there is a problem at all I would say it's the uselessness of the Intelligence Community when actually posed with an espionage attack on our national security. FBI and CISA's response has been "Can't do; don't use." and I haven't heard a peep from the CIA or NSA.
I've seen the same thing at previous jobs; I had a lot to do and knew a lot of security issues that could potentially cause us problems, but management wasn't willing to give me any more resources (like hiring someone else) despite increasing my workload and responsibilities for no extra pay. Surprise, one of our game's beta testers discovered a misconfigured firewall and default password and got access to one of our backend MySQL servers. Thankfully they reported it to us right away, but... geez.
Well I care. I’d pay a premium to a telco that prioritized security and privacy. But they are all terrible: hoovering up data, selling it indiscriminately, and not protecting it. If they all suck then the default is to use the cheapest.
It’s definitely why I use Apple devices because I can buy directly from Apple and they don’t allow carriers to install their “junkware”.
A BS in CS has maybe one class on security, and then maybe employees have a yearly hour-long seminar on security to remind them to think about security. That isn't enough. And the security team and engineers that put the effort into learning more about security and privacy often aren't enough to guard against every possible problem.
People were shitting a brick over a pretty minor change in photo and location processing at Apple. That’s because they don’t screw up like this.
(Google, on the other hand, is the opposite.)
But, as far as I can tell, the only reason why Apple does this is because privacy these days can be sold as a premium, luxury feature.
> That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.
Frankly, any company that says they're a technology or software business should be building these kinds of systems. They can grab FOSS implementations and build on top or hire people who build these kinds of systems from the ground up. There's plenty of people in platform engineering in the US who could use those jobs. There's zero excuse other than that they don't want to spend the money to protect their customers data.
Telecoms will not get fined for this breach, or fined at amount that is meaningful, so they are not going to care.
Politics has historically incentivized job creation.
As an SRE, I'm just over everyone running around acting like another tool is going to solve the problem. It's not; incentives need to be present for people not to be completely terrible at their job.
Also, I guess I should admit, I have strong aversion to IDPs. They always become some grue that eats me.
> Also, I guess I should admit, I have strong aversion to IDPs. They always become some grue that eats me.
I am an SRE. I stopped using that title professionally some time ago and started focusing on what makes companies reach for SRE when the skill set is the same as a platform engineer's.
A post I wrote on the subject: https://ooo-yay.com/blog/posts/2024/you-probably-dont-need-s...
But don't worry, as soon as this catastrophe is over we'll be back to encryption is bad, security is bad, give us an easy way to get all your data or the bad guys win.
I have to admire those pioneers for seeing this and being right about it. I also admire them for influencing companies like Apple (in some cases by working there and designing things like iMessage, which is basically PGP for texts.) It doesn’t fix a damn thing when it comes to the traditional telecom providers, but it does mean we now have backup systems that aren’t immediately owned.
She was not amused or empathetic to their plight in the slightest. Population of at least 2 I guess.
The SS7 protocol provides the ability to determine which RNC/MMC a phone is paired with at any given time: it's fundamental to the nature of the functioning of the network. A sufficiently sophisticated adversary, with sufficient access to telephony hardware, could simply issue those protocol instructions to determine the location.
Somewhat of a tangent: does anyone have any resources on designing/implementing E2E encryption for an app where users have shared "team" data? I understand the basics of how it works when there's just one user involved, but I'm hoping to learn more about how shared data scenarios (e.g. E2E group chats like Facebook Messenger's) are implemented.
It should give you some ideas on how it's done.
[1] https://nfil.dev/coding/encryption/python/double-ratchet-exa...
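One common answer for shared-data scenarios is the "sender keys" pattern: distribute a random group key over pairwise secure channels once, then encrypt each message a single time under it. A sketch under simplifying assumptions -- the pairwise channels that X3DH/Double Ratchet would provide are modeled as pre-shared keys, and a toy unauthenticated stream cipher stands in for a real AEAD:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy SHA-256 counter-mode stream cipher; the same call encrypts and
    # decrypts. Illustration only -- use a real AEAD (e.g. ChaCha20-Poly1305).
    out, ctr = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Assume each member already shares a pairwise key with the sender
# (established by something like X3DH in a real system).
pairwise = {name: secrets.token_bytes(32) for name in ["bob", "carol", "dave"]}

# Sender: one random group key per group (or per sender-key epoch)
group_key = secrets.token_bytes(32)

# Distribute the group key: one small ciphertext per member (O(n), once)...
key_envelopes = {
    name: keystream_xor(k, b"keydist", group_key) for name, k in pairwise.items()
}

# ...then each message is encrypted only once under the group key (O(1))
nonce = secrets.token_bytes(12)
ciphertext = keystream_xor(group_key, nonce, b"meeting moved to 3pm")

# Receiver ("carol"): unwrap the group key, then decrypt the message
gk = keystream_xor(pairwise["carol"], b"keydist", key_envelopes["carol"])
assert keystream_xor(gk, nonce, ciphertext) == b"meeting moved to 3pm"
```

The trade-off this sketches: adding or removing a member means rotating the group key (another O(n) distribution round), which is exactly the churn cost real group-E2E designs spend most of their complexity managing.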
Leak or lose a customer's location tracking data? That'll be $10,000 per data point per customer please.
It would convert this stuff from an asset into a liability.
The current state is clearly broken and unsustainable, but good luck getting any significant penalties through legislation with a far-right government.
Same principle as fines for hard-to-localize pollution.
If they had the option, the telecommunication companies would love to encrypt traffic and obscure it so much that they have no plausible way of figuring out what is going on. Then they can take customer money and throw their hands up in honest confusion when anyone wants them to moderate their customers' behaviour.
They don't because that would be super-illegal. The police and intelligence services demand that they snoop, log, and avoid data-minimisation techniques. It is entirely a question of regulatory demand, and a matter of time before these sorts of breaches happen; if the US government demands the data then sooner or later the Chinese government will get a copy too. I assume that is a trade-off the US government is happy to make.
Who put the backdoor there? The US government did.
A telecommunications carrier may comply with CALEA in different ways:
https://www.fcc.gov/calea
Intelligence agencies also stockpile software vulnerabilities that they don't report to the vendor because they want to exploit the security flaw themselves.
We'll never have a secure internet when it's being constantly and systematically undermined.
Currently, with proprietary software, there's an incentive for companies to not even acknowledge bugs and it costs them money to fix issues, so they often rely on security through obscurity which is not much of a solution.
Funny that Venmo won't let me use a voip number, but I signed up for Tello, activated an eSIM while abroad and was immediately able to receive an SMS and sign-up. For the high barrier cost of $5. Wow, such security. Bravo folks.
Some companies have much lower thresholds for their KYC, but end up being facilitators of crime and draw scrutiny over time by both their more regulated partners and their governments.
I’d note that the US is relatively lax in these requirements compared to Singapore, Canada, Japan, and increasingly the EU. In many jurisdictions you need to prove liveness, do photo verification, sometimes video interviews with an agent showing your documents.
Except that person you’re responding to explains succinctly how this is security theater that accomplishes little and ultimately is just a thinly veiled tactic for harassing users / coercive data collection. And the person above that is commenting that unnecessary data collection is just an incentive for hackers.
Comments like this just feel like apologism for bad policies, at best. Does anyone really think that people need to be scrutinized because most money laundering is small transactions from individuals, or, is it still huge transactions from huge customers that banks want to protect?
The issue boils down to this: governments don't want the financial infrastructure in their jurisdiction to allow unfettered crime. I've never seen a single government (granted, I've never seen what happens in extremely oppressive regimes, as we don't generally do business there due to sanctions controls) that actively collects KYC outside of large transactions; the regulations exist to ensure a minimum baseline of KYC so the companies themselves can comply and reduce their own losses and instability, since some party is usually left liable in fraud, and in money laundering or sanctions evasion some institution is subject to fines for facilitation.
But to be frank, I think very little of what's done is materially successful against competent criminals, and the consequence of being caught is usually just being blocked until they find a way around. To that end it's not so much security theatre as compliance theatre. On the other hand it does act as a high-pass filter, since most fraud and financial crime is NOT competent. By and large retail finserv is a minimization effort, not a prevention effort.
The regulations that are effective at prevention are usually so restrictive and so difficult to implement that they’re absurd for both the finserv to implement and for the participants to get through the hurdles.
I don’t know there’s any perfect solutions, and what exists is generally dumb, but the intentions are at the core well intended. It’s foolish tho to look at something as complex as financial infrastructure and wave it away as harassment and coercion rather than well intentioned incompetence.
When vtuber-esque deepfakes become trivial for the average person, I wonder what the next stage in this cat-and-mouse becomes. DNA-verification USB dongles?
I actually had an issue with this and ended up sending a notarized letter by snail mail, since I didn't feel like making a special 1hr each way trip during business hours to the closest branch.
Then you have to be ready to accept that there are advantages and disadvantages to your choice of where you live, and that is one of the latter.
There's a reason rural property is so cheap. It comes with a lot of disadvantages and inconveniences and costs that city-dwellers don't need to pay.
Seriously, you see this in any country of any size. Remote may just mean 300 km/186 mi from the coast. Politicians go where the votes are, of course, but this just means disregarding rural areas is a self-fulfilling prophecy. The more you do it, the more remote they become.
One time a company retroactively blocked VOIP numbers, which was really stupid.
I'd say that with Google, chances are that they just stop offering the service.
But, I worry about what happens if I somehow get locked out of the account…
So which would you prefer:
(A) A low-level customer service representative can restore your access, but said representative is arguably susceptible to social engineering and other human weaknesses.
(B) Your account can be protected by a physical 2FA key (YubiKey), but in the case of loss or a compromised account, the processes for recovery are hard to navigate and may not yield successful recovery?
In the case of (A) you have little security. In the case of (B) you can do a LOT to prevent account loss, but if bad things happen (whether your fault or not) you are locked out by default.
From a privacy point of view, I'm not sure that (B) is such a bad option.
But you could make the argument you should do backup of cloud services, the same way you do backup of hard drives.
For my Workspace account, I backup with Google Takeout every 2 months to Backblaze B2. I also sync (with rclone) My Drive to a local directory, which is weekly uploaded to B2.
A PROCESS for verifying the number isn't used for fraud and allowing use. I don't know, maybe the fact that I've been a customer for YEARS, use that number, and have successfully done thousands of dollars in transactions over a platform without any abnormal issue?
All of my 2FA Mules[1] are USMobile SIMs attached to pseudonyms which were created out of thin air.
It helps a lot to run your own mail servers and have a few pseudonym domains that are used for only these purposes.
[1] https://kozubik.com/items/2famule/
Like, the only reason I don't answer the phone and say "this is <Dad's name>", is because I'm honest. You'll never keep a bad guy out that already knows all the information that you ask for - he'll just lie and claim to be the business/account owner.
> he'll just lie and claim to be the business/account owner.
He can lie, but he doesn't have another person's passport to prove his lies.
And you don't need a passport. I've never met a company that will require full KYC-level video-identification with you on every call. You say that you're you (it doesn't matter whether you actually are you), you give them the secret code and they're happy.
$5 is at least 5x the cost of a voip number. I'm not a bank, but if I'm spending money to verify you control a number, I feel better when you (or someone else) has spent $5 on the number than if it was $1 or less.
This is exactly it.
All of these auth mechanisms that tie back to "real" phone numbers and other aspects of "real identity" are not for you - they are not for your security.
These companies have a brutal, unrelenting scam/spam problem that they have no idea how to solve and so the best they can do is just throw sand in the gears.
So, when twilio (for instance) refuses to let you 2FA with anything other than tracing back to a real mobile SIM[1] (how ironic ...) it is not to help you - it is designed to slow down abusers.
[1] The "authy" workflow is still backstopped by a mobile SIM.
>These companies have a brutal, unrelenting scam/spam problem that they have no idea how to solve and so the best they can do is just throw sand in the gears.
Sure does a great job for all the various online social media places that ostensibly have nothing to do with transacting money, still want my phone number, and still get overrun with spam and (promotion of) scams....
Requiring a deposit would be more direct, but administration of deposits would be a lot of work, and you have an uphill battle to convince users to pay anything, and even if they want to pay, accepting money is hard. And then after all that, some abusers will use your service to check their stolen credit cards.
Relevant reading.
Basically comes down to: the costs of acceptable levels of fraud < the cost of eliminating all fraud.
There are processes that would more or less eliminate all fraud, but they are such a pain in the ass that we just deal with the fraud instead.
I don't care. I know it's a numbers game. I know they don't care about me. But companies absolutely lose my business because of this bullshit.
VoIP abuse is so well known (and so automated) that even at $0.10 a number, it would be an order of magnitude easier to pull off.
Banks are always slow and behind the times because they are risk averse. That has pros and cons.
there are the ones that closely follow software updates and you get to complain that things are breaking all the time.
and there are the stable distros, now you get to complain how old and out of date everything is.
Every single one works with GVoice, except Venmo. Chase, Cap1, Fidelity, etc. Not small players.
So while I think you make a fair enough argument for sure, it doesn't seem to be the case when nobody else does it, and makes Venmo seem like a pain in the arse.
That is a closing window and the case in fewer and fewer places. It won't be long until most people would need to fly across the globe or get involved with organised crime to pull that off...
The idea that scammers don't have digital money lying around just waiting to be spent on something is absurdly out of touch with how everything in cyber works.
Corporations "eat" money.
Entities that can feed a corporation, are treated as peers, i.e. "people".
Thus, on shitter, if you can pay, you are a person (and get a blue checkmark).
It risks a lot of "noise" to do it this way. Why not just bribe employees to listen in on high profile targets? Why try to hit them all and create a top level response at the Presidential level?
This feels optics-driven and political. I'm not sure what it means, but it's interesting to ponder on. Attacking infrastructure is definitely the modern "cold war" of our era.
Sadly even most people in security are woefully unaware of the scope and scale of these operations, even within the networks they are responsible for.
The "noise" here was not from the attacker. They don't want to get caught. But sometimes mistakes happen.
> Why not just bribe employees to listen in on high profile targets?
Developing assets is complicated and difficult, attacking SS7 remotely is trivial, especially if you have multiple targets to surveil
There's a huge selection bias factored into what attacks make the news.
You could be an incredibly competent and highly motivated crook and bad luck in the form of an intern looking at logs or a cleaning lady spotting you entering a building could take you down.
If it is an LI attack the answer to which networks are compromised is: All of them that support automated LI.
That's a nasty attack because LI is designed to not be easily detectable because of worries about network operators knowing who is being tapped.
Anyone who has ever worked in networking will understand what I mean.
The networking industry is comically bad. They use ssh but never ever verify host keys, use agent forwarding, use protocols like RADIUS or SNMP which are completely insecure once you pop a single box and use the almost always global shared secret. Likewise the other protocols.
Do they use secure boot in a meaningful way? So they verify the file system? I have news for you if you think yes.
It’s kind of a joke how bad the situation is.
Twenty years ago someone discovered you could inject forged TCP resets to blow up BGP connections. What did the network industry do? Did they institute BGP over TLS? They did not. Instead they added TCP MD5 hashing (RFC 2385, from 1998: https://datatracker.ietf.org/doc/html/rfc2385) using a shared secret, because no one in networking could dream of using PKI. Still true today, if deployed at all, which it usually isn't. And the successor didn't arrive until 2010!
If you want to understand the networking industry consider only this: instead of acknowledging how dumb the situation is and just using tls, instead we got this - https://datatracker.ietf.org/doc/html/rfc5925 - which is almost as dumb as 2385 and just as bad in actual deployment because they just keep using the same deployment model (the shared tuple). Not all vendors that “support” 5925 support the whole RFC.
As an aside this situation is well known. People have talked about it for literal decades. The vendors have shown little to no interest in making security better except point fixes for the kind of dumb shit they get caught on. Very few security researchers look at networking gear or only look at low end junk that doesn’t really matter.
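For a sense of how static this scheme is: per RFC 2385, the option carries an MD5 over the segment plus the shared password -- no nonce, no negotiation, no rotation. A sketch of the digest computation (field layout per the RFC; the function and argument names are mine):

```python
import hashlib
import socket
import struct

def tcp_md5_digest(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                   seq: int, ack: int, offset_flags: int, window: int,
                   payload: bytes, password: bytes) -> bytes:
    # RFC 2385: MD5 over the IPv4 pseudo-header, the TCP header with the
    # checksum zeroed (options excluded), the segment data, and the password.
    seg_len = 20 + len(payload)  # base header + data, options excluded
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 6, seg_len))  # zero, proto=6 (TCP), len
    header = struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                         offset_flags, window, 0, 0)  # checksum, urg ptr = 0
    return hashlib.md5(pseudo + header + payload + password).digest()

# Anyone who learns the shared password can forge valid segments; stealing it
# from one box compromises every session that reuses it (the usual deployment).
d1 = tcp_md5_digest("192.0.2.1", "192.0.2.2", 179, 54321,
                    1000, 2000, 0x5010, 65535, b"", b"s3cret")
d2 = tcp_md5_digest("192.0.2.1", "192.0.2.2", 179, 54321,
                    1000, 2000, 0x5010, 65535, b"", b"other")
assert d1 != d2 and len(d1) == 16
```

There is no per-session key material anywhere in that computation, which is why the shared secret ends up global across a provider network in practice.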
They aren't saying that more have been hacked, they are saying that more have been discovered related to that hack. Any adversary at this level would be monitoring the news, and would take appropriate actions (for gain) or roll up the network rather than allow reverse engineering of IOCs.
More than likely this was not an LI based attack, but rather they don't know for sure how they got in. Nearly all of the guidance is standard cybersecurity best practices for monitoring and visibility, and lowering attack surface with few exceptions (in the CISA guidance).
The major changes appear to be the requirements to no longer use TFTP, and the referral to the manufacturer for source of truth hashes (which have not necessarily been provided in the past). A firmware based attack for egress/ingress seems very likely.
For reference, TFTP servers are what send out the ISP configuration for endpoints in their network, i.e. the customers' modems, and that includes firmware images (which have no AAA). Additionally, as far as I know the hardware involved lacks the ability to properly audit changes to these devices (by design), and TR-47 is rarely used appropriately; the related encryption is also required by law to be backward compatible with known-broken encryption. There was a good conference talk on this a few years ago, at Cyphercon 6.
https://www.youtube.com/watch?v=_hk2DsCWGXs
The particular emphasis on TLS 1.3 (while now standard practice) suggests that connections may be getting downgraded, and that the hardware/firmware at the CPE bridge may be transparently performing MITM against public sites over earlier TLS versions (it's a commonly needed capability).
The emphasis on using specific DH groups may point to breaks in the key exchange for groups not publicly known to be broken (but that in fact are), which may or may not be a factor as well.
If the adversary can control, and insert malicious code into traffic on-the-fly targeting sensitive individuals who have access already, they can easily use information that passes through to break into highly sensitive systems.
The alternative theory, while fringe, is that maybe they've come up with a way to break Feistel networks (in terms of cryptographic breaks).
A while back the NSA said they had a breakthrough in cryptography. If that breakthrough was related to attacks on Feistel network structures (which much of modern symmetric cryptography is built on), that might explain another way (although this is arguably wild speculation at this point). Nearly every computer has a backdoor co-processor built in, in the form of TrustZone, the Management Engine, or AMD's PSP. It's largely only secured by crypto, without proper audit trails.
It presents a low hanging concentrated fruit into almost every computation platform on earth, and by design, its largely not auditable or visible. Food for thought.
Quantum computer breaks a single signing key for said systems, acting like a golden key back door to everything. All the eggs in one basket. Not out of the realm of possibility at the nation state level. No visibility means no perception or ability to react, or isolate the issues except indirectly.
The problem with the shared secret model isn’t that it can be stolen, it’s that it is globally shared within a provider network. You can’t root it in a hardware device. You can’t do forensics to see from what node it was stolen.
We are talking about an industry where they still connect console servers, often to serial terminal aggregators that are on the internal network alongside the management Ethernet ports, which have dumb guessable passwords, often the same one on every box, that all their bottom tier overseas contractors know.
It’s just sad.
It's true that those protocols are basically running shared secrets, but those areas all have some visibility with auditing and monitoring.
You crack a root or signing key at the co-processor level and you can effectively warp and control what anyone sees or does with almost no forensics being possible.
It fundamentally allows a malevolent entity the ability to alter what you see on the fly with no defense possible. Such is the problem with embedded vulnerabilities; it's just like that Newag train thing.
Antitrust and bricking for monopolistic benefit is far more newsworthy than, say, embedding a remote radio-controlled off switch with no plausible cover that can brick the trains as they move harvests, foodstuffs, or military equipment.
It's corruption, not national security. Would many believe it's the latter over the former when it does both?
It is sad that our societal systems have become so brittle that they cannot check or effectively stop the structural defects and destructive influences within itself.
PRC Targeting of Commercial Telecommunications Infrastructure
https://news.ycombinator.com/item?id=42132014
AT&T, Verizon reportedly hacked to target US govt wiretapping platform
https://news.ycombinator.com/item?id=41766610
Yup. The attack hit the CALEA backdoor via a wiretapping outsourcing company. Which one?
* NEX-TECH: https://www.nex-tech.com/carrier/calea/
* Subsentio: https://www.subsentio.com/solutions/platforms-technologies/
* Sy-Tech: https://www.sytechcorp.com/calea-lawful-intercept
Who else is in that business? There aren't that many wiretapping outsourcing companies.
Verisign used to be in this business but apparently no longer is.
[1] https://www.google.com/search?client=firefox-b-d&q=calea+sol...
[2] https://oig.justice.gov/reports/FBI/a0419/findings.htm
That seems pretty clear.
Wiretap systems are on the telecom provider side, and they consist of a bunch of different, in many cases ordinary, networking equipment that can easily be misconfigured.
TTPs (aka the companies listed above) are optional, and usually used by companies that don't have their own legal department to process warrants, or that don't want to deal with the fine details of intercepts.
Is it a great idea to give all that info to India as well?
This is obviously technically impossible, but the desire for that end state makes a ton of sense from the IC’s perspective.
Secrets fail unsafe. Maybe an alternative doesn't.
Government keeps trying to mandate it in various ways. With predictably bad results.
Salt Typhoon - which this discussion is about - is an example. Tools for tracking people that were supposed to be for our side, turn out to also be used by the Chinese. Plus the act of creating partial security often creates new security holes that can be exploited in unexpected ways.
Either you build things to be secure, or you have to assume that it will someday be broken. There is no in between.
That's going quite far. Even with all the details of it documented and open, there's a relatively small number of people who can actually verify that both the implementation is correct and the design is safe. Even though I can understand how it works, I wouldn't claim I can verify it in any meaningful way.
Alternatively: it's trivial for people sufficiently experienced with cryptography. And that's a tiny pool of people overall.
Or go back to Dual_EC_DRBG.
Unless DJB has blessed it, I'll pass.
Avoiding this is obviously a huge effort.
How much effort would it be for the US government to force Google to ship a different APK from everyone else to a single individual?
VS
"You must backdoor the operating system used on billions of devices. Nobody can know about it but we somehow made it a law that you must obey."
Come on, that's not the same amount of effort at all.
Anything you can buy retail will for sure fuck you the user over.
Perfect security isn't possible. See "reflections on trusting trust".
> ANOM was a trap
Yes, ANOM was intended to be a trap.
> and most closed encryption schemes are hideously buggy
Yes they are. Hence some of us use open encryption schemes on our closed-market devices.
> You're actually better off with Android and signal.
I am better off with closed-market devices than I am with any retail device.
> If we had open baseband it would be better
And the ability to audit what is loaded on the handset, and the ability to reflash, etc. In the real-world all we have so far is punting this problem over to another compute board.
> Perfect security isn't possible.
Perhaps, but I was not after "perfect security", I was just after "security" and no retail device will ever give me that, but a closed-market device already has.
> See "reflections on trusting trust".
Already saw it. You're welcome to see:
Hence the security afforded by Signal is very weak in-practice and questionable at best.
Discuss an exceedingly clear assassination plot against the President exclusively over Signal with yourself, between a phone that's traceable back to you and a burner that isn't. If the Secret Service pays you a visit, and that's the only way they could have come by it, then you have your answer.
You want to use this, by all means.
Lessons Learned
We believe that all of the vulnerabilities we discovered have been mitigated by Threema's recent patches. This means that, at this time, the security issues we found no longer pose any threat to Threema customers, including OnPrem instances that have been kept up-to-date. On the other hand, some of the vulnerabilities we discovered may have been present in Threema for a long time.
I believe the Session referred to is here ... https://getsession.org/
Tox is here ? https://tox.chat/
The Matrix I found seems to have been closed down earlier this month ... https://en.m.wikipedia.org/wiki/Matrix_(app) ... that's assuming I found the correct "matrix".
If it matters to you don't take my word for those being the correct points of contact, that's just me searching for two minutes.
As a side rant, I wish people would choose less generic names for their projects, calling something "session" ? You might as well call it "thing".
Yeah, if a nation-state thinks you are a bad enough actor, they might use a high power way to get at you. See Pegasus, for instance
But those exploits are rare, expensive and can be blown.
No one has ever said Signal is perfect security.
But it is damn good. Your SMSes aren't sitting in plaintext on your mobile ISPs network. You aren't going to have them intercepted by a fake mobile tower. And if you and your recipients use disappearing messages, good luck to any prosecutor trying to get them off a device.
And as for Apple sending a fake update? It could happen, but 1) Apple fought this once, and 2) it'd be hard to do in any widespread way without being detected.
Saying Signal protects you from fuck all is not just wrong, it's irresponsible AF.
It's like saying that locks, firewalls, alarm systems, curtains and network monitoring don't work because some people know how to defeat them.
Signal is a great security upgrade for almost anyone. I love seeing more people use it.
Normalizing encryption is great.
My point is that they (and other tech companies) would be highly incentivized against implementing something like a malicious update targeting a single device/user based purely on capitalistic motivations, rather than philosophical/ethical ones.
The "infrastructure" for the targeted updates is implemented by compartmentalized teams, who will be comprised of the clearance community, and the "external" people who work with them are a part of the clearance community.
The real world does work this way. Businesses make business decisions based on bottom-line impact, and businesses generally push back very strongly against governments whenever a government asks them to do things that will cause them to make less money and/or waste money.
>The "infrastructure" for the targeted updates is implemented by compartmentalized teams, who will be comprised of the clearance community, and the "external" people who work with them are a part of the clearance community.
I agree that would be how it would work if it actually happened, but I think you overestimate the appetite (and even ability) of big tech to have any desire to do this kind of thing.
If you are implying that there are teams within big tech companies who secretly do this kind of thing, even against the wishes of other engineering teams (including security engineering teams) within the company... well that seems like a recipe for getting some of the company's most talented and highly paid security engineers incredibly pissed off if they ever find out — and it's very likely they would eventually find out, because it would be extremely difficult to hide this kind of thing over time.
Did Facebook and Twitter do this when the federal government told them to censor?
What did Mike Benz' interview with Tucker (whether you dislike or like Tucker is neither here nor there so let's not get distracted by that) in February of this year (2024) reveal to all of us?
> of big tech to have any desire to do this kind of thing.
Apple is and always will be subservient to NSA, CIA, and the State Department. If you believe today -- after taking a moment to really, truly, seriously think about it -- that it is the other way around, you have a very special kind of stunted personal development.
> (including security engineering teams) within the company...
I respectfully implore you to look into the publicly available information about how many people at Facebook, Google, Twitter (pre-Musk), and Apple have NSA or other "glowie" backgrounds.
> well that seems like a recipe for getting some of the company's most talented and highly paid security engineers incredibly pissed off
You are correct here.
> if they ever find out
They won't, not unless they already have the appropriate clearance, and once they do they will take those secrets to the grave, or else -- unless they can make it to Moscow instead of a black site operated on foreign soil.
> and it's very likely they would eventually find out
Provided they can get into parts of buildings, buildings that aren't even on the same campus, that they aren't authorized to get into, which will never happen. So..
If something else such as some kind of "user convenience" supersedes the core value/goal of security, then the below is not for you. But rich people and the economically less advantaged alike can all have this solution, one way or another.
The only way to achieve the goal is to run a modified "libre" (zero binary blobs) branch of GrapheneOS on a compute board that can load and run Linux without requiring any binary blobs to do so itself. This rules out any compute board (that I am aware of) that has a 5G radio on it. We could use a 5G radio as a WWAN card, but these all require closed firmware and we don't really have a way to protect a host system from them.
So, running another separate compute board that does have a 5G radio on it is necessary.
Another way to achieve the goal is a system-on-chip that is 100% libre and trustworthy, but good luck with that. Maybe the Librem 5 is (honest speculation) a viable candidate.
The secure boot problem is also not easy to solve. One way is a read-only SD card, but this has limitations. Another way is, and you might have guessed: another compute board. This isn't an uncommon pattern already, see F-Secure's Armory MK II device (which has a wireless chip on it that can be removed via heatgun).
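One rough sketch of what a separate verifier board in that pattern might do (the stage names and the flow here are illustrative assumptions, not any shipping product's design): hash each boot-stage image and compare it against "golden" measurements pinned on the verifier, refusing to proceed past the first mismatch.

```python
import hashlib

def measure(blob: bytes) -> str:
    """Measurement = SHA-256 hex digest of a boot-stage image."""
    return hashlib.sha256(blob).hexdigest()

def verify_chain(stages, golden):
    """Check each boot stage against a pinned golden measurement.

    stages: ordered list of (name, image_bytes) as they would be loaded.
    golden: dict mapping stage name -> expected hex digest.
    Returns the first mismatching stage name, or None if all match.
    """
    for name, blob in stages:
        if measure(blob) != golden.get(name):
            return name
    return None
```

This is only the measurement half; a real design also needs the golden values themselves to be stored somewhere the main board can't rewrite, which is exactly what the read-only SD card or a second board is buying you.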
Since we are already running more than one compute board, we can use additional separate compute boards for encryption and decryption, if we want to.
To interface with the board that runs GrapheneOS, a USB touchscreen that is smartphone-sized can be used. The culmination of all of this is a very small backpack containing the compute boards, battery, maybe antenna, etc.
Very rich people and their families already have these kinds of solutions. Other people who are rich in other ways (hacker's mind and motivation) also already have these kinds of solutions.
So, please do enlighten us to what you think the flaws are, or point out actual flaws, or some major gap. At the very least you'll be able to highlight what floating working assumptions I didn't manage to preempt. And, I'll honestly appreciate it a lot.
My "mainstream" is those who attend DEF CON, Blackhat, CCC, or FOSDEM, and also possess technical competency.
Ask anyone worth their salt if they can and should trust binary blobs.
Ask anyone worth their salt if they can and should trust retail system-on-chip that remain effectively undocumented, sans before the hardware source files hit the ODM.
If you believe "just trust me bro" regarding the kernel space binary blobs shipped with GrapheneOS or elsewhere on the system-on-chip is in the category of a "good model", those of us with seeking more than "just trust me bro" tier security are not your audience.But the actual hackers in the world agree 1,000% with my mindset.
> But you have not developed good models to guide you once you're out in the woods.
I have GrapheneOS with zero binary blobs, and a solution based on compartmentalization that has been demonstrated to work in production even for the most user-iest of users.
If you believe you have something to contribute to improve it, please do.
The type of "model" I'm talking about are threat models that create practical security for yourself, without them "proving too much" and making you fall into the trap of designing bespoke solutions that solve the one problem you're focused on while creating many more.
> If you believe "just trust me bro" regarding the kernel space binary blobs shipped with GrapheneOS or elsewhere on the system-on-chip is in the category of a "good model"
No - but I believe it's the best security I'm currently able to achieve with a device that fits in my pocket, has long battery life, is in frequent contact with cell towers, and is inherently meant to communicate with other people running similar devices.
> I have GrapheneOS with zero binary blobs, and a solution based on compartmentalization that has been demonstrated to work in production even for the most user-iest of users.
If you think you've got a better approach for secure hardware to run Android on, then please by all means share! I would love to see it. So far, your allusions fit the all-too-common pattern of security through obscurity.
For perspective, my main desktop/server is an Asus KGPE with zero blobs in the main processor domain. I just don't see the point of fixating on this for the mobile ecosystem dumpster fire - over there mitigating mass surveillance is the best one can hope for. If you think you're a specific target of state/corpo attackers, then to me the current best answer is "don't trust your phone".
An in-kernel binary blob, and untrustworthy binary blobs running on other "hidden cores", very similar to the Intel ME/AMT situation, is a problem that many acknowledge is very serious. Perhaps I am among few who attempted to solve this problem, and did solve this problem, but I am not at all in a minority who view it as a very serious, and intolerable problem. Anyone worth their salt does view the problem as intolerable, the difference on our side is we did something about it.
> the one problem you're focused on while creating many more.
What "many more" problems do you speculate have been created with this approach? This is what I was hoping you could contribute, but I don't see this in your response. I wish I could write that I am disappointed.
> So far, your allusions fit the all-too-common pattern of security through obscurity.
Eliminating binary blobs that run in kernel space on the compute board where the user's messages are decrypted and displayed is not "security through obscurity", this is a hard technical difference and is not obscurity.
I am rather disappointed that you spent the time to respond, yet either did not read my previous post, or did not comprehend it, or just didn't consider the implications ; I believe the third is the case.
As you implicitly acknowledge yourself with an ASUS KGPE with zero blobs, we CANNOT run binary blobs in kernel space, or binary blobs in an Intel ME/AMT equivalent situation, and have a system we can, if we are being honest with ourselves, trust to be secure.
> I believe it's the best security I'm currently able to achieve with
Our "device" does not fit in the pocket, we don't at this time have the means to fit it in a pocket, so we did not attempt this. Users who value the pocket experience over an ipso facto secure device are not our audience (and we don't respect such users). What we have does have better battery life than a device intended for a pockete, better radio connectivity to cell towers, and is inherently meant to also communicate with the lowest common denominator.
What our device also does, is provide a fair playing field for open source software to achieve meaningful security with others who also acted on a better value system by making a choice to do so.
Yes, the network effect is small, but when the other users are your spouse, or your children, or your best friend, or the other board members of a corporation you oversee, or members of your congressional staff, or a journalist, the network effect although not quantitatively meaningful is qualitatively extremely meaningful.
> If you think you're a specific target of state/corpo attackers, then current best answer is "don't trust your phone".
Thank you for acknowledging the problem we solved, you arrived at the same answer we had already arrived at: do not trust the system-on-chip running binary blobs on hidden cores with a binary blob bootloader and also binary blobs in kernel.
> but I believe it's the best security I'm currently able to achieve
My camp, fortunately, has different capabilities.
> then please by all means share! I would love to see it.
I expect someone will be able to do this in a proper way this coming year. At this time I post what I can post here because I would like to see others who have the wherewithal -- which is a matter of willpower, not economic status -- do this.
As of today, the general software developer / IT admin but-not-actually-a-hacker crowd has no idea how fucked it really is.
Piecing it together - it sounds like a larger piece of kit, the main application processor running deblobbed Graphene, with the radios isolated out over USB. Sure, that's always been possible... but what's the draw? Once you're larger than the fits-in-pocket form factor, your comparables include a straightforward deblobbable laptop with WWAN that can just run a libre OS that wasn't created by a surveillance company.
But sure maybe you're aimed at Graphene enthusiasts who are focused on its additional security features despite its adversarial lineage. But why not come right out and say that? Instead of focusing on the positive value, you're basically just shitting on everything else.
Then furthermore, this whole thing started with you condemning Signal itself [0]. If you're solving the treacherous hardware/firmware problem, then what the heck are you using as a messaging program if it's not Signal or similar? Which is why I'm talking about the worries of bespoke solutions...
[0] personally I don't really use Signal because the whole mobile-first trust-Google teetering-on-the-edge-of-proprietary thing has always left a bad taste in my mouth, and practically it's just unwieldy to tie myself to a program that's stuck on the phone I leave by my front door. But it's hard to argue that it isn't secure within the context it's carved out for itself.
To add some further clarity, some people use our solution at music festivals, the kinds of music festivals where people camp outdoors for a few days at a time.
Try "texting" your dad (who also has the same secure mobile solution), texting your girlfriend (who also has the same solution), and your buddy you met at another camp two days ago, while waiting to be served a drink while you're also half-way tripping balls. Not happening on a fuckin' laptop, brah.
A laptop is NOT a comparable user experience to something someone can hold in their hand while on foot:
> maybe you're aimed at Graphene enthusiasts
Both of the responses above were already written in the parent comment here https://news.ycombinator.com/item?id=42557398 and also in a parent comment here https://news.ycombinator.com/item?id=42559741
> with the radios isolated out over USB. Sure, that's always been possible...
And some people actually went ahead and did it. The core idea was not my original idea; it had been done already in one form or another (though not as refined as ours) quite long before. All my camp did was package it so that non-technical people could have something that "just works". Many of the users of these solutions are not technical at all.
A combination of USB and ethernet. In some of these setups the "radio" is a retail Android device that is connected to ethernet via USB.
> Then furthermore, this whole thing started with you condemning Signal itself
Nothing I wrote condemns Signal, but simply confronts the hard reality that Signal does not protect users because by virtue of the platforms Signal runs on de facto, Signal can not protect users. Signal can protect users on my camp's devices however, as was already explained here https://news.ycombinator.com/item?id=42556652
> because the whole mobile-first trust-Google teetering-on-the-edge-of-proprietary thing has always left a bad taste in my mouth
I appreciate that you landed on the some of the same answers that I and others near me did. The key difference is we went ahead and acted on these concerns.
> You still have not described your answer in concrete terms
I feel I have shared more than enough that a thinking person can put 2 and 2 together. I also already wrote "I expect someone will be able to [release this information] in a proper way this coming year." here https://news.ycombinator.com/item?id=42560339
Your limits within reasonably expected reading comprehension have exhausted my available patience. That said, relative to the rest of the world, we likely have more in common than not.
edits: fixed some grammar
It sounds like you have something real, that solves a real problem while adding its own drawbacks, that works for your requirements. Focus on the specific value proposition, including the specific technical details in technical forums. Otherwise, you just sound like a crank. And the security field has a long history of cranks arguing against mainstream advice to sound edgy and authoritative (eg what you said regarding Signal) while then pushing their own bespoke solutions that survive through lack of scrutiny.
What I am showing you is I already answered your question, you fail at reading comprehension, or you fail at comprehending the very concepts themselves. Probably the latter.
> that could solve a real problem, while adding its own drawbacks.
It already solved a real problem. I have asked you repeatedly to specify a real-world drawback other than the physical profile (which the users find tolerable), you have not done this successfully.
> Focus on the specific value proposition
We already did this, and delivered.
> arguing against mainstream advice
Mainstream advice in the security world is, to consider a device secure:
> (eg what you said regarding Signal)
The concept that something running in userspace cannot protect users when 1) the host OS is already compromised (binary blobs in kernel space) and 2) the underlying "hardware" is already compromised (via firmware on higher-privileged cores, similar to Intel ME/AMT) is EXTREMELY MAINSTREAM.
> appeals to authority like "Very rich people and their families already have these kinds of solutions" does not make for a compelling argument.
But this WAS NOT MY ARGUMENT. My argument, as posted here https://news.ycombinator.com/item?id=42557398, was:
The authority that I did appeal to, ultimately, are Systems Administrators and relatively novice hackers equipped to prepare these solutions for themselves.
> their own bespoke solutions
The pattern was standardized over a decade ago. Our own implementation is already standardized with enough units in production that it's not bespoke anymore.
> that survive through lack of scrutiny.
If you were capable of implementing this solution on your own, which you have already effectively admitted you are not, then scrutiny from someone like yourself would be worth more than two rat shits, but you can not, so it is not.
At this point, you are clearly a midwit intelligent enough to comprehend what I have posted, but you still continue to post utter garbage. And ultimately I perceive you as a moderately mentally ill fucking moron.
The people (other than me) in this thread have provable track records talking about this field. They're asking for more details and you just keep insulting them.
The message I posted here https://news.ycombinator.com/item?id=42557398 is excessively detailed.
What we did was put de-blobbed GrapheneOS on a compute board, put secure boot on another compute board, punt the radio onto a separate compute board, add a battery, and manage it all with a management board in a small backpack, with a USB touchscreen for user interface.
Then we productized it for select groups of people.
But, it's really not that complicated. Like it's really not. Many people have built these kinds of things before.
If you want to try to tell me that mindslight has a "provable track record" talking about this field, I have a very very hard time believing something like that because -- and I'm being honest here -- as any reasonable person will also conclude: his responses he has posted here are really fuckin' stupid.
And, yes, I will continue to look down my nose at you as someone who is grossly inferior to me.
All hail devops99 and may the platforms that you build be favoured by your subjects, as unworthy as they surely are.
I'd be pleasantly surprised, and believe you'd achieved a modicum of self awareness, if you just deleted everything you posted here. But I fear that would be out of character...
Sorry to be the one to break it to you, but your description isn't that technically interesting - no aspects of getting Graphene running on the devboard, or other difficulties integrating the parts. The idea of separating out the baseband isn't really novel either. A decade ago I gave a shot at using a mifi+tablet to move in that direction, and to see how far I could get without a proper voice plan. (I eventually got bored and moved on). You're not sitting on some super special idea here, and this vague passive voice "existence proof" style of writing is cagey and tedious to read. Which is probably how I ended up skipping over some actual details.
But do you know what is very interesting? That you've found a niche where the backpack form factor isn't a huge drawback, as well as group(s) of people who actually appreciate the threat model enough to keep spending extra effort doing a nonstandard thing. Those are all social factors that could actually sustain this type of device, rather than merely being passing curiosities that users eventually move on from. Basically it needs to be easy for people to piece together such a setup while mindlessly following a guide, as well as point other curious people to a description of it - the polar opposite of the trash elitist attitude you're pushing. (eg what specific dev boards straightforwardly run Graphene? I don't see any listed on the website)
And so if you actually care about widespread communications security rather than just being some combative wanker on a message board, please please please try to level up your wisdom for your next sockpuppet nym.
The "product" is already successful. Some spent effort, others spent money.
Those who did the latter include defense contractor or other government backgrounds, ""conservative"" (aka normal people) moms who were censored on Facebook and Twitter as early as 2019 and had enough pattern recognition to know the unlawful censorship reached all the way up into the federal government, journalists, and some are in the category of politician.
Think of what Tucker Carlson shared with the public "the NSA got into my Signal account, which I didn't know they could do". I don't expect our solution to stand up to NSA, but unlike a retail device the starting point of the digital playing field on my camp's solution doesn't let digital intrusion be a cakewalk for "glowies" like retail devices do. Glowies have to work significantly harder to compromise what we have.
Some of the "Instagram famous" gen Z stereotypical "hot girls" who are computer illiterate and generally aloof (vapid on the surface) were immediately willing to tolerate the overhead of "touchscreen cabled to a backpack" when they were told "when you do a call with mom or dad, that call does actually stay protected". Trashy aka "low socioeconomic status" people don't give a shit about family privacy/autonomy, but these people do give a shit about it.
All aforementioned categories of users have already experienced abuse, or anticipate being abused, or they simply have enough dignity in their life that they're not going to just give it away like typical retards do; they are not going to "eventually move on" from "this computer I carry on my person every day is not designed for me to get fucked over" and then downgrade to a retail device that is by design (in one way or another) positioned to fuck them over. Sans a "burner" device for some specific narrow purpose (Instagram presence) that has had its internal mic gutted and has hardware shutters on its cameras.
The technical concept is what I am allowed to post about so that's what I did. As I already wrote earlier (and also then later cited I had written), something cohesive will be posted later this year, and if the person I expect to do it doesn't then I'll do it myself. Or, one of the other existing players in the space will, or someone else entirely (and I'd be perfectly happy with that).
.
> You're not sitting on some super special idea here
I appreciate you acknowledging this point, a point that I had emphasized, and I feel I had done so rather clearly, several times above. Many Qubes users have been doing this since 2018.
The essential thing my camp did that was "special" was package it professionally in a way that "normie" users can succeed with it out-of-the-box.
Like with any specific operating system and hardware combination there are implementation specific bugs here and there, but nothing major.
.
> how far I could get without a proper voice plan.
Some use "2FA mule", like this https://kozubik.com/items/2famule/ ; though we advise to physically remove the microphone of the 2FA mule and presume any WiFi/Bluetooth traffic from it is hostile.
Those who need PSTN (legacy phone network) voice or 911 can use another device for that.
No one using our mini-backpack is missing out on any functionality they actually need.
.
> eg what specific dev boards straightforwardly run Graphene? I don't see any listed on the website
I do appreciate you bothering to look. I actually do. There are boards that can run with zero blobs and are intended for production use as sold; so long as they can run a Linux kernel and have a GPU that Android can use, they can run GrapheneOS.
Our solution is not supported by nor known about by the GrapheneOS project; we have our own branch and CI/CD and all that.
.
> Which is probably how I ended up skipping over some actual details.
Yeah, the performance (or lack thereof) of your reading comprehension has been rather noticeable.
.
> the polar opposite of the trash elitist attitude you're pushing.
Okay but no matter what happens, I will always get more money and more pussy than you.
Clearly, the counter-intel part of the US government effort has been less successful than the surveillance and intelligence gathering effort. But that doesn't mean that the US government wants all those other nations to be able to gather data from these systems. Our government wants nothing more than to be the only national government capable of gathering data from these systems.
Getting them to actually use them is hard, especially when the whole point of the app is to communicate with other people, and literally none of the people they regularly communicate with, other than yourself, use (or even know about) Signal.
Stupidity and banality are a far greater threat than conspiracy.
End-to-end encryption has proven to be unworkable in every context it's been tried. There are no end-to-end encrypted systems in the world today that have any use, and in fact the term has been repurposed by the tech industry to mean pseudo-encrypted, where the encryption is done using software that is also controlled by the adversary, making it meaningless. But as nobody was doing real end-to-end encryption anyway, the engineers behind that decision can perhaps be forgiven for it.
I'd say there's a very real use for this, though, which is that with mobile applications it's more complicated to compromise a software deployment chain than it is to compromise a server-side system. If you're a state-level attacker and you want to coordinate a deployment of listening capabilities on Signal, say, you need to persistently compromise Signal's software supply chain and/or build systems, and do so in advance of other attacks you might want to coordinate with, because you need to wait for an entire App Store review cycle for your code to propagate to devices. The moment someone notices (say, a security researcher MITM'ing themselves) that traffic doesn't match the Signal protocol, your existence has been revealed. Whereas for the telcos in question, it seems it was possible to just compromise a server-side system to gain persistent listening capabilities, which could happen silently.
Now, this can and should be a lot better, if, say, the Signal app was built not by Signal but by Apple and Google themselves, on build servers that provably create and release reproducible builds straight from a GitHub commit. It would remove the ability for Signal to be compromised in a non-community-auditable way. But even without this, it's a nontrivial amount of defense-in-depth.
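The "provably reproducible builds" idea above boils down to a simple check: anyone can rebuild the app from the published source and compare their artifact, bit for bit, against what the store shipped. A minimal sketch of that comparison (file names are hypothetical; real APK verification also has to account for signing blocks):

```python
import hashlib

def digest(path: str) -> str:
    """SHA-256 of a build artifact, read in chunks to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_reproducible(store_artifact: str, local_artifact: str) -> bool:
    """A build is reproducible iff the store copy and a clean local
    rebuild from the same commit are bit-identical."""
    return digest(store_artifact) == digest(local_artifact)
```

The point of community auditability is that this check requires no trust in the vendor: any mismatch between the store copy and an independent rebuild is publicly demonstrable.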
As the article points out, there are many other adversaries to be concerned about. Protecting against them would be good. Don’t give up so quickly.
Aside - not the main point —>
I actually do not know if we are at the level of "forced speech" in the US. Being forced to publish hacked apps would fall under that category. Forced silence is something different, and less powerful. Still bad, obviously.
https://berthub.eu/articles/posts/5g-elephant-in-the-room/
So is that not the case for US telecoms?
That's amusing. I'll grant that US companies haven't outright surrendered, and are still at least permitted to engage in lip service on the issue. But actual "fighting"? That would mean a tech world that looks very different than what we have today, and would fatally conflict with no end of "interests" in the US.
Yeah sure, except giving the NSA access and complying with the CLOUD Act.
I guess Starlink could easily geolocate every 4G/5G phone IMEI with huge direct-to-cell antennas.
SS7 only gets into the picture after the handset has connected to the home network, from what I understand (n.b. not a telco engineer). The IMEI is exposed to the network, but only to your network and only after the handset sets up an encrypted and authenticated connection with it.
5G uses a thing called a GUTI to identify handsets, not an IMEI. Think of it like a GUTI being a temporary IPv6 address allocated for a few hours by DHCP, and the IMEI being like a browser cookie. IMEI is exposed to your home network and networks you roam onto, but merely being in range of a tower doesn't expose it, and it's never transmitted in the clear over the air.
Also, within a network most of the components don't get access to the IMEI either.
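The GUTI analogy above can be sketched as a lookup table held only by the core network: the handset presents a short-lived random identifier over the air, and only the core can map it back to the permanent identity, and only while the allocation is live. This is a toy illustration of the indirection, not 3GPP-accurate (class and method names are made up):

```python
import secrets
import time

class CoreNetwork:
    """Toy model of temporary subscriber identifiers: the core maps a
    short-lived random token to the permanent identity, and nothing
    outside the core ever sees that mapping."""

    def __init__(self, lifetime_s: float = 4 * 3600):
        self.lifetime_s = lifetime_s
        self._temp_to_permanent = {}  # visible only inside the core

    def attach(self, imei: str) -> str:
        """Handset attaches over an encrypted channel and is handed a
        fresh temporary identifier (the GUTI in the analogy)."""
        guti = secrets.token_hex(8)
        self._temp_to_permanent[guti] = (imei, time.time() + self.lifetime_s)
        return guti

    def resolve(self, guti: str):
        """Only authorized core components may resolve a temporary id,
        and only while the allocation is still valid."""
        entry = self._temp_to_permanent.get(guti)
        if entry is None or time.time() > entry[1]:
            return None
        return entry[0]
```

An eavesdropper near a tower sees only the random `guti` value, which is useless without the core's table and changes on reallocation.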
Answer delayed by hours due to HN rate limiting.
None? As I said, I have not seen SS7 for a decade+ in the USA/Canada. IMEI catchers have nothing to do with SS7.
https://www.youtube.com/watch?v=wVyu7NB7W6Y
Are you saying the SS7 messages they're looking at of a Canadian telephone subscriber just aren't there?
And this is the EFF saying in July 2024 that the FCC should really make telcos address vulnerabilities in SS7:
https://www.eff.org/deeplinks/2024/07/eff-fcc-ss7-vulnerable...
Are you saying they're just wrong, those SS7 networks don't exist in the USA?
I mean, the article links the FCC request-for-comment on SS7 networks. Just as a quote: https://docs.fcc.gov/public/attachments/DA-24-308A1.pdf
This is dated March 2024. It's talking about the very thing you say you haven't seen for more than a decade. To me, it sounds like that thing (the SS7 network) is alive and well in the USA, and the federal government is concerned about its lax security allowing spies to discover phone users' location information - the very topic we're discussing. It sounds like you're talking mince.
If your claim is that there is literally no SS7 in US and Canadian telephone networks, then that is straight-up wrong. It exists in every network that still supports 2G/3G wireless protocols and classic PSTN standards. It was replaced in 4G/5G and SIP, but that requires your operator only supports those protocols and doesn't continue to support the old protocols. If it does, it will still have SS7 signalling and will still be susceptible to attacks (though it is free to run its own security to block them).
If your claim is that you haven't seen SS7 in a decade, then sure, maybe you haven't. But given there is actual, ongoing spying, impersonation, etc., that can be demonstrated in North America in 2024, and everyone involved says "it's due to SS7", and you're out here saying it's-so-rare-you-haven't-seen-in-a-decade, then what exactly is happening? What are the hackers using then, when the experts say they're exploiting SS7, if you insist it's not there?
Why did the GSMA publish this security paper in 2019? https://www.gsma.com/solutions-and-impact/technologies/secur...
Why are they promoting a Code of Conduct for GT lessees? https://www.gsma.com/solutions-and-impact/technologies/secur...
If you claim there are no SS7 networks in the USA or Canada, please explain:
1) why the FCC believes they exist and need to be secured, as per their March 2024 note
2) what the UMTS networks, still operational in Canada, are using for messaging (note the 2025 dates in https://en.wikipedia.org/wiki/3G#Phase-out for Canada; 2G/3G is still alive and well there. And I note that most of the 3G phase out in the USA was in 2022, not in 2014 which is what they'd have to be for you to not have seen SS7 for a decade)
3) what the POTS networks, still operational in the USA and Canada, are using for messaging (noting that FCC 19-72 only removes the requirement on ILECs to provide UME Analog Loops to CLECs, and does not require them to shut down POTS networks entirely by August 2022. For example, AT&T only plans to have transitioned 50% of its POTS network by 2025)
The federal government wouldn't pay hundreds of millions of dollars[0] to catch one or two fishing boats.
[0] https://www.usaspending.gov/award/CONT_AWD_N6600122C0065_970...
Sounds like the root of the issue.
Nice, we do not want the CEOs of these telcos to have to give up their bonuses. So we force them to do just the bare minimum. Isn't capitalism great.
This has nothing to do with capitalism. The Soviet Union wasn’t a paragon of information security.
The goal is to make the number at the bottom of the piece of paper bigger by a large enough margin in the next ninety days. If you can prove that there's the imminent risk of a specific cyberattack in the next 90 days and that it will have an adverse impact on getting that number bigger, fine, company leadership will pay attention, but that's rarely the case. Most cyberattacks are obviously clandestine in nature, and by the time they're found, the move isn't to harden infrastructure against known unknowns, but to reduce legal exposure and financial liability for leaving infrastructure unsecured. It's cheaper, and makes the number at the bottom of the piece of paper bigger.
1. Capitalists seem pretty content with money losing ventures for far more than "the next ninety days", as long as they think it'll bring them future profits. Amazon and Uber are famous examples.
2. You think the government (or whatever the capitalism alternative is) aren't under the same pressure? Unless we live in a post scarcity economy, there's always going to be a beancounter looking at the balance sheet and scrutinizing expenses.
Sometimes thought-terminating quips are not enough.
But if you think it is, I encourage you to run Yggdrasil.
It's not a totally silly suggestion, and it's not totally sensible either. Light hearted. I doubt any exec in any telco outside of Jio or maybe Comcast would go there. Amongst other things, they'd destroy a lot of capital value doing the Ripley. Well... selling off the liberated v4 space replaces some of that, until the price crashes.
The NSA/CIA need to start making systems more secure by default and stop thinking spying on their own populations is a top priority.
The NSA and CIA are neither able nor authorized to defend all privately-owned critical infrastructure. While concerns about agency oversight are warranted, I can assure you that spying on the population is not their top priority. It's abundantly clear that foreign threats aren't confined to their own geographies and networks. That can't be addressed without having the capability to look inward.
Secure by Design is an initiative led by CISA, which frequently shares guidance and threat reporting from the NSA and their partners. Unfortunately, they also can't unilaterally secure the private sector overnight.
These are difficult problems. Critical infrastructure owners and operators need to rise to the challenge we face.
i think you’re mistaking a sentiment of “china is dangerous, her interests specifically contradict ours, and we must permanently cripple her power before she gets worse” for “china is in violation of muh international norm #627!!”
The digital war has been running for quite a while, and there won't be a real one. China has nothing to gain from starting one. I mean seriously... why would you shoot your customer?
It depends on your goal. If it is strictly a commercial relationship, "shooting your customer" could be advantageous for preserving a revenue stream. Customer lock-in could be seen as a form of "shooting your customer".
If your goal is political, "shooting your customer" may enable a regime change that is friendlier to you. We have done this multiple times in the Middle East, Central America, and South America.
The US has done what it has done in the regions you list because they're already unstable (particularly the Middle East) and have no way of striking decisive blows against US territory.
We must implement as LAW that a SIM card can provide, and only provide, a Zero Knowledge Proof of "this SIM is valid for this cellular/data plan up to a specific date".
If they want to track us all the time, whatever, if they can't keep that data safe from the Chinese Communist Party, then they aren't competent enough to have it.
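The interface being proposed can be sketched as: the network learns "valid until this date" and nothing linkable to the subscriber. This toy is not a real zero-knowledge proof (a proper construction would use anonymous credentials or a SNARK); it just illustrates the shape, using a carrier-issued MAC over a random pseudonym plus an expiry date, with no IMSI/IMEI anywhere in the credential. All names here are hypothetical:

```python
import hashlib
import hmac
import secrets
from datetime import date

# Key shared between the carrier and the networks that verify its SIMs.
CARRIER_KEY = secrets.token_bytes(32)

def issue_credential(valid_until: date) -> dict:
    """Carrier side: bind an expiry date to a random pseudonym.
    Crucially, no permanent subscriber identity goes into the credential."""
    pseudonym = secrets.token_hex(16)
    msg = f"{pseudonym}|{valid_until.isoformat()}".encode()
    return {
        "pseudonym": pseudonym,
        "valid_until": valid_until.isoformat(),
        "tag": hmac.new(CARRIER_KEY, msg, hashlib.sha256).hexdigest(),
    }

def tower_accepts(cred: dict, today: date) -> bool:
    """Network side: verifies the MAC and the expiry. All it learns is
    'some valid SIM, good until <date>' - nothing to track a person by."""
    msg = f"{cred['pseudonym']}|{cred['valid_until']}".encode()
    expected = hmac.new(CARRIER_KEY, msg, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cred["tag"])
            and today.isoformat() <= cred["valid_until"])
```

The obvious gap versus the real proposal is that the pseudonym is static here, so repeated attaches are linkable to each other; a genuine ZKP scheme would let the SIM produce a fresh, unlinkable proof on every attach. But it shows why "prove plan validity" and "reveal identity" are separable requirements.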
Now is a good time to remind everyone that a SIM card is a full blown computer with CPU, RAM and NV storage.
Further, your carrier can upload and execute code on your SIM card without your knowledge or the knowledge of the higher level application processor functions of your telephone.
Answer delayed by hours due to HN rate limiting.
And, hopefully your USB stack, or your phone's equivalent interface to the SIM, doesn't have vulnerabilities that the small computer that is the SIM card could exploit.
Operating systems that center their efforts on protecting high-risk users, like Qubes, dedicate a whole copy of Linux running in a Xen VM to interfacing with USB devices.
It'd be great if more information were available on how devices like Google's Pixel devices harden the interface for SIM cards.
How do you implement bandwidth quotas with this?
Yet when one reads these articles it's just, "China, China, China!!!"
Anyone have a link to actual evidence?
Plainly I have no real evidence for this, other than the constant lack of evidence for their claims, and the doubts that are cast within the infosec community when data is available.
As such, even if Xi Jinping himself had stood up at the UN and claimed responsibility for a particular Windows kernel-mode rootkit, that still wouldn't be incontrovertible evidence.