Shocking Dark + Industrial Landscape
Damnation Alley + Technology

There are two iron laws of security that are often tragically ignored:
I. “There is no abstract ‘security’ - only security from some specific threat”
II. “There is no security in obscurity.”
Bridgefy, an app that’s been billed as a way for protesters to communicate securely, illustrates both of them.
Bridgefy is an offline messaging tool - a mobile app that uses Bluetooth to pass encrypted messages around a crowd where there is no internet access.
It was originally billed as being useful for big festivals and concerts out in the countryside, where there were lots of people but little or no internet connectivity.
However, as protests have spread around the world, the company has promoted its product as a tool for at-risk protesters seeking to coordinate uprisings for which they might face severe retaliation, including imprisonment, torture and murder.
https://arstechnica.com/features/2020/08/bridgefy-the-app-promoted-for-mass-protests-is-a-privacy-disaster/
In April, a group of Royal Holloway researchers audited the app and found it severely unsuitable for these contexts, potentially exposing users to life-threatening hazards. They told the company about these flaws then, but have only now published their findings.
https://martinralbrecht.files.wordpress.com/2020/08/bridgefy-abridged.pdf
The researchers’ findings reveal that the threats to users from using the app at festivals are very different to the threats that protesters face in repressive regimes (“There is no abstract ‘security’ - only security from some specific threat”).
They also find that the product team made a bunch of mistakes and then overlooked them - a common problem (it’s why I can’t find my own typos!) that exposed users to attacks from anyone who knew how to hunt for these errors (“There is no security in obscurity”).
For example, the app sends the ID of both the sender and recipient of every message “in the clear” (without encryption). That allows an attacker who intercepts this metadata to assemble social graphs: Alice knows Bob, Bob knows Carol.
This might expose concertgoers to some risk (for example, if Carol is arrested for selling drugs, Alice and Bob’s messages to her might put them under suspicion). But in a protest context, that exposes the whole movement to risk.
What’s more, the identifiers the app uses are tied to users’ phone numbers: an attacker at a concert would need access to a database that maps phone numbers to real identities. A state-level adversary can simply demand these connections from the phone company.
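To make the metadata problem concrete, here’s a minimal sketch - hypothetical message format and names, not Bridgefy’s actual wire protocol - of how a purely passive eavesdropper could assemble a social graph from cleartext sender/recipient IDs:

```python
from collections import defaultdict

# Hypothetical intercepted packets: (sender_id, recipient_id) pairs read
# straight off the air. No decryption needed - the IDs travel in the clear.
intercepted = [
    ("alice", "bob"),
    ("bob", "carol"),
    ("alice", "carol"),
    ("bob", "carol"),
]

# Build an undirected social graph: who talks to whom, and how often.
graph = defaultdict(lambda: defaultdict(int))
for sender, recipient in intercepted:
    graph[sender][recipient] += 1
    graph[recipient][sender] += 1

# Everyone who exchanged messages with Carol is now a suspect by association.
carols_contacts = sorted(graph["carol"])
print(carols_contacts)  # ['alice', 'bob']
```

Note that the attacker never touches the encrypted payloads - the social graph falls out of the routing metadata alone.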
But not all the flaws in the system stem from the differences in threats at concerts and protests. Some of Bridgefy’s flaws threaten users in ANY context, and stem from the developers’ own blind spots about errors in their thinking.
For example, the system doesn’t have any “out of band” way to initialize keys between users. That means that when Alice wants to send a secret message to Bob, she first announces to the whole network that she is Alice and this is her public key that Bob should use.
An attacker in the network can - rather than passing that message on - replace it with a message that substitutes their OWN key, and thereafter intercept, read, and relay all the messages from Alice to Bob (a “man in the middle” attack).
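A toy illustration of that key-substitution attack - the names and the stand-in “encryption” here are hypothetical, a real attack does the same swap with genuine public keys:

```python
# Toy model: keys are strings, and "encryption" just tags a message with
# the key it was encrypted to. The substitution logic is what matters.

def encrypt(message, public_key):
    return (public_key, message)  # stand-in for real public-key encryption

def decrypt(ciphertext, public_key):
    key, message = ciphertext
    assert key == public_key, "wrong key"
    return message

# Alice broadcasts her key announcement through the mesh...
announcement = {"from": "alice", "public_key": "alice-key"}

# ...but Mallory, a hop in the middle, swaps in her own key before relaying.
relayed = dict(announcement, public_key="mallory-key")

# Bob trusts the announcement he received and encrypts to Mallory's key.
ciphertext = encrypt("meet at the fountain at 6", relayed["public_key"])

# Mallory reads the message, then re-encrypts it to Alice's real key and
# relays it, so neither Alice nor Bob notices anything wrong.
plaintext = decrypt(ciphertext, "mallory-key")
forwarded = encrypt(plaintext, "alice-key")
```

The fix is some out-of-band way to verify keys - scanning a QR code in person, comparing fingerprints over another channel - so the network itself is never the sole source of truth about who owns which key.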
Worse than that, the actual encryption padding used for the messages is PKCS #1 v1.5, a scheme that has been effectively deprecated since 1998, when Bleichenbacher published a practical attack on it.
The app also fails to do vital forms of input sanitization: it doesn’t check for “zip bombs” - small compressed files that, when decompressed, expand to junk files that are millions of times larger. These bombs could crash enough devices in the network to shut it down.
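The defence against this is straightforward input sanitization: cap how much you’re willing to decompress. Here’s a sketch of the principle using Python’s zlib - an illustration, not Bridgefy’s actual code:

```python
import zlib

MAX_DECOMPRESSED = 1024 * 1024  # refuse to inflate beyond 1 MiB

def safe_decompress(data, limit=MAX_DECOMPRESSED):
    """Decompress, but bail out if the output would exceed the limit."""
    d = zlib.decompressobj()
    out = d.decompress(data, limit)       # produce at most `limit` bytes
    if d.unconsumed_tail or not d.eof:    # more output was waiting: a bomb
        raise ValueError("decompressed size exceeds limit - possible zip bomb")
    return out

# A tiny "bomb": 100 MB of zeros compresses down to roughly 100 KB.
bomb = zlib.compress(b"\x00" * (100 * 1024 * 1024))

try:
    safe_decompress(bomb)
except ValueError as e:
    print(e)  # rejected before it can exhaust memory or crash the device
```

Without a cap like this, a single small packet can force a device to allocate gigabytes of memory - and a handful of such packets can knock enough phones offline to take down a mesh network built out of them.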
Though Bridgefy has known of the vulnerabilities since April, they are only now announcing them. They attribute the delay to their fruitless internal efforts to remediate these defects, and their ultimate conclusion that their system needs to be rebuilt from the ground up.
They say they are now doing that work, rebuilding the app around the Signal protocol, which is very robust and has been widely probed to identify and shore up weaknesses.
It’s good that they’re doing this. A third iron law of security is that “Security is a process, not a product” - that is, security is always contingent, and requires constant tending and upgrading to patch newly identified defects.
We can’t and shouldn’t expect products to be perfectly secure - all we can ask is that product teams are transparent about which threats they considered in their design, how their products work, and which defects have been identified in them.
Unfortunately, while Bridgefy is doing the right thing by acknowledging these bugs, thanking the research team, and fixing them, the rest of their conduct is less than exemplary.
It was wrong to promote an app designed for concerts as a tool for protesters without considering the differences in the threats to those user populations.
Worse, though the team has known of these defects since April, they didn’t start correcting the record on end-to-end encryption promises until June. And, as Dan Goodin points out on Ars Technica, their messaging continues to imply that it is safe to use.
why is this being reported like it’s a good thing? and it’s not just women who are at risk, I’m sure this would cause hell for illegal immigrants and people who are running from abusers
What would happen is: murder, sexual assault, home invasion, burglary, stalking, abuse, doxxing, kidnapping
And this goes for everyone, especially POC, queer folks, disabled folks, immigrants and women.
(I most likely even forgot some.)
In short: hell would break loose. Sounds like a fucking black mirror episode.
Also, from another reblog:

(A little more on this: The Secretive Company That Might End Privacy as We Know It)
PALANTIR?
‘Cartoon villain’ indeed, what the fuck.
It’s increasingly being used by local law enforcement agencies in the U.S. and Canada, scraping social media profiles for photos to match against. The above NYT article is well worth the read.
if kirk and spock hadn’t fucked in the sands of vulcan we would have to manually open the doors every time we go to the store
sorry,,, What does this mean?
if star trek hadn’t become as popular as it did (which is in large parts due to kirk’s and spock’s relationship) a lot of scientific advancements wouldn’t have happened / would’ve taken longer, including slidey doors :D
Yepp, including those adorable flip phones I keep hoping will make a comeback

Beyond the Amazon Echo assistant in your kitchen or the disembodied apps that glow on your iPhone screen are a hidden army of “ghost workers” who intervene when algorithms trip up. This invisible labour force administers complicated takeaway orders, moderates explicit Facebook content and verifies your Uber driver’s picture when an algorithm fails to recognise their new haircut.
Who does this kind of work? People in the US, India and elsewhere, working from their bedroom or kitchen counter, connected to the internet and often earning less than the minimum wage. Ghost workers are pivotal to digital capitalism: the Pew Research Center estimates they number 20 million people.
the ‘automated’ future of capitalism is one of half alive computers and half dead people
I’m glad this article exists because so many people assume technology today is way more advanced than it actually is, which leads to this whole idea of an easy transition from work to automation.
Cray supercomputers at Lawrence Livermore National Laboratory, circa 1984.