Forcing your computer to rat you out
Powerful people are imprisoned by the cluelessness of their own isolation, locked up with their own motivated reasoning: “It’s impossible to get a CEO to understand something when his quarterly earnings call depends on him not understanding it.”
Take Mark Zuckerberg. Zuckerberg insists that anyone who wants to use a pseudonym online is “two-faced,” engaged in dishonest social behavior. The Zuckerberg Doctrine claims that forcing people to use their own names is a way to ensure civility. This is an idea so radioactively wrong, it can be spotted from orbit.
From the very beginning, social scientists (both inside and outside Facebook) told Zuckerberg that he was wrong. People have lots of reasons to hide their identities online, both good and bad, but a Real Names Policy affects different people differently:
https://memex.craphound.com/2018/01/22/social-scientists-have-warned-zuck-all-along-that-the-facebook-theory-of-interaction-would-make-people-angry-and-miserable/
For marginalized and at-risk people, there are plenty of reasons to want to have more than one online identity — say, because you are a #MeToo whistleblower hoping that Harvey Weinstein won’t sic his ex-Mossad mercenaries on you:
https://www.newyorker.com/news/news-desk/harvey-weinsteins-army-of-spies
Or maybe you’re a Rohingya Muslim hoping to avoid the genocidal attentions of the troll army that used Facebook to organize — under their real, legal names — to rape and murder you and everyone you love:
https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/
But even if no one is looking to destroy your life or kill you and your family, there are plenty of good reasons to present different facets of your identity to different people. No one talks to their lover, their boss and their toddler in exactly the same way, or reveals the same facts about their lives to those people. Maintaining different facets to your identity is normal and healthy — and the opposite, presenting the same face to everyone in your life, is a wildly terrible way to live.
None of this is controversial among social scientists, nor is it hard to grasp. But Zuckerberg stubbornly stuck to this anonymity-breeds-incivility doctrine, even as dictators used the fact that Facebook forced dissidents to use their real names to retain power through the threat (and reality) of arrest and torture:
https://pluralistic.net/2023/01/25/nationalize-moderna/#hun-sen
Why did Zuck cling to this dangerous and obvious fallacy? Because the more he could collapse your identity into one unitary whole, the better he could target you with ads. Truly, it is impossible to get a billionaire to understand something when his mega-yacht depends on his not understanding it.
This motivated reasoning ripples through all of Silicon Valley’s top brass, producing what Anil Dash calls “VC QAnon,” the collection of conspiratorial, debunked and absurd beliefs embraced by powerful people who hold the digital lives of billions of us in their quivering grasp:
https://www.anildash.com/2023/07/07/vc-qanon/
These fallacy-ridden autocrats like to disguise their demands as observations, as though wanting something to be true was the same as making it true. Think of when Eric Schmidt — then the CEO of Google — dismissed online privacy concerns, stating “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place”:
https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmidt-dismisses-privacy
Schmidt was echoing the sentiments of his old co-conspirator, Sun Microsystems CEO Scott McNealy: “You have zero privacy anyway. Get over it”:
https://www.wired.com/1999/01/sun-on-privacy-get-over-it/
Both men knew better. Schmidt, in particular, is very jealous of his own privacy. When CNET reporters used Google to uncover and publish public (but intimate and personal) facts about Schmidt, he ordered Google PR to ignore all future requests for comment from CNET reporters:
https://www.cnet.com/tech/tech-industry/how-cnet-got-banned-by-google/
(Like everything else he does, Elon Musk’s policy of responding to media questions about Twitter with a poop emoji is just him copying things other people thought up, making them worse, and taking credit for them:)
https://www.theverge.com/23815634/tesla-elon-musk-origin-founder-twitter-land-of-the-giants
Schmidt’s actions do not reflect an attitude of “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” Rather, they are the normal response that we all have to getting doxed.
When Schmidt and McNealy and Zuck tell us that we don’t have privacy, or we don’t want privacy, or that privacy is bad for us, they’re disguising a demand as an observation. “Privacy is dead” actually means, “When privacy is dead, I will be richer than you can imagine, so stop trying to save it, goddamnit.”
We are all prone to believing our own bullshit, but when a tech baron gets high on his own supply, his mental contortions have broad implications for all of us. A couple years after Schmidt’s anti-privacy manifesto, Google launched Google Plus, a social network where everyone was required to use their “real name.”
This decision — justified as a means of ensuring civility, but in truth a transparent ruse to improve ad targeting — kicked off the Nym Wars:
https://epeus.blogspot.com/2011/08/google-plus-must-stop-this-identity.html
One of the best documents to come out of that ugly conflict is “Falsehoods Programmers Believe About Names,” a profound and surprising enumeration of all the ways that the experiences of tech bros in Silicon Valley are the real edge-cases, unreflective of the reality of billions of their users:
https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/
This, in turn, spawned a whole genre of programmer-fallacy catalogs, falsehoods programmers believe about time, currency, birthdays, timezones, email addresses, national borders, nations, biometrics, gender, language, alphabets, phone numbers, addresses, systems of measurement, and, of course, families:
https://github.com/kdeldycke/awesome-falsehood
But humility is in short supply in tech. It’s impossible to get a programmer to understand something when their boss requires them not to understand it. A programmer will happily insist that ordering you to remove your “mask” is for your own good — and not even notice that they’re taking your skin off with it.
There are so many ways that tech executives could improve their profits if only we would abandon our stubborn attachment to being so goddamned complicated. Think of Netflix and its anti-password-sharing holy war, which is really a demand that we redefine “family” to be legible and profitable for Netflix:
https://pluralistic.net/2023/02/02/nonbinary-families/#red-envelopes
But despite the entreaties of tech companies to collapse our identities, our families, and our online lives into streamlined, computably hard-edged shapes that fit neatly into their database structures, we continue to live fuzzy, complicated lives that only glancingly resemble those of the executives seeking to shape them.
Now, the rich, powerful people making these demands don’t plan on being constrained by them. They are conservatives, in the tradition of #FrankWilhoit, believers in a system of “in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect”:
https://crookedtimber.org/2018/03/21/liberals-against-progressives/#comment-729288
As with Schmidt’s desire to spy on you from asshole to appetite for his own personal gain, and his violent aversion to having his own personal life made public, the tech millionaires and billionaires who made their fortune from the flexibility of general purpose computers would like to end that flexibility. They insist that the time for general purpose computers has passed, and that today, “consumers” crave the simplicity of appliances:
https://memex.craphound.com/2012/01/10/lockdown-the-coming-war-on-general-purpose-computing/
It is in the War On General Purpose Computing that we find the cheapest and flimsiest rhetoric. Companies like Apple — and their apologists — insist that no one wants to use third-party app stores, or seek out independent repair depots — and then spend millions to make sure that it’s illegal to jailbreak your phone or get it fixed outside of their own official channel:
https://doctorow.medium.com/apples-cement-overshoes-329856288d13
The cognitive dissonance of “no one wants this,” and “we must make it illegal to get this” is powerful, but the motivated reasoning is more powerful still. It is impossible to get Tim Cook to understand something when his $49 million paycheck depends on him not understanding it.
The War on General Purpose Computing has been underway for decades. Computers, like the people who use them, stubbornly insist on being reality-based, and the reality of computers is that they are general purpose. Every computer is a Turing-complete, universal von Neumann machine, which means that it can run every valid program. There is no way to get a computer to be almost Turing-complete, only capable of running programs that don’t upset your shareholders’ fragile emotional state.
There is no such thing as a printer that will only run the “reject third-party ink” program. There is no such thing as a phone that will only run the “reject third-party apps” program. There are only laws, like Section 1201 of the Digital Millennium Copyright Act, that make writing and distributing those programs a felony punishable by a five-year prison sentence and a $500,000 fine (for a first offense).
That is to say, the War On General Purpose Computing is only incidentally a technical fight: it is primarily a legal fight. When Apple says, “You can’t install a third-party app store on your phone,” what they mean is, “it’s illegal to install that third-party app store.” It’s not a technical countermeasure that stands between you and technological self-determination; it’s a legal doctrine we can call “felony contempt of business model”:
https://locusmag.com/2020/09/cory-doctorow-ip/
But the mighty US government will not step in to protect a company’s business model unless it at least gestures towards the technical. To invoke DMCA 1201, a company must first add the thinnest skin of digital rights management to their product. Since 1201 makes removing DRM illegal, a company can use this molecule-thick scrim of DRM to felonize any activity that the DRM prevents.
More than 20 years ago, technologists started to tinker with ways to combine the legal and technical to tame the wild general purpose computer. Starting with Microsoft’s Palladium project, they theorized a new “Secure Computing” model for allowing companies to reach into your computer long after you had paid for it and brought it home, in order to discipline you for using it in ways that undermined its shareholders’ interest.
Secure Computing began with the idea of shipping every computer with two CPUs. The first one was the normal CPU, the one you interacted with when you booted it up, loaded your OS, and ran programs. The second CPU would be a Trusted Platform Module, a brute-simple system-on-a-chip designed to be off-limits to modification, even by its owner (that is, you).
The TPM would ship with a limited suite of simple programs it could run, each thoroughly audited for bugs, as well as secret cryptographic signing keys that you were not permitted to extract. The original plan called for some truly exotic physical security measures for that TPM, like an acid-filled cavity that would melt the chip if you tried to decap it or run it through an electron-tunneling microscope:
https://pluralistic.net/2020/12/05/trusting-trust/#thompsons-devil
This second computer represented a crack in the otherwise perfectly smooth wall of a computer’s general purposeness; and Trusted Computing proposed to hammer a piton into that crack and use it to anchor a whole superstructure that could observe — and limit — the activity of your computer.
This would start with observation: the TPM would observe every step of your computer’s boot sequence, creating cryptographic hashes of each block of code as it loaded and executed. Each stage of the boot-up could be compared to “known good” versions of those programs. If your computer did something unexpected, the TPM could halt it in its tracks, blocking the boot cycle.
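That “observe” step is just a hash chain. Here’s a minimal sketch of the TPM-style “extend” operation, assuming OpenSSL for SHA-256 — the register layout and stand-in “boot stages” are illustrative, not a real TPM API:

```c
/* A minimal sketch of TPM-style "measured boot", assuming OpenSSL's SHA-256.
 * A real TPM keeps this running digest in a Platform Configuration Register
 * (PCR); the names and stand-in "stages" below are illustrative. */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

typedef struct { unsigned char digest[SHA256_DIGEST_LENGTH]; } pcr_t;

/* extend: new_pcr = SHA256(old_pcr || SHA256(stage_code)) */
static void pcr_extend(pcr_t *pcr, const unsigned char *code, size_t len) {
    unsigned char stage_hash[SHA256_DIGEST_LENGTH];
    unsigned char buf[2 * SHA256_DIGEST_LENGTH];
    SHA256(code, len, stage_hash);                  /* hash this boot stage */
    memcpy(buf, pcr->digest, SHA256_DIGEST_LENGTH);
    memcpy(buf + SHA256_DIGEST_LENGTH, stage_hash, SHA256_DIGEST_LENGTH);
    SHA256(buf, sizeof buf, pcr->digest);           /* fold it into the chain */
}

int main(void) {
    pcr_t pcr = {{0}};   /* PCRs start zeroed at power-on */
    const char *stages[] = { "bootloader", "kernel", "init" }; /* stand-ins */
    for (int i = 0; i < 3; i++)
        pcr_extend(&pcr, (const unsigned char *)stages[i], strlen(stages[i]));
    /* the final digest can now be compared to a "known good" value */
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++) printf("%02x", pcr.digest[i]);
    putchar('\n');
    return 0;
}
```

Because each digest folds in the one before it, altering any single stage changes every digest after it — which is what makes the comparison against a “known good” version meaningful.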
What kind of unexpected things do computers do during their boot cycle? Well, if your computer is infected with malware, it might load poisoned versions of its operating system. Once your OS is poisoned, it’s very hard to detect its malicious conduct, since normal antivirus programs rely on the OS to faithfully report what your computer is doing. When the AV program asks the OS to tell it which programs are running, or which files are on the drive, it has no choice but to trust the OS’s response. When the OS is compromised, it can feed a stream of lies to users’ programs, assuring these apps that everything is fine.
That’s a very beneficial use for a TPM, but there’s a sinister flipside: the TPM can also watch your boot sequence to make sure that there aren’t beneficial modifications present in your operating system. If you modify your OS to let you do things the manufacturer wants to prevent — like loading apps from a third-party app-store — the TPM can spot this and block it.
Now, these beneficial and sinister uses can be teased apart. When the Palladium team first presented its research, my colleague Seth Schoen proposed an “owner override”: a modification of Trusted Computing that would let the computer’s owner override the TPM:
https://web.archive.org/web/20021004125515/http://vitanuova.loyalty.org/2002-07-05.html
This override would introduce its own risks, of course. A user who was tricked into overriding the TPM might expose themselves to malicious software, which could harm that user, as well as attacking other computers on the user’s network and the other users whose data were on the compromised computer’s drive.
But an override would also provide serious benefits: it would rule out the monopolistic abuse of a TPM to force users to run malicious code that the manufacturer insisted on — code that prevented the user from doing things that benefited the user, even if it harmed the manufacturer’s shareholders. For example, with owner override, Microsoft couldn’t force you to use its official MS Office programs rather than third-party compatible programs like Apple’s iWork or Google Docs or LibreOffice.
Owner override also completely changed the calculus for another, even more dangerous part of Trusted Computing: remote attestation.
Remote Attestation is a way for third parties to request reliable, cryptographically secured assurances about which operating system and programs your computer is running. In Remote Attestation, the TPM in your computer observes every stage of your computer’s boot, gathers information about all the programs you’re running, and cryptographically signs them, using the signing keys the manufacturer installed during fabrication.
You can send this “attestation” to other people on the internet. If they trust that your computer’s TPM is truly secure, then they know that you have sent them a true picture of your computer’s workings (the actual protocol is a little more complicated and involves the remote party sending you a random number to cryptographically hash with the attestation, to prevent out-of-date attestations).
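Here’s a sketch of that nonce trick, with an HMAC under a symmetric device key standing in for the TPM’s real asymmetric signature — every name, key, and value here is illustrative:

```c
/* A sketch of nonce-fresh remote attestation. An HMAC under a shared
 * device key stands in for the TPM's asymmetric signature (a real TPM
 * signs with a manufacturer-installed private key whose public half the
 * verifier trusts). All names and values are illustrative. */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* quote = MAC(device_key, nonce || boot_measurements) */
static unsigned int make_quote(const unsigned char *nonce, size_t nlen,
                               const unsigned char *pcr, size_t plen,
                               unsigned char *out) {
    static const unsigned char device_key[] = "key-installed-at-fabrication";
    unsigned char msg[256]; unsigned int outlen;
    memcpy(msg, nonce, nlen);
    memcpy(msg + nlen, pcr, plen);
    HMAC(EVP_sha256(), device_key, sizeof device_key - 1,
         msg, nlen + plen, out, &outlen);
    return outlen;
}

int main(void) {
    /* the verifier picks a fresh random nonce per request, so a recorded
       attestation from last week can't be replayed */
    unsigned char nonce[] = "verifier-chosen-random-nonce";
    unsigned char pcr[]   = "measured-boot-digest";  /* the TPM's hash chain */
    unsigned char quote[EVP_MAX_MD_SIZE], expect[EVP_MAX_MD_SIZE];

    unsigned int qlen = make_quote(nonce, sizeof nonce - 1,
                                   pcr, sizeof pcr - 1, quote);
    /* verifier recomputes with the same nonce and the "known good"
       measurements it expects, then compares */
    unsigned int elen = make_quote(nonce, sizeof nonce - 1,
                                   pcr, sizeof pcr - 1, expect);
    printf("attestation %s\n", (qlen == elen && !memcmp(quote, expect, qlen))
                                   ? "fresh and valid" : "rejected");
    return 0;
}
```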
Now, this is also potentially beneficial. If you want to make sure that your technologically unsophisticated friend is running an uncompromised computer before you transmit sensitive data to it, you can ask them for an attestation that will tell you whether they’ve been infected with malware.
But it’s also potentially very sinister. Your government can require all the computers in its borders to send a daily attestation to confirm that you’re still running the mandatory spyware. Your abusive spouse — or abusive boss — can do the same for their own disciplinary technologies. Such a tool could prevent you from connecting to a service using a VPN, and make it impossible to use Tor Browser to protect your privacy when interacting with someone who wishes you harm.
The thing is, it’s completely normal and good for computers to lie to other computers on behalf of their owners. Like, if your IoT ebike’s manufacturer goes out of business and all their bikes get bricked because they can no longer talk to their servers, you can run an app that tricks the bike into thinking that it’s still talking to the mothership:
https://nltimes.nl/2023/07/15/alternative-app-can-unlock-vanmoof-bikes-popular-amid-bankruptcy-fears
Or if you’re connecting to a webserver that tries to track you by fingerprinting you based on your computer’s RAM, screen size, fonts, etc, you can order your browser to send random data about this stuff:
https://jshelter.org/fingerprinting/
Or if you’re connecting to a site that wants to track you and nonconsensually cram ads into your eyeballs, you can run an adblocker that doesn’t show you the ads, but tells the site that it did:
https://www.eff.org/deeplinks/2019/07/adblocking-how-about-nah
Owner override leaves some of the beneficial uses of remote attestation intact. If you’re asking a friend to remotely confirm that your computer is secure, you’re not going to use an override to send them bad data about your computer’s configuration.
And owner override also sweeps all of the malicious uses of remote attestation off the board. With owner override, you can tell any lie about your computer to a webserver, a site, your boss, your abusive spouse, or your government, and they can’t spot the lie.
But owner override also eliminates some beneficial uses of remote attestation. For example, owner override rules out remote attestation as a way for strangers to play multiplayer video games while confirming that none of them are using cheat programs (like aimhack). It also means that you can’t use remote attestation to verify the configuration of a cloud server you’re renting in order to assure yourself that it’s not stealing your data or serving malware to your users.
This is a tradeoff, and it’s a tradeoff that’s similar to lots of other tradeoffs we make online, between the freedom to do something good and the freedom to do something bad. Participating anonymously, contributing to free software, distributing penetration testing tools, or providing a speech platform that’s open to the public all represent the same tradeoff.
We have lots of experience with making the tradeoff in favor of restrictions rather than freedom: powerful bad actors are happy to attach their names to their cruel speech and incitement to violence, while their victims are silenced for fear of retaliation.
When we tell security researchers they can’t disclose defects in software without the manufacturer’s permission, the manufacturers use this as a club to silence their critics, not as a way to ensure orderly updates.
When we let corporations decide who is allowed to speak, they act with a mixture of carelessness and self-interest, becoming off-the-books deputies of authoritarian regimes and corrupt, powerful elites.
Alas, we made the wrong tradeoff with Trusted Computing. For the past twenty years, Trusted Computing has been creeping into our devices, albeit in somewhat denatured form. The original vision of acid-filled secondary processors has been replaced with less exotic (and expensive) alternatives, like “secure enclaves.” With a secure enclave, the manufacturer saves on the expense of installing a whole second computer, and instead, they draw a notional rectangle around a region of your computer’s main chip and try really hard to make sure that it can only perform a very constrained set of tasks.
This gives us the worst of all worlds. When secure enclaves are compromised, we not only lose the benefit of cryptographic certainty — knowing for sure that our computers are only booting up trusted, unaltered versions of the OS — but those compromised enclaves run malicious software that is essentially impossible to detect or remove:
https://pluralistic.net/2022/07/28/descartes-was-an-optimist/#uh-oh
But while Trusted Computing has wormed its way into boot-restrictions — preventing you from jailbreaking your computer so it will run the OS and apps of your choosing — there’s been very little work on remote attestation…until now.
Web Environment Integrity is Google’s proposal to integrate remote attestation into everyday web-browsing. The idea is to allow web-servers to verify what OS, extensions, browser, and add-ons your computer is using before the server will communicate with you:
https://github.com/RupertBenWiser/Web-Environment-Integrity/blob/main/explainer.md
Even by the thin standards of the remote attestation imaginaries, there are precious few beneficial uses for this. The googlers behind the proposal have a couple of laughable suggestions, like: maybe if ad-supported sites can comprehensively refuse to serve ad-blocking browsers, they will invest the extra profits in making things you like. Or: letting websites block scriptable browsers will make it harder for bad people to auto-post fake reviews and comments, giving users more assurances about the products they buy.
But foundationally, WEI is about compelling you to disclose true facts about yourself to people who you want to keep those facts from. It is a Real Names Policy for your browser. Google wants to add a new capability to the internet: the ability of people who have the power to force you to tell them things to know for sure that you’re not lying.
The fact that the authors assume this will be beneficial is just another “falsehood programmers believe”: there is no good reason to hide the truth from other people. Squint a little and we’re back to McNealy’s “Privacy is dead, get over it.” Or Schmidt’s “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”
And like those men, the programmers behind this harebrained scheme don’t imagine that it will ever apply to them. As Chris Palmer — who worked on Chromium — points out, this is not compatible with normal developer tools or debuggers, which are “incalculably valuable and not really negotiable”:
https://groups.google.com/a/chromium.org/g/blink-dev/c/Ux5h_kGO22g/m/5Lt5cnkLCwAJ
This proposal is still obscure in the mainstream, but in tech circles, it has precipitated a flood of righteous fury:
https://arstechnica.com/gadgets/2023/07/googles-web-integrity-api-sounds-like-drm-for-the-web/
As I wrote last week, giving manufacturers the power to decide how your computer is configured, overriding your own choices, is a bad tradeoff — the worst tradeoff, a greased slide into terminal enshittification:
https://pluralistic.net/2023/07/24/rent-to-pwn/#kitt-is-a-demon
This is how you get Unauthorized Bread:
https://arstechnica.com/gaming/2020/01/unauthorized-bread-a-near-future-tale-of-refugees-and-sinister-iot-appliances/
All of which leads to the question: what now? What should be done about WEI and remote attestation?
Let me start by saying: I don’t think it should be illegal for programmers to design and release these tools. Code is speech, and we can’t understand how this stuff works if we can’t study it.
But programmers shouldn’t deploy it in production code, in the same way that programmers should be allowed to make pen-testing tools, but shouldn’t use them to attack production systems and harm their users. Programmers who do this should be criticized and excluded from the society of their ethical, user-respecting peers.
Corporations that use remote attestation should face legal restrictions: privacy law should prevent the use of remote attestation to compel the production of true facts about users or the exclusion of users who refuse to produce those facts. Unfair competition law should prevent companies from using remote attestation to block interoperability or tie their products to related products and services.
Finally, we must withdraw the laws that prevent users and programmers from overriding TPMs, secure enclaves and remote attestations. You should have the right to study and modify your computer to produce false attestations, or run any code of your choosing. Felony contempt of business model is an outrage. We should alter or strike down DMCA 1201, the Computer Fraud and Abuse Act, and other laws (like contract law’s “tortious interference”) that stand between you and “sole and despotic dominion” over your own computer. All of that applies not just to users who want to reconfigure their own computers, but also toolsmiths who want to help them do so, by offering information, code, products or services to jailbreak and alter your devices.
Tech giants will squeal at this, insisting that they serve your interests when they prevent rivals from opening up their products. After all, those rivals might be bad guys who want to hurt you. That’s 100% true. What is likewise true is that no tech giant will defend you from its own bad impulses, and if you can’t alter your device, you are powerless to stop them:
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
Companies should be stopped from harming you, but the right place to decide whether a business is doing something nefarious isn’t in the boardroom of that company’s chief competitor: it’s in the halls of democratically accountable governments:
https://www.eff.org/wp/interoperability-and-privacy
So how do we get there? Well, that’s another matter. In my next book, The Internet Con: How to Seize the Means of Computation (Verso Books, Sept 5), I lay out a detailed program, describing which policies will disenshittify the internet, and how to get those policies enacted:
https://www.versobooks.com/products/3035-the-internet-con
Predictably, there are challenges getting this kind of book out into the world via our concentrated tech sector. Amazon refuses to carry the audio edition on its monopoly audiobook platform, Audible, unless it is locked to Amazon forever with mandatory DRM. That’s left me self-financing my own DRM-free audio edition, which is currently available for pre-order via this Kickstarter:
http://seizethemeansofcomputation.org
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/08/02/self-incrimination/#wei-bai-bai
[Image ID: An anatomical drawing of a flayed human head; it has been altered to give it a wide-stretched mouth revealing a gadget nestled in the back of the figure's throat, connected by a probe whose two coiled wires stretch to an old fashioned electronic box. The head's eyes have been replaced by the red, menacing eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey.' Behind the head is a code waterfall effect as seen in the credits of the Wachowskis' 'The Matrix.']
Image:
Cryteria (modified)
https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
Shamir Secret Sharing
It’s 3am. Paul, the head of PayPal database administration, carefully enters his elaborate passphrase at a keyboard in a darkened cubicle of 1840 Embarcadero Road in East Palo Alto, for the fifth time. He hits Return. The green-on-black console window instantly displays one line of text: “Sorry, one or more wrong passphrases. Can’t reconstruct the key. Goodbye.”
There is nerd pandemonium all around us. James, our recently promoted VP of Engineering, just climbed the desk at a nearby cubicle, screaming: “Guys, if we can’t get this key the right way, we gotta start brute-forcing it ASAP!” It’s gallows humor – he knows very well that brute-forcing such a key will take millions of years, and it’s already 6am on the East Coast – the first of many “Why is PayPal down today?” articles is undoubtedly going to hit CNET shortly. Our single-story cubicle-maze office is buzzing with the nervous activity of PayPalians who know they can’t help, but want to do something anyway. I poke my head up above the cubicle wall to catch a glimpse of someone trying to stay inside a giant, otherwise empty recycling bin on wheels while a couple of Senior Software Engineers attempt to accelerate the bin up to dangerous speeds in the front lobby. I lower my head and try to stay focused. “Let’s try it again, this time with three different people” is the best idea I can come up with, even though I am quite sure it will not work.
It doesn’t.
The key in question decrypts PayPal’s master payment credential table – also known as the giant store of credit card and bank account numbers. Without access to payment credentials, PayPal doesn’t really have a business per se, seeing how we are supposed to facilitate payments, and that’s really hard to do if we no longer have access to the 100+ million credit card numbers our users added over the last year of insane growth.
This is the story of a catastrophic software bug I briefly introduced into the PayPal codebase that almost cost us the company (or so it seemed, in the moment). I’ve told this story a handful of times, always swearing the listeners to secrecy, and surprisingly it does not appear to have ever been written down before. 20+ years after the incident, it now seems instructive and a little funny, rather than merely extremely embarrassing.
Before we get back to that fateful night, we have to go back another decade. In the summer of 1991, my family and I moved to Chicago from Kyiv, Ukraine. While we had just a few hundred dollars between the five of us, we did have one secret advantage: science fiction fans.
My dad was a highly active member of Zoryaniy Shlyah – Kyiv’s possibly first (and possibly only, at the time) sci-fi fan club – the name means “Star Trek” in Ukrainian, unsurprisingly. He translated some Stanislaw Lem (of Solaris and Futurological Congress fame) from Polish to Russian in the early 80s and was generally considered a coryphaeus at ZSh.
While the USSR was more or less informationally isolated behind the digital Iron Curtain until the late ‘80s, by 1990 or so, things like FidoNet had wriggled their way into the Soviet computing world, and some members of ZSh were now exchanging electronic mail with sci-fi fans of the free world.
The vaguely exotic news of two Soviet refugee sci-fi fans arriving in Chicago was transmitted to the local fandom before we had even boarded the PanAm flight that took us across the Atlantic [1]. My dad (and I, by extension) was soon adopted by some kind Chicago science fiction geeks, a few of whom became close friends over the years, though that’s a story for another time.
A year or so after the move to Chicago, our new sci-fi friends invited my dad to a birthday party for a rising star of the local fandom, one Bruce Schneier. We certainly did not know Bruce or really anyone at the party, but it promised good food, friendly people, and probably filk. My role was to translate, as my dad spoke limited English at the time.
I had fallen desperately in love with secret codes and cryptography about a year before we left Ukraine. Walking into Bruce’s library during the house tour (this was a couple years before Applied Cryptography was published and he must have been deep in research) felt like walking into Narnia.
I promptly abandoned my dad to fend for himself as far as small talk and canapés were concerned, and proceeded to make a complete ass out of myself by brazenly asking the host for a few sheets of paper and a pencil. Having been obliged, I pulled a half dozen cryptography books from the shelves and went to work trying to copy down some answers to a few long-held questions on the library floor. After about two hours of scribbling alone like a man possessed, I ran out of paper and decided to temporarily rejoin the party.
On the living room table, Bruce had stacks of copies of his fanzine Ramblings. Thinking I could use the blank sides of the pages to take more notes, I grabbed a printout and was about to quietly return to copying the original S-box values for DES when my dad spotted me from across the room and demanded I help him socialize. The party wrapped soon, and our friends drove us home.
The printout I grabbed was not a Ramblings issue. It was a short essay by Bruce titled Sharing Secrets Among Friends, essentially a humorous explanation of Shamir Secret Sharing.
Say you want to make sure that something really, really important and secret (a nuclear weapon launch code, a database encryption key, etc.) cannot be known or used by any single (friendly) actor, but becomes available if at least n people from a group of m choose to act together. Think two on-duty officers (from a cadre of, say, 5) turning keys together to get ready for a nuke launch.
The idea (proposed by Adi Shamir – the S of RSA! – in 1979) is as simple as it is beautiful.
Let’s call the secret we are trying to split among m people K.
First, create a totally random polynomial that looks like: y(x) = C0 * x^(n-1) + C1 * x^(n-2) + C2 * x^(n-3) + … + K. “Create” here just means generate the random coefficients C. Now, for every person in your trusted group of m, evaluate the polynomial at a distinct, randomly chosen nonzero Xi and hand them their corresponding point (Xi, Yi).
If we have any n of these points together, we can use the Lagrange interpolating polynomial to reconstruct the coefficients – and evaluate the original polynomial at x=0, which conveniently gives us y(0) = K, the secret. Beautiful. I still had the printout with me, years later, in Palo Alto.
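For concreteness, here’s a minimal sketch of the scheme in C: a 3-of-8 split over a toy 31-bit prime field, with rand() standing in for a proper CSPRNG. A real implementation (including the one in this story) would use a large field and real randomness; this is illustrative, not the PayPal code:

```c
/* Minimal Shamir Secret Sharing sketch over the prime field GF(p).
 * Illustrative only: real code would use a large field and a CSPRNG. */
#include <stdio.h>
#include <stdlib.h>

#define P 2147483647ULL /* 2^31 - 1, a Mersenne prime */
#define N 3             /* threshold */
#define M 8             /* total shards */

typedef unsigned long long u64;

static u64 powmod(u64 b, u64 e, u64 p) {
    u64 r = 1; b %= p;
    while (e) { if (e & 1) r = r * b % p; b = b * b % p; e >>= 1; }
    return r;
}
/* modular inverse via Fermat's little theorem (p is prime) */
static u64 inv(u64 a, u64 p) { return powmod(a, p - 2, p); }

/* evaluate y = C[0]*x^(N-1) + ... + C[N-2]*x + K  (mod p), Horner-style */
static u64 eval(const u64 *c, u64 k, u64 x) {
    u64 y = 0;
    for (int i = 0; i < N - 1; i++) y = (y + c[i]) * x % P;
    return (y + k) % P;
}

/* Lagrange interpolation at x = 0 recovers K from any N points */
static u64 reconstruct(const u64 *xs, const u64 *ys) {
    u64 k = 0;
    for (int i = 0; i < N; i++) {
        u64 num = 1, den = 1;
        for (int j = 0; j < N; j++) {
            if (j == i) continue;
            num = num * (P - xs[j]) % P;              /* (0 - x_j) mod p */
            den = den * ((xs[i] + P - xs[j]) % P) % P; /* (x_i - x_j) mod p */
        }
        k = (k + ys[i] * num % P * inv(den, P)) % P;
    }
    return k;
}

int main(void) {
    u64 K = 123456789;  /* the secret */
    u64 c[N - 1], xs[M], ys[M];
    for (int i = 0; i < N - 1; i++) c[i] = rand() % P; /* random coefficients */
    for (int i = 0; i < M; i++) {                      /* one shard per person */
        xs[i] = i + 1;  /* distinct, nonzero x values */
        ys[i] = eval(c, K, xs[i]);
    }
    /* any 3 shards, e.g. #2, #5 and #7, recover the secret */
    u64 px[N] = { xs[1], xs[4], xs[6] }, py[N] = { ys[1], ys[4], ys[6] };
    printf("secret=%llu recovered=%llu\n", K, reconstruct(px, py));
    return 0;
}
```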
It should come as no surprise that during my time as CTO, PayPal engineering had an absolute obsession with security. No number of firewalls was too many, no multi-factor authentication scheme too onerous, etc. Anything that was worth anything at all was encrypted at rest.
To decrypt, a service would get the needed data from its database table and transmit it to a special service named cryptoserv (original Sun hardware running Solaris, sitting on its own, especially tightly locked-down network), which alone would perform the decryption and send back the result.
Decryption request rate was monitored externally and on cryptoserv, and if there were too many requests, the whole thing was to shut down and purge any sensitive data and keys from its memory until manually restarted.
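A sketch of that panic switch — the threshold, the one-second window, and the names here are illustrative, not PayPal’s actual numbers or code:

```c
/* Sketch of the panic-switch rate limiter described above: count
 * decryption requests per one-second window and, past a threshold,
 * zero the in-memory keys and halt until a human restarts the service.
 * All names and the threshold are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define MAX_REQS_PER_SEC 1000            /* illustrative threshold */

static unsigned char master_keys[4096];  /* stand-in for loaded key material */

static void handle_decrypt_request(void) {
    static time_t window_start = 0;
    static unsigned count = 0;
    time_t now = time(NULL);

    if (now != window_start) { window_start = now; count = 0; }
    if (++count > MAX_REQS_PER_SEC) {
        /* too hot: purge secrets and stop serving (real code would use
           explicit_bzero, which the compiler can't optimize away) */
        memset(master_keys, 0, sizeof master_keys);
        fprintf(stderr, "request rate exceeded; keys purged, halting\n");
        exit(1);
    }
    /* ...perform the decryption and send back the result... */
}

int main(void) {
    for (int i = 0; i < 2000; i++) handle_decrypt_request(); /* trips the switch */
    return 0;
}
```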
It was this manual restart that gnawed at me. At launch, a bunch of configuration files containing various critical decryption keys were read (decrypted by another key derived from one manually-entered passphrase) and loaded into memory to perform future cryptographic services.
Four or five of us on the engineering team knew the passphrase and could restart cryptoserv if it crashed or simply had to have an upgrade. What if someone performed a little old-fashioned rubber-hose cryptanalysis and literally beat the passphrase out of one of us? The attacker could theoretically get access to these all-important master keys. Then stealing the encrypted-at-rest database of all our users’ secrets could prove useful – they could decrypt them in the comfort of their underground supervillain lair.
I needed to eliminate this threat.
Shamir Secret Sharing was the obvious choice – beautiful, simple, perfect (you can in fact prove that if done right, it offers perfect secrecy.) I decided on a 3-of-8 scheme and implemented it in pure POSIX C for portability over a few days, and tested it for several weeks on my Linux desktop with other engineers.
Step 1: generate the polynomial coefficients for 8 shard-holders.
Step 2: compute the key shards (x0, y0) through (x7, y7).
Step 3: get each shard-holder to enter a long, secure passphrase to encrypt the shard.
Step 4: write out the 8 shard files, encrypted with their respective passphrases.
And to reconstruct:
Step 1: pick any 3 shard files.
Step 2: ask each of the respective owners to enter their passphrases.
Step 3: decrypt the shard files.
Step 4: reconstruct the polynomial, evaluate it for x=0 to get the key.
Step 5: launch cryptoserv with the key.
One design detail here is that each shard file also stored a message authentication code (a keyed hash) of its passphrase to make sure we could identify when someone mistyped their passphrase. These tests ran hundreds and hundreds of times, on both Linux and Solaris, to make sure I did not screw up some big/little-endianness issue, etc. It all worked perfectly.
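That per-shard check might look something like this — a sketch assuming HMAC-SHA256 as the keyed hash, with an illustrative shard layout and MAC key (not the actual on-disk format):

```c
/* Sketch of the per-shard passphrase check described above, assuming
 * HMAC-SHA256 as the keyed hash. The shard struct and fixed MAC key are
 * illustrative, not the real on-disk format. */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

typedef struct {
    unsigned char x, y[32];   /* the Shamir point (encrypted, in reality) */
    unsigned char mac[32];    /* keyed hash of the owner's passphrase */
} shard_t;

static void passphrase_mac(const char *pass, unsigned char out[32]) {
    static const unsigned char mac_key[] = "shard-mac-key"; /* illustrative */
    unsigned int len;
    HMAC(EVP_sha256(), mac_key, sizeof mac_key - 1,
         (const unsigned char *)pass, strlen(pass), out, &len);
}

/* verify the MAC before even attempting decryption/reconstruction — the
   check that later turned "one or more wrong passphrases" into a
   per-shard diagnosis */
static int shard_passphrase_ok(const shard_t *s, const char *pass) {
    unsigned char mac[32];
    passphrase_mac(pass, mac);
    return memcmp(mac, s->mac, sizeof mac) == 0;
}

int main(void) {
    shard_t s = {0};
    passphrase_mac("a$$word", s.mac);  /* stored at split time */
    printf("right passphrase: %s\n", shard_passphrase_ok(&s, "a$$word") ? "ok" : "bad");
    printf("wrong passphrase: %s\n", shard_passphrase_ok(&s, "hunter2") ? "ok" : "bad");
    return 0;
}
```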
A month or so later, the night of the key splitting party was upon us. We were finally going to close out the last vulnerability and be secure. Feeling as if I was about to turn my fellow shard-holders into cymeks, I gathered them around my desktop as PayPal’s front page began sporting the “We are down for maintenance and will be back soon” message around midnight.
The night before, I solemnly generated the new master key and securely copied it to cryptoserv. Now, while “Push It” by Salt-n-Pepa blared from someone’s desktop speakers, the automated deployment script copied shard files to their destination.
While each of us took turns carefully entering our elaborate passphrases at a specially selected keyboard, Paul shut down the main database and decrypted the payment credentials table, then ran the script to re-encrypt with the new key. Some minutes later, the database was running smoothly again, with the newly encrypted table, without incident.
All that was left was to restore the master key from its shards and launch the new, even more secure cryptographic service.
The three of us entered our passphrases… only to be met with the error message I hadn’t seen in weeks: “Sorry, one or more wrong passphrases. Can’t reconstruct the key. Goodbye.” Surely one of us screwed up typing; no big deal, we’ll do it again. No dice – again and again, even after we tried numerous combinations of the three people necessary to decrypt.
Minutes passed, confusion grew, tension rose rapidly.
There was nothing to do, except to hit rewind – to grab the master key from the file still sitting on cryptoserv, split it again, generate new shards, choose passphrases, and get it done. Not a great feeling to have your first launch go wrong, but not a huge deal either. It will all be OK in a minute or two.
A cursory look at the master key file date told me that no, it wouldn’t be OK at all. The file sitting on cryptoserv wasn’t from last night; it was created just a few minutes ago. During the Salt-n-Pepa-themed push from staging, we had overwritten the master key file with the staging version. Whatever key that was, it wasn’t the one I had generated the day before: only one copy of that key existed, the one I copied to cryptoserv from my computer the night before. Zero copies existed now. Not only that, the push script appeared to have also wiped out the backup of the old key, so the database backups we had encrypted with the old key were likely useless.
Sitrep: we have 8 shard files that we apparently cannot use to restore the master key and zero master key backups. The database is running but its secret data cannot be accessed.
I will leave it to your imagination to conjure up what was going through my head that night as I stared into the black screen willing the shards to work. After half a decade of trying to make something of myself (instead of just going to work for Microsoft or IBM after graduation) I had just destroyed my first successful startup in the most spectacular fashion.
Still, the idea of “what if we all just continuously screwed up our passphrases” swirled around my brain. It was an easy check to perform, thanks to the included MACs. I added a single printf() debug statement into the shard reconstruction code, and instead of printing the summary “one or more…” error, the code now showed whether the passphrase entered matched the authentication code stored in the shard file.
I compiled the new code directly on cryptoserv, in direct contravention of all reasonable security practices – what did I have to lose? Entering my own passphrase, I promptly got the “bad passphrase” error I had just added to the code. Well, that’s just great – I knew my passphrase was correct; I had it written down on a post-it note I had planned to rip up hours ago.
Another person, same error. Finally, the last person, JK, entered his passphrase. No error. The key still did not reconstruct correctly – I got the “Goodbye” – but something worked. I turned to the engineer and said, “What did you just type in that worked?”
After a second of embarrassed mumbling, he admitted to choosing “a$$word” as his passphrase. The gall! I had asked everyone entrusted with the grave task of relaunching cryptoserv to pick really hard-to-guess passphrases, and this guy…?! Still, this was something – it worked. But why?!
I sprinted around the half-lit office grabbing the rest of the shard-holders demanding they tell me their passphrases. Everyone else had picked much lengthier passages of text and numbers. I manually tested each and none decrypted correctly. Except for the a$$word. What was it…
A lightning bolt hit me and I sprinted back to my own cubicle in the far corner, unlocked the screen and typed in “man getpass” on the command line, while logging into cryptoserv in another window and doing exactly the same thing there. I saw exactly what I needed to see.
Today, should you try to read the programmer’s manual (AKA the man page) for getpass, you will find it has long been declared obsolete and replaced with more intelligent alternatives in nearly all flavors of modern Unix.
But back then, if you wanted to collect some information from the keyboard without printing what is being typed in onto the screen and remain POSIX-compliant, getpass did the trick. Other than a few standard file manipulation system calls, getpass was the only operating system service call I used, to ensure clean portability between Linux and Solaris.
Except it wasn’t completely clean.
Plain as day, there it was: the manual pages were identical, except Solaris had a “special feature”: any passphrase longer than 8 characters was silently truncated to 8. (Who needs long passwords, amiright?!)
I screamed like a wounded animal. We had generated the key on my Linux desktop and entered our novel-length passphrases right there. Attempting to restore the key on a Solaris machine, where those same passphrases were being clipped down to 8 characters, would never work. Except, of course, for a$$word. That one was fine.
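The modern fix is to skip getpass() entirely and read from the terminal yourself with echo turned off and no fixed-length buffer in sight — a sketch, not the code we actually wrote that night:

```c
/* The portability trap: POSIX getpass() may truncate at PASS_MAX (8 on
 * old Solaris). A sketch of a replacement that reads with echo disabled
 * and an explicit, generous buffer; illustrative, not the original code. */
#include <stdio.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>

static char *read_passphrase(const char *prompt, char *buf, size_t buflen) {
    struct termios old, noecho;
    fprintf(stderr, "%s", prompt);
    tcgetattr(STDIN_FILENO, &old);
    noecho = old;
    noecho.c_lflag &= ~ECHO;                   /* don't echo keystrokes */
    tcsetattr(STDIN_FILENO, TCSAFLUSH, &noecho);
    char *r = fgets(buf, (int)buflen, stdin);
    tcsetattr(STDIN_FILENO, TCSAFLUSH, &old);  /* restore the terminal */
    fprintf(stderr, "\n");
    if (r) buf[strcspn(buf, "\n")] = '\0';     /* strip trailing newline */
    return r;
}

int main(void) {
    char pass[1024];
    if (read_passphrase("Passphrase: ", pass, sizeof pass))
        printf("got %zu characters, untruncated\n", strlen(pass));
    return 0;
}
```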
The rest was an exercise in high-speed coding and some entirely off-protocol file moving. We reconstructed the master key on my machine (all of our passphrases worked fine), copied the file to the Solaris-running cryptoserv, re-split it there (with very short passphrases), reconstructed it successfully, and PayPal was up and running again like nothing ever happened.
By the time our unsuspecting colleagues rolled back into the office I was starting to doze on the floor of my cubicle and that was that. When someone asked me later that day why we took so long to bring the site back up, I’d simply respond with “eh, shoulda RTFM.”
RTFM indeed.
P.S. A few hours later, John, our General Counsel, stopped by my cubicle to ask me something. The day before, I had apparently given him a sealed envelope and asked him to store it in his safe for 24 hours, without explaining myself. He wanted to know what to do with it now that the 24 hours had passed.
Ha. I had forgotten all about it, but in a bout of “what if it doesn’t work” paranoia, I had printed out the base64-encoded master key when we generated it the night before, stuffed it into an envelope, and given it to John for safekeeping. We shredded it together without opening it, and laughed about what would never actually have been a company-ending event.
P.P.S. If you are thinking of all the ways this whole SSS design is horribly insecure (it had some real flaws for sure) and plan to poke around PayPal to see if it might still be there, don’t. While it served us well for a few years, this was the very first thing eBay required us to turn off after the acquisition. Pretty sure it’s back to a single passphrase now.
Notes:
1: a member of Chicagoland sci-fi fan community let me know that the original news of our move to the US was delivered to them via a posted letter, snail mail, not FidoNet email!