Fifty per cent of web users are running ad blockers. Zero per cent of app users are running ad blockers, because adding a blocker to an app requires that you first remove its encryption, and that’s a felony. (Jay Freeman, the American businessman and engineer, calls this “felony contempt of business model”.)

So when someone in a boardroom says, “Let’s make our ads 20 per cent more obnoxious and get a 2 per cent revenue increase,” no one objects that this might prompt users to google, “How do I block ads?” After all, the answer is, you can’t. Indeed, it’s more likely that someone in that boardroom will say, “Let’s make our ads 100 per cent more obnoxious and get a 10 per cent revenue increase.” (This is why every company wants you to install an app instead of using its website.)

There’s no reason that gig workers who are facing algorithmic wage discrimination couldn’t install a counter-app that co-ordinated among all the Uber drivers to reject any job that fell below a certain pay threshold. No reason except felony contempt of business model: the threat that the toolsmiths who built that counter-app would go broke or land in prison for violating DMCA 1201, the Computer Fraud and Abuse Act, trademark, copyright, patent, contract, trade secrecy, nondisclosure and noncompete or, in other words, “IP law”.

IP isn’t just short for intellectual property. It’s a euphemism for “a law that lets me reach beyond the walls of my company and control the conduct of my critics, competitors and customers”. And “app” is just a euphemism for “a web page wrapped in enough IP to make it a felony to mod it, to protect the labour, consumer and privacy rights of its user”.
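The counter-app idea above can be sketched in a few lines. This is a hypothetical illustration only: the pay model, names and threshold are assumptions, not any real gig-platform API. The point is that the logic is trivial; only legal risk, not technical difficulty, keeps it from existing.

```python
# A hypothetical sketch of the "counter-app": every participating driver runs
# the same rule, so jobs below a collectively agreed floor find no takers.
MINIMUM_RATE = 2.00  # assumed collective floor, in dollars per mile

def should_accept(pay: float, miles: float, floor: float = MINIMUM_RATE) -> bool:
    """Reject any job whose effective per-mile rate falls below the shared floor."""
    if miles <= 0:
        return False
    return pay / miles >= floor

# Because every driver applies the identical rule, a given offer is either
# accepted by all of them or declined by all of them.
offers = [(12.00, 4.0), (5.00, 4.0)]  # (pay in dollars, distance in miles)
print([should_accept(pay, miles) for pay, miles in offers])  # [True, False]
```

Co-ordination here is nothing more than agreeing on one number; the platform cannot undercut the floor without every driver's app saying no at once.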
Popular large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard are energy intensive, requiring massive server farms to supply enough computing power to train the programs. Cooling those same data centers also makes the AI chatbots incredibly thirsty. New research suggests training for GPT-3 alone consumed 185,000 gallons (700,000 liters) of water. An average user’s conversational exchange with ChatGPT basically amounts to dumping a large bottle of fresh water on the ground, according to the new study. Given the chatbot’s unprecedented popularity, researchers fear all those spilled bottles could take a troubling toll on water supplies, especially amid historic droughts and looming environmental uncertainty in the US.
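The two training-water figures quoted above are the same quantity in different units, which a one-line conversion confirms (the litres-per-gallon constant is the standard US definition, not a number from the article):

```python
# Sanity check: the article quotes GPT-3's training water use as both
# 185,000 US gallons and 700,000 litres. Converting confirms they agree.
LITRES_PER_US_GALLON = 3.78541

gallons = 185_000
litres = gallons * LITRES_PER_US_GALLON
print(f"{litres:,.0f} litres")  # 700,301 litres, i.e. roughly the quoted 700,000 L
```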
[...]
Water consumption issues aren’t limited to OpenAI or AI models. In 2019, Google requested more than 2.3 billion gallons of water for data centers in just three states. The company currently has 14 data centers spread across North America, which it uses to power Google Search, its suite of workplace products and, more recently, its LaMDA and Bard large language models. LaMDA alone, according to the recent research paper, could require millions of liters of water to train, more than GPT-3, because several of Google’s thirsty data centers are housed in hot states like Texas. The researchers issued a caveat with this estimate, though, calling it an “approximate reference point.”
Aside from water, new LLMs similarly require a staggering amount of electricity. A Stanford AI report released last week looked at differences in energy consumption among four prominent AI models, estimating that OpenAI’s GPT-3 released 502 metric tons of carbon during its training. Overall, the energy needed to train GPT-3 could power an average American’s home for hundreds of years.
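The 502-tonne carbon figure can be turned into a rough energy estimate. The grid-intensity and household figures below are outside assumptions (roughly 0.4 kg CO2 per kWh for the US grid, and about 10,600 kWh per year for an average American home), not numbers from the article, so treat the result as an order-of-magnitude sketch:

```python
# Back-of-envelope: convert the quoted 502 metric tons of CO2 into an implied
# training-energy figure, then into "home-years". The grid intensity and
# household consumption values are assumptions, not from the article.
CO2_KG = 502 * 1_000            # 502 metric tons, in kilograms
GRID_KG_PER_KWH = 0.4           # assumed average US grid carbon intensity
HOME_KWH_PER_YEAR = 10_600      # assumed average US household electricity use

implied_kwh = CO2_KG / GRID_KG_PER_KWH        # implied training energy
home_years = implied_kwh / HOME_KWH_PER_YEAR  # years one home could run on it
print(round(implied_kwh), round(home_years))  # 1255000 118
```

Under these assumptions the training run works out to upward of a century of household electricity, which is the scale of claim the report is making.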