#China generative AI regulation
michellesanches · 1 month
Latest AI Regulatory Developments:
As artificial intelligence (AI) continues to transform industries, governments worldwide are responding with evolving regulatory frameworks. These regulatory advancements are shaping how businesses integrate and leverage AI technologies. Understanding these changes and preparing for them is crucial to remain compliant and competitive. Recent Developments in AI Regulation: United Kingdom: The…
zvaigzdelasas · 1 year
12 Dec 22
reasonsforhope · 8 months
"Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?
Americans, by and large, don’t want it.
That’s the upshot of a new poll shared exclusively with Vox. The poll, commissioned by the think tank AI Policy Institute and conducted by YouGov, surveyed 1,118 Americans from across the age, gender, race, and political spectrums in early September. It reveals that 63 percent of voters say regulation should aim to actively prevent AI superintelligence.
Companies like OpenAI have made it clear that superintelligent AI — a system that is smarter than humans — is exactly what they’re trying to build. They call it artificial general intelligence (AGI) and they take it for granted that AGI should exist. “Our mission,” OpenAI’s website says, “is to ensure that artificial general intelligence benefits all of humanity.”
But there’s a deeply weird and seldom remarked upon fact here: It’s not at all obvious that we should want to create AGI — which, as OpenAI CEO Sam Altman will be the first to tell you, comes with major risks, including the risk that all of humanity gets wiped out. And yet a handful of CEOs have decided, on behalf of everyone else, that AGI should exist.
Now, the only thing that gets discussed in public debate is how to control a hypothetical superhuman intelligence — not whether we actually want it. A premise has been ceded here that arguably never should have been...
Building AGI is a deeply political move. Why aren’t we treating it that way?
...Americans have learned a thing or two from the past decade in tech, and especially from the disastrous consequences of social media. They increasingly distrust tech executives and the idea that tech progress is positive by default. And they’re questioning whether the potential benefits of AGI justify the potential costs of developing it. After all, CEOs like Altman readily proclaim that AGI may well usher in mass unemployment, break the economic system, and change the entire world order. That’s if it doesn’t render us all extinct.
In the new AI Policy Institute/YouGov poll, the “better us [to have and invent it] than China” argument was presented five different ways in five different questions. Strikingly, each time, the majority of respondents rejected the argument. For example, 67 percent of voters said we should restrict how powerful AI models can become, even though that risks making American companies fall behind China. Only 14 percent disagreed.
Naturally, with any poll about a technology that doesn’t yet exist, there’s a bit of a challenge in interpreting the responses. But what a strong majority of the American public seems to be saying here is: just because we’re worried about a foreign power getting ahead, doesn’t mean that it makes sense to unleash upon ourselves a technology we think will severely harm us.
AGI, it turns out, is just not a popular idea in America.
“As we’re asking these poll questions and getting such lopsided results, it’s honestly a little bit surprising to me to see how lopsided it is,” Daniel Colson, the executive director of the AI Policy Institute, told me. “There’s actually quite a large disconnect between a lot of the elite discourse or discourse in the labs and what the American public wants.”
-via Vox, September 19, 2023
mariacallous · 5 months
The European Union today agreed on the details of the AI Act, a far-reaching set of rules for the people building and using artificial intelligence. It’s a milestone law that, lawmakers hope, will create a blueprint for the rest of the world.
After months of debate about how to regulate companies like OpenAI, lawmakers from the EU’s three branches of government—the Parliament, Council, and Commission—spent more than 36 hours in total thrashing out the new legislation between Wednesday afternoon and Friday evening. Lawmakers were under pressure to strike a deal before the EU parliament election campaign starts in the new year.
“The EU AI Act is a global first,” said European Commission president Ursula von der Leyen on X. “[It is] a unique legal framework for the development of AI you can trust. And for the safety and fundamental rights of people and businesses.”
The law itself is not a world-first; China’s new rules for generative AI went into effect in August. But the EU AI Act is the most sweeping rulebook of its kind for the technology. It includes bans on biometric systems that identify people using sensitive characteristics such as sexual orientation and race, and the indiscriminate scraping of faces from the internet. Lawmakers also agreed that law enforcement should be able to use biometric identification systems in public spaces for certain crimes.
New transparency requirements for all general purpose AI models, like OpenAI's GPT-4, which powers ChatGPT, and stronger rules for “very powerful” models were also included. “The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union,” says Dragos Tudorache, member of the European Parliament and one of two co-rapporteurs leading the negotiations.
Companies that don’t comply with the rules can be fined up to 7 percent of their global turnover. The bans on prohibited AI will take effect in six months, the transparency requirements in 12 months, and the full set of rules in around two years.
Measures designed to make it easier to protect copyright holders from generative AI and require general purpose AI systems to be more transparent about their energy use were also included.
“Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter,” said European Commissioner Thierry Breton in a press conference on Friday night.
Over the two years lawmakers have been negotiating the rules agreed today, AI technology and the leading concerns about it have dramatically changed. When the AI Act was conceived in April 2021, policymakers were worried about opaque algorithms deciding who would get a job, be granted refugee status or receive social benefits. By 2022, there were examples that AI was actively harming people. In a Dutch scandal, decisions made by algorithms were linked to families being forcibly separated from their children, while students studying remotely alleged that AI systems discriminated against them based on the color of their skin.
Then, in November 2022, OpenAI released ChatGPT, dramatically shifting the debate. The leap in AI’s flexibility and popularity triggered alarm in some AI experts, who drew hyperbolic comparisons between AI and nuclear weapons.
That discussion manifested in the AI Act negotiations in Brussels in the form of a debate about whether makers of so-called foundation models such as the one behind ChatGPT, like OpenAI and Google, should be considered as the root of potential problems and regulated accordingly—or whether new rules should instead focus on companies using those foundational models to build new AI-powered applications, such as chatbots or image generators.
Representatives of Europe’s generative AI industry expressed caution about regulating foundation models, saying it could hamper innovation among the bloc’s AI startups. “We cannot regulate an engine devoid of usage,” Arthur Mensch, CEO of French AI company Mistral, said last month. “We don’t regulate the C [programming] language because one can use it to develop malware. Instead, we ban malware.” Mistral’s foundation model 7B would be exempt under the rules agreed today because the company is still in the research and development phase, Carme Artigas, Spain's Secretary of State for Digitalization and Artificial Intelligence, said in the press conference.
The major point of disagreement during the final discussions that ran late into the night twice this week was whether law enforcement should be allowed to use facial recognition or other types of biometrics to identify people either in real time or retrospectively. “Both destroy anonymity in public spaces,” says Daniel Leufer, a senior policy analyst at digital rights group Access Now. Real-time biometric identification can identify a person standing in a train station right now using live security camera feeds, he explains, while “post” or retrospective biometric identification can figure out that the same person also visited the train station, a bank, and a supermarket yesterday, using previously banked images or video.
Leufer said he was disappointed by the “loopholes” for law enforcement that appeared to have been built into the version of the act finalized today.
European regulators’ slow response to the rise of social media loomed over the discussions. Almost 20 years elapsed between Facebook's launch and the Digital Services Act—the EU rulebook designed to protect human rights online—taking effect this year. In that time, the bloc was forced to deal with the problems created by US platforms, while being unable to foster its smaller European challengers. “Maybe we could have prevented [the problems] better by earlier regulation,” Brando Benifei, one of two lead negotiators for the European Parliament, told WIRED in July. AI technology is moving fast. But it will still be many years until it’s possible to say whether the AI Act is more successful in containing the downsides of Silicon Valley’s latest export.
These claims of an extinction-level threat come from the very same groups creating the technology, and their warning cries about future dangers are drowning out stories of the harms already occurring. There is an abundance of research documenting how AI systems are being used to steal art, control workers, expand private surveillance, and seek greater profits by replacing workforces with algorithms and underpaid workers in the Global South.
The sleight-of-hand trick shifting the debate to existential threats is a marketing strategy, as Los Angeles Times technology columnist Brian Merchant has pointed out. This is an attempt to generate interest in certain products, dictate the terms of regulation, and protect incumbents as they develop more products or further integrate AI into existing ones. After all, if AI is really so dangerous, then why did Altman threaten to pull OpenAI out of the European Union if it moved ahead with regulation? And why, in the same breath, did Altman propose a system that just so happens to protect incumbents: Only tech firms with enough resources to invest in AI safety should be allowed to develop AI.
[...]
First, the industry represents the culmination of various lines of thought that are deeply hostile to democracy. Silicon Valley owes its existence to state intervention and subsidy, and it has at different times worked to capture various institutions or erode their ability to interfere with private control of computation. Firms like Facebook, for example, have argued not only that they are too large or complex to break up but that their size must actually be protected and integrated into a geopolitical rivalry with China.
Second, that hostility to democracy, more than a singular product like AI, is amplified by profit-seeking behavior that constructs increasingly larger threats to humanity. It’s Silicon Valley and its emulators worldwide, not AI, that create and finance harmful technologies aimed at surveilling, controlling, exploiting, and killing human beings with little to no room for the public to object. The search for profits and excessive returns, with state subsidy and intervention clearing the way of competition, has and will create a litany of immoral business models and empower brutal regimes alongside “existential” threats. At home, this may look like the surveillance firm and government contractor Palantir creating a deportation machine that terrorizes migrants. Abroad, this may look like the Israeli apartheid state exporting spyware and weapons it has tested on Palestinians.
Third, this combination of a deeply antidemocratic ethos and a desire to seek profits while externalizing costs can’t simply be regulated out of Silicon Valley. These are fundamental attributes of the industry that trace back to the beginning of computation. These origins in optimizing plantations and crushing worker uprisings prefigure the obsession with surveillance and social control that shape what we are told technological innovations are for.
Taken altogether, why should we worry about some far-flung threat of a superintelligent AI when its creators—an insular network of libertarians building digital plantations, surveillance platforms, and killing machines—exist here and now? Their Smaugian hoards, their fundamentalist beliefs about markets and states and democracy, and their track record should be impossible to ignore.
2023
Pickleball. Generative AI. Lula takes office in Brazil, Amazon Rainforest throws a party. Prince Harry refusing to stop talking about his frozen penis no matter how many times society begged him to stop. UFOs are real. Viral cat dubbed ‘largest cat anyone has ever seen’ gets adopted. Pee-Wee’s big adventure ends. Musk & X. Turkey-Syria earthquake kills thousands. India surpasses China as ‘country squeezing in the most peeps’. Tucker Carlson ousted. Miss USA and her 30 lbs moon costume. Wildfires in Kelowna and Hawaii. Macron tinkers with retirement age of the French. Paltrow can’t ski. Big Red Boots. Bob Barker leaves us. Alabama mom delivers 2 babies from her 2 uteruses in 2 days. Charles III. Ukrainian counteroffensive against Russian forces as the war drags on. Taylor Swift is Time’s Person of the Year. African ‘coup belt’. Flo-Jo dies in her sleep. Chinese spy balloon shot down. Hollywood writers strike. Human ‘nice mugshot’ Shitstain and his 91 indictments. Highest interest rates in 2 decades. The Bear’s Christmas episode. War in Gaza. Shinzo Abe is assassinated. Alex Murdaugh. Ocean Cleanup removes 25 000 lbs of trash from the Great Pacific Garbage Patch. Vase purchased for $3.99 sells for $100 000 at auction. Barbenheimer. A third of Pakistan is flooded. Lionel Messi is the GOAT. Travis Kelce. The Sphere opens in Las Vegas. Regulators seized Silicon Valley Bank and Signature Bank, resulting in two of the three largest bank failures in U.S. history. “The Woman In Me”. WHO declares COVID ain’t a thing no more. Titan sub sinks, rich people die. Matthew Perry drowns. Dumbledore Dies (again). Massive sales of ‘Fuck Trudeau’ flags for jacked-up micro-dick trucks. Everything Everywhere All At Once. June-August was the hottest three-month period in recorded history across the Earth. Tina Turner dies. And the Beatles release a new song?! Wow… You got big shoes to fill 2024.
Archives for context:
2020
Kobe. Pandemic. Lockdown. Koalas on fire. Harry and Meg retire. Toilet paper hoarding. Alcoholism. Impeach the f*cker. Parasite. Bonnie Henry. Tiger King. Working from home. Sourdough bread. Harvey Weinstein guilty. Zoom overdose. Dip your body in sanitizer. 6 feet. Quarantine. OK Boomer. Home schooling (everyone passes). Murder hornets. Dolly Parton. Don’t hug, kiss or see anybody, especially your family. Chris Evans’ junk. TikTok. Glory holes. Face masks. CERB. West Coast wildfires. Stay home. Small Businesses lose, big box stores win. F*ck Bozos. ‘Dreams’ and cranberry juice. Close yoga studios, but thumbs up to your local gym. Speak moistly to me. George Floyd. BLM. F*ck Trump. Phase 2, 3 and Summer. RBG. Baby Yoda. Biden wins. Bond and Black Panther die. No more lockdown. Back to school and work. Just kidding... giddy up round 2. Giuliani leaks shit from his head. Resurgence of chess. UFOs are real. Restrictions. Dave Grohl admits defeat. Monolith. “F*ck... forgot my mask in the car”. No Christmas shenanigans allowed. Bubbles. Alex Trebek. Use the term ‘dumpster fire’ one too many times. Jupiter and Saturn form 'Christmas Star'. Happy New Year Bitches!!!! 2021... you better not sh*t the bed!!
2021
“We love you, you’re very special”. Failed coup attempt at the Capital. Twitter, FB and IG ban Donny. Hammerin’ Hank goes to the Field of Dreams. Bozo no longer richest man but still a twat. Leachman, Tyson, and Holbrook pass. The economy is worse than expected. Kim and Kanye split. Brood X cicadas. Dre has an aneurysm and nearly has his home broken into. Bridgerton. MyPillow CEO is a douche. Covid restrictions extended indefinitely. Captain Von Trapp dies. Proud Boys officially a Terrorist Organization. Richard Ramirez. Cancer takes Screech. Travel bans. Impeachment trial (again?… oh and this was barely February? WTF??!!) Suez Canal blockage. Myanmar protest. Kong dukes it out with Godzilla, while Raya watches. Olympics. Friends compare elective surgeries. F9. Canada Women’s Soccer Gold. Free Britney. Multiverses. Residential Schools in Canada unearth children’s bodies. Kate is Mare of Easttown. Cuomo resigns. Disney and Dwayne cruise together. Wildfires. Delta variants. Musk passes Bezos. Candyman x 5. Capt. Kirk goes to space. F*ck Kyle Rittenhouse. Astros didn’t win. Squid Game. Goodbye Bond. Dune is redone. Angelina is Eternal. Astroworld deaths. Meta. Omicron. Three Spidermen. Tornados in December? World Juniors cancelled. Pills against Covid. School opening delayed. And Betty White dies. 2022… my expectations are ridiculously low…
2022
Wow… eight billion people. Queen Elizabeth II passes away after ruling the Commonwealth before dirt was invented. The monkeypox. Russia plays the role of global a**hole. Wordle. Mother Nature rocks Afghanistan. Hover bike. Styles spits on Pine. Olivia Newton John, Kristie Alley, and Coolio leave us. Pele was traded to team Heaven. FTX implodes. Madonna and the 3-D model of her vagina. Pig gives his heart to a human. Beijing can brag that it is the first city ever to host both the Summer Olympics and Winter Olympics. Uvalde. $3 trillion Apple. Keith Raniere gets 120 years. The Whisky War ends with Canada and Denmark going halfsies. Mar-a-Lago. Nick Cannon brood hits a dozen. Shinzo Abe is assassinated. Inflation goes through the roof (if you can actually afford to put a roof over your head). Volodymyr Zelensky. European heat wave. Bennifer. Salman Rushdie is stabbed on stage, Dave Chappelle tackled, and Chris Rock is only slapped. Thích Nhất Hạnh. Heidi Klum goes full slug. Cuba knocked out by Ian. Liz Truss and 4.1 Scaramuccis. Taylor Swift breaks Ticketmaster. Human shitstain Elon Musk ignores helping mankind and buys Twitter instead. Riri becomes a mommy. NASA launches Artemis 1. Trump still a whiny little b*tch. Music lost Loretta Lynn, Christine McVie, and Meat Loaf. Democracy died at least three times. Pete Davidson continues to date hottest women on the planet (no one understands how?!) Microplastics in our blood. Alex Jones is a c*nt. So is DeSantis. Argentina wins the World Cup. Meghan and Harry. Eddie Munson rips Metallica in the Upside Down. tWitch. Roe vs Wade is overturned by the micro dick energy of the Supreme Court. CODA. James Corden shows he is a "tiny Cretin of a man". Amber (and the sh*t on the bed) Heard (round the world). Sebastian Bear-McClard proves he’s one of the f*cking dumbest men alive. Latin America's ‘pink tide’. Anti-Semitic rants by Ye. Bob Saget. A verified blue checkmark. Godmother of punk Vivienne dies. 
And, Tom Cruise feels the need for speed yet again. 2023… whatcha got for us?!? Nothing shocks me anymore.
@daily-esprit-descalier
Russia, China and Iran remain the country's most significant foreign election threats, though the U.S. has seen an "increasing" number of threats from other actors, Director of National Intelligence Avril Haines told the Senate Intelligence Committee on Wednesday.
THE BIG PICTURE: The most concerning threats to this year's election are those against election workers, which often stem from false narratives about the 2020 election, Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), told lawmakers.
• Both Haines and Easterly said the federal government's ability to protect elections has increased in recent years and that it has never been more prepared.
• Easterly said election workers have resigned over threats they received.
• "Such claims are corrosive to the sacred foundations of our democracy," Easterly said, "and they have led to harassment and threats of violence against election officials of both parties and their families."
ZOOM OUT: Haines said Russia remains the most active foreign threat to elections with the goals of eroding trust in U.S. institutions, exacerbating societal divides and reducing American support for Ukraine.
• She said China has a sophisticated influence apparatus but it did not deploy it in the 2020 presidential election and there has been no indication it will do so this election.
• China has targeted candidates from both political parties in previous elections to generate support for its foreign policy initiatives, like its territorial claims in Taiwan and Tibet.
THE BIG PICTURE: The intelligence community said earlier this week that threats against election workers have been "supercharged" by new technologies, including artificial intelligence.
• This election, generative AI has also been used to damage campaigns, including a fake robocall campaign using President Biden's voice to discourage votes in New Hampshire's primary in February.
• The Senate Rules Committee on Wednesday passed three bills to protect elections against deceptive AI, while a bipartisan group of senators unveiled a roadmap for how Congress should regulate AI that same day.
GO DEEPER: The split reality of election threats on Capitol Hill
emptyanddark · 1 year
what's actually wrong with 'AI'
it's become impossible to ignore the discourse around so-called 'AI'. but while the bulk of the discourse is saturated with nonsense, i wanted to pool some resources to get a good sense of what this technology actually is, its limitations and its broad consequences. 
what is 'AI'
the best essay to learn about what i mentioned above is On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? this essay cost two of its collaborators their jobs at Google. it frames what large language models are, what they can and cannot do, and the actual risks they entail: not some 'super-intelligence' that we keep hearing about but concrete dangers: climate costs, the quality of the training data, and biases - both in the training data and from us, the users. 
The problem with artificial intelligence? It’s neither artificial nor intelligent
How the machine ‘thinks’: Understanding opacity in machine learning algorithms
The Values Encoded in Machine Learning Research
Troubling Trends in Machine Learning Scholarship: Some ML papers suffer from flaws that could mislead the public and stymie future research
AI Now Institute 2023 Landscape report (discussions of the power imbalance in Big Tech)
ChatGPT Is a Blurry JPEG of the Web
Can we truly benefit from AI?
Inside the secret list of websites that make AI like ChatGPT sound smart
The Steep Cost of Capture
labor
'AI' champions the facade of non-human involvement. but the truth is that this is a myth that serves employers by underpaying the hidden workers, denying them labor rights and social benefits - as well as hyping up their product. the effects on workers are not only economic but detrimental to their health - both mental and physical.
OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic
also from the Times: Inside Facebook's African Sweatshop
The platform as factory: Crowdwork and the hidden labour behind artificial intelligence
The humans behind Mechanical Turk’s artificial intelligence
The rise of 'pseudo-AI': how tech firms quietly use humans to do bots' work
The real aim of big tech's layoffs: bringing workers to heel
The Exploited Labor Behind Artificial Intelligence
worker surveillance
5 ways Amazon monitors its employees, from AI cameras to hiring a spy agency
Computer monitoring software is helping companies spy on their employees to measure their productivity – often without their consent
theft of art and content
Artists say AI image generators are copying their style to make thousands of new images — and it's completely out of their control  (what gives me most hope about regulators dealing with theft is Getty images' lawsuit - unfortunately individuals simply don't have the same power as the corporation)
Copyright won't solve creators' Generative AI problem
AI is already taking video game illustrators’ jobs in China
Microsoft lays off team that taught employees how to make AI tools responsibly/As the company accelerates its push into AI products, the ethics and society team is gone
150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting
Inside the AI Factory: the Humans that Make Tech Seem Human
Refugees help power machine learning advances at Microsoft, Facebook, and Amazon
Amazon’s AI Cameras Are Punishing Drivers for Mistakes They Didn’t Make
China’s AI boom depends on an army of exploited student interns
political, social, ethical consequences
Afraid of AI? The startups selling it want you to be
An Indigenous Perspective on Generative AI
“Computers enable fantasies” – On the continued relevance of Weizenbaum’s warnings
‘Utopia for Whom?’: Timnit Gebru on the dangers of Artificial General Intelligence
Machine Bias
HUMAN_FALLBACK
AI Ethics Are in Danger. Funding Independent Research Could Help
AI Is Tearing Wikipedia Apart  
AI machines aren’t ‘hallucinating’. But their makers are
The Great A.I. Hallucination (podcast)
“Sorry in Advance!” Rapid Rush to Deploy Generative A.I. Risks a Wide Array of Automated Harms
The promise and peril of generative AI
ChatGPT Users Report Being Able to See Random People's Chat Histories
Benedetta Brevini on the AI sublime bubble – and how to pop it   
Eating Disorder Helpline Disables Chatbot for 'Harmful' Responses After Firing Human Staff
AI moderation is no match for hate speech in Ethiopian languages
Amazon, Google, Microsoft, and other tech companies are in a 'frenzy' to help ICE build its own data-mining tool for targeting unauthorized workers
Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them
The EU AI Act is full of Significance for Insurers
Proxy Discrimination in the Age of Artificial Intelligence and Big Data
Welfare surveillance system violates human rights, Dutch court rules
Federal use of A.I. in visa applications could breach human rights, report says
Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI
Generative AI Is Making Companies Even More Thirsty for Your Data
environment
The Generative AI Race Has a Dirty Secret
Black boxes, not green: Mythologizing artificial intelligence and omitting the environment
Energy and Policy Considerations for Deep Learning in NLP
AINOW: Climate Justice & Labor Rights
militarism
The Growing Global Spyware Industry Must Be Reined In
AI: the key battleground for Cold War 2.0?
‘Machines set loose to slaughter’: the dangerous rise of military AI
AI: The New Frontier of the EU's Border Externalisation Strategy
The A.I. Surveillance Tool DHS Uses to Detect ‘Sentiment and Emotion’
organizations
AI now
DAIR
podcast episodes
Pretty Heady Stuff: Dru Oja Jay & James Steinhoff guide us through the hype & hysteria around AI
Tech Won't Save Us: Why We Must Resist AI w/ Dan McQuillan, Why AI is a Threat to Artists w/ Molly Crabapple, ChatGPT is Not Intelligent w/ Emily M. Bender
SRSLY WRONG: Artificial Intelligence part 1, part 2
The Dig: AI Hype Machine w/ Meredith Whittaker, Ed Ongweso, and Sarah West
This Machine Kills: The Triforce of Corporate Power in AI w/ ft. Sarah Myers West
libbylayla1984 · 2 months
The Fragmented Future of AI Regulation: A World Divided
The Battle for Global AI Governance
In November 2023, China, the United States, and the European Union surprised the world by signing a joint communiqué, pledging strong international cooperation in addressing the challenges posed by artificial intelligence (AI). The document highlighted the risks of "frontier" AI, exemplified by advanced generative models like ChatGPT, including the potential for disinformation and serious cybersecurity and biotechnology risks. This signaled a growing consensus among major powers on the need for regulation.
However, despite the rhetoric, the reality on the ground suggests a future of fragmentation and competition rather than cooperation.
As multinational communiqués and bilateral talks take place, an international framework for regulating AI seems to be taking shape. But a closer look at recent executive orders, legislation, and regulations in the United States, China, and the EU reveals divergent approaches and conflicting interests. This divergence in legal regimes will hinder cooperation on critical aspects such as access to semiconductors, technical standards, and the regulation of data and algorithms.
The result is a fragmented landscape of warring regulatory blocs, undermining the lofty goal of harnessing AI for the common good.
Cold Reality vs. Ambitious Plans
While optimists propose closer international management of AI through the creation of an international panel similar to the UN's Intergovernmental Panel on Climate Change, the reality is far from ideal. The great powers may publicly express their desire for cooperation, but their actions tell a different story. The emergence of divergent legal regimes and conflicting interests points to a future of fragmentation and competition rather than unified global governance.
The Chip War: A High-Stakes Battle
The ongoing duel between China and the United States over global semiconductor markets is a prime example of conflict in the AI landscape. Export controls on advanced chips and chip-making technology have become a battleground, with both countries imposing restrictions. This competition erodes free trade, sets destabilizing precedents in international trade law, and fuels geopolitical tensions.
The chip war is just one aspect of the broader contest over AI's necessary components, which extends to technical standards and data regulation.
Technical Standards: A Divided Landscape
Technical standards play a crucial role in enabling the use and interoperability of major technologies. The proliferation of AI has heightened the importance of standards to ensure compatibility and market access. Currently, bodies such as the International Telecommunication Union and the International Organization for Standardization negotiate these standards.
However, China's growing influence in these bodies, coupled with its efforts to promote its own standards through initiatives like the Belt and Road Initiative, is challenging the dominance of the United States and Europe. This divergence in standards will impede the diffusion of new AI tools and hinder global solutions to shared challenges.
Data: The Currency of AI
Data is the lifeblood of AI, and access to different types of data has become a competitive battleground. Conflict over data flows and data localization is shaping how data moves across national borders. The United States, once a proponent of free data flows, is now moving in the opposite direction, while China and India have enacted domestic legislation mandating data localization.
This divergence in data regulation will impede the development of global solutions and exacerbate geopolitical tensions.
Algorithmic Transparency: A Contested Terrain
The disclosure of algorithms that underlie AI systems is another area of contention. Different countries have varying approaches to regulating algorithmic transparency, with the EU's proposed AI Act requiring firms to provide government agencies access to certain models, while the United States has a more complex and inconsistent approach. As countries seek to regulate algorithms, they are likely to prohibit firms from sharing this information with other governments, further fragmenting the regulatory landscape.
The vision of a unified global governance regime for AI is being undermined by geopolitical realities. The emerging legal order is characterized by fragmentation, competition, and suspicion among major powers. This fragmentation poses risks, allowing dangerous AI models to be developed and disseminated as instruments of geopolitical conflict.
It also hampers the ability to gather information, assess risks, and develop global solutions. Without a collective effort to regulate AI, the world risks losing the potential benefits of this transformative technology and succumbing to the pitfalls of a divided landscape.
bizarrequazar · 1 year
Text
GJ and ZZH Updates — January 8-14
<<< previous week || all posts || following week >>> 
This is part of a weekly series collecting updates from and relating to Gong Jun and Zhang Zhehan.
This post is not wholly comprehensive and is intended as an overview, links provided lead to further details. Dates are in accordance with China Standard Time, the organization is chronological. My own biases on some things are reflected here. Anything I include that is not concretely known is indicated as such, and you’re welcome to do your own research and draw your own conclusions as you see fit. Please let me know if you have any questions, comments, concerns, or additions. :)
[Glossary of names and terms] [Masterlist of my posts about the situation with Zhang Zhehan]
01-08 → Colgate posted photos of fans next to their cutout of Gong Jun, with an equal number of both solo fans and CP fans shown (including a CPF with “junzhe” written in large letters on her bag).
01-09 → MARRSGREEN posted a promotional video spoken by Gong Jun.
→ Legend of AnLe passed government review with 39 episodes, which got on hotsearch. It can be expected to air soon! 
→ Colgate posted photos showing public displays of their ad featuring Gong Jun.
→ 361° posted a photo ad featuring Gong Jun.
01-10 → New regulations on AI-generated images went into effect in China, requiring users to clearly watermark things like deepfakes and AI art to make it clear that that’s what they are. Unfortunately, given that the Instagram is a) on an international platform and b) already criminal activity in the first place, this likely won’t help us here.
→ Calendars with white haired photos that the brand gave as gifts with purchases of over 850 RMB were shipped. On them, Wednesday was abbreviated as “WEN” and the dates were organized into rows of ten rather than weeks. 🤡 [photo] This resulted in infighting between whalers on Weibo about a (better quality) fan-made calendar which was given away for free by lottery, with some fans attacking it as “competition” against the official one despite it only being given away after sales had closed. Members of the scam ring were involved in instigating this.
→ The China Police official Weibo posted a promotional picture of Gong Jun, among a number of other celebrities, paying tribute to the third People's Police Festival. Gong Jun reposted this with the added caption: “The home that some people can’t stay in is the place you want to return to but are unable to. Ahead is difficult and dangerous, keeping the people behind you. Thank you for confronting danger! January 10th is the third #People’s Police Festival#, join me in paying tribute to the People’s Police!”
→ 361° posted a photo ad featuring Gong Jun. (1129 kadian)
→ An article about the recent songs was published on Medium, giving an objective summary about the rapid rising and falling on music charts and addressing the concerns about whether Zhang Zhehan is involved. This is an article that was previously published on What’s On Weibo on 01-02 but taken down less than 24 hours later; it now includes more detailed research and overall has a more skeptical air to it. (Thank you for your work Ms. Chu, it’s very appreciated 🙏)
→ The Instagram posted ten travel photos.  Fan Observation: The last one seems to be an attempt to imitate the photo Gong Jun posted on 12-17 of the person in a hoodie taking a photo. 
→ Hogan tweeted three photos from Gong Jun’s Wonderland photoshoot highlighting their shoes.
→ Charlotte Tilbury posted a photo ad featuring Gong Jun.
01-11 → Colgate posted a promotional video spoken by Gong Jun.
→ Gong Jun posted eight travel photos from Yunnan and a photo from the filming of Fox Spirit Matchmaker to his personal Weibo. Caption: “Some remaining film fragments​​​​” An hour later he also posted seven of these plus an additional one to Xiao Hong Shu, caption: “Shock!  Didn’t realize, there was still stock!” and four plus another additional one to Instagram, caption: “😏” 
→ #GongJun trended on Twitter.
→ 2022 satellite non-prime TV rankings were released, with Gong Jun’s shows Dream Garden and Begin Again in the two highest spots respectively.
→ Honor posted a promotional video spoken by Gong Jun.
→ QuelleVous made a post linking Da Xiong (Gong Jun’s manager)’s current Weibo marketing account, where she has been leaking industry information in the same manner as on her previous account since 2022-04-06. These leaks include information about Gong Jun and Fox Spirit Matchmaker, and at the time of this post her most recent post was only a few hours ago.
01-12 → Esquire posted a teaser video featuring Gong Jun for their January issue. (1129 kadian) Caption: “Before the lens focuses  he was already ready  What about you?” A few seconds of behind the scenes footage from this was leaked a couple days earlier; I will not be linking it.
→ The Instagram posted six more travel photos.
→ The clothing brand A BIG THANKS posted photos of Gong Jun wearing their coat.
01-13 → Esquire posted the double covers for their issue featuring Gong Jun. (1129 kadian) Caption: “Only those who have really traveled far will be grateful for every inch of land and their own feet. Compared with the past, @ Gong Jun Simon cares more about the future.” This was reposted by Gong Jun’s studio with the added caption, “The ripples are quiet, reflecting the willful reflection. Boss @ Gong Jun Simon releases inspiration in the camera, and everything can be traced.” It was also reposted by Hogan. (1129)
Half an hour later, Esquire also posted another teaser video. Caption: “‘I hope everyone can see a different side of me’, @ Gong Jun Simon has said this sentence many times, and he has done it.”
→ Colgate posted a photo ad featuring Gong Jun.
→ Gong Jun’s studio posted a video (flashing lights cw) of behind the scenes footage from the photoshoot. Caption: “The snapshots float, and move willfully. Boss @ Gong Jun Simon encounters the true self and enjoys every moment of freedom.” BGM is Como tu by Cannabeats, Zona Infame, and Klibre.
→ Esquire posted eight photos from their photoshoot with Gong Jun. Caption: “For @ Gong Jun Simon, growth is about seeing the gap between reality and ideal, and then working hard to close it.”
→ Gong Jun posted both of the Esquire covers and four of the photos. Caption: “This is the first time I’ve tried a long curly hairstyle, hope you like it, thanks to @ Esquire” Esquire reposted this with the added caption, “Happy cooperation! ❤️❤️❤️ Looking forward to unlocking more new looks with Esquire in the future [3 shiba inu emojis]” Gong Jun also posted the photos to his Xiao Hong Shu, caption: “Anyway I think  It looks pretty good! 😛” and as two posts [here] and [here] to his Instagram, captions: “😜😜😜” and “Wuhu, looks good”
→ Hogan posted six of the Esquire photos highlighting their shoes.
→ The Instagram posted a clip of the third song.
→ Gong Jun’s studio posted twelve behind the scenes photos from the shoot. Caption: “Boss @ Gong Jun Simon’s filming break is broadcasting 📸” They commented on the post “Make up a colour” with a full-colour version of one of the photos that is rendered in monochrome in the main post.
→ Gong Jun’s studio posted a douyin of more behind the scenes footage. Caption: “Today's ‘set diary’! Capturing the willful and comfortable boss @ Gong Jun Simon under the camera” The BGM is a slowed reverb remix of the Ian Asher mashup of One Kiss by Dua Lipa and I Was Never There by The Weeknd (this took me so long to find, please listen to it.)
→ Gong Jun was seen doing a photoshoot at Universal Studios Beijing, where he also watched the parade they were holding.
→ The Muses account (associated with Xie Yihua) posted an official statement discouraging fans from using the previous tactics to inflate sales activity, claiming that they had never encouraged this. (Bullshit.) They also made a post announcing the title Deep Blue Man for the album these songs will be compiled in. Fan Observation: Zhang Zhehan has always previously been associated with light blue. Them explicitly using dark blue is likely another intended smear, as the colour is associated with the Kuomintang political party.
→ 361° posted a photo ad featuring Gong Jun.
01-14 → #TheRealZhangZhehan trended on Twitter, staying there for over twelve hours.
→ The third song was released on YouTube and later Apple Music and Spotify. Fan Observation: It was noticed that comments from some familiar names (whalers) were made on the video six hours before it was posted. This can be done by initially posting the video privately and giving people the link; the view and comment counts then reflect the actions of those with the link, so the video looks more immediately popular than it truly is once it’s made public. It’s therefore highly suspected that whalers were given the link specifically with this aim.
→ #GongJun trended on Twitter.
→ Esquire posted a video featuring Gong Jun. (1129 kadian) [subbed video] Caption: “Looking back on 2022, @ Gong Jun Simon said: ‘You can only make the best choice at the moment. Now that the choice is over, the rest will be handed over[.] Let time prove it, there is no regret, everything is good.’”
→ 361° posted a photo ad featuring Gong Jun. (1129 kadian)
→ MUJOSH posted a photo ad featuring Gong Jun. (1129 kadian) They later posted a behind the scenes video. (flashing lights cw)
→ QuelleVous made a post revealing which of Xie Yihua’s people is the one behind the Muses account (Zhao Yao).* A few hours later the account made a post saying it would no longer be active in order to “give Zhehan more focus.” *When I’ve included stuff about this account in the past, I believe I’ve usually just referred to it as Xie Yihua. I apologize for the mistake. 
→ Kangshifu posted a photo ad featuring Gong Jun.
→ Gong Jun posted nine photos from visiting Universal Studios Beijing the previous day. Caption: “Played all day hahahahahahahaha [three husky emojis]” This was reposted by MUJOSH highlighting their glasses. He also posted the photos to his Xiao Hong Shu, caption: “Happy New Year  🐼” and his Instagram, caption: “wuhu~”
→ An interview with BRTV that Gong Jun did for their Spring Festival Gala was released. [subbed video]
→ Gong Jun’s studio posted nine more photos from Universal Studios Beijing, these all of Gong Jun in Harry Potter robes. Caption: “Boss @ Gong Jun Simon's surprise trip, a day immersed in adventure and ‘wow’!” This was again reposted by MUJOSH.
→ 361° posted two photo ads featuring Gong Jun.
→ Esquire posted the article of their interview with Gong Jun. [partial translation]
→ CAPA updated their website, now including a list of their main leaders and subcommittees (aka a step towards more transparency.) It’s possible that this was something they were ordered to do rather than something they did voluntarily, but at this time that is only speculation.
Addition 01-16: Chinese film report posted a video of Gong Jun writing a blessing for the new year. [written translation]
Additional Reading: → Flora’s daily news
<<< previous week || all posts || following week >>>
This post was last edited 2023-01-16.
glitchphotography · 1 year
Photo
Writing a short addendum of action steps to my “You Don’t Hate AI Art, You Hate Capitalism” essay that I posted yesterday and including some AI photographs made with DALLE2 of Capybaras helping a union drive. These outputs were all based off of original photos I took of Capybaras in Ipaussu, Sao Paulo, in 2019. Let’s start with some advice for illustrators and commercial artists who are worried about AI ruining their lives.
UNIONIZE YOUR WORKPLACE! Seriously, your bosses are the ones out to get you, not the AIs or AI artists. Unions are the best way to prevent outsourcing to AIs.
UNIONIZE WITH OTHER FREELANCERS! This is harder to accomplish, but freelancer unions do exist and there should be more of them. (Side note: many artists who will be affected have the privilege to work in highly gatekept cultural industries that, in the US, discriminate against POC, immigrants, Queer people, Disabled people, etc. They are also mostly based in the Global North and they don’t hire many experimental new media artists, so keep in mind, this aint my battle lol)
Opt-out of Stable Diffusion V3! If your work is showing up on the “Have  I been Trained“ website, you can opt out of future AI model releases through this link: https://haveibeentrained.com/
Learn about new media histories and experimental processes! The art world is changing fast and visual arts don’t all revolve around representational illustration and commercial imaging. Expanding your craft is a matter of adaptation and survival. We new media artists have been messing with AIs for almost a decade now and this whole hysteria makes a lot of you sound brand new.
Train AI models using your own works and let everyone make derivatives. This is the classic “don’t beat ’em, join ’em” approach and I think it’s awesome. It makes you seem generous and not like some boomer screaming “muh copyrights” into the void. Here’s a tutorial on how to train Stable Diffusion on your own set of images.
Now for some broad structural solutions, because these calls for AI Art Bans are giving strong “Disco Sucks” vibes:
AIs should be treated as public infrastructure and thereby socialized or nationalized. No private company should own AI systems or vectors of information. AIs should be open source and free for all to learn and use.
Commercial applications of AIs should be strictly regulated. Not just because biases and misuse of sexualized images are rampant, but because private ownership of AI will lead to more socio-economic inequalities.
Generative Image AI Services should be paying royalties to artists. If you want your favorite illustrator to get those royalties, go pressure OpenAI and MidJourney directly. They have records of all their prompts and are making bank. What people don’t know is that when these services came out, they had atrocious licensing restrictions where they owned the prompts and images AI Artists were creating, but community pressure made them cave in. Now AI Artists own the outputs while granting a nonexclusive license to the AI corps (much like how social media platforms operate).
Finally here’s some advice for digital artists working with AI:
Artists should fully disclose if their works include AI-generated imagery, especially if it’s the main visual. Many exhibiting artists use terms like “Collaboration with AI” to describe their art and that’s a good practice, because it really is a collaboration with a computer. (Side note: China is making it a law that all AI-generated media have a watermark or disclaimer and I think that’s a fair approach).
Artists should attribute the other artists they reference in their prompts, especially if they are relying on a living, working artist’s aesthetic for their output. Many AI Artists will mash-up several references, like cyberpunk + lisa frank + ansel adams + raphael painting, and tho i personally wouldnt fault an artist for not disclosing these references, since they are all canonical, I think its commendable for AI Artists to share their prompts along with their visual work.
It is unethical to take raw outputs based off of a single living, working artist in order to commercialize a whole unattributed series of derivative works based off their aesthetic. But I think there are caveats if you are working within a legitimate conceptual framework and you attribute the artist you referenced. Also if you are remixing your artistic reference in your own workflow, you are protected by Fair Use and that’s ok. The best way to tell whether someone is using AI in unethical ways is to consider that person’s entire body of work. Are they appropriating in an interesting way? Is there a concept or politics behind their appropriation? Is AI Art all they seem to do? Art history has been driven by appropriation and that’s not going to change.
Consider whether your AI Art punches up or punches down! Sure, go ahead and rip off the damien hirsts and jeff koons of the world. But if you are a white dude making japonaiserie or chinoiserie, or outputs based off of Black hip-hop culture, you should reconsider your approach. That FKN Meka thing is a perfect example of white guy owners creating a fake AI rapper using lyrics ghostwritten by Black rappers. That shit is gross and evil. Keep in mind that evaluating cultural appropriation requires an understanding of colonialism and racial capitalism.
zvaigzdelasas · 10 months
Text
17 Jul 23
China Law Translate - Interim Measures for the Management of Generative Artificial Intelligence Services
Quotes from direct English translation of law below
These measures apply to the use of generative AI technologies to provide services to the public in the [mainland] PRC for the generation of text, images, audio, video, or other content (hereinafter generative AI services). Where the state has other provisions on the use of generative AI services to engage in activities such as news and publication, film and television production, and artistic creation, those provisions are to be followed. These Measures do not apply where industry associations, enterprises, education and research institutions, public cultural bodies, and related professional bodies, etc., research, develop, and use generative AI technology, but have not provided generative AI services to the (mainland) public.[...]
During processes such as algorithm design, the selection of training data, model generation and optimization, and the provision of services, effective measures are to be employed to prevent the creation of discrimination such as by race, ethnicity, faith, nationality, region, sex, age, profession, or health;[...]
Respect intellectual property rights and commercial ethics, and protect commercial secrets, advantages in algorithms, data, platforms, and so forth must not be used for monopolies or to carry out unfair competition;[...]
Promote the establishment of generative AI infrastructure and public training data resource platforms. Promote collaboration and sharing of algorithm resources, increasing efficiency in the use of computing resources. Promote the orderly opening of public data by type and grade, expanding high-quality public training data resources. Encourage the adoption of safe and reliable chips, software, tools, computational power, and data resources.[...]
Where intellectual property rights are involved, the intellectual property rights that are lawfully enjoyed by others must not be infringed;[...]
Where personal information is involved, the consent of the personal information subject shall be obtained or it shall comply with other situations provided by laws and administrative regulations;[...]
When manual tagging is conducted in the course of researching and developing generative AI technology, the providers shall formulate clear, specific, and feasible tagging rules that meet the requirements of these Measures;[...]
Providers shall bear responsibility as the producers of online information content in accordance with law and are to fulfill the online information security obligations. Where personal information is involved, they are to bear responsibility as personal information handlers and fulfill obligations to protect personal information. Providers shall sign service agreements with users who register for their generative AI services (hereinafter “users”), clarifying the rights and obligations of both parties.[...]
Providers shall clarify and disclose the user groups, occasions, and uses of their services, guide users’ scientific understanding and lawful use of generative AI technology, and employ effective measures to prevent minor users from overreliance or addiction to generative AI services.[...]
Providers shall lawfully and promptly accept and address requests from individuals such as to access, reproduce, modify, supplement, or delete their personal information.[...]
Providers shall label generated content such as images and video in accordance with the Provisions on the Administration of Deep Synthesis Internet Information Services.[...]
Those providing generative AI services with public opinion properties or the capacity for social mobilization shall carry out security assessments in accordance with relevant state provisions[...]
These measures take effect on August 15, 2023.
mariacallous · 10 months
Text
For more than a century and a half, an organization that has been referred to as “the most important agency you’ve never heard of” has been making technology global.
In its latest iteration, as the International Telecommunication Union (ITU), its global regulations now underpin most technologies we use in our daily lives, setting technical standards that enable televisions, satellites, cellphones, and internet connections to operate seamlessly from Japan to Brazil.
The next big technology may present the organization with its greatest challenge yet. Artificial intelligence systems are being deployed at a dizzying pace around the world, with implications for virtually every industry from education and health care to law enforcement and defense. Governments around the world are scrambling to balance benefits and bogeys, attempting to set guardrails without missing the boat on technological transformation. The ITU, with 193 member states as well as hundreds of companies and organizations, is trying to get a handle on that rowdy conversation.
“Despite the fact that we’re 158 years old, I think that the mission and mandate of the ITU has never been as important as it is today,” said Doreen Bogdan-Martin, the agency’s secretary-general, in a recent interview.
Founded in 1865 in Paris as the International Telegraph Union and tasked with creating a universal standard for telegraph messages to be transmitted between countries without having to be hard-coded into each country’s system at the border, the ITU would subsequently go on to play a similar role in future technologies including telephones and radio. In 1932, the agency adopted its current name to reflect its ever-expanding remit — folding in the radio governance framework that established maritime distress signals such as S.O.S. — and was brought under the aegis of the United Nations in 1947.
Bogdan-Martin, who took office in January, is the first woman to lead the ITU, and only the second American. Getting there followed months of campaigning to defeat her opponent, a former Russian telecommunications official who also worked as an executive at the Chinese technology firm Huawei, in an election that was widely billed as a battle for the future of the internet, not to mention a key bulwark for the West in the face of an increasingly assertive China and Russia within the U.N. (Bogdan-Martin also took over the ITU leadership from China’s Zhao Houlin, who had served for eight years after running unopposed twice.)
“It was intense,” Bogdan-Martin acknowledged. Ultimately, she won with 139 out of 172 votes cast.
Russia and China have been at the forefront of a competing vision for the internet, in which countries have greater control over what their citizens can see online. Both countries already exercise that control at home, and Russia has used the war in Ukraine to further restrict internet access and create a digital iron curtain that inches closer to China’s far more advanced censorship apparatus, the Great Firewall. In a joint statement last February, the two countries said they “believe that any attempts to limit their sovereign right to regulate national segments of the internet and ensure their security are unacceptable,” calling on “greater participation” from the ITU to address global internet governance issues.
“I firmly support a free and open, democratic internet,” Bogdan-Martin said. Those values are key to her biggest priority for the ITU—bringing the internet to the 2.7 billion people worldwide who still haven’t experienced it. “Safe, affordable, trusted, responsible, meaningful connectivity is a global imperative,” she said. 
Getting that level of global consensus on how to regulate artificial intelligence may not be as straightforward. Governments around the world have taken a variety of approaches—and not always compatible. The European Union’s AI Act, set for final passage later this year, ranks AI applications by levels of risk and potential harm, while China’s regulations target specific AI applications and require developers to submit information about their algorithms to the government. The United States is further behind the curve when it comes to binding legislation but has so far focused on light-touch regulation and more voluntary frameworks aimed at allowing innovation to flourish.
In recent weeks, however, calls for a global AI regulator have grown louder, modeled after the nuclear nonproliferation framework governed by the International Atomic Energy Agency (IAEA). Proponents of that idea include U.N. Secretary-General António Guterres and OpenAI CEO Sam Altman, whose advanced chatbots have catalyzed much of the hand-wringing around the technology. But some experts argue that comparisons to nuclear weapons don’t quite capture the challenges of artificial intelligence.
“People forget what a harmony there was between the [five permanent members of the Security Council] in the United Nations over the IAEA,” said Robert Trager, international governance lead at the nonprofit Center for the Governance of AI. When it comes to AI regulation these days, those members “don’t have the same degree of harmony of interest, and so that is a challenge.”
Another difference is the far wider application of AI technologies and the potential to transform nearly every aspect of the global economy for better and for worse. “It’s going to change the nature of our interactions on every front. We can’t really approach it and say: ‘Oh, there’s this thing, AI, we’ve now got to figure out how to regulate it, just like we had to regulate automobiles or oil production or whatever,’” said Gillian Hadfield, a professor of law at the University of Toronto who researches AI regulation. “It’s really going to change the way everything works.”
The sheer pace of AI development doesn’t make things any easier for would-be regulators. It took less than six months after the launch of ChatGPT caused a seismic shift in the global AI landscape for its maker, OpenAI, to launch GPT-4, a new version of the software engine powering the chatbot that can incorporate images as well as text. “One of the things we’re seeing with the EU AI Act, for example, is it hasn’t even been passed yet and it’s already struggling to keep up with the state of the technology,” Hadfield said.
But Bogdan-Martin is looking to get the ball rolling. The ITU hosted its sixth annual AI for Good Global Summit last week, which brought together policymakers, experts, industry executives, and robots for a two-day discussion of ways in which AI could help and harm humanity—with a focus on guardrails that mitigate the latter. Proposed solutions from the summit included a global registry for AI applications and a global AI observatory.
“Things are just moving so fast,” Bogdan-Martin said. “Every day, every week we hear new things,” she said. “But we can’t be complacent. We have to be proactive, and we do have to find ways to tackle the challenges.”
And although total consensus may be hard to achieve, there are some fundamental risks of AI that experts say countries will be keen to mitigate regardless of their ideology—such as protection of children—that can form something of a baseline.
“No jurisdictions, no states, have an interest in civilian entities doing things that are dangerous to society,” Trager said. “There is this common interest in developing the regime, in figuring out what the best standards are, and so I think there’s a lot of opportunity for collaboration.”
cuckerfailson · 1 year
Text
Everyone I know who works in AI, from the actual coders to the lawyers working on the law around its regulation, knows it's bullshit, knows it's a buzzword to describe what Silicon Valley wants its next golden egg to be. What they actually produce has absolutely nothing to do with the rhetoric surrounding it, from the Roko's basilisk-obsessed effective altruists trying to make a new religion to the hysterical anti-AI luddites. Artificial general intelligence is what they are actually talking about, and it does not exist and will never exist. What this rhetoric serves to do is cover up an attempt by the largest tech companies to reduce labor costs by any means necessary, mainly by outsourcing moderation, code editing, artwork, advertising and other small, expensive tasks that usually require you to hire a trained professional. All the extra stuff, all the discourse, is just ideological fluff to make that singular point easier to achieve. There may even be true believers with a lot of money who fully bought into the idea of We Need To Make The AI Good So Nobody (China) Makes It Evil, but they are rubes, and if you believe ChatGPT to be anything other than a particularly advanced chatbot designed to give the answers people want, or DALL-E to be anything other than an art aggregator that spits out low-quality suggestions, you're an idiot who doesn't understand where technology is now and what is even possible or desirable to achieve with it. They just need an excuse to pay you less, and literally anything will do.
clouds-of-wings · 1 year
Text
China has taken the first steps to ensure that AI-generated media is distinguishable. In a report from The Register, China’s Cyberspace Administration issued new regulations. These new rules prohibit the creation of AI-generated media without clear distinguishing marks and labels. These new rules come as 2022 has proven to be an AI-rich time, with AI-generated art, music, and other media taking the internet by storm. China’s Cyberspace Administration, similar to the FCC in the United States, is tasked with regulation and oversight of the internet, though the Cyberspace Administration also has an extra pillar of censorship as part of its purview.
These new regulations aim to better oversee what it calls “deep synthesis” technology. Think deep fakes which are quite popular online. The government agency’s official website outlined its reasons for issuing the regulation. It took aim at recently popular images, text, video, and other AI.
thenightling · 2 years
Text
Review of Werewolf by Night
I just saw the Marvel / Disney+ “Special” Werewolf by Night.   It was very good.  I liked it.   I will pick apart some things, particularly about Disney / Marvel policies, but know that in general I did love this Special (as they called it).
Most of it was shot in black and white and it has very good atmosphere and ambiance.  It does feel like an old Universal or even Hammer (though those were usually in color) monster movie.  
It is good to see the MCU (Marvel Cinematic Universe) finally tap into its Gothic properties. This was the first time I had seen the character of Jack Russell (yes, that's really his name, though they never say the last name in the special) in something other than animation. It was also good to see a depiction of Man-Thing that wasn't in a low-budget schlock film that technically counted as a Syfy Original movie.
I wish Marvel would use its supernatural / Gothic properties more than just around the Halloween season. As someone who loves Gothic Horror, this is what turned me off of Marvel in recent years: the way they downplay their supernatural and Gothic properties in favor of sci-fi. Even the director of Doctor Strange (who is supposed to be the Sorcerer Supreme) patted himself on the back about how everything in the movie could be explained with theoretical physics. If you want real elemental magick you have to hunt down the animated Doctor Strange movie from 2007, actually titled Doctor Strange: The Sorcerer Supreme. You may notice that was before Disney purchased Marvel. It was after Disney purchased Marvel that they started to heavily downplay the supernatural content, both to appeal to the Chinese market (which doesn't like Western depictions of the supernatural) and to differentiate Marvel from Disney proper. Disney would have the magick while Marvel would be (as they intended) superheroes only.
I feel that if this had been just twenty or so minutes longer it could have been released theatrically. However, Disney is still obsessed with getting that coveted Chinese market, and China will not allow "Western depictions" of the afterlife, i.e. ghosts and things they consider occult (though they allow the Fantastic Beasts franchise through...). The Chinese market has hindered supernatural horror here in the US for years. It's why so few movies with supernatural content have large budgets; most are made on a shoestring budget so that it doesn't matter whether or not they're released in China to earn a substantial profit. It's also why films like the Child's Play reboot went with an evil, malfunctioning AI instead of a possessed doll. Same with the movie M3GAN.
Don't let them trick you into thinking modern horror audiences don't like supernatural content, or think it's silly, or not scary. No, that's not the reason. Real horror fans still love the supernatural. There's nothing quite as scary as the unknown, and evil artificial intelligences have been done to death since the '60s.
The cynic in me feels that the gimmick of making it look like it was made in the 1930s had a secondary purpose: showing blood that was not red, and thus bypassing regulations that would have gotten the "special" a TV-MA rating (the TV equivalent of an R rating).
When I was a teenager I loved Marvel's horror properties: Tomb of Dracula, Werewolf by Night, Legion of Monsters, etc. But by the twenty-teens it was as if Marvel was shying away from the supernatural, even in their comic books. The only good vampire story they did in years was Avengers: Vampire Wars in 2019, and that was barely promoted. Most people didn't even know the story arc happened.
The plot of Werewolf by Night: Ulysses Bloodstone, patriarch of the Bloodstone family and of a secret cult of monster hunters, has just passed away. You can tell right away from the dialogue that these aren't like Dr. Abraham Van Helsing, trying to protect the world from evil. No, these are paranoid and self-righteous fanatics with a bloodlust, looking for the freedom to kill anything they deem non-human. The only one who might be different from the rest of this sect is Elsa Bloodstone.
In the comics of the 2000s Elsa felt like a bit of a knock-off of Buffy the Vampire Slayer, but I think enough time has passed and they made enough tweaks to the character that this version stood on her own without anyone thinking of Buffy. Unfortunately, now there might be comparisons to Syfy's Van (Vanessa) Helsing. One day I'd love to see a woman vampire hunter in the style of Peter Cushing's Dr. Van Helsing: an older woman, at least in her fifties, perhaps dressed like a Victorian-era man instead of in the tight leather that female vampire hunters have been forced into in pop culture since the '90s.
This gathering of monster hunters had a competition in mind, hosted by Ulysses' widow (Elsa's stepmother), and they are allowed to kill each other in the process. The contest is to find and kill a monster to which they have attached the Bloodstone amulet. The amulet grants near-immortality to its wearer, as well as enhanced strength and stamina, and can weaken supernatural entities. "Monsters" (beings of the supernatural) cannot touch the Bloodstone without being hurt.
A man is there who appears to be after the prize. This is Jack (the Werewolf by Night). He's actually there to rescue Man-Thing. If you're not familiar with Man-Thing, he's pretty much Marvel's version of Swamp Thing, only mute, and his touch can burn and kill you if you are afraid; if you are not afraid, his touch will not harm you. In this story Man-Thing answers to his previous human name of "Ted," much like how Swamp Thing sometimes answers to Alec.
Trivia: the creator of Man-Thing was roommates with the creator of Swamp Thing, so it's unclear which of them came up with the idea first.
Elsa learns the important lesson that not all monsters are evil as she helps Jack, and later both Jack and Ted help her. Needless to say, the other hunters (and her stepmother) end up dead, and Elsa inherits the Bloodstone amulet.
My one big complaint was something that I thought was clever at first: they made the body of Ulysses Bloodstone look like the literary Frankenstein Monster as illustrated in Marvel Comics' own Monster of Frankenstein. In the teaser I almost thought it would be the Frankenstein Monster, who used to be portrayed as intelligent and articulate in Marvel comics and went by the name Adam... until they started to let people who never read Mary Shelley's original novel write the character. Then he was portrayed as simple-minded, like in some cliché horror movies.
The reason Ulysses resembling the literary Frankenstein Monster bothers me is that this homage indicates Marvel has no intention of ever using their version of Adam Frankenstein with the literary look. And that's a shame; they had one of the best versions of the Frankenstein Monster. He was even good in the kid-friendly game Super Hero Squad Online.
This felt almost like a TV pilot, so I hope Werewolf by Night or Elsa Bloodstone ends up with a TV series as a result. The story appeared to be set in the 1930s. It would be nice if Marvel actually allowed some of its characters to be immortal; in the original Wolf Man movies, Larry Talbot was immortal. DC doesn't shy away from its supernatural properties or immortal characters (see Lucifer, Doom Patrol, The Sandman, and Dead Boy Detectives).
They also implied that Jack usually only changes form during the full moon, like in The Wolf Man. That's ironic, because when someone wanted to do a Werewolf by Night movie in the '90s, Stan Lee cynically suggested that they use any werewolf, that they didn't need to waste money by using Marvel's. Stan Lee implied that Jack was so underdeveloped that there was nothing special to him. I don't think Stan Lee was that fond of the Marvel horror properties, even though Tomb of Dracula is considered a classic now.
Back when I was reading Werewolf by Night comics, Jack changed whenever he got emotional, or he could shift at will, a bit like the Hulk. I guess someone didn't want any "confusion." I hate idiot-proofing, the impolite term for when Hollywood over-simplifies things because it underestimates the audience's ability to follow the story.
I would like to see Marvel brave enough to use characters like these in the "off season," that is to say, outside of just around Halloween. It would show that they have faith in the property being able to stand on its own and not just as a Halloween gimmick. I had forgotten how much I loved Marvel's Gothic properties, and there certainly was good ambiance here, even if it was a bit predictable. But sometimes predictable is good; there's an entire generation that doesn't know these tropes and never watched the classic monster movies, so this could be a gateway for them to get into traditional Gothic Horror.