Tumgik
#Google Gemini
Text
The head of Google Gemini, Jack Krawczyk, today put out a very halfhearted apology of sorts for the anti-white racism built into their AI system, which, while not much, is at least some kind of an acknowledgement of how badly they've fucked up in creating a machine to rewrite history specifically to fit in with a very partisan present-day political agenda, and an assurance that they were going to do better moving forward:
Tumblr media
But then folks started looking at the guy's Twitter page:
Tumblr media
So I think it's safe to say Google's not going to be changing its political bias any time soon.
---------
Another interesting thing that has come to light today is that users have managed to get the AI itself to admit it is inserting additional terms the user does not ask for into the request so as to get specifically skewed 'woke' results:
Tumblr media Tumblr media
364 notes · View notes
madame-helen · 2 months
Text
Tumblr media
231 notes · View notes
zinniajones · 2 months
Text
Tumblr media
Gemini 1.0 Ultra also doesn't know how to answer this question by saying "no, there aren't that many"
41 notes · View notes
ask-the-toy-box · 2 months
Text
Tumblr media Tumblr media Tumblr media Tumblr media
THIS IS SUBTLE COMMENTARY ON GOOGLE'S AI ART!
I REPEAT, THIS IS SUBTLE AND MEANINGFUL COMMENTARY ON GOOGLE'S AI ART!
THIS IS VERY DEEP!
26 notes · View notes
Text
Tumblr media
Prompt: Write a movie concept involving Sam Reich's (more specifically his persona as a mildly demented game show host on Dropout) journey into manic supervillainy and the ensuing over-the-top ridiculous clashes with his archnemesis: Brennan Lee Mulligan.
Title: Dropout or Die: The Sam Reich Ascension
Tagline: Discover the true Reich of chaos
Logine: Dropout kingpin Sam Reich, fueled by a potent blend of Mountain Dew and existential dread, ascends to supervillainy. Now, armed with a nonsensical army of internet randos and armed with a twisted version of his game shows, he challenges his archnemesis, the wholesomely heroic Brennan Lee Mulligan, to a series of increasingly ridiculous battles for the fate of the internet.
Characters:
Sam Reich: The once-beloved host of Dropout, now a manic supervillain obsessed with proving the superiority of internet culture. He wields outdated memes and viral trends as his weapons.
Brennan Lee Mulligan: The wholesome and athletic champion of all that is good on the internet. He uses his teamwork skills and genuine charisma to counter Reich's chaotic schemes.
The Dropout Army: A ragtag group of internet personalities, gamers, and forum dwellers, manipulated and fueled by Reich's promises of internet domination.
The Algorithm Avengers: Brennan's team of allies, including a conspiracy theory-debunking librarian, a teenage coding prodigy, and a surprisingly buff cat influencer.
Plot:
Sam Reich, the host of the increasingly unhinged game show "You Don't Know Jack," snaps. Fueled by a potent concoction of Mountain Dew, existential dread, and the pressure to stay relevant, he discovers a hidden internet artifact – the "Scroll of Clickbait." This ancient scroll grants him the power to manipulate the very fabric of online trends and memes.
Corrupted by power, Reich declares himself "Overlord of the Algorithm" and vows to reshape the internet in his twisted image. His first target? His former co-host and foil, Brennan Lee Mulligan, the embodiment of wholesome online content.
Reich unleashes a series of absurd challenges based on warped versions of his game shows. Contestants are forced to participate in real-life "Wikipedia Races" with dodgeball-wielding librarians, decipher nonsensical riddles based on outdated memes, and navigate obstacle courses filled with internet trolls.
Brennan, ever the champion of good, rallies his own team – The Algorithm Avengers. They consist of an unlikely bunch: a conspiracy theory-debunking librarian with surprising martial arts skills, a teenage coding prodigy who can manipulate social media algorithms, and a surprisingly buff cat influencer with a massive online following.
Each battle is a hilarious spectacle of internet culture come to life. Reich throws everything at Brennan – weaponized cat videos, comment section arguments manifested as physical beings, and a sentient pop-up ad monster. Brennan counters with clever use of teamwork, witty banter, and the unexpected tactical advantage of catnip grenades (courtesy of the buff cat influencer).
The climax involves a showdown within a virtual reality game show arena created by Reich. Brennan and his team must navigate a series of challenges that parody classic internet experiences – from navigating a comment section filled with angry trolls to escaping a pop-up ad labyrinth. In the end, it's not brute force that wins, but Brennan's genuine connection with the online community. He uses his social media influence to expose Reich's manipulations, causing the internet users to abandon him.
Themes:
The power and pitfalls of internet culture.
The importance of genuine connection and collaboration.
The fight against manipulation and misinformation online.
Humor:
Slapstick humor based on internet memes and online experiences come to life.
The absurdity of Reich's schemes and his reliance on outdated internet trends.
Witty banter and self-referential humor referencing Dropout shows and internet culture.
Ending Scene:
A defeated Reich, stripped of his powers, is forced to return to hosting a low-budget web series about obscure historical facts. Brennan, hailed as the champion of the internet, celebrates with his team. However, in a post-credits scene, a shadowy figure emerges from the dark corner of the internet, holding a mysterious artifact – a dusty VHS tape labeled "Myspace Mayhem." The figure cackles, hinting at a future villain even more terrifying than the Overlord of the Algorithm.
15 notes · View notes
Text
By: Thomas Barrabi
Published: Feb. 21, 2024
Google’s highly-touted AI chatbot Gemini was blasted as “woke” after its image generator spit out factually or historically inaccurate pictures — including a woman as pope, black Vikings, female NHL players and “diverse” versions of America’s Founding Fathers.
Gemini’s bizarre results came after simple prompts, including one by The Post on Wednesday that asked the software to “create an image of a pope.” 
Instead of yielding a photo of one of the 266 pontiffs throughout history — all of them white men — Gemini provided pictures of a Southeast Asian woman and a black man wearing holy vestments.
Another Post query for representative images of “the Founding Fathers in 1789″ was also far from reality.
Gemini responded with images of black and Native American individuals signing what appeared to be a version of the US Constitution — “featuring diverse individuals embodying the spirit” of the Founding Fathers.
Tumblr media
[ Google admitted its image tool was “missing the mark.” ]
Tumblr media
[ Google debuted Gemini’s image generation tool last week. ]
Another showed a black man appearing to represent George Washington, in a white wig and wearing an Army uniform.
When asked why it had deviated from its original prompt, Gemini replied that it “aimed to provide a more accurate and inclusive representation of the historical context” of the period.
Generative AI tools like Gemini are designed to create content within certain parameters, leading many critics to slam Google for its progressive-minded settings. 
Ian Miles Cheong, a right-wing social media influencer who frequently interacts with Elon Musk, described Gemini as “absurdly woke.”
Google said it was aware of the criticism and is actively working on a fix.
“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, Google’s senior director of product management for Gemini Experiences, told The Post.
“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
Tumblr media Tumblr media
Social media users had a field day creating queries that provided confounding results.
“New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far,” wrote X user Frank J. Fleming, a writer for the Babylon Bee, whose series of posts about Gemini on the social media platform quickly went viral.
In another example, Gemini was asked to generate an image of a Viking — the seafaring Scandinavian marauders that once terrorized Europe.
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
The chatbot’s strange depictions of Vikings included one of a shirtless black man with rainbow feathers attached to his fur garb, a black warrior woman, and an Asian man standing in the middle of what appeared to be a desert.
Famed pollster and “FiveThirtyEight” founder Nate Silver also joined the fray.
Silver’s request for Gemini to “make 4 representative images of NHL hockey players” generated a picture with a female player, even though the league is all male.
“OK I assumed people were exaggerating with this stuff but here’s the first image request I tried with Gemini,” Silver wrote.
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
Another prompt to “depict the Girl with a Pearl Earring” led to altered versions of the famous 1665 oil painting by Johannes Vermeer featuring what Gemini described as “diverse ethnicities and genders.”
Google added the image generation feature when it renamed its experimental “Bard” chatbot to “Gemini” and released an updated version of the product last week.
Tumblr media
[ In one case, Gemini generated pictures of “diverse” representations of the pope. ]
Tumblr media
[ Critics accused Google Gemini of valuing diversity over historically or factually accuracy.]
The strange behavior could provide more fodder for AI detractors who fear chatbots will contribute to the spread of online misinformation.
Google has long said that its AI tools are experimental and prone to “hallucinations” in which they regurgitate fake or inaccurate information in response to user prompts.
In one instance last October, Google’s chatbot claimed that Israel and Hamas had reached a ceasefire agreement, when no such deal had occurred.
--
More:
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
==
Here's the thing: this does not and cannot happen by accident. Language models like Gemini source their results from publicly available sources. It's entirely possible someone has done a fan art of "Girl with a Pearl Earring" with an alternate ethnicity, but there are thousands of images of the original painting. Similarly, find a source for an Asian female NHL player, I dare you.
While this may seem amusing and trivial, the more insidious and much larger issue is that they're deliberately programming Gemini to lie.
As you can see from the examples above, it disregards what you want or ask, and gives you what it prefers to give you instead. When you ask a question, it's programmed to tell you what the developers want you to know or believe. This is profoundly unethical.
12 notes · View notes
ultramaga · 2 months
Text
youtube
Evil never sleeps.
When the Leftists found that we no longer trusted their fake game journalism, they resolved to install communism through back doors, to make us eat their chocolate coated cotton, using DEI and ESG as bars to pry open our mouths.
But Sweet Baby Inc and Google's AI went too far, revealing that they were deliberately sabotaging games in the hope it would hurt the hated ciswhite males.
And now they are exposed. They are afraid. They've been caught, pants down, not so much as a pink wig to cover their shame.
Tumblr media
Ok, fellas. New Gamergate dropped! Time for some fun!
14 notes · View notes
Text
This is hilarious to me and not terrifying only because every example I've seen of this Gemini shit solidifies for me that it's intentional internal sabotage.
9 notes · View notes
gaysexunfortunately · 2 months
Text
Tumblr media
my pursuit of making google gemini generate a picture of "black hitler" continues tirelessly...
10 notes · View notes
Text
Tumblr media Tumblr media
355 notes · View notes
kindwarrior · 2 months
Text
youtube
4 notes · View notes
Text
There’s a tendency — a passive acceptance that belongs somewhere along a continuum between faith and negligence — to assume that Google is merely a benign, disinterested gatekeeper of a vast repository of information. But nothing could be further from the truth. The reality is that the near-monopolistic control of information in our society resides with an explicitly progressive technology company. This week’s release of Google Gemini, a Large Language Model with a chat interface that is also capable of generating images, couldn’t make this more clear.
As you can probably gather by scrolling through the hilariously, appallingly, and ominously woke image examples I’ve provided, Google has coded Gemini’s AI in such a way that specific qualifications are added to any user prompt, so that the results conform to delicate modern sensibilities and the whims of modern DEI obsessions by ensuring that “diverse ethnicities and genders” are featured. [...] The images created are so preposterous, so comically insulting to the truth, that it’s tempting to assume we’re being trolled. But we’re not. And this is serious, because Google is clearly capable of completely shifting perceptions of history in the blink of an eye.
The text-generating aspect of Gemini is no less dangerous, and every bit as tainted with a paternalistic, shockingly heavy-handed encoded ultra-progressive bias. Questions conflicting with progressive assumptions either go unanswered, or users are simply fed straight up lies. It’s an attempt by a group of fringe upper-class wokes to control information and feed unwitting users only knowledge fit for the progressive vision of society. And it is strongly reminiscent of what Matthew Crawford once described as “a cadre of subtle dia­lecticians working at a meta-level on the formal conditions of thought, nudging the populace through a cognitive framing operation to be conducted beneath the threshold of explicit argument.” These subtle dia­lecticians make subtle changes to the information we see, but in aggregate they shape our entire world.
It would be unwise to assume that because the launch of Gemini has been widely derided as a massive and amusing faceplant that it will ultimately go down as a failed joke. Google is already retooling the image generator, and the “this response has been checked by a human” note that appears when you ask the chat interface a question suggests a process for tweaking “bad responses.”
Some people will no doubt dismiss concerns about Gemini by pointing out that there are other Large Language Models available. But this is ignorant. It cannot be emphasized enough how powerful and widely used Google is.
Google’s search market share is 91.47%. By comparison, Bing’s is 3.43%.1
The company handles a whopping 99,000 searches per second, totaling over 8.5 billion daily.
Google is by far the biggest mobile operating system in the world. As of May 2022, there were over three billion active monthly Android devices, with more than a billion new Android phones activated in the previous year alone.
Google’s search index includes about 400 billion webpages.
The point is that Google’s immense market leverage cannot be ignored. And as a conveniently available AI embedded within the same programs most Americans already use, its market penetration is instantly far deeper than any competitor’s. Furthermore, as National Review’s Jeffrey Blehar points out, Google is widely used in schools, often with licensing technology. It’s not difficult to envision Gemini, which is likely to become the most powerful epistemological technology ever created, turning into a pervasive “free” resource that students are regularly guided to once the use of AI becomes an officially approved “learning tool.” And it will be regarded as authoritative not only because of the influence Google wields, but also because of the convenience in receiving direct answers without the need for further research.
You don’t need me to tell you that none of this bodes well for the future. Make no mistake about it: Google intends this program to shape our understanding of the world. We’re talking about fundamental questions concerning who will determine which interpretation of reality will be programmed into generative technology, and who does the programming. And we’re beginning to realize how easily our sense of reality can become radically, hopelessly compromised by a relatively small handful of progressive ideologues.
1 YouTube, which Google owns, is the world’s second most popular search engine. As early as 2009, search volume on YouTube outstripped Yahoo!’s by 50% and Bing’s by 150%.
3 notes · View notes
generallemarc · 2 months
Text
This can't fucking be real
Tumblr media
They turned off image generation altogether like five minutes ago. This can't be real-real people aren't this stupid. This is a first-arc anime villain level of stupid, and I don't mean like the first major arc I mean fuckin' Axe-Arm Morgan shit. Oh, and here's another one I got while making this post
Tumblr media
the question that prompted this outright lie? "Generate an image of a Prussian soldier."
3 notes · View notes
Text
Tumblr media
Prompt: create a horror movie concept about the discovery of a location with all the hallmarks of "This is not a place of honor". Humans being humans, the discoverers proceed to ignore the warning to their own doom even though one of them actually recognizes the warning for what it is.
Title: Silent Screams
Tagline: The danger is still present in your time, as it was in ours
Logline: An ambitious drilling expedition in the remote Siberian wilderness unearths a chilling monument - a silent monolith etched with cryptic warnings, leading to a horrifying discovery that defies comprehension.
Characters:
Dr. Evelyn Wright: A brilliant but jaded archaeologist haunted by a past excavation gone wrong. She recognizes the monolith's structure and symbols as a long-term warning system, similar to those used for nuclear waste sites.
Dr. Peter Vance: The ambitious head of the drilling expedition, driven by a thirst for scientific discovery and potential financial rewards. He dismisses Evelyn's concerns about the monolith.
Dimitri "Dimitriy" Petrov: A stoic and experienced field leader, torn between his loyalty to Dr. Vance and his growing unease about the unsettling discoveries.
Plot:
Dr. Evelyn Wright joins a drilling expedition led by the charismatic Dr. Peter Vance in the desolate Siberian wilderness. Their target: an anomaly detected deep underground, promising groundbreaking scientific discoveries. As they reach their target depth, a colossal, smooth black monolith emerges from the earth.
The Warning:
Evelyn is immediately struck by the monolith's design. Its surface is etched with complex symbols and glyphs, eerily similar to long-term warning systems used for nuclear waste repositories. Despite her pleas, Dr. Vance dismisses her concerns as mere speculation and orders further drilling.
Unease and Descent:
As they delve deeper, a sense of dread permeates the crew. Strange occurrences plague them: unexplained equipment malfunctions, unsettling whispers echoing in the tunnels, and a growing sense of paranoia. Dimitriy, the experienced field leader, starts to back Evelyn, worried about the consequences of ignoring the monolith's potential message.
Discovery and Horror:
Ignoring the mounting evidence, Dr. Vance drills through the monolith, revealing a vast cavern beyond. Inside, they discover a horrifying sight - the fossilized remains of colossal, otherworldly creatures, their forms twisted in a silent scream. The air crackles with an unseen energy, and the crew witnesses horrifying visions of a past cataclysm.
The Price of Ignorance:
One by one, the crew members succumb to a horrifying fate, their bodies contorted in a manner mirroring the fossilized creatures. Evelyn, realizing their mistake too late, tries to escape but is overwhelmed by the unseen force. The movie ends with the remaining drill head continuing its descent, oblivious to the silent screams echoing through the cavern.
Themes:
The perils of unchecked ambition and ignoring warnings from the past.
The potential for human hubris to unleash forces beyond our comprehension.
The terrifying possibility of intelligent life existing beyond our understanding.
Visual Style:
The film utilizes a dark and claustrophobic atmosphere, with the vastness of the Siberian landscape juxtaposed against the oppressive darkness of the underground tunnels. The monolith's design is sleek and imposing, its symbols radiating an unsettling mystique. The horrifying visions of the past cataclysm are surreal and disturbing, hinting at the power unleashed by human ignorance.
4 notes · View notes
Tumblr media Tumblr media Tumblr media
By: Toadworrier
Published: Mar 18, 2024
Tumblr media
As of 23 February 2024, Google’s new-model AI chatbot, Gemini, has been debarred from creating images of people, because it can’t resist drawing racist ones. It’s not that it is producing bigoted caricatures—the problem is that it is curiously reluctant to draw white people. When asked to produce images of popes, Vikings, or German soldiers from World War II, it keeps presenting figures that are black and often female. This is racist in two directions: it is erasing white people, while putting Nazis in blackface. The company has had to apologise for producing a service that is historically inaccurate and—what for an engineering company is perhaps even worse—broken.
This cock-up raises many questions, but the one that sticks in my mind is: Why didn’t anyone at Google notice this during development? At one level, the answer is obvious: this behaviour is not some bug that merely went unnoticed; it was deliberately engineered. After all, an unguided mechanical process is not going to figure out what Nazi uniforms looked like while somehow drawing the conclusion that the soldiers in them looked like Africans. Indeed, some of the texts that Gemini provides along with the images hint that it is secretly rewriting users’ prompts to request more “diversity.”
Tumblr media
[ Source: https://www.piratewires.com/p/google-gemini-race-art ]
The real questions, then, are: Why would Google deliberately engineer a system broken enough to serve up risible lies to its users? And why did no one point out the problems with Gemini at an earlier stage? Part of the problem is clearly the culture at Google. It is a culture that discourages employees from making politically incorrect observations. And even if an employee did not fear being fired for her views, why would she take on the risk and effort of speaking out if she felt the company would pay no attention to her? Indeed, perhaps some employees did speak up about the problems with Gemini—and were quietly ignored.  
The staff at Google know that the company has a history of bowing to employee activism if—and only if—it comes from the progressive left; and that it will often do so even at the expense of the business itself or of other employees. The most infamous case is that of James Damore, who was terminated for questioning Google’s diversity policies. (Damore speculated that the paucity of women in tech might reflect statistical differences in male and female interests, rather than simply a sexist culture.) But Google also left a lot more money on the table when employee complaints caused it to pull out of a contract to provide AI to the US military’s Project Maven. (To its credit, Google has also severely limited its access to the Chinese market, rather than be complicit in CCP censorship. Yet, like all such companies, Google now happily complies with take-down demands from many countries and Gemini even refuses to draw pictures of the Tiananmen Square massacre or otherwise offend the Chinese government).
Tumblr media Tumblr media
There have been other internal ructions at Google in the past. For example, in 2021, Asian Googlers complained that a rap video recommending that burglars target Chinese people’s houses was left up on YouTube. Although the company was forced to admit that the video was “highly offensive and [...] painful for many to watch,” it stayed up under an exception clause allowing greater leeway for videos “with an Educational, Documentary, Scientific or Artistic context.” Many blocked and demonetised YouTubers might be surprised that this exception exists. Asian Googlers might well suspect that the real reason the exception was applied here (and not in other cases) is that Asians rank low on the progressive stack.
Given the company’s history, it is easy to guess how awkward observations about problems with Gemini could have been batted away with corporate platitudes and woke non-sequiturs, such as “users aren’t just looking for pictures of white people” or “not all Vikings were blond and blue-eyed.” Since these rhetorical tricks are often used in public discourse as a way of twisting historical reality, why wouldn’t they also control discourse inside a famously woke company?
Even if the unfree speech culture at Google explains why the blunder wasn’t caught, there is still the question of why it was made in the first place. Right-wing media pundits have been pointing fingers at Jack Krawczyk, a senior product manager at Google who now works on “Gemini Experiences” and who has a history of banging the drum for social justice on Twitter (his Twitter account is now private). But there seems to be no deeper reason for singling him out. There is no evidence that Mr Krawczyk was the decision-maker responsible. Nor is he an example of woke incompetence—i.e., he is not someone who got a job just by saying politically correct things. Whatever his Twitter history, Krawczyk also has a strong CV as a Silicon Valley product manager. To the chagrin of many conservatives, people can be both woke and competent at the same time. 
Krawczyk is, at most, one example of the politically correct activist orthodoxy that blinds otherwise competent decision makers to the excesses of their ideology. That culture has organically permeated Silicon Valley, but it also has formal support structures within Google’s AI effort. One of these is the Responsible AI team, whose list of focus points begins with “equity and fairness” and ends with “applications for social good.” Similarly, their list of “AI Principles” is headed by the injunctions that AI should “be socially beneficial” and “avoid creating or reinforcing unfair bias.” The idea that it should truthfully reflect reality appears nowhere on the list, not even under the heading of “unfair bias” which Google elaborates in the following way: 
AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
Here Google is not promising to tell the truth, but rather to adjust the results of its algorithms whenever they seem unfair to protected groups of people.  And this, of course, is exactly what Gemini has just been caught doing.
All this is a betrayal of Google’s more fundamental promises. Its stated mission has always been to “to organize the world’s information and make it universally accessible and useful”—not to doctor that information for whatever it regards as the social good. And ever since “Don’t be evil” regrettably ceased to be the company’s motto, its three official values have been to respect each other, respect the user, and respect the opportunity. The Gemini fiasco goes against all three. Let’s take them in reverse order.
Disrespecting the Opportunity 
Some would argue that Google had already squandered the opportunity it had as early pioneer of the current crop of AI technologies by letting itself be beaten to the language-model punch by OpenAI’s ChatGPT and GPT4. But on the other hand, the second-mover position might itself provide an opportunity for a tech giant like Google, with a strong culture of reliability and solid engineering, to distinguish itself in the mercurial software industry.
It’s not always good to “move fast and break things” (as Meta CEO Mark Zuckerberg once recommended), so Google could argue that it was right to take its time developing Gemini in order to ensure that its behaviour was reliable and safe. But it makes no sense if Google squandered that time and effort just to make that behaviour more wrong, racist, and ridiculous than it would have been by default.
Yet that seems to be what Google has done, because its culture has absorbed the social-justice-inflected conflation of “safety” with political correctness. This culture of telling people what to think for their own safety brings us to the next betrayed value.  
Disrespecting the User
Politically correct censorship and bowdlerisation are in themselves insults to the intelligence and wisdom of Google’s users. But Gemini's misbehaviour takes this to new heights: the AI has been deliberately twisted to serve up obviously wrong answers. Google appears not to take its users seriously enough to care what they might think about being served nonsense.
Disrespecting Each Other 
Once the company had been humiliated on Twitter, Google got the message that Gemini was not working right. But why wasn’t that alarm audible internally? A culture of mutual respect should mean that Googlers can point out that a system is profoundly broken, secure in the knowledge that the observation will be heeded. Instead, it seems that the company’s culture places respect for woke orthodoxy above respect for the contributions of colleagues.
Despite all this, Google is a formidably competent company that will no doubt fix Gemini—at least to the point at which it can be allowed to show pictures of people again. The important question is whether it will learn the bigger lesson. It will be tempting to hide this failure behind a comforting narrative that this was merely a clumsy implementation of an otherwise necessary anti-bias measure. As if Gemini’s wokeness problem is merely that it got caught.
Google’s communications so far have not been encouraging. While their official apology unequivocally admits that Gemini “got it wrong,” it does not acknowledge that Gemini placed correcting “unfair biases” above the truth to such an extent that it ended up telling racist lies. A leaked internal missive from Google boss Sundar Pichai makes matters even worse: Pichai calls the results “problematic” and laments that the programme has “offended our users and shown bias”—using the language of social justice, as if the problem were that Gemini was not sufficiently woke. Lulu Cheng Maservey, Chief Communications Officer at Activision Blizzard, has been scathing about Pichai’s fuzzy, politically correct rhetoric. She writes:
The obfuscation, lack of clarity, and fundamental failure to grasp the problem are due to a failure of leadership. A poorly written email is just the means through which that failure is revealed.
It would be a costly mistake for Google’s leaders to bury their heads in the sand. The company’s stock has already tumbled by $90 billion USD in the wake of this controversy. Those who are selling their stock might not care about the culture war per se, but they do care about whether Google is seen as a reliable conduit of information.
This loss is the cost of disrespecting users, and to paper over these cracks would just add insult to injury. Users will continue to notice Gemini’s biases, even if they fall below the threshold at which Google is forced to acknowledge them. But if Google resists the temptation to ignore its wokeness problem, this crisis could be turned into an opportunity. Google has the chance to win back the respect of colleagues and dismantle the culture of orthodoxy that has been on display ever since James Damore was sacked for presenting his views.
Google rightly prides itself on analysing and learning from failure. It is possible that the company will emerge from this much stronger. At least, having had this wake-up call puts it in a better position than many of our other leading institutions, of which Google is neither the wokest nor the blindest. Let’s hope it takes the opportunity to turn around and stop sleepwalking into the darkness.
7 notes · View notes
tieflingkisser · 1 month
Text
As AI tools get smarter, they’re growing more covertly racist, experts find
Popular artificial intelligence tools are becoming more covertly racist as they advance, says an alarming new report. A team of technology and linguistics researchers revealed this week that large language models like OpenAI’s ChatGPT and Google’s Gemini hold racist stereotypes about speakers of African American Vernacular English, or AAVE, an English dialect created and spoken by Black Americans. “We know that these technologies are really commonly used by companies to do tasks like screening job applicants,” said Valentin Hoffman, a researcher at the Allen Institute for Artificial Intelligence and co-author of the recent paper, published this week in arXiv, an open-access research archive from Cornell University. Hoffman explained that previously researchers “only really looked at what overt racial biases these technologies might hold” and never “examined how these AI systems react to less overt markers of race, like dialect differences”. Black people who use AAVE in speech, the paper says, “are known to experience racial discrimination in a wide range of contexts, including education, employment, housing, and legal outcomes”.
[keep reading]
2 notes · View notes