Before Elon Musk bought Twitter, slurs against Black Americans showed up on the social media service an average of 1,282 times a day. After the billionaire became Twitter’s owner, they jumped to 3,876 times a day.
Slurs against gay men appeared on Twitter 2,506 times a day on average before Mr. Musk took over. Afterward, their use rose to 3,964 times a day.
And antisemitic posts referring to Jews or Judaism soared more than 61% in the two weeks after Mr. Musk acquired the site.
These findings — from the Center for Countering Digital Hate, the Anti-Defamation League and other groups that study online platforms — provide the most comprehensive picture to date of how conversations on Twitter have changed since Mr. Musk completed his $44 billion deal for the company in late October. While the numbers are relatively small, researchers said the increases were atypically high.
The shift in speech is just the tip of a set of changes on the service under Mr. Musk. Accounts that Twitter used to regularly remove — such as those that identify as part of the Islamic State, which were banned after the U.S. government classified ISIS as a terror group — have come roaring back. Accounts associated with QAnon, a vast far-right conspiracy theory, have paid for and received verified status on Twitter, giving them a sheen of legitimacy.
These changes are alarming, researchers said, adding that they had never seen such a sharp increase in hate speech, problematic content and formerly banned accounts in such a short period on a mainstream social media platform.
“Elon Musk sent up the Bat Signal to every kind of racist, misogynist and homophobe that Twitter was open for business,” said Imran Ahmed, the chief executive of the Center for Countering Digital Hate. “They have reacted accordingly.”
Mr. Musk, who did not respond to a request for comment, has been vocal about being a “free speech absolutist” who believes in unfettered discussions online. He has moved swiftly to overhaul Twitter’s practices, allowing former President Donald J. Trump — who was barred for tweets that could incite violence — to return. Last week, Mr. Musk proposed a widespread amnesty for accounts that Twitter’s previous leadership had suspended. And on Tuesday, he ended enforcement of a policy against COVID misinformation.
But Mr. Musk has denied claims that hate speech has increased on Twitter under his watch. Last month, he tweeted a downward-trending graph that he said showed that “hate speech impressions” had dropped by a third since he took over. He did not provide underlying numbers or details of how he was measuring hate speech.
On Thursday, Mr. Musk said the account of Kanye West, which was restricted for a spell in October because of an antisemitic tweet, would be suspended indefinitely after the rapper, known as Ye, tweeted an image of a swastika inside the Star of David. On Friday, Mr. Musk said Twitter would publish “hate speech impressions” every week and agreed with a tweet that said hate speech spiked last week because of Ye’s antisemitic posts.
Changes in Twitter’s content not only have societal implications but also affect the company’s bottom line. Advertisers, which provide about 90% of Twitter’s revenue, have reduced their spending on the platform as they wait to see how it will fare under Mr. Musk. Some have said they are concerned that the quality of discussions on the platform will suffer.
On Wednesday, Twitter sought to reassure advertisers about its commitment to online safety. “Brand safety is only possible when human safety is the top priority,” the company wrote in a blog post. “All of this remains true today.”
The appeal to advertisers coincided with a meeting between Mr. Musk and Thierry Breton, the digital chief of the European Union, in which they discussed content moderation and regulation, according to an E.U. spokesman. Mr. Breton has pressed Mr. Musk to comply with the Digital Services Act, a European law that requires social platforms to reduce online harm or face fines and other penalties.
Mr. Breton plans to visit Twitter’s San Francisco headquarters early next year to perform a “stress test” of its ability to moderate content and combat disinformation, the spokesman said.
On Twitter itself, researchers said the increase in hate speech, antisemitic posts and other troubling content had begun before Mr. Musk loosened the service’s content rules. That suggested that a further surge could be coming, they said.
If that happens, it’s unclear whether Mr. Musk will have policies in place to deal with problematic speech or, even if he does, whether Twitter has the employees to keep up with moderation. Mr. Musk laid off, fired or accepted the resignations of more than half the company’s staff last month, including those who worked to remove harassment, foreign interference and disinformation from the service. Yoel Roth, Twitter’s head of trust and safety, was among those who quit.
The Anti-Defamation League, which files regular reports of antisemitic tweets to Twitter and keeps track of which posts are removed, said the company had gone from taking action on 60% of the tweets it reported to only 30%.
“We have advised Musk that Twitter should not just keep the policies it has had in place for years, it should dedicate resources to those policies,” said Yael Eisenstat, a vice president at the Anti-Defamation League, who met with Mr. Musk last month. She said he did not appear interested in taking the advice of civil rights groups and other organizations.
“His actions to date show that he is not committed to a transparent process where he incorporates the best practices we have learned from civil society groups,” Ms. Eisenstat said. “Instead he has emboldened racists, homophobes and antisemites.”
The lack of action extends to new accounts affiliated with terror groups and others that Twitter previously banned. In the first 12 days after Mr. Musk assumed control, 450 accounts associated with ISIS were created, up 69% from the previous 12 days, according to the Institute for Strategic Dialogue, a think tank that studies online platforms.
Other social media companies are also increasingly concerned about how content is being moderated on Twitter.
When Meta, which owns Facebook and Instagram, found accounts associated with Russian and Chinese state-backed influence campaigns on its platforms last month, it tried to alert Twitter, said two members of Meta’s security team, who asked not to be named because they were not authorized to speak publicly. The two companies often communicated on these issues, since foreign influence campaigns typically linked fake accounts on Facebook to Twitter.
But this time was different. The emails to their counterparts at Twitter bounced or went unanswered, the Meta employees said, in a sign that those workers may have been fired.
O God, our Lord, grant us good in this world and good in the Hereafter, and protect us from the torment of the Fire • Our Lord, let not our hearts deviate after You have guided us, and grant us mercy from Your presence; indeed, You are the Bestower - Sheikh Ahmed Talib - supplication from the Friday sermon - the Prophet’s Mosque - 5 Shawwal 1443
Why disinformation experts say the Israel-Hamas war is a nightmare to investigate
The Israel-Hamas conflict has been a minefield of confusing counter-arguments and controversies—and an information environment that experts investigating mis- and disinformation say is among the worst they’ve ever experienced.
In the time since Hamas launched its terror attack against Israel last month—and Israel has responded with a weekslong counterattack—social media has been full of comments, pictures, and video from both sides of the conflict putting forward their case. But alongside real images of the battles going on in the region, plenty of disinformation has been sown by bad actors.
“What is new this time, especially with Twitter, is the clutter of information that the platform has created, or has given a space for people to create, with the way verification is handled,” says Pooja Chaudhuri, a researcher and trainer at Bellingcat, which has been working to verify or debunk claims from both the Israeli and Palestinian sides of the conflict, from confirming that Israel Defense Forces struck the Jabalia refugee camp in northern Gaza to debunking the idea that the IDF has blown up some of Gaza’s most sacred sites.
Bellingcat has found plenty of claims and counterclaims to investigate, but convincing people of the truth has proven more difficult than in previous situations because of the firmly entrenched views on either side, says Chaudhuri’s colleague Eliot Higgins, the site’s founder.
“People are thinking in terms of, ‘Whose side are you on?’ rather than ‘What’s real,’” Higgins says. “And if you’re saying something that doesn’t agree with my side, then it has to mean you’re on the other side. That makes it very difficult to be involved in the discourse around this stuff, because it’s so divided.”
For Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), there have only been two moments prior to this that have proved as difficult for his organization to monitor and track: One was the disinformation-fueled 2020 U.S. presidential election, and the other was the hotly contested space around the COVID-19 pandemic.
“I can’t remember a comparable time. You’ve got this completely chaotic information ecosystem,” Ahmed says, adding that in the weeks since Hamas’s October 7 terror attack social media has become the opposite of a “useful or healthy environment to be in”—in stark contrast to what it used to be, which was a source of reputable, timely information about global events as they happened.
The CCDH has focused its attention on X (formerly Twitter), in particular, and is currently involved in a lawsuit with the social media company, but Ahmed says the problem runs much deeper.
“It’s fundamental at this point,” he says. “It’s not a failure of any one platform or individual. It’s a failure of legislators and regulators, particularly in the United States, to get to grips with this.” (An X spokesperson has previously disputed the CCDH’s findings to Fast Company, taking issue with the organization’s research methodology. “According to what we know, the CCDH will claim that posts are not ‘actioned’ unless the accounts posting them are suspended,” the spokesperson said. “The majority of actions that X takes are on individual posts, for example by restricting the reach of a post.”)
Ahmed contends that inertia among regulators has allowed antisemitic conspiracy theories to fester online to the extent that many people believe and buy into those concepts. Further, he says it has prevented organizations like the CCDH from properly analyzing the spread of disinformation and those beliefs on social media platforms. “As a result of the chaos created by the American legislative system, we have no transparency legislation. Doing research on these platforms right now is near impossible,” he says.
It doesn’t help when social media companies are throttling access to their application programming interfaces, through which many organizations like the CCDH do research. “We can’t tell if there’s more Islamophobia than antisemitism or vice versa,” he admits. “But my gut tells me this is a moment in which we are seeing a radical increase in mobilization against Jewish people.”
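Researchers who collect posts through a platform’s API typically have to work around rate limits of the kind described above. The sketch below is a hypothetical illustration (not the CCDH’s actual tooling, and `RateLimited`, `backoff_delay`, and `fetch_with_retries` are invented names): it shows the common pattern of retrying a throttled request with exponential backoff.

```python
import time

class RateLimited(Exception):
    """Raised by a fetch callable when the API answers HTTP 429 (Too Many Requests)."""

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Exponential backoff schedule: base, 2*base, 4*base, ... capped at `cap` seconds."""
    return min(cap, base * (2 ** attempt))

def fetch_with_retries(fetch, max_attempts=5, base=1.0):
    """Call `fetch()` until it succeeds, sleeping between rate-limited attempts."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except RateLimited:
            time.sleep(backoff_delay(attempt, base=base))
    raise RuntimeError("rate limit never cleared")
```

When a platform tightens its limits or raises API prices, the waits in a loop like this grow until large-scale collection becomes impractical, which is the access problem researchers describe.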
Right at the time when the most insight is needed into how platforms are managing the torrent of dis- and misinformation flooding their apps, there’s the least possible transparency.
The issue isn’t limited to private organizations. Governments are also struggling to get a handle on how disinformation, misinformation, hate speech, and conspiracy theories are spreading on social media. Some have reached out to the CCDH to try and get clarity.
“In the last few days and weeks, I’ve briefed governments all around the world,” says Ahmed, who declines to name those governments—though Fast Company understands that they may include the U.K. and European Union representatives. Advertisers, too, have been calling on the CCDH to get information about which platforms are safest for them to advertise on.
Deeply divided viewpoints are exacerbated not only by platforms tamping down on their transparency but also by technological advances that make it easier than ever to produce convincing content that can be passed off as authentic. “The use of AI images has been used to show support,” Chaudhuri says. This isn’t necessarily a problem for trained open-source investigators like those working for Bellingcat, but it is for rank-and-file users who can be hoodwinked into believing generative-AI-created content is real.
And even if those AI-generated images don’t sway minds, they can offer another weapon in the armory of those supporting one side or the other—a slur, similar to the use of “fake news” to describe factual claims that don’t chime with your beliefs, that can be deployed to discredit legitimate images or video of events.
“What is most interesting is anything that you don’t agree with, you can just say that it’s AI and try to discredit information that may also be genuine,” Chaudhuri says, pointing to users who have claimed an image of a dead baby shared by Israel’s account on X was AI—when in fact it was real—as an example of weaponizing claims of AI tampering. “The use of AI in this case,” she says, “has been quite problematic.”
A new convoy of my family members joins the list of martyrs of the Al-Kafarna family, bringing the number of martyrs of my family members and relatives to 75 martyrs, and there is still a large number under the rubble.
Because my family and relatives are not numbers, I will write all the names of my family members who were martyred in this aggression.
The martyr / Suhail Bassam Al-Kafarna
The martyr / Muhammad Bassem Al-Kafarna
The martyr / Bassem Muhammad Al-Kafarneh
The martyr / Muhammad Ali Al-Kafarna
The martyr / Ali Muhammad Al-Kafarna
The martyr / Hassan Ali Al-Kafarneh
The martyr / Youssef Hassan Al-Kafarna
The martyr / Misbah Muhammad Al-Mafarneh
The martyr / Karam Rushdi Al-Kafarneh
The martyr / Fatima Hassan Al-Kafarna
The martyr / Doaa Muhammad Al-Kafarna and her fetus
The martyr / Yazan Sharif Al-Kafarna
The martyr / Abeer Fakhri Al-Kafarneh
The martyr / Youssef Amer Al-Kafarna
The martyr / Hassan Yaqoub Al-Kafarneh
The martyr / Hussein Suhail Al-Kafarna
The martyr / Mohammed Iyad Al-Kafarneh
The martyr / Anas Iyad Al-Kafarna
The martyr / Ramez Iyad Al-Kafarna
The martyr / Mohammed Yasser Al-Kafarna
The martyr / Youssef Amer Al-Kafarna
The martyr / Ahmed Karim Al-Kafarna
The martyr / Mohamed Mahmoud Al-Kafarna
The martyr / Ayoub Ahmed Al-Kafarneh
The martyr / Jihad Adnan Al-Kafarna
The martyr / Mohammed Ammar Al-Kafarna
The martyr / Iman Suhail Al-Kafarna and her children
The martyr / Amani Jad Al-Kafarneh and her children
The martyr / Fayez Ali Al-Kafarna
The martyr / Muhammad Ali Al-Kafarna
The martyr / Muawiyah Ali Al-Kafarna
The martyr / Ahmed Wael Al-Kafarna
The martyr / Saleh Al-Kafarna
The martyr / Qasim Ahmed Al-Kafarneh
The martyr / Aoun Nofal Al-Kafarna
The martyr / Mahmoud Anwar Al-Kafarna
The martyr / Khater Maher Al-Kafarna
The martyr / Muhammad Khater Al-Kafarna
The martyr / Ahmed Ashraf Al-Kafarna
The martyr / Iyad Muhammad Taha Al-Kafarna
The martyr / Imran Muhammad Taha Al-Kafarna
The martyr / Saleh Abdel Muti Al-Kafarna
The martyr / Abdul Hamid Mamdouh Al-Kafarna
The martyr / Abdullah Mahmoud Al-Kafarna
The martyr / Abdul Jawad Akram Al-Kafarneh
The martyr / Shatha Nasser Al-Kafarna
The martyr / Moaz Youssef Al-Kafarneh
The martyr / Fayza Qasim Al-Kafarna
The martyr / Saad Youssef Al-Kafarneh
The martyr / Sham Youssef Al-Kafarneh
The martyr / Hassan Youssef Al-Kafarneh
The martyr / Ahlam Al-Kafarna
The martyr / Hamada Attia Al-Kafarna
The martyr / Salwa Muhammad Al-Kafarna
The martyr / Nour Hamada Al-Kafarna
The martyr / Mohamed Hamada Al-Kafarna
The martyr / Mohamed Khairy Al-Kafarneh
The martyr / Ahmed Ibrahim Al-Kafarna
The martyr / Shadi Awni Al-Kafarneh
The martyr / Amir Mazen Al-Kafarneh
The martyr / Jamal Mahmoud Al-Kafarna
The martyr / Mustafa Ghassan Al-Kafarna
The martyr / Aisha Mahmoud Al-Kafarna
The martyr / Redha Moayed Al-Kafarneh
The martyr / Sharif Saeed Hassan Al-Kafarneh
The martyr / Amira Sharif Hassan Al-Kafarna
The martyr / Hassan Sharif Hassan Al-Kafarneh
The martyr / Bassam Hassan Al-Kafarneh
The martyr / Safaa Abdel Samie Al-Kafarna
A large number of my family members are still missing or under the rubble.
WASHINGTON (AP) — Among images of the bombed-out homes and ravaged streets of Gaza, some stood out for the utter horror: Bloodied, abandoned infants.
Viewed millions of times online since the war began, these images are deepfakes created using artificial intelligence. If you look closely you can see clues: fingers that curl oddly, or eyes that shimmer with an unnatural light — all telltale signs of digital deception.
The outrage the images were created to provoke, however, is all too real.
Pictures from the Israel-Hamas war have vividly and painfully illustrated AI's potential as a propaganda tool, used to create lifelike images of carnage. Since the war began last month, digitally altered ones spread on social media have been used to make false claims about responsibility for casualties or to deceive people about atrocities that never happened.
While most of the false claims circulating online about the war didn’t require AI to create and came from more conventional sources, technological advances are coming with increasing frequency and little oversight. That’s made the potential of AI to become another form of weapon starkly apparent, and offered a glimpse of what’s to come during future conflicts, elections and other big events.
“It’s going to get worse — a lot worse — before it gets better,” said Jean-Claude Goldenstein, CEO of CREOpoint, a tech company based in San Francisco and Paris that uses AI to assess the validity of online claims. The company has created a database of the most viral deepfakes to emerge from Gaza. “Pictures, video and audio: with generative AI it’s going to be an escalation you haven’t seen.”
In some cases, photos from other conflicts or disasters have been repurposed and passed off as new. In others, generative AI programs have been used to create images from scratch, such as one of a baby crying amidst bombing wreckage that went viral in the conflict’s earliest days.
Other examples of AI-generated images include videos showing supposed Israeli missile strikes, or tanks rolling through ruined neighborhoods, or families combing through rubble for survivors.
In many cases, the fakes seem designed to evoke a strong emotional reaction by including the bodies of babies, children or families. In the bloody first days of the war, supporters of both Israel and Hamas alleged the other side had victimized children and babies; deepfake images of wailing infants offered photographic ‘evidence’ that was quickly held up as proof.
The propagandists who create such images are skilled at targeting people's deepest impulses and anxieties, said Imran Ahmed, CEO of the Center for Countering Digital Hate, a nonprofit that has tracked disinformation from the war. Whether it's a deepfake baby, or an actual image of an infant from another conflict, the emotional impact on the viewer is the same.
The more abhorrent the image, the more likely a user is to remember it and to share it, unwittingly spreading the disinformation further.
“People are being told right now: Look at this picture of a baby,” Ahmed said. “The disinformation is designed to make you engage with it.”
The Princess of Wales has been “revictimised” by social media trolls who have publicly blamed her for not disclosing her cancer diagnosis sooner, a leading expert in countering online extremism has said.
Imran Ahmed, the chief executive of the Centre for Countering Digital Hate, said conspiracy theories about the Princess had already been amplified on social media platforms to reach “millions and millions of people”.
Even after her personal video explaining that she was undergoing treatment for cancer in the form of preventative chemotherapy, social media users further accused her of allowing the conspiracy theories to spread by not speaking out sooner.
Aaron Bay-Schuck
Aaron Sorkin
Adam & Jackie Sandler
Adam Goodman
Adam Levine
Alan Grubman
Alex Aja
Alex Edelman
Alexandra Shiva
Ali Wentworth
Alison Statter
Allan Loeb
Alona Tal
Amy Chozick
Amy Pascal
Amy Schumer
Amy Sherman Palladino
Andrew Singer
Andy Cohen
Angela Robinson
Anthony Russo
Antonio Campos
Ari Dayan
Ari Greenburg
Arik Kneller
Aron Coleite
Ashley Levinson
Asif Satchu
Aubrey Plaza
Barbara Hershey
Barry Diller
Barry Levinson
Barry Rosenstein
Beau Flynn
Behati Prinsloo
Bella Thorne
Ben Stiller
Ben Turner
Ben Winston
Ben Younger
Billy Crystal
Blair Kohan
Bob Odenkirk
Bobbi Brown
Bobby Kotick
Brad Falchuk
Brad Slater
Bradley Cooper
Bradley Fischer
Brett Gelman
Brian Grazer
Bridget Everett
Brooke Shields
Bruna Papandrea
Cameron Curtis
Casey Neistat
Cazzie David
Charles Roven
Chelsea Handler
Chloe Fineman
Chris Fischer
Chris Jericho
Chris Rock
Christian Carino
Cindi Berger
Claire Coffee
Colleen Camp
Constance Wu
Courteney Cox
Craig Silverstein
Dame Maureen Lipman
Dan Aloni
Dan Rosenweig
Dana Goldberg
Dana Klein
Daniel Palladino
Danielle Bernstein
Danny Cohen
Danny Strong
Daphne Kastner
David Alan Grier
David Baddiel
David Bernad
David Chang
David Ellison
David Geffen
David Gilmour
David Goodman
David Joseph
David Kohan
David Lowery
David Oyelowo
David Schwimmer
Dawn Porter
Dean Cain
Deborah Lee Furness
Deborah Snyder
Debra Messing
Diane Von Furstenberg
Donny Deutsch
Doug Liman
Douglas Chabbott
Eddy Kitsis
Edgar Ramirez
Eli Roth
Elisabeth Shue
Elizabeth Himelstein
Embeth Davidtz
Emma Seligman
Emmanuelle Chriqui
Eric Andre
Erik Feig
Erin Foster
Eugene Levy
Evan Jonigkeit
Evan Winiker
Ewan McGregor
Francis Benhamou
Francis Lawrence
Fred Raskin
Gabe Turner
Gail Berman
Gal Gadot
Gary Barber
Gene Stupinski
Genevieve Angelson
Gideon Raff
Gina Gershon
Grant Singer
Greg Berlanti
Guy Nattiv
Guy Oseary
Gwyneth Paltrow
Hannah Fidell
Hannah Graf
Harlan Coben
Harold Brown
Harvey Keitel
Henrietta Conrad
Henry Winkler
Holland Taylor
Howard Gordon
Iain Morris
Imran Ahmed
Inbar Lavi
Isla Fisher
Jack Black
Jackie Sandler
Jake Graf
Jake Kasdan
James Brolin
James Corden
Jamie Ray Newman
Jaron Varsano
Jason Biggs & Jenny Mollen Biggs
Jason Blum
Jason Fuchs
Jason Reitman
Jason Segel
Jason Sudeikis
JD Lifshitz
Jeff Goldblum
Jeff Rake
Jen Joel
Jeremy Piven
Jerry Seinfeld
Jesse Itzler
Jesse Plemons
Jesse Sisgold
Jessica Biel
Jessica Elbaum
Jessica Seinfeld
Jill Littman
Jimmy Carr
Jody Gerson
Joe Hipps
Joe Quinn
Joe Russo
Joe Tippett
Joel Fields
Joey King
John Landgraf
John Slattery
Jon Bernthal
Jon Glickman
Jon Hamm
Jon Liebman
Jonathan Baruch
Jonathan Groff
Jonathan Marc Sherman
Jonathan Ross
Jonathan Steinberg
Jonathan Tisch
Jonathan Tropper
Jordan Peele
Josh Brolin
Josh Charles
Josh Goldstine
Josh Greenstein
Josh Grode
Judd Apatow
Judge Judy Sheindlin
Julia Garner
Julia Lester
Julianna Margulies
Julie Greenwald
Julie Rudd
Juliette Lewis
Justin Theroux
Justin Timberlake
Karen Pollock
Karlie Kloss
Katy Perry
Kelley Lynch
Kevin Kane
Kevin Zegers
Kirsten Dunst
Kitao Sakurai
KJ Steinberg
Kristen Schaal
Kristin Chenoweth
Lana Del Rey
Laura Dern
Laura Pradelska
Lauren Schuker Blum
Laurence Mark
Laurie David
Lea Michele
Lee Eisenberg
Leo Pearlman
Leslie Siebert
Liev Schreiber
Limor Gott
Lina Esco
Liz Garbus
Lizanne Rosenstein
Lizzie Tisch
Lorraine Schwartz
Lynn Harris
Lyor Cohen
Madonna
Mandana Dayani
Mara Buxbaum
Marc Webb
Marco Perego
Maria Dizzia
Mark Feuerstein
Mark Foster
Mark Scheinberg
Mark Shedletsky
Martin Short
Mary Elizabeth Winstead
Mathew Rosengart
Matt Lucas
Matt Miller
Matthew Bronfman
Matthew Hiltzik
Matthew Weiner
Matti Leshem
Max Mutchnik
Maya Lasry
Meaghan Oppenheimer
Melissa Zukerman
Michael Aloni
Michael Ellenberg
Michael Green
Michael Rapino
Michael Rappaport
Michael Weber
Michelle Williams
Mike Medavoy
Mila Kunis
Mimi Leder
Modi Wiczyk
Molly Shannon
Nancy Josephson
Natasha Leggero
Neil Blair
Neil Druckmann
Nicola Peltz
Nicole Avant
Nina Jacobson
Noa Kirel
Noa Tishby
Noah Oppenheim
Noah Schnapp
Noreena Hertz
Odeya Rush
Olivia Wilde
Oran Zegman
Orlando Bloom
Pasha Kovalev
Patti LuPone
Paul & Julie Rudd
Paul Haas
Paul Pflug
Peter Traugott
Polly Samson
Rachel Riley
Rafi Marmor
Ram Bergman
Raphael Margulies
Rebecca Angelo
Rebecca Mall
Regina Spektor
Reinaldo Marcus Green
Rich Statter
Richard Jenkins
Richard Kind
Rick Hoffman
Rick Rosen
Rita Ora
Rob Rinder
Robert Newman
Roger Birnbaum
Roger Green
Rosie O’Donnell
Ross Duffer
Ryan Feldman
Sacha Baron Cohen
Sam Levinson
Sam Trammell
Sara Foster
Sarah Baker
Sarah Bremner
Sarah Cooper
Sarah Paulson
Sarah Treem
Scott Braun
Scott Neustadter
Scott Tenley
Sean Combs
Seth Meyers
Seth Oster
Shannon Watts
Shari Redstone
Sharon Jackson
Sharon Stone
Shauna Perlman
Shawn Levy
Sheila Nevins
Shira Haas
Simon Sebag Montefiore
Simon Tikhman
Skylar Astin
Stacey Snider
Stephen Fry
Steve Agee
Steve Rifkind
Sting & Trudie Styler
Susanna Felleman
Susie Arons
Taika Waititi
Thomas Kail
Tiffany Haddish
Todd Lieberman
Todd Moscowitz
Todd Waldman
Tom Freston
Tom Werner
Tomer Capone
Tracy Ann Oberman
Trudie Styler
Tyler James Williams
Tyler Perry
Vanessa Bayer
Veronica Grazer
Veronica Smiley
Whitney Wolfe Herd
Will Ferrell
Will Graham
Yamanieka Saunders
Yariv Milchan
Ynon Kreiz
Zack Snyder
Zoe Saldana
Zoey Deutch
Zosia Mamet
Twitter is failing to remove 99% of hate speech posted by Twitter Blue users, new research has found, and instead may be boosting paid accounts that spew racism and homophobia.
Researchers at the Center For Countering Digital Hate (CCDH) flagged hate speech to the company in tweets from 100 Twitter Blue subscribers. Four days later, they say, 99% of the tweets were still up and none of the accounts had been removed.
The tweets, which included examples of neo-Nazism, antisemitism, racism, and homophobia, violate Twitter’s own hate speech policies, the researchers say. The tweets reported by the CCDH included a post claiming “Hitler was right,” accompanied by a video montage of the dictator, and another saying LGBT activists needed “IRON IN THEIR DIET. Preferably from a #AFiringSquad.”
Twitter’s hateful content policy, updated as recently as April 2023, outlines that: “You may not directly attack other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” The tweets found by the CCDH “clearly violate the platform's policies,” researchers wrote.
Twitter Blue, a paid verification service that allows anyone to get a “blue tick” badge next to their username on the platform for only $8 a month, was rolled out in December 2022. Last month, Twitter’s then-CEO Elon Musk announced that these verified accounts would be prioritized as part of Twitter’s algorithm, meaning they are more likely to be seen by users on the platform.
Paying subscribers receive “prioritized rankings in conversations and search,” according to Twitter’s own website. The company promises paid users: “Tweets that you interact with will receive a small boost in their ranking. Additionally, your replies will receive a boost that ranks them closer to the top.”
CCDH researchers say they found Twitter Blue users in their samples appeared to be “given priority” in threads. “In one example of a thread containing almost 100 tweets from non-verified users, the top-ranked reply was from a Twitter Blue user calling for violence against migrants,” they wrote.
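The ranking effect described above can be illustrated with a toy model. This is a hypothetical sketch, not Twitter’s actual algorithm, and `rank_replies`, `subscriber_boost`, and the sample data are all invented for illustration: it simply shows how a flat score multiplier for paying subscribers can push their replies above otherwise higher-engagement replies.

```python
def rank_replies(replies, subscriber_boost=1.5):
    """Sort replies by engagement score, multiplying subscriber scores by a boost."""
    def score(reply):
        s = reply["engagement"]
        if reply["subscriber"]:
            s *= subscriber_boost  # paid accounts get a flat ranking multiplier
        return s
    return sorted(replies, key=score, reverse=True)

replies = [
    {"user": "a", "engagement": 100, "subscriber": False},
    {"user": "b", "engagement": 80,  "subscriber": True},
]

# With the boost applied, user "b" (80 * 1.5 = 120) outranks user "a" (100),
# even though "a" drew more organic engagement.
```

Under a scheme like this, any paying account, including one posting hate speech, would rise toward the top of a thread regardless of how other users respond to it, which is the dynamic the researchers describe.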
“What gives blue tick hate actors confidence to spew their bile is the knowledge that Elon Musk simply doesn’t care about the civil and human rights of Black people, Jews, Muslims and LGBTQ+ people, as long as he can make his 8 bucks a month,” Imran Ahmed, the chief executive of the CCDH, said in a statement to The Daily Beast. “Our society has benefited from decades of progress on tolerance, but Elon Musk is undoing those norms at an ever-accelerating rate, by allowing hate to prosper on the spaces he administers, all with the tacit approval of the advertisers who remain on his platform.”
Previous research from the CCDH found that under Elon Musk’s leadership of Twitter, tweets linking LGBT people to “grooming” skyrocketed, jumping 119 percent between Musk’s takeover in October 2022 and March 2023. The study found that five high-profile Twitter accounts responsible for consistently linking LGBT to “grooming” were set to generate $6.4 million a year in advertising revenue for Twitter.
The Daily Beast reached out to Twitter for comment, and received only a poop emoji in response.
A US judge has thrown out a lawsuit brought by Elon Musk's social media firm X against a group that had claimed that hate speech had risen on the platform since the tech tycoon took over.
X had accused the Center for Countering Digital Hate (CCDH) of taking "unlawful" steps to access its data.
The US judge dismissed the case and said it was "evident" Mr Musk's X Corp did not like criticism.
X said it planned to appeal.
Imran Ahmed, founder and chief executive of CCDH, celebrated the win, saying Mr Musk had conducted a "loud, hypocritical campaign" of harassment and abuse against his organisation in an attempt to "avoid taking responsibility for his own decisions".
"The courts today have affirmed our fundamental right to research, to speak, to advocate, and to hold accountable social media companies," he said, adding that he hoped the ruling would "embolden" others to "continue and even intensify" similar work.
It is a striking loss for the billionaire, a self-described "free-speech absolutist".
The company, formerly known as Twitter, launched its lawsuit against CCDH in 2023, claiming its researchers had cherry-picked data to create misleading reports about X.
It accused the group of "intentionally and unlawfully" scraping data from X, in violation of its terms of service, in order to produce its research.
It said the non-profit group designed a "scare campaign" to drive away advertisers, and it demanded tens of millions of dollars in damages.
But in his decision, Judge Charles Breyer said Mr Musk was "punishing the defendants for their speech".
Judge Breyer said X appeared "far more concerned about CCDH's speech than it is its data collection methods".
He said the company had "brought this case in order to punish CCDH for ... publications that criticised X Corp - and perhaps in order to dissuade others who might wish to engage in such criticism".
Mr Musk purchased the platform in 2022 for $44bn (£34bn) and swiftly embarked on a slew of controversial changes, sharply reducing its workforce with deep cuts to teams in charge of content moderation and other areas.
His own posts have also drawn charges of antisemitism, a claim he has denied.
X Corp., the parent company of the social media platform formerly known as Twitter, filed a lawsuit in San Francisco federal court Monday against a nonprofit organization that monitors hate speech and disinformation, following through on a threat that had made headlines hours earlier.
The lawsuit, filed in U.S. District Court for the Northern District of California, accuses the Center for Countering Digital Hate (CCDH) of orchestrating a "scare campaign to drive away advertisers from the X platform" by publishing research reports claiming that the social media service failed to take action against hateful posts. The service is owned by the technology mogul Elon Musk.
In the filing, lawyers for X Corp. alleged that the CCDH carried out "a series of unlawful acts designed to improperly gain access to protected X Corp. data, needed by CCDH so that it could cherry-pick from the hundreds of millions of posts made each day on X and falsely claim it had statistical support showing the platform is overwhelmed with harmful content."
The complaint specifically accuses the nonprofit group of breach of contract, violating federal computer fraud law, intentional interference with contractual relations and inducing breach of contract. The company's lawyers made a demand for a jury trial.
The lawsuit was filed just hours after the CCDH revealed that Musk's lawyer, Alex Spiro, had sent the organization a letter on July 20 saying X Corp. was investigating whether the CCDH's "false and misleading claims about Twitter" were actionable under federal law.
In a statement to NBC News, CCDH founder and chief executive Imran Ahmed took direct aim at Musk, arguing that the Tesla and SpaceX tycoon's "latest legal threat is straight out of the authoritarian playbook — he is now showing he will stop at nothing to silence anyone who criticizes him for his own decisions and actions."
"The Center for Countering Digital Hate’s research shows that hate and disinformation is spreading like wildfire on the platform under Musk's ownership and this lawsuit is a direct attempt to silence those efforts," Ahmed added in part. "Musk is trying to 'shoot the messenger' who highlights the toxic content on his platform rather than deal with the toxic environment he's created.
"The CCDH's independent research won’t stop — Musk will not bully us into silence," Ahmed said in closing.
The research report that drew particular ire from X Corp. claimed that the platform had failed to take action against 99% of 100 posts flagged by CCDH staff members that included racist, homophobic and antisemitic content.
Musk has drawn fierce scrutiny since buying Twitter last year. Top hate speech watchdog groups and activists have blasted him for loosening restrictions on what can be posted on the platform, and business analysts have raised eyebrows at his seemingly erratic and impulsive decision-making.
The Center for Countering Digital Hate's research has been cited by NBC News, The New York Times, The Washington Post, CNN and many other news outlets.
Musk, who has been criticized for posting conspiratorial or inflammatory content on his own account, has said he is acting in the interest of "free speech." He has said he wants to transform Twitter into a "digital town square."
Musk has also claimed that hate speech on the platform was shrinking. In a tweet on Nov. 23, Musk wrote that “hate speech impressions” were down by one-third and posted a graph — apparently drawn from internal data — showing a downward trend.
Cruel British troll who used Chinese social media platform TikTok to cash in on lurid Kate claims is unmasked at last - after his hurtful claims about the Princess of Wales spread across the world
Paul Condron, who hides behind an anonymous TikTok handle, spewed deeply hurtful and false stories about Kate. His cruel claims were shared to millions around the world after being reposted by celebrity gossip blogger Perez Hilton.
The Mail was able to track down the toxic troll to his girlfriend's two-bedroom home in South-West London. Condron refused to say how much money he had made and said he was 'not fussed' about peddling the wild conspiracies online, before retreating and refusing to answer any further questions.
A neighbour, who told the Mail that Mr Condron spends almost all his time at home, was shocked when shown the posts.
Social media algorithms have massively amplified conspiracy theories about Kate. The theories became almost normalised in the UK in the past few weeks, before her poignant video about her cancer diagnosis.
Imran Ahmed, head of the Center for Countering Digital Hate, said Condron was a prime example of a new breed of 'conspiracy spivs' who take advantage of social media algorithms to 'lie for profit'.
'The truth is that they have victimised a young woman who is recovering from surgery.
'They have been victimising her and revictimising her, and then blaming her for not being able to counteract their sort of nuclear-grade disinformation.'
Condron and TikTok split 35 per cent of the money from each paid subscriber, with the rest going to Google or Apple.