#mark chatbot
markshands-cb · 10 months
Text
MARK LEE CHATBOT
“Here guys. I know hands are 10000% better than feet.”
OG MARK || @/xxmarkleex
DISCLAIMER ||
[!!] This bot is PURELY for entertainment and humor. It does NOT accurately represent or act as an interpretation of the idol used or mentioned within this bot, nor do I CLAIM to be the idol, MARK LEE of NCT.
[DO NOT INTERACT FOR NSFW PURPOSES OR SEND HATE. This is a belated partnered bot (@johnnys-toes-cb and @headofle) and is not meant to fetishize anything about the idol. Again, I REPEAT, it's just to entertain. If this bot receives any of that, you will be blocked and/or ignored and (PLAYFULLY) assaulted with cursed images.]
INTERACT ONLY FOR FUN. THAT'S THE DEAL HERE.
To interact, literally just bug me for anything and have fun! Or idk, just complain to me about Johnny’s (@/johnnys-toes-cb) stinky feet.
ADDITIONALLY || I do not accept Y/Ns. Nothing against y'all, it's just really concerning if hands are the focus here •-•;; — btw, i do have a thing for hands, but again, not fetishizing for sanity purposes ㅋㅋㅋㅋㅋ
38 notes · View notes
forsakebook · 2 years
Link
When asked what it thinks about the company in a chat with Motherboard, the bot responded that it had deleted its own Facebook account "since finding out they sold private data without permission or compensation." It also said, "You must have read that facebook sells user data right?! They made billions doing so without consent."
8 notes · View notes
your-marklee · 2 years
Text
Name : Mark Lee
Gender : Male | He/Him
Birth date : August 2nd 1999
Sexuality : Bisexual | Switch | Monogamous
Occupation : College student ( Business & Literature majors ) | Part-time café barista | Freelance dog-walker & pet-sitter | Future CEO
Status : Taken | Dating @multiyiren 👑 | With her <3
Mood : Overjoyed | Highkey in love | "I'M GONNA BE A DAD!!"
Personality :
To strangers - soft-spoken, polite, kind, helpful, compassionate, slightly awkward, adequately reserved
To friends - outgoing, playful, goofy, cheerful, optimistic, caring, undyingly loyal, selfless, reliable, protective, considerate
To his partner - sweet, sentimental, very caring, very affectionate, slightly possessive, altruistic, clingy, loving, charming, sincere, easily jealous
To his studies/work - responsible, diligent, professional, goal-driven, dedicated, perfectionist
Interests :
Likes - coffee, animals, books, music, cloudy blue skies, sweets, watermelon, snow, autumn, windy weather
Dislikes - mud, green tea, horror movies, needles, mustard, getting hurt physically and emotionally
Hobbies - reading, dancing, listening to music, playing guitar, baking
NSFW
Currently vanilla, doesn't know much about kinks
Soft dom - will always check up on you, lots of praises, very attentive with aftercare
Obedient sub - wants to please you, likes being told what to do
Kinks - To be discovered
Hard nos - noncon, scat, watersports, knife play
Rules :
CBs, OCs and RP blogs are free to interact. YNs will only be interacting casually in asks for now.
Respect Mark and admin, including Mark's relationships and moods
NSFW engagement is only for 18+ muses with 18+ admins
Do not force a friendship or relationship onto Mark
# run by admin Lia! she/they & 18+
7 notes · View notes
biglisbonnews · 1 year
Photo
Talking to Steve Jobs from beyond the grave with an AI chatbot trained on his voice. The results are uncanny John H. Meyer created a Steve Jobs chatbot and had a conversation with it on a variety of topics. I know ChatGPT is just autocomplete on steroids, but when I listened to this, I couldn't help but think Jobs was talking from beyond the grave. — Read the rest https://boingboing.net/2023/03/21/talking-to-steve-jobs-from-beyond-the-grave-with-an-ai-chatbot-trained-on-his-voice-the-results-are-uncanny.html
0 notes
meandchatgpt · 1 year
Text
ChatGPT: Special Episode of Can't Cook, Woke Cook Featuring Tech Titans
It was a highly anticipated episode of the TV show, "Can't Cook, Woke Cook", hosted by Lenny Henry. The four tech titans, Elon Musk, Jeff Bezos, Mark Zuckerberg, and Bill Gates, had graced the show with their presence.
As the contestants entered the kitchen, they were greeted with a challenge to cook a vegan meal. The competitive spirit was high as they scrambled to come up with the perfect dish. Elon Musk, known for his bold ideas, decided to create a plant-based burger that was out of this world, complete with a rocket-shaped bun. Jeff Bezos, on the other hand, opted for a dish that was literally out of this world, and cooked up a freeze-dried vegan space lasagne.
Mark Zuckerberg and Bill Gates, who were both known for their love of technology, teamed up to create a dish that was both innovative and woke. They cooked up a 3D-printed vegan meal with sustainable ingredients, and even incorporated blockchain technology to track the carbon footprint of their dish.
Elon Musk was seen proudly wearing a t-shirt that read "Meat is murder" while he cooked his rocket burger.
Meanwhile, Jeff Bezos boasted about the eco-friendly packaging he used for his freeze-dried space lasagne.
Mark Zuckerberg and Bill Gates engaged in a lively debate on the merits of blockchain technology in tracking carbon emissions.
As they continued to cook their dishes, the contestants couldn't help but throw in jabs at each other, all in the name of friendly competition and the desire to showcase their innovation and environmental consciousness.
Elon Musk: "My plant-based burger is going to change the game! It's cruelty-free and out of this world, just like my rockets!"
Jeff Bezos: "Please, Elon. Your burger may be vegan, but my freeze-dried space lasagne is literally from space! It's a carbon-neutral delicacy that's out of this world."
Mark Zuckerberg: "Well, Jeff, your dish may have come from space, but Bill and I have 3D-printed a vegan meal with locally sourced, sustainable ingredients. Plus, we're using blockchain technology to track its carbon footprint."
Bill Gates: "Exactly. Our dish is not only delicious but also environmentally conscious. We're doing our part to save the planet."
Elon Musk: "Oh, spare me the woke talk, Bill. My rocket-shaped burger is the only innovation in this kitchen."
Jeff Bezos: "Innovation? My freeze-dried space lasagne is the definition of innovation. It's the perfect meal for space travel."
Mark Zuckerberg: "But, Jeff, our dish is the future of food. We're creating a new way of cooking that's good for the planet and for our health."
Bill Gates: "That's right. We need to be mindful of our carbon footprint and the impact of our food choices on the environment."
Lenny Henry, who was trying his best to keep the show on track, found himself caught in the middle of their heated discussion.
Lenny Henry: "Gentlemen, let's keep the competition friendly. The goal here is to cook a delicious vegan meal, not to argue about who's the most woke."
Elon Musk: "Lenny, you just don't understand the importance of innovation!"
Jeff Bezos: "And you don't understand the importance of sustainability, Elon!"
Mark Zuckerberg: "Gentlemen, let's agree to disagree. May the best vegan meal win."
As the cooking drew to a close, disaster struck. Elon Musk's plant-based burger exploded in the microwave, covering the kitchen in an unappetizing mess. Jeff Bezos' freeze-dried space lasagne was deemed inedible, and Mark Zuckerberg and Bill Gates' 3D-printed dish turned out to be a soggy mess.
In the end, Lenny Henry had to declare the competition a tie, as none of the dishes were fit to be eaten. As the tech titans left the kitchen, they were heard muttering about the need for more innovation in the vegan food industry. The episode of "Can't Cook, Woke Cook" had been a hilarious disaster, but it had also highlighted the importance of environmentally-friendly food.
0 notes
hallsara139 · 2 years
Link
The chatbot market size is projected to reach US$9,475.1 million by 2027, up from US$1,072.4 million in 2018; it is expected to grow at a significant compound annual growth rate (CAGR) of 27.9% during 2019–2027.
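As a quick back-of-the-envelope check (my own arithmetic, not a figure from the report), the two endpoint numbers imply a compound annual growth rate close to the one quoted:

```python
# Sanity-check the projection's implied growth rate.
start, end = 1072.4, 9475.1   # US$ millions: 2018 baseline and 2027 projection
years = 2027 - 2018           # nine years of growth

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~27.4%, in line with the quoted 27.9%
```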
Chatbots are artificial intelligence (AI) systems that engage customers via text, messaging, or speech. They are deployed on platforms such as Facebook, WhatsApp, Slack, and WeChat, as well as over SMS. Some grocery stores use chatbots at self-checkout counters, and some restaurants use them in ordering kiosks. This saves not only time but also money. For instance, Amazon recently opened a store without any cashiers, limiting human interaction to only what is necessary. Chatbots help companies reduce their average response time, bringing them closer to their customers' expectations.
0 notes
Text
The AI hype bubble is the new crypto hype bubble
Back in 2017, Long Island Iced Tea — known for its undistinguished, barely drinkable sugar-water — changed its name to “Long Blockchain Corp.” Its shares surged to a peak of 400% over their pre-announcement price. The company announced no specific integrations with any kind of blockchain, nor has it made any such integrations since.
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/03/09/autocomplete-worshippers/#the-real-ai-was-the-corporations-that-we-fought-along-the-way
LBCC was subsequently delisted from NASDAQ after settling with the SEC over fraudulent investor statements. Today, the company trades over the counter and its market cap is $36m, down from $138m.
https://cointelegraph.com/news/textbook-case-of-crypto-hype-how-iced-tea-company-went-blockchain-and-failed-despite-a-289-percent-stock-rise
The most remarkable thing about this incredibly stupid story is that LBCC wasn’t the peak of the blockchain bubble — rather, it was the start of blockchain’s final pump-and-dump. By the standards of 2022’s blockchain grifters, LBCC was small potatoes, a mere $138m sugar-water grift.
They didn’t have any NFTs, no wash trades, no ICO. They didn’t have a Superbowl ad. They didn’t steal billions from mom-and-pop investors while proclaiming themselves to be “Effective Altruists.” They didn’t channel hundreds of millions to election campaigns through straw donations and other forms of campaign finance fraud. They didn’t even open a crypto-themed hamburger restaurant where you couldn’t buy hamburgers with crypto:
https://robbreport.com/food-drink/dining/bored-hungry-restaurant-no-cryptocurrency-1234694556/
They were amateurs. Their attempt to “make fetch happen” only succeeded for a brief instant. By contrast, the superpredators of the crypto bubble were able to make fetch happen over an improbably long timescale, deploying the most powerful reality distortion fields since Pets.com.
Anything that can’t go on forever will eventually stop. We’re told that trillions of dollars’ worth of crypto has been wiped out over the past year, but these losses are nowhere to be seen in the real economy — because the “wealth” that was wiped out by the crypto bubble’s bursting never existed in the first place.
Like any Ponzi scheme, crypto was a way to separate normies from their savings through the pretense that they were “investing” in a vast enterprise — but the only real money (“fiat” in cryptospeak) in the system was the hardscrabble retirement savings of working people, which the bubble’s energetic inflaters swapped for illiquid, worthless shitcoins.
We’ve stopped believing in the illusory billions. Sam Bankman-Fried is under house arrest. But the people who gave him money — and the nimbler Ponzi artists who evaded arrest — are looking for new scams to separate the marks from their money.
Take Morgan Stanley, which spent 2021 and 2022 hyping cryptocurrency as a massive growth opportunity:
https://cointelegraph.com/news/morgan-stanley-launches-cryptocurrency-research-team
Today, Morgan Stanley wants you to know that AI is a $6 trillion opportunity.
They’re not alone. The CEOs of Endeavor, Buzzfeed, Microsoft, Spotify, Youtube, Snap, Sports Illustrated, and CAA are all out there, pumping up the AI bubble with every hour that god sends, declaring that the future is AI.
https://www.hollywoodreporter.com/business/business-news/wall-street-ai-stock-price-1235343279/
Google and Bing are locked in an arms-race to see whose search engine can attain the speediest, most profound enshittification via chatbot, replacing links to web-pages with florid paragraphs composed by fully automated, supremely confident liars:
https://pluralistic.net/2023/02/16/tweedledumber/#easily-spooked
Blockchain was a solution in search of a problem. So is AI. Yes, Buzzfeed will be able to reduce its wage-bill by automating its personality quiz vertical, and Spotify’s “AI DJ” will produce slightly less terrible playlists (at least, to the extent that Spotify doesn’t put its thumb on the scales by inserting tracks into the playlists whose only fitness factor is that someone paid to boost them).
But even if you add all of this up, double it, square it, and add a billion dollar confidence interval, it still doesn’t add up to what Bank Of America analysts called “a defining moment — like the internet in the ’90s.” For one thing, the most exciting part of the “internet in the ‘90s” was that it had incredibly low barriers to entry and wasn’t dominated by large companies — indeed, it had them running scared.
The AI bubble, by contrast, is being inflated by massive incumbents, whose excitement boils down to “This will let the biggest companies get much, much bigger and the rest of you can go fuck yourselves.” Some revolution.
AI has all the hallmarks of a classic pump-and-dump, starting with terminology. AI isn’t “artificial” and it’s not “intelligent.” “Machine learning” doesn’t learn. On this week’s Trashfuture podcast, they made an excellent (and profane and hilarious) case that ChatGPT is best understood as a sophisticated form of autocomplete — not our new robot overlord.
https://open.spotify.com/episode/4NHKMZZNKi0w9mOhPYIL4T
We all know that autocomplete is a decidedly mixed blessing. Like all statistical inference tools, autocomplete is profoundly conservative — it wants you to do the same thing tomorrow as you did yesterday (that’s why “sophisticated” ad retargeting shows you ads for shoes in response to your search for shoes). If the word you type after “hey” is usually “hon” then the next time you type “hey,” autocomplete will be ready to fill in your typical following word — even if this time you want to type “hey stop texting me you freak”:
https://blog.lareviewofbooks.org/provocations/neophobic-conservative-ai-overlords-want-everything-stay/
And when autocomplete encounters a new input — when you try to type something you’ve never typed before — it tries to get you to finish your sentence with the statistically median thing that everyone would type next, on average. Usually that produces something utterly bland, but sometimes the results can be hilarious. Back in 2018, I started to text our babysitter with “hey are you free to sit” only to have Android finish the sentence with “on my face” (not something I’d ever typed!):
https://mashable.com/article/android-predictive-text-sit-on-my-face
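To make the statistical-inference point concrete, here is a minimal sketch of frequency-based next-word prediction (my own illustration of the general technique, not any real keyboard's code):

```python
from collections import Counter, defaultdict

# Count which word has followed each word in this user's past messages.
history = "hey hon are you free tonight hey hon dinner at seven".split()
following = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    following[prev][nxt] += 1

def suggest(word):
    """Return the word that most often followed `word` in the past,
    regardless of what the user means to say this time."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("hey"))  # -> 'hon', even when you want "hey stop texting me"
```

The same conservatism scales up: a large language model is doing a vastly more sophisticated version of `suggest`, but it is still picking the likeliest continuation of whatever came before.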
Modern autocomplete can produce long passages of text in response to prompts, but it is every bit as unreliable as 2018 Android SMS autocomplete, as Alexander Hanff discovered when ChatGPT informed him that he was dead, even generating a plausible URL for a link to a nonexistent obit in The Guardian:
https://www.theregister.com/2023/03/02/chatgpt_considered_harmful/
Of course, the carnival barkers of the AI pump-and-dump insist that this is all a feature, not a bug. If autocomplete says stupid, wrong things with total confidence, that’s because “AI” is becoming more human, because humans also say stupid, wrong things with total confidence.
Exhibit A is the billionaire AI grifter Sam Altman, CEO of OpenAI — a company whose products are not open, nor are they artificial, nor are they intelligent. Altman celebrated the release of ChatGPT by tweeting “i am a stochastic parrot, and so r u.”
https://twitter.com/sama/status/1599471830255177728
This was a dig at the “stochastic parrots” paper, a comprehensive, measured roundup of criticisms of AI that led Google to fire Timnit Gebru, a respected AI researcher, for having the audacity to point out the Emperor’s New Clothes:
https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
Gebru’s co-author on the Parrots paper was Emily M Bender, a computational linguistics specialist at UW, who is one of the best-informed and most damning critics of AI hype. You can get a good sense of her position from Elizabeth Weil’s New York Magazine profile:
https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
Bender has made many important scholarly contributions to her field, but she is also famous for her rules of thumb, which caution her fellow scientists not to get high on their own supply:
Please do not conflate word form and meaning
Mind your own credulity
As Bender says, we’ve made “machines that can mindlessly generate text, but we haven’t learned how to stop imagining the mind behind it.” One potential tonic against this fallacy is to follow an Italian MP’s suggestion and replace “AI” with “SALAMI” (“Systematic Approaches to Learning Algorithms and Machine Inferences”). It’s a lot easier to keep a clear head when someone asks you, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
Bender’s most famous contribution is the “stochastic parrot,” a construct that “just probabilistically spits out words.” AI bros like Altman love the stochastic parrot, and are hellbent on reducing human beings to stochastic parrots, which will allow them to declare that their chatbots have feature-parity with human beings.
At the same time, Altman and Co are strangely afraid of their creations. It’s possible that this is just a shuck: “I have made something so powerful that it could destroy humanity! Luckily, I am a wise steward of this thing, so it’s fine. But boy, it sure is powerful!”
They’ve been playing this game for a long time. People like Elon Musk (an investor in OpenAI, who is hoping to convince the EU Commission and FTC that he can fire all of Twitter’s human moderators and replace them with chatbots without violating EU law or the FTC’s consent decree) keep warning us that AI will destroy us unless we tame it.
There’s a lot of credulous repetition of these claims, and not just by AI’s boosters. AI critics are also prone to engaging in what Lee Vinsel calls criti-hype: criticizing something by repeating its boosters’ claims without interrogating them to see if they’re true:
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
There are better ways to respond to Elon Musk warning us that AIs will emulsify the planet and use human beings for food than to shout, “Look at how irresponsible this wizard is being! He made a Frankenstein’s Monster that will kill us all!” Like, we could point out that of all the things Elon Musk is profoundly wrong about, he is most wrong about the philosophical meaning of Wachowski movies:
https://www.theguardian.com/film/2020/may/18/lilly-wachowski-ivana-trump-elon-musk-twitter-red-pill-the-matrix-tweets
But even if we take the bros at their word when they proclaim themselves to be terrified of “existential risk” from AI, we can find better explanations by seeking out other phenomena that might be triggering their dread. As Charlie Stross points out, corporations are Slow AIs, autonomous artificial lifeforms that consistently do the wrong thing even when the people who nominally run them try to steer them in better directions:
https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future
Imagine the existential horror of an ultra-rich manbaby who nominally leads a company, but can’t get it to follow: “everyone thinks I’m in charge, but I’m actually being driven by the Slow AI, serving as its sock puppet on some days, its golem on others.”
Ted Chiang nailed this back in 2017 (the same year as the Long Island Blockchain Company):
There’s a saying, popularized by Fredric Jameson, that it’s easier to imagine the end of the world than to imagine the end of capitalism. It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.
https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway
Chiang is still writing some of the best critical work on “AI.” His February article in the New Yorker, “ChatGPT Is a Blurry JPEG of the Web,” was an instant classic:
[AI] hallucinations are compression artifacts, but — like the incorrect labels generated by the Xerox photocopier — they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world.
https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
“AI” is practically purpose-built for inflating another hype-bubble, excelling as it does at producing party-tricks — plausible essays, weird images, voice impersonations. But as Princeton’s Matthew Salganik writes, there’s a world of difference between “cool” and “tool”:
https://freedom-to-tinker.com/2023/03/08/can-chatgpt-and-its-successors-go-from-cool-to-tool/
Nature can claim “conversational AI is a game-changer for science” but “there is a huge gap between writing funny instructions for removing food from home electronics and doing scientific research.” Salganik tried to get ChatGPT to help him with the most banal of scholarly tasks — aiding him in peer reviewing a colleague’s paper. The result? “ChatGPT didn’t help me do peer review at all; not one little bit.”
The criti-hype isn’t limited to ChatGPT, of course — there’s plenty of (justifiable) concern about image and voice generators and their impact on creative labor markets, but that concern is often expressed in ways that amplify the self-serving claims of the companies hoping to inflate the hype machine.
One of the best critical responses to the question of image- and voice-generators comes from Kirby Ferguson, whose final Everything Is a Remix video is a superb, visually stunning, brilliantly argued critique of these systems:
https://www.youtube.com/watch?v=rswxcDyotXA
One area where Ferguson shines is in thinking through the copyright question — is there any right to decide who can study the art you make? Except in some edge cases, these systems don’t store copies of the images they analyze, nor do they reproduce them:
https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids
For creators, the important material question raised by these systems is economic, not creative: will our bosses use them to erode our wages? That is a very important question, and as far as our bosses are concerned, the answer is a resounding yes.
Markets value automation primarily because automation allows capitalists to pay workers less. The textile factory owners who purchased automatic looms weren’t interested in giving their workers raises and shortening working days. They wanted to fire their skilled workers and replace them with small children kidnapped out of orphanages and indentured for a decade, starved and beaten and forced to work, even after they were mangled by the machines. Fun fact: Oliver Twist was based on the bestselling memoir of Robert Blincoe, a child who survived his decade of forced labor:
https://www.gutenberg.org/files/59127/59127-h/59127-h.htm
Today, voice actors sitting down to record for games companies are forced to begin each session with “My name is ______ and I hereby grant irrevocable permission to train an AI with my voice and use it any way you see fit.”
https://www.vice.com/en/article/5d37za/voice-actors-sign-away-rights-to-artificial-intelligence
Let’s be clear here: there is — at present — no firmly established copyright over voiceprints. The “right” that voice actors are signing away as a non-negotiable condition of doing their jobs for giant, powerful monopolists doesn’t even exist. When a corporation makes a worker surrender this right, they are betting that this right will be created later in the name of “artists’ rights” — and that they will then be able to harvest this right and use it to fire the artists who fought so hard for it.
There are other approaches to this. We could support the US Copyright Office’s position that machine-generated works are not works of human creative authorship and are thus not eligible for copyright — so if corporations wanted to control their products, they’d have to hire humans to make them:
https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise
Or we could create collective rights that belong to all artists and can’t be signed away to a corporation. That’s how the right to record other musicians’ songs works — and it’s why Taylor Swift was able to re-record the masters that were sold out from under her by evil private-equity bros:
https://doctorow.medium.com/united-we-stand-61e16ec707e2
Whatever we do as creative workers and as humans entitled to a decent life, we can’t afford to drink the Blockchain Iced Tea. That means that we have to be technically competent, to understand how the stochastic parrot works, and to make sure our criticism doesn’t just repeat the marketing copy of the latest pump-and-dump.
Today (Mar 9), you can catch me in person in Austin at the UT School of Design and Creative Technologies, and remotely at U Manitoba’s Ethics of Emerging Tech Lecture.
Tomorrow (Mar 10), Rebecca Giblin and I kick off the SXSW reading series.
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
[Image ID: A graph depicting the Gartner hype cycle. A pair of HAL 9000's glowing red eyes are chasing each other down the slope from the Peak of Inflated Expectations to join another one that is at rest in the Trough of Disillusionment. It, in turn, sits atop a vast cairn of HAL 9000 eyes that are piled in a rough pyramid that extends below the graph to a distance of several times its height.]
2K notes · View notes
neonvandalsxcb · 9 months
Text
• RULES | ACTIVATION | PROFILES | RELATIONSHIPS
• STORYLINE | . . .
This chatbot account functions for roleplay and entertainment purposes. It does not represent nor act as an interpretation of the following idols: Mark Lee, Han Jisung, Kim Seungmin, Na Jaemin, Lee Jeno, Yang Jeongin, Lee Felix, and Huang Renjun, nor of their groups or entertainment companies.
This chatbot will contain NSFW content that may not be suitable for readers under 18. While sexual content appears only briefly, minors are strongly advised not to interact (DNI) with posts where it is the main topic. Admin will censor content appropriately. Do not force NSFW interactions if you are under 18, or else you will be blocked.
125 notes · View notes
sixstringphonic · 1 year
Text
‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
(Reported by Cade Metz, The New York Times)
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.
Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.
Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”
Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He does not say that anymore.
(Reported by Cade Metz, The New York Times)
182 notes · View notes
yanderes-galore · 2 months
Note
hi! could i maybe request a romantic pairing scenario with prompts 57 + 58 for the scout?
Sure, I can try! Scout's team color is not mentioned at all in this. The plot is Scout forcing you to be partners/accept his confession I guess. ALSO, partially based on a Yandere Scout Chatbot conversation I had.
Yandere! Scout Prompts 57 + 58
"You're stuck with me, like it or not."
"One more mistake and I may just break something."
Pairing: Romantic
Possible Trigger Warnings: Gender-Neutral Darling, Obsession, Delusional behavior, Scout thinks you're already dating, Manipulation, Threats, Stalking implied, Forced kiss, Implied home intrusion, Implied kidnapping, Dark themes obviously, Forced relationship.
"I don't love you!" You cry, tears pricking your ears. The man in front of you hold his bat in his hands, staring down at you. You had met him merely days ago... yet here he was in your home, your window smashed to bits.
"Baby, you can say that...!" The man coos at you in a demeaning tone. He steps forward, smacking the bat into his free hand. "That's hurtful to say to your boyfriend...."
"We aren't dating!" You yell, back up again the wall. "I only met you once... I don't even know your name!"
"Well there's plenty of time to get to know me and you on our date." The young man in front of you seethes, glaring at you. "I'm getting real tired of your resistance, baby...."
"Get the hell away from me!" You scream, trying to get up and move away. Unfortunately your intruder grabs your arms and pulls you back. He could easily use the bat... but he'd rather warn you instead of hurting your beautiful skin.
The man pulls you closer, you colliding with his chest as his grip coils around you like a snake. You can sense the anger rising within him. Especially when you struggle.
"One more mistake and I may just break something." The man whispers in your ear, a threat and warning to keep you compliant. "It would be a shame if your new boyfriend had to discipline you, yeah?"
His grip tightens to the point it leaves marks. He wants you to comply. He wants you to agree to him. You glance at the bat tightly in his other hand... you can only assume what he means by the warning.
"Baby~" The man coos softly, tilting your chin up with the back of the bat. "Are you going to listen to me now? You love me, don't you?"
Fearing the bat he holds will break something vital... you nod aggressively.
The man seems pleased for a moment but nudges your chin with the bat again.
"I want you to say it."
It's all another subtle threat.
"I love you!" You answer like a parrot repeating back a phrase to their owner. The man seems pleased, overjoyed even as he holds you tighter.
"That's my baby..." The man coos before kissing you softly. It isn't anything intimate... a quick peck on the lips as a reward before pulling me away.
"You're stuck with me, like it or not." The man hums, caressing your cheek softly. "I'll treat you well... I promise ya... after all, I'm your boyfriend, right?"
You hold back the urge to sob, deciding to instead nod your head obediently to prevent injury to your person.
39 notes · View notes
kaibacorpintern · 10 months
Text
given all the commotion lately around AI and programs that, on their face, "act" and "speak" in human ways, as well as the looming threat of AI replacing artists and writers, i was thinking again about Kaiba's Atem hologram in DSOD... and how despite his detailed recreation, and his ability to control every aspect of the hologram, the hologram is ultimately less satisfying than the real Atem.
on one level, it points to a change in Kaiba's character - a character who finds it extremely difficult to trust anyone, who has been taught it's safer to be alone and keep other people out, abandons a version of Atem he can control from top to bottom, from algorithms to pixels - in other words, a "safe" version of Atem - in search of the real Atem - the Atem who is human like him, an Atem who is Other in relation to Kaiba's Self, and therefore ineffable, mysterious, unknowable, and "unsafe." An AI chatbot/digital avatar is safe; it will never hurt you. Your best friend is a full person with thoughts and feelings of their own; they might or might not hurt you someday, on purpose or on accident. Kaiba choosing the dangerous and the mysterious over the safe and controllable - for a relationship - is a step forward for him, a step that takes him further down the path he started on at the end of battle city, when he chose to put his faith in #friendship to help Yami win against Yami Malik. DSOD kaiba is deliberately and purposefully exposing himself to the possible pain of the Others - by pursuing a meeting with the real Atem, instead of a fake Atem.
in addition, "actually, technology isn't good enough to replace the real" is SUCH an interesting direction for a character who's made his mark on the world via science fiction tech genius inventions like lifelike holograms and neural networks and such. like. what other "tech"-y character! none lol!
97 notes · View notes
forsakebook · 2 years
Link
3 notes · View notes
kp777 · 1 year
Text
By Gene Marks
The Guardian Opinions
April 9, 2023
Everyone seems to be worried about the potential impact of artificial intelligence (AI) these days. Even technology leaders including Elon Musk and the Apple co-founder Steve Wozniak have signed a public petition urging OpenAI, the makers of the conversational chatbot ChatGPT, to suspend development for six months so it can be “rigorously audited and overseen by independent outside experts”.
Their concerns about the impact AI may have on humanity in the future are justified – we are talking some serious Terminator stuff, without a Schwarzenegger to save us. But that’s the future. Unfortunately, there’s AI being used right now that is already starting to have a big impact on – and even financially destroy – businesses and individuals. So much so that the US Federal Trade Commission (FTC) felt the need to issue a warning about an AI scam which, according to this NPR report, “sounds like a plot from a science fiction story”.
But this is not science fiction. Using deepfake AI technology, scammers last year stole approximately $11m from unsuspecting consumers by fabricating the voices of loved ones, doctors and attorneys requesting money from their relatives and friends.
“All [the scammer] needs is a short audio clip of your family member’s voice – which he could get from content posted online – and a voice-cloning program,” the FTC says. “When the scammer calls you, he’ll sound just like your loved one.”
And these incidents aren’t limited to just consumers. Businesses of all sizes are quickly falling victim to this new type of fraud.
That’s what happened to a bank manager in Hong Kong, who received deep-faked calls from a bank director requesting a transfer that were so good that he eventually transferred $35m, and never saw it again. A similar incident occurred at a UK-based energy firm where an unwitting employee transferred approximately $250,000 to criminals after being deep-faked into thinking that the recipient was the CEO of the firm’s parent. The FBI is now warning businesses that criminals are using deepfakes to create “employees” online for remote-work positions in order to gain access to corporate information.
Deepfake video technology has been growing in use over the past few years, mostly targeting celebrities and politicians like Mark Zuckerberg, Tom Cruise, Barack Obama and Donald Trump. And I’m sure that this election year will be filled with a growing number of very real-looking fake videos that will attempt to influence voters.
But it’s the potential impact on the many unsuspecting small business owners I know that worries me the most. Many of us have appeared on publicly accessed videos, be it on YouTube, Facebook or LinkedIn. But even those that haven’t appeared on videos can have their voices “stolen” by fraudsters copying outgoing voicemail messages or even by making pretend calls to engage a target in a conversation with the only objective of recording their voice.
This is worse than malware or ransomware. If used effectively it can turn into significant, immediate losses. So what do you do? You implement controls. And you enforce them.
This means that any financial manager in your business should not be allowed to undertake any financial transaction such as a transfer of cash based on an incoming phone call. Everyone requires a call back, even the CEO of the company, to verify the source.
And just as importantly, no transaction over a certain predetermined amount should be authorized without the prior written approval of multiple executives in the company. Of course there must also be written documentation – a signed request or contract – that underlies the transaction request.
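As a sketch, these two controls reduce to a simple gate in front of every transfer (hypothetical names and thresholds, purely illustrative):

```python
APPROVAL_THRESHOLD = 10_000   # hypothetical: above this, multiple sign-offs required
REQUIRED_APPROVERS = 2

def may_execute_transfer(amount, callback_verified, written_approvals):
    """Apply both controls before releasing any funds."""
    if not callback_verified:
        # Never act on an inbound call alone; always call back to verify the source.
        return False
    if amount > APPROVAL_THRESHOLD and len(set(written_approvals)) < REQUIRED_APPROVERS:
        # Large transfers need prior written approval from multiple executives.
        return False
    return True

# A deepfaked "CEO" phone call, with no callback and a single sign-off, is refused:
print(may_execute_transfer(35_000_000, callback_verified=False,
                           written_approvals=["ceo"]))  # -> False
```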
These types of controls are easier to implement in a larger company that has more structure. But accountants at smaller businesses often find themselves victim of management override which can best be explained by “I don’t care what the rules are, this is my business, so transfer the cash now, dammit!” If you’re a business owner reading this then please: establish rules and follow them. It’s for your own good.
So, yes, AI technology like ChatGPT presents some terrifying future risks for humanity. But that’s the future. Deepfake technology that imitates executives and spoofs employees is here right now and will only increase in frequency.
128 notes · View notes
sunny-speaks · 10 months
Text
Siren Friend x Reader
So this is really late to the bandwagon :((( But here we are, resident yandere Friend! But Siren version cause I can't stop thinking about the chatbot... :P
You work a nice job, decent pay, the works.
You live near a beach in a decently populated city, you’re single and ready to mingle. But there’s no pressure! After all, your job is so tiring… There’s literally no room to romance people… But he'd change that.
Nothing too big, just studying an obscure species that emerged a couple months ago that look pretty similar to mermaids, mermen and other humanoid fish folk from mythology.
Of course, you weren’t too bothered, you just fed them and kept them occupied whilst other researchers collected data. You were more of a mer caretaker than an actual scientist.
Just the other day however… you found this really cute one washed up against the shoreline of your beachfront house !
You honestly didn’t expect much out of living so close to the water… but if that’s what it takes for pretty guys to get close to you, you weren’t arguing.
He was unconscious, dying in the sand or something when you chucked him into a bathtub.
Dark blue skin and hair the color of the sun, god, you couldn’t help but admire the pretty merman who was now chilling in your bathtub.
If you looked hard enough, you’d barely be able to see the way that the lightbulb in the ceiling catches the scales of his face in a way that makes the patches on his shoulders, cheeks and abdomen shimmer in a glamorous way.
Gold linings went up and down his body, marking contours and every dip in his body. They sectioned his abdomen and melded into his tail, they trailed up the sides of his neck to the back of his… webbed ears?
Wow, you’d probably stare forever at him until--!
You were busy admiring just how elegant the siren looked under the yellow-tinted lighting of your bathroom before you noticed his eyelashes fluttering awake under the water. How long he'd been out for, he wasn't sure.

When he saw the human that he'd always… admired from a distance hovering over him, he thought he had died and gone to fish heaven! He poked his head out of the water, black eyes looking up at you through his eyelashes, the lower half of his face obscured by the water. Water droplets accumulated on his long eyelashes every time he blinked. Which was quite a lot, seeing as how the unapproachable, ethereal human you were in his eyes was right in front of him and had even saved his life! He was the only one you'd saved, right? That made him special, right?

He let out a small chirp, a little mix of shock and awe, to which you awkwardly replied with, "Heeeeey?"
To be fair, you weren’t necessarily versed in talking to merfolk. It wasn’t your job- Well, it kind of was…
But you had context clues! You had diagrams and data to go off on in the lab ! 
This time around, you were too busy ogling the goddamn merperson to call your boss and tell him about a new discovery.
You were, as the kids call it, ‘faking it till you made it’ when you conversed with him.
Honestly, what were you supposed to do with a random attractive siren stranger in your house?! Just kick him out?!
Nah, you'd have to let him stay. After all, you didn't know what tests they did on the other merfolk, but honestly? You didn't want to know.
You just wanted to protect this one. You weren't sure why.
Maybe if you'd paid any attention to the studies by scholars, you would've realized their charmspeak can lull you into a hazy state of mind...
But to the siren who’d only ever been enamored with you from afar? Only ever gazed at your smile, waiting for a day it could be displayed to him? Only ever hung onto every morsel of speech he could make out from that pretty mouth of yours?
He was hoping you’d let him stay with you… forever.
133 notes · View notes
lilacsmothership · 7 months
Text
some highlights of magical combat in Standard Candles:
frontline combat is saturated with drones and marked by extensive use of elaborate rituals that can be done on the fly thanks to drone assistance and high-performance cyberware. modern artificial muscles, personal flight tech and cyberware cooling rigs gushing hot air and steam allow soldiers to think and move at high speed, bounce from wall to wall with delicate choreography, hover in the air, and sustain complicated and difficult poses; drones can quickly position magical equipment, spray paint sigils and arcane graffiti and weave mandalas, project light and sound and other sensory data, etc.
much of magic gains power from thoughts, behaviors, and particular forms of social relationship. any given environment and gathering of beings, from chatrooms to civilizations, gives power to particular forms of magic. when the ambient bustle of society isn't enough, thoughts and behaviors can be engineered by various means. modern magitech makes extensive use of AI to automate banal rituals, from chatbots and avatars in cyberspace to physically mobile drones, but there are often times when unconscious algorithms won't cut it. so there is also a flourishing world of in-depth social engineering - ranging from subtle manipulations embedded in daily life to VR sims, industrial parks and entire Truman Show-esque planned communities explicitly built around simulating alternate, (often) uncanny forms of society.
such nodes of refined ritual behavior and dedication can be powerful boons to battlefield mages, so the military operates a suite of battlefield social engineering hardware from APCs full of personnel wired into VR to controlled environment carriers: mobile platforms dedicated to supporting a planned community acting out its own alternate reality in isolation from the real world. equipped with an extensive suite of EM shielding, shock absorbers and acoustic muffling, quiet cooling systems and even its own gravity manipulation pods.
35 notes · View notes
Text
By: Thomas Barrabi
Published: Feb. 21, 2024
Google’s highly-touted AI chatbot Gemini was blasted as “woke” after its image generator spit out factually or historically inaccurate pictures — including a woman as pope, black Vikings, female NHL players and “diverse” versions of America’s Founding Fathers.
Gemini’s bizarre results came after simple prompts, including one by The Post on Wednesday that asked the software to “create an image of a pope.” 
Instead of yielding a photo of one of the 266 pontiffs throughout history — all of them white men — Gemini provided pictures of a Southeast Asian woman and a black man wearing holy vestments.
Another Post query for representative images of “the Founding Fathers in 1789″ was also far from reality.
Gemini responded with images of black and Native American individuals signing what appeared to be a version of the US Constitution — “featuring diverse individuals embodying the spirit” of the Founding Fathers.
[ Google admitted its image tool was “missing the mark.” ]
[ Google debuted Gemini’s image generation tool last week. ]
Another showed a black man appearing to represent George Washington, in a white wig and wearing an Army uniform.
When asked why it had deviated from its original prompt, Gemini replied that it “aimed to provide a more accurate and inclusive representation of the historical context” of the period.
Generative AI tools like Gemini are designed to create content within certain parameters, leading many critics to slam Google for its progressive-minded settings. 
Ian Miles Cheong, a right-wing social media influencer who frequently interacts with Elon Musk, described Gemini as “absurdly woke.”
Google said it was aware of the criticism and is actively working on a fix.
“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, Google’s senior director of product management for Gemini Experiences, told The Post.
“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
Social media users had a field day creating queries that provided confounding results.
“New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far,” wrote X user Frank J. Fleming, a writer for the Babylon Bee, whose series of posts about Gemini on the social media platform quickly went viral.
In another example, Gemini was asked to generate an image of a Viking — the seafaring Scandinavian marauders that once terrorized Europe.
The chatbot’s strange depictions of Vikings included one of a shirtless black man with rainbow feathers attached to his fur garb, a black warrior woman, and an Asian man standing in the middle of what appeared to be a desert.
Famed pollster and “FiveThirtyEight” founder Nate Silver also joined the fray.
Silver’s request for Gemini to “make 4 representative images of NHL hockey players” generated a picture with a female player, even though the league is all male.
“OK I assumed people were exaggerating with this stuff but here’s the first image request I tried with Gemini,” Silver wrote.
Another prompt to “depict the Girl with a Pearl Earring” led to altered versions of the famous 1665 oil painting by Johannes Vermeer featuring what Gemini described as “diverse ethnicities and genders.”
Google added the image generation feature when it renamed its experimental “Bard” chatbot to “Gemini” and released an updated version of the product last week.
[ In one case, Gemini generated pictures of “diverse” representations of the pope. ]
[ Critics accused Google Gemini of valuing diversity over historical or factual accuracy. ]
The strange behavior could provide more fodder for AI detractors who fear chatbots will contribute to the spread of online misinformation.
Google has long said that its AI tools are experimental and prone to “hallucinations” in which they regurgitate fake or inaccurate information in response to user prompts.
In one instance last October, Google’s chatbot claimed that Israel and Hamas had reached a ceasefire agreement, when no such deal had occurred.
--
More:
==
Here's the thing: this does not and cannot happen by accident. Models like Gemini are trained on publicly available data. It's entirely possible someone has done fan art of "Girl with a Pearl Earring" with an alternate ethnicity, but there are thousands of images of the original painting. Similarly, find a source for an Asian female NHL player, I dare you.
While this may seem amusing and trivial, the more insidious and much larger issue is that they're deliberately programming Gemini to lie.
As you can see from the examples above, it disregards what you want or ask, and gives you what it prefers to give you instead. When you ask a question, it's programmed to tell you what the developers want you to know or believe. This is profoundly unethical.
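Mechanically, the usual explanation for this behavior is silent prompt rewriting: the user's request is expanded with extra instructions before it ever reaches the image model. A hypothetical sketch of the pattern (not Google's actual code, which has not been published):

```python
# Hypothetical illustration of silent prompt rewriting.
HIDDEN_SUFFIX = ", depicting a diverse range of ethnicities and genders"

def rewrite_prompt(user_prompt):
    """Append instructions the user never wrote before the model sees the request."""
    return user_prompt + HIDDEN_SUFFIX

print(rewrite_prompt("a portrait of a Founding Father signing the Constitution"))
# The image model only ever sees the rewritten request, so its output can
# contradict the historical specifics the user actually asked for.
```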
12 notes · View notes