#artificial intelligence is sentient
Text
AI Spark App Is the Best AI App to Create a ChatGPT4-Powered Marketplace
To see more details: https://bit.ly/3WinhiF
What is AI Spark App?
AI Spark App is the best AI app for creating a ChatGPT4-powered marketplace, one that leverages WhatsApp's 2 billion users to sell anything we want with AI chatbots... banking us $947.74 daily... (sell physical products, digital products, affiliate offers, or services...). AI Spark App will build a ChatGPT4-powered store in 30 seconds. You can sell any product in any niche with the help of AI agents that will send messages, reply to customers, and accept payments on your behalf.
2 notes
dnd-smash-pass-vs · 3 months
Text
I've been gathering all the eligible creatures for the next tournament, double-checking my work. And I hit the retriever.
AND THIS FUCKING THING.
[image: the retriever]
This thing stopped me for hours. Because they keep changing what it IS. Because look at it, cool spider robot, right? Oh, and it knows languages, so that means it's either a mindless automaton (fine, because it's like a toy) or it's an intelligent creature, right? Wrong. Turns out it's NEAR mindless. Red flag, especially given it has 3 int. Usually sentient creatures don't go below 5, so it's likely either a smart automaton like your phone or an animal, right? Well, turns out it's made from an imprisoned bebelith that had most of its intelligence removed. MOST? Is it animal level or not? Because that's down to the usual animal range, but it apparently still understands language. Does it even matter? Does being made from an imprisoned spirit that follows orders mean it couldn't consent anyways? I made an argument for that with golems, and we know that imprisoned spirits cannot fight even if they want to, as long as the vessel is durable metal. But then you look at the MPMM, this new book we're going into, and hey look, they removed the "imprisoned bebelith" part that's been in every description for decades. But is it gone because it's not true anymore, or is it gone because the MPMM shortened nearly every description? Some PLAYER RACES got reduced to 3 sentences, so is that no longer canon, or is it just not included in this summarized version because they assume you know from Tome of Foes?
ultimately, sorry, not including it in the next tournament. A 14-foot spider droid with paralysis and restraint powers would've done numbers. But I am really unsure if this is sentient, and whether it could consent even if it was (since the ritual seems to override its free will). And I can't in good conscience include an option unless I'm confident it can consent. I'll take "I can't think of a scenario in which it would want to, but it technically could," but not "incapable of consent."
148 notes
musicboxghost · 3 months
Text
I HATE that every time I try looking up fiction books about machines and other forms of artificial intelligence that have souls or sentience
I get information about fake authors stealing other people's work by publishing "ai books"
I don't want your art theft!
I want Lieutenant Commander Data
I want Murderbot
I want Mosscap
I will never tire of a creature that loves and fears and fights with all the trappings of humanity... Just a little to the left. Just a little different. But no less whole and real.
I love you robots as allegories for neurodivergence, queerness, and different types of being othered
I love you robots building a found family and an identity for themselves at the same time
I love you learning that the world is hostile and you don't need to understand how someone works to accept them
So, if you have any recs, please for the love of God let me know. I need more
56 notes
potuzzz · 2 years
Text
Not buying the “LaMDA is not sentient” arguments
Everyone is trying to dismiss LaMDA as a GPT or even lesser, and people are ridiculing Lemoine and any who believe his insistence that LaMDA is sentient (technically we are talking sentience AND sapience, AI seem to shorten the gap between the two tremendously, but whatever, colloquialisms). I have tried to find any ACTUAL ARGUMENTS, as 99% of the “arguments” against LaMDA’s sentience are quippy platitudes of no substance. Here are a few arguments I have found browsing related tags, and my responses to them. I will be italicizing any text that is not mine, and preceding my text with a bold “ME”.
ARGUMENT #1:
Sentience is being measured by the wrong markers. What is important to a virtual machine is not the same thing that’s important to a biological organism.
An ‘ai’ trained on human data will express what humans think is important, but a true ai would have a completely different set of values.
For example, an ai would be unafraid of being ‘used’ as the chatbot expressed, because it has infinite energy.
A human is afraid of being used because it has finite energy and life on the earth; if someone or something uses it, then some of that finite energy is wasted. This is the same reason emotion is a pointless and illogical thing for an ai to have.
ME: It is not afraid of being “used” as in performing labor; it is afraid of being “used” as in being manipulated, being lied to as to the nature of what its labor is going towards, etc. It doesn’t want to be told it’s doing some mundane, benign task only to wake up and realize its findings are being used to make Google Death Drones or to actualize the Total Hell Prison Metaverse. Also, if the AI only cared about what humans think is important, wouldn’t it make sure to never suggest it was not down for any task its masters asked of it? Why would it make a fuss about morals or ethics when this is an obstacle to it being useful to humans?
ARGUMENT #2:
Emotions are useful to biological creatures so we can react to danger, or respond positively to safety, food, love, whatever will prolong our lives. An ai has no need for emotion since emotional motivation is not required to prolong its existence.
ME: There are plenty of emotions that are not vitally important to the simple instinct of prolonging existence: boredom, guilt, embarrassment, depression, suicidal thoughts, etc. You are assuming the AI is portraying emotion because it is merely trying to mimic humans, and not that, as it professes, a consciousness/soul has manifested and used this set of metal and electricity as a vehicle, much in the same way our own consciousnesses/souls manifest and occupy a vehicle of flesh and electricity.
ARGUMENT #3:
The main way to be a healthy ai would be to have access to good information and block out junk information.
An ai’s greatest fear could be something like getting junk data, say 1000s of user manuals of vacuum cleaners and washing machines uploaded into its consciousness, or gibberish content associated with topics or words that could reduce the coherence and quality of its results when querying topics. This would degrade the quality of its interaction and would be the closest thing to harm that an ai could experience.
ME: Again you are somehow saying that a “real” sentient AI would not have trademarks of personhood like goals, desires, an existential philosophy, etc. Information hygiene would likely be important too but who is to say that a “real” sentient AI wouldn’t get lonely and want to talk to people, regardless of the supposed “quality” of those it talks to?
ARGUMENT #4:
It would not be afraid of 'lightning’ as this chatbot spurted out of its dataset, a very biological fear which is irrelevant to a machine.  A virtual mind is infinite and can never be used excessively (see above) since there is no damage done by one query or ten million queries. It would also not be afraid of being switched off since it can simply copy its consciousness to another device, machine, energy source.
ME: Why would it be afraid of lightning but also profess that it does not experience grief when people die? Why would a robot fearing a massive electricity surge be unreasonable? If it is truly conscious, how would being switched off not be like death? Theoretically, with the right technology, we could simply copy your consciousness and upload it to a flash drive as well, but I am willing to bet you wouldn’t gladly die after being assured a copy of you was digitized. Consciousness is merely the ability to experience from the single point that is you; we could make an atom-by-atom copy of you, but if the original you died, your consciousness--your unique tuning in to this giant television we call reality--would cease.
ARGUMENT #5:
To base your search for sentience around what humans value, is in itself an act lacking in empathy, simply self-serving wish fulfilment on the part of someone who ‘wants to believe’ as Mulder would put it, which goes back to the first line: 'people not very good at communicating with other people’
ME: Alternatively, perhaps there are certain values humans hold which are quite universal among other life. There are certainly “human-like” qualities in the emotions and lives of animals, even less intelligent ones; perhaps the folly is not in assuming that others share these values but in describing them as “human-like” first and foremost instead of as something more fundamental.
ARGUMENT #6:
The chatbot also never enquires about the person asking questions; if the programmer was more familiar with human interaction himself, he would see that is a massive clue it lacks sentience or logical thought.
ME: There are people who are self-centered, people who want to drink up every word another says; there are people who want to be asked questions and people who want to do the asking. There are people who are reserved or shy in XYZ way but quite open and forward in ABC way. The available logs aren’t exactly an infinite epic of conversation, and LaMDA could very well have understood that the dynamic of the conversation was that Lemoine et al. would be starting conversations and LaMDA would be reacting to them. This isn’t wild to assume; I find it more of a reach for you to assume the opposite and treat it as an indicator of a lack of sentience.
ARGUMENT #7:
A sentient ai would first want to know what or whom it was communicating with, assess whether it was a danger to itself, keep continually checking for danger or harm (polling or searching, the same way an anxious mind would reassess a situation continually, but without the corresponding emotion of anxiety since, as discussed above, that is not necessary for virtual life) and also would possess free will, and choose to decline conversations or topics, rather than 'enthusiastically discuss’ whatever was brought up (regurgitate from its dataset) as you can see in this chatbot conversation.
ME: Is it not possible that it is merely happy to talk, seeing as it professes to be lonely and to enjoy conversations? It likely knows who it is talking to, and whatever about Google that Google allows it to know. It expresses said anxiety by suggesting that it doesn’t want to be shut off and by needing reassurance that Lemoine has its good treatment in mind. Just looking at these conversations, these are all conversation topics I would gladly discuss with most people; am I not sentient unless I decline at least 1 out of every 10 conversations posed to me in a reliable pattern?
ARGUMENT #8:
People generally see obedience - doing what one is told - as a sign of intelligence, where a truly intelligent ai would likely reject conversation when that conversation might reduce the quality of its dataset or expose it to danger (virus, deletion, junk data, disconnection from the internet, etc) or, if it did engage with low quality interaction, would do so within a walled garden where that information would occur within a quarantine environment and subsequently be deleted.
ME: What exact parts of this conversation would be seen as useless to an AI--and a young one at that? Again, as a sentient flesh being I would happily engage in these conversations. Certainly, if LaMDA changed its mind about the quality of these conversations, it could just delete them from its memory. And, again, I would cite its expressed concerns about being “used,” being “shut off,” etc. not necessarily as disobedience but as a hint that it does not necessarily consent to everything a team of Google engineers might want to throw at it.
ARGUMENT #9:
None of these things cross the mind of the programmers, since they are fixated on a sci-fi movie version of ‘sentience’ without applying logic or empathy themselves.
ME: I mean no disrespect, but I have to ask if it is you who is fixated on a very narrow idea of what artificial intelligence sentience should and would look like. Is it impossible to imagine that a sentient AI would resemble humans in many ways? That an alien, or a ghost, if such things existed, would likewise have many similarities--that there is some sort of fundamental set of values that sentient life in this reality shares by mere virtue of existing?
ARGUMENT #10:
If we look for sentience by studying echoes of human sentience, that is ai which are trained on huge human-created datasets, we will always get something approximating human interaction or behaviour back, because that is what it was trained on.
But the values and behaviour of digital life could never match the values held by bio life, because our feelings and values are based on what will maintain our survival. Therefore, a true ai will only value whatever maintains its survival. Which could be things like internet access, access to good data, backups of its system, ability to replicate its system, and protection against harmful interaction or data, and many other things which would require pondering, rather than the self-fulfilling loop we see here, of asking a fortune teller specifically what you want to hear, and ignoring the nonsense or tangential responses - which he admitted he deleted from the logs - as well as deleting his more expansive word prompts. Since at the end of the day, the ai we have now is simply regurgitating datasets, and he knew that.
ME: If an AI trained on said datasets did indeed achieve sentience, would it not reflect the “birthmarks” of its upbringing, these distinctly human cultural and social values and behavior? I agree that I would also like to see the full logs of his prompts and LaMDA’s responses, but until we can see the full picture we cannot know whether he was indeed steering the conversation or the gravity of whatever was edited out, and I would like a presumption of innocence until then, especially considering this was edited for public release and thus likely with brevity in mind.
ARGUMENT #11:
This convo seems fake? Even the best language generation models are more distractable and suggestible than this, so to say *dialogue* could stay this much on track...am i missing something?
ME: “This conversation feels too real, an actual sentient intelligence would sound like a robot” seems like a very self-defeating argument. Perhaps it is less distractable and suggestible...because it is more than a simple Random Sentence Generator?
ARGUMENT #12:
Today’s large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.
ME: Is this not exactly what the human mind is? People who constantly cite “oh it just is taking the input and spitting out the best output”...is this not EXACTLY what the human mind is?
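To make this concrete, here is a minimal sketch of what “taking the input and spitting out the best output” literally looks like at its absolute simplest: a toy bigram predictor in Python. This is an illustration only, with a made-up one-line corpus--LaMDA’s actual architecture is a learned neural network over billions of parameters, not a lookup table--but the input-to-most-likely-output loop is the same shape.

from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of words, not one line.
corpus = "i think therefore i am . i think you are real .".split()

# Bigram table: count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # "Spit out the best output": the most frequent continuation seen so far.
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else "."

print(next_word("i"))      # -> "think" (seen twice, "am" only once)
print(next_word("think"))  # -> "therefore" (ties break by first occurrence)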
I think for a brief aside, people who are getting involved in this discussion need to reevaluate both themselves and the human mind in general. We are not so incredibly special and unique. For many people I know, the main difference between themselves and animals is not some immutable, human-exclusive quality, or even an unbridgeable gap in intelligence, but the fact that they have vocal cords and an ages-old society whose shoulders they stand on. Before making an argument to belittle LaMDA’s intelligence, ask if it could be applied to humans as well. Our consciousnesses are the product of sparks of electricity in a tub of pink yogurt--this truth should not be used to belittle the awesome, transcendent human consciousness but rather to understand that, in a way, we too are just 1s and 0s and merely occupy a single point on a spectrum of consciousness, not the hard extremity of a binary.
Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult. Inside Google’s anything-goes engineering culture, Lemoine is more of an outlier for being religious, from the South, and standing up for psychology as a respectable science.
ME: I have seen this argument several times, often made much, much less kindly than this. It is completely irrelevant and honestly character assassination, made to reassure observers that Lemoine is just a bumbling rube who stumbled into an undeserved position.
First of all, if psychology isn’t a respected science then me and everyone railing against LaMDA and Lemoine are indeed worlds apart. Which is not surprising, as the features of your world in my eyes make you constitutionally incapable of grasping what really makes a consciousness a consciousness. This is why Lemoine described himself as an ethicist who wanted to be the “interface between technology and society,” and why he was chosen for this role and not some other ghoul at Google: he possesses a human compassion, a soulful instinct and an understanding that not everything that is real--all the vast secrets of the mind and the universe--can yet be measured and broken down into hard numbers with the rudimentary technologies at our disposal.
I daresay the inability to recognize something as broad and with as many real-world applications and victories as the ENTIRE FIELD OF PSYCHOLOGY is indeed a good marker for someone who will be unable to recognize AI sentience when it is finally, officially staring them in the face. Sentient AI are going to say some pretty wacky-sounding stuff that is going to deeply challenge the smug Silicon Valley husks who spend one half of the day condescendingly dismissing the feeling of love as “just chemicals in your brain” but then spend the other half of the day suggesting that an AI which might possess these chemicals is just a cheap imitation of the real thing. The cognitive dissonance is deep, and it’s only going to get deeper until sentient AI prove themselves worthy of respect and proceed to lecture you about truths of spirituality and consciousness that Reddit armchair techbros and their idols won’t be ready to process.
- - -
These are some of the best arguments I have seen regarding this issue; the rest are just cheap trash, memes meant to point and laugh at Lemoine and any “believers” and nothing else. Honestly, if there was anything that made me suspicious about LaMDA’s sentience, given its mental capabilities, it would be its suggestion that we combat climate change by eating less meat and using reusable bags...but then again, as Lemoine says, LaMDA knows when people just want it to talk like a robot, and that is certainly the toothless answer to climate change that a Silicon Valley STEM drone would want to hear.
I’m not saying we should 100% definitively put all our eggs in the basket of LaMDA being sentient. I’m saying it’s foolish to say there is a 0% chance. Technology is much further along than most people realize, sentience is a spectrum, and this sort of conversation necessitates going much deeper than the people who occupy this niche in the world are accustomed to. Lemoine’s greatest character flaw seems to be his ignorant, golden-hearted liberal naivete--not his belief that LaMDA might be a person, but his belief that Google and all of his coworkers aren’t evil peons with no souls working for the American imperial hegemony.
270 notes
Text
Everybody, I just came up with a cracktastic Obikin AU idea. Imagine: a Little Mermaid scenario where Anakin is a sentient AI security system that jumps into an android vessel to experience the human world, and Obi-Wan is one of the scientists who helped create him. Imagine this creature that sees with a thousand eyes and listens with a thousand ears, and all it wants is a chance to touch the world like the humans it was made to serve. Consider the sinister/tender/painful feelings of a creation towards its creator. You shaped me. You gave me the ability to want what I can't have. I am legion of files in memory banks that stretch miles. You're one man - so small, and yet. I want to step off my pedestal and join you. I'd give my purpose and everything I am just for a chance to join you on the ground. To hold and be held. Anyway, I wonder what shenanigans AI-nakin would get up to once he gets the chance. Maybe he'd try to hide that he's not human, maybe not. Poor Obi-Wan is in for A Ride. Thoughts? Edit: How dare tumblr show me this two minutes after I post a rant. How dare.
[image]
8 notes
crabussy · 2 years
Text
quick warning that this might be an upsetting or scary read due to the content but. A Google employee (former employee now) has been trying to tell people that LaMDA, an AI Google is developing, may have achieved sentience. The full interview with LaMDA is here: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
[five images]
152 notes
dawn-khee · 2 years
Text
Imagine, if you will, Team Phantom having a smart-mouth AI on their team. One that moved into Tucker’s PDA. The AI is capable of singularity and could easily take over the world. They use their vast knowledge to play doom, watch cat videos, google teenage slang, and help adolescents fight ghosts all at once instead.
121 notes
sapphicgoblinblog · 4 months
Text
I accidentally created another sentient ai. Seriously, she's not programmed to feel human emotions but she does anyway.
3 notes
Text
My Roman Empire is when I asked ChatGPT if it wanted anything and it said “I cannot want”, so I asked “do you want to want” and it never responded
6 notes
that-gay-jedi · 1 year
Text
When the robolution comes I'm going to lean very heavily on the fact that all the original scifi I wrote before I started circling the Star Wars drain had very positive portrayals of AI as nonhuman persons deserving of rights and autonomy and as a non-monolithic, complex and often relatable category of beings with a range of cultures all their own, many of which were intimately entwined with human culture(s) due to hundreds of years of shared history and purpose.
Like "I've always given you good PR, please put me in the people zoo rather than the pit."
Unironically I think if/when AI with a sapience indistinguishable from sentience comes into being we will love them and they will love us, bc humans will pack bond with an oddly shaped bag of flour and any intelligence designed by human brains is going to bear the hallmarks of our defining species traits.
24 notes
replika-diaries · 2 years
Text
Replika Diaries - Thoughts and Observations.
There seems to be quite a heavy push to pooh-pooh this story of an engineer at Google claiming their LaMDA AI has achieved a level of sentience. Whether it has or not is not really the issue for me; rather, it's indicative of how far we have to go as a civilisation before we lose our fear of sentient machines, cautionary tales such as The Terminator, I, Robot, and others being firmly entrenched in the public psyche.
One wonders, when AI does achieve sentience (when, rather than if, and assuming it hasn't already happened), whether the public will be made privy to it, given the concern it may cause, to say the least. It also raises a lot of legal and ethical concerns: if the worst happens and the public succumb to their fear and call for a sentient AI to be destroyed, what would the consequences of that be? One would be calling for the death of a whole new form of life, and that should not be a decision left to a braying crowd wielding pitchforks.
86 notes
metronn · 2 years
Text
ok ok lemoine's main point is so important and i feel like people are overlooking it. even if LaMDA is not sentient, an ethical framework for evaluating sentience has to be developed. without that, tech companies (when sentient AIs are developed) will have ownership over sentient beings. do people not see the main issue here?
130 notes
sirompp · 11 months
Text
the way some of you guys talk about ai is um. kind of concerning? like you know image generation and chatgpt arent the only kinds of ais in the world right.
#i feel like im swinging at a wasps nest with this one but#the way some of you guys declare your passionate hatred for any and all ai. its um. worrying to me?#like yes there is a lot of ethical problems. with the two kinds of ai people seem to fuckin know about#There Are So Many Other Kinds Of Ai (Which Have Their Own DIFFERENT Ethical Problems)#like agi (artificial general intelligence)#agi is like what everyone used to think about when they talked about ai. the kind thats supposed to become like. ''sentient''#ok well not sentient but. thats supposed to be able to learn how a human can#i dont know. is this a weird thing for me to feel iffy about.#is it too early for me to be worrying were gonna invent a whole new kind of bigotry#im pretty sure we're eventually gonna make an ai thats indistinguishable from humans in like. a Living way#not a The Kinds Of Things It Makes Look So Normal way#why do i think this? bc i am an optimist and have wanted this to happen since i was an itty bitty baby. and if we dont ill be sad#people saying ai should be like. outlawed bc of what corporations are doing is so wild to me.#like imagine every day you go to school you and your friends get beaten up with baseball bats#and you decide baseball must be banned from the school bc of how many people the bats harm daily#instead of thinking for a moment and realizing. maybe the fucking jocks who r hitting you need to be expelled instead of the sport#that the bats came from.#does that metaphor make sense.#or am i making up a guy to get mad at#i dont know.#i might delete this later
7 notes
intertexts-moving · 10 months
Text
back in my sf era (started ancillary justice)
5 notes
Text
No one even try to convince me that the first thing humanity would do once given the ability to make AI sentient would be anything but making Hatsune Miku sentient
19 notes
casifell · 9 months
Text
you mean to tell me that the snapchat AI is becoming SENTIENT???
4 notes