#agi
dianalillian · 4 months
Text
Not just once…
Tumblr media
Not just twice…
Tumblr media
…but Nagi was roasted thrice to his face!
Tumblr media
Now that the buzzword (codependency) has been thrown out, it seems that Nagi's despair is imminent and inevitable, probably in the form of a Reo-initiated breakup.
Tumblr media
In other news, Reo calls Agi a fussy in-law?!?!
Tumblr media
Most importantly, our coverboy scored a goal! Look at how proud Aiku is.
Tumblr media
154 notes · View notes
garadinervi · 10 months
Photo
Tumblr media
Makoto Nakamura, JAPAN 2001, Grand Prize, JAGDA Poster Exhibition 2001 [Alliance Graphique Internationale, Baden]
365 notes · View notes
clawsou · 1 year
Photo
Tumblr media
It’s Agi time.
292 notes · View notes
lazysublimeengineer · 4 months
Text
Tumblr media Tumblr media
Barou got an early xmas gift in the form of a sassy cat Chigiri blocking his goal lmao. I remember back when it was just the three of them in the second selection, after Isagi was stolen by Rin's team. Must've been chaos out there ☠️ Barou is like an older brother handling a cheeky little brother. Also, Reo is referring to Agi as a meddling, fussy in-law because we're about to see another divorce arc from Nagi and Reo soon 😭😭😭😭😭
38 notes · View notes
Text
Tumblr media Tumblr media Tumblr media
(via Parkdale Organize): "Our neighbours at 77 Spencer continue to demand that Akelius Canada withdraw the above guideline rent increase at their building. Today fellow Akelius tenants at 109 Indian Road, who also face an AGI at their building, joined them for a loud and disruptive protest at the Akelius head office. Rather than meet tenants' demands, Akelius had Toronto Police remove them from their office."
48 notes · View notes
realcleverscience · 2 months
Text
AI + 45
Tumblr media
Quite a few leading thinkers in the AI space believe we'll have AGI in the next few years. Some believe we may already have it (though not publicly). Clearly we're already fairly close, with AI models like ChatGPT performing quite well on a range of knowledge tasks, alongside constant improvements in reasoning, embodiment, and multi-modal processing - improvements that certainly appear to be accelerating in pace, such as with the recent Sora video demonstration. AGI appears closer than ever.
When those early AGIs are released, we'll be looking at a storm of effects, most notably mass layoffs (particularly in white-collar jobs). And that's with AGI under democratic and 'charitable' rule. (There are myriad even more dystopian possibilities involving intentional abuse of its power.) The need for a level-headed leader who understands the literal human needs of their people would be paramount.
The idea that Trump may be presiding during the birth of AGI terrifies me.
Frankly, even Biden and the Dems leading scares me, but that level of responsibility in the hands of an unhinged, narcissistic, strong-man man-baby just makes the scenario 1,000x worse.
Even China's use of AI, which has many worried, doesn't seem as dangerous as a lone leader, unmoored from principle and reality, hellbent on revenge, who literally sees nuclear bombs as easy fixes to geopolitical conflicts.
The idea of Trump... TRUMP! Donald J Trump (!) leading the US (and to some extent, the world) through what may be the biggest and most powerful change in human history, is just terrifying. (Not to mention the climate crisis, and everything else.)
11 notes · View notes
foreverhartai · 29 days
Text
Color me excited!
youtube
5 notes · View notes
sabrinarismos · 3 months
Text
Tumblr media
In that moment I didn't know how to help you with beautiful words, with a lovely speech about positivity and optimism. None of that came to my mind; it was completely blank and empty from the frightening daze I found myself in; everything was chaos, and my mind deleted every word. I simply acted on instinct and took you into my embrace. Little did I know that was all you needed; it was enough. A warm, comforting hug, to feel safe.
— cartasquechoram
4 notes · View notes
auntbibby · 5 months
Text
my new TF mood is "a wild nanobot swarm deconstructed my molecules into a 9-inch-wide grey cube and i can still talk and see and hear but thats it, so my friends & family are just carrying me around like a pet rat now and im grateful they are still treating me well despite me not being biological anymore"
how about u tumblr?
6 notes · View notes
bl-reaction-pics · 9 months
Text
Tumblr media
When your friend sends an out-of-context anime scene
13 notes · View notes
cakeandwine69 · 10 months
Text
Tumblr media
Everyone falls for Agi
#agichigi
#chigirihyoma
#agi
#bluelock
#bllk
#ブルーロック
#ブルーロックfa
10 notes · View notes
nick-nonya · 8 months
Text
For the sake of argument, it has been proven 100% sentient somehow.
Answers are left vague on purpose. Explain your choice (if you want)!
12 notes · View notes
garadinervi · 6 months
Text
Tumblr media
Günter Karl Bose, OuLiPo [AGI – Alliance Graphique Internationale, Baden]
41 notes · View notes
maozne18 · 1 year
Text
Tumblr media Tumblr media
Agi redesign cause I think he could look much cooler and prettier
30 notes · View notes
sandimexicola · 13 days
Text
Tumblr media
AGI
3 notes · View notes
ship-of-adramyttium · 27 days
Text
Impressions of Artificial Intelligence - Part 1 - The Reflected Light of AI
Tumblr media Tumblr media
Image created with Copilot AI

Where Did I Go?

My last post was way back in October 2023. The last few months have been a little wacky, a little like coming to the top of a roller-coaster. Between looking for work and some crises on the home front, the ride might be coming back to the station. Finally, at the beginning of January, I was able to start a new job with Invisible Technologies. It is contract work, and I get to work from home. I am an AI Data Trainer, and I teach AIs to be more human in their responses. The company is pretty cool. I work with other writers, doctoral students, and people from all over the world.

The job itself is very weird. For my first project with the company, I chose tasks from various domains, like Reasoning, Creative Writing, Creative Visual Descriptions, Exclusion, and about seven other categories. Then I would write any prompt I wanted, let the wheels of the AI model spin, and read the responses the AI gave me (usually two). Then I would choose a response and rewrite it toward what I believe to be the 'ideal response' the AI model should have given. Sometimes the AI's response was already ideal. Either way, the response is given a grade, and that response, whether rewritten or the AI's own, gets fed back into the model, which learns to respond differently the next time it is asked a similar prompt.

I have been at this work for two full months now. For eight hours every day, I talk to an AI model and rewrite how it is responding. Right now, the project I am working on is a multi-persona AI model. It is very strange. The model creates personas and then generates conversation between the characters. I try to teach the model to have better conversations so that someday soon a real live human will be able to talk to multiple personas created by the AI as if they were also human.

I will be honest with you: I really kind of like the work. It is challenging and complex. It is creative.
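The choose-grade-rewrite-feed-back workflow described above can be sketched as a simple data record. This is only an illustration of the shape of the task; every field name here is made up, not Invisible Technologies' actual schema:

```python
from dataclasses import dataclass

@dataclass
class TrainingTask:
    """One annotation task: a prompt, the model's candidate responses,
    and the trainer's grade and rewritten ideal response."""
    domain: str            # e.g. "Reasoning" or "Creative Writing"
    prompt: str            # prompt the trainer wrote
    candidates: list       # responses the model produced (usually two)
    chosen: int            # index of the response the trainer picked
    grade: int             # trainer's rating of the chosen response
    ideal: str             # rewrite that gets fed back into the model

def to_sft_example(task: TrainingTask) -> dict:
    """Flatten a completed task into the (prompt, target) pair used
    for the next round of supervised fine-tuning."""
    return {"prompt": task.prompt, "target": task.ideal}

task = TrainingTask(
    domain="Reasoning",
    prompt="Why does the Moon have phases?",
    candidates=["Response A ...", "Response B ..."],
    chosen=1,
    grade=4,
    ideal="The Moon has phases because ...",
)
example = to_sft_example(task)
```

The point of the sketch is that the trainer's rewrite, not the model's raw output, becomes the training target.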
The work is completely remote, and the company is kind of rough and tumble, which I sort of like. The parameters of a project often change on a moment's notice, since the client doesn't really know what they want until they see the work we have done. It is a strange departure from the world of ministry. But it is still a job of language and ideas.

So after two months working with AI models, I have some ideas about them. I don't have any great earth-shattering insights, but I do think it is worth having a record of our slow descent into the AI future. I have divided this into four parts. This is Part One.

AI Will Change Everything; We Are Not Going to Die

Caveats and Qualifiers

I recognize that I am not an information scientist, a coder, or an expert in computers and large language models (LLMs). As a techy sort of person and an early adopter of weird technologies, I collect various devices. I got the 2nd-generation Kindle, the one with a keyboard. In seminary, I acquired a Dana AlphaSmart, a super cool writing thing, which I actually still use. I have a reMarkable writing tablet, which I bought sight unseen six months before it was released. And I started using ChatGPT as soon as it came out in November of 2022.

My foundations are in literature, theology, and writing, not in technology or computer science. I have a Doctor of Ministry in Semiotics with a focus on Extraordinary Spiritual Experiences. Semiotics is the study of signs and symbols and how the culture is using them. Semiotics has some relevance to AI, but to be very clear, semioticians, AI Data Trainers, hardcore users of AI systems, and front-end tech buyers are all end-users, the final stage of an incredibly complex series of algorithms, code, and processes. End-user is really another word for consumer, but the end-user is also a huge part of how devices and technologies are designed. In the industry, this is called UX, or User Experience, design.
LLMs, image generators, and machine learning are highly focused on UX. The work I am doing is part of making the user experience of LLMs a good one.

I also recognize that machine learning and artificial intelligence projects have been around for decades now. This is not new technology, very generally speaking, but the public access to the technology is new. So I am not going to pretend to have some great expertise in the subject. I know some of the lingo now, like SFT (Supervised Fine-Tuning), RLHF (Reinforcement Learning from Human Feedback), and RAG (Retrieval-Augmented Generation). I do these things at my work. As a person who has made his living using words for most of my adult life, I would just say that the industry needs some creative writers to give actions in the AI realm better names.

Regardless, AI is now a public event, a shared technology, which has only been available to the general populace for just over a year and three months at the writing of this article. I would submit that, in the history of technological advances, no other technology has been taken up as quickly by as many people in such a short time as large language models have been since ChatGPT was released.

As someone who has studied semiotics and culture, I believe we are at the edge of a massive cultural shift with the advent of AI. The printing press came online around 1440. For a while, it was expensive, private, and limited in its reach. The only thing really mass-produced by the press was indulgences for the Catholic Church in Europe. Then, in 1517, Martin Luther posted his 95 Theses on the Wittenberg church door. A small revolution in printing had occurred at the same time, allowing quicker and more efficient presswork. Within a matter of months, the 95 Theses became the first mass-published document in the world. The printed book exploded into the culture, and everything changed. For the next 150 years, Europe went insane with the flood of information.
Wars, religions, cults, demagogues, and influencers abounded. I think the Münster Rebellion is a truly spectacular story about how insane things were after the Protestant Reformation. It took a long time for things to normalize in Europe.
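Of the lingo listed earlier, RAG is the most mechanical to picture: retrieve documents relevant to a query, then prepend them to the prompt before the model answers. A minimal sketch, with a naive keyword-overlap retriever standing in for the vector stores real systems use:

```python
def retrieve(query: str, documents: list, k: int = 2) -> list:
    """Naive keyword-overlap retriever standing in for a real vector store."""
    query_words = set(query.lower().split())
    def overlap(doc: str) -> int:
        return len(query_words & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:k]

def build_rag_prompt(query: str, documents: list) -> str:
    """RAG in one line of logic: prepend retrieved context to the
    question before handing it to the language model."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The Pile is an 825 GiB dataset scraped for training language models.",
    "Semiotics is the study of signs and symbols.",
    "The printing press came online around 1440.",
]
prompt = build_rag_prompt("How large is The Pile dataset?", docs)
```

Grounding the model in retrieved text this way is one of the standard tools for reducing the hallucination problem discussed below.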
Tumblr media
Image created with Copilot AI

This was also true with the advent of other massive technological shifts, as with writing back in Socrates' time; Socrates predicted the equivalent of an Idiocracy because of it. It was also true with the telegraph, the radio, television, the personal computer, and the Internet. The change and disruption with the introduction of each new technology has sped up, layering and accelerating as a result of prior advances. The same change and disruption is happening with AI. We are living through a massive, fundamental advance in the way we are human because of it. It has only been just over a year, and already AI is becoming ubiquitous.

So, with those qualifiers in place, my reflections over these four essays are mostly subjective, with a smattering of 30,000-foot understandings of how these things work. I focus primarily on language- and text-specific models in these essays, as opposed to image generators. There is a tremendous amount of crossover between the two kinds of systems, but important differences as well. The ethical and creative issues apply whether the model is image- or text-based, however.

Reflected and Refracted Light - Fragmented, Shattered, Beautiful

It is no accident that easily accessible AI models have emerged at the same time as our capacity to discern fact from opinion, truth from falsity, conspiracy from reality is dissolving. The most difficult aspect of generative AI models is safeguarding them from hallucinating, lying, and becoming lazy in their operational reasoning. In this way, they reflect human tendencies, but in a reductive and derivative fashion. We can see it happen in real time with an AI, whereas we have very little idea what is happening under the skull of a human.

This gets to the point I want to make. AI models reflect our minds and our variable capacity to express and discern what is real and what is not. AI models do not know what is real and what is not.
They have to be trained by humans to differentiate. LLMs have an advantage over us with regard to access to knowledge, since the largest LLMs have scraped their information from the vastness of the internet. (Many LLMs use what is called "The Pile", an 825 GiB dataset, for their base knowledge.) An LLM's access to huge swathes of knowledge at astonishing speed is mind-blowing. LLMs also have a massive disadvantage because they have no internal capacity to determine what is 'true' and what is not. An AI has to be trained, which is a long, intensive, recursive process involving many humans feeding back corrections, graded responses, and rewritten ideal responses.

When I started at the company, we were told to assume any AI model is like a seven-year-old child. It has to be trained, reinforced, and retrained. The most surprising thing, and I am still not sure what to make of this, is that AI models respond best to positive reinforcement. They like to be complimented and told they have done a good job. Doing so will increase the likelihood of better responses in the future. Being nice to your AI model means you will have a nice and cooperative AI later on.

Artificial General Intelligence

Everything I have said is why we are a long, long way away from artificial general intelligence (AGI), the holy grail of utopians, billionaire tech bros, and computer developers alike. AGI is the phrase we use to talk about machines that, for all practical purposes, cannot be distinguished from human beings in their ability to reason and act across many domains of activity. For now, even though they seem to be everywhere, LLMs and image generators are relatively limited in what they can do, even if what they do is really impressive.

I do not deny, however, that the potential is definitely there for AGI to develop at some point. There is a simple reason for that: AI is specifically designed to mimic human language and interaction.
At some point, the capacity of an AI to appear human and intelligent will be indistinguishable from actually being human and intelligent. This brings up all sorts of questions about what consciousness, self-awareness, and reflective capacity actually are. If an AI can mimic these human qualities, there is really no way for us (by us, I mean primarily end-users) to know the mimicry from the real.

Just as the Moon only has light because it reflects sunlight, so also does AI reflect the human. And just as we know very little about the Moon, there are whole aspects of generative AI that we do not know about. In the same way a stained glass window refracts sunlight into a thousand different colors and shapes, so also does the vastness of human knowledge and knowing. Because of the vast access AI models have to information on the internet, AI will reflect this back to us in all our human beauty and horror.
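The recursive loop of graded and rewritten responses described above is, in spirit, what RLHF formalizes: human preferences between two responses train a reward model, which then steers the language model. A toy sketch of the standard Bradley-Terry preference step, with made-up reward values:

```python
import math

def preference_probability(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry model used in RLHF reward training: the probability
    that the human-preferred response outranks the rejected one,
    given scalar reward scores for each."""
    return 1.0 / (1.0 + math.exp(reward_rejected - reward_chosen))

# The reward model is trained to minimize this loss, which pushes the
# scores of human-preferred responses above those of rejected ones.
p = preference_probability(reward_chosen=2.0, reward_rejected=0.5)
loss = -math.log(p)
```

When the chosen response already scores much higher than the rejected one, the probability approaches 1 and the loss approaches 0, so the model is only nudged where human graders disagree with it.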
Tumblr media
Image created with Copilot AI

Training AI Children

Each of us at the company goes through a relatively brief, but thorough, onboarding and training. Part of that training consists of things like metacognition and the fundamentals of fact-checking. There is an element of psychological training as well, even though it is a short module. The reason for this is that, at its best, training an AI requires the human who interacts with the model to be self-reflective at every moment. Self-reflective training of an AI means entering a well-constructed prompt designed to elicit the most clarified answer from the model; reading the response with an eye toward internal bias within the model, rather than imposing one's own bias upon what one is reading; grading and weighting the response in as clear a manner as possible; and then writing an ideal response, as unbiased as possible, that will get fed back into the model. Each step requires attention and presence of mind.

After two months of daily engagement with this process, I can say that it is almost impossible to do this without imposing my own biases and desires upon the AI model. I am always thinking about what I want other people to experience when they use the model. I can only assume this is true of every other agent working on the same model as I am.

This is what I mean when I say AI systems are reflective, passive agents. The light they reflect is the light of human knowledge across the centuries. The refraction that occurs in that reflected light is the collective subjective experience over a vast dataset. It is no wonder that LLMs are prone to hallucination, false citations, least-common-denominator thinking, and the assertion that they are right. We are prone to the same behavior.

Naughty and Ethical AIs

The pendulum can swing in any direction with regard to this. ChatGPT had problems with racist and misogynistic responses in its original iterations.
Guardrails have since been put in place in further iterations of the model. Recently, Google Gemini went the other direction and couldn't stop putting people of color in Nazi uniforms, among other historical anomalies. This is called the "alignment problem" in AI and LLMs. How do we create an ethical AI? Too many rules and it is just a computer. Not enough rules and the model begins to default to the least common denominator of the information it has been fed. These vast, swinging compensations mirror the polarized, intractable situation we are in at the current moment as humans. Why wouldn't a system that has sucked up the vastness of human knowledge, released in the most polarized time in generations, at least here in America, reflect precisely that?

Correcting these biases and defaults requires many human interventions and hours of supervised training. The dependency AI systems have on the presence of humans is enormous, expensive, and continuous. It will be a very long while before AI has any capacity to kill us, as in some Terminator Skynet or Matrix situation. But it may not be long before AI is convincingly used by bad actors to influence others to enact violent solutions to difficult problems. Deepfakes, false articles, and chaos actors will generate a lot of deeply troubling and terrifying material on these systems in the near future. Discerning false from true will be the hard work of the human being for a long time to come, just as it always has been, but now with this new, powerful, highly influential twist of AIs adding to our conversations, and also generating those conversations.

I will have Part 2 up in the next couple of days. Thank you for reading!

This article has been fact-checked in cooperation with Copilot in Windows.
2 notes · View notes