Tumgik
#A106
meion · 9 months
Text
Tumblr media
Lab 7 LAN party
879 notes · View notes
astroboyart · 2 months
Text
Tumblr media
Character design artwork for A107 and A106 in Atom: The Beginning, featuring their pre-timeskip and post-timeskip character designs together.
68 notes · View notes
pkstarstruck · 2 months
Text
Tumblr media Tumblr media Tumblr media Tumblr media
i hand you these sketches like pocket change and lint
54 notes · View notes
peerless-cucumber · 10 months
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
Atom: The Beginning various chapter covers
167 notes · View notes
xblubotx · 9 months
Text
Tumblr media Tumblr media Tumblr media
ATB MS Paint doodles
167 notes · View notes
ask-cloverfield · 10 months
Text
Tumblr media Tumblr media Tumblr media
80 notes · View notes
souppunch · 1 year
Text
Tumblr media
Crocs
87 notes · View notes
saanaitoo · 8 months
Text
atb doodles
Tumblr media Tumblr media
not sure if someone did this already but i had a sudden urge to draw it
31 notes · View notes
Text
Atom: The Beginning & AI Cybersecurity
Tumblr media
Atom: The Beginning is a manga about two researchers creating advanced robotic AI systems, such as unit A106. Their breakthrough is the Bewusstein (Translation: awareness) system, which aims to give robots a "heart", or a kind of empathy. In volume 2, A106, or Atom, manages to "beat" the highly advanced robot Mars in a fight using a highly abstracted machine language over WiFi to persuade it to stop.
Tumblr media
This may be fiction, but it has parallels with current AI development in the use of specific commands to over-run safety guides. This has been demonstrated in GPT models, such as ChatGPT, where users are able to subvert models to get them to output "banned" information by "pretending" to be another AI system, or other means.
There are parallels to Atom, in a sense with users effectively "persuading" the system to empathise. In reality, this is the consequence of training Large Language Models (LLM's) on relatively un-sorted input data. Until recent guardrail placed by OpenAI there were no commands to "stop" the AI from pretending to be an AI from being a human who COULD perform these actions.
As one research paper put it:
"Such attacks can result in erroneous outputs, model-generated hate speech, and the exposure of users’ sensitive information." Branch, et al. 2022
Tumblr media
There are, however, more deliberately malicious actions which AI developers can take to introduce backdoors.
In Atom, Volume 4, Atom faces off against Ivan - a Russian military robot. Ivan, however, has been programmed with data collected from the fight between Mars and Atom.
Tumblr media
What the human researchers in the manga didn't realise, was the code transmissions were a kind of highly abstracted machine level conversation. Regardless, the "anti-viral" commands were implemented into Ivan and, as a result, Ivan parrots the words Atom used back to it, causing Atom to deliberately hold back.
Tumblr media
In AI cybersecurity terms, this is effectively an AI-on-AI prompt injection attack. Attempting to use the words of the AI against itself to perform malicious acts. Not only can this occur, but AI creators can plant "backdoor commands" into AI systems on creation, where a specific set of inputs can activate functionality hidden to regular users.
Tumblr media
This is a key security issue for any company training AI systems, and has led many to reconsider outsourcing AI training of potential high-risk AI systems. Researchers, such as Shafi Goldwasser at UC Berkley are at the cutting edge of this research, doing work compared to the key encryption standards and algorithms research of the 1950s and 60s which have led to today's modern world of highly secure online transactions and messaging services.
From returning database entries, to controlling applied hardware, it is key that these dangers are fully understood on a deep mathematical, logical, basis or else we face the dangerous prospect of future AI systems which can be turned against users.
As AI further develops as a field, these kinds of attacks will need to be prevented, or mitigated against, to ensure the safety of systems that people interact with.
References:
Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack - Ars Technica (16/09/2023)
EVALUATING THE SUSCEPTIBILITY OF PRE-TRAINED LANGUAGE MODELS VIA HANDCRAFTED ADVERSARIAL EXAMPLES - Hezekiah Branch et. al, 2022 Funded by Preamble
In Neural Networks, Unbreakable Locks Can Hide Invisible Doors - Quanta Magazine (02/03/2023)
Planting Undetectable Backdoors in Machine Learning Models - Shafi Goldwasser et.al, UC Berkeley, 2022
9 notes · View notes
rkn001 · 2 years
Photo
Tumblr media
warmup doodle
93 notes · View notes
z-skull · 2 years
Text
Tumblr media Tumblr media
final design?
Tumblr media
Eva 000
Tumblr media Tumblr media Tumblr media
ATB Evangelion AU my beloved...
42 notes · View notes
k-a-n-e-d-a · 2 years
Text
Tumblr media
15 notes · View notes
astroboyart · 8 months
Text
Tumblr media
Source: Comiplex
Chapter 110 (boot_110) of Atom: The Beginning is now available to read on the Comiplex website! It will be available to read for free in it's entirety from August 25, 2023 to September 29, 2023.
Featured on the cover art is Tobio Tenma and A106 (Six).
This cover also announces that Volume 19 of Atom: The Beginning releases on September 5, 2023.
61 notes · View notes
pkstarstruck · 3 months
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
Who was up atoming the beginning….
44 notes · View notes
Text
Tumblr media Tumblr media Tumblr media
1955 Alpine A 106
My tumblr-blogs: germancarssince1946 & frenchcarssince1946
2 notes · View notes
revesdautomobiles · 5 months
Text
Tumblr media
1 note · View note