0100100100101101 · 4 years
Photo
https://www.umibozu.com/
113 notes · View notes
0100100100101101 · 4 years
Photo
574 notes · View notes
0100100100101101 · 4 years
Text
All Your Memories Are Stored by One Weird, Ancient Molecule
           How does memory work? The further we seem to dive in, the more questions we stumble upon about how the function of memory first evolved. Scientists made a key breakthrough with the identification of the Arc protein in 1995, observing how its role in the plastic changes in neurons was critical to memory consolidation.    
           This protein is already a big deal, but the Arc picture just got a lot more interesting. In a 2018 study published in the journal Cell, a team of researchers at the University of Utah, the University of Copenhagen in Denmark, and MRC Laboratory of Molecular Biology in Cambridge, UK, argue that Arc took its place in the brain as a result of a random chance encounter millions of years ago. Similar to how scientists say the mitochondria in our cells originated as bacteria that our ancient ancestors’ cells absorbed, the Arc protein seems to have started as a virus.    
The researchers knew they were onto something when they captured an image of Arc that looked an awful lot like a viral capsid, the icosahedral protein coat that encapsulates a virus’s genetic material for delivery to host cells during infection.
           “At the time, we didn’t know much about the molecular function or evolutionary history of Arc,” says study coauthor Jason Shepherd, an assistant professor of neurobiology, anatomy, biochemistry, and ophthalmology at the University of Utah, in a statement. Shepherd has studied Arc for 15 years. “I had almost lost interest in the protein, to be honest. After seeing the capsids, we knew we were onto something interesting.”    
           The main issue that challenges neuroscientists’ understanding of memory is that proteins don’t last very long in the brain, even though memories last nearly a lifetime. So for memories to remain, there must be plastic changes, meaning that neuron structures actually have to change as a result of memory consolidation.    
This is where Arc comes into play. Previous research in rats showed that disrupting Arc impairs memory consolidation, suggesting that Arc is vital to neuronal plasticity.
           But scientists never thought they would stumble on evidence that pointed to a viral origin for Arc, as these findings suggest.    
           The research team needed to verify this theory, so they tested whether Arc actually acts like a virus. It turns out the Arc capsid encapsulated its own RNA. When they put the Arc capsids into a mouse brain cell culture, the capsids transferred their RNA to the mouse brain cells — just like viral infection does.    
           “We went into this line of research knowing that Arc was special in many ways, but when we discovered that Arc was able to mediate cell-to-cell transport of RNA, we were floored,” says the study’s lead author, postdoctoral fellow Elissa Pastuzyn, Ph.D., in a statement. “No other non-viral protein that we know of acts in this way.”    
           The researchers suspect this virus-mammal collaboration happened sometime between 350 and 400 million years ago when a retrotransposon — the ancestor of modern retroviruses — got its DNA into a four-legged creature. They also suspect that this happened more than once. If they’re right, this research complicates the picture of the evolution of life as we know it. Not only did many mutations happen by random chance to make us what we are today, but we actually borrowed biology from other cells and organisms to get here. A little bit of their history lives on in us today.    
Abstract: The neuronal gene Arc is essential for long-lasting information storage in the mammalian brain, mediates various forms of synaptic plasticity, and has been implicated in neurodevelopmental disorders. However, little is known about Arc’s molecular function and evolutionary origins. Here, we show that Arc self-assembles into virus-like capsids that encapsulate RNA. Endogenous Arc protein is released from neurons in extracellular vesicles that mediate the transfer of Arc mRNA into new target cells, where it can undergo activity-dependent translation. Purified Arc capsids are endocytosed and are able to transfer Arc mRNA into the cytoplasm of neurons. These results show that Arc exhibits similar molecular properties to retroviral Gag proteins. Evolutionary analysis indicates that Arc is derived from a vertebrate lineage of Ty3/gypsy retrotransposons, which are also ancestors to retroviruses. These findings suggest that Gag retroelements have been repurposed during evolution to mediate intercellular communication in the nervous system.
22 notes · View notes
0100100100101101 · 5 years
Link
6 notes · View notes
0100100100101101 · 6 years
Link
3 notes · View notes
0100100100101101 · 6 years
Link
The ability to send thoughts directly to another person’s brain is the stuff of science fiction. At least, it used to be.
In recent years, physicists and neuroscientists have developed an armory of tools that can sense certain kinds of thoughts and transmit information about them into other brains. That has made brain-to-brain communication a reality.
These tools include electroencephalograms (EEGs) that record electrical activity in the brain and transcranial magnetic stimulation (TMS), which can transmit information into the brain.
In 2015, Andrea Stocco and his colleagues at the University of Washington in Seattle used this gear to connect two people via a brain-to-brain interface. The people then played a 20 questions–type game.
An obvious next step is to allow several people to join such a conversation, and today Stocco and his colleagues announced they have achieved this using a world-first brain-to-brain network. The network, which they call BrainNet, allows a small group to play a collaborative Tetris-like game. “Our results raise the possibility of future brain-to-brain interfaces that enable cooperative problem-solving by humans using a ‘social network’ of connected brains,” they say.
The technology behind the network is relatively straightforward. EEGs consist of a number of electrodes placed on the scalp that pick up the electrical activity of the brain.
A key idea is that people can change the signals their brain produces relatively easily. For example, brain signals can easily become entrained with external ones. So watching a light flashing at 15 hertz causes the brain to emit a strong electrical signal at the same frequency. Switching attention to a light flashing at 17 Hz changes the frequency of the brain signal in a way an EEG can spot relatively easily.
TMS manipulates brain activity by inducing electrical activity in specific brain areas. For example, a magnetic pulse focused onto the occipital cortex triggers the sensation of seeing a flash of light, known as a phosphene.
Together, these devices make it possible to send and receive signals directly to and from the brain. But nobody has created a network that allows group communication. Until now.
Stocco and his colleagues have created a network that allows three individuals to send and receive information directly to their brains. They say the network is easily scalable and limited only by the availability of EEG and TMS devices.
The proof-of-principle network connects three people: two senders and one person able to receive and transmit, all in separate rooms and unable to communicate conventionally. The group together has to solve a Tetris-like game in which a falling block has to be rotated so that it fits into a space at the bottom of the screen.
The two senders, wearing EEGs, can both see the full screen. The game is designed so the shape of the descending block fits in the bottom row either if it is rotated by 180 degrees or if it is not rotated. The senders have to decide which and broadcast the information to the third member of the group.
To do this, they vary the signal their brains produce. If the EEG picks up a 15 Hz signal from their brains, it moves a cursor toward the right-hand side of the screen. When the cursor reaches the right-hand side, the device sends a signal to the receiver to rotate the block.
The senders can control their brain signals by staring at LEDs on either side of the screen—one flashing at 15 Hz and the other at 17 Hz.
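In signal-processing terms, the sender side is a steady-state visually evoked potential (SSVEP) classifier: decide which of the two flicker frequencies dominates a window of EEG. Below is a minimal Python sketch of that idea for a single channel; the sampling rate, window handling, and bandwidth are assumptions for illustration, not details of the BrainNet rig.

```python
import numpy as np

def band_power(window, fs, target_hz, half_bw=0.5):
    """Spectral power of a 1-D EEG window within +/- half_bw of target_hz."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    band = (freqs >= target_hz - half_bw) & (freqs <= target_hz + half_bw)
    return spectrum[band].sum()

def decode_sender_choice(eeg_window, fs=250):
    """Return "rotate" if the 15 Hz flicker dominates, else "keep"."""
    p15 = band_power(eeg_window, fs, 15.0)
    p17 = band_power(eeg_window, fs, 17.0)
    return "rotate" if p15 > p17 else "keep"
```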
The receiver, attached to an EEG and a TMS, has a different task. The receiver can see only the top half of the Tetris screen, and so can see the block but not how it should be rotated. However, the receiver receives signals via the TMS from each sender, saying either “rotate” or “do not rotate.”
The signals consist of a single phosphene to indicate the block must be rotated or no flash of light to indicate that it should not be rotated. So the data rate is low—just one bit per interaction.
Having received data from both senders, the receiver performs the action. But crucially, the game allows for another round of interaction.
The senders can see the block falling and so can determine whether the receiver has made the right call and transmit the next course of action—either rotate or not—in another round of communication.
This allows the researchers to have some fun. In some of the trials they deliberately change the information from one sender to see if the receiver can determine whether to ignore it. That introduces an element of error often reflected in real social situations.
But the question they investigate is whether humans can work out what to do when the data rates are so low. It turns out humans, being social animals, can distinguish between the correct and false information using the brain-to-brain protocol alone.
That’s interesting work that paves the way for more complex networks. The team says the information travels across a bespoke network set up between three rooms in their labs. However, there is no reason why the network cannot be extended to the Internet, allowing participants around the world to collaborate.
“A cloud-based brain-to-brain interface server could direct information transmission between any set of devices on the brain-to-brain interface network and make it globally operable through the Internet, thereby allowing cloud-based interactions between brains on a global scale,” Stocco and his colleagues say. “The pursuit of such brain-to-brain interfaces has the potential to not only open new frontiers in human communication and collaboration but also provide us with a deeper understanding of the human brain.”
35 notes · View notes
0100100100101101 · 6 years
Photo
Untitled
Instagram: umibozu_
2K notes · View notes
0100100100101101 · 6 years
Link
Facial recognition systems have become ruthlessly efficient at picking people out of a crowd in recent years, and people are finding ways to thwart the artificial intelligence that powers them. Research has already shown that AI can be fooled into seeing something that’s not there, and now these algorithms can be hijacked and reprogrammed.
Despite recent advances, the technology behind facial recognition, a type of deep learning called machine vision, leaves much to be desired. Many computer vision algorithms are still at a point where they’re liable to make mistakes, such as mislabeling a turtle as a rifle. These mistakes can be weaponized by subtly manipulating images so that they cause computers to “see” specific things—for example, a few stickers on a stop sign can cause a self-driving car to misread the sign entirely.
As detailed in a recent paper posted to arXiv, three Google Brain researchers have taken this type of malicious image manipulation (called adversarial examples) a step further and demonstrated that small changes to images can actually force a machine learning algorithm to do free computations for the attacker, even if it wasn’t originally trained to do these types of computations. This opens the door for the possibility of attackers being able to hijack our increasingly AI-driven smartphones by exposing them to subtly manipulated images.
Deep learning uses artificial neural networks—a type of computing architecture loosely modeled on the human brain—to teach machines how to recognize patterns by feeding them a lot of data. So, for example, if you wanted to teach a neural network how to recognize an image of a cat, you’d feed it tens of thousands, if not millions, of examples of cat pics so that the algorithm can determine general parameters that constitute “cat-ness”. If you then present the machine with an image that contains an object that falls within these parameters, and if this training stage was successful, it will determine that the image contains a cat with a high degree of certainty.
Strange things start to happen with machine vision neural nets when they’re fed certain pictures that would be meaningless to humans, however. In a picture that otherwise just looks like static, a machine vision algorithm might be very confident that the image contains a centipede or a leopard. Moreover, static can be overlaid on normal images in a way that’s imperceptible to humans, yet throws the machine vision algorithm for a loop, kind of like how smart TVs can be triggered to perform various tasks by audio that is inaudible to humans.
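That imperceptible static is usually produced by following the gradient of the classifier's loss with respect to the input pixels. As a concrete illustration, here is the classic fast gradient sign method (FGSM, from Goodfellow et al.) in PyTorch; this is the textbook adversarial-example recipe, not the specific attack from the paper discussed next.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, eps=0.01):
    """Add a tiny, human-imperceptible perturbation that pushes the
    model's prediction away from the true label.
    image: (1, C, H, W) tensor with values in [0, 1]; true_label: (1,) tensor.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- eps in the direction that increases the loss.
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```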
The Google Brain researchers took the concept of adversarial examples a step further with adversarial reprogramming: causing a machine vision algorithm to perform a task other than the one it was trained to perform.
The researchers demonstrated a relatively simple adversarial reprogramming attack that got a machine vision algorithm that was originally trained to recognize things like animals to count the number of squares in an image. To do this, they generated images that consisted of psychedelic static with a black grid in the middle. Some of the squares in this 4x4 black grid were randomly selected to be white. This was the adversarial image.
The researchers then mapped these adversarial images to image classifications from ImageNet, a massive database used to train machine learning algorithms. The mapping between the ImageNet classifications and the adversarial images was arbitrary and represented the number of white squares in the adversarial image. For example, if the adversarial image contained two white squares, this would return the value ‘Goldfish,’ while an image with 10 white squares would return the value ‘Ostrich.’
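Spelled out, the arbitrary mapping works like a lookup table. The sketch below reconstructs it from the examples quoted in this article (two white squares means "goldfish," ten means "ostrich"): conveniently, those line up with the first ten ImageNet class labels standing in for the counts one through ten.

```python
# First ten ImageNet labels, reused here as stand-ins for the counts 1-10.
COUNT_LABELS = [
    "tench", "goldfish", "great white shark", "tiger shark", "hammerhead",
    "electric ray", "stingray", "cock", "hen", "ostrich",
]

def label_for_count(n_white_squares):
    """ImageNet label the hijacked classifier should emit for a count."""
    return COUNT_LABELS[n_white_squares - 1]

def count_for_label(label):
    """Recover the square count from the classifier's 'animal' answer."""
    return COUNT_LABELS.index(label) + 1

assert label_for_count(2) == "goldfish" and count_for_label("hen") == 9
```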
The scenario modeled by the researchers imagined that an attacker knows the parameters of a machine vision network trained to do a specific task and then reprograms the network to do free computations on a task it wasn’t originally trained to do. In this case, the neural network being attacked was trained as an ImageNet classifier trained to recognize animals.
To manipulate this ImageNet classifier to do free computations, the researchers embedded the black boxes with white squares in 100,000 ImageNet images and then allowed the machine vision network to proceed as usual. If the image contained a black box with nine squares, for example, it would report back that it saw a hen, and the researchers would know it had correctly counted nine squares since this was the ImageNet classification that mapped to the number nine.
The technique turned out to be remarkably effective. The algorithm counted squares correctly in over 99 percent of the 100,000 images, the authors wrote.
Although this was a simple example, the researchers argue that these types of adversarial reprogramming attacks could be much more sophisticated in the future.
“A variety of nefarious ends may be achievable if machine learning systems can be reprogrammed by a specially crafted input,” the researchers wrote. “For instance, as phones increasingly act as AI-driven digital assistants, the plausibility of reprogramming someone’s phone by exposing it to an adversarial image or audio file increases. As these digital assistants have access to a user’s email, calendar, social media accounts, and credit cards the consequences of this type of attack also grow larger.”
13 notes · View notes
0100100100101101 · 6 years
Link
Dashing around a battlefield in the bulky robo-armor Tom Cruise wore in Edge of Tomorrow won't cut it in the real world. For starters, it’s way too big. And the energy required to power something that size—via a gas engine strapped to your back in some early inventor iterations—is noisy and a giveaway to the enemy that you’re approaching.

But a raft of newly developed exoskeletons is starting to meet the slimmed-down, stealth requirements of today’s troop commanders, who see these power-assisting suits as vital to future combat missions. Among the most promising, and weirdest-looking, is the “third arm” that the U.S. Army Research Laboratory developed to help soldiers carry and support their weapons on the battlefield. The lightweight device, which weighs less than four pounds and hangs at a soldier’s side, stabilizes rifles and machine guns, which can weigh up to 27 pounds. This improves shooting accuracy and also minimizes fatigue. It can even be used while scrambling into position on the ground.
The kind of fatigue that the third arm aims to negate is a killer on the battlefield, and most of the new suits are similarly meant to help troops minimize the energy they use to carry enormous supply packs, weapons, and other battlefield gear. In May, Lockheed Martin unveiled its lightest-weight powered exo for lower-body support. Dubbed ONYX, the form-fitting suit, which resembles an unobtrusive web of athletic braces, reduces the effort soldiers need for walking, running, and climbing over varied terrain while carrying heavy loads of up to 100 pounds.
The suit uses tracking sensors, mechanical knee actuators, and artificial intelligence-based software that predicts joint movement, all of which reduce stress on the lower back and the legs. It uses both rigid and flexible components, which fit snugly to make it more comfortable, and is meant to augment the user’s capabilities rather than do any of the actual work for him or her. “The system’s on-board computer uses an artificial intelligence algorithm to read and interpret motion sensors placed in key locations through the exoskeleton,” explains Keith Maxwell, Lockheed’s senior exoskeleton program manager. “ONYX tracks how the leg moves, understands the motion and provides a boost, assisting the knee at just the right time. This reduces stress on the lower extremities, increases stamina and improves endurance.”
The trick, according to Maxwell, is having the assistance come at just the right time, in order to prevent the wearer from falling out of sync with the device, or “fighting” it. ONYX syncs up with the user, but there’s room for even more improvement in the future, once it’s able to read electrical impulses directly. “It takes 75 milliseconds for the human body to go from thought to action,” said Maxwell, referring to the time it takes for electrical impulses from the brain to initiate muscle movement in the body. “We’re getting inside that control loop—detecting that signal at the muscle and initiating our movement at the same time the muscle moves.”
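Maxwell's description amounts to a timed control loop: sample the sensors, estimate where the leg is in its gait, and inject torque only in the window where it helps. The Python sketch below is a loose illustration of that loop; the sensor and actuator objects, the gait-phase model, and the timing are all invented for the example, not Lockheed's software.

```python
import time

def onyx_style_assist_loop(sensors, knee_actuator, gait_model, hz=200):
    """Hypothetical assist loop: read motion sensors, predict the gait
    phase, and apply knee torque only when the model says it helps."""
    period = 1.0 / hz
    while True:
        state = sensors.read()                   # joint angles, velocities
        phase = gait_model.predict_phase(state)  # e.g. "stance" or "swing"
        if phase == "stance":
            # Boost the knee at just the right time, as Maxwell describes.
            knee_actuator.apply_torque(gait_model.assist_torque(state))
        else:
            knee_actuator.apply_torque(0.0)      # don't fight the wearer
        time.sleep(period)
```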
As a result, the exoskeleton quickly becomes second nature, ultimately reducing the metabolic “cost” of transport—measured by the wearer’s oxygen consumption—by nine percent. In other words, soldiers don’t have to consume as much oxygen to perform a given task, Maxwell says. Taking that edge off has associated benefits. It improves psychological performance and cuts the risk of bone and muscle injuries. Furthermore, the motorized, battery-powered suit, which can run for eight hours on two batteries and up to 16 hours on four, helps evenly distribute weight while carrying loads, helping maintain skeletal alignment. This avoids overstress and pressure injuries.
As an example of the net benefit, the company says a soldier who could normally perform 26 reps of 185-lb squats could perform 72 similar reps while wearing ONYX. “It won’t make you stronger or faster, but it will help you go longer,” says Maxwell, a frequent wearer of the exoskeleton who notes that doing so makes his own knee injuries essentially vanish. During a demonstration of the suit last week, he showed this off by performing an essentially effortless squat that he says he could maintain for an hour if needed.
Lockheed engineers are currently working on hardening the components so they can withstand battlefield use and improving the harnessing so it’s even more discreet and unobtrusive. They are also tweaking the fitting process to ensure that soldiers can properly adjust it without tools, in the field.
While Lockheed’s version is certainly streamlined, one of its competitors, Boston-based Dephy, has taken exo design a step beyond by further minimizing the hardware needed. Its ExoBoot improves overall mobility, but limits its physical presence to just the ankle. According to Dephy cofounder Luke Mooney, many exos work to bypass the human frame and transfer the loads soldiers carry into the ground. But those systems are hampered by having to move their own weight, as well. Dephy’s ExoBoot instead works to provide artificial external muscles, focusing on the ankle joint rather than the knee that ONYX targets. “ExoBoot reduces calf muscle effort by providing an external torque about the ankle joint,” Mooney said. “Reducing muscle work reduces the operator effort—i.e., the metabolic cost—while also reducing muscle forces and the corresponding joint forces.”
Unassisted, calf muscles apply more than 1,000 pounds across the ankle joint even during normal walking, Mooney explained. That number is much higher while running or carrying a load. The boot uses an electric motor and onboard sensors and a controller to apply torque around the ankle. It feels like a normal boot when unpowered, though about a pound heavier, but most of the extra mass from the battery and electronics sits higher up the calf, making it feel lighter. When you take a few steps, the controller analyzes the forces and begins to provide supplemental torque to the ankle joint through tiny electric motors. Though the wearer can sense the presence of the boot at first, as well as the mechanical augmentation it’s providing, his or her body quickly adapts, rendering its presence unnoticeable, and therefore not a distraction or inhibitor—though the company reports that when the boot is switched off while in use, users who’ve grown accustomed to it describe their legs as feeling briefly like lead before returning to their more natural feeling. (Though obviously the exertion then ramps up, as well, once the boot is no longer supplementing their muscles.)
Because these systems are still in development, neither the Army nor the defense contractors will disclose estimated, per-unit costs. Of course, as with a lot of military innovations, they also have outside applications—first responders deploying them in emergency and disaster situations, plus countless uses for commercial and industrial workers—that will also help keep development and the overall final costs low for all parties.
There’s plenty of work still to be done. Acoustics, for instance, will be key to military users. The public videos released of Lockheed’s system are absent sound, but while observing the system last week, it was audible—via persistent electro-mechanical squeaks—but not particularly loud. Still, the next version, Maxwell says, will be even quieter.
Testing the systems could begin with field trials with the 10th Mountain Division at Fort Drum, N.Y., as soon as the end of this year. Such powered devices are part of the Army’s so-called Third Offset Strategy, which seeks to use robotics and artificial intelligence to enhance humans on the battlefield, rather than to replace them.
Actual deployment will depend on how well the systems perform in the trials, which will include everything from flat-surface and incline tests to comfort and usability evaluations by Army soldiers, and any further enhancements or modifications that might be needed as a result. One day, though, these systems could give Tom Cruise’s clunky getup a lightweight run for its money.
28 notes · View notes
0100100100101101 · 6 years
Link
At the beginning of Detroit: Become Human, a video game about American androids fighting for equal rights, a character looks out from the television screen and says, directly to the player, “Remember: This is not just a story. This is our future.”
It’s a bold claim. As Detroit’s story unfolds, the game switches between three different androids: household servant turned revolutionary leader Markus; Kara, a robot fleeing from government persecution with the abused child she rescued from her former boss; and Connor, an agent of the delightfully named megacorp CyberLife who hunts down “deviant androids” disobeying their programming. Through their perspectives, we’re meant to observe a technological future the game wants us to believe is, in fact, soon to come. Connor’s character may sound familiar. That’s because he’s essentially a recast of Rick Deckard, the titular Blade Runner from Ridley Scott’s 1982 sci-fi classic. In each case, Deckard and Connor are hunting aberrant robots, capturing and/or killing those who have broken free of their programming and attempting to live outside their intended roles as servants to humanity.
In both Detroit and Blade Runner the point of these robot hunters is to introduce the question of what separates humanity from a synthetic being so emotionally and intellectually advanced that it is indistinguishable from any member of our species. By the time we’ve watched the monologue from Blade Runner’s bleach-blond “replicant” robot Roy Batty (Rutger Hauer) about his memories vanishing “like tears in rain,” any hint of inhumanity feels irrelevant. He, like the soulful androids who populate Detroit, remembers his past in the same way we do. He loves. He can be sad. He thinks about his own mortality. The movie ends with the audience having been convinced that a robot with incredibly advanced artificial intelligence deserves to be treated better than a defective home appliance. Blade Runner, it bears repeating, was released in 1982. Detroit: Become Human came out in May of this year.
Again and again, Detroit attempts to pull its sci-fi storyline into the real world to convey the same message Blade Runner accomplished so many years ago. It evokes the American civil rights movement (its future Michigan features segregated shops and public transit where androids are kept to the back of city buses; one chapter is even called “Freedom March”), American slavery (the horrific abuses visited on the androids by their masters are regular enough to become numbing), and the Holocaust (extermination camps are set up to house revolutionary androids near the game’s finale) in order to do so. Others have done a great job running down the myriad ways in which Detroit fails in its evocation of the civil rights movement and class-based civil unrest. The poor taste inherent in its decision to make tone-deaf comparisons between its (multi-ethnic, apparently secular) robots and some of human history’s most reprehensible moments of violent prejudice is grotesque enough on its own. But it’s worth noting that on a dramatic level, Detroit also falls completely flat.
Its central point, presented with the satisfied air of a toddler smugly revealing that the family dog feels pain when you yank its tail, is that an android with a sophisticated sense of the world and itself deserves the same rights as any human. This seems like a philosophical problem that ought to have been put to bed around the time Blade Runner made the “dilemma” of android humanity part of mainstream pop culture. For decades now, audiences have watched, read, and played through stories that very persuasively argue there’s no good moral case for treating sufficiently advanced artificial intelligence—especially when housed in an independently thinking and feeling robot body—like dirt. To watch Roy Batty die in Blade Runner and feel nothing isn’t a failure of social and cultural empathy so much as a sign that the viewer is just kind of a monster. To release a video game in 2018 where players are honestly expected to experience conflicting emotions or a sense of emotional revelation when a completely humanistic robot is tortured or killed in cold blood ignores decades of genre-advancing history.
Even outside popular art, the past few decades have seen seismic shifts in our relationship with technology that should be impossible to ignore. In the ’80s, a home computer was revolutionary. Now, we live in an era where it’s completely mundane to ask talking boxes for trivia answers and maintain digital extensions of our personae on websites accessed through portable phones. We are not as suspicious of technology as we once were. It’s a part of us now—something we live with.
This shift is pretty clear in other areas of pop culture. Westworld—one of the highest profile sci-fi works in recent years—spent much of its first season retreading some of the same familiar ground as Detroit, but has found a more interesting path as it’s continued onward. While early episodes floundered with dramatically inert questions of whether sexually assaulting, torturing, and murdering lifelike thinking and feeling robots was an okay premise for an amusement park, it’s since moved on from hammering home the simplistic, insultingly moralizing lesson that “treating humanoid androids badly is the wrong thing to do.” At its best, characters like the show’s standout, Bernard Lowe—a tortured robot who is very well aware he is a robot—bring a welcome complexity.
Bernard, in actor Jeffrey Wright’s strongest performance to date, alternates naturally between a machine’s cold, vacant-eyed calculations and the trembling pathos of an android traumatized not only by the loss of his family and the violence of the world in which he lives, but also the knowledge that his memories are artificially coded and that his programming has led him to contribute to the horror of his surroundings. With this focus, viewers are given scenes far more philosophically troubling than the show’s earlier attempts to question whether it’s all right to kill humanlike robots for fun. In season two’s “Les Écorchés,” for example, Bernard is sat in a diagnostic interrogation and tormented by park co-creator Robert Ford (Anthony Hopkins), who, apparently, has entered his system in the form of a viral digital consciousness. Ford flits about his mind like a demonic possession. Bernard remembers killing others while under the intruder’s control. He cries and shakes like any human wracked with so much psychological pain would. “It’s like he’s trying to debug himself,” a technician notes. A digital read-out of Bernard’s synthetic brain shows his consciousness is “heavily fragmented,” as if under attack from a computer virus.
Rather than focus on simple ideas, the show acknowledges, in instances like these, that its audience is willing to accept an android character like Bernard as “human” enough to deserve empathy while remembering, too, that his mechanical nature introduces more compelling dramatic possibilities. Thankfully, Westworld’s second season has leaned further into this direction, moving (albeit at a glacial pace) toward stories about what it means for robots to embrace their freedom while being both deeply human and, due to their computerized nature, still fundamentally alien. By the end of the season, its earlier concern with flat moral questions has largely been swept away. Its finale, while still prone to narrative cliché elsewhere, shows a greater willingness to delve into explorations of how concepts like free will, mortality, and the nature of reality function for the computerized minds of its characters.
This is the sort of thing that elevates modern sci-fi, that reaffirms its potential for valuable speculation rather than just being a place to indulge familiar tropes and revisit nostalgic aesthetics. We see it in games like Nier: Automata, whose anime-tinged action is set in a far-future world where humanity has gone extinct, leaving behind only androids who must grapple with their minds persisting over centuries of samsara-like cycles of endless war against simpler machines trying to come to grips with their own intellectual awakening. We see it in Soma, which explores similar territory and turns it into soul-shaking horror by telling a story where people’s minds have been transplanted into synthetic consciousnesses, stored immortally on computers that reside in facilities dotting the inky depths of the ocean floor while the Earth dies out far above them. Like Bernard—and like many of the other characters now freeing themselves from both their shackles as Westworld’s park “hosts” and the narrative constraints of the show’s earlier episodes—these games transcend the outdated concerns of a story like Detroit. They give us something new to chew on, concerns that are not only intellectually fuller but also more reflective of where we are now as a technology-dependent species.
There’s no better summary of this change than the extremely belated Blade Runner sequel, Blade Runner 2049. Its predecessor was devoted entirely to convincing audiences that its assumedly inhuman replicants are worthy of empathy. It ended by asking if we’d even be able to tell the difference between a flesh-and-blood person and a synthetic one. Compare that to 2049, where protagonist K—Ryan Gosling playing a character with a suitably product-line-style name—is shown to be an android almost from the start. The plot of the film centers (like Detroit and Westworld) on a fast-approaching revolution where self-sufficient androids will overthrow their human creators, but the heart of its story is about the psychology of artificially intelligent beings. K is depicted as deeply troubled, grasping for affection from the mass-market hologram AI he’s in love with, grappling with the fact that he might be the first replicant to be born from another android, hoping to connect with his possible father, and being tormented by his inability to distinguish between what’s been programmed into his synthetic mind and what’s a “real” memory.
Blade Runner 2049 considers it a given that modern audiences can empathize with this android character without prerequisite arguments—that we’re not instinctively terrified of what he represents but willing to think about what such a creation means when set against age-old concepts of love and selfhood. As a sequel to the movie that did so much to settle questions about whether a robotic being was equal to humanity, it moves its concerns forward in tandem with society itself.
There’s a scene in 2049 where K, having learned of the existence of the first replicant child to be born of two replicant parents, is asked by his boss, Lt. Joshi (Robin Wright), to homicidally erase this revolutionary evidence in order to maintain the world’s status quo. K says he’s never killed something “born” before. When asked why that makes him uncomfortable, he replies that being born means having a soul—that that may be a crucial difference. “You’ve been getting on fine without one,” Joshi says. “What’s that, madam?” K replies. “A soul.”
It’s an exchange that takes moments, but it’s enough to communicate more about the nature of an AI consciousness than Detroit manages over its dozen hours. In these few words, 2049 puts an old debate to rest while raising new questions about what it means for a machine to worry about its place in the world. K doesn’t “have a soul” in the traditional sense, but he is tortured by the knowledge that he, with his need to love and be loved, may possess something quite like it. Modern science fiction is capable of asking us to explore what it means to view technology this way. It’s able to make us consider how our sense of reality may or may not intersect with the ever-more complex computers we create. It is, basically, able to do a lot more than revisit tired questions about whether the kind of highly advanced robots that populate Detroit: Become Human are worth taking seriously enough to care about in the first place.
15 notes · View notes
0100100100101101 · 6 years
Link
Beethoven, Poe, and Tesla all claimed to use a bizarre creative technique to come up with some of their ideas–a method that involved accessing their dreams to hunt down brilliant concepts and bring them into the conscious world. Researchers at the Massachusetts Institute of Technology are trying to build on the fabled process with an interface for dreams. They call it Dormio.
Led by MIT Media Lab Fluid Interfaces Group’s Adam Haar Horowitz, Dormio is a device designed to influence and extend the semi-lucid sleep state called hypnagogia. We all pass through this cognitive wonderland just before falling completely asleep. It’s a mental dimension that often features a distorted perception of space and time; you may lose your very sense of self, and you’ll often experience lucid dreams or come up with ideas that are free of the logical constraints and cognitive filters of the conscious brain. Even though we all experience hypnagogia, the wild visions and ideas that come with this phase of sleep are usually forever lost after a night of sleep. Geniuses like Edison and Dalí had a clever trick for recalling their ideas, though. They would take naps while holding a steel ball in their hands–which would fall as soon as they left their hypnagogia phase, instantly waking them up with a fresh memory of their lucid dreams.
Dormio is a much more advanced version of that steel ball trick, and aims to lengthen, influence, and record the “microdreams” we all experience in this state.
Using an electronic glove with sensors that monitor muscle tone, heart rate, and skin conductance, Dormio detects when you enter hypnagogia and when you’re falling into real sleep. At that point, it gently nudges you with a spoken cue word emitted from either the team’s smartphone app or a nearby Jibo robot. The researchers used “fork” or “rabbit,” but it can be any word. The subtle noise is meant to bring you back into hypnagogia without completely waking you up, and in fact, the team found that the chosen word often gets conceptually incorporated into the user’s lucid dreams. Meanwhile, the app or robot starts a conversation with the semi-conscious sleeper, recording anything you say. Once the interaction is over, Dormio lets users fade out into slumberland again, repeating the process to “incept” their dreams and record dream reports.
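Conceptually, the device is a small feedback loop: track the biosignals, and fire the cue the moment they say hypnagogia is ending. Here is a toy Python sketch of that loop; the glove interface, the sleep-onset test, and the threshold are stand-ins for illustration, while the real tracker is the open-source code linked below.

```python
def dormio_loop(glove, speaker, recorder, cue_word="fork"):
    """Toy version of Dormio's loop: watch the biosignals and interrupt
    sleep onset with a spoken cue, then record a dream report."""
    while True:
        signals = glove.read()  # muscle tone, heart rate, skin conductance
        if leaving_hypnagogia(signals):
            speaker.say(cue_word)           # nudge back toward hypnagogia
            recorder.record_dream_report()  # capture whatever is said

def leaving_hypnagogia(signals, muscle_floor=0.2):
    """Assumed heuristic: a sharp drop in muscle tone marks the slide from
    hypnagogia into real sleep (the real system fuses all three signals,
    not just one threshold)."""
    return signals.muscle_tone < muscle_floor
```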
If you think this sounds like snake oil, it’s not. Edison, Tesla, Poe, and Dalí all used tricks like the steel ball to make wild conceptual connections that are clearly evident in their work. One famous example of this was German organic chemist August Kekulé, who came up with the molecular structure of benzene during a hypnagogic trip: He saw molecules forming a snake that was biting its own tail.
The inventors of Dormio see the device as a more effective version of the steel ball. Here’s what Horowitz writes about his motivation on the project’s website:
I find the idea that there is a state of mind which composes and constructs my conscious self, but remains inaccessible to me, both frustrating and alluring. Hypnagogia is a “me” that I am unfamiliar with, a “me” that slips past memory as we drift into unconsciousness. Good neuroscience, to me, is effective self-examination. Good technology in service of making neuroscience relevant outside the laboratory, then, should facilitate self-examination. The ends of this project are both practical and philosophical. I have no doubt that Hypnagogia holds applications for augmenting memory, learning, and creativity. Yet also, after having explored the state myself, I find it to be a deeply valuable and inspiring sort of self-seeing which was inaccessible to me previously. As Nobel Prize winner Eric Kandel said, “human creativity…stems from access to underlying, unconscious forces.”
I asked Adam about the future of Dormio and other sleep-related technologies via email. The objective is to keep iterating on the devices and integrating them with neuroscience labs. Their work is part of the long history of dream incubation that goes all the way back to ancient Egypt. “There is a flood of contemporary interest in sleep as more and more science comes out on the mechanisms and importance and mystery of sleep,” he says. “I see two parallel paths which would both be significant. Firstly, that use of the system allows a glimpse for people of their own Hypnagogia […] Secondly, I hope further that the system is useful to the amazing scientists who have been advising me so far, and to that larger sleep science community.”
If you want to try it, you can be a bit of an Edison and build one yourself: Dormio is open source, and you can get the software for biosignal tracking on GitHub. The circuit board design is online, too, and you can follow these step-by-step instructions to make one. The team hopes that perhaps one day Dormio could become a commercial project–for now, it’s a first step towards creating interfaces that allow us to interact with our subconscious the same way we interact with our conscious mind.
After all, we spend one-third of our lives sleeping. We need an app for that.
26 notes · View notes
0100100100101101 · 6 years
Link
In the summer of 2016, Xie Wen applied for a loan at the bank and was rejected. Later, he tried to purchase a plane ticket online but was blocked by the system.
“That is when I knew I was blacklisted,” Xie said.
He had been added to the Chinese Supreme Court’s list of “discredited” persons or entities, which usually targets people who refuse to repay debts. In Xie’s case, his advertising company was sued by another firm over a contract dispute and lost. The judge ordered Xie to pay $127,000, which he didn’t. Seven months later, without notice, Xie’s name was added to the blacklist – one that companies are encouraged to check before entering deals.
“It hurt my business,” Xie said. “My clients didn’t trust me. I didn’t get much work.”
Since the blacklist was created in October 2013, 9.59 million people have been added to the list.
Like others on the blacklist, Xie was banned from boarding a plane or high-speed train. Xie was restricted from big purchases such as buying property, vacation packages, or private school tuition for his child. Officials could impose a wide range of penalties by blocking Xie’s personal ID card number, which is required in China for everything from boarding a flight to getting a social media account.
Anyone who had Xie’s name and ID number could check the blacklist through the Supreme Court’s website.
The blacklist is part of China’s efforts to build a “social credit” system and offers a glimpse into the kinds of penalties people deemed untrustworthy might get once it is set up by 2020. Critics have likened the plan to an Orwellian creation or an episode of the TV series "Black Mirror."
Credit expert Hu Naihong, who is advising the government on this project, disagrees.
China does not have a system like the FICO credit score used in the U.S., because most Chinese do not have a credit card, a mortgage or other loans. So, on top of financial credit information, Chinese authorities are also looking at any other behaviors that might speak to someone’s creditworthiness.
For example, there is a public database on the CreditChina website for enterprises that catalogues things such as administrative penalties or violating procedures when bidding for a government project. It also includes a “red list” noting good behavior, such as filing taxes correctly and on time.
This is all information that government agencies already collect, but it was not always shared among all levels of government. This massive amount of data on individuals and enterprises will now be centralized, Hu said.
“Those with good credibility will be rewarded, and those without credibility will be punished,” Hu said.
At the same time, dozens of cities are also testing their own social credit systems.
The Shanghai government will record behaviors such as jaywalking or failing to sort garbage into the appropriate trash cans in residents’ social credit files.
It may seem odd to Americans, but Hu said it’s necessary to include morality and ethics when it comes to assessing creditworthiness.
“You can’t say people damaged their credit only after they default on their debts. There may have been many clues in their daily lives that show they don’t like to follow rules,” she said.
Hu said the central government has not assigned any social credit scores to its citizens yet, but eventually there will be a financial credit score.
It is not clear who will calculate it and how social behaviors will be factored in. Hu said there will be a mechanism for people to dispute information on their social credit files.
In Xie’s case, he didn’t have time.
“No one notified me that I was on the blacklist. Then court officials detained me, so I had to pay the fine,” he said. Xie is now off the blacklist.
Lawyer Li Xiaolin was also not given advance notice that he was blacklisted.
In 2014, Li was sued for defamation and lost. A judge ordered Li to make an apology, which he submitted in writing in April 2015. Ten months later, when he was away on a work trip, he was blocked from buying a return flight home to Beijing. That’s when he found out he was blacklisted.
It took him another three weeks before an official told him why.
“The court said my apology was not sincere. I asked officials how they determined what is sincere.” Li said.
Eventually Li wrote a second apology and the court removed him from the blacklist in 2016. Then last year, he tried to get a credit card.
“The bank denied my application. I figured out that the bank might still have my name blacklisted and I was right,” Li said.
The bank updated its records the next day, but by that point, he had spent almost a year fully clearing his name. Still, Li is considered one of the luckier ones.

Journalist Liu Hu was sued in 2015 for reposting a message on social media. The court determined that the material was defamatory and ordered Liu to pay $1,400 in compensation plus other fees.
“I wired the money to the wrong account, so the court didn’t get the money. No one there told me. Then the court put me on the blacklist,” Liu said.
He corrected the mistake but then the judge said the amount needed to be increased by at least a thousand dollars.
While authorities have every right to enforce legitimate court orders, Maya Wang, a senior China researcher for Human Rights Watch, said the penalties in both Li and Liu’s cases were exacted in a manner that was “wildly arbitrary” and “unaccountable.” Already, in a simple blacklist system people are having difficulties seeking redress.
“The social credit system gives a very powerful weapon to officials, in a country with very unbalanced relations between citizens and the government,” she said.
For now, Liu is still on the blacklist, which the Chinese state media refers to as the list of “laolai” or deadbeats.
He is still trying to get off it.
“Basically, it is very hard to find information on how to resolve my case. I’ve only gotten this far because I have some contacts,” Liu said.
70 notes · View notes
0100100100101101 · 6 years
Photo
Reginald Van de Velde
49 notes · View notes
0100100100101101 · 6 years
Photo
Stone Island Shadow Project
1K notes · View notes
0100100100101101 · 6 years
Link
Scientists have developed a brain implant that noticeably boosted memory in its first serious test run, perhaps offering a promising new strategy to treat dementia, traumatic brain injuries and other conditions that damage memory.
The device works like a pacemaker, sending electrical pulses to aid the brain when it is struggling to store new information, but remaining quiet when it senses that the brain is functioning well.
In the test, reported Tuesday in the journal Nature Communications, the device improved word recall by 15 percent — roughly the amount that Alzheimer’s disease steals over two and a half years.
The implant is still experimental; the researchers are currently in discussions to commercialize the technology. And its broad applicability is unknown, having been tested so far only in people with epilepsy.
Experts cautioned that the potential for misuse of any “memory booster” is enormous — A.D.H.D. drugs are widely used as study aids. They also said that a 15 percent improvement is fairly modest.
Still, the research marks the arrival of a new kind of device: an autonomous aid that enhances normal, but less than optimal, cognitive function.
Doctors have used similar implants for years to block abnormal bursts of activity in the brain, most commonly in people with Parkinson’s disease and epilepsy.
“The exciting thing about this is that, if it can be replicated and extended, then we can use the same method to figure out what features of brain activity predict good performance,” said Bradley Voytek, an assistant professor of cognitive and data science at the University of California, San Diego.
The implant is based on years of work decoding brain signals, supported recently by more than $70 million from the Department of Defense to develop treatments for traumatic brain injury, the signature wound of the Iraq and Afghanistan wars.
The research team, led by scientists at the University of Pennsylvania and Thomas Jefferson University, last year reported that timed electrical pulses from implanted electrodes could reliably aid recall.
“It’s one thing to go back through your data, and find that the stimulation works. It’s another to have the program run on its own and watch it work in real time,” said Michael Kahana, a professor of psychology at the University of Pennsylvania and the senior author of the new study.
“Now that the technology is out of the box, all sorts of neuro-modulation algorithms could be used in this way,” he added.
Dr. Edward Chang, a professor of neurosurgery at the University of California, San Francisco, said, “Very similar approaches might be relevant for other applications, such as treating symptoms of depression or anxiety,” though the targets in the brain would be different.
The research team tested the memory aid in 25 people with epilepsy who were being evaluated for an operation.
The evaluation is a kind of fishing expedition, in which doctors thread an array of electrodes into the brain and wait for seizures to occur to see whether surgery might prevent them. Many of the electrodes are placed in the brain’s memory areas, and the wait can take weeks in the hospital.
Cognitive scientists use this period, with patients’ consent, to give memory tests and take recordings.
In the study, the research team determined the precise patterns for each person’s high-functioning state, when memory storage worked well in the brain, and low-functioning mode, when it did not.
The scientists then asked the patients to memorize lists of words and later, after a distraction, to recall as many as they could.
Each participant carried out a variety of tests repeatedly, recalling different words during each test. Some lists were memorized with the brain stimulation system turned on; others were done with it turned off, for comparison.
On average, people did about 15 percent better when the implant was switched on.
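In outline, the closed loop pairs a per-patient decoder with a simple trigger rule: predict from ongoing brain activity whether the next word is likely to be stored well, and pulse only when the prediction is poor. The sketch below illustrates that logic with a logistic-regression classifier over spectral power features, which is in the spirit of the team's published approach; the feature extractor, training data, and stimulator interface here are placeholders, not the device's real API.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def spectral_features(window, fs=1000):
    """Log band powers (theta through high gamma) from one EEG window."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    bands = [(4, 8), (8, 12), (12, 30), (30, 58), (62, 100), (100, 200)]
    return np.log([spectrum[(freqs >= lo) & (freqs < hi)].sum() + 1e-12
                   for lo, hi in bands])

# Placeholder training data: one feature row per studied word, label 1 if
# that word was later recalled. Real systems fit this per patient.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)
decoder = LogisticRegression().fit(X, y)

def closed_loop_step(window, stimulator, threshold=0.5):
    feats = spectral_features(window).reshape(1, -1)
    p_store = decoder.predict_proba(feats)[0, 1]
    if p_store < threshold:   # low-functioning state detected
        stimulator.pulse()    # stimulate to aid encoding
    # otherwise stay quiet, like a pacemaker
```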
“I remember doing the tests, and enjoying it,” said David Mabrey, 47, a study participant who owns an insurance agency outside of Philadelphia. “It gave me something to do while lying there.”
“But I could not honestly tell how the stimulation was affecting my memory. You don’t feel anything; you don’t know whether it’s on or off.”
The new technology presents both risks and opportunities. Dr. Kahana said the implants could potentially sharpen memory more dramatically if the approach were refined to support retrieval — digging out the memory — rather than only storage.
Still, as currently devised, the implant requires that multiple electrodes be placed in the brain to determine its high- or low-functioning state (though stimulation is sent to just one location).
This makes it an extremely delicate operation that would likely be reserved only for severe cases of impairment — and certainly not for students cramming for tests, Dr. Voytek said.
“Ideally we can find other, less invasive ways to switch the brain from these lower to higher functioning states,” he said. “I don’t know what those would be, but eventually we’re going to have to work out the ethical and public policy questions raised by this technology.”
118 notes · View notes
0100100100101101 · 6 years
Photo
2017.04.02.XT1.REPLICANT001
146 notes · View notes
0100100100101101 · 6 years
Link
The UK-based company ASI Data Science unveiled a machine learning algorithm Wednesday that can identify terrorist propaganda videos with 99 percent accuracy.
This development marks one of the first instances of a company successfully using A.I. to flag extremist propaganda. The Islamic State group is notorious for its social media recruiting efforts, and this algorithm could help curtail them.
While the researchers at ASI wouldn’t discuss any technical specifics of the algorithm, it appears to work like other kinds of A.I. recognition software. The algorithm can examine any video and determine the probability that the video is a piece of extremist propaganda. According to the BBC, the algorithm was trained on thousands of hours of terrorist recruiting videos, and it uses characteristics from these videos to assign probability scores.
If a video is marked as very high probability, it is tagged for review by a human content moderator. Because the videos aren’t automatically taken down, any false positive should be caught before the video is wrongfully removed. ASI said that the algorithm could detect up to 94 percent of Islamic State uploads.
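That pipeline is easy to picture as a scoring-and-triage rule: the model assigns each upload a probability, and only the high scorers go to a human moderator, so nothing is removed automatically. A minimal sketch, with the threshold and scoring function assumed rather than taken from ASI:

```python
REVIEW_THRESHOLD = 0.99  # assumed cutoff for "very high probability"

def triage_uploads(videos, propaganda_score, review_queue):
    """Flag likely propaganda for human review; never auto-remove."""
    for video in videos:
        if propaganda_score(video) >= REVIEW_THRESHOLD:
            review_queue.append(video)  # a human moderator decides
```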
The algorithm, which was partially funded by the British government, was mainly created for the benefit of smaller video platforms that don’t have the resources to maintain a large content moderation staff. Because these sites can’t thoroughly examine every video uploaded, it’s easier for terrorist groups to plant propaganda on them. As researcher Marc Warner of ASI told the BBC, “There’s over 1,000 different videos on over 400 different platforms.”
It’s not clear when the new algorithm will be deployed on these sites. When it finally becomes available, groups like the Islamic State will likely try to change the way they craft their videos in order to avoid detection.
Hopefully the computer scientists can stay one step ahead.
7 notes · View notes