...Rotoscoping was a primitive and time-consuming process, but it was a necessary starting point for the industry. In the rotoscope method, animators stood at a glass-topped desk and traced over a projected live-action film frame-by-frame, copying actors' or animals' actions directly onto a hand-drawn world. The technique produced fluid, lifelike movements that animators couldn't achieve on their own.
The first full-length American animated film to use rotoscoping was Snow White and the Seven Dwarfs, which debuted in 1937, and Disney used the technique in subsequent films, including Alice in Wonderland, Sleeping Beauty and Peter Pan. Though actual mocap systems were still decades away, rotoscoping was precisely the proof of concept the field needed -- clearly, it paid off to mimic real people's actions as closely as possible in animated spaces...
When you think of robotics, you likely think of something rigid, heavy, and built for a specific purpose. New "Robotic Skins" technology developed by Yale researchers flips that notion on its head, allowing users to animate the inanimate and turn everyday objects into robots.
Developed in the lab of Rebecca Kramer-Bottiglio, assistant professor of mechanical engineering & materials science, robotic skins enable users to design their own robotic systems. Although the skins are designed with no specific task in mind, Kramer-Bottiglio said, they could be used for everything from search-and-rescue robots to wearable technologies. The results of the team's work are published today in Science Robotics.
The skins are made from elastic sheets embedded with sensors and actuators developed in Kramer-Bottiglio's lab. Placed on a deformable object -- a stuffed animal or a foam tube, for instance -- the skins animate these objects from their surfaces. The makeshift robots can perform different tasks depending on the properties of the soft objects and how the skins are applied.
"We can take the skins and wrap them around one object to perform a task -- locomotion, for example -- and then take them off and put them on a different object to perform a different task, such as grasping and moving an object," she said. "We can then take those same skins off that object and put them on a shirt to make an active wearable device."
Robots are typically built with a single purpose in mind. The robotic skins, however, allow users to create multi-functional robots on the fly. That means they can be used in settings that hadn't even been considered when they were designed, said Kramer-Bottiglio.
Additionally, using more than one skin at a time allows for more complex movements. For instance, Kramer-Bottiglio said, you can layer the skins to get different types of motion. "Now we can get combined modes of actuation -- for example, simultaneous compression and bending."
To demonstrate the robotic skins in action, the researchers created a handful of prototypes. These include foam cylinders that move like an inchworm, a shirt-like wearable device designed to correct poor posture, and a device with a gripper that can grasp and move objects.
Kramer-Bottiglio said she came up with the idea for the devices a few years ago when NASA put out a call for soft robotic systems. The technology was designed in partnership with NASA, and its multifunctional and reusable nature would allow astronauts to accomplish an array of tasks with the same reconfigurable material. The same skins used to make a robotic arm out of a piece of foam could be removed and applied to create a soft Mars rover that can roll over rough terrain. With the robotic skins on board, the Yale scientist said, anything from balloons to balls of crumpled paper could potentially be made into a robot with a purpose.
"One of the main things I considered was the importance of multifunctionality, especially for deep space exploration where the environment is unpredictable," she said. "The question is: How do you prepare for the unknown unknowns?"
For the same line of research, Kramer-Bottiglio was recently awarded a $2 million grant from the National Science Foundation, as part of its Emerging Frontiers in Research and Innovation program.
An important part of our mission is keeping astronauts strong and healthy during stays in space, but did you know that our technology also helps keep you healthy? And the origins of these space innovations aren’t always what you’d expect.
As we release the latest edition of NASA Spinoff, our yearly publication that celebrates all the ways NASA technology benefits us here on Earth, let’s look at some ways NASA is improving wellness for astronauts—and everyone else.
1. Weightless weight-lifting
Without gravity to work against, astronauts lose bone and muscle mass in space. To fight it, they work out regularly. But to get them a good burn, we had to get creative. After all, pumping iron doesn’t do much good when the weights float.
The solution? Elastic resistance. Inventor Paul Francis was already working on a portable home gym that relied on spiral-shaped springs made of an elastic material. He thought the same idea would work on the space station, and after additional development and extensive testing, we agreed.
Our Interim Resistive Exercise Device launched in 2000 to help keep astronauts fit. And Francis’ original plan took off too. The technology perfected for NASA is at the heart of the Bowflex Revolution as well as a new line of handheld devices called OYO DoubleFlex, both of which enable an intensive—and extensive—workout, right at home.
2. Polymer coating keeps hearts beating
A key ingredient in a lifesaving treatment for many patients with congestive heart failure is made from a material a NASA researcher stumbled upon while working on a supersonic jet in the 1990s.
Today, a special kind of pacemaker that helps synchronize the left and right sides of the heart utilizes the unique substance known as LaRC-SI. The strong material can be cast extremely thin, which makes it easier to insert in the tightly twisted veins of the heart, and because it insulates so well, the pacemaker’s electric pulses go exactly where they should.
Since it was approved by the FDA in 2009, the device has been implanted hundreds of thousands of times.
3. Sutures strong enough for interplanetary transport
Many people mistakenly think we created Teflon. Not true: DuPont invented the unique polymer in 1938. But an innovative new way to use the material was developed to help us transport samples back from Mars and now aids in stitching up surgery patients.
Our scientists would love to get pristine Martian samples into our labs for more advanced testing. One complicating factor? The red dust makes it hard to get a clean seal on the sample container. That means the sample could get contaminated on its way back to Earth.
The team building the canister had an idea, but they needed a material with very specific properties to make it work. They decided to use polytetrafluoroethylene (that's the scientific name for Teflon), which works really well in space.
The material we commonly recognize as Teflon starts as a powder, and to transform it into a nonstick coating, the powder gets processed a certain way. But process it differently, and you can get all kinds of different results.
For our Mars sample return canister prototype, the powder was compressed at high pressures into a block, which was then forced through an extruder. (Imagine pressing Play-Doh through a mold.) It had never been done before, but the end result was durable, flexible and extremely thin: exactly what we needed.
And since the material can be implanted safely in the human body—it was also perfect as super strong sutures for after surgery.
4. Plant pots that clean the air
It may surprise you, but the most polluted air you breathe is likely the air inside your home and office. That’s especially true these days with energy-efficient insulation: the hot air gets sealed in, but so do any toxins coming off the paint, furniture, cooking gas, etc.
This was a problem NASA began worrying about decades ago, when we started planning for long duration space missions. After all, there’s no environment more insulated than a spaceship flying through the vacuum of space.
On Earth, plants are a big part of the “life support” system cleaning our air, so we wondered if they could do the same indoors or in space.
The results from extensive research surprised us: we learned the most important air scrubbing happens not through a plant’s leaves, but around its roots. And now you can get the cleanest air out of your houseplants by using a special plant pot, available online, developed with that finding in mind: it maximizes air flow through the soil, multiplying the plant’s ability to clean your air.
5. Gas sensor detects pollution from overhead
Although this next innovation wasn’t created with pollution in mind, it’s now helping keep an eye on one of the biggest greenhouse gases: methane.
We created this tiny methane “sniffer” to help us look for signs of life on Mars. On Earth, the biggest source of methane is actually bacteria, so when one of our telescopes on the ground caught a glimpse of the gas on Mars, we knew we needed to take a closer look.
We sent this new, extremely sensitive sensor on the Curiosity Rover, but we knew it could also be put to good use here on our home planet. We adapted it, and today it gets mounted on drones and cars to quickly and accurately detect gas leaks and methane emissions from pipelines, oil wells and more.
The sensor can also be used to better study emissions from swamps and other natural sources, to better understand and perhaps mitigate their effects on climate change.
6. DNA “paint” highlights cellular damage
There’s been a lot of news lately about DNA editing: can genes be changed safely to make people healthier? Should they be?
As scientists and ethicists tackle these big questions, they need to be sure they know exactly what’s changing in the genome when they use the editing tools that already exist.
Well, thanks to a tool NASA helped create, we can actually highlight any abnormalities in the genetic code with special fluorescent “paint.”
But that’s not all the “paint” can do. We actually created it to better understand any genetic damage our astronauts incurred during their time in space, where radiation levels are far higher than on Earth. Down here, it could help do the same. For example, it can help doctors select the right cancer treatment by identifying the exact mutation in cancer cells.
You can learn more about all these innovations, and dozens more, in the 2019 edition of NASA Spinoff. Read it online or request a limited quantity print copy and we’ll mail it to you!
MIT researchers turn water into 'calm' computer interfaces
...The Tangible Media Group demonstrated a way to precisely transport droplets of liquid across a surface back in January, which it called "programmable droplets." The system is essentially just a printed circuit board, coated with a low-friction material, with a grid of copper wiring on top. By programmatically controlling the electric field of the grid, the team is able to change the shape of polarizable liquid droplets and move them around the surface. The precise control is such that droplets can be both merged and split.
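The move-merge-split behavior described above can be pictured with a minimal toy model, assuming a grid where energizing one electrode attracts any droplet in an adjacent cell (the group's actual PCB drive electronics and field physics are far more involved):

```python
# Illustrative sketch (not the MIT group's code): droplets on an electrode grid.
# Energizing the electrode next to a droplet pulls the polarizable droplet onto
# it; two droplets pulled onto the same cell merge into one.

def step(droplets, active_cell):
    """Move any droplet adjacent to the active electrode onto it; merge volumes."""
    moved = {}
    for (r, c), volume in droplets.items():
        ar, ac = active_cell
        # A droplet one cell away from the energized electrode is attracted to it.
        if abs(r - ar) + abs(c - ac) == 1:
            moved[active_cell] = moved.get(active_cell, 0.0) + volume
        else:
            moved[(r, c)] = moved.get((r, c), 0.0) + volume
    return moved

# Two 1-microliter droplets; one activation pulls both neighbors onto (1, 1):
droplets = {(1, 0): 1.0, (1, 2): 1.0}
droplets = step(droplets, (1, 1))
print(droplets)  # {(1, 1): 2.0} -- merged into a single 2-microliter droplet
```

Splitting would run the same attraction in reverse, energizing electrodes on opposite sides of one droplet at once.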
Moving on from the underlying technology, the team is now focused on showing how we might leverage the system to create, play and communicate through natural materials...
Computer system transcribes words users 'speak silently'
MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.
The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations -- saying words "in your head" -- but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.
The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don't obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user's auditory experience.
The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers' experiments, for instance, subjects used the system to silently report opponents' moves in a chess game and just as silently receive computer-recommended responses.
"The motivation for this was to build an IA device -- an intelligence-augmentation device," says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. "Our idea was: Could we have a computing platform that's more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?"
"We basically can't live without our cellphones, our digital devices," says Pattie Maes, a professor of media arts and sciences and Kapur's thesis advisor. "But at the moment, the use of those devices is very disruptive. If I want to look something up that's relevant to a conversation I'm having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I'm with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present."
The researchers describe their device in a paper they presented at the Association for Computing Machinery's Intelligent User Interface conference. Kapur is first author on the paper, Maes is the senior author, and they're joined by Shreyas Kapur, an undergraduate major in electrical engineering and computer science.
The idea that internal verbalizations have physical correlates has been around since the 19th century, and it was seriously investigated in the 1950s. One of the goals of the speed-reading movement of the 1960s was to eliminate internal verbalization, or "subvocalization," as it's known.
But subvocalization as a computer interface is largely unexplored. The researchers' first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals. So they conducted experiments in which the same subjects were asked to subvocalize the same series of words four times, with an array of 16 electrodes at different facial locations each time.
The researchers wrote code to analyze the resulting data and found that signals from seven particular electrode locations were consistently able to distinguish subvocalized words. In the conference paper, the researchers report a prototype of a wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws.
But in current experiments, the researchers are getting comparable results using only four electrodes along one jaw, which should lead to a less obtrusive wearable device.
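The selection step described above amounts to keeping only those locations whose signals distinguish words reliably in every session. A minimal sketch, with invented per-session scores (the paper's actual analysis is not reproduced here):

```python
# Hypothetical sketch of electrode selection: across repeated sessions, keep
# only the electrode locations whose worst-session score still clears a
# reliability threshold. The scores below are invented for illustration.

def reliable_electrodes(scores_by_session, threshold=0.8):
    """Return indices of electrodes that score above threshold in every session."""
    n = len(scores_by_session[0])
    return [i for i in range(n)
            if min(session[i] for session in scores_by_session) >= threshold]

sessions = [
    [0.90, 0.50, 0.85, 0.95],  # session 1: one score per electrode
    [0.88, 0.60, 0.82, 0.90],  # session 2: same electrodes, repeated words
]
print(reliable_electrodes(sessions))  # [0, 2, 3] -- electrode 1 is inconsistent
```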
Once they had selected the electrode locations, the researchers began collecting data on a few computational tasks with limited vocabularies -- about 20 words each. One was arithmetic, in which the user would subvocalize large addition or multiplication problems; another was the chess application, in which the user would report moves using the standard chess numbering system.
Then, for each application, they used a neural network to find correlations between particular neuromuscular signals and particular words. Like most neural networks, the one the researchers used is arranged into layers of simple processing nodes, each of which is connected to several nodes in the layers above and below. Data are fed into the bottom layer, whose nodes process them and pass them to the next layer, whose nodes process them and pass them on, and so on. The output of the final layer is the result of some classification task.
The basic configuration of the researchers' system includes a neural network trained to identify subvocalized words from neuromuscular signals, but it can be customized to a particular user through a process that retrains just the last two layers.
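The layered architecture and last-two-layer customization can be sketched roughly as follows. The layer sizes, random weights, and 20-word vocabulary here are assumptions for illustration, not the paper's actual configuration:

```python
import numpy as np

# Minimal sketch of a layered classifier like the one described above:
# features from 7 electrode channels flow through stacked layers, and the
# final layer scores each word in a ~20-word vocabulary. Per-user
# customization retrains only the last two layers.

rng = np.random.default_rng(0)
layer_sizes = [7, 64, 32, 20]          # input features -> hidden -> word scores
weights = [rng.standard_normal((a, b)) * 0.1
           for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass data up through the layers; the final layer yields word scores."""
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)     # ReLU hidden layers
    return x @ weights[-1]             # one raw score per vocabulary word

# Mark which layers a per-user customization pass would retrain:
trainable = [False] * len(weights)
trainable[-2:] = [True, True]          # only the last two layers adapt

scores = forward(rng.standard_normal(7))
predicted_word = int(np.argmax(scores))  # index into the 20-word vocabulary
```

Freezing the early layers keeps the general signal-to-feature mapping shared across users while the word-scoring layers adapt to an individual's neurophysiology.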
Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.
But, Kapur says, the system's performance should improve with more training data, which could be collected during its ordinary use. Although he hasn't crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.
In ongoing work, the researchers are collecting a wealth of data on more elaborate conversations, in the hope of building applications with much more expansive vocabularies. "We're in the middle of collecting data, and the results look nice," Kapur says. "I think we'll achieve full conversation some day."
"I think that they're a little underselling what I think is a real potential for the work," says Thad Starner, a professor in Georgia Tech's College of Computing. "Like, say, controlling the airplanes on the tarmac at Hartsfield Airport here in Atlanta. You've got jet noise all around you, you're wearing these big ear-protection things -- wouldn't it be great to communicate with voice in an environment where you normally wouldn't be able to? You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press. This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you're a fighter pilot, or if you're a firefighter, you're already wearing these masks."
When we return to the Moon, much will seem unchanged since humans first arrived in 1969. The flags placed by Apollo astronauts will be untouched by any breeze. The footprints left by man’s “small step” on its surface will still be visible across the Moon’s dusty landscape.
Our next generation of lunar explorers will require pioneering innovation alongside proven communications technologies. We’re developing groundbreaking technologies to help these astronauts fulfill their missions.
In space communications networks, lasers will supplement traditional radio communications, providing an advancement these explorers require. The technology, called optical communications, has been in development by our engineers over decades.
Optical communications, in infrared, has a higher frequency than radio, allowing more data to be encoded into each transmission. Optical communications systems also have reduced size, weight and power requirements. A smaller system leaves more room for science instruments; a weight reduction can mean a less expensive launch, and reduced power allows batteries to last longer.
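To put rough numbers on "higher frequency": a quick comparison assuming a 1550 nm infrared carrier (a wavelength commonly used in optical communications, not a figure from this article) against a 26 GHz Ka-band radio carrier:

```python
# Back-of-the-envelope comparison of carrier frequencies. Higher carrier
# frequency means more cycles per second available to encode data onto.

C = 299_792_458                 # speed of light, m/s

optical_freq_hz = C / 1550e-9   # 1550 nm infrared laser -> ~193 THz
radio_freq_hz = 26e9            # Ka-band radio -> 26 GHz

ratio = optical_freq_hz / radio_freq_hz
print(f"optical carrier is ~{ratio:,.0f}x the radio carrier frequency")
```

The thousands-fold jump in carrier frequency is what makes the much higher data rates possible, though the usable rate also depends on modulation, pointing accuracy, and atmospheric conditions.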
On the path through this “Decade of Light,” where laser joins radio to enable mission success, we must test and demonstrate a number of optical communications innovations.
The Laser Communications Relay Demonstration (LCRD) mission will send data between ground stations in Hawaii and California through a spacecraft in geosynchronous orbit, stationary relative to Earth's rotation. The demo will be an important first step in developing next-generation Earth-relay satellites that can support instruments generating too much data for today's networks to handle.
The Integrated LCRD Low-Earth Orbit User Modem and Amplifier-Terminal will provide the International Space Station with a fully operational optical communications system. It will communicate data from the space station to the ground through LCRD. The mission applies technologies from previous optical communications missions for practical use in human spaceflight.
In deep space, we’re working to prove laser technologies with our Deep Space Optical Communications mission. A laser’s wavelength is smaller than radio, leaving less margin for error in pointing back at Earth from very, very far away. Additionally, as the time it takes for data to reach Earth increases, satellites need to point ahead to make sure the beam reaches the right spot at the right time. The Deep Space Optical Communications mission will ensure that our communications engineers can meet those challenges head-on.
An integral part of our journey back to the Moon will be our Orion spacecraft. It looks remarkably similar to the Apollo capsule, yet it hosts cutting-edge technologies. NASA’s Laser Enhanced Mission Communications Navigation and Operational Services (LEMNOS) will provide Orion with data rates as much as 100 times higher than current systems.
LEMNOS’s optical terminal, the Orion EM-2 Optical Communications System, will enable live, 4K ultra-high-definition video from the Moon. By comparison, early Apollo cameras filmed only 10 frames per second in grainy black-and-white. Optical communications will provide a “giant leap” in communications technology, joining radio for NASA’s return to the Moon and the journey beyond.
NASA’s Space Communications and Navigation program office provides strategic oversight to optical communications research. At NASA’s Goddard Space Flight Center in Greenbelt, Maryland, the Exploration and Space Communications projects division is guiding a number of optical communications technologies from infancy to fruition. If you’re ever near Goddard, stop by our visitor center to check out our new optical communications exhibit. For more information, visit nasa.gov/SCaN and esc.gsfc.nasa.gov.
New 'e-dermis' brings sense of touch, pain to prosthetic hands
Amputees often experience the sensation of a "phantom limb" -- a feeling that a missing body part is still there.
That sensory illusion is closer to becoming a reality thanks to a team of engineers at the Johns Hopkins University that has created an electronic skin. When layered on top of prosthetic hands, this e-dermis brings back a real sense of touch through the fingertips.
"After many years, I felt my hand, as if a hollow shell got filled with life again," says the anonymous amputee who served as the team's principal volunteer tester.
Made of fabric and rubber laced with sensors to mimic nerve endings, e-dermis recreates a sense of touch as well as pain by sensing stimuli and relaying the impulses back to the peripheral nerves.
"We've made a sensor that goes over the fingertips of a prosthetic hand and acts like your own skin would," says Luke Osborn, a graduate student in biomedical engineering. "It's inspired by what is happening in human biology, with receptors for both touch and pain.
"This is interesting and new," Osborn said, "because now we can have a prosthetic hand that is already on the market and fit it with an e-dermis that can tell the wearer whether he or she is picking up something that is round or whether it has sharp points."
The work -- published June 20 in the journal Science Robotics -- shows it is possible to restore a range of natural, touch-based feelings to amputees who use prosthetic limbs. The ability to detect pain could be useful, for instance, not only in prosthetic hands but also in lower limb prostheses, alerting the user to potential damage to the device.
Human skin contains a complex network of receptors that relay a variety of sensations to the brain. This network provided a biological template for the research team, which includes members from the Johns Hopkins departments of Biomedical Engineering, Electrical and Computer Engineering, and Neurology, and from the Singapore Institute of Neurotechnology.
Bringing a more human touch to modern prosthetic designs is critical, especially when it comes to incorporating the ability to feel pain, Osborn says.
"Pain is, of course, unpleasant, but it's also an essential, protective sense of touch that is lacking in the prostheses that are currently available to amputees," he says. "Advances in prosthesis designs and control mechanisms can aid an amputee's ability to regain lost function, but they often lack meaningful, tactile feedback or perception."
That is where the e-dermis comes in, conveying information to the amputee by stimulating peripheral nerves in the arm, making the so-called phantom limb come to life. The e-dermis device does this by electrically stimulating the amputee's nerves in a non-invasive way, through the skin, says the paper's senior author, Nitish Thakor, a professor of biomedical engineering and director of the Biomedical Instrumentation and Neuroengineering Laboratory at Johns Hopkins.
"For the first time, a prosthesis can provide a range of perceptions, from fine touch to noxious to an amputee, making it more like a human hand," says Thakor, co-founder of Infinite Biomedical Technologies, the Baltimore-based company that provided the prosthetic hardware used in the study.
Inspired by human biology, the e-dermis enables its user to sense a continuous spectrum of tactile perceptions, from light touch to noxious or painful stimulus. The team created a "neuromorphic model" mimicking the touch and pain receptors of the human nervous system, allowing the e-dermis to electronically encode sensations just as the receptors in the skin would. Tracking brain activity via electroencephalography, or EEG, the team determined that the test subject was able to perceive these sensations in his phantom hand.
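One way to picture the neuromorphic encoding is as rate coding: stronger or sharper stimuli map to denser stimulation pulses, the way skin receptors fire faster under more intense contact. The mapping below is a hypothetical illustration with invented thresholds and rates, not the team's actual model:

```python
# Hypothetical rate-coding sketch: encode a tactile event as a nerve-stimulation
# pulse rate. Gentle, rounded contact yields sparse pulses; sharp, noxious
# contact adds a dense pain component. All numbers are invented.

def stimulation_rate_hz(pressure, sharpness):
    """Map sensed pressure (0-1) and sharpness (0-1) to a pulse rate in Hz."""
    rate = 5 + 40 * min(pressure, 1.0)   # touch receptors: graded with force
    if sharpness > 0.7:                  # pain receptors: fire for sharp contact
        rate += 60
    return rate

round_object = stimulation_rate_hz(pressure=0.4, sharpness=0.1)    # light touch
pointed_object = stimulation_rate_hz(pressure=0.4, sharpness=0.9)  # noxious
print(round_object, pointed_object)  # 21.0 81.0
```

The same contact force produces very different pulse rates depending on sharpness, which is the distinction the amputee perceived as touch versus pain in the pain-detection task.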
The researchers then connected the e-dermis output to the volunteer by using a noninvasive method known as transcutaneous electrical nerve stimulation, or TENS. In a pain-detection task, the team determined that the test subject and the prosthesis were able to experience a natural, reflexive reaction both to pain when touching a pointed object and to non-pain when touching a round object.
The e-dermis is not sensitive to temperature -- for this study, the team focused on detecting object curvature (for touch and shape perception) and sharpness (for pain perception). The e-dermis technology could be used to make robotic systems more human, and it could also be extended to astronaut gloves and space suits, Osborn says.
The researchers plan to further develop the technology and better understand how to provide meaningful sensory information to amputees in the hopes of making the system ready for widespread patient use.
Johns Hopkins is a pioneer in the field of upper limb dexterous prostheses. More than a decade ago, the university's Applied Physics Laboratory led the development of the advanced Modular Prosthetic Limb, which an amputee patient controls with the muscles and nerves that once controlled his or her real arm or hand.
In addition to the funding from Space@Hopkins, which fosters space-related collaboration across the university's divisions, the team also received grants from the Applied Physics Laboratory Graduate Fellowship Program and the Neuroengineering Training Initiative through the National Institute of Biomedical Imaging and Bioengineering through the National Institutes of Health under grant T32EB003383.
The e-dermis was tested over the course of one year on an amputee who volunteered in the Neuroengineering Laboratory at Johns Hopkins. The subject frequently repeated the testing to demonstrate consistent sensory perceptions via the e-dermis. The team has worked with four other amputee volunteers in other experiments to provide sensory feedback.