AI-trained robots are learning to read their surroundings based on just a few visual cues, and they're becoming better drivers too; elsewhere, smart machines on the factory floor are working in 'swarms' to increase quality and efficiency. It's a bot's life in this week's 5 Coolest Things, which also include advances in electro-textiles and a fascinating insight into evolutionary biology.

Getting The Whole Picture

AI-trained bots that can quickly process and understand their surroundings, like humans can, could be valuable in dangerous search-and-rescue tasks. Top and above images credit: Getty Images.

What is it? Even if you're reading this column on a device in a busy place, chances are you've also got a pretty good sense of what's happening around you. Wherever we are, we use a combination of inference and casual glances to get a feel for the environment - without necessarily taking in every object in our vicinity. Now researchers at the University of Texas at Austin have trained artificial intelligence to do something similar.

Why does it matter? AI-trained bots that can quickly process and understand their surroundings, like humans can, could be valuable in dangerous search-and-rescue tasks. But they'll also be better at handling the situations everyday life throws at them - unlike, say, an assembly-line robot trained to perform only a single task. Professor Kristen Grauman, who led the research team, said, 'We want an agent that's generally equipped to enter environments and be ready for new perception tasks as they arise. It behaves in a way that's versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.'

How does it work? With deep learning, Grauman and her team trained their AI 'agent' using 'thousands of 360-degree images of different environments.' That gave the AI enough reference points to be able to interpret complicated environments on its own. 'Now, when presented with a scene it has never seen before, the agent uses its experience to choose a few glimpses - like a tourist standing in the middle of a cathedral taking a few snapshots in different directions - that together add up to less than 20 percent of the full scene,' according to a report from UT Austin. 'The agent infers what it would have seen if it had looked in all the other directions, reconstructing a full 360-degree image of its surroundings.' The tech is described further in Science Robotics.
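
To make the idea concrete, here is a toy 'glimpse and complete' loop in Python/PyTorch. It is emphatically not the UT Austin model - the module names, feature sizes and random stand-in 'scene' are assumptions for illustration, and the real agent learns where to look rather than using a simple argmax.

```python
# Illustrative sketch only: a toy "glimpse and complete" loop, not the UT Austin model.
# Shapes, module names and the random data are all assumptions.
import torch
import torch.nn as nn

N_VIEWS = 12     # discretized viewing directions around the agent
GLIMPSES = 2     # ~17% of the full 360-degree scene, i.e. "less than 20 percent"
VIEW_DIM = 64    # feature vector standing in for one camera view

class GlimpseAgent(nn.Module):
    def __init__(self):
        super().__init__()
        self.policy = nn.Linear(VIEW_DIM, N_VIEWS)               # scores the next direction to look
        self.decoder = nn.Linear(VIEW_DIM, N_VIEWS * VIEW_DIM)   # infers the unseen views

    def forward(self, scene):                  # scene: (N_VIEWS, VIEW_DIM) ground truth
        seen = torch.zeros(VIEW_DIM)           # running summary of what has been observed
        for _ in range(GLIMPSES):
            direction = self.policy(seen).argmax()   # pick where to "snap a photo" next
            seen = seen + scene[direction]           # take in that one view
        full = self.decoder(seen).view(N_VIEWS, VIEW_DIM)   # reconstruct the whole panorama
        return full

agent = GlimpseAgent()
scene = torch.randn(N_VIEWS, VIEW_DIM)         # stand-in for a 360-degree image
reconstruction = agent(scene)
loss = nn.functional.mse_loss(reconstruction, scene)   # train to match the full scene
print(loss.item())
```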

AIs On The Road

Driverless cars will benefit from 'human-like' reasoning skills as they try to safely navigate unfamiliar terrain. Image credit: Getty Images.

What is it? Humans aren't just good at casually reading the room; it turns out we're also capable of driving down unfamiliar roads just by matching basic map skills with simple observation. Now AI is catching up on that front, too.

Why does it matter? Driverless cars will benefit from 'human-like' reasoning skills as they try to safely navigate unfamiliar terrain. 'Our objective is to achieve autonomous navigation that is robust for driving in new environments,' said Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory at MIT and co-author of a study released at the International Conference on Robotics and Automation (PDF). 'For example, if we train an autonomous vehicle to drive in an urban setting such as the streets of Cambridge, the system should also be able to drive smoothly in the woods, even if that is an environment it has never seen before.'

How does it work? The system learned from the pros: humans. The MIT researchers put a human behind the wheel of an autonomous vehicle, collected video and other data as it drove around, matched that data to a GPS navigation system - and fed it all into the AI until the AI got the point. 'Initially, at a T-shaped intersection, there are many different directions the car could turn,' Rus said. 'The model starts by thinking about all those directions, but as it sees more and more data about what people do, it will see that some people turn left and some turn right, but nobody goes straight. Straight ahead is ruled out as a possible direction, and the model learns that, at T-shaped intersections, it can only move left or right.'
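
For a rough sense of the recipe, here is a hedged behavior-cloning sketch: a network sees a camera frame plus a coarse map crop and is trained on logged human choices at intersections. Everything below - the architecture, the shapes and the synthetic 'driving logs' - is an assumption for illustration, not MIT's actual system.

```python
# Toy behavior-cloning sketch, not the MIT model: learn a probability over maneuvers
# at an intersection from (synthetic) human driving logs plus a coarse map crop.
import torch
import torch.nn as nn

MANEUVERS = ["left", "straight", "right"]

class DrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.camera_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
        self.map_net = nn.Sequential(nn.Flatten(), nn.Linear(1 * 32 * 32, 64), nn.ReLU())
        self.head = nn.Linear(128, len(MANEUVERS))   # logits over possible maneuvers

    def forward(self, camera, coarse_map):
        features = torch.cat([self.camera_net(camera), self.map_net(coarse_map)], dim=1)
        return self.head(features)

policy = DrivingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Synthetic stand-in for logged human data at T-shaped intersections:
# drivers only ever turned left (0) or right (2), never went straight (1).
camera = torch.randn(256, 3, 32, 32)
coarse_map = torch.randn(256, 1, 32, 32)
human_choice = torch.randint(0, 2, (256,)) * 2   # 0 or 2, never 1

for _ in range(100):
    logits = policy(camera, coarse_map)
    loss = nn.functional.cross_entropy(logits, human_choice)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, "straight" should get near-zero probability at these intersections.
print(torch.softmax(policy(camera[:1], coarse_map[:1]), dim=1))
```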

The Fabric Of Our Future Lives

The proximity of the headquarters of the Advanced Functional Fabrics of America (AFFOA) Institute to MIT's campus will help connect students and faculty to new facilities, including the Fabric Discovery Center that provides end-to-end prototyping from fiber design to system integration of new textile-based products. Caption credit: MIT News. Image credit: M. Scott Brauer.

What is it? Last summer MIT researchers announced a new way to embed electronic chips into fibers that could be sewn into clothing, wound dressings and the like. Now the school reports that the new tech is going gangbusters, with 250,000 chips sewn into clothes in less than a year, and interest from the likes of New Balance and 3M.

Why does it matter? 'Chip-containing fibers present a real prospect for fabrics to be the next frontier in computation and AI,' said Yoel Fink, MIT materials science professor. Such fibers, says MIT, 'could allow fabrics or composites to sense their environment, communicate, store and convert energy, and more.'

How does it work? The swiftness with which chip-containing fibers are making the leap from lab to market is due in part to an initiative MIT president L. Rafael Reif introduced in 2015, called 'innovation orchards,' designed to make 'tangible' innovations as easy to iterate on as software. The process has also been sped along by the MIT-adjacent nonprofit Advanced Functional Fabrics of America, of which Fink is CEO, and by a textile mill in South Carolina. Learn more about the fiber's path here.

Evolutionary Research At A Snail's Pace

Knocking out one gene in the snail Lymnaea stagnalis reverses shell coiling. In contrast to the wild-type dextral snail (right), a CRISPR-created snail shows sinistral coiling (left). Caption and image credits: Dr. Hiromi Takahashi of the Kuroda laboratory.

What is it? Evolutionary biology typically requires scientists to look at how life developed at a snail's pace - but sometimes it also requires looking at actual snails. Researchers at Chubu University in Japan announced recently that they've used the gene-editing tool CRISPR to control whether snails' shells coil to the left or to the right.

Why does it matter? It's a big step for snail-kind: Just as left-handedness is rare in humans, left-coiledness occurs in only about 2% of snails; with this experiment, researcher Reiko Kuroda demonstrated that a single gene directs the orientation of the coil. But it's an advance for humankind, too, as the findings could help us understand our own development. Most animal life on Earth, including humanity, develops so that our exteriors are left-right symmetrical - each side a mirror image of the other - whereas our interiors are decidedly asymmetrical. In our embryonic stage, we start out symmetrical; then something happens to break that symmetry. Biologists have long wondered: What is that something, and why does it happen?

How does it work? Kuroda and her team focused on a gene called Lsdia1 in Lymnaea stagnalis, a species of freshwater snail; they found that simply by using CRISPR to deactivate Lsdia1, they could produce lefty snails - whose descendants also exhibited the same rare trait. They think genetic activity such as this might help explain asymmetry in other species. It also might give scientists insight into human developmental disorders like situs inversus, in which the organs grow in a mirror image of their usual configuration. The study was published in Development.

Bot Who's Counting?

Wenchao Zhou, co-founder of Ambots, shows off his mobile 3D printers. Image credit: Ambots.

What is it? In Arkansas, a 3D-printing company called Ambots is working on developing a 'class of mobile robots' that could 'initiate a new kind of digital factory.'

Why does it matter? Ambots' work occurs at the intersection of 3D printing and swarm robotics - a field that essentially springs from the notion that many robots are better than one, and that bots working in swarms can complete tasks faster and more efficiently. 'When teams of 3D printing robots act as an organized unit they gain the ability to carry out tasks far too complicated for an individual machine,' according to a new article on the blog Future Proof, which says swarm technology could be especially well-suited not only to factories but to work like farming and waste removal.

How does it work? The Ambots vision is 'thousands of these mobile devices working together on a single task, facilitating 3D printing for large-scale and mass-manufacturing purposes.' Just as the other robots discussed above are modeled on human behavior, swarm robotics draws inspiration from the natural world: the idea is to mimic creatures like ants and bees, which work together in swarms or hives to achieve common goals.
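
As a purely illustrative sketch - not Ambots' software - the coordination problem can be pictured as splitting one big print job into chunks and handing each chunk to whichever mobile printer is nearest and free. The robot names, coordinates and greedy assignment rule below are all made up for the example.

```python
# Toy swarm-coordination sketch: greedily assign print chunks to the nearest mobile printer.
from dataclasses import dataclass, field

@dataclass
class PrintBot:
    name: str
    x: float
    y: float
    queue: list = field(default_factory=list)

def assign_chunks(bots, chunks):
    """Hand each print chunk (an x, y location) to the closest bot, one chunk at a time."""
    for cx, cy in chunks:
        nearest = min(bots, key=lambda b: (b.x - cx) ** 2 + (b.y - cy) ** 2)
        nearest.queue.append((cx, cy))
        nearest.x, nearest.y = cx, cy   # the bot drives to the chunk it just claimed
    return bots

bots = [PrintBot("bot-1", 0, 0), PrintBot("bot-2", 10, 0), PrintBot("bot-3", 5, 10)]
chunks = [(1, 1), (9, 1), (5, 9), (2, 2), (8, 2)]   # pieces of one large part

for bot in assign_chunks(bots, chunks):
    print(bot.name, "prints", bot.queue)
```

Real swarm systems use far more sophisticated scheduling and collision avoidance; the point here is simply that many simple machines, each given a small share of the work, can collectively tackle a job too big for any one of them.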
