“Our results show that by reverse engineering how people think about a problem, we can develop better algorithms,” explains Brenden Lake, a Moore-Sloan Data Science Fellow at New York University and the paper’s lead author. “Moreover, this work points to promising methods to narrow the gap for other machine learning tasks.”
The paper’s other authors were Ruslan Salakhutdinov, an assistant professor of Computer Science at the University of Toronto, and Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines.
When humans are exposed to a new concept — such as new piece of kitchen equipment, a new dance move, or a new letter in an unfamiliar alphabet — they often need only a few examples to understand its make-up and recognize new instances. While machines can now replicate some pattern-recognition tasks previously done only by humans — ATMs reading the numbers written on a check, for instance — machines typically need to be given hundreds or thousands of examples to perform with similar accuracy.
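The gap described above can be illustrated with a toy sketch. The following is not the authors' method (their paper uses a richer probabilistic model); it is a minimal, hypothetical illustration of one-shot classification, where a new example is labeled by comparing it to a single stored example per class:

```python
# Toy one-shot classification: one stored example per class, classify a new
# input by which stored example it is closest to. All data here is made up.
import numpy as np

def one_shot_classify(prototypes, x):
    """Return the label whose single stored example is nearest to x."""
    return min(prototypes, key=lambda label: np.linalg.norm(prototypes[label] - x))

# One example per "character" class, as 4-dimensional feature vectors.
prototypes = {
    "A": np.array([1.0, 0.0, 1.0, 0.0]),
    "B": np.array([0.0, 1.0, 0.0, 1.0]),
}
print(one_shot_classify(prototypes, np.array([0.9, 0.1, 0.8, 0.2])))  # A
```

The point of the sketch is only that a learner can generalize from one example per class; closing the accuracy gap with humans requires far richer representations than raw distance.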
“It has been very difficult to build machines that require as little data as humans when learning a
Once the nature of the tweet was discovered, the markets corrected themselves almost as quickly as they had been skewed by the bogus information. But the event, known as Hack Crash, demonstrates the need to better understand how social media data is linked to decision-making in the private and public sectors, according to Tero Karppi, PhD, an assistant professor in the University at Buffalo College of Arts and Sciences’ Department of Media Study.
Based on its speed, Hack Crash was identified as a computer-based event, initiated by sophisticated algorithms designed to identify and evaluate Internet content that could influence markets. Those algorithms launched what amounted, in human terms, to a panicked trading spree, executing thousands of trades per second — all because of the assumed gravity of one social media posting.
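The mechanism at work can be caricatured in a few lines. This is not any real trading system, just a hypothetical toy rule showing how a single post matching alarming keywords can trigger an automated decision with no human judgment in the loop:

```python
# Toy caricature of keyword-triggered automated trading (entirely made up):
# a post containing any "alarm" word produces a SELL signal instantly.
ALARM_WORDS = {"explosion", "attack", "crash"}

def react_to_post(post):
    """Return a trading signal based on naive keyword matching."""
    words = set(post.lower().split())
    return "SELL" if words & ALARM_WORDS else "HOLD"

print(react_to_post("Breaking: explosion reported near markets"))  # SELL
print(react_to_post("Markets open flat this morning"))             # HOLD
```

Real market-scanning algorithms are far more sophisticated, but the structural point is the same: content is evaluated and acted on in milliseconds, before its veracity can be checked.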
“We need to begin to identify the different ways social media is being connected to modern finance. This includes an understanding of how things spread online and how the Internet infrastructure is designed for things to spread,” says Karppi, who with Kate Crawford of Microsoft Research and the MIT Center for Civic Media, analyzes the 2013 Twitter and Wall Street collision in
So the U.S. Department of Defense has given a $3 million grant to a team of computer scientists from the University of Utah and University of California, Irvine, to develop software that can hunt down a new kind of vulnerability that is nearly impossible to find with today’s technology.
The team is tasked with creating an analyzer that can thwart so-called algorithmic attacks that target the set of rules or calculations that a computer must follow to solve a problem. Algorithmic attacks are so new and sophisticated that only hackers hired by nation states are likely to have the resources necessary to mount them, but perhaps not for long.
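A classic instance of an algorithmic attack, offered here as a hedged illustration rather than anything from the Utah/Irvine project, is hash flooding: an attacker crafts inputs that all collide in a weak hash table, degrading lookups from roughly constant time to a linear scan per query.

```python
# Toy algorithmic-complexity attack: keys crafted to collide in a weak hash
# table push every lookup into one long bucket chain.
import time

BUCKETS = 64

def weak_hash(key):
    # Deliberately weak: sum of character codes (collisions easy to force).
    return sum(ord(c) for c in key) % BUCKETS

class WeakTable:
    def __init__(self):
        self.buckets = [[] for _ in range(BUCKETS)]

    def insert(self, key):
        self.buckets[weak_hash(key)].append(key)

    def contains(self, key):
        return key in self.buckets[weak_hash(key)]  # linear scan of one bucket

def lookup_time(keys):
    table = WeakTable()
    for k in keys:
        table.insert(k)
    start = time.perf_counter()
    for k in keys:
        table.contains(k)
    return time.perf_counter() - start

def colliding_key(i):
    # Append one padding character so every key has the same code sum
    # mod BUCKETS, forcing all keys into a single bucket.
    body = f"k{i}"
    pad = (-sum(ord(c) for c in body)) % BUCKETS
    return body + chr(65 + pad)

n = 2000
benign = [f"user{i}" for i in range(n)]          # spread across buckets
malicious = [colliding_key(i) for i in range(n)]  # all in one bucket
print(f"benign: {lookup_time(benign):.4f}s  malicious: {lookup_time(malicious):.4f}s")
```

The attack exploits no memory-corruption bug at all, which is why conventional vulnerability scanners miss it: every line of the code is "correct", and only its worst-case running time is hostile.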
“The military is looking ahead at what’s coming in terms of cybersecurity and it looks like they’re going to be algorithmic attacks,” says Matt Might, associate professor of computer science at the University of Utah and a co-leader on the team.
“Right now, the doors to the house are unlocked so there’s no point getting a ladder and scaling up to an unlocked window on the roof,” Might says of the current state of computer security. “But once all the doors get locked on the
The result, obtained by a team at Australia’s University of New South Wales (UNSW) in Sydney, appears in the international journal, Nature Nanotechnology.
The quantum code written at UNSW is built upon a phenomenon called quantum entanglement, which allows for seemingly counterintuitive effects such as the measurement of one particle instantly affecting another — even if they are at opposite ends of the universe.
“This effect is famous for puzzling some of the deepest thinkers in the field, including Albert Einstein, who called it ‘spooky action at a distance’,” said Professor Andrea Morello, of the School of Electrical Engineering & Telecommunications at UNSW and Program Manager in the Centre for Quantum Computation & Communication Technology, who led the research. “Einstein was sceptical about entanglement, because it appears to contradict the principles of ‘locality’, which means that objects cannot be instantly influenced from a distance.”
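The tension Einstein objected to can be made quantitative. In the standard CHSH form of Bell's inequality (a textbook formulation, not specific to this paper), correlations between measurements on two particles are combined into a single quantity:

```latex
% Two qubits in the entangled singlet state:
%   |\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|01\rangle - |10\rangle\right)
% CHSH combination of correlations for measurement settings a, a', b, b':
S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
% Any local hidden-variable theory:      |S| \le 2
% Quantum mechanics (Tsirelson bound):   |S| \le 2\sqrt{2}
```

Any "local" description of the kind Einstein favoured keeps |S| at or below 2, while entangled particles can violate that bound, reaching up to 2√2 ≈ 2.83.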
Physicists have since struggled to establish a clear boundary between our everyday world — which is governed by classical physics — and this strangeness of the quantum world. For the past 50 years, the best guide to that boundary has been a theorem called Bell’s Inequality, which states that no local description of the world can
Empathy is a basic human ability. We often feel empathy toward and console others in distress. Is it possible for us to empathize with humanoid robots? Since robots are becoming increasingly common in our daily lives, it is necessary to understand our interactions with robots in social situations.
However, it is not clear how the human brain responds to robots in empathic situations.
Now, researchers at the Department of Information Science and Engineering, Toyohashi University of Technology, in collaboration with researchers at the Department of Psychology, Kyoto University, have found the first neurophysiological evidence of humans’ ability to empathize with robots in perceived pain, and have highlighted differences between empathy toward other humans and empathy toward robots.
They performed electroencephalography (EEG) in 15 healthy adults who were observing pictures of either a human or robotic hand in painful or non-painful situations, such as a finger being cut by a knife. Event-related brain potentials for empathy toward humanoid robots in perceived pain were similar to those for empathy toward humans in pain. However, the beginning of the top-down process of empathy was weaker in empathy toward robots than toward humans.
“The ascending phase of P3 (350-500 ms after the stimulus presentation) showed a
A high-tech computer system is able to read samples of human tissue and aid pathologists in the identification of minute changes in cells that can indicate cancer is present. More than 10,000 slides were examined in the first phase of the study, which shows that pathologists are as accurate at diagnosing cancer on a computer as they are with a microscope.
Now Professor Nasir Rajpoot is working with University Hospitals Coventry and Warwickshire NHS Trust (UHCW) to develop the next generation of image analytics to use with this technology.
The groundbreaking technology has the power to help pathologists grade some types of tumours, including lung, prostate and bladder tumours, with precision. In prostate cancer, for example, this could make the difference between someone being offered surgery and being offered drug-based treatment.
The computer system, known as the Omnyx® Precision Solution™, can help pathologists see the small differences in cells just as they currently do with a microscope, allowing them to make sound decisions on many aspects of cancer diagnosis.
UHCW is the first in the UK to introduce this kind of innovation to its routine practice, meaning it is already benefitting patients.
The Omnyx system digitises slides
But when roboticists want to teach a robot how to do a task, they typically either write code or physically move a robot’s arm or body to show it how to perform an action.
Now a collaboration between University of Washington developmental psychologists and computer scientists has demonstrated that robots can “learn” much like kids — by amassing data through exploration, watching a human perform a task and determining how best to carry out that task on their own.
“You can look at this as a first step in building robots that can learn from humans in the same way that infants learn from humans,” said senior author Rajesh Rao, a UW professor of computer science and engineering.
“If you want people who don’t know anything about computer programming to be able to teach a robot, the way to do it is through demonstration — showing the robot how to clean your dishes, fold your clothes, or do household chores. But to achieve that goal, you need the robot to be able to understand those actions and perform them on its own.”
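Learning from demonstration can be sketched in its simplest possible form. This is not the UW system (which builds on probabilistic models of infant learning); it is a hypothetical toy in which the robot records (state, action) pairs from a human demo and later imitates the action whose recorded state is nearest to its current one:

```python
# Toy learning-from-demonstration: memorize (state, action) pairs from a
# human demo, then act by nearest-neighbor lookup on the current state.
import math

def learn_policy(demonstration):
    """demonstration: list of (state, action) pairs observed from a human."""
    def policy(state):
        nearest = min(demonstration, key=lambda sa: math.dist(sa[0], state))
        return nearest[1]
    return policy

# Hypothetical demo: 1-D gripper positions mapped to actions.
demo = [((0.0,), "reach"), ((0.5,), "grasp"), ((1.0,), "lift")]
policy = learn_policy(demo)
print(policy((0.45,)))  # nearest demonstrated state is 0.5 -> grasp
```

A real system must go far beyond lookup — it has to infer the *goal* of the demonstration so it can achieve it with its own, differently shaped body — but the sketch shows the basic loop: observe, store, generalize.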
The research, which combines child development research from the UW’s Institute for Learning & Brain Sciences Lab (I-LABS) with machine learning
“Searching Google still requires a lot of search,” says Ashok Goel, professor at Georgia Tech’s School of Interactive Computing. “Imagine if you could ask Google a complicated question and it immediately responded with your answer — not just a list of links to manually open. That’s what we did with Watson.”
Watson was trained by student teams in a class at Georgia Tech using 1,200 question-answer pairs (200 for each of six teams), which allowed them to “chat” with Watson and seek out inspiration for big design challenges in areas such as engineering, architecture, systems, and computing. The teams worked with the AI to learn about solutions that could be replicated from the natural world — something known as biologically inspired design — after first feeding Watson several hundred biology articles from Biologue, an interactive biology repository. Teams then posed questions to Watson about the research it had learned.
Questions included, “How do you make a better desalination process for consuming sea water?” Animals, it turns out, have a variety of answers for this, such as how seagulls filter out seawater salt through special glands. Another question asked, “How can manufacturers develop better solar cells for long-term space travel?” One answer: