AutoML: The Automation of Automation

Machine to Machine: AI makes AI

The next Big Thing in AI is likely to be the use of machine learning to automate machine learning itself. The idea that the routine tasks involved in developing a machine learning solution could be automated makes perfect sense. Machine learning development is a replicable activity with routine processes. Although total automation is improbable at the moment, even partial automation yields significant benefits. As the ranks of available machine learning experts grow thinner, the ability to automate some of their time-consuming tasks means that they can spend more time on high-value functions and less on the nitty-gritty of model building and iteration. This should, in theory, free more data scientists to work on the vast number of projects envisioned in a ubiquitous AI environment, and make it possible for less proficient practitioners to use machine learning routines without extensive training.

Although automated machine learning (AutoML) is appealing, and startups have emerged with a variety of promises, it is clear that this capability is not yet fully developed. There are, however, numerous systems suitable for use now in selected environments and for specific problems. Among these programs are Google AutoML, DataRobot, Auto-WEKA, TPOT, and auto-sklearn. Many are open source or freely available. Major technology firms, including Google, Microsoft, Salesforce, and Facebook, are also rapidly developing AutoML routines, and the area is being pursued with increasing urgency.

Current AutoML programs mainly take care of the highly repetitive tasks that machine learning requires to create and tune models. The chief automation targets at present are selection of appropriate machine learning algorithms, tuning of hyperparameters, feature extraction, and iterative modeling. Hyperparameter tuning is particularly significant because it is critical to deep neural networks. Some AutoML routines have already demonstrated advances over manual human performance in some of these areas. Other processes that could support AutoML, such as data cleaning, are aided by a separate body of machine learning tools that could be folded in. In fact, AutoML itself exists in a larger context of applying ML to data science and software development, an effort that shows promise but remains at an early stage.
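To make this concrete, here is a minimal sketch of automated pipeline search using TPOT, one of the open-source tools named above, run on a toy scikit-learn dataset. The dataset and the tiny search budget are illustrative choices only; a real project would allow far more generations and evaluation time.

    # Minimal AutoML sketch with TPOT: genetic programming searches over
    # algorithms, preprocessing steps, and hyperparameters simultaneously.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from tpot import TPOTClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)

    # Small, illustrative search budget (generations x population size).
    tpot = TPOTClassifier(generations=5, population_size=20,
                          verbosity=2, random_state=42)
    tpot.fit(X_train, y_train)
    print(tpot.score(X_test, y_test))

    # Export the winning pipeline as ordinary scikit-learn code.
    tpot.export("best_pipeline.py")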

Even with the recent focus on AutoML, the capability of these programs has yet to reach the stage where they could be relied upon to achieve a desired result without human intervention. Data scientists will not lose their jobs in the near future; as others have pointed out, humans are still required to set the objectives and verify the results of any machine learning operation. Transparency is also of the utmost importance in determining whether a model is accurately selecting useful results or has settled upon an illusion of accuracy.

To date, many AutoML programs have operated successfully in test cases, with problems emerging as the size of the data set rises or the required operation becomes more complicated. An AutoML solution must not only embrace a wide range of ML models and techniques, but must at the same time handle the large amounts of data that will be run through innumerable testing iterations.

AutoML, and indeed other forms of automated data science, are likely to continue to advance. It makes sense for machines to add a layer of thinking about thinking on top of the specific task layer. A machine-driven approach to developing the automation of automation makes sense, not only in reducing the human component, but also in ensuring the capacity to meet the demands of an ever-expanding use of AI in business. Arguably, development of a more autonomous AutoML would be an important step toward Artificial General Intelligence.

Improvement in this area is likely to be swift, given the urgency of the data scientist shortage at a time when all companies are advised to invest in AI. An ambitious DARPA program, Data-Driven Discovery of Models (D3M), aims to develop techniques that automate machine learning model building from data ingestion to model evaluation. Begun in June 2016, it is furthering interest in AutoML approaches. Among AutoML startups, one standout is DataRobot, which recently raised $54 million, bringing its total funding to $111 million. Meanwhile, a growing body of research in academia, as well as within corporate research teams, is focused on cracking a problem whose solution could yield something like a user-friendly machine learning platform.


 


Smart Farming with AI and Robotics: The Video

Following up on the previous post about AI and robotics in agriculture (Agricultural Robots and AI in Farming: Promises and Concerns), it seemed appropriate to provide some video on this fascinating and highly significant area. Agricultural robots face substantial challenges in handling an enormous variety of tasks; they also need to take special care in handling plants, produce, and animals. Agriculture is a critical area of development that often goes unnoticed in the Industry 4.0 story. But these solutions are proliferating and are likely to have enormous effects upon employment, finance, and society.

The videos are a mixture of talks, presentations, teaching material and product demonstrations available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

The Future of Farming with AI: Truly Organic at Scale (Machine Learning Society)

Published on May 17, 2017

A talk by Ryan Hooks, CEO & Founder, Huxley. Weblink with slides at http://www.mlsociety.com/events/the-f…

As climate change and global demographics put excessive strain on the traditional farming model, the need for agriculturally intelligent solutions is vital. By 2050, the world population will increase by over 2 billion people. Current crop yields and freshwater resources will not be sufficient to sustain a population of over 9 billion.

On May 15th, 2017, the Machine Learning Society hosted this event to showcase high-tech farming techniques used in vertical and urban farming. Our keynote speaker is Ryan Hooks of Huxley. Huxley uses computer vision, augmented reality (AR), and AI to greatly improve yield while driving down cost and resource requirements. Huxley is creating an “operating system for plants” to grow with 95% less water, 2x the speed, 2/3 less carbon output, and half the nutrients needed.

Automation, Robotics & Machine Learning in Agriculture (Blue River Technology)

Published on May 13, 2016

Keynote presentation by Ben Chostner, VP Business Development of Blue River Technology, at the May 2016 Agri Investing Conference in Toronto.

Farmers: These ARE the Droids You’re Looking For (New Scientist)

Published on May 18, 2016

Autonomous robots created at the University of Sydney can count fruit on trees, spray weeds, and even herd cows. All pictures courtesy of Professor Salah Sukkarieh, University of Sydney, Australia.

Robots Take Over the Dairy Farm (mooviechannel)

Published on Jan 8, 2015

Gareth Tape of Hardisworthy Farm in Devon calls the technology ‘life-changing’ – both for him and his cows. Watch the video to find out why.

Robots and Drones: Agriculture’s Next Technological Revolution, NHK Documentary (Japan Tokyo)

Published on Jul 9, 2017

While still a student, Shunji Sugaya started an IT company focused on artificial intelligence and robots for use on the farms of the future. Agriculture in Japan faces serious challenges like an aging population and shrinking workforce. Sugaya imagines robots and drones that reduce labor demands and farms that are run using big data. Today we look at Sugaya and the young engineers at his company in their efforts to shape the future of agriculture and fishing with cutting-edge technology.


 

 


Agricultural Robots and AI in Farming: Promises and Concerns

With constant attention given to Industry 4.0, autonomous vehicles, and industrial robots, one significant area of robotics is often under-reported: the growing use of autonomous agricultural robots and AI-driven smart systems in agriculture. Although automation has been practiced on the farm for many years, it has not been as widely visible as its cousins on the shop floor. But the technology being deployed on farms today is likely to have far-reaching consequences.

We are on the edge of an explosion in robotics that will change the face of agriculture around the world, affecting labor markets, society, and the wealth of nations. Moreover, today’s developments are global in extent, with solutions being created in the developing world as well as the developed world, stretching across every form of agriculture, from massive row-crop agribusiness and livestock management down to precision farming and crop management in market gardens and enclosed spaces.

Agriculture Robots Today

Agriculture is vital to the health of the ever-expanding human population and to the wide range of interrelated industries that make up the agricultural sector. Processes include everything from planting, seeding, weeding, milking, herding, and gathering to transportation, processing, logistics, and ultimately the market, be it the supermarket or, increasingly, online retail. The UN predicts that world population will rise from 7.3 billion to 9.7 billion by 2050. Robots and intelligent systems will be critical in improving food supplies. Analyst firm Tractica estimates that shipments of agricultural robots will increase from 32,000 units in 2016 to 594,000 units annually by 2024, when the market is expected to reach $74.1 billion per year.

While automation has been in place for some time and semi-autonomous tractors are increasingly common, farms pose particular problems for robots. Unlike highway travel, which is difficult enough for autonomous vehicles, agricultural robots must respond to unforeseen events and must handle and manipulate objects within their environment. AI makes it possible to identify weeds and crops, to discern crop health and status, and to act delicately enough to avoid damage in operations such as picking. At the same time, these robots must navigate irregular surfaces and pathways, find locations on a very fine scale, and sense specific plant conditions across the terrain.
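As a rough illustration of the perception component behind weed and crop identification, the sketch below fine-tunes a generic pretrained image classifier to separate crop images from weed images. Everything here is assumed for illustration: the field_images folder with crop/ and weed/ subfolders, the two-class setup, and the single training pass stand in for the much larger datasets, models, and detection pipelines that production systems use.

    # Transfer-learning sketch: adapt a pretrained ResNet to a two-class
    # crop-vs-weed task (hypothetical layout: field_images/{crop,weed}/).
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    tf = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],   # ImageNet statistics
                             [0.229, 0.224, 0.225]),
    ])
    data = datasets.ImageFolder("field_images", transform=tf)
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)  # crop vs. weed head

    opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:      # a single pass, for illustration
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()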

Agricultural robots using AI technologies are responding to economic pressures in the agricultural sector, as well as to rising labor costs and immigration restrictions. The first areas of general impact are large businesses, since robots carry high investment and maintenance costs and skilled operators are scarce. Conditions will change as robots become cheaper, more widely available, and capable of performing more diverse tasks. This will require evolution of AI technologies, expansion of collaboration abilities among robots, and combined human-machine operations. The ability of robots to work with humans could be particularly significant, given the wide range of discrimination tasks involved in food safety, quality control, and weed and pest removal. Robots will be guided by human supervisors with skills in agriculture and knowledge of both robotic and agricultural systems.

Opportunities and Growth

According to the International Federation of Robotics, agricultural robots are likely to be the fastest-growing robotics sector by 2020. Different sectors of the agricultural market will respond differently. Large businesses with row crops are early adopters, since they have funds to invest and shrinking margins; for these companies, there are huge benefits in reducing labor costs and instituting more precise farming methods. As picking, weed-killing, and pest-removal systems become more widely available, citrus orchards and difficult-to-pick crops are likely to be next. Robots capable of picking citrus, berries, and other delicate fruit in difficult locations are already starting to appear. There are applications in virtually every part of the agricultural sector.

Other uses will appear as robots become more common and less expensive. Robots can make a difference not only in harvesting, but also in the precise application of water and fertilizer. In areas where water is contentious, such as California and the Middle East, more efficient watering will make it possible to grow larger crops with less water, helping to avert political and social crises.

In the developing world, opportunities are enormous but individual farmers have fewer resources. For this reason, smaller robots and robot clusters are likely to be more widely used, possibly with emerging Robot-as-a-Service (RaaS) operators providing labor on a per-use or rental basis. Robotics will be able to save enormously on chemical costs and irrigation water, which will have significant economic impacts as well as environmental benefits.

Progress and Caution

As the use of agricultural robots continues to expand, they will take on increasingly complex tasks and replace a larger portion of agricultural labor, a critical component of global employment. In many countries, particularly in the developing world, this will create shifts in employment that accelerate trends such as rural migration to cities, and it will reduce the overall availability of labor-intensive jobs. More training will be needed by more people; this will affect education, socialization, and finance, particularly in countries with large populations.

There are many implications as AI and agricultural robots are deployed. New ideas are blossoming, startups are on the rise, and we can expect a wide range of consequences as next-generation agricultural robotics and AI continue to emerge.


 


Coming Soon to Nagoya: RoboCup 2017: The Video

It’s almost time for this year’s RoboCup competition in Nagoya, Japan, July 27-30. The event has expanded to include more discussions and competitions across a diverse range of robotics activities. The robots demonstrate incremental developments in autonomous goal-directed behavior, improving a little each year. RoboCup is a springboard for discussion of critical areas of robotic development, and it also provides a showcase for university robotics programs and recruitment.

RoboCup has been held annually for 21 years. More than 3,500 dedicated scientists and developers from more than 40 countries are expected. The event features a number of activities:

RoboCup Soccer includes autonomous mobile robots separated into leagues: Humanoid, Standard Platform, Middle Size, Small Size and Simulation.

RoboCup Industrial includes RoboCup Logistics and RoboCup@Work, competitions between industrial mobile robots that focus on logistics and warehousing systems in anticipation of Industry 4.0.

RoboCup Rescue includes the Rescue Robot League and the Rescue Simulation League. It employs technologies developed in RoboCup Soccer to promote simulations that will contribute toward the development of autonomous robots for use in rescue efforts at disaster sites.

RoboCup@Home applies these technologies to people’s everyday lives; robots are evaluated according to how well they cooperate with human beings to complete various tasks.

RoboCupJunior includes Soccer, Rescue and Onstage Competition to stimulate children’s curiosity and encourage them to participate in robot design and production.

Official RoboCup 2017 Video

Best Goals of RoboCup 2016 (CNET)

Published on Jul 14, 2016

Can Robots Beat Elite Soccer Players? (TEDxYouth)

Published on Apr 23, 2013

A Professor of Computer Science at UT Austin, Dr. Peter Stone has interests that run from competing in the international RoboCup soccer tournament to developing novel computing protocols for self-driven vehicles. His long-term research involves creating complete, robust, autonomous agents that can learn to interact with other intelligent agents in a wide range of complex and dynamic environments.

What Soccer-Playing Robots Have to do with Healthcare (TEDx Talks)

Published on Sep 29, 2012

Steve McGill is a second year PhD student at the University of Pennsylvania studying humanoid robotics and human robot interaction under Professor Dan Lee. As part of Team DARwIn, he captured the first place medal at the international RoboCup humanoid soccer competition in Istanbul, Turkey. Steve is interested in applying this robotic technology for deployment in the field for human intercommunication.

 


 


RPA as the Software Salient of AI: The Video

Robotic Process Automation (RPA) is bringing together elements of robotics and AI to fuel a new vision of automation across numerous industries. RPA is not about industrial robots; it is about knowledge-based processes and services handled through intelligent software. Some of its most intriguing uses are within the financial services industry.

RPA is growing quickly, but the concept is somewhat flexible and still being refined. AI is a great enabler, but software robots have been performing many of these tasks for some time. Adoption figures are likely to be misleading, since any organization can tick the “RPA box” with a few simple software agents. RPA represents the beginning of AI/process integration, and it is generally understood to be part of the transition to a more complex, AI-centered back office.
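As a toy example of what an entry-level software robot actually does, the sketch below automates a routine back-office rule: invoices under a set limit flow straight through, while everything else is flagged for human review. The file name, field names, and approval threshold are all hypothetical.

    # A rules-driven software "robot" in miniature: straight-through
    # processing for routine invoices, escalation for exceptions.
    import csv

    APPROVAL_LIMIT = 10_000.00  # hypothetical policy threshold

    def process_invoices(path):
        approved, flagged = [], []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                amount = float(row["amount"])
                if amount <= APPROVAL_LIMIT and row["vendor_id"]:
                    approved.append(row["invoice_id"])
                else:  # over limit or missing vendor: a human decides
                    flagged.append(row["invoice_id"])
        return approved, flagged

    approved, flagged = process_invoices("invoices.csv")
    print(f"{len(approved)} auto-approved, {len(flagged)} sent for review")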

In these videos, we review discussions of the current state of RPA, what it entails, and where it is heading. The videos are provided under standard YouTube license, with descriptions from their landing pages lightly edited.

Robotic Process Automation (RPA): The Next Productivity Revolution (PwC Australia)

Published on Mar 7, 2016

Being relatively simple to implement, RPA can deliver benefits quickly, with returns on investment of over 300%.

RPA Future (EdgeVerve)

Published on Mar 16, 2017

Robin George, AVP & Head of Business Development, EdgeVerve Systems (an Infosys company), interviews Craig Le Clair, VP & Principal Analyst, Forrester.

 

Applying Robotic Process Automation in Finance and Risk (Accenture)

Published on Oct 12, 2016

Accenture Finance & Risk Practice is helping our financial services clients better manage the onslaught of data and regulations with the use of robotic process automation (RPA). Robots can drive cost, time, and accuracy efficiencies, and can work 24/7 on key tasks such as anti-money laundering and order-to-cash. Ultimately, this frees up valuable employees to focus on higher-value work that only humans can do.

HfS Webinar: Beyond RPA (HfS Research)

Published on Apr 24, 2017

With RPA already on its path to being adopted by mainstream enterprise organizations, the question that keeps coming up is where the industry is heading and what is next for those already on this path.

The emergence of AI- and machine-learning-powered robots is deeply entwined with the way organizations are looking to structure their front- and back-office operations in this emerging Digital OneOffice environment.

Phil Fersht, CEO and Chief Analyst of HfS Research, Guy Kirkwood, COO of UiPath, Andrew Rayner, CTO of Genfour, and David Poole, CEO of Symphony Ventures will discuss the cultural shift brought by the advancements in cognitive RPA solutions.


 

 


SoftBank Buys Boston Dynamics, Promises More Robots

Japanese technology company SoftBank is acquiring acclaimed robot maker Boston Dynamics from Alphabet (Google), along with Japanese bipedal robotics company Schaft; both were acquired by Alphabet in 2013. The deal is part of SoftBank’s move into the robotics space, exemplified by the SoftBank Robotics “Pepper” humanoid robot, whose roles have recently been expanding beyond the consumer space. SoftBank’s robot lineup now includes Pepper, Boston Dynamics’ BigDog and Handle, Schaft’s S-One and related projects, and Fetch Robotics’ warehouse fulfillment robots, making it a versatile player in this space with multi-mission capability.

According to the press release, Masayoshi Son, Chairman and CEO of SoftBank Group, said:

“Today, there are many issues we still cannot solve by ourselves with human capabilities. Smart robotics are going to be a key driver of the next stage of the Information Revolution, and Marc and his team at Boston Dynamics are the clear technology leaders in advanced dynamic robots. I am thrilled to welcome them to the SoftBank family and look forward to supporting them as they continue to advance the field of robotics and explore applications that can help make life easier, safer and more fulfilling.”

Boston Dynamics is known for its DARPA-backed, military-oriented robots and dynamic prototypes, including BigDog, Handle, and the humanoid robot Atlas. It has been struggling to find a market for its products at this stage of their development, and Alphabet has been trying to sell the operation since last year.

SoftBank has a wide range of related interests and commitments within this area, including advanced telecommunications, internet services, AI, smart robotics, IoT, clean energy technology providers, and ARM processors. It entered the robotics market through the acquisition of Aldebaran Robotics in 2012. Aldebaran, creator of the Nao and Romeo robots, was renamed SoftBank Robotics and created a social robot called Pepper, which is being tested in a growing range of consumer and business settings.

SoftBank’s robotics ventures are centered in its Tokyo-based subsidiary SoftBank Robotics Holding Corp, established in 2014, with offices in Japan, France, the U.S., and China. SoftBank Robotics has more than 500 employees working in Paris, Tokyo, San Francisco, Boston, and Shanghai. Its robots are used in more than 70 countries for research, education, retail, healthcare, tourism, hospitality, and entertainment.

The CEO of Boston Dynamics Explains Each Robot in the Fleet (jurvetson)

SoftBank Robotics Meet Pepper (SoftBank Robotics America)


 

 

 


Evolving the Cognitive Man-Machine

Human intelligence and artificial intelligence will increasingly interact as we extend the range of mechanical cognition to include sensory interpretation, role playing, and sentiment (see Affective Computing, Intersecting Sentiment and AI: The Video and Shifting the Boundaries of Human Computer Interaction with AI: The Video). Such advancement will both create new conflicts and increase our understanding of the human mind by providing an objective platform for comparison.

But the cross-pollination of human and machine understanding does not stop there. As digital assistant roles progress, robots will need to understand and influence people. They will need to win negotiations, and devise strategies of engagement. AI will need to become increasingly cognizant of human thought patterns and social characteristics. This will make them a part of the greater “human conversation.”

As cognitive systems are assigned roles in which they must take the lead or suggest actions, AI will be playing a human game with human pieces. Social interaction is a construct: Knowing the rules, anyone can play. This will lead to competition and friction between automata and humans across a wide range of activities.

Even as AI continues to advance, human capabilities will be amplified through integrated advisers, prostheses, and avatars that will vastly increase our ability to process information, remember and assemble concepts, travel to remote locations, and communicate, all at the speed of light.

Robots and mankind are locked in a co-evolution that will ultimately lead to hybridization. We can add new robotic capabilities much faster than we can evolve them on our own. Simple toolmaking was the first step along this path; the final step will come when the intersection of humanity and machine becomes blurred and, finally, almost invisible.

Organisms adapt to fill a niche; when they can no longer adapt, their cousins take over. Evolution is about survival of the fittest, not of the strongest or the largest or even the smartest. Technology is an evolution of tools to fit a world defined by humans, a world that will continue to be shaped by human thought. Hybridization is inevitable because it will augment human capability. Technology can evolve and be adapted much more quickly than native biology, so further evolution of the species will be based on technology.

At present, we are barely on the doorstep of hybridization. We have clumsy “wearables”; limited but promising smart prostheses; the beginnings of AR concepts, from Google Glass to HoloLens; social industrial robots that can work with people; digital assistants that can insert themselves into social settings; and an increasing range of smart devices that bridge the human context and the IoT.

In a somewhat distant future, we will likely view this as simply “making better tools.” The alarming possibilities we envision today will be the commonplace realities. As with the unknown Chinese inventor of printing blocks for text, we will ignore revolutionary change and create a narrative in which everything is consistently normal.

Twilight of the Gods? Perhaps. For the present, we are faced with the problem of understanding these changes and applying new technologies in a way that keeps society benefiting and fills the multitude of interstices. This will create great opportunity, but it will also demand innovation directed specifically toward human-machine interaction.

Ultimately, of course, this solves the problem of “the Singularity,” and a robotic Apocalypse. To quote Walt Kelly’s Pogo comic strip, “we have met the enemy and he is us.”


 


Affective Computing, Intersecting Sentiment and AI: The Video

Affective Computing is the combination of emotional intelligence with artificial intelligence. The role of emotion in AI is coming increasingly into focus as we attempt to integrate robots, digital assistants, and automation into social contexts. Interaction with humanity always involves sentiment, as demonstrated by the growing understanding that self-driving vehicles need to understand and react to their emotional surroundings, such as responding to aggressive driving. Meanwhile, sentiment analysis is growing independently in marketing as companies vie to create emotional responses to products and react to social media comments. And in the uneven understanding of this technology, some still separate human from cyber systems on the basis of emotion.

AI must use and respond to emotional cues; this must be considered a component of the thought process. Companies are now beginning to focus upon this area and to combine it with the other elements of AI to build a more responsive, human-interactive technology.
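For a concrete sense of the sentiment-analysis building block mentioned above, here is a minimal sketch using the VADER scorer that ships with NLTK, one common open-source option; the sample comments are invented.

    # Score the emotional polarity of short texts with NLTK's VADER.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
    sia = SentimentIntensityAnalyzer()

    for comment in [
        "I love how responsive this assistant is!",
        "This update is infuriating and broke everything.",
    ]:
        scores = sia.polarity_scores(comment)
        # 'compound' runs from -1 (most negative) to +1 (most positive).
        print(f"{scores['compound']:+.2f}  {comment}")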

Following are a few videos explaining where Affective Computing is heading. These are under standard YouTube license, and the descriptive information is, as usual, provided from the video landing page with minor edits.

The Human Side of AI: Affective Computing (Intel Software)

Published on Feb 13, 2017

Affective Computing can make us aware of our emotional state, helping us make better decisions; it can help us to help others, or help machines make decisions that enrich our lives. There is another exciting use for emotional data: machine learning, in which data is collected so the machine refines its understanding and ultimately better personalizes your experiences.

Imagine if the environments where you live and interact could personalize your experience based on how you feel in that moment. Imagine being able to provide superior caregiving to the elderly, children, and people with limited abilities.

The introduction is provided below. Some additional videos:

Affective Computing Part 1: Interpreting Emotional States to Unleash New Experiences

Affective Computing Part 2: Global User Insights and Recommendations for Developers

Artificial Intelligence meets emotional intelligence – CEO Summit 2016 (Mindtree Ltd.)

Published on Nov 8, 2016

With Artificial Intelligence (AI) gaining credence, Mindtree’s Chairman KK talks about the evolving roles of people and the importance of fostering emotional quotient (EQ) to remain relevant. He elaborates upon how Mindtree is helping its retail, finance, travel and hospitality clients reimagine customer service, the area most touched by AI and automation.

How Virtual Humans Learn Emotion and Social Intelligence (Tested)

Published on Aug 26, 2016

At USC ICT’s Virtual Humans lab, we learn how researchers build tools and algorithms that teach AI the complexities of social and emotional cues. We run through a few AI demos that demonstrate nuanced social interaction, which will be important for future systems like autonomous cars.

Shot by Joey Fameli and edited by Tywen Kelly
Music by Jinglepunks

Stanford Seminar: Building Machines That Understand and Shape Human Emotion (stanfordonline)

Published on Feb 3, 2017

Jonathan Gratch, Research Professor of Computer Science and Psychology at the University of Southern California (USC)

Affective Computing is the field of research directed at creating technology that recognizes, interprets, simulates and stimulates human emotion. In this talk, I will broadly overview my fifteen years of effort in advancing this nascent field, and emphasize the rich interdisciplinary connections between computational and scientific approaches to emotion. I will touch on several broad questions: Can a machine understand human emotion? To what end? Can a machine “have” emotion, and how would this impact the humans that interact with them? I will address these questions in the context of several domains, including healthcare, economic decision-making and interpersonal-skills training. I will discuss the consequences of these findings for theories of intelligence (i.e., what function does emotion serve in human intelligence and could this benefit machines?) as well as their practical implications for human-computer, computer-mediated and human-robot interaction. Throughout, I will argue the need for an interdisciplinary partnership between the social and computational sciences around the topic of emotion.


 

 


Robot Invasion Begins This Week: The Video

London’s Science Museum is putting on perhaps the largest and most significant robot exhibition ever, featuring both the cutting-edge present and examples from the distant past. While this blog’s “The Video” offerings are generally not posted back-to-back, this event seems newsworthy, lasting, and genuinely in need of video: most news coverage fails to offer an adequate picture.

Robotic history, particularly focusing on humanoids, is important in demonstrating evolution of the robot concept, as well as the gradual development of increasingly sophisticated capabilities. Mankind has always wanted to create a Golem; but the capabilities imagined depend upon the mud with which it is wrought.

To remedy the coverage gap, we have assembled a set of videos showing different aspects of the event. Per usual, they are YouTube licensed and provided with identifying information clipped from their landing pages.

Robots: 500 Years in the Making (Science Museum)

Published on Feb 7, 2017

From the dawn of mechanized human forms to cutting-edge technology fresh from the lab, curator Ben Russell looks at robots and reveals the astonishing 500-year quest to make machines human.

Seven Must-See Robots (Science Museum)

Published on Feb 7, 2017

Join curator Ben Russell for the seven robots you must see.

Robots Through the Ages go on Show in London (AFP news agency)

Published on Feb 7, 2017

From an 18th-century clockwork swan to a robot quoting Shakespeare, a new exhibition at the Science Museum in London charts the 500-year history of machines that fascinate and terrify in equal measure.

Backstage at Science Museum’s Robots Exhibition: ‘You can always unplug them’ (Guardian Science and Tech)

Published on Feb 7, 2017

The Guardian’s design critic, Oliver Wainwright, goes behind the scenes at the Science Museum’s robots exhibition with the curator, who introduces him to some of the most advanced humanoid robots in the world.

From a lifelike baby to robots without conscience, the curator explains where the technology is at, who may use it and how far it has to go.


Brain Machine Interface: Cornerstone of the Next AI

As we move into the era of AI, we can expect changes across a broad range of areas. AI will vie with the extension of human intelligence and with human-machine hybridization. As we bind the complex and supremely powerful human intelligence to the array of capabilities available in AI, there will be an inevitable transformation in human thought and action. This transformation will require a close interface between humans and machines. We have already seen the beginning of a close connection between Internet search and familiar thought processes: people react more quickly to information when they have real-time access to supporting data. But the processes of cognition demand a much faster response to integrate the vast incoming data streams with other cognitive functions.

Because of this connection between AI and device interfaces, a range of prostheses will develop as the interface grows in importance. The impetus to provide a direct connection between mechanical systems and the brain goes back decades, and even further, perhaps, if we include 19th-century experiments on the electro-stimulation of animals. To date, these experiments have remained relatively limited in function, involving only simple messages and simple capabilities, with limited human awareness and response. However, as our understanding of thought processes deepens and interface devices continue to increase in sophistication, we will see radical advances that lead to a new era of human-machine interaction.

Recent developments in this area suggest the possibility of an explosion in capability, led by the ability to read biological thought through invasive and noninvasive techniques; to act upon human thought streams through the same mechanisms; and, as a result, to comprehend a much wider scope of thought processes and action potentials.

The chief direct human-machine interface possibilities come from two areas: insertion of a minute chip into a biological system, and use of external sensors and headsets to read electrical activity in the brain and interpret or act upon it. Recent developments have demonstrated the capability to move artificial limbs; to retain sensation in artificial prostheses; to decode images seen in dreams; to recognize emotional states; and to understand and act upon thought streams from individual neurons or neural matrices. Growth in this area has produced smart artificial limbs, artificial cornea implants, cochlear implants, and a range of similar assistive devices, each of which requires specific input from neural systems and provides an output based upon some form of processing.
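To ground the noninvasive, external-sensor side of this, the sketch below shows the classic pipeline in schematic form: frequency-band power features are extracted from short EEG windows and fed to a simple classifier that guesses the user’s intended state. The data here is synthetic and the band choices merely typical; in practice a headset SDK would supply the samples and labeled calibration trials.

    # Schematic EEG pipeline: band-power features -> linear classifier.
    import numpy as np
    from scipy.signal import welch
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    FS = 250  # sampling rate in Hz, typical for consumer headsets

    def bandpower(window, lo, hi):
        freqs, psd = welch(window, fs=FS, nperseg=FS)
        band = (freqs >= lo) & (freqs <= hi)
        return psd[:, band].mean(axis=1)   # mean power per channel

    def features(window):
        # Mu (8-12 Hz) and beta (13-30 Hz) bands, commonly used in
        # motor-imagery interfaces.
        return np.concatenate([bandpower(window, 8, 12),
                               bandpower(window, 13, 30)])

    # Synthetic stand-in: 100 two-second, 8-channel windows, binary labels.
    rng = np.random.default_rng(0)
    windows = rng.standard_normal((100, 8, 2 * FS))
    labels = rng.integers(0, 2, size=100)

    X = np.array([features(w) for w in windows])
    clf = LinearDiscriminantAnalysis().fit(X, labels)
    print("training accuracy:", clf.score(X, labels))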

The capabilities of these devices grow as their range increases. Recent developments include cheaper local, and even some remote, electrical brain-reading capabilities. On the implant side, sensors have become smaller and better suited to insertion into areas of the brain or other parts of the body; most recently, a DARPA project has explored a tiny chip with stent-like expansion that can be inserted noninvasively through the bloodstream and situated remotely. On another level of human brain interfacing, we have seen the capability to grow biological neuron nets and integrate them with other systems. This increases the possibilities for a variety of direct brain interface devices, as well as helping to improve our overall understanding of what is required to build an intelligence.

The need to create closer integration between man and machine is driven by the same requirements as increased AI within the enterprise. AI and big data demand vast increases in real-time computation, which leads directly to improved networking and the capability to rapidly draw resources into the cognitive mix. Humans will need to keep up with this ever-increasing processing of information. This will create a competitive market for interface devices that make it easier for individuals to become first responders in an artificially cognitive world. Such interfaces will also provide subconscious control possibilities for intelligent systems, such as engaging fight-or-flight reflexes to avoid danger, with enhanced capabilities linked directly between devices and implants. Additionally, the capability to read and respond to human thought will require new languages of machine communication and new forms of security, and it will create the ability to link human beings together at the level of thought, with the possibility of instantaneous problem-solving through immediate ad-hoc networks of human and AI components.

What does this mean ethically? Human thought is evolving quickly, and human values will always drive this innovation. Philosophically, we live within a human-centered world whose definitions and actions are all circumscribed by human beings with their genetic predispositions. We do not fully understand the nature of human world construction, due to our direct immersion in it, and this makes it more difficult to determine the end result. But we are unlikely to yield world-building resources and capabilities to a machine. With control of the definitions and the possibilities, human beings will continue to reign supreme.

In other respects, it is important to remember that this is a long-term evolution. Concepts of jobs and work location are all capable of change. These are human definitions that fit the world created after the Neolithic Revolution and further molded by the Industrial Revolution. For example, we think of cities as central to culture, but there is no longer the same need for cities. Likewise, there is no longer a need for jobs as we have defined them; yet society must still channel people into activities that are beneficial to it.

In the immediate sense, the further development of human-AI initiatives creates new possibilities for autonomous and semi-autonomous robots, faster and more efficient business processes, and possibly greater innovation and collaboration. Companies need to understand how this type of interface could offer specific advantages to their firms. It could make the processing of information much quicker, but it will also raise ethical and legal issues that need to be explored. The possibilities are truly earthshaking. But the challenges are enormous, and the repercussions across social, economic, and political spheres are likely to be revolutionary.