Agricultural Robots and AI in Farming: Promises and Concerns

With constant attention given to Industry 4.0, autonomous vehicles, and industrial robots, one significant area of robotics is often under-reported—the growing use of autonomous agricultural robots and AI-driven smart systems in agriculture. Although automation has been practiced on the farm for many years, it has not been as widely visible as its cousins on the shop floor. But the technology being deployed on farms today is likely to have far-reaching consequences.

We are on the edge of an explosion in robotics that will change the face of agriculture around the world, affecting labor markets, society, and the wealth of nations. Moreover, today's developments are global in extent, with solutions being created in the developing world as well as the developed world, stretching across every form of agriculture, from massive row-crop agribusiness and livestock management down to precision farming and crop management in market gardens and enclosed spaces.

Agricultural Robots Today

Agriculture is vital to the health of the ever-expanding human population, and to the wide range of interrelated industries that make up the agricultural sector. Processes include everything from planting, seeding, weeding, milking, herding, and gathering, to transportation, processing, logistics, and ultimately the market—be it the supermarket or, increasingly, online retail. The UN predicts that world population will rise from 7.3 billion to 9.7 billion by 2050. Robots and intelligent systems will be critical in improving food supplies. Analyst company Tractica estimates shipments of agricultural robots will increase from 32,000 units in 2016 to 594,000 units annually by 2024—when the market is expected to reach $74.1 billion per year.

While automation has been in place for some time and semi-autonomous tractors are increasingly common, farms pose particular problems for robots. Unlike highway travel, which is difficult enough for autonomous vehicles, agricultural robots must respond to unforeseen events and also handle and manipulate objects within their environment. AI makes it possible to identify weeds and crops, discern crop health and status, and act delicately enough to avoid damage in tasks such as picking. At the same time, these robots must navigate irregular surfaces and pathways, find locations on a very fine scale, and sense specific plant conditions across the terrain.
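
To make these discrimination tasks concrete, the sketch below shows how a confidence-gated weed/crop decision might look in code. It is a minimal illustration with a hypothetical model, labels, and threshold, not any vendor's actual system:

```python
# Minimal sketch: confidence-gated weed/crop discrimination for a field robot.
# Hypothetical model, labels, and threshold -- not any vendor's actual system.
import torch
import torch.nn as nn

LABELS = ["crop", "weed", "soil"]   # illustrative classes
ACT_THRESHOLD = 0.90                # only act autonomously when confident

class TinyPlantClassifier(nn.Module):
    """A deliberately small CNN standing in for a production perception model."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def decide(model, patch):
    """Return an action for one image patch: spray, skip, or defer to a human."""
    with torch.no_grad():
        probs = torch.softmax(model(patch.unsqueeze(0)), dim=1)[0]
    conf, idx = probs.max(0)
    if LABELS[idx] == "weed" and conf >= ACT_THRESHOLD:
        return "spray"   # high confidence: act autonomously
    if conf < ACT_THRESHOLD:
        return "defer"   # uncertain: flag for a human supervisor
    return "skip"        # confident it is crop or bare soil

model = TinyPlantClassifier().eval()
print(decide(model, torch.rand(3, 64, 64)))  # untrained model: likely "defer"
```

The "defer" branch reflects the supervised model described above: rather than acting on a low-confidence classification, the robot hands the case to a human supervisor.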

Agricultural robots using AI technologies are responding to economic pressures in the agricultural sector, as well as to rising labor costs and immigration restrictions. The first areas of general impact are in large businesses, since robots have high investment and maintenance costs and there is a lack of skilled operators. Conditions will change as robots become cheaper, more widely available, and capable of performing more diverse tasks. This will require the evolution of AI technologies, expanded collaboration among robots, and combined human-machine operations. The ability of robots to work with humans could be particularly significant, given the wide range of discrimination tasks involved in food safety, quality control, and weed and pest removal. Robots will be guided by human supervisors with skills in agriculture and knowledge of robotic and agricultural systems.

Opportunities and Growth

According to the International Federation of Robotics, agricultural robots are likely to be the fastest-growing robotics sector by 2020. Different sectors of the agricultural market will respond differently. Large businesses with row crops are early adopters, since they have funds to invest and face shrinking margins. For these companies, there are huge benefits in reducing labor costs and instituting more precise farming methods. As picking, weed-killing, and pest-removal systems become more widely available, citrus orchards and difficult-to-pick crops are likely to be next. Robots capable of picking citrus, berries, and other delicate fruit in difficult locations are already starting to appear. There are applications in virtually every part of the agricultural sector.

Other uses will appear as robots become more common and less expensive. Robots can make a difference not only in harvesting, but also in the precise application of water and fertilizer. In areas where water is contentious, such as California and the Middle East, more efficient watering will make it possible to grow larger crops with less water, avoiding the creation of political and social crises.

In the developing world, opportunities are enormous but individual farmers have fewer resources. For this reason, smaller robots and robot clusters are likely to be more widely used, possibly with emerging Robot-as-a-Service (RaaS) operators providing labor on a per-usage or rental basis. Robotics will be able to save enormously on chemical costs and irrigation water, with significant economic impacts as well as environmental benefits.

Progress and Caution

As the use of agricultural robots continues to expand, they will take on increasingly complex tasks and replace a larger portion of agricultural labor, a critical component of global employment. In many countries, particularly in the developing world, this will create shifts in employment that will accelerate trends such as rural migration to cities and reduce the overall availability of labor-intensive jobs. More training will be needed by more people, which will affect education, socialization, and finance, particularly in countries with large populations.

There are many implications as AI and agricultural robots are deployed. New ideas are blossoming, startups are on the rise, and we can expect a wide range of consequences as  next generation agricultural robotics and AI continue to emerge.


Delphi Automotive Grabs NuTonomy; the Jostling Continues

In yet another acquisition move within the vehicular autonomy market, UK-based Delphi Automotive has purchased Boston-based NuTonomy to double its research staff and add compatible software systems for its Automated Mobility on-Demand (AMoD) solutions. Neither company is as yet a major player within the autonomous automobile sector. Delphi Automotive PLC is a General Motors spinoff, and NuTonomy is an MIT startup. Delphi's fully autonomous solution, called the Centralized Sensing Localization and Planning (CSLP) system, is planned for a 2019 launch; it is based on the BlackBerry OS and on software from Ottomatika, Delphi's other recent acquisition. NuTonomy, meanwhile, was founded in 2013 by long-time industry players Dr. Karl Iagnemma and Dr. Emilio Frazzoli, and is developing a full-stack autonomous drive software solution for the AMoD market. The acquisition will combine NuTonomy's hundred-member team with Delphi's hundred-member team, doubling research staff in an area where skills are in extreme demand.

While this acquisition could raise Delphi to a higher level of visibility among the major autonomous vehicle challengers, it also carries important implications for other industry players. Delphi and NuTonomy had been pursuing different collaborations: Delphi was in a partnership with BMW and Intel/Mobileye, while NuTonomy is allied with Nvidia, which some believe has a more mature autonomous software stack. Inevitably, this brings into the fray the ongoing competition between Intel and Nvidia over artificial intelligence processors and supporting software, particularly as the industry awaits the upcoming Nvidia Xavier SoC, expected to become available in 2018.

NuTonomy's employees will remain in Boston while Delphi remains based in the United Kingdom—though it also has offices in Boston. Both have been running experiments, and have a presence, in Singapore. The combination leaves Delphi with self-driving operations in Boston, Pittsburgh, Singapore, Santa Monica, and Silicon Valley. Combined with NuTonomy's efforts, Delphi will have 60 autonomous cars on the road by the end of 2017.

Another point of interest is that, as with Google's Waymo, Delphi Automotive intends to split off the vehicle autonomy business in 2018. It will create two new standalone companies: Delphi Technologies, which will handle the powertrain business and next-generation vehicle propulsion systems based on electrification; and Aptiv, which will include Electronics & Safety and Electrical/Electronic Architecture—including the “brains” to drive vehicles. Separating the autonomous vehicle units makes sense in the current context due to the special dynamics of this sector: smaller companies are being bought by larger companies to obtain resources and skills that are hard to amass in the current environment, and separate companies are more easily integrated into the competitive alliances that will be necessary to incorporate an increasing range of specialized products and expertise.

According to Delphi’s President and Chief Executive Officer, Kevin Clark, “The combination of the nuTonomy and Ottomatika AD teams, along with Delphi’s industry-leading portfolio of perception systems and smart vehicle architecture solutions, further enhances our competitive position as the industry’s most formidable provider of autonomous mobility solutions.”

In short, the autonomous vehicle sector is likely to remain volatile for some time, and the search for talent will continue until the next generation of AI engineers becomes available.

Baidu Adds xPerception to its AI/VR Stockpile

Leading Chinese Internet search provider Baidu is acquiring US startup xPerception, a provider of AI-based visual perception software and hardware for robotics and virtual reality (VR). The deal brings important talent for the company's moves into AI, including xPerception co-founders Dr. Bao Yingze and Dr. Chen Mingyu, both formerly key engineers at AR startup Magic Leap. The xPerception team will move to the US and Beijing offices of Baidu Research and continue developing xPerception's Simultaneous Localization and Mapping (SLAM) technology.

SLAM is critical to the visual perception used in a variety of AI and VR roles, including 3D vision, robotics, drones, and autonomous driving. The base of xPerception's technology is a 3D visual-inertial camera for mobile platforms, with a sophisticated SDK that enables pose tracking, low-latency sensor fusion, and object recognition. This permits self-localization, 3D structure reconstruction, and path planning in new environments. These technologies linking AI and VR are opening new opportunities, as discussed in the recent blog On the Intersection of AI and Augmented Reality.
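
As a rough illustration of what SLAM-style pose tracking involves, here is a toy two-dimensional predict/correct loop: dead-reckoning from odometry or IMU data, then correcting the accumulated drift against an observed landmark. The noise figures and the simple proportional correction are invented for illustration; a production visual-inertial pipeline uses far more sophisticated filtering:

```python
# Toy sketch of the idea behind SLAM pose tracking: dead-reckon from
# odometry/IMU, then correct drift against an observed landmark.
# Made-up 2D example -- not xPerception's actual pipeline.
import numpy as np

def predict(pose, v, omega, dt):
    """Motion model: integrate linear and angular velocity (drifts over time)."""
    x, y, theta = pose
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta + omega * dt])

def correct(pose, landmark_xy, measured_range, measured_bearing, gain=0.3):
    """Measurement update: nudge the pose toward what the landmark implies."""
    x, y, theta = pose
    dx, dy = landmark_xy[0] - x, landmark_xy[1] - y
    expected_range = np.hypot(dx, dy)
    expected_bearing = np.arctan2(dy, dx) - theta
    # Simple proportional correction (a real filter weights by covariance).
    x += gain * (measured_range - expected_range) * np.cos(theta + measured_bearing)
    y += gain * (measured_range - expected_range) * np.sin(theta + measured_bearing)
    theta += gain * (measured_bearing - expected_bearing)
    return np.array([x, y, theta])

pose = np.zeros(3)                # start at the origin, facing +x
landmark = np.array([5.0, 0.0])   # one known landmark straight ahead
for _ in range(10):               # drive forward, correcting each step
    pose = predict(pose, v=0.5, omega=0.0, dt=1.0)
    true_range = np.hypot(*(landmark - pose[:2]))
    noisy_range = true_range + np.random.normal(0, 0.05)
    pose = correct(pose, landmark, noisy_range, measured_bearing=0.0)
print(pose)   # stays near the true trajectory despite noisy measurements
```

Full SLAM additionally estimates the landmark positions themselves while localizing against them, which is what makes the problem simultaneous and computationally demanding.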

In addition to integration with Baidu's AI and autonomous driving programs, the xPerception acquisition provides high-demand skills and helps to allay concerns over US regulations and immigration policies. Chinese acquisitions have recently faced potential regulatory blockage, most notably claims that Alibaba Group's $1.2 billion bid for U.S. firm MoneyGram International poses national security risks; U.S. lawmakers are requesting a review by the Committee on Foreign Investment. Meanwhile, Chinese internet firm LeEco has terminated a $2 billion bid for U.S. electronics firm Vizio due to regulatory issues. Baidu's splitting of its AI research between Beijing and California provides assurance that changing US immigration policies will not overly affect research agendas, while providing access to US-based talent and skills networks.

Baidu has been active in AI for some years now, but finding talent in this area is difficult. High-profile chief scientist Andrew Ng left the company in March, making the addition of experienced AI/VR staff a priority. Baidu's commitment to this area is further demonstrated by the Chinese government's selection of Baidu in February to set up a national AI lab. Baidu Research currently maintains four analytics and AI labs: the Silicon Valley AI Lab, the Institute of Deep Learning, the Big Data Lab, and the Augmented Reality Lab. According to Ng, Baidu's AI group now includes about 1,300 people.

The search for AI talent is global, fueled by visions of integrating this rapidly developing technology with an increasing range of business and technology processes. Autonomous vehicles continue to be a driving force. Meanwhile, globalization issues may drive companies to hedge their bets, particularly in China and India. The last large acquisition, Intel's purchase of Mobileye, split research between Silicon Valley and Israel (see Car Wars: Intel Bags Mobileye).


Car Wars: Intel Bags Mobileye

Intel is acquiring autonomous driving company Mobileye in a deal valued at $15.3 billion, expected to close toward the end of this year. The acquisition of the Israeli firm, whose technology is used by 27 companies in the auto industry, raises a number of interesting issues in the self-driving vehicle technology race.

Intel has been pursuing autonomous vehicle technology for some time, but this initiative, one of the 10 largest acquisitions in the tech industry, brings it front and center. The key to Mobileye's autonomous solution lies in its silicon. Mobileye has developed its EyeQ® family of system-on-chip (SoC) devices, which support complex and computationally intense vision processing while maintaining low power consumption. Mobileye is currently developing its fifth-generation chip, the EyeQ5, to act as the visual central computer for the fully autonomous self-driving vehicles expected to appear in 2020. The EyeQ chips employ proprietary computing cores optimized for computer vision, signal processing, and machine learning tasks, including deep neural networks. These cores are designed specifically to address the needs of Advanced Driver Assistance Systems (ADAS).

Intel's traditional role is that of a chip developer providing the building blocks for technology, and it is moving forcefully in this direction, partly as a result of growing competition in embedded machine learning from the likes of Nvidia and Qualcomm, both of which are also moving into the autonomous vehicle area. Self-driving cars are the nexus of development in machine learning due to the huge profit expectations of the automobile, transportation, and logistics industries. The evolution of autonomous vehicles, particularly with deep learning capabilities in silicon, will also create additional pressure on artificial intelligence skills across all industry sectors, as well as creating an explosion in innovation and speeding the development of these systems.

Intel intends to form an autonomous driving unit combining its current Automated Driving Group (ADG) and Mobileye. The group will be headquartered in Israel and led by Mobileye co-founder Amnon Shashua, currently Mobileye's Chairman and CTO and a professor at Hebrew University.

According to the joint press release:

This acquisition will combine the best-in-class technologies from both companies, spanning connectivity, computer vision, data center, sensor fusion, high-performance computing, localization and mapping, machine learning and artificial intelligence. Together with partners and customers, Intel and Mobileye expect to deliver driving solutions that will transform the automotive industry.

The new organization will support both companies’ existing production programs and build upon the large number of relationships that Mobileye maintains with OEMs, automobile industry tier 1 suppliers, and semiconductor partners.

Intel’s interests in this deal are likely to be diverse. Among the potential benefits are:

  • Potentially four terabytes of data per hour to be processed, creating large-scale opportunities for Intel’s high-end Xeon processors and Mobileye’s latest generation of SoCs.
  • Moving to Israel, where Intel already has a significant presence, potentially isolates its research and testing from the competitive hotbed of Silicon Valley, shielding employees from poaching. It also avoids current immigration issues.
  • Additional competitive advantages within AI and embedded deep learning, which are the next generation targets of Intel’s silicon competition.

It is worth noting that this deal is a general boost to autonomous vehicles and will inevitably lead to greater concentration of resources in this field. Although relying upon a common supplier of autonomous systems makes sense economically, it also reduces competitive advantages.

The number of companies involved in this sector continues to grow as the implications stretch out through the entire transportation-related sector. We have covered a number of these systems in recent blogs here (Car Wars: DiDi Chuxing Roars into the Valley with 400 Million Users; Car Wars: Ford Adds Billion Dollar Investment Acquisition to its AI; All Things Need Autonomy: TomTom Grabs Autonomos; Uber Doubles Down on AI with Geometric Acquisition; Qualcomm Buys NXP for IoT and Cars). The net result will be to create a huge rush for talent in the machine learning space, as well as in all of the areas related to integration with automobile systems. This will increase the speed of evolution for embedded AI, which will filter rapidly into other areas of business and process enablement, though impeded by the limited availability of talent.

All Things Need Autonomy: TomTom Grabs Autonomos

Noted location and mapping service TomTom has acquired Autonomos, a Berlin-based autonomous driving startup. TomTom has been expanding its range of activities for several years, not only in the autonomous vehicle area, but also in a variety of other technologies related to geolocation. These have included navigation devices, smart watches and cameras for consumers. But the Autonomos acquisition shows that the company is becoming increasingly serious about the coming range of self-driving vehicles.

Autonomos is a company that provides R&D consultancy services for automated vehicle assistance systems. Its capabilities include a full demonstration-level autonomous driving software stack, 3D sensor technology, and digital image processing.

The company was established in 2012 as a spin-off from the German Federal Ministry of Education and Research (BMBF) research project “AutoNOMOS Labs” at the Freie Universität Berlin. Its prototype “MadeInGermany” vehicle has been officially testing autonomous driving on the streets of Berlin since 2011. The company participated in the 2,400 km automated driving challenge in Mexico in cooperation with its research partner, the Freie Universität Berlin; its vehicle drove about 2,250 km of highways and 150 km through cities, performing well under difficult road conditions.

The addition of Autonomos is expected to advance TomTom’s map-based products for autonomous driving. An in-house autonomous driving stack will enable TomTom to serve customers with products such as its HD Map and RoadDNA localization technology, as well as its navigation, traffic, and other cloud services.

“This is an important development for TomTom as it will help us to continue to strengthen our capabilities for the future of driving and expand our knowledge and expertise,” says Harold Goddijn, CEO and co-founder of TomTom in the press release. “With this deal we are further positioning ourselves as one of the leaders in autonomous driving”.

There are a number of interesting points to this announcement. It is yet another acquisition of an AI startup to obtain research and talent. It also demonstrates how autonomous driving and autonomous vehicles are becoming increasingly important to every company within the vehicle and transportation industries. We have seen the high-profile efforts of Uber, Tesla, Apple, and the major automobile companies to gain a foothold in this area.

As self-driving automobiles and transportation draw ever closer, it is clear that anyone who wishes to participate in the vehicle industry in the future will need access to this technology, and qualified staff capable of innovating in this area. The data sciences required by autonomous vehicles are of a different caliber from those required by internal analytic functions within a corporation; a wider variety of skills and an emphasis upon machine learning and deep learning techniques are absolutely critical. These skills are expected to become increasingly difficult to find outside of acquisitions of startups such as this one.

Another factor to note in this acquisition is that TomTom has been moving gradually beyond mapping and automobile applications into a variety of other product areas that involve knowledge of location, including health monitoring products. As digitization proceeds, it becomes increasingly clear that advanced skills acquired in one area translate easily into another. Autonomos itself has developed a smart stereo camera to aid its automotive techniques; this camera can be used for other purposes and can be linked to other initiatives in a geolocation portfolio. As TomTom continues to build out additional products based upon its mapping and geolocation capabilities, Autonomos is likely to play a significant role, not only through the products it has produced, but also through its 20 AI employees, who will now be available for products across TomTom’s universe.

Acquisitions like this point to the need for companies to be aware of this expanding environment. In addition to AI in analytics processes and in general business process automation, AI will continue to spread into unforeseeable new areas. Every company is seeking an innovation that will provide an ultimate advantage within its niche; the advent of a truly autonomous process, particularly for difficult control issues in physical goods movement, manufacturing, or transportation, is likely to become increasingly important.


Software Eats World: Autonomous Agents in Business Process Management (BPM)

Shhh! The robots are moving back home to software! (Not that they ever left.) Concepts forged in the IoT will become part of every other system. Digital components are easily connected and share a multitude of underlying principles, so concepts quickly move between unrelated disciplines, and all related technologies tend to converge.

While considering the IoT we have looked at autonomy, and what is required for devices such as automobiles and industrial robots to operate safely on their own, coordinated with other devices and working with humans (Autonomy INFOGRAPHIC, Challenges of Autonomy: The Video, Where the AI Rubber Hits the NHTSA Road: Letter to GM Describes State of Autonomy). We have reviewed the special challenges of autonomy, and how they are being solved to create efficient and effective systems. These capabilities are now destined to enrich other areas.

A key issue is how to apply this learning to business processes themselves. Autonomous robots and vehicles are extensions of digital processes. These processes are defined by software and engage numerous other systems toward the performance of a given end.

The development of an autonomous device requires layer upon layer of intelligent capability (see The Composite Nature of Embedded Cognition). Devices must be able to sense the activities surrounding them; they must have the ability to interact with their surroundings; and they must be able to provide a wide variety of actions that can be flexibly fitted to the accomplishment of a mission. All of these capabilities might equally be applied to purely software systems.

A cognitive approach does not simply glue artificial intelligence to existing processes with the assumption that AI will provide the required result. Instead, we will see multiple AI systems used in conjunction with each other to perfect and deliver software solutions. Each routine will access a range of sensor and processing capabilities which will offer autonomy at the process level. Autonomous agents—software systems capable of specific actions in a larger whole—will then perform their functions as needed to achieve a desired end result. This is in line with a growing understanding of the composite nature of artificial intelligence. It also demands new forms of orchestration and new ways of providing AI capability.

An autonomous agent business process management solution will be able to sense when processes are required, rather than responding to a fixed call within another software program. This means that processes will anticipate requirements and act early to create an efficient solution. They will act with a project manager's understanding of when specific data or tasks need to be completed. Autonomous agents will be able to interact with other programs and bring a catalog of analytic, machine learning, predictive, and sensor-driven capabilities. This range of functional autonomy will need brokerage, data sharing, and orchestration; a collaborative framework will be required to ensure that the components do not block each other and that the priority of specific tasks is respected.
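
A minimal sketch may make this concrete. The code below imagines an agent that senses state, anticipates that a process will be needed, and requests clearance from an orchestrator before acting; every name and interface here is hypothetical:

```python
# Illustrative sketch of an autonomous process agent: sense conditions,
# anticipate a needed process, and clear it with an orchestrator before acting.
# All names and interfaces are hypothetical.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    """Grants exclusive claims on tasks so agents do not block one another
    (a real broker would also weigh task priorities and share data)."""
    running: set = field(default_factory=set)

    def request(self, task: str) -> bool:
        if task in self.running:
            return False          # task already claimed by another agent
        self.running.add(task)
        return True

@dataclass
class ProcessAgent:
    name: str
    sense: Callable[[], dict]        # reads metrics, queues, sensor feeds
    trigger: Callable[[dict], bool]  # anticipates that a process is needed
    act: Callable[[dict], None]      # runs the business process itself

    def step(self, orchestrator: Orchestrator) -> None:
        state = self.sense()
        if self.trigger(state) and orchestrator.request(self.name):
            self.act(state)          # act early, before any fixed call arrives

# Example: an agent that pre-orders stock when a shortfall is predicted.
orch = Orchestrator()
restock = ProcessAgent(
    name="restock",
    sense=lambda: {"inventory": 12, "forecast_demand": 40},
    trigger=lambda s: s["inventory"] < s["forecast_demand"],
    act=lambda s: print(f"ordering {s['forecast_demand'] - s['inventory']} units"),
)
restock.step(orch)   # -> ordering 28 units
```

The key design point is that the trigger runs on sensed state rather than on an explicit call from another program, which is exactly the shift from fixed invocation to anticipation described above.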

With an autonomous agent-based process management solution, response in a complex environment will be much faster and more effective than with a fixed system. Similarly, the cognitive capabilities of such a composite system would likely create new possibilities in the overall management and furtherance of larger goals. It would become possible to orchestrate all business processes and micromanage them at an atomic level through the ability to immediately activate an autonomous response from coordinated process components.

The further development of digital business, AI, autonomy, and cloud computing all tend in the direction of componentized autonomous agents. However, the timeframe places this well in the future. We are now at the stage of integrating tiny amounts of AI into small and disparate processes. Robotics is merely at the edge of achieving true autonomy. And the orchestration and synchronization of vast, independent, coordinated autonomous systems is at the moment beyond our grasp.

However, it is important to understand that in a digital business environment, advances made in one field filter with little delay into all other sectors. As we develop cognition and autonomy for robotics and vehicles, these same processes become available to programs in recruitment, sales, finance, manufacturing, medicine, and everything else. We are moving into not only an artificial intelligence-driven world, but a composite artificial intelligence-driven world. The capability of developing layer upon layer of such cognition will create field effects that will ultimately change the nature of the combined process. Just as the human mind is entirely different from the neural mechanism of a single cell, the enormous multilayered possibilities of a galaxy of autonomous agents create a subtly new system whose capabilities cannot yet be adequately explored.

We are just at the beginning of this change, and the marketers are fierce in describing their products as the apex of this evolution. But we are nowhere near the ability to fully comprehend the requirements, capabilities, and consequences of such a cognitive software environment.

For business, taking a more complex view of AI in the enterprise is mandatory. The effects will require a shift in strategy. Software vendors will need to understand the subtle ways in which their programs will need to interact. This is a long-term movement, but preparing for it must begin now.

AI Cars in the Alphabet: Google Parent Company Creates Waymo

Google has just spun off its autonomous car project as a separate company called Waymo, operating under Alphabet. This move has been in progress for at least a year, and it represents the maturation of the company's interests in this technology. From an AI standpoint, it means that the new company will have to turn a profit. It also means that Waymo is likely to pursue automotive industry partnerships in its drive toward a completely autonomous vehicle, rather than attempting to develop all facets of the technology itself.

About a year ago, John Krafcik, an ex-Hyundai North America executive, was put in charge of the project, and he continues as CEO of Waymo. During his tenure, there was a shakeup of engineers and executives, several of whom went on to create AI startups such as Otto (since acquired by Uber), Nuro, and an as-yet-unnamed autonomous vehicle company from former project head Chris Urmson, who left in August.

As a preface to the move, Krafcik blogged:

Waymo may be a new company, but we’re building on advanced self-driving technology developed over many years at Google. On October 20, 2015, we completed the world’s first fully-self driven car ride. Steve Mahan rode alone in one of our prototype vehicles, cruising through Austin’s suburbs. Steve is legally blind, so our sensors and software were his chauffeur. His route reflected the way millions of people could use a self-driving car in everyday life: riding from a park to a doctor’s office and through typical neighborhoods.

Google’s autonomous vehicles have the experience of millions of driving miles under the hood, and this has given the company an advantage. But now, as competition grows fiercer, it is time to make the technology into a product and find a niche for it that fits Google’s needs and capabilities. Building cars is difficult, as other tech companies, such as Apple, have discovered. And Tesla has an early-mover advantage in pursuing a high-tech, high-visibility approach to every aspect of personal transportation.

The Google project had been gradually extending its partnerships in pursuit of autonomy through a range of strategic tie-ups, such as a partnership with Fiat Chrysler that has recently morphed into a ride-sharing service using semi-autonomous Pacifica vans, to be available by the end of 2017.

It is clear that autonomy is becoming more important as a distinct branch of AI, fueled by billions of research dollars and development by an increasing number of automobile and transportation companies, software developers, and niche-market startups. Autonomy has a set of unique problems, as we explored in our recent Autonomy Infographic. As a digital technology, the concepts developed can easily be stretched to new fields and combined with evolving capabilities in other areas. Transportation and logistics are obvious uses, but autonomous robots will bring whole new issues to the fore as we struggle to mold society to its autonomous robotic future.

Uber Doubles Down on AI with Geometric Acquisition

Ride-sharing company Uber has been moving into the world of self-driving cars, autonomous operations, and AI for a while. It is now acquiring Geometric Intelligence, a company focused on deep learning and machine intelligence, which will provide the core of the newly announced Uber AI Labs. The lab will be headed by Geometric Intelligence co-founders Gary Marcus and Zoubin Ghahramani, both well known for their work in this area, and will have 15 founding members.

Uber has a vision of autonomous transportation that could have revolutionary implications for the movement of people and things. Self-driving cars are the focal point of today's work on autonomy due to those implications and the tremendous financial opportunities. They will affect every business model in the transportation industry, from personal travel to freight delivery, including fueling, routing, and optimization. There is also an almost desperate search for top-level talent as major transportation companies from Tesla to Toyota, along with data companies, seek to crack the autonomy puzzle. Autonomy is difficult, but competition is accelerating its development.

Geometric’s four founders are Gary Marcus, Zoubin Ghahramani, Doug Bemis and Ken Stanley. According to the company web site, Gary Marcus is a scientist, bestselling author, and entrepreneur. His published works include The Algebraic Mind: Integrating Connectionism and Cognitive Science and The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought. He is also Professor of Psychology and Neural Science at NYU.

Zoubin Ghahramani is a world leader in the field of machine learning and Professor of Information Engineering at the University of Cambridge. He is known in particular for fundamental contributions to probabilistic modeling and Bayesian nonparametric approaches to machine learning systems, and to the development of approximate variational inference algorithms for scalable learning.

Doug Bemis has served as CTO for several other startups, including Syntracts LLC and Windward Mark Interactive. Doug also received a PhD from NYU in neurolinguistics, for work using magnetoencephalography to investigate the neural bases of semantic composition. Subsequently, he worked with Stanislas Dehaene at Neurospin in France.

Ken Stanley is an associate professor of computer science at the University of Central Florida. He is a leader in neuroevolution (combining neural networks with evolutionary techniques), where he helped invent prominent algorithms such as NEAT, CPPNs, HyperNEAT, and novelty search. His ideas have also reached a broader audience through the recent popular science book, Why Greatness Cannot Be Planned: The Myth of the Objective.

The Uber AI team seems well-balanced, and heavily weighted in Machine Learning through a variety of approaches. The stakes are high, and this bodes well for future advances in autonomy that will be of benefit across the AI and robotics areas. Although details of the Geometric approach are sketchy at this point, its model appears to focus upon combining a rules-based approach with machine intelligence inference. Such hybrid approaches could offer the best of all possible worlds.

According to the release from Uber’s Jeff Holden, “In spite of notable wins with machine learning in recent years, we are still very much in the early innings of machine intelligence. The formation of Uber AI Labs, to be directed by Geometric’s Founding CEO Gary Marcus, represents Uber’s commitment to advancing the state of the art, driven by our vision that moving people and things in the physical world can be radically faster, safer and accessible to all.”

Challenges of Autonomy: The Video

Autonomous systems are a key focal point in today’s cognitive systems research, particularly through high profile association with self-driving cars. But vehicles are only a part of the story. Autonomy and partial autonomy are also important to a growing array of industrial and consumer robots, drones, and new devices from the IoT.

Autonomy is a unique problem involving an array of technologies, and it has implications for technological growth, the economy, and human society. Autonomous systems need to operate safely in cooperation with people; they require extensive sensors, composite intelligence, and Fog Computing (device-level processing linked to cloud-level processing and coordination).

In the following videos, we look at aspects of autonomy. The video descriptions are drawn from the sources.

Intelligent Autonomous Systems (Distinctive Voices)

This talk describes the current research path towards intelligent, semi-autonomous systems, where both humans and automation tightly interact, and together, accomplish tasks such as searching for survivors of a hurricane using a team of UAVs with sensors with highly efficient interaction. It covers the current state of the art in:

1. Intelligent robotic (only) systems;

2. Modeling human decisions;

3. Semi-autonomous systems, with a focus on information exchange, and command and control.

By Mark Campbell, the S.C. Thomas Sze Director of the Sibley School of Mechanical and Aerospace Engineering at Cornell University. Oct 18, 2012

Autonomous Technology and the Greater Human Good – Winter Intelligence (Winter Intelligence Conference, Oxford)

Next generation technologies will make at least some of their decisions autonomously. Self-driving vehicles, rapid financial transactions, military drones, and many other applications will drive the creation of autonomous systems. If implemented well, they have the potential to create enormous wealth and productivity. But if given goals that are too simplistic, autonomous systems can be dangerous.

By Stephen Omohundro, a scientist researching Hamiltonian physics, dynamical systems, programming languages, machine learning, machine vision, and the social implications of artificial intelligence. Apr 14, 2013

Developing Trust in Autonomous Robots (Carnegie Mellon University Robotics Institute)

By Michael Wagner, Senior NREC Commercialization Specialist, RI, NREC. Feb 20, 2015

Intelligent Systems – Computers Learn to Understand the World (MaxPlanckSociety)

Autonomous systems such as robots or self-driving cars are expected to play an increasingly important role in future. Initially, however, they must learn how to safely negotiate their environments. To this end, Michael Black teaches them to recognize people: a technology that will enable even more amazing applications.

By Michael Black, Max Planck Institute. Aug 8, 2016

The Future of Autonomous Systems: a Computational Perspective (Stanford ICMEStudio)

Mobile autonomous systems are poised to transform several sectors, from transportation and logistics all the way to space exploration. This talk briefly reviews major computational challenges in endowing autonomous systems with fast and reliable decision-making capabilities, and discusses recent advances made at the Stanford Autonomous Systems Laboratory.

By Marco Pavone, Assistant Professor of Aeronautics and Astronautics, Stanford. May 27, 2016