Delphi Automotive Grabs NuTonomy; the Jostling Continues

In yet another acquisition within the vehicular autonomy market, UK-based Delphi Automotive has purchased Boston-based NuTonomy, doubling its research staff and adding compatible software systems for its Automated Mobility on-Demand (AMoD) solutions. Neither company is as yet a major player within the autonomous automobile sector. Delphi Automotive PLC is a General Motors spinoff, and NuTonomy is an MIT spinoff. Delphi’s fully autonomous solution, called the Centralized Sensing Localization and Planning (CSLP) system, is planned for a 2019 launch; it is based on BlackBerry’s QNX operating system and software from Ottomatika, another recent Delphi acquisition. Meanwhile, NuTonomy was founded in 2013 by veteran autonomy researchers Dr. Karl Iagnemma and Dr. Emilio Frazzoli, and is developing a full-stack autonomous driving software solution for the AMoD market. The acquisition combines NuTonomy’s hundred-member team with Delphi’s own hundred-member team, doubling research staff in an area where skills are in extreme demand.

While this acquisition could raise Delphi to a higher level of visibility among major autonomous vehicle challengers, it also carries important implications for the other industry players. Delphi and NuTonomy had been pursuing different collaborations: Delphi has partnered with BMW and Intel/Mobileye, while NuTonomy is allied with Nvidia, which some believe has a more mature autonomous software stack. Inevitably, this brings into the fray the ongoing competition between Intel and Nvidia over artificial intelligence processors and supporting software, particularly as the industry awaits the Nvidia Xavier SoC, expected to become available in 2018.

NuTonomy’s employees will remain in Boston while Delphi remains headquartered in the United Kingdom, though it also has offices in Boston. Both companies have been running experiments, and have a presence, in Singapore. The combination leaves Delphi with self-driving operations in Boston, Pittsburgh, Singapore, Santa Monica, and Silicon Valley. Counting NuTonomy’s fleet, Delphi will have 60 autonomous cars on the road by the end of 2017.

Another point of interest is that, as with Google’s Waymo, Delphi Automotive intends to split off the vehicle autonomy business in 2018. It will create two new standalone companies: Delphi Technologies, which will handle the powertrain business and next-generation vehicle propulsion systems based on electrification; and Aptiv, which will include Electronics & Safety and Electrical/Electronic Architecture, including the “brains” to drive vehicles. Separating the autonomous vehicle units makes sense given the special dynamics of this sector: smaller companies are being bought by larger companies to obtain resources and skills that are hard to amass in the current environment, and standalone companies are more easily integrated into the competitive alliances that will be needed to incorporate an increasing range of specialized products and expertise.

According to Delphi’s President and Chief Executive Officer, Kevin Clark, “The combination of the nuTonomy and Ottomatika AD teams, along with Delphi’s industry-leading portfolio of perception systems and smart vehicle architecture solutions, further enhances our competitive position as the industry’s most formidable provider of autonomous mobility solutions.”

In short, the autonomous vehicle sector is likely to remain volatile for some time, and the search for talent will continue until the next generation of AI engineers becomes available.


Car Wars: Intel Bags Mobileye

Intel is acquiring autonomous driving company Mobileye in a deal valued at $15.3 billion and expected to close toward the end of this year. The acquisition of the Israeli firm, whose technology is used by 27 companies in the auto industry, raises a number of interesting issues in the self-driving vehicle technology race.

Intel has been pursuing autonomous vehicle technology for some time, but this initiative, one of the 10 largest acquisitions in the tech industry, brings it front and center. The key to Mobileye’s autonomous solution lies in its silicon. Mobileye has developed its EyeQ® family of system-on-chip (SoC) devices, which support complex and computationally intense vision processing while maintaining low power consumption. Mobileye is currently developing its fifth-generation chip, the EyeQ5, to act as the visual central computer for the fully autonomous self-driving vehicles expected to appear in 2020. The EyeQ chips employ proprietary computing cores optimized for computer vision, signal processing, and machine learning tasks, including deep neural networks. These cores are designed specifically to address the needs of Advanced Driver Assistance Systems (ADAS).
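
To make that workload concrete, the following is a minimal, purely illustrative sketch of the kind of operation such vision cores accelerate in hardware: a convolution pass that extracts edge features from a frame, a classic first stage for detecting lane markings and obstacles. This is not Mobileye’s pipeline; it uses a standard Sobel filter and synthetic data as stand-ins.

```python
# Illustrative only: a toy convolution pass of the sort ADAS vision
# silicon accelerates. This is NOT Mobileye's pipeline; the kernel and
# data are generic stand-ins.
import numpy as np

def conv2d(image, kernel):
    """Naive sliding-window convolution (deep-learning style, valid padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernels approximate horizontal and vertical intensity gradients.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

# Synthetic frame: a dark road surface with one bright lane-marking stripe.
frame = np.zeros((64, 64))
frame[:, 30:34] = 1.0

gx = conv2d(frame, sobel_x)
gy = conv2d(frame, sobel_y)
magnitude = np.hypot(gx, gy)  # per-pixel gradient strength
cols = np.unique(np.argwhere(magnitude > 0.9 * magnitude.max())[:, 1])
print("strongest edge responses at columns:", cols)  # edges of the stripe
```

Production ADAS chips run far deeper networks than this, but the dataflow, many small multiply-accumulate windows slid across each frame, is exactly what dedicated vision cores are built to accelerate.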

Intel’s traditional role has been to supply the silicon building blocks of technology, and this deal moves it forcefully in that direction, partly in response to growing competition in embedded machine learning from the likes of Nvidia and Qualcomm, both of which are also moving into the autonomous vehicle area. Self-driving cars are the nexus of development in machine learning due to the huge profit expectations of the automobile, transportation, and logistics industries. The evolution of autonomous vehicles, particularly with deep learning capabilities in silicon, will also create additional pressure on artificial intelligence skills across all industry sectors, while spurring an explosion in innovation and speeding development of these systems.

Intel intends to form an autonomous driving unit combining its current Automated Driving Group (ADG) and Mobileye. The group will be headquartered in Israel and led by Mobileye co-founder Amnon Shashua, currently the company’s Chairman and CTO and a professor at the Hebrew University of Jerusalem.

According to the joint press release:

This acquisition will combine the best-in-class technologies from both companies, spanning connectivity, computer vision, data center, sensor fusion, high-performance computing, localization and mapping, machine learning and artificial intelligence. Together with partners and customers, Intel and Mobileye expect to deliver driving solutions that will transform the automotive industry.

The new organization will support both companies’ existing production programs and build upon the large number of relationships that Mobileye maintains with OEMs, automobile industry tier-1 suppliers, and semiconductor partners.

Intel’s interests in this deal are likely to be diverse. Among the potential benefits are:

  • Potentially four terabytes of data per hour to be processed, creating large-scale opportunities for Intel’s high-end Xeon processors and Mobileye’s latest generation of SoCs (see the quick arithmetic after this list).
  • Moving to Israel, where Intel already has a significant presence, potentially isolates its research and testing from the competitive hotbed of Silicon Valley, shielding employees from poaching. It also avoids current immigration issues.
  • Additional competitive advantages within AI and embedded deep learning, which are the next generation targets of Intel’s silicon competition.
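
As a rough check on the scale of the first point: four terabytes per hour is a sustained rate of about 1.1 GB/s, roughly 9 Gbit/s. A quick sketch of the arithmetic:

```python
# Back-of-the-envelope: sustained throughput implied by 4 TB/hour.
TB = 1e12                       # decimal terabyte, in bytes
bytes_per_second = 4 * TB / 3600
print(f"{bytes_per_second / 1e9:.2f} GB/s")        # -> 1.11 GB/s
print(f"{bytes_per_second * 8 / 1e9:.2f} Gbit/s")  # -> 8.89 Gbit/s
```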

It is worth noting that this deal is a general boost to autonomous vehicles, one that will inevitably lead to greater concentration of resources in this field. Although relying upon a common supplier of autonomous systems makes economic sense, it also reduces the competitive differentiation available to the automakers that share that supplier.

The number of companies involved continues to grow as the implications stretch out through the entire transportation-related sector. We have covered a number of these systems in recent blogs here (Car Wars: DiDi Chuxing Roars into the Valley with 400 Million Users; Car Wars: Ford Adds Billion Dollar Investment Acquisition to its AI Arsenal; All Things Need Autonomy: TomTom Grabs Autonomos; Uber Doubles Down on AI with Geometric Acquisition; Qualcomm Buys NXP for IoT and Cars). The net result will be to create a huge rush for talent in the machine learning space, as well as in all of the areas related to integration with automobile systems. This will increase the speed of evolution for embedded AI, which will filter rapidly into other areas of business and process enablement, though impeded by the limited availability of talent.


Car Wars: DiDi Chuxing Roars into the Valley with 400 Million Users

Chinese ride-hailing and transportation company DiDi Chuxing has just opened its own AI vehicle lab in Silicon Valley. DiDi Labs in Mountain View, CA, will focus on AI-based security and intelligent driving technologies, hoping to grab talent in the region as the battle for AI skills continues (see Car Wars: Ford Adds Billion Dollar Investment Acquisition to its AI Arsenal).

According to Cheng Wei, the company’s founder and CEO, “As we strive to bring better services to broader communities, DiDi’s international vision now extends to building the best-in-class international research network, advancing the global transportation revolution by leveraging innovative resources. The launch of DiDi Labs is a landmark in creating this global nexus of innovation.”

Hidden from the spotlight, DiDi has accumulated a huge amount of experience in China, which provides a substantial data source for its AI efforts. DiDi acquired Uber China in August 2016. Unlike most rivals, the company’s size and reach are truly impressive: it offers a full range of technology-based services to nearly 400 million users across more than 400 Chinese cities. These include taxi hailing, private car hailing, Hitch (social ride-sharing), DiDi Chauffeur, DiDi Bus, DiDi Minibus, DiDi Test Drive, DiDi Car Rental, and DiDi Enterprise Solutions. As many as 20 million rides were completed on DiDi’s platform daily in October 2016, making DiDi the world’s second-largest online transaction platform, next only to China’s Alibaba-owned shopping website, Taobao.

It is also important to note that DiDi is estimated to be worth about US$35 billion, and its investors reportedly include all three of China’s Internet giants (Alibaba, Tencent, and Baidu) as well as Apple. The current initiative is centered on security issues, but its location and stated intent make the acquisition of Valley AI talent a clear objective; Uber’s hiring away of much of Carnegie Mellon’s robotics lab in 2015 made every company in the automotive intelligence sector nervous about talent. DiDi has recently faced a range of regulatory issues in its Chinese market, and the current move may deflect some of that attention while also playing into the Chinese government’s push to develop innovation.

DiDi Labs will be led by Dr. Fengmin Gong, Vice President of the DiDi Research Institute, joined by a number of prominent researchers, including top automobile security expert Charlie Miller, known as one of the researchers who famously hacked a Jeep Cherokee in 2015. Current projects span cloud-based security, deep learning, human-machine interaction, computer vision and imaging, and intelligent driving technologies.

DiDi has been involved with autonomous vehicles for some time, and this move mainly surfaces that involvement. DiDi previously established the DiDi Research Institute to focus on AI technologies including machine learning and computer vision. It also has ongoing research in big data analytics based on the 70TB of data, 9 billion routing requests, and 13 billion location points its platforms provide.

In founding the new lab, the company also launched the DiDi-Udacity Self-Driving Car Challenge, an open-source self-driving competition in which participating teams are invited to create an Automated Safety and Awareness Processing Stack (ASAPS) to improve general safety metrics for human and intelligent driving scenarios based on real data.

According to DiDi CTO Bob Zhang, “In the next decade, DiDi will play a leading role in innovation at three layers: optimization of transportation infrastructure, introduction of new energy vehicles and intelligent driving systems, and a shift in human-automotive relationship from ownership to shared access.”


Car Wars: Ford Adds Billion Dollar Investment Acquisition to its AI Arsenal

Ford has just invested $1 billion in a startup called Argo AI, which will operate as a subsidiary focusing upon autonomous vehicles and AI. Given the recent wave of mergers and acquisitions in the AI area, this should come as no surprise. Autonomous vehicles represent the cutting edge of a number of AI and machine learning technologies. We have considered the problem of autonomy in several blogs (Autonomy INFOGRAPHIC, Challenges of Autonomy: The Video, Autonomous Social Robots: The Video). In the automobile and transportation sector, investment in autonomous technologies is particularly aggressive and well-funded.

Argo AI’s brief, as stated on the startup’s website:

We founded Argo AI to tackle one of the most challenging applications in computer science, robotics and artificial intelligence — self-driving vehicles. While technology exists today to augment the human driver and automate the driving task up to a certain level of capability, replacing the human driver remains an extremely complex challenge. Advances in artificial intelligence, machine learning and computer vision are required to solve it. These technologies will eventually lead us to a new generation of the automobile — a vehicle that is connected, intelligent, and able to safely operate itself alone or as part of a shared fleet. The potential of these shared fleets of self-driving vehicles will be one of the most transformative advancements in this century.

It is a transformative vision that fits well with Ford’s recent moves, as well as with initiatives throughout the transportation industries.

In recent months we have seen Google spin off Waymo (its autonomous vehicle unit), GM tie up with IBM Watson, TomTom grab Autonomos, and Uber acquire Geometric. In 2015, Uber hired away much of the robotics department of Carnegie Mellon. Noting that Argo AI draws upon personnel from Uber and Waymo, it is clear that the battle for AI talent, particularly in deep learning, is in full swing and is likely to have a continuing impact on how AI progresses, even as skills become more widely available in the next few years.

Ford’s investment does have an interesting twist. In attempting to navigate the skills shortage, it has created Argo as a subsidiary with majority Ford ownership but with the possibility of equity sharing for AI employees who come aboard. This could be attractive to less well-known but proficient practitioners wishing to develop advanced skills in this area. The fact that Uber was able to hire away Carnegie Mellon’s team sent shock waves through the industry and caused many to reevaluate employment policies and skills acquisition. The numerous startups in this area demonstrate that offering equity is a good place to start.

Another issue in automotive AI acquisitions is the growing realization that autonomy will change the industry so profoundly that companies will need to completely re-tool their business models in order to survive. Ford has been making aggressive moves in this direction through other acquisitions.

In the company’s own words:

Ford invested in Velodyne, the Silicon Valley-based leader in LiDAR (Light Detection and Ranging) sensors, to move quickly towards mass production of a more affordable, automotive-grade LiDAR sensor… We’re acquiring SAIPS, a machine learning and artificial intelligence start-up based in Israel, which will play a key role in image and video processing, object detection, signal processing and deep learning capabilities to help autonomous vehicles learn and adapt to their surroundings.
We’re forming an exclusive licensing agreement with Nirenberg Neuroscience, founded by neuroscientist Dr. Sheila Nirenberg, who cracked the neural code that the eye uses to transmit visual information to the brain…
And we’re investing in Civil Maps, helping us develop 3D, high-resolution maps of our autonomous vehicles’ surroundings.
These four new partnerships build on a recent Ford investment in Pivotal, which is helping accelerate the software needed to support autonomous vehicles.
Plus, we’re working with a long list of universities around the world, including Stanford University, MIT, the University of Michigan and Aachen University in Germany.

That was from August 2016. In the same announcement, the company promised to have fully autonomous vehicles in commercial operation for a ride-sharing service beginning in 2021.

As we have pointed out before, Ford’s efforts are by no means unique. Every company remotely related to transportation is frantically trying to move in a similar direction, drawing from a very limited pool of talent having both the skills and the practical experience to spin up autonomy in its latest garb at the speed companies need to stay in this game.

Let the Car Wars begin! Seriously: this competition will vastly accelerate development and adoption of advanced AI and analytics technologies across every facet of business.




All Things Need Autonomy: TomTom Grabs Autonomos

Noted location and mapping service TomTom has acquired Autonomos, a Berlin-based autonomous driving startup. TomTom has been expanding its range of activities for several years, not only in the autonomous vehicle area, but also in a variety of other technologies related to geolocation. These have included navigation devices, smart watches and cameras for consumers. But the Autonomos acquisition shows that the company is becoming increasingly serious about the coming range of self-driving vehicles.

Autonomos is a company that provides R&D consultancy services for automated vehicle assistance systems. Its capabilities include a full demonstration-level autonomous driving software stack, 3D sensor technology, and digital image processing.

The company was established in 2012 as a spin-off from “AutoNOMOS Labs,” a research project at the Freie Universität Berlin funded by the German Federal Ministry of Education and Research (BMBF). Its prototype “MadeInGermany” vehicle has been officially testing autonomous driving on the streets of Berlin since 2011. The company also participated in a 2,400 km automated driving challenge in Mexico in cooperation with its research partner, the Freie Universität Berlin; its vehicle covered about 2,250 km of highway and 150 km of city streets, performing well under difficult road conditions.

The addition of Autonomos is expected to advance TomTom’s map-based products for autonomous driving. An in-house autonomous driving stack will enable TomTom to serve customers with products such as its HD map and RoadDNA localization technology, as well as its navigation, traffic, and other cloud services.
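
RoadDNA’s internals are proprietary, but the general pattern of map-based localization is easy to illustrate: correlate what the vehicle currently senses against a stored signature of the roadside to recover position along the road. The toy sketch below uses synthetic 1D data and hypothetical names; it is not TomTom’s algorithm.

```python
# Toy map-matching localization: slide the vehicle's observed signature
# along a stored roadside signature and pick the best match. Synthetic
# data and hypothetical names; NOT TomTom's RoadDNA algorithm.
import numpy as np

rng = np.random.default_rng(42)

# "Map": roadside signature sampled every meter along 1 km of road.
road_signature = rng.normal(size=1000)

# "Sensors": a noisy 50 m window observed at an unknown position.
true_position = 420
observation = road_signature[true_position:true_position + 50]
observation = observation + rng.normal(scale=0.1, size=50)

def localize(map_sig, obs):
    """Return the map offset with the highest normalized correlation."""
    n = len(obs)
    scores = [
        np.dot(map_sig[i:i + n], obs)
        / (np.linalg.norm(map_sig[i:i + n]) * np.linalg.norm(obs))
        for i in range(len(map_sig) - n + 1)
    ]
    return int(np.argmax(scores))

print("estimated position (m):", localize(road_signature, observation))  # ~420
```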

“This is an important development for TomTom as it will help us to continue to strengthen our capabilities for the future of driving and expand our knowledge and expertise,” says Harold Goddijn, CEO and co-founder of TomTom, in the press release. “With this deal we are further positioning ourselves as one of the leaders in autonomous driving.”

There are a number of interesting points to this announcement. It is yet another acquisition of an AI startup to obtain research and talent. It also demonstrates how autonomous driving, and autonomous vehicles, are becoming increasingly important to every company within the vehicle and transportation industries. We have seen the high-profile efforts of Uber, Tesla, Apple, and the major automobile companies to gain a foothold in this area.

As self-driving automobiles and transportation draw ever closer, it is clear that anyone who wishes to participate in the vehicle industry in the future will need access to this technology, along with qualified staff capable of innovating in this area. The data sciences required by autonomous vehicles are of a different caliber from those required by internal analytic functions within a corporation; a wider variety of skills and an emphasis upon machine learning and deep learning techniques are absolutely critical. These skills are expected to become increasingly difficult to find outside of acquisitions of startups such as this one.

Another factor to note in this acquisition is that TomTom has been moving gradually beyond mapping and automobile applications into a variety of other product areas that involve knowledge of location, including health monitoring products. As digitization proceeds, it becomes increasingly clear that advanced skills acquired in one area translate easily into another. Autonomos itself has developed a smart stereo camera to aid its automotive techniques; this camera can be used for other purposes and can be linked to other initiatives in a geolocation portfolio. As TomTom continues to build out additional products based upon its mapping and geolocation capabilities, Autonomos is likely to play a significant role, not only through the products it has produced, but also through its 20 AI employees, who will now be available for products across TomTom’s universe.

Acquisitions like this one point to the need for companies to be aware of this expanding environment. In addition to AI in analytics processes and in general business process automation, AI will continue to spread into unforeseeable new areas. Every company is seeking an innovation that will provide an ultimate advantage within its niche; the advent of a truly autonomous process, particularly one addressing difficult control issues in physical goods movement, manufacturing, or transportation, is likely to become increasingly important.




AI Cars in the Alphabet: Google Parent Company Creates Waymo

Google has just spun off its autonomous car project as a separate company called Waymo, operating under Alphabet. This move has been in progress for at least a year, and it represents the maturation of the company’s interest in this technology. From an AI standpoint, it means that the new company will have to turn a profit. It also means that Waymo is likely to pursue automotive industry partnerships in its drive toward a completely autonomous vehicle rather than attempting to develop all facets of the technology itself.

About a year ago, John Krafcik, an ex-Hyundai North America executive, was put in charge of the project, and he continues as CEO of Waymo. During his tenure, there was a shakeup of engineers and executives, several of whom went on to create AI startups such as Otto (since acquired by Uber), Nuro, and an as-yet-unnamed autonomous vehicle company from former project head Chris Urmson, who left in August.

As a preface to the move, Krafcik blogged:

Waymo may be a new company, but we’re building on advanced self-driving technology developed over many years at Google. On October 20, 2015, we completed the world’s first fully self-driven car ride. Steve Mahan rode alone in one of our prototype vehicles, cruising through Austin’s suburbs. Steve is legally blind, so our sensors and software were his chauffeur. His route reflected the way millions of people could use a self-driving car in everyday life: riding from a park to a doctor’s office and through typical neighborhoods.

Google’s autonomous vehicles have the experience of millions of driving miles under the hood, and this has given the company an advantage. But now, as competition grows fiercer, it is time to turn the technology into a product and find a niche for it that fits Google’s needs and capabilities. Building cars is difficult, as other tech companies, such as Apple, have discovered. And Tesla has an early-mover advantage in pursuing a high-tech, high-visibility approach to every aspect of personal transportation.

The Google project had been gradually extending its partnerships in pursuit of autonomy through a range of strategic tie-ups, such as a partnership with Fiat Chrysler that has recently morphed into a ride-sharing service using semi-autonomous Pacifica minivans, to be available by the end of 2017.

It is clear that autonomy is becoming more important as a distinct branch of AI, fueled by billions of research dollars and development by an increasing number of automobile and transportation companies, software developers, and niche-market startups. Autonomy poses a set of unique problems, as we explored in our recent Autonomy Infographic. As a digital technology, the concepts developed can easily be stretched to new fields and combined with evolving capabilities in other areas. Transportation and logistics are the obvious uses, but autonomous robots will bring whole new issues to the fore as we struggle to mold society to its autonomous robotic future.


Uber Doubles Down on AI with Geometric Acquisition

Ride-sharing company Uber has been moving into the world of self-driving cars, autonomous operations, and AI for a while. It is now acquiring Geometric Intelligence, a company focusing upon deep learning and machine intelligence, which will provide the core for the newly announced Uber AI Labs. The lab will be headed by Geometric Intelligence co-founders Gary Marcus and Zoubin Ghahramani, both well known for their work in this area, and will have 15 founding members.

Uber has a vision of autonomous transportation that could have revolutionary implications for the movement of people and things. Self-driving cars are the focal point of today’s work on autonomy because of those implications and the tremendous financial opportunities involved. They will affect every business model in the transportation industry, from personal travel to freight delivery, including fueling, routing, and optimization. There is also an almost desperate search for top-level talent as major transportation companies from Tesla to Toyota, along with data companies, seek to crack the autonomy puzzle. Autonomy is difficult, but competition is accelerating its development.

Geometric’s four founders are Gary Marcus, Zoubin Ghahramani, Doug Bemis and Ken Stanley. According to the company web site, Gary Marcus is a scientist, bestselling author, and entrepreneur. His published works include The Algebraic Mind: Integrating Connectionism and Cognitive Science and The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought. He is also Professor of Psychology and Neural Science at NYU.

Zoubin Ghahramani is a world leader in the field of machine learning and Professor of Information Engineering at the University of Cambridge. He is known in particular for fundamental contributions to probabilistic modeling and Bayesian nonparametric approaches to machine learning systems, and to the development of approximate variational inference algorithms for scalable learning.

Doug Bemis has served as CTO for several other startups, including Syntracts LLC and Windward Mark Interactive. Doug also received a PhD from NYU in neurolinguistics, for work using magnetoencephalography to investigate the neural bases of semantic composition. Subsequently, he worked with Stanislas Dehaene at Neurospin in France.

Ken Stanley is an associate professor of computer science at the University of Central Florida. He is a leader in neuroevolution (combining neural networks with evolutionary techniques), where he helped invent prominent algorithms such as NEAT, CPPNs, HyperNEAT, and novelty search. His ideas have also reached a broader audience through the recent popular science book, Why Greatness Cannot Be Planned: The Myth of the Objective.

The Uber AI team seems well balanced, and heavily weighted toward machine learning through a variety of approaches. The stakes are high, and this bodes well for future advances in autonomy that will benefit the AI and robotics fields broadly. Although details of the Geometric approach are sketchy at this point, its model appears to focus upon combining a rules-based approach with machine intelligence inference. Such hybrid approaches could offer the best of both worlds.
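
Since Geometric has published little, the following is only a generic sketch of what such a hybrid can look like: hard, rule-based safety constraints that can veto the output of a learned model. All names, numbers, and the decision logic are hypothetical illustrations, not Geometric’s or Uber’s design.

```python
# Generic hybrid decision layer: rules first, learned inference second.
# Hypothetical illustration only; not Geometric's or Uber's design.
from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_distance_m: float  # nearest obstacle ahead
    speed_mps: float            # current vehicle speed
    model_brake_prob: float     # learned model's braking probability

def decide(p: Perception) -> str:
    # Rule layer: a hard physical constraint that always wins.
    stopping_distance = p.speed_mps ** 2 / (2 * 6.0)  # assume ~6 m/s^2 braking
    if p.obstacle_distance_m < stopping_distance:
        return "BRAKE (rule: obstacle inside stopping distance)"
    # Learned layer: otherwise defer to the model's inference.
    if p.model_brake_prob > 0.5:
        return "BRAKE (model inference)"
    return "CRUISE"

print(decide(Perception(obstacle_distance_m=15, speed_mps=20, model_brake_prob=0.1)))
print(decide(Perception(obstacle_distance_m=80, speed_mps=20, model_brake_prob=0.7)))
print(decide(Perception(obstacle_distance_m=80, speed_mps=20, model_brake_prob=0.2)))
```

The appeal of such hybrids is that the rule layer remains auditable even when the learned component is opaque.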

According to the release from Uber’s Jeff Holden, “In spite of notable wins with machine learning in recent years, we are still very much in the early innings of machine intelligence. The formation of Uber AI Labs, to be directed by Geometric’s Founding CEO Gary Marcus, represents Uber’s commitment to advancing the state of the art, driven by our vision that moving people and things in the physical world can be radically faster, safer and accessible to all.”


Where the AI Rubber Hits the NHTSA Road: Letter to GM Describes State of Autonomy

The U.S. National Highway Traffic Safety Administration (NHTSA) has released a letter, in response to a GM query regarding the use of warning lights in self-driving vehicles, that provides interesting details on the state of the GM autonomous vehicle program and highlights some of the challenges in this critical area of AI. Autonomous vehicles are almost literally “where the rubber hits the road” for recent developments in AI and machine learning, so these details point to the larger issues confronting all autonomous systems, and the regulatory issues they expose.

Following are key points of the letter:

You state that GM is developing a new adaptive cruise control system with lane following (which GM has referred to as Super Cruise) that controls steering, braking, and acceleration in certain freeway environments. When Super Cruise is in use, the driver must always remain attentive to the road, supervise Super Cruise’s performance, and be ready to steer and brake at all times. In some situations, Super Cruise will alert the driver to resume steering, for example, when the system detects a limit or fault. If the driver is unable or unwilling to take control of the wheel (if, for example, the driver is incapacitated or unresponsive), Super Cruise may determine that the safest thing to do is to bring the vehicle slowly to a stop in or near the roadway, and the vehicle’s brakes will hold the vehicle until overridden by the driver.

You indicate that GM plans to develop Super Cruise so that, in this situation, once Super Cruise has brought the vehicle to a stop, the vehicle’s automated system will activate the vehicle’s hazard lights. You state that you believe that this automatic activation of the hazard lights complies with the requirements of FMVSS No. 108 for several reasons….

GM states that in the event that a human driver fails to respond to Super Cruise’s request that the human retake control of the vehicle, and Super Cruise consequently determines that the safest thing to do is to bring the vehicle slowly to a stop in or near the roadway, Super Cruise-equipped vehicles will activate the vehicle’s hazard lights automatically once the vehicle is stopped….

We note that GM indicates that when the driver is unable or unwilling to take control of the vehicle the system will bring the vehicle to a stop in or near the roadway. A vehicle system that stops a vehicle directly in a roadway might, depending on the circumstances, be considered to contain a safety-related defect–i.e., it may present an unreasonable risk of an accident occurring or of death and injury in an accident. Federal law requires the recall of a vehicle that contains a safety-related defect. We urge GM to fully consider the likely operation of the system it is contemplating and ensure that it will not present such a risk.

This letter addresses concerns that autonomous vehicles are not yet ready for unsupervised operation, as indicated in recent incidents such as the June Tesla Autopilot crash.

While the description of GM’s Super Cruise system is illuminating, the letter also draws attention to the innumerable points that need to be considered as autonomous systems become a part of everyday reality.
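
The fallback behavior the letter describes (alert the driver; if the driver never retakes control, bring the vehicle slowly to a stop, hold the brakes, and activate the hazard lights) is essentially a small state machine. A minimal sketch of that sequence follows; the state names, transitions, and numbers are illustrative choices, not GM’s implementation.

```python
# Minimal state machine mirroring the fallback sequence in the NHTSA
# letter. States and numbers are illustrative, not GM's implementation.
from enum import Enum, auto

class Mode(Enum):
    SUPERVISED_CRUISE = auto()   # driver attentive, system steering
    DRIVER_ALERT = auto()        # system requests the driver retake control
    CONTROLLED_STOP = auto()     # driver unresponsive; slow to a stop
    STOPPED_HAZARDS_ON = auto()  # brakes hold; hazard lights active

def step(mode: Mode, fault: bool, driver_responded: bool, speed_mps: float) -> Mode:
    if mode is Mode.SUPERVISED_CRUISE and fault:
        return Mode.DRIVER_ALERT
    if mode is Mode.DRIVER_ALERT:
        return Mode.SUPERVISED_CRUISE if driver_responded else Mode.CONTROLLED_STOP
    if mode is Mode.CONTROLLED_STOP and speed_mps == 0.0:
        return Mode.STOPPED_HAZARDS_ON
    return mode

# Simulate a fault with an unresponsive driver.
mode, speed = Mode.SUPERVISED_CRUISE, 25.0
while mode is not Mode.STOPPED_HAZARDS_ON:
    if mode is Mode.CONTROLLED_STOP:
        speed = max(0.0, speed - 5.0)  # bring the vehicle slowly to a stop
    mode = step(mode, fault=True, driver_responded=False, speed_mps=speed)
    print(f"{mode.name:<20} speed={speed:4.1f} m/s")
```

Whether that terminal state (stopped in or near the roadway with hazards on) is safe enough is exactly the question NHTSA raises.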

In an April 2015 letter to the California Department of Motor Vehicles, NHTSA described its 24-month research program into 10 areas of autonomous vehicle operation:

Current Research Questions
1. How can we retain a driver’s attention on the driving task for highly automated systems that are only partially self-driving and thus require the driver to cycle in and out of an automated driving mode during a driving trip?
2. For highly automated systems that envision allowing the driver to detach from the driving task, how can the driver safely resume control with a reasonable lead time?
3. What types of driver misuse/abuse can occur?
4. What are the incremental driver training needs for each level of automation?
5. What functionally safe design strategies can be implemented for automated vehicle functions?
6. What level of cybersecurity is appropriate for automated vehicle functions?
7. What is the performance of Artificial Intelligence (AI) in different driving scenarios, particularly those situations where the vehicle would have to make crash avoidance decisions with imperfect information?
8. Are there appropriate minimum system performance requirements for automated vehicle systems?
9. What objective tests or other certification procedures are appropriate?
10. What are the potential incremental safety benefits for automated vehicle functions/concepts?

These research points provide useful guidance for companies developing or employing autonomous systems, as well as pointing to areas in which regulation is likely to occur.

As society fits AI-driven autonomy into both consumer and work environments, a broad spectrum analysis of impact is becoming increasingly urgent. Each case requires different treatment, but moving ahead without understanding the full range of interactions across operation, society, and security is likely to be perilous, indeed.

A detailed report on NHTSA policy on autonomous vehicles can be found here: Federal Automated Vehicles Policy: Accelerating the Next Revolution In Roadway Safety.


GM Gets Personal With IBM Watson: Cars Get OnStar AI System

General Motors is enlisting the aid of IBM’s Watson AI platform to create OnStar Go, a smart interactive assistant for automobiles. While the basic GM OnStar platform has been around for 20 years, every car maker today is working on enhancing its network-connected entertainment and vehicle management systems, making this a good opportunity for both companies. By the end of 2016, GM expects to have 12 million OnStar-connected vehicles on the road worldwide. OnStar’s primary mission is to supply safety assistance and communications over a cellular network connection, and this role is expanding as the market continues to develop.

OnStar Go will appear early next year in more than 2 million GM vehicles with 4G cellular service. It is a subscription service providing entertainment and safety capabilities, and it uses Watson’s cognitive capabilities in a manner similar to a home smart assistant such as Amazon Echo or Google Home. OnStar Go bills itself as the first “cognitive mobility platform.”

The capabilities that IBM Watson brings to this platform will include a range of services similar to those provided in varying degrees by smartphones and smart home assistants, plus others that involve its capacity to monitor and control the mechanisms of the vehicle. OnStar Go will be able to help drivers with a growing range of services that demand knowledge of location, personal habits, and vehicular control.

Start with the Car

According to IBM’s press release:

“Combining OnStar’s industry leading vehicle connectivity and data capabilities with IBM Watson APIs will create experiences that allow drivers and passengers to achieve greater levels of efficiency and safety. These experiences could include avoiding traffic when you’re low on fuel, then activating a fuel pump and paying from the dash; ordering a cup of coffee on the go; or getting news and in-vehicle entertainment tailored to your personality and location in real time.”

Cognitive capabilities bring a range of new possibilities for personalizing the driving experience. Initial offerings seem fairly similar to what is already available, but the capability to learn from experience and store vast amounts of data makes it possible to create a more highly customized experience, as well as presenting an opportunity to extend marketing and branding possibilities.

“On average, people in the U.S. spend more than 46 minutes per day in their car and are looking for ways to optimize their time,” said Phil Abram, Executive Director, GM Connected Products and Strategy. “By leveraging OnStar’s connectivity and combining it with the power of Watson, we’re looking to provide safer, simpler and better solutions to make our customers’ mobility experience more valuable and productive.”

As a subscription product, OnStar Go will provide GM with a continuous revenue stream in the entertainment and services industry. The platform also provides plenty of opportunities for learning how people prefer to set up their environment and where they would like to go. This data is useful in creating better personal assistants. IBM does not as yet have a personal assistant tool competing in the Google/Apple/Amazon space; it is clear that such platforms are the way of the future. Voice-operated request systems with cognitive interactions that decode naïve requests and formulate a complex response based upon database and Internet search capability will shape home and office environments for years to come.
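
At its simplest, such a request pipeline decodes an utterance into an intent and then dispatches it to a fulfillment service. Watson’s natural-language processing is far richer than this, but the basic pattern can be sketched in a few lines; every intent, phrase, and response below is a hypothetical toy, not the OnStar Go API.

```python
# Toy sketch of the voice-assistant pattern: utterance -> intent ->
# fulfillment. All intents and responses are hypothetical; this is not
# Watson or the OnStar Go API.
import re

INTENT_PATTERNS = {
    "find_fuel":    re.compile(r"\b(gas|fuel|fill up)\b"),
    "order_coffee": re.compile(r"\bcoffee\b"),
    "play_news":    re.compile(r"\bnews\b"),
}

RESPONSES = {
    "find_fuel":       "Routing to the nearest inexpensive fuel stop.",
    "order_coffee":    "Placing your usual coffee order for pickup.",
    "play_news":       "Playing headlines tailored to your location.",
    "fallback_search": "Searching for an answer.",
}

def decode(utterance: str) -> str:
    """Map an utterance to the first matching intent, else fall back."""
    text = utterance.lower()
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(text):
            return intent
    return "fallback_search"

for phrase in ("I'm low on gas", "Get me a coffee", "What's happening today?"):
    print(f"{phrase!r} -> {RESPONSES[decode(phrase)]}")
```

The hard part, and Watson’s pitch, is everything this sketch omits: resolving naïve or ambiguous requests in context, and personalizing responses from accumulated data.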

Caveat Emptor

There are items of particular note in providing cognitive links to automobile functions. The OnStar system has the potential to stop the car or manipulate its controls, and it provides parking and navigation capabilities that will grow in importance. This kind of control presents security risks, already demonstrated when a pre-Watson version of OnStar was compromised to remotely control a Chevrolet Impala, as reported by the television program 60 Minutes last year. While individual security problems get patched, the possibility of sophisticated access to vehicle systems will always remain. Watson will require large amounts of data, some of it personal; this data could be hacked to reveal the whereabouts and intentions of the user, though the system should also be able to provide more sophisticated intrusion response. Another issue is mobility: requiring continuous attachment to the cellular network for operation creates the possibility of security compromise, or of interruption of complex actions when the network fails. Combined with the connection to vehicle systems, this could create dangerous vulnerabilities that might impact the driver. Additionally, the data collected will likely be provided to third parties for marketing, giving outsiders special knowledge, most of it benign, but some of it potentially pernicious in a moving vehicle.

On to the Future

The ability to provide smart assistance within an automobile is probably the most difficult task expected of a personal assistant. These systems have been of marginal value in the home, where there is relatively little to control. Automobiles are different: they are transportation systems in motion in a sophisticated environment, with innumerable demands, including personal comfort, entertainment, navigation, purchase of outside services, fueling, environmental issues, lighting, safety, and response to the unexpected.

This is a whole new breeding ground for smart assistance. While all major automobile manufacturers are working on similar types of assistance, particularly as the era of autonomous vehicles approaches, the Watson approach brings some new possibilities. A complex cognitive learning system with integrated database and search capabilities makes it possible to craft a more interactive user interface and to operate more deeply within the user environment. This will create a challenge for other players in the smart automobile and personal assistant sectors.

The personal assistant industry is developing quickly as visions of the smart home come together. Companies understand that they need to be visionary here. Those who establish a viable platform and are able to demonstrate success in this category will be able to apply it across the board to the consumer and commercial environments. This Holy Grail of marketing and interactive assistance will be fundamental to the complex operation of the Internet of Things; it will bring on the age of personal assistants, robots, and the advent of ubiquitous intelligence in processes.


Qualcomm Buys NXP for IoT and Cars

In one of the biggest technology deals in recent years, semiconductor giant Qualcomm has signed an agreement to buy NXP Semiconductors N.V. for about $47 billion. Netherlands-based NXP is best known for its automotive silicon products. It is the fifth-largest non-memory semiconductor supplier and a leading supplier of secure identification, automotive, and digital networking products. It was the co-inventor (with Sony) of Near Field Communication (NFC) and the inventor of the I²C interface used for attaching peripherals to microcontrollers.
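
I²C’s staying power as peripheral glue comes from its minimal protocol: a transaction opens with a start condition, followed by one byte carrying the target device’s 7-bit address and a read/write flag. That framing is simple enough to show in a few lines of pure simulation (no hardware involved; the sensor address is just an example):

```python
# Simulated I2C address framing: the first byte after the start
# condition is the 7-bit device address shifted left, with the
# read/write flag in the low bit. Pure illustration; no hardware.
def i2c_address_byte(addr7: int, read: bool) -> int:
    if not 0 <= addr7 <= 0x7F:
        raise ValueError("I2C device addresses are 7 bits")
    return (addr7 << 1) | int(read)

# Example: a peripheral at address 0x48 (a common sensor address).
print(f"write frame: 0x{i2c_address_byte(0x48, read=False):02X}")  # 0x90
print(f"read frame:  0x{i2c_address_byte(0x48, read=True):02X}")   # 0x91
```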

These capabilities put NXP at the center of IoT and smart automobile technology. Qualcomm, with its ARM-based processors, wireless networking technology, and overlapping NFC and automotive lines, seems a good fit; the move will expand Qualcomm’s capabilities as well as opening new markets for embedded silicon solutions.

The NXP release details what the company brings to the table:

Mobile: A leader in mobile SoCs, 3G/4G modems and security.

Automotive: A leader in global automotive semiconductors, including ADAS, infotainment, safety systems, body and networking, powertrain and chassis, secure access, telematics and connectivity.

IoT and Security: A leader in broad-based microcontrollers, secure identification, mobile transactions, payment cards and transit; strength in application processors and connectivity systems.

Networking: A leader in network processors for wired and wireless communications and RF sub-segments, Wave-2 11ac/11ad, RF power and BTS systems.

Qualcomm and Intel have both been moving rapidly into IoT building-block territory in recent years, and the developing automobile silicon market is a critical testbed for evolving concepts in this area. To paraphrase Willie Sutton’s famous quote, “that’s where the money is.” Autonomous and electric car frenzies are pushing a variety of technologies to create complete digital environments on fully managed wheels.

As digitization is proving in so many areas, developments shaping the IoT easily move between categories with the speed of invention. Ultimately, neuromorphic and other specialized cognitive chip designs such as Qualcomm’s Zeroth will line up with these device interfaces to create a new generation of smart things.

But that’s another story.