Delphi Automotive Grabs NuTonomy; the Jostling Continues

In yet another acquisition within the vehicular autonomy market, UK-based Delphi Automotive has purchased Boston-based nuTonomy, doubling its research staff and adding compatible software systems for its Automated Mobility on Demand (AMoD) solutions. Neither company is as yet a major player in the autonomous automobile sector: Delphi Automotive PLC is a General Motors spinoff, and nuTonomy is an MIT startup. Delphi’s fully autonomous solution, the Centralized Sensing Localization and Planning (CSLP) system, is planned for a 2019 launch; it is built on BlackBerry’s QNX operating system and software from Ottomatika, another recent Delphi acquisition. nuTonomy, founded in 2013 by long-time industry players Dr. Karl Iagnemma and Dr. Emilio Frazzoli, is developing a full-stack autonomous drive software solution for the AMoD market. The acquisition combines nuTonomy’s hundred-member team with Delphi’s own hundred-member team, doubling research staff in an area where skills are in extreme demand.

While this acquisition could raise Delphi to a higher level of visibility among the major autonomous vehicle challengers, it also carries important implications for other industry players. Delphi and nuTonomy had been pursuing different collaborations: Delphi partnered with BMW and Intel/Mobileye, while nuTonomy is allied with Nvidia, which some believe has a more mature autonomous software stack. Inevitably, this brings into the fray the ongoing competition between Intel and Nvidia over artificial intelligence processors and supporting software, particularly as the industry awaits the Nvidia Xavier SoC, expected to become available in 2018.

nuTonomy’s employees will remain in Boston while Delphi remains headquartered in the United Kingdom, though it also has offices in Boston. Both companies have been running experiments, and maintain a presence, in Singapore. The combination leaves Delphi with self-driving operations in Boston, Pittsburgh, Singapore, Santa Monica, and Silicon Valley. With nuTonomy’s fleet included, Delphi expects to have 60 autonomous cars on the road by the end of 2017.

Another point of interest is that, as with Google’s Waymo, Delphi Automotive intends to split off the vehicle autonomy business in 2018. It will create two new standalone companies: Delphi Technologies, which will handle the powertrain business and next-generation vehicle propulsion systems based on electrification; and Aptiv, which will include Electronics & Safety and Electrical/Electronic Architecture, including the “brains” to drive vehicles. Separating the autonomous vehicle units makes sense given the special dynamics of this sector. Smaller companies are being bought by larger companies to obtain resources and skills that are hard to amass in the current environment, and separate companies are more easily integrated into the competitive alliances needed to incorporate an increasing range of specialized products and expertise.

According to Delphi’s President and Chief Executive Officer, Kevin Clark, “The combination of the nuTonomy and Ottomatika AD teams, along with Delphi’s industry-leading portfolio of perception systems and smart vehicle architecture solutions, further enhances our competitive position as the industry’s most formidable provider of autonomous mobility solutions.”

In short, the autonomous vehicle sector is likely to remain volatile for some time and the search for talent will continue until the next generation of engineers in AI solutions becomes available.


Coming Soon to Nagoya: RoboCup 2017: The Video

It’s almost time for this year’s RoboCup competition in Nagoya, Japan, July 27-30. The event has expanded to include more discussions and competitions across a diverse range of robotics activities. The robots demonstrate incremental advances in autonomous goal-directed behavior, improving a little each year. RoboCup is a springboard for discussion of critical areas of robotic development and also provides a showcase for university robotics programs and recruitment.

RoboCup has been held annually for 21 years. More than 3,500 dedicated scientists and developers from more than 40 countries are expected. The event features a number of activities:

RoboCup Soccer includes autonomous mobile robots separated into leagues: Humanoid, Standard Platform, Middle Size, Small Size and Simulation.

RoboCup Industrial includes RoboCup Logistics and RoboCup@Work. It is a competition between industrial mobile robots, focusing on logistics and warehousing systems in anticipation of Industry 4.0.

RoboCup Rescue includes the Rescue Robot League and Rescue Simulation League. It employs technologies developed in RoboCup Soccer to promote simulations that will contribute toward the development of autonomous robots for use in rescue efforts at disaster sites.

RoboCup@Home applies these technologies to people’s everyday lives; entries are evaluated according to how well the robots cooperate with human beings to complete various tasks.

RoboCupJunior includes Soccer, Rescue and Onstage Competition to stimulate children’s curiosity and encourage them to participate in robot design and production.

Official RoboCup 2017 Video

Best Goals of RoboCup 2016 (CNET)

Published on Jul 14, 2016

Can Robots Beat Elite Soccer Players? (TEDxYouth)

Published on Apr 23, 2013

As Professor of Computer Science at UT Austin, Dr. Peter Stone has interests that run from competing in the international RoboCup soccer tournament to developing novel computing protocols for self-driving vehicles. His long-term research involves creating complete, robust, autonomous agents that can learn to interact with other intelligent agents in a wide range of complex and dynamic environments.

What Soccer-Playing Robots Have to do with Healthcare (TEDx Talks)

Published on Sep 29, 2012

Steve McGill is a second-year PhD student at the University of Pennsylvania studying humanoid robotics and human-robot interaction under Professor Dan Lee. As part of Team DARwIn, he captured the first-place medal at the international RoboCup humanoid soccer competition in Istanbul, Turkey. Steve is interested in applying this robotic technology in the field for human intercommunication.


SoftBank Buys Boston Dynamics, Promises More Robots

Japanese technology company SoftBank is acquiring acclaimed robot maker Boston Dynamics from Alphabet (Google), along with Japanese bipedal robotics company Schaft; both were acquired by Google in 2013. This is part of SoftBank’s move into the robotics space, exemplified by the SoftBank Robotics “Pepper” humanoid robot, whose roles have recently been expanding beyond the consumer space. SoftBank’s robot lineup now includes Pepper, Boston Dynamics’ BigDog and Handle, Schaft’s S-One and related projects, and Fetch Robotics’ warehouse fulfillment robots, making it a versatile, multi-mission player in this space.

According to the press release, Masayoshi Son, Chairman and CEO of SoftBank Group, said:

“Today, there are many issues we still cannot solve by ourselves with human capabilities. Smart robotics are going to be a key driver of the next stage of the Information Revolution, and Marc and his team at Boston Dynamics are the clear technology leaders in advanced dynamic robots. I am thrilled to welcome them to the SoftBank family and look forward to supporting them as they continue to advance the field of robotics and explore applications that can help make life easier, safer and more fulfilling.”

Boston Dynamics is known for its DARPA-backed, military-oriented robots, including BigDog, Handle, and the humanoid robot Atlas. It has struggled to find a market for its products at this stage of development, and Alphabet has been trying to sell the operation since last year.

SoftBank has a wide range of related interests and commitments in this area, including advanced telecommunications, internet services, AI, smart robotics, IoT, clean energy technology, and ARM processors. It entered the robotics market through its acquisition of Aldebaran Robotics in 2012. Aldebaran, creator of the Nao and Romeo robots, was renamed SoftBank Robotics and created a social robot called Pepper, which is being tested in a growing range of consumer and business settings.

SoftBank’s robotics ventures are centered in its Tokyo-based subsidiary SoftBank Robotics Holding Corp, established in 2014, with offices in Japan, France, the U.S., and China. SoftBank Robotics has more than 500 employees working in Paris, Tokyo, San Francisco, Boston, and Shanghai. Its robots are used in more than 70 countries for research, education, retail, healthcare, tourism, hospitality, and entertainment.

The CEO of Boston Dynamics Explains Each Robot in the Fleet (jurvetson)

SoftBank Robotics: Meet Pepper (SoftBank Robotics America)




Cisco Adds MindMeld for Conversational Assistance

Cisco is buying AI digital assistant startup MindMeld as part of a string of May acquisitions. Cisco will be using the technology to improve its collaboration suite by adding conversational interfaces, beginning with Cisco Spark.

MindMeld is a relatively small company, but it is a recognized player in the conversational interface arena. It provides a flexible platform called “Deep-Domain Conversational AI,” which can be used to add knowledge and expertise around any custom content domain. This allows companies to amplify the capabilities of natural language conversational interfaces. MindMeld is currently used by Spotify and Samsung, among others.

MindMeld brings 10 patents in its domain. The company was founded in 2014 by Tim Tuttle, a former AI researcher at MIT and Bell Labs, and Moninder Jheeta.

Capabilities included in the MindMeld offering are broad-vocabulary natural language understanding, question answering across any knowledge graph, dialog management and dialog state tracking, and large-scale training data generation and management.
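
To make the domain-scoped approach concrete, here is a minimal sketch of intent classification over a narrow custom domain, the first step such a system performs before dialog management takes over. This is not MindMeld’s API; the domain, utterances, and intent labels are hypothetical, and scikit-learn stands in for production-grade NLU.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training utterances for a narrow "coffee shop" domain.
training_utterances = [
    ("I'd like a large latte", "order_drink"),
    ("can I get two espressos", "order_drink"),
    ("what time do you close", "ask_hours"),
    ("are you open on Sunday", "ask_hours"),
    ("cancel my last order", "cancel_order"),
    ("never mind, scrap that order", "cancel_order"),
]
texts, intents = zip(*training_utterances)

# TF-IDF features plus logistic regression: a minimal stand-in for the
# deep-domain NLU a production conversational platform would use.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression())
classifier.fit(texts, intents)

print(classifier.predict(["when do you open tomorrow"]))  # likely ['ask_hours']
```

The point of the narrow domain is that a handful of labeled utterances can cover it well, which is exactly where general-purpose assistants tend to fall short.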

The value proposition of MindMeld’s offering has always been in enabling conversations within specific knowledge domains, a particular weakness of general-purpose assistants, and one that grows more important as AI moves to respond to business needs.

According to Cisco’s head of M&A and venture investment, Rob Salvagno, writing on the Cisco blog,

I’m excited for the potential represented by the MindMeld team and their technology, coupled with Cisco’s market-leading collaboration portfolio, to enable us to create a user experience that is unlike anything that exists in the market today. Together, we will work to create the next generation collaboration experience. The MindMeld team will form the Cognitive Collaboration team and report into the IoT and Applications group under Jens Meggers, senior vice president and general manager.

Of course, the acquisition also brings significant AI skill sets, which will now be directed toward enhancing Cisco’s efforts. It is part of the continuing movement to find a place for digital assistants in business that will match their blossoming in the consumer realm (Digital Assistants Coming of Age with Alexa).




Google Adds a Parliament of Owls to its VR Team: Acquisition of Owlchemy

Google is moving a bit further into the VR space with its acquisition of “absurd” VR gaming company Owlchemy Labs. This could accelerate VR innovation, bringing extra attention to the user experience (UX) and a concentration of skills that could give Google an edge as VR and AR progress.

Owlchemy Labs began as a conventional games designer, then became one of the first companies to work with the Oculus Rift VR platform, releasing the VR game Job Simulator. Job Simulator now has over $3 million in sales, and the company has grown from a team of 4 in 2010 to a team of 23. According to the Owlchemy press release:

This means Owlchemy will continue building high quality VR content for platforms like the HTC Vive, Oculus Touch, and PlayStation VR. This means continuing to focus on hand interactions and high quality user experiences, like with Job Simulator. This means continuing our mission to build VR for everyone, and doing all of this as the same silly Owlchemy Labs you know and love.

Job Simulator was an overnight success, with its particular focus on tracked head and hands to enhance the gaming experience. The game launched on HTC Vive, Oculus Rift + Touch, and PlayStation VR and won multiple awards for gameplay and interaction. More recently, the company has pioneered a new way to show VR footage with its “Spectator Mode,” and it continues to improve its VR presentation.

Google’s blog announcement shows where the new acquisition fits into the Googleverse:

We care a lot about building and investing in compelling, high-quality, and interactive virtual reality experiences and have created many of our own—from YouTube, Street View, and Photos on Daydream to Google Earth VR and Tilt Brush. And, we work with partners and support developers and creators outside of Google to help bring their ideas to VR. …

Together, we’ll be working to create engaging, immersive games and developing new interaction models across many different platforms to continue bringing the best VR experiences to life. There is so much more to build and learn, so stay tuned!

Google has been actively pursuing VR for some time now, since before its almost accidental rollout of the Cardboard VR headset. Unlike many other operators in this space, it has the resources to explore a very broad array of VR issues. The company’s Daydream Labs, for example, is focusing on social aspects of the VR user experience as well as content. Usability and controls are of particular importance in building new uses for these platforms, and this is an area in which Owlchemy excels.

As with AI, this acquisition also brings more VR experience and skills into Google. Competition includes Microsoft HoloLens and Facebook’s Oculus, among others. As has always been the case, the next generation of technologies is being actively explored in gaming before being put to practical use. The intersection of AI with VR (On the Intersection of AI and Augmented Reality) will become particularly important as a means of building and enhancing virtual and augmented realities, and of interacting with their components.

The Duke of Wellington is famously attributed with the quotation, “The battle of Waterloo was won on the playing fields of Eton.”  It is quite possible that the battle of Consumer and Enterprise VR will be won on the gaming headsets of FAMGA (Facebook, Apple, Microsoft, Google, Amazon).



IBM and ABB Collaborate to Boost Industry 4.0

Swiss-based engineering giant Asea Brown Boveri (ABB) and IBM have announced a strategic collaboration that brings together ABB’s industry-leading digital sensor and control offering, ABB Ability, with IBM’s Watson AI-based IoT system to provide a more comprehensive intelligent solution for control and defect monitoring in utilities, industry, and transport & infrastructure.

IBM Watson has been moving gradually into a range of new territories as applications for its cognitive capabilities are being explored. This move brings both companies closer to “Industry 4.0” where competition includes not only software companies, but also manufacturing firms such as GE.

The IBM/ABB solution is aimed at goals such as improving quality control, reducing downtime, and increasing the speed and yield of industrial processes, enabling current sensor and data-gathering systems to become “cognitive” by using collected data to understand conditions and take actions. ABB brings deep domain knowledge and an extensive portfolio of digital solutions to the mix, which is combined with IBM’s AI capabilities and vertical industry applications. The first two joint industry solutions will bring real-time cognitive insights to defect detection on the factory floor and to maintenance optimization for smart grids.
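
As a rough illustration of what “cognitive” sensor monitoring means in practice, the sketch below flags defect-like anomalies in a simulated factory sensor stream. It is not the IBM/ABB product, just a simple statistical stand-in for the learned models such a system would use; the window size, threshold, and data are all illustrative.

```python
import numpy as np

def flag_anomalies(readings, window=50, z_threshold=3.0):
    """Return indices of readings that deviate strongly from the
    trailing window's mean -- a crude stand-in for learned models."""
    readings = np.asarray(readings, dtype=float)
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# Simulated vibration sensor: steady signal with one injected defect spike.
rng = np.random.default_rng(0)
signal = rng.normal(loc=1.0, scale=0.05, size=500)
signal[400] = 2.5  # the "defect"
print(flag_anomalies(signal))  # expected: [400]
```

In a deployment, the flagged indices would trigger the “take actions” half of the loop: a maintenance ticket, a line stoppage, or a grid reconfiguration.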

According to ABB CEO Ulrich Spiesshofer:

“This powerful combination marks truly the next level of industrial technology, moving beyond current connected systems that simply gather data, to industrial operations and machines that use data to sense, analyze, optimize and take actions that drive greater uptime, speed and yield for industrial customers. With an installed base of 70 million connected devices, 70,000 digital control systems and 6,000 enterprise software solutions, ABB is a trusted leader in the industrial space, and has a four decade long history of creating digital solutions for customers. IBM is a leader in artificial intelligence and cognitive computing. Together, IBM and ABB will create powerful solutions for customers to benefit from the Fourth Industrial Revolution.”

Part of ABB’s business strategy is collaboration with other vendors; it partners with IBM, Microsoft, and Wipro, among others, in delivering its digital solutions. Prior to the IBM deal, ABB had been using predictive and prescriptive analytics, plus customized models based on extensive industry expertise, to identify and prioritize emerging maintenance needs based on probability of failure and asset criticality.

For IBM, the collaboration aids in bringing Watson deeper into the crucial Industry 4.0 space, a key area for technological progress in IoT and AI. According to IBM CEO Ginni Rometty:

“This important collaboration with ABB will take Watson even deeper into industrial applications — from manufacturing, to utilities, to transportation and more. The data generated from industrial companies’ products, facilities and systems holds the promise of exponential advances in innovation, efficiency and safety. Only with Watson’s broad cognitive capabilities and our platform’s unique support for industries can this vast new resource be turned into value, with trust. We are eager to work in partnership with ABB on this new industrial era.”

We can expect an onslaught of collaborations, mergers and acquisitions, and talent wars across this valuable sector as major industrial and IT forces join in the fray.




Baidu Adds xPerception to its AI/VR Stockpile

Leading Chinese Internet search provider Baidu is acquiring US startup xPerception, a provider of AI-based visual perception software and hardware for robotics and virtual reality (VR). The deal brings important talent for the company’s moves into AI: xPerception co-founders Dr. Bao Yingze and Dr. Chen Mingyu were both key engineers at AR startup Magic Leap. The xPerception team will move to the US and Beijing offices of Baidu Research and continue developing xPerception’s Simultaneous Localization and Mapping (SLAM) technology.

SLAM is critical to visual perception in a variety of AI and VR roles, including 3D vision, robotics, drones, and autonomous driving. The core of xPerception’s technology is a 3D visual-inertial camera for mobile platforms, with a sophisticated SDK that enables pose tracking, low-latency sensor fusion, and object recognition. This permits self-localization, 3D structure reconstruction, and path planning in new environments. These technologies linking AI and VR are opening new opportunities, as discussed in the recent blog On the Intersection of AI and Augmented Reality.
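
For a feel of the pose-tracking half of SLAM, the toy sketch below integrates odometry for a 2D robot. A real visual-inertial system like xPerception’s fuses camera and IMU measurements and corrects accumulated drift against a map; this sketch, with assumed velocities and timestep, shows only the bare state update that those corrections act on.

```python
import math

def integrate_odometry(pose, v, omega, dt):
    """Advance an (x, y, heading) pose given linear velocity v (m/s),
    angular velocity omega (rad/s), and timestep dt (s)."""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta = (theta + omega * dt) % (2 * math.pi)
    return (x, y, theta)

# Drive forward while turning gently. Without the landmark and loop-closure
# corrections a full SLAM system applies, error accumulates unchecked.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = integrate_odometry(pose, v=1.0, omega=0.05, dt=0.1)
print(pose)  # estimated position and heading after 10 seconds
```

The “mapping” half of SLAM, building the 3D structure that keeps such estimates honest, is where the visual-inertial camera and SDK earn their keep.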

In addition to integration with Baidu’s AI and autonomous driving programs, the xPerception acquisition provides high-demand skills and helps to allay concerns over US regulations and immigration policies. Chinese acquisitions have recently faced potential regulatory blockage, most notably claims that Alibaba Group’s $1.2 billion bid for U.S. firm MoneyGram International poses national security risks; U.S. lawmakers are requesting a review by the Committee on Foreign Investment. Meanwhile, Chinese internet firm LeEco has terminated a $2 billion bid for U.S. electronics firm Vizio due to regulatory issues. Baidu’s splitting of its AI research between Beijing and California provides assurance that changing US immigration policies will not overly affect research agendas, while providing access to US-based talent and skills networks.

Baidu has been active in AI for some years now, but finding talent in this area is difficult. High-profile chief scientist Andrew Ng left the company in March, making the addition of experienced AI/VR staff a priority. Baidu’s commitment to this area is further demonstrated by the Chinese government’s February selection of the company to set up a national AI lab. Baidu Research currently maintains four analytics and AI labs: the Silicon Valley AI Lab, the Institute of Deep Learning, the Big Data Lab, and the Augmented Reality Lab. According to Ng, Baidu’s AI group now includes about 1,300 people.

The search for AI talent is global, fueled by visions of integrating this rapidly developing technology with an increasing range of business and technology processes. Autonomous vehicles continue to be a driving force. Meanwhile, globalization issues may drive companies to hedge their bets, particularly in China and India. The last large acquisition, Intel’s purchase of Mobileye, split research between Silicon Valley and Israel (Car Wars: Intel Bags Mobileye).




Car Wars: Intel Bags Mobileye

Intel is acquiring autonomous driving company Mobileye in a deal valued at $15.3 billion, expected to close toward the end of this year. The acquisition of the Israeli firm, whose technology is used by 27 companies in the auto industry, raises a number of interesting issues in the self-driving vehicle technology race.

Intel has been pursuing autonomous vehicle technology for some time, but this initiative, one of the 10 largest acquisitions in the tech industry, brings it front and center. The key to Mobileye’s autonomous solution lies in its silicon. Mobileye has developed its EyeQ® family of system-on-chip (SoC) devices, which support complex and computationally intense vision processing while maintaining low power consumption. Mobileye is currently developing its fifth-generation chip, the EyeQ5, to act as the visual central computer for fully autonomous self-driving vehicles expected to appear in 2020. The EyeQ chips employ proprietary computing cores optimized for computer vision, signal processing, and machine learning tasks, including deep neural networks. These cores are designed specifically to address the needs of Advanced Driver Assistance Systems (ADAS).
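
To give a sense of the class of workloads such vision cores accelerate, here is a classic lane-line detection pipeline, edge detection followed by a Hough transform, run against a synthetic frame. This is a generic OpenCV sketch, not Mobileye’s algorithms, and all thresholds are illustrative; the point is that ADAS silicon must sustain pipelines like this, and far heavier neural ones, at video rate and low power.

```python
import cv2
import numpy as np

# Synthetic 480x640 "road" frame with two painted lane lines standing in
# for a camera image.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.line(frame, (200, 480), (300, 240), (255, 255, 255), 8)  # left lane
cv2.line(frame, (440, 480), (340, 240), (255, 255, 255), 8)  # right lane

# Classic pipeline: grayscale -> Canny edges -> probabilistic Hough lines.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=100, maxLineGap=20)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print(f"lane segment: ({x1},{y1}) -> ({x2},{y2})")
```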

Intel’s traditional role is that of a chip developer providing the building blocks for technology, and it is moving forcefully in this direction, partly as a result of growing competition in embedded machine learning from the likes of Nvidia and Qualcomm, both of which are also moving into the autonomous vehicle area. Self-driving cars are the nexus of development in machine learning due to the huge profit expectations of the automobile, transportation, and logistics industries. Evolution of autonomous vehicles, particularly with deep learning capabilities in silicon, will also create additional pressure on artificial intelligence skills across all industry sectors, while spurring an explosion in innovation and speeding development of these systems.

Intel intends to form an autonomous driving unit combining its current Automated Driving Group (ADG) and Mobileye. The group will be headquartered in Israel and led by Mobileye co-founder Amnon Shashua, currently Mobileye’s Chairman and CTO and a professor at Hebrew University.

According to the combined press release:

This acquisition will combine the best-in-class technologies from both companies, spanning connectivity, computer vision, data center, sensor fusion, high-performance computing, localization and mapping, machine learning and artificial intelligence. Together with partners and customers, Intel and Mobileye expect to deliver driving solutions that will transform the automotive industry.

The new organization will support both companies’ existing production programs and build upon the large number of relationships that Mobileye maintains with OEMs, automobile industry tier 1 suppliers, and semiconductor partners.

Intel’s interests in this deal are likely to be diverse. Among the potential benefits are:

  • Autonomous vehicles could generate four terabytes of data per hour to be processed, creating large-scale opportunities for Intel’s high-end Xeon processors and Mobileye’s latest generation of SoCs.
  • Basing the unit in Israel, where Intel already has a significant presence, potentially isolates its research and testing from the competitive hotbed of Silicon Valley, shielding employees from poaching. It also avoids current immigration issues.
  • Additional competitive advantages within AI and embedded deep learning, the next-generation targets of Intel’s silicon competition.

It is worth noting that this deal is a general boost to autonomous vehicles and will inevitably lead to greater concentration of resources in this field. Although relying upon a common supplier of autonomous systems makes sense economically, it also reduces competitive differentiation.

The number of companies involved in this sector continues to grow as the implications stretch out through the entire transportation-related economy. We have covered a number of these systems in recent blogs here (Car Wars: DiDi Chuxing Roars into the Valley with 400 Million Users, Car Wars: Ford Adds Billion Dollar Investment Acquisition to its AI, All Things Need Autonomy: TomTom Grabs Autonomos, Uber Doubles Down on AI with Geometric Acquisition, Qualcomm Buys NXP for IoT and Cars). The net result will be to create a huge rush for talent in the machine learning space, as well as in all of the areas related to integration with automobile systems. This will increase the speed of evolution for embedded AI, which will filter rapidly into other areas of business and process enablement, though the pace will be constrained by the availability of talent.


Car Wars: DiDi Chuxing Roars into the Valley with 400 Million Users

Chinese ride-hailing and transportation company DiDi Chuxing has just opened its own AI vehicle lab in Silicon Valley. DiDi Labs in Mountain View, CA, will focus on AI-based security and intelligent driving technologies, hoping to grab talent in the region as the battle for AI skills continues (see Car Wars: Ford Adds Billion Dollar Investment Acquisition to its AI Arsenal).

According to Cheng Wei, the company’s founder and CEO, “As we strive to bring better services to broader communities, DiDi’s international vision now extends to building the best-in-class international research network, advancing the global transportation revolution by leveraging innovative resources. The launch of DiDi Labs is a landmark in creating this global nexus of innovation.”

Away from the spotlight, DiDi has accumulated a huge amount of experience in China, which provides a substantial data source for its AI efforts. DiDi acquired Uber China in August 2016. Unlike its rivals, the company has size and reach that are truly impressive: it offers a full range of technology-based services for nearly 400 million users across more than 400 Chinese cities, including taxi hailing, private car hailing, Hitch (social ride-sharing), DiDi Chauffeur, DiDi Bus, DiDi Minibus, DiDi Test Drive, DiDi Car Rental, and DiDi Enterprise Solutions. As many as 20 million rides were completed on DiDi’s platform daily in October 2016, making DiDi the world’s second-largest online transaction platform, next only to China’s Alibaba-owned shopping website, Taobao.

It is also important to note that DiDi is estimated to be worth about US$35 billion, and its investors reportedly include all three of China’s Internet giants (Alibaba, Tencent, and Baidu) as well as Apple. The current initiative is centered on security issues, but the lab’s location and stated intent make the acquisition of Valley AI talent a clear objective. Uber’s 2015 hiring away of much of Carnegie Mellon’s robotics lab made every company in the automotive intelligence sector nervous about talent. DiDi has recently faced a range of regulatory issues in its Chinese market, and the current move may deflect some of that attention, while also playing into the Chinese government’s current push to develop innovation.

DiDi Labs will be led by Dr. Fengmin Gong, Vice President of the DiDi Research Institute, joined by a number of prominent researchers, including top automobile security expert Charlie Miller, one of the researchers who famously hacked a Jeep Cherokee in 2015. Current projects span cloud-based security, deep learning, human-machine interaction, computer vision and imaging, and intelligent driving technologies.

DiDi has been involved with autonomous vehicles for some time, and this move mainly surfaces that involvement. DiDi previously established the DiDi Research Institute to focus on AI technologies including machine learning and computer vision. It also has ongoing research in big data analytics based on the 70TB of data, 9 billion routing requests, and 13 billion location points its platforms handle.

Alongside the new lab, the company also launched the DiDi-Udacity Self-Driving Car Challenge, an open-source self-driving competition in which teams are invited to create an Automated Safety and Awareness Processing Stack (ASAPS) to improve general safety metrics for human and intelligent driving scenarios, based on real data.

According to DiDi CTO Bob Zhang, “In the next decade, DiDi will play a leading role in innovation at three layers: optimization of transportation infrastructure, introduction of new energy vehicles and intelligent driving systems, and a shift in human-automotive relationship from ownership to shared access.”


Amazon Gives Alexa a PhD Boost

Amazon has just announced the Alexa Fund Fellowship program, a year-long doctoral program at four universities to take on technology problems that can be solved with Amazon’s NLP home assistant, Alexa. Alexa has become increasingly popular in recent years through its open provision of cloud-based “skills” that enable people to interact with devices through voice commands in an intuitive way. This has created numerous alliances and connections with devices across the IoT, as we have discussed in a previous post, Digital Assistants Coming of Age with Alexa.
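
For readers unfamiliar with how a “skill” works under the hood, the sketch below shows a minimal Lambda-style backend that maps a voice intent to a spoken reply, following the general shape of the Alexa Skills Kit request/response JSON. The skill and its intent name here are hypothetical, and real skills layer session handling and slot parsing on top of this.

```python
def lambda_handler(event, context):
    """Entry point invoked by the Alexa service for each user utterance."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        text = "Welcome. Ask me for the device status."
    elif (request["type"] == "IntentRequest"
          and request["intent"]["name"] == "GetDeviceStatusIntent"):
        text = "All monitored devices are online."  # stand-in for real logic
    else:
        text = "Sorry, I didn't understand that."

    # Response envelope per the Alexa Skills Kit JSON format.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```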

Amazon is a strong competitor in a tough field that includes various overlaps with Google Home, Microsoft Cortana, Apple Siri, IBM Watson, and others. Competition is not only about market share and market direction, but also about available skills, alliances, and the formation of new research streams that focus on favorable technology.

The Fellowship Program

According to Amazon’s developer site:

The Amazon Alexa Fund is establishing an Alexa Fund Fellowship program to support academic institutions leading the charge in the fields of text-to-speech (TTS), natural language understanding (NLU), automatic speech recognition (ASR), conversational artificial intelligence (AI), and other related engineering fields. The goal of the Alexa Fund Fellowship program is to educate students about voice technology and empower them to create the next big thing.

The four participating universities are Carnegie Mellon, the University of Waterloo, the University of Southern California (USC), and Johns Hopkins. The program provides cash funding, Alexa-enabled devices, and mentorship from the company’s Alexa Science teams in the development of a graduate or undergraduate class curriculum related to Alexa technology.

The first two announced programs are from Carnegie Mellon and Waterloo. Carnegie Mellon’s Dialog Systems course “will teach participants how to implement a complete spoken language system while providing opportunities to explore research topics of interest in the context of a functioning system.” Waterloo’s Fundamentals of Computational Intelligence will “introduce students to novel approaches for computational intelligence based techniques including: knowledge-based reasoning, expert systems, fuzzy inferencing and connectionist modeling based on artificial neural networks.”

It is interesting to note that the funded courses might include broader agendas, furthering general progress in AI, but always in an Alexa context.

The Investment Context

This Fellowship is part of a spate of recent R&D, startup, and academic investments in Alexa. It comes on the heels of Amazon’s September announcement of the Alexa Prize, a competition between 12 selected university teams to build a socialbot that can converse coherently and engagingly with humans on popular topics for 20 minutes, using Alexa technology.

The Alexa Fund itself was started in June 2015, with goals described on its landing page:

The Alexa Fund provides up to $100 million in venture capital funding to fuel voice technology innovation. We believe experiences designed around the human voice will fundamentally improve the way people use technology. Since introducing Alexa-enabled devices like the Amazon Echo, we’ve heard from developers, device-makers, and companies of all sizes that want to innovate with voice. Whether that’s creating new Alexa capabilities with the Alexa Skills Kit (ASK), building devices that use Alexa for new and novel voice experiences using the Alexa Voice Service (AVS), inventing core voice-enabling technology, or developing a product or service that can expand the boundaries of voice technology, we’d love to talk to you.

In past years, the Fund has focused on entrepreneurship, with investments at various funding stages in 23 companies. Although many amounts are undisclosed, the top recent known investments have been in Thalmic Labs, a wearables company; ecobee, a provider of Wi-Fi enabled smart thermostats; and Ring, a wireless video doorbell company.

Recently, the Fund also created the Alexa Accelerator initiative in cooperation with global entrepreneurial ecosystem company Techstars. The Accelerator is a 13-week startup program for 10 companies, running from July to September 2017. According to Fund spokesperson Rodrigo Prudencio:

We will seek out companies tackling hard problems in a variety of domains—consumer, productivity, enterprise, entertainment, health and wellness, travel—that are interested in making an Alexa integration a priority. We’ll also look for companies that are building enabling technology such as natural language understanding (NLU) and better hardware designs that can extend or add to Alexa’s capabilities.

They are Not Alone

Of course, Amazon is not unique in creating and funding programs that support its AI initiatives within the academic and entrepreneurial communities. There is a lot of funding available, and it will support progress in these technologies. Such programs also respond to the need to educate AI specialists in a hurry, and may provide an edge for one company’s interpretation of the AI and robotics future.