Delphi Automotive Grabs NuTonomy; the Jostling Continues

In yet another acquisition within the vehicular autonomy market, UK-based Delphi Automotive has purchased Boston-based nuTonomy, doubling its research staff and adding compatible software systems for its Automated Mobility on-Demand (AMoD) solutions. Neither company is as yet a major player within the autonomous automobile sector. Delphi Automotive PLC is a General Motors spinoff, and nuTonomy is an MIT spinoff. Delphi’s fully autonomous solution, the Centralized Sensing Localization and Planning (CSLP) system, is planned for a 2019 launch; it is built on BlackBerry’s QNX operating system and software from Ottomatika, another recent Delphi acquisition. Meanwhile, nuTonomy was founded in 2013 by long-time industry players Dr. Karl Iagnemma and Dr. Emilio Frazzoli and is developing a full-stack autonomous driving software solution for the AMoD market. The acquisition combines nuTonomy’s hundred-member team with Delphi’s own hundred-member team, doubling research staff in an area where skills are in extreme demand.

While this acquisition could raise Delphi’s visibility among the major autonomous vehicle challengers, it also carries important implications for the other industry players. Delphi and nuTonomy had been pursuing different collaborations: Delphi partnered with BMW and Intel/Mobileye, while nuTonomy is allied with Nvidia, which some believe has a more mature autonomous software stack. Inevitably, this brings into the fray the ongoing competition between Intel and Nvidia over artificial intelligence processors and supporting software, particularly as the industry awaits the upcoming Nvidia Xavier SoC, expected to become available in 2018.

NuTonomy’s employees will remain in Boston while Delphi remains headquartered in the United Kingdom, though it also has offices in Boston. Both companies have been running experiments, and have a presence, in Singapore. The combination leaves Delphi with self-driving operations in Boston, Pittsburgh, Singapore, Santa Monica, and Silicon Valley. Combined with nuTonomy’s efforts, Delphi will have 60 autonomous cars on the road by the end of 2017.

Another point of interest is that, as with Google’s Waymo, Delphi Automotive intends to split off the vehicle autonomy business in 2018. It will create two new standalone companies: Delphi Technologies, which will handle the powertrain business and pursue next-generation vehicle propulsion systems based on electrification; and Aptiv, which will include Electronics & Safety and Electrical/Electronic Architecture, including the “brains” to drive vehicles. Separating the autonomous vehicle units makes sense given the special dynamics of this sector: smaller companies are being bought by larger companies to obtain resources and skills that are hard to amass in the current environment, and separate companies are more easily integrated into the competitive alliances that will be needed to incorporate an increasing range of specialized products and expertise.

According to Delphi’s President and Chief Executive Officer, Kevin Clark, “The combination of the nuTonomy and Ottomatika AD teams, along with Delphi’s industry-leading portfolio of perception systems and smart vehicle architecture solutions, further enhances our competitive position as the industry’s most formidable provider of autonomous mobility solutions.”

In short, the autonomous vehicle sector is likely to remain volatile for some time and the search for talent will continue until the next generation of engineers in AI solutions becomes available.


Microsoft Adding AI Startup Hexadite to its Security Arsenal

Microsoft is acquiring Hexadite, a 2014 startup that delivers agentless, automatic incident investigation and remediation solutions. Terms of the agreement were not disclosed, but the price is said to be in the $100 million range, a respectable bounty.

Hexadite’s Automated Incident Response Solution (AIRS™) takes an expert-system approach to security, modeling the investigative processes used by cyber analysts and driving them with an AI engine. The engine investigates every alert generated by existing security solutions and can take remediation actions when a threat is verified, without human intervention. It can also operate in a semi-automated mode that requires analyst approval. Rapid deployment and automatic functionality let customer organizations get up to speed and customize responses without requiring coding skills.
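
As an illustration of the pattern AIRS automates, here is a minimal sketch of an alert-triage loop (all names and rules here are hypothetical, not Hexadite’s actual API): every alert is investigated, verified threats are remediated automatically, and the semi-automated mode queues actions for analyst approval instead.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    BENIGN = "benign"
    MALICIOUS = "malicious"

@dataclass
class Alert:
    source: str     # e.g. the SIEM or endpoint product that raised it
    indicator: str  # file hash, IP address, etc.
    severity: int   # 1 (low) .. 10 (critical)

def investigate(alert: Alert) -> Verdict:
    """Stand-in for the AI investigation step: correlate the alert's
    indicator against threat intelligence and host telemetry."""
    known_bad = {"44d88612fea8a8f36de82e1278abb02f"}  # toy threat-intel set
    return Verdict.MALICIOUS if alert.indicator in known_bad else Verdict.BENIGN

def triage(alert: Alert, semi_automated: bool = False) -> str:
    verdict = investigate(alert)
    if verdict is Verdict.BENIGN:
        return "closed: no action needed"
    if semi_automated:
        return "queued for analyst approval"            # semi-automated mode
    return "remediated: host isolated, hash blocked"    # fully automatic mode

alerts = [Alert("siem", "44d88612fea8a8f36de82e1278abb02f", 9),
          Alert("endpoint", "aabbccdd00112233445566778899aabb", 3)]
for a in alerts:
    print(a.indicator[:8], "->", triage(a, semi_automated=(a.severity >= 8)))
```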

The addition of AI tools to every aspect of security is inevitable, particularly as hackers incorporate them into their own tools and methods. We looked at some of these issues earlier in the video collection, AI and Risk Management: The Video, as well as in The Internet of Intelligent Things and Your Security.

Microsoft’s take on the acquisition:

“Our vision is to deliver a new generation of security capabilities that helps our customers protect, detect and respond to the constantly evolving and ever-changing cyberthreat landscape,” said Terry Myerson, executive vice president, Windows and Devices Group, Microsoft. “Hexadite’s technology and talent will augment our existing capabilities and enable our ability to add new tools and services to Microsoft’s robust enterprise security offerings.”

According to Hexadite:

“This serves as validation of all the team has done to create something truly amazing in the market. It also means that the technology is going to be used worldwide by hundreds of millions of people as part of Microsoft’s security portfolio.”

According to the company, the name “Hexadite” is a neologism in which “Hexa” refers to the “sixth sense” of AIRS, which automatically understands what to do about each cyber-alert received, and “DITE” stands for “dynamic investigation and threat elimination.”

Hexadite headquarters are in Boston, with a team of researchers in Israel. Following the close of the deal and after a period of integration, Hexadite will be fully absorbed into Microsoft as part of the Windows and Devices Group.

This is also Microsoft’s fourth recent acquisition of a security company with research facilities in Israel. Israel has always been a hotspot for security technologies, and spreading AI research capabilities around the globe seems to be a growing phenomenon.

Companies need to understand that developments in AI will affect both the magnitude of the cybersecurity threat, and its response.




SoftBank Buys Boston Dynamics, Promises More Robots

Japanese technology company SoftBank is acquiring acclaimed robot maker Boston Dynamics from Alphabet (Google), along with Japanese bipedal robotics company Schaft; both were acquired by Google in 2013. This is part of SoftBank’s move into the robotics space, exemplified by the SoftBank Robotics “Pepper” humanoid robot, whose roles have recently been expanding beyond the consumer space. SoftBank’s robot lineup now includes Pepper, Boston Dynamics’ BigDog and Handle, Schaft’s S-One and related projects, and Fetch Robotics’ warehouse fulfillment robots, making it a versatile player in this space with multi-mission capability.

According to the press release, Masayoshi Son, Chairman and CEO of SoftBank Group, said:

“Today, there are many issues we still cannot solve by ourselves with human capabilities. Smart robotics are going to be a key driver of the next stage of the Information Revolution, and Marc and his team at Boston Dynamics are the clear technology leaders in advanced dynamic robots. I am thrilled to welcome them to the SoftBank family and look forward to supporting them as they continue to advance the field of robotics and explore applications that can help make life easier, safer and more fulfilling.”

Boston Dynamics is known for its DARPA-funded, military-oriented robots, including BigDog, Handle, and the humanoid robot Atlas. It has been struggling to find a market for its products at this stage of development, and Alphabet has been trying to sell the operation since last year.

SoftBank has a wide range of related interests and commitments in this area, including advanced telecommunications, internet services, AI, smart robotics, IoT, clean energy technologies, and ARM processors. It entered the robotics market through its acquisition of Aldebaran Robotics in 2012. Aldebaran, creator of the Nao and Romeo robots, was renamed SoftBank Robotics and created the social robot Pepper, which is being tested in a growing range of consumer and business settings.

SoftBank’s robotics ventures are centered in its Tokyo-based subsidiary SoftBank Robotics Holding Corp, established in 2014, with offices in Japan, France, the U.S., and China. SoftBank Robotics has more than 500 employees working in Paris, Tokyo, San Francisco, Boston, and Shanghai. Its robots are used in more than 70 countries for research, education, retail, healthcare, tourism, hospitality, and entertainment.

Video: The CEO of Boston Dynamics Explains Each Robot in the Fleet (jurvetson)

Video: SoftBank Robotics Meet Pepper (SoftBank Robotics America)




Cisco Adds MindMeld for Conversational Assistance

Cisco is buying AI digital assistant startup MindMeld as part of a string of May acquisitions. Cisco will be using the technology to improve its collaboration suite by adding conversational interfaces, beginning with Cisco Spark.

MindMeld is a relatively small company, but it is a recognized player in the conversational interface area. It provides a flexible platform called “Deep-Domain Conversational AI,” which can be used to add knowledge and expertise around any custom content domain. This allows companies to amplify the capabilities of natural language conversational interfaces. MindMeld is currently used by Spotify and Samsung, among others.

MindMeld, founded in 2014 by Tim Tuttle, a former AI researcher at MIT and Bell Labs, and Moninder Jheeta, brings 10 patents in its domain.

Capabilities included in the MindMeld offering are broad-vocabulary natural language understanding, question answering across any knowledge graph, dialog management and dialog state tracking, and large-scale training data generation and management.
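
To make “dialog management and dialog state tracking” concrete, here is a minimal, hypothetical sketch of the pattern (not MindMeld’s SDK): each user turn is classified against a small domain vocabulary, entities fill slots in a running dialog state, and missing slots trigger follow-up questions.

```python
import re

# Toy domain vocabulary for a music app (illustrative only).
INTENTS = {
    "play_music": re.compile(r"\bplay\b"),
    "pause": re.compile(r"\b(pause|stop)\b"),
}
ARTISTS = {"miles davis", "radiohead", "nina simone"}

def parse_turn(utterance: str) -> dict:
    """Classify the intent and extract any artist entity from one turn."""
    text = utterance.lower()
    intent = next((name for name, pat in INTENTS.items() if pat.search(text)), None)
    artist = next((a for a in ARTISTS if a in text), None)
    return {"intent": intent, "artist": artist}

def update_state(state: dict, turn: dict) -> str:
    """Track dialog state across turns; ask follow-ups for missing slots."""
    if turn["intent"]:
        state["intent"] = turn["intent"]
    if turn["artist"]:
        state["artist"] = turn["artist"]
    if state.get("intent") == "play_music" and not state.get("artist"):
        return "Which artist would you like to hear?"
    if state.get("intent") == "play_music":
        return f"Playing {state['artist'].title()}."
    if state.get("intent") == "pause":
        return "Paused."
    return "Sorry, I didn't catch that."

state: dict = {}
for utterance in ["Play something", "Miles Davis please"]:
    print(update_state(state, parse_turn(utterance)))
# -> "Which artist would you like to hear?" then "Playing Miles Davis."
```

The second turn contains no intent keyword at all; only because the state persists across turns can the system resolve “Miles Davis please” into the pending play request, which is the essence of dialog state tracking.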

The value proposition of MindMeld’s offering has always been enabling conversations within specific knowledge domains, a notable weakness of general-purpose assistants, and one that matters more as AI moves to respond to business needs.

According to Cisco’s head of M&A and venture investment, Rob Salvagno, writing on the Cisco blog,

I’m excited for the potential represented by the MindMeld team and their technology, coupled with Cisco’s market-leading collaboration portfolio, to enable us to create a user experience that is unlike anything that exists in the market today. Together, we will work to create the next generation collaboration experience. The MindMeld team will form the Cognitive Collaboration team and report into the IoT and Applications group under Jens Meggers, senior vice president and general manager.

Of course, the acquisition also brings significant AI skill sets, which will now be directed toward enhancing Cisco’s efforts. It is part of the continuing movement to find a place for digital assistants in business that will match their blossoming in the consumer realm (Digital Assistants Coming of Age with Alexa).




Google Adds a Parliament of Owls to its VR Team: Acquisition of Owlchemy

Google is moving a bit further into the VR space with its acquisition of “absurd” VR gaming company Owlchemy Labs. This could accelerate VR innovation, bringing extra attention to the user experience (UX) and a concentration of skills that could give Google an edge as VR and AR progress.

Owlchemy Labs began as a conventional games designer and then became one of the first companies to work with the Oculus Rift VR platform, releasing the VR game Job Simulator. Job Simulator now has over $3 million in sales, and the company has grown from a team of 4 in 2010 to a team of 23. According to the Owlchemy press release:

This means Owlchemy will continue building high quality VR content for platforms like the HTC Vive, Oculus Touch, and PlayStation VR. This means continuing to focus on hand interactions and high quality user experiences, like with Job Simulator. This means continuing our mission to build VR for everyone, and doing all of this as the same silly Owlchemy Labs you know and love.

Job Simulator was an overnight success with its particular focus on tracked head and hands to enhance the gaming experience. The VR game launched on HTC Vive, Oculus Rift with Touch, and PlayStation VR and won multiple awards for gameplay and interaction. Recently, the company pioneered a new way to show VR footage with its “Spectator Mode,” and it continues to improve its VR presentation.

Google’s blog announcement shows where the new acquisition fits into the Googleverse:

We care a lot about building and investing in compelling, high-quality, and interactive virtual reality experiences and have created many of our own—from YouTube, Street View, and Photos on Daydream to Google Earth VR and Tilt Brush. And, we work with partners and support developers and creators outside of Google to help bring their ideas to VR. …

Together, we’ll be working to create engaging, immersive games and developing new interaction models across many different platforms to continue bringing the best VR experiences to life. There is so much more to build and learn, so stay tuned!

Google has been actively pursuing VR for some time now, from before its almost accidental rollout of the Cardboard VR headset. Unlike many other operators in this space, it has the resources to explore a very broad array of VR issues. The company’s Daydream Labs, for example, is focusing upon social aspects of the VR user experience as well as content. Usability and controls for the VR experience are of particular importance in building new uses for these platforms, and this is an area in which Owlchemy excels.

As with AI, this acquisition also brings more VR experience and skills into Google. Competition includes Microsoft HoloLens and Facebook Oculus, among others. As has always been the case, the next generation of technologies is being actively explored in gaming before being put to practical use. The intersection of AI with VR (On the Intersection of AI and Augmented Reality) will become particularly important as a means of building and enhancing virtual and augmented realities, and of interacting with their components.

The Duke of Wellington is famously attributed with the quotation, “The battle of Waterloo was won on the playing fields of Eton.”  It is quite possible that the battle of Consumer and Enterprise VR will be won on the gaming headsets of FAMGA (Facebook, Apple, Microsoft, Google, Amazon).



Baidu Adds xPerception to its AI/VR Stockpile

Leading Chinese Internet search provider Baidu is acquiring US startup xPerception, an AI-based visual perception software and hardware provider for robotics and virtual reality (VR). The deal provides important talent for the company’s moves into AI: xPerception co-founders Dr. Bao Yingze and Dr. Chen Mingyu were both key engineers at AR startup Magic Leap. The xPerception team will move to the US and Beijing offices of Baidu Research and continue developing xPerception’s Simultaneous Localization and Mapping (SLAM) technology.

SLAM is critical to visual perception in a variety of AI and VR roles, including 3D vision, robotics, drones, and autonomous driving. The base of xPerception’s technology is a 3D visual-inertial camera for mobile platforms, with a sophisticated SDK that enables pose tracking, low-latency sensor fusion, and object recognition. This permits self-localization, 3D structure reconstruction, and path planning in new environments. These technologies linking AI and VR are opening new opportunities, as discussed in the recent blog On the Intersection of AI and Augmented Reality.
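
A toy sketch of the low-latency sensor fusion idea behind visual-inertial pose tracking (a 1-D complementary filter, assumed for illustration and not xPerception’s SDK): fast but drifting inertial integration is periodically corrected by slower, drift-free visual position fixes.

```python
# Toy 1-D visual-inertial fusion: the IMU integrates acceleration at a high
# rate (and drifts), while sparse visual fixes pull the estimate back.
def fuse(imu_accels, visual_fixes, dt=0.01, alpha=0.9):
    """Complementary filter: trust the IMU short-term, vision long-term.
    visual_fixes maps step index -> absolute position measurement."""
    pos, vel = 0.0, 0.0
    trajectory = []
    for step, accel in enumerate(imu_accels):
        vel += accel * dt              # integrate IMU (fast, drifts)
        pos += vel * dt
        if step in visual_fixes:       # occasional visual pose fix
            pos = alpha * pos + (1 - alpha) * visual_fixes[step]
        trajectory.append(pos)
    return trajectory

# Constant true acceleration of 1 m/s^2, but a biased IMU reads 1.1 m/s^2;
# a visual fix every 25 steps reports the true position.
true_pos = [0.5 * 1.0 * ((i + 1) * 0.01) ** 2 for i in range(100)]
imu = [1.1] * 100                      # biased accelerometer readings
fixes = {i: true_pos[i] for i in range(0, 100, 25)}
est = fuse(imu, fixes)
print(f"final error with fusion: {abs(est[-1] - true_pos[-1]):.4f} m")
```

Production visual-inertial systems use far more sophisticated estimators (extended Kalman filters or nonlinear optimization over 6-DoF poses), but the division of labor is the same: inertial data for rate, vision for absolute correction.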

In addition to integration with Baidu’s AI and autonomous driving programs, the xPerception acquisition provides high-demand skills and helps to allay concerns over US regulations and immigration policies. There has recently been potential regulatory blockage of Chinese acquisitions, most notably claims that Alibaba Group’s $1.2 billion bid for U.S. firm MoneyGram International poses national security risks; U.S. lawmakers are requesting a review by the Committee on Foreign Investment in the United States. Meanwhile, Chinese internet firm LeEco has terminated a $2 billion bid for U.S. electronics firm Vizio due to regulatory issues. Baidu’s splitting of its AI research between Beijing and California provides assurance that changing US immigration policies will not overly affect research agendas, while providing access to US-based talent and skills networks.

Baidu has been active in AI for some years now, but finding talent in this area is difficult. High-profile chief scientist Andrew Ng left the company in March, making the addition of experienced AI/VR staff a priority. Baidu’s commitment to this area is further demonstrated by the Chinese government’s February selection of the company to set up a national AI lab. Baidu Research currently maintains four analytics and AI labs: the Silicon Valley AI Lab, the Institute of Deep Learning, the Big Data Lab, and the Augmented Reality Lab. According to Ng, Baidu’s AI group now includes about 1,300 people.

The search for AI talent is global, fueled by visions of integrating this rapidly developing technology with an increasing range of business and technology processes. Autonomous vehicles continue to be a driving force. Meanwhile, globalization issues may drive companies to hedge their bets, particularly in China and India. The last large acquisition, Intel’s purchase of Mobileye, split research between Silicon Valley and Israel. (Car Wars: Intel Bags Mobileye).




Car Wars: Intel Bags Mobileye

Intel is acquiring autonomous driving company Mobileye in a deal valued at $15.3 billion, expected to close toward the end of this year. The acquisition of the Israeli firm, whose technology is used by 27 companies in the auto industry, raises a number of interesting issues in the self-driving vehicle technology race.

Intel has been pursuing autonomous vehicle technology, but this initiative–one of the 10 largest acquisitions in the tech industry–brings it front and center. The key to Mobileye’s autonomous solution lies in its silicon. Mobileye has developed its EyeQ® family of system-on-chip (SoC) devices, which support complex and computationally intense vision processing while still maintaining low power consumption. Mobileye is currently developing its fifth-generation chip, the EyeQ5, to act as the visual central computer for fully autonomous self-driving vehicles expected to appear in 2020. The EyeQ chips employ proprietary computing cores optimized for computer vision, signal processing, and machine learning tasks, including deep neural networks. These cores are designed specifically to address the needs of Advanced Driver Assistance Systems (ADAS).
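
For a sense of the workload these cores accelerate, here is an illustrative (and deliberately naive) Python version of the basic vision primitive, a 2-D convolution, which EyeQ-class silicon executes thousands of times per frame within a tight power budget; nothing here is EyeQ-specific.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive single-channel 2-D convolution (valid padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge (Sobel) filter, a typical first stage in lane detection.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

frame = np.zeros((8, 8))
frame[:, 4:] = 1.0       # synthetic frame: bright right half, like a lane edge
edges = conv2d(frame, sobel_x)
print(edges[3])          # strong response where the brightness steps
```

The point of dedicated ADAS silicon is that this inner loop, which is hopelessly slow in interpreted Python, runs across megapixel frames at video rate and at a few watts when baked into specialized cores.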

Intel’s traditional role has been to provide the silicon building blocks of technology, and it is moving forcefully in this direction, partly as a result of growing competition in embedded machine learning from the likes of Nvidia and Qualcomm, both of which are also moving into the autonomous vehicle area. Self-driving cars are the nexus of development in machine learning due to the huge profit expectations of the automobile, transportation, and logistics industries. The evolution of autonomous vehicles, particularly with deep learning capabilities in silicon, will create additional pressure on artificial intelligence skills across all industry sectors, while also creating an explosion in innovation and speeding development of these systems.

Intel intends to form an autonomous driving unit combining its current Automated Driving Group (ADG) and Mobileye. The group will be headquartered in Israel and led by Mobileye co-founder Amnon Shashua, currently Mobileye’s Chairman and CTO and a professor at Hebrew University.

According to the combined press release:

This acquisition will combine the best-in-class technologies from both companies, spanning connectivity, computer vision, data center, sensor fusion, high-performance computing, localization and mapping, machine learning and artificial intelligence. Together with partners and customers, Intel and Mobileye expect to deliver driving solutions that will transform the automotive industry.

The new organization will support both companies’ existing production programs and build upon the large number of relationships that Mobileye maintains with OEMs, automobile industry tier 1 suppliers, and semiconductor partners.

Intel’s interests in this deal are likely to be diverse. Among the potential benefits are:

  • Potentially four terabytes of data per hour to be processed, creating large-scale opportunities for Intel’s high-end Xeon processors and Mobileye’s latest generation of SoCs.
  • Moving to Israel, where Intel already has a significant presence, potentially isolates its research and testing from the competitive hotbed of Silicon Valley, shielding employees from poaching. It also avoids current immigration issues.
  • Additional competitive advantages within AI and embedded deep learning, which are the next generation targets of Intel’s silicon competition.

It is worth noting that this is a general boost to autonomous vehicles that will inevitably lead to greater concentration of resources in this field. Although relying upon a common supplier of autonomous systems makes sense economically, it also reduces competitive advantages.

The number of companies involved in this sector continues to grow as the implications stretch out through the entire transportation-related sector. We have covered a number of these systems in recent blogs here (Car Wars: DiDi Chuxing Roars into the Valley with 400 Million Users, Car Wars: Ford Adds Billion Dollar Investment Acquisition to its AI Arsenal, All Things Need Autonomy: TomTom Grabs Autonomos, Uber Doubles Down on AI with Geometric Acquisition, Qualcomm Buys NXP for IoT and Cars). The net result will be to create a huge rush for talent in the machine learning space, as well as in all of the areas related to integration with automobile systems. This will increase the speed of evolution for embedded AI, which will filter rapidly into other areas of business and process enablement, though impeded by the availability of talent.


Car Wars: DiDi Chuxing Roars into the Valley with 400 Million Users

Chinese ride-hailing and transportation company DiDi Chuxing has just opened its own AI vehicle lab in Silicon Valley. DiDi Labs in Mountain View, CA, will focus on AI-based security and intelligent driving technologies, hoping to grab talent in the region as the battle for AI skills continues (see Car Wars: Ford Adds Billion Dollar Investment Acquisition to its AI Arsenal).

According to Cheng Wei, the company’s founder and CEO, “As we strive to bring better services to broader communities, DiDi’s international vision now extends to building the best-in-class international research network, advancing the global transportation revolution by leveraging innovative resources. The launch of DiDi Labs is a landmark in creating this global nexus of innovation.”

Away from the spotlight, DiDi has developed a huge amount of experience in China, which provides a substantial data source for its AI efforts. DiDi acquired Uber China in August 2016. Unlike many rivals, the company’s size and reach are truly impressive. It offers a full range of technology-based services for nearly 400 million users across more than 400 Chinese cities. These include taxi hailing, private car hailing, Hitch (social ride-sharing), DiDi Chauffeur, DiDi Bus, DiDi Minibus, DiDi Test Drive, DiDi Car Rental, and DiDi Enterprise Solutions. As many as 20 million rides were completed on DiDi’s platform daily in October 2016, making DiDi the world’s second largest online transaction platform, next only to China’s Alibaba-owned shopping website, Taobao.

It is also important to note that DiDi is estimated to be worth about US$35 billion and that its investors reportedly include all three of China’s Internet giants—Alibaba, Tencent, and Baidu—and Apple. The current initiative is centered around security issues, but location and stated intent make the acquisition of Valley AI talent a clear objective. Uber’s 2015 hiring raid on Carnegie Mellon’s robotics lab made every company in the automotive intelligence sector nervous about talent. DiDi has recently had a range of regulatory issues arise in its Chinese market, and the current move may deflect some of that, while also playing into the Chinese government’s current push to develop innovation.

DiDi Labs will be led by Dr. Fengmin Gong, Vice President of DiDi Research Institute, with a number of prominent researchers including top automobile security expert Charlie Miller, known as one of the scientists who famously hacked a Jeep Cherokee in 2015. Current projects span the areas of cloud-based security, deep learning, human-machine interaction, computer vision and imaging, as well as intelligent driving technologies.

DiDi has been involved with autonomous vehicles for some time, and this move mainly surfaces that involvement. DiDi previously established the DiDi Research Institute to focus on AI technologies including machine learning and computer vision. It also has ongoing research in big data analytics based on the 70TB of data, 9 billion routing requests, and 13 billion location points provided by its platforms.

In founding the new lab, the company also launched the DiDi-Udacity Self-Driving Car Challenge, an open-source self-driving competition in which player teams are invited to create an Automated Safety and Awareness Processing Stack (ASAPS) to improve general safety metrics for human and intelligent driving scenarios based on real data.

According to DiDi CTO Bob Zhang, “In the next decade, DiDi will play a leading role in innovation at three layers: optimization of transportation infrastructure, introduction of new energy vehicles and intelligent driving systems, and a shift in human-automotive relationship from ownership to shared access.”


Amazon Gives Alexa a PhD Boost

Amazon has just announced the Alexa Fund Fellowship program, a year-long doctoral program at four universities to take on technology problems that can be solved with Amazon’s NLP home assistant Alexa. Alexa has become increasingly popular in recent years through its open provision of cloud-based “skills” that enable people to interact with devices through voice commands in an intuitive way. This has created numerous alliances and connections with devices across the IoT, as we have discussed in a previous post, Digital Assistants Coming of Age with Alexa.
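
Under the hood, an Alexa skill is a cloud endpoint that receives a JSON request for each utterance and returns JSON telling Alexa what to say. A minimal sketch of that exchange (the intent name here is a hypothetical example, though the request/response shape follows the Alexa Skills Kit JSON interface):

```python
import json

def handle_request(event: dict) -> dict:
    """Minimal Alexa-style skill handler: map an intent to spoken text."""
    req = event.get("request", {})
    if req.get("type") == "IntentRequest" and \
            req.get("intent", {}).get("name") == "GetGreetingIntent":
        speech = "Hello from a custom skill."
    else:
        speech = "Sorry, I can't help with that yet."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Simulate the JSON Alexa would POST to the skill for one utterance.
sample_event = {"request": {"type": "IntentRequest",
                            "intent": {"name": "GetGreetingIntent"}}}
print(json.dumps(handle_request(sample_event), indent=2))
```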

Amazon is a strong competitor in a tough field that includes various overlaps with Google Home, Microsoft Cortana, Apple Siri, IBM Watson, and others. Competition is not only about market share and market direction; it is also about available skills, alliances, and the formation of new research streams that focus upon favorable technology.

The Fellowship Program

According to Amazon’s developer site:

The Amazon Alexa Fund is establishing an Alexa Fund Fellowship program to support academic institutions leading the charge in the fields of text-to-speech (TTS), natural language understanding (NLU), automatic speech recognition (ASR), conversational artificial intelligence (AI), and other related engineering fields. The goal of the Alexa Fund Fellowship program is to educate students about voice technology and empower them to create the next big thing.

The four participating universities are Carnegie Mellon, the University of Waterloo, the University of Southern California (USC), and Johns Hopkins. The program provides cash funding, Alexa-enabled devices, and mentorship from the company’s Alexa Science teams in the development of a graduate or undergraduate class curriculum related to Alexa technology.

The first two announced programs are from Carnegie Mellon and Waterloo. Carnegie Mellon’s Dialog Systems course “will teach participants how to implement a complete spoken language system while providing opportunities to explore research topics of interest in the context of a functioning system.” Waterloo’s Fundamentals of Computational Intelligence will “introduce students to novel approaches for computational intelligence based techniques including: knowledge-based reasoning, expert systems, fuzzy inferencing and connectionist modeling based on artificial neural networks.”

It is interesting to note that the funded courses might include broader agendas, furthering general progress in AI–but in an Alexa context.

The Investment Context

This Fellowship is part of a spate of recent R&D, startup, and academic investments in Alexa. It comes on the heels of Amazon’s September announcement of the Alexa Prize, a competition between 12 selected university teams to build a socialbot that can converse coherently and engagingly with humans on popular topics for 20 minutes, using Alexa technology.

The Alexa Fund itself was started in June 2015, with goals described on its landing page:

The Alexa Fund provides up to $100 million in venture capital funding to fuel voice technology innovation. We believe experiences designed around the human voice will fundamentally improve the way people use technology. Since introducing Alexa-enabled devices like the Amazon Echo, we’ve heard from developers, device-makers, and companies of all sizes that want to innovate with voice. Whether that’s creating new Alexa capabilities with the Alexa Skills Kit (ASK), building devices that use Alexa for new and novel voice experiences using the Alexa Voice Service (AVS), inventing core voice-enabling technology, or developing a product or service that can expand the boundaries of voice technology, we’d love to talk to you.

In past years, the Fund has focused upon entrepreneurship and has investments at various funding stages in 23 companies. Although many amounts are undisclosed, the top recent known investments have been in Thalmic Labs, a wearables company; ecobee, a provider of Wi-Fi-enabled smart thermostats; and Ring, a wireless video doorbell company.

Recently, the Fund also created the Alexa Accelerator initiative in cooperation with the global entrepreneurial ecosystem company Techstars. The Accelerator is a 13-week startup course for 10 companies, running from July to September 2017. According to Fund spokesperson Rodrigo Prudencio:

We will seek out companies tackling hard problems in a variety of domains—consumer, productivity, enterprise, entertainment, health and wellness, travel—that are interested in making an Alexa integration a priority. We’ll also look for companies that are building enabling technology such as natural language understanding (NLU) and better hardware designs that can extend or add to Alexa’s capabilities.

They are Not Alone

Of course, Amazon is not unique in creating and funding programs that support its AI initiatives within both the academic and entrepreneurial communities. There is a lot of funding available, and this will support progress in these technologies. Such initiatives also respond to the need to educate AI specialists in a hurry, and they may provide an edge for one company’s interpretation of the AI and robotics future.




Leading EU Bank Forges Ahead with AI Investment

A leading EU bank, Banco Santander SA, has just invested in AI companies Personetics Technologies and Gridspace, highlighting recent moves in the financial services industry to embrace AI across a growing swathe of operations after initial reluctance. The size of the investments is as yet unknown but promises to be fairly large; the deals are also notable because both companies, though recently established, are global operations with major industry clients.

The investments come through Santander’s venture arm, which has joined a relatively small number of investment ventures focusing upon bringing innovation to financial technology (fintech). The focus is upon startups that challenge traditional financial institutions and, in doing so, bring AI to the financial industry as a whole.

According to the company, Santander InnoVentures is based in London and maintains a global reach. It builds on a philosophy of collaboration and partnership with small and start-up companies.

We launched our $100 million fund in July 2014 to get closer to the wave of disruptive innovation in the FinTech space. We aim to support the digital revolution to make sure Santander customers around the world benefit from the latest know-how and innovations across the Banking Group’s geographies.

The fund is part of the Santander Group’s broader innovation agenda, in which we help FinTech companies grow from a very early stage (i.e. seed) to a more mature stage.

While the two new investments are focused specifically on customer service, they open the way for increased involvement in more sophisticated AI capable of operating across a broad spectrum of financial services. Personetics creates finance-focused “chatbots” that can respond to customer questions through social media, while Gridspace is used to monitor call center conversations.

Gridspace is a collaboration between SRI International (the developer of Siri) and a multidisciplinary engineering team.

From the Gridspace web site:

Gridspace is the leading platform for Conversational Intelligence. It enables companies to analyze and operationalize the conversational speech and text inputs others can’t. It provides everything you need to make your company more aware, customer-friendly, profitable, and secure. Get communications that talk back.

Personetics presents a more fintech-specific profile. From the Personetics web site:

Personetics enables the world’s leading financial institutions to transform the way they engage and serve their customers in the digital age. We bring a unique combination of financial services domain expertise, tightly embedded into a cognitive application framework using AI, predictive analytics, and Machine Learning technologies to deliver a personalized experience that help customers better manage their financial lives.

Combining built-in financial proficiency with advanced cognitive capabilities, our solutions enable financial institutions to understand and anticipate individual customer behavior and needs, communicate in a conversational and personalized manner, and continuously learn and improves from each interaction.
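
To ground the finance “chatbot” idea described above in something concrete, here is a toy sketch of the pattern (entirely hypothetical rules and data, not Personetics’ platform): keyword intents route customer questions to account data, with a fallback handoff to a human agent.

```python
# Toy finance chatbot: keyword intents over mock account data, with a
# human handoff fallback (illustrative only, not Personetics' platform).
ACCOUNT = {"balance": 1250.40, "last_payment": ("electric bill", 82.15)}

RULES = [
    (("balance", "how much"),
     lambda: f"Your balance is ${ACCOUNT['balance']:.2f}."),
    (("last payment", "recent payment"),
     lambda: "Your last payment was ${1:.2f} for the {0}.".format(
         *ACCOUNT["last_payment"])),
]

def reply(question: str) -> str:
    q = question.lower()
    for keywords, answer in RULES:
        if any(k in q for k in keywords):
            return answer()
    return "Let me connect you with an agent."   # fallback: human handoff

for q in ["How much is in my account?",
          "What was my last payment?",
          "Mortgage rates?"]:
    print(reply(q))
```

Production systems replace the keyword rules with trained NLU models and the mock dictionary with live banking APIs, but the routing structure, intent matching backed by a human-handoff fallback, is the same.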