All Things Need Autonomy: TomTom Grabs Autonomos

Noted location and mapping service TomTom has acquired Autonomos, a Berlin-based autonomous driving startup. TomTom has been expanding its range of activities for several years, not only in the autonomous vehicle area, but also in a variety of other technologies related to geolocation. These have included navigation devices, smart watches and cameras for consumers. But the Autonomos acquisition shows that the company is becoming increasingly serious about the coming range of self-driving vehicles.

Autonomos is a company that provides R&D consultancy services for automated vehicle assistance systems. Its capabilities include a full demonstration-level autonomous driving software stack, 3D sensor technology, and digital image processing.

The company was established in 2012 as a spin-off from “AutoNOMOS Labs,” a research project at the Freie Universität Berlin funded by the German Federal Ministry of Education and Research (BMBF). Its prototype “MadeInGermany” vehicle has been officially testing autonomous driving on the streets of Berlin since 2011. The company participated in a 2,400 km automated driving challenge in Mexico, operating in cooperation with its research partner, the Freie Universität Berlin. Its vehicle drove about 2,250 km on highways and 150 km through cities, performing well under difficult road conditions that demonstrated its capabilities.

The addition of Autonomos is expected to advance TomTom’s map-based products for autonomous driving. The Autonomos in-house autonomous driving stack will enable TomTom to serve customers with products such as its HD map and RoadDNA localization technology, as well as its navigation, traffic and other cloud services.
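
To make the localization idea concrete, the short Python sketch below shows the general kind of map matching a technology like RoadDNA performs: comparing a brief, live roadside depth signature against a stored depth profile of the road to recover the vehicle’s longitudinal position. This is an illustrative toy on invented data, not TomTom’s actual algorithm.

    # Illustrative sketch only; not TomTom's actual RoadDNA algorithm.
    # Idea: localize along a road by sliding a short live roadside depth
    # signature over a longer stored depth profile of the same road.
    import numpy as np

    def localize(stored_profile, live_signature):
        """Return the offset (in samples) where the live signature
        best matches the stored roadside depth profile."""
        n = len(live_signature)
        best_offset, best_score = 0, -np.inf
        live = live_signature - live_signature.mean()
        for offset in range(len(stored_profile) - n + 1):
            window = stored_profile[offset:offset + n]
            # Normalized correlation, so absolute depth scale does not matter.
            score = np.dot(window - window.mean(), live)
            score /= (window.std() * live_signature.std() * n + 1e-9)
            if score > best_score:
                best_offset, best_score = offset, score
        return best_offset

    # Toy usage: a stored 1,000-sample profile and a noisy 50-sample live slice.
    rng = np.random.default_rng(0)
    profile = rng.normal(size=1000).cumsum()
    observed = profile[400:450] + rng.normal(scale=0.05, size=50)
    print(localize(profile, observed))   # prints 400, or very close

In a real vehicle the matching would run continuously and be fused with GPS and odometry; the point here is only that a compact stored roadside signature is enough to pin down position.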

“This is an important development for TomTom as it will help us to continue to strengthen our capabilities for the future of driving and expand our knowledge and expertise,” says Harold Goddijn, CEO and co-founder of TomTom, in the press release. “With this deal we are further positioning ourselves as one of the leaders in autonomous driving.”

There are a number of interesting points to this announcement. It is yet another acquisition of an AI startup to obtain research and talent. It also demonstrates how autonomous driving and autonomous vehicles are becoming increasingly important to every company within the vehicle and transportation industries. We have seen the high-profile efforts of Uber, Tesla, Apple and the major automobile companies to gain a foothold in this area.

As self-driving automobiles and transportation draw ever closer, it is clear that anyone who wishes to participate in the vehicle industry in future will need access to this technology, and qualified staff capable of innovating in this area. The data sciences required by autonomous vehicles are of a different caliber from those required by internal analytic functions within a corporation; a wider variety of skills and an emphasis upon machine learning and deep learning techniques are absolutely critical. These skills are expected to become increasingly difficult to find outside of the acquisition of startups such as this one.

Another factor to note in this acquisition is that TomTom has been moving gradually beyond mapping and automobile applications into a variety of other product areas that involve knowledge of location, including health monitoring products. As digitization proceeds, it becomes increasingly clear that advanced skills acquired in one area translate readily into another. Autonomos itself has developed a smart stereo camera to aid its automotive techniques; this camera can be used for other purposes and can be linked to other initiatives in a geolocation portfolio. As TomTom continues to build out additional products based upon its mapping and geolocation capabilities, Autonomos is likely to play a significant role, not only through the products it has produced, but also through its 20 AI employees, who will now be available for products across TomTom’s universe.

For companies, acquisitions like this point to the need to be aware of this expanding environment. In addition to AI in analytics processes and in general business process automation, AI will continue to spread into unforeseeable new areas. Every company is seeking an innovation that will provide an ultimate advantage within its niche; the advent of a truly autonomous process, particularly one related to difficult control issues in physical goods movement, manufacturing, or transportation, is likely to become increasingly important.


Digital Transformation: The Video

Digital Transformation is critical to everything about the future of IT; and, increasingly, of everything else. By converting processes, work products, goods and services into digital formats, it is possible to transform them and manipulate them through software. The manipulation processes are, moreover, similar across the entire universe of digital data. Digitization means information and plans can be communicated instantly; processes can be transformed swiftly; and the whole spectrum of analytics and AI can be applied to the data stream.

Companies need to transform their analog processes into digital equivalents just to catch up with the present; integrating these processes into current software and services will permit them to competitively reach the future.

Here we have a number of videos focusing upon Digital Transformation, along with the descriptions provided by their publishers. These are all available on YouTube under their standard licenses.

Davos 2016 – The Digital Transformation of Industries (World Economic Forum)

What big bets are companies making in the digital transformation of their business models and organizational structures?

On the agenda:

  • Defining digital transformation
  • Making the right investment decisions
  • Designing a digital culture

Speakers:

  • Marc R. Benioff, Chairman and Chief Executive Officer, Salesforce, USA.
  • Klaus Kleinfeld, Chairman and Chief Executive Officer, Alcoa, USA.
  • Jean-Pascal Tricoire, Chairman and Chief Executive Officer, Schneider Electric, France.
  • Bernard J. Tyson, Chairman and Chief Executive Officer, Kaiser Permanente, USA.
  • Meg Whitman, President and Chief Executive Officer, Hewlett Packard Enterprise, USA.

Moderated by Rich Lesser, Global Chief Executive Officer and President, Boston Consulting Group, USA.

(Creative Commons license; Published on Jan 29, 2016)

Digital Transformation – The Business World of Tomorrow (Detecon International)

Smart business networks, sensors, the Internet of Things, or teamwork which is distributed solely as virtual assignments – many of these innovations will result in substantial changes to business strategies as well. What has not changed, however, is the extreme complexity in the design of digital business models resulting from the frequent lack of transparency concerning concrete operational consequences in the corporate business units.

What capabilities does a company require for the realization of digitalized strategies and processes? What steps are necessary to become “Leading Digital”? And is all of this work even worth the effort? (Published on Jan 22, 2014)

What Digital Transformation Means for Business (MIT Sloan Management Review)

An all-star panel discusses digital transformation at Intel, Zipcar — and beyond. Panelists include Kim Stevenson (Intel Corporation), Mark Norman (Zipcar), Didier Bonnet (Capgemini Consulting), and Andrew McAfee (MIT Center for Digital Business).

(Published on Jun 28, 2014)

Why Every Leader Should Care About Disruption (McKinsey & Company)

Digitization, automation, and other advances are transforming industries, labor markets, and the global economy. In this interview, MIT’s Andrew McAfee and McKinsey’s James Manyika discuss how executives and policy makers can respond.

The disruptive impact of technology is the topic of a McKinsey-hosted discussion among business leaders, policy makers, and researchers at this year’s meeting of the World Economic Forum, in Davos, Switzerland. In this video, two session participants preview the critical issues that will be discussed, including the impact of digitization and automation on labor markets and how companies can adapt in a world of rapid technological change.

Text from http://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/why-every-leader-should-care-about-digitization-and-disruptive-innovation

(Published on Jan 4, 2015)

Leading Digital Transformation Now – No Matter What Business You’re In (Capgemini Group)

In this keynote session recorded at Oracle OpenWorld 2014, Dr. Didier Bonnet, Capgemini Consulting’s global head of digital transformation and coauthor (with MIT’s George Westerman and Andrew McAfee) of the upcoming book “Leading Digital,” highlights how large companies in traditional industries—from finance to manufacturing to pharmaceuticals—are using digital to gain strategic advantage.

Didier also discusses the principles and practices that lead to successful digital transformation based on a two-part framework: where to invest in digital capabilities, and how to lead the transformation. (Published on Sep 30, 2014)


Microsoft Acquires Maluuba for Deep Learning and Talent

Microsoft has announced the acquisition of deep learning company Maluuba, a Canadian startup that focuses upon a comprehensive view of artificial intelligence and, particularly, the understanding of language. Maluuba’s role in Microsoft is likely to involve fortification of the Cortana digital assistant, as well as other voice-understanding initiatives.

In its blog post on the subject, Microsoft focused on Maluuba’s integration with its overall AI research efforts. The post, by Harry Shum, Executive Vice President of Microsoft’s Artificial Intelligence and Research Group, says:

“Maluuba’s impressive team is addressing some of the fundamental problems in language understanding by modeling some of the innate capabilities of the human brain, from memory and common sense reasoning to curiosity and decision making.”

In addition to Maluuba itself, the acquisition brings along a major deep learning figure from Montréal. Yoshua Bengio, a top expert in deep learning and head of the Montreal Institute for Learning Algorithms, is an adviser to Maluuba and will continue in an advisory capacity, though the details of his position with Microsoft remain unknown. Notably, Bengio has played a central role in making Montréal a center of deep learning.

Maluuba was founded in 2011 by classmates Sam Pasupalak and Kaheer Suleman from the University of Waterloo, in Canada. The name came from Waterloo professor Peter A. Buhr, who used it as a made-up name for fictitious programming languages.

The Acquisition

For Maluuba, the acquisition will make it possible to improve its AI capabilities generally and to incorporate a range of related technologies. According to the co-founders’ announcement:

“So far, our team has focused on the areas of machine reading comprehension, dialogue understanding, and general (human) intelligence capabilities such as memory, common-sense reasoning, and information seeking behavior. Our early research achievements in these domains accelerated our need to scale our team rapidly; it was apparent that we needed to bolster our work with significant resources to advance towards solving artificial general intelligence.”

Maluuba’s website also offers a revealing insight into its intended direction:

“We finally saw a great opportunity to apply Deep Learning and Reinforcement Learning techniques to solve fundamental problems in language understanding, with the vision of creating a truly literate machine – one that could actually read, comprehend, synthesize, infer and make logical decisions like humans. This meant we had to heavily invest in research, therefore we started our Research lab in Montréal in late 2015 (in addition to our awesome engineering team in Waterloo). Our research lab, located at the epicentre of Deep Learning, is focused on advancing the state-of-the-art in deep learning for human language understanding.”

How this Helps Microsoft

The acquisition of Maluuba continues Microsoft’s activity and interest in the AI sphere since it created the Artificial Intelligence and Research Group last fall.

The acquisition of Maluuba demonstrates the growing AI focus upon understanding and acting upon commands, a capacity demonstrated by IBM’s Watson and, in a simplified but ubiquitous form, by Amazon’s Alexa (we discussed Alexa here). Language is a key to the current development of AI. It contains not only the basics of communication, which might be understood by Natural Language Processing (NLP), but also all of the nuances of human thought. Watson’s victory in the TV Jeopardy competition some years ago increased awareness of this. It is one thing to understand basic language; a second problem is to understand well enough to respond; and a third is to understand the question and then formulate a response that meets human criteria for a sufficient answer.
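
As a minimal sketch of those three stages, consider the toy Python question-answerer below. It is purely illustrative and unrelated to Maluuba’s actual deep learning methods: the hand-written parser stands in for basic understanding, the knowledge lookup for understanding well enough to respond, and the final formatting step for producing an answer that satisfies the asker.

    # Toy illustration of the three stages described above; not Maluuba's
    # or Microsoft's technology.
    FACTS = {("capital", "france"): "Paris", ("capital", "japan"): "Tokyo"}

    def parse(utterance):
        """Stage 1: crude 'understanding': extract a relation and an entity."""
        tokens = utterance.lower().rstrip("?").split()
        if "capital" in tokens and "of" in tokens:
            return ("capital", tokens[tokens.index("of") + 1])
        return None

    def answer(utterance):
        query = parse(utterance)
        if query is None:
            return "I don't understand the question."      # Stage 1 fails
        fact = FACTS.get(query)
        if fact is None:
            return "I understood, but I don't know that."  # Stage 2 fails
        # Stage 3: formulate a response meeting human criteria of an answer.
        return "The capital of %s is %s." % (query[1].title(), fact)

    print(answer("What is the capital of France?"))

Deep learning systems replace the hand-written parser and fact table with learned representations, but the same three hurdles remain.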

The practical impact of Microsoft’s acquisition is likely to be fairly minimal at present. This is part of Microsoft’s experimental program and will be incorporated in its research toward developing AI capabilities for the future.

Parsing the Pieces

As the AI revolution continues, we are seeing deep learning become ever more important in conjunction with other analytics tools to create a more comprehensive AI solution. What we mean by “artificial intelligence” continues to expand as we develop further understanding of the inner workings of both human and machine thought. Some of this is informed by technology, but it also encompasses and interacts with research initiatives in the biological sciences, as well as with the philosophy of intelligence from the past.

It is critical for major software companies to compete in this area. Research is often acquired through mergers with small startup companies such as Maluuba, and we can expect further mergers and acquisitions as the technology expands. AI will gradually assert itself in corporate processes, initially through the user interface and the capability to respond to input in natural language. It will move on toward greater comprehension of audio, graphic, and video formats, and then to the next step of greater understanding and an increasing ability to provide autonomous responses.


Digital Assistants Coming of Age with Alexa

Digital Assistants are bringing AI to a new level, and this will have consequences for how the IoT develops. At the recent CES show, there was one stand-out performer: Alexa. Amazon’s voice-actuated helper was available on a wide range of technology, demonstrating the possibilities of interactive AI-driven control. The popularity of Alexa highlights the increasing role of intelligent digital assistants and leads us to consider what we might expect of this area in future.

Smart digital assistants have been explored by the major software and IT companies; they include Watson-based assistants from IBM, Apple’s Siri, Microsoft’s Cortana, Google Now, and Amazon’s Alexa. This category has been developing for several years. But 2017 seems like it could be a watershed year, and part of the reason is Alexa.

One of the reasons that Alexa is popular among developers is that it can control many different home systems and is relatively device-agnostic. The Alexa platform is easily integrated into an array of products, offering different types of services, and wireless control of home devices. The net result is that it can function as an easily configured hub for home services, as well as providing immediate information from the network. Alexa can be used for home security, for home information, to turn off lighting, to adjust the thermostat, and to provide general purpose information–or, indeed, any function to which a third-party provider might like to add a wireless connection.
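
To illustrate why third-party integration is comparatively easy, here is a minimal Python sketch of the kind of AWS Lambda handler that sits behind a custom Alexa skill. It is a simplified rendering of the 2017-era Alexa Skills Kit request/response JSON; the intent name “SetThermostatIntent” and the slot “Temperature” are hypothetical examples, not part of any real product.

    # Simplified sketch of a custom Alexa skill handler (illustrative only).
    # The intent and slot names below are invented for this example.
    def lambda_handler(event, context):
        request = event["request"]
        if (request["type"] == "IntentRequest"
                and request["intent"]["name"] == "SetThermostatIntent"):
            degrees = request["intent"]["slots"]["Temperature"]["value"]
            # A real skill would call the thermostat vendor's cloud API here.
            speech = "Setting the thermostat to %s degrees." % degrees
        else:
            speech = "Sorry, I can't do that yet."
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech},
                "shouldEndSession": True,
            },
        }

The vendor supplies only this small handler plus a declaration of its intents; Alexa handles wake-word detection, speech recognition, and intent parsing, which is exactly what makes the platform so easy to plug into.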

This strategy has propelled Alexa forward, and at CES, Alexa was the voice of robots, the mouthpiece of smart systems, the driver of speakers and audio systems, the recipient and enabler of voice commands, and the dominant digital assistant at the show.

With digital assistants, we can see a development across the years from the first ventures in natural language processing (NLP). Apple’s Siri was the first critical development, carried into the mainstream by Apple’s marketing prowess; it brought AI assistance to everyone and legitimized it. Capability has grown with improvements in NLP, better user interfaces, and greater integration with products and services.

Use of digital assistants will certainly grow as the capabilities continue to expand. But it will be device-agnostic platforms that are of greatest importance. In the early days of personal computing, it was only when the IBM PC became available that the computer explosion really took off. Other companies were able to build equipment and software for a commonly available platform. This led to a huge spurt of creativity around the globe, enabling developments which eventually led to the software world of today. Alexa is perhaps another watershed, opening digital assistants to a wider range of possibilities.

There are several keys to the importance of digital assistant functions:

  • Configuration must be easy, simple, and performable by individual consumers.
  • The assistant must be able to understand voice commands out of the box, a technology which has recently reached a visible turning point.
  • The digital assistant must be capable of performing useful functions; this means integration with home devices and excludes strictly proprietary systems.
  • The digital assistant must integrate with existing smart technology in the home, in the automobile, and even in the workplace.
  • The digital assistant must be unobtrusive and available as needed. Personal robots in varying forms could be available throughout the location, for example.
  • The digital assistant must be capable of performing necessary tasks. Perhaps the biggest limitation today is that smart homes are rare, and people are difficult to convince that an expensive addition will help them in their daily lives.
  • Digital assistants must and will become inexpensive and ubiquitous; this means that they must coexist with humans and other devices.
  • Digital assistants must be capable of integrating with other digital assistants to provide a complete response. This will demand new software as these systems spread and become more relevant (see the sketch after this list).
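
As a sketch of what assistant-to-assistant integration might look like, the small hypothetical Python hub below routes each intent to whichever registered assistant claims it. Every name here is invented for illustration; real interoperability would also need shared schemas, authentication, and conversational hand-off.

    # Hypothetical sketch: a hub that routes intents among assistants.
    class AssistantHub:
        def __init__(self):
            self.handlers = {}                 # intent name -> handler

        def register(self, intent, handler):
            self.handlers[intent] = handler

        def dispatch(self, intent, **kwargs):
            handler = self.handlers.get(intent)
            if handler is None:
                return "No assistant can handle this request."
            return handler(**kwargs)

    hub = AssistantHub()
    hub.register("play_music", lambda track: "Speaker assistant playing %s." % track)
    hub.register("lock_door", lambda: "Security assistant locking the front door.")
    print(hub.dispatch("lock_door"))
    print(hub.dispatch("play_music", track="some jazz"))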

For the present, digital assistants are in an early stage of acceptance. In time, however, the technology pioneered to some extent by Alexa is likely to become commonplace. It will influence the home environment as well as the office. As with much new technology, initial benefit will be found in the home. It is certain that it will also move into the enterprise environment. Capability to interact with other systems will be of critical importance as these devices proliferate. This will create a market for AI software designed to meet the needs of digital assistants and will bolster the move toward cognitive computing.

Companies need to understand that this wave of development is about to become important. Software markets will develop, new communications requirements will emerge, and the burgeoning Internet of Things will become smarter and smarter and smarter.


Brain Machine Interface: Cornerstone of the Next AI

As we move into the era of AI, we can expect changes across a broad range of areas. AI will vie with the extension of human intelligence and with human-machine hybridization. As we bind the complex and supremely powerful human intelligence to the array of capabilities available in AI, there will be an inevitable transformation in human thought and action. This transformation will require a close interface between humans and machines. We have already seen the beginning of a close connection between Internet search and familiar thought processes. People are able to react more rapidly to information through rapid, real-time access to supporting data. But the processes of cognition demand a much faster response to integrate the vast incoming data streams with other cognitive functions.

Because of this connection between AI and the device interface, a range of prostheses will develop as the interface grows in importance. The impetus to provide a direct connection between mechanical systems and the brain goes back decades, and even further if we include 19th-century experiments on the electro-stimulation of animals. To date, these experiments have remained relatively limited in function, involving only simple messages and simple capabilities, with limited human awareness and response. However, as understanding of thought processes deepens and interface devices increase in sophistication, we will see radical advances that lead to a new era of human-machine interaction.

Recent developments in this area suggest the possibility of an explosion in capability, led by the ability to read biological thought through invasive and noninvasive techniques; to act upon human thought streams through the same mechanisms; and, as a result, to comprehend a much wider scope of thought processes and action potentials.

The chief direct human-machine interface possibilities come from two areas: the insertion of a minute chip into a biological system, and the use of external sensors and headsets to read electrical activity in the brain and interpret or act upon it. Recent developments have demonstrated the ability to move artificial limbs, to retain sensation in artificial prostheses, to interpret images in dreams, to recognize emotional states, and to read and act upon thought streams from individual neurons or neural matrices. Growth in this area has led to smart artificial limbs, artificial cornea implants, cochlear implants, and a range of similar helpful devices, each of which requires specific input from neural systems and provides an output based upon processing of some form.
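
As a deliberately simplified illustration of the noninvasive side, the Python sketch below detects elevated alpha-band (8 to 12 Hz) power in one second of synthetic EEG, the kind of crude feature a consumer headset might use to trigger an action. It is a toy under stated assumptions, not a description of any real brain-computer interface product.

    # Toy EEG feature detector (illustrative only; not a real BCI system).
    import numpy as np

    def band_power(signal, fs, lo, hi):
        """Average spectral power of `signal` between lo and hi Hz."""
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        band = (freqs >= lo) & (freqs <= hi)
        return spectrum[band].mean()

    fs = 256                               # samples per second
    t = np.arange(fs) / fs                 # one second of data
    rng = np.random.default_rng(1)
    # Synthetic channel: background noise plus a strong 10 Hz alpha rhythm.
    eeg = rng.normal(size=fs) + 3 * np.sin(2 * np.pi * 10 * t)

    alpha = band_power(eeg, fs, 8, 12)
    baseline = band_power(eeg, fs, 20, 40)
    if alpha > 2 * baseline:
        print("Alpha rhythm detected; trigger the interface action.")

Real systems add artifact rejection, multiple channels, and trained classifiers, but the chain of read, interpret, act is the same.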

The capabilities of these devices grow as their range increases. Recent developments have included cheaper local, and even some remote, electrical brain-reading capabilities. On the implant side, sensors have become smaller and more suitable for insertion into areas of the brain or other portions of the body; most recently, a DARPA project has explored a tiny chip that expands like a stent, allowing noninvasive insertion through the bloodstream while sensing remotely. On another level of human brain interface, we have seen the capability to grow biological neuron nets and integrate them with other systems. This increases the possibilities for a variety of direct brain interface devices and helps to improve our overall understanding of what is required to build an intelligence.

The need to create a closer integration between human and machine is driven by the same requirements as increased AI within the enterprise. AI and big data demand vast increases in real-time computation, which leads directly to improved networking and the capability to rapidly draw resources into the cognitive mix. Humans will need to keep up with this ever increasing processing of information. This will create a competitive market for interface devices that make it easier for individuals to become first responders in an artificially cognitive world. Such an interface will also provide subconscious control possibilities for intelligent systems, such as engaging fight-or-flight reflexes to avoid danger, with enhanced capabilities linked directly between devices and implants. Additionally, the capability to read and respond to human thought will require new languages of machine communication and new forms of security, and will create the ability to link human beings together at the level of thought processing, with the possibility of instantaneous problem-solving through immediate ad-hoc networks of human and AI components.

What does this mean ethically? Human thought is evolving quickly, and human values will always drive this innovation. Philosophically, we live within a human-centered world where the definitions and the actions are all circumscribed by human beings with their genetic predispositions. We do not fully understand the nature of human world construction due to our direct immersion in it, which makes it more difficult to determine the end result. But we are unlikely to yield world-building resources and capabilities to a machine. With control of the definitions and the possibilities, human beings will continue to reign supreme.

In other respects, it is important to remember that this is a long-term evolution. Concepts of jobs and work location are all capable of change. These are human definitions that fit the world created after the Neolithic Revolution and further molded by the Industrial Revolution. For example, we think of cities as central to culture, but there is no longer the same need for cities. Likewise, there is no longer a need for jobs as we have defined them; yet society must channel people to perform activities that benefit society.

In the immediate sense, the further development of human-AI initiatives creates new possibilities for autonomous and semi-autonomous robots, faster and more efficient business processes, and possibly greater innovation and collaboration. Companies need to understand the potential for this type of interface to offer specific advantages to their firm. Such interfaces could make the processing of information much quicker, but they will also raise ethical and legal issues that need to be explored. The possibilities are truly earthshaking; the challenges are enormous, and the repercussions across social, economic, and political areas are likely to be revolutionary.


Neuromorphic Chips and Silicon Brains: The Video

Here we have assembled a series of videos focusing upon the hardware side of AI. What happens when you start to create replicable silicon brains that emulate human thought processes? First, there can be a tremendous increase in efficiency and cost savings for ordinary processing. But these chips are also starting to carve out their own territory in pursuit of composite AI.

Most AI silicon is focused upon machine learning and is now being used for image and voice processing and other high-speed, real-time pattern-matching tasks. As these chips grow increasingly powerful, they are also helping to unlock the secrets of human thought. Although this is still a very early stage of development, specialized hardware systems have a tremendous advantage in processing speed over conventional systems and are able to perform tasks in robotics that would not otherwise be possible.
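
For a flavor of what such chips implement in silicon, here is a toy Python sketch of a leaky integrate-and-fire neuron, the basic unit behind spiking architectures such as TrueNorth. Real chips use their own neuron models and fixed-point arithmetic; this is purely illustrative.

    # Toy leaky integrate-and-fire neuron (illustrative sketch only).
    def lif_neuron(input_current, leak=0.9, threshold=1.0):
        """Return a spike train (1 = spike) for a sequence of input currents."""
        potential = 0.0
        spikes = []
        for current in input_current:
            potential = leak * potential + current   # integrate with leak
            if potential >= threshold:               # fire and reset
                spikes.append(1)
                potential = 0.0
            else:
                spikes.append(0)
        return spikes

    print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))
    # -> [0, 0, 0, 1, 0, 0, 1]

The appeal for hardware is that such neurons are event-driven: the silicon only does work when spikes arrive, which is where the dramatic power savings of chips like TrueNorth come from.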

As usual, each video includes descriptive text from its publication site.

BrainScales to Human Brain Project: Neuromorphic Computing Coming of Age (IBM Research)

A presentation by Karlheinz Meier, Co-Director of the Human Brain Project, University of Heidelberg, Germany. The Human Brain Project (HBP) is a European Commission Future and Emerging Technologies Flagship project. This presentation is from the IBM Research Cognitive Systems Colloquium: Brain-Inspired Computing at IBM Research – Almaden in San Jose, CA. Published on Jun 11, 2015.

IBM’s TrueNorth Chip mimics the Human Brain (Rajamanickam Antonimuthu)

07 Aug 2014: Scientists from IBM unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built; yet, while running at biological real time, it consumes a minuscule 70mW, orders of magnitude less power than a modern microprocessor. A neurosynaptic supercomputer the size of a postage stamp that runs on the energy equivalent of a hearing-aid battery, this technology could transform science, technology, business, government, and society by enabling vision, audition, and multi-sensory applications.

Samsung TrueNorth: A Neuronal Chip to Create a Camera that Emulates the Human Retina (Aban Tech)

Now Samsung has taken this brain chip and adapted it to work with its Dynamic Vision Sensor, a photo sensor that works similarly to the human retina. The result has been spectacular: a camera capable of processing digital images at breakneck speed, recording video at 2,000 frames per second, where conventional digital cameras typically record at up to 120 fps. Surpassing them by such a wide margin positions Samsung’s device as a useful complement for creating three-dimensional maps, improving gesture recognition systems, or even helping autonomous cars better detect hazards. Published on Aug 15, 2016.

GTC 2016: NVIDIA DGX-1, World’s First Deep Learning Supercomputer (NVIDIA)

NVIDIA CEO Jen-Hsun Huang introduces the NVIDIA DGX-1, the world’s first deep learning supercomputer, at the GPU Technology Conference. Providing 170 teraflops of performance in one box, DGX-1 is over 12x faster than its predecessor from one year ago. Published on Apr 6, 2016

Qualcomm Zeroth (Qualcomm Korea)

For the past few years, Qualcomm Research and Development teams have been working on a new computer architecture that breaks the traditional mold. The company wanted to create a new computer processor that mimics the human brain and nervous system, so devices can have embedded cognition driven by brain-inspired computing: this is Qualcomm Zeroth processing.