Artificial Intelligence: Toward a Theory of Minds

One capability that differentiates humans from animals is what is called “Theory of Mind”: the ability to understand what others are thinking and feeling, and to respond appropriately. This idea is increasingly important for implementations of artificial intelligence. Theory of Mind is an understanding of the intellectual and emotional state, motivations, and probable actions of another individual, based upon internal hypotheses about how that individual must be thinking. It demands creating a model of another mind. To date, this concept has been considered mainly in the realm of developing humanoid robots, but it has important consequences across all of AI.

How is this important? It has two components with different effects. First, it is about developing an AI Theory of Mind to promote more precise interaction with human beings. Intelligent machines will need to anticipate human response to understand the broader context of questions and required actions. Second, it is about having a Theory of Mind for each AI that we, as humans, encounter (Theory of AI Mind). This will become increasingly important for interactions between people and machines.

AI Theory of Minds

AI Theory of Human Mind

The next evolution of AI will require each AI to have a Theory of Human Mind: an ability to understand and act according to human characteristics, thought patterns, and biases. Work in this direction is already under way. It is difficult, however, since it requires machines to understand something that we strain to understand ourselves. Philosophers have long struggled with our inability to externalize human thought or to reach an objective understanding of it. Still, as with the Turing Test, a Theory of Mind can be attributed if the machine is able to anticipate human thoughts, actions, or desires and act upon them in a way that signifies understanding rather than pre-programmed behavior.

Theory of Mind is likely to emerge as a clear differentiator between generations of AI, and an important milestone along the way to true general intelligence. It will also mark the possibility of full collaboration with humans with each side capable of understanding possible actions by the other.

Human Theory of AI Mind

The other side of the coin is the more immediately important human Theory of AI Mind. We are intuitively aware of this when questioning a digital assistant. Before making a query, we consider the “intelligence” of the machine and its probable response, and frame questions to produce a useful response. We then evaluate the response based on the same model. If it is a weak or very limited AI, this is taken into account; if it is a strong AI, then questions of capability, bias, focus, and domain come into play. We create a model of AI response. Theory of AI Mind is also important in considering responsibility for an action, a concept that will be extremely important for insurance in applications such as autonomous vehicles.

Theory of AI Mind will become increasingly important for user interfaces as we move toward ubiquitous AI. As machines grow “smarter”, we will expect more of them, and more differences between them. As with human Theory of Mind and personality, we will hold mental models of specific AIs and interact with them according to those models.

Implications for Theories of General Intelligence

With further development of both sides of AI Theory of Mind will come a greater understanding of how people interact with other intelligent entities, including other humans. AI provides an intelligent “Other” that is not necessarily subject to the biases of human thought. People have inbuilt biases of many types and from many sources, including heredity, society, brain function, mental limitations, the conscious/subconscious split, and innumerable other factors. If we abstract intelligence from this matrix, it becomes more possible to understand what makes human intelligence special. For the first time, we may be able to step aside from human capabilities and reach a broader understanding of intelligence itself.

From a technology perspective, Theory of Mind will become a competitive factor in AI development. Machines of limited intelligence, creativity, or capability will be perceived as such and will have caveats placed upon their operation. This is similar to human interactions with animals, which are perceived as having limited intelligence and fixed responses to many stimuli. Competition to meet such perceptual criteria will accelerate the race toward artificial general intelligence. It will also drive development of intelligence toward more human-like interaction. Some of this is already happening as we move toward intelligent humanoids that are made to appear like rational creatures and satisfy some of the requirements of a Theory of Mind.

Conclusion

Theory of Mind is likely to become more critical in coming years. In a sense, it provides an evaluative concept for AI that steps beyond the Turing Test. In the Turing Test, it is possible to create a nebulous interaction that appears superficially human and is able to fool some individuals. But if a machine is able to interact on an intellectual level and provide the framework for an adequate Theory of Mind, then it becomes something more than just a machine with interesting and creative responses. Perhaps a Theory of Mind is the basis for consciousness, or a preliminary step in that direction. In any case, this criterion, commonly used to differentiate between humans and animals, must be turned on its head to distinguish between humans and AI.

Quantum AI: The Video

The frequently touted advantages of quantum computing are becoming more possible as early systems begin to appear and researchers begin to explore software. Since this developing technology focuses upon solutions for very large data sets and complex calculations, AI seems a natural application. In reality, the situation is more nuanced. There are, indeed, possibilities—particularly in machine learning—but they may demand a new approach and new types of models.

Quantum AI is now being actively pursued. It could have tremendous implications for solving complex and intractable problems, and could even bring researchers closer to a general AI. At the very least, it will provide competitive advantages to first movers who are able to harness its possibilities. It is still too early to determine how much can be gained over conventional and specialized neuromorphic processors, but recent developments are making it possible to explore this area, and interest is beginning to grow.

In the following videos, we explore quantum AI from a variety of viewpoints, from business to technical detail. The videos are a mixture of presentations and discussions available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

The Race to Quantum AI (Space And Intelligence)

Published on Mar 11, 2017

The initial enthusiasm over AI has faded, and the sci-fi scenarios are mostly over. Even with the emergence of new machine-learning techniques, the ultimate goal of the field, some form of general AI, remains a distant vision. Still, powerful machine learning is spreading into new industries and areas of daily life, and will heighten attention on the unintended consequences that may result.

Quantum AI The New Frontier in Artificial Intelligence (Welcome To The Future)

Published on Aug 17, 2017

A talk by Dr. Colin P. Williams, Director of Strategy & Business Development, D-Wave Systems.

Universal Deep Quantum Learning (QuICS)

Published on Oct 6, 2015

A talk given at the QuICS Workshop on the Frontiers of Quantum Information and Computer Science by Seth Lloyd (MIT). Quantum systems can generate non-classical correlations that can’t be generated by classical systems. This talk investigates whether quantum devices can recognize patterns that can’t be recognized classically. Deep quantum learning architectures are compared with deep classical learning architectures, and conditions are identified under which universal deep quantum learners can recognize patterns that classical learners cannot.

Google’s Quantum AI Lab (The Artificial Intelligence Channel)

Published on Sep 4, 2017

Hartmut Neven talks about possible roles of quantum effects and subjective experience in Artificial Intelligence.


AI in Finance: The Video

Financial Technology (FinTech) is traditionally conservative, but has been using machine learning for some time. The industry is now on the verge of a technology explosion as AI and blockchain create unique challenges across the sector. AI can provide advantages in risk analysis, fraud detection, and marketing, as well as in predicting market behavior. Every area of finance has specific interests and risks in moving forward with these projects, from ML-driven hedge funds with limited transparency to credit profiling with potential regulatory issues.

As with many other sectors, marketing and customer relations is the first area in which AI will make its mark. In finance, however, there are numerous areas in which AI will have important repercussions. Following are several videos that look at aspects of FinTech AI.

Finance is one of several vertical industries that we will look at in this series, as we explore the ongoing issues when AI technologies are incorporated into businesses, studies, and lives.

The videos are a mixture of presentations and discussions available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

Why AI is the Future of FinTech (BootstrapLabs)

Published on Jul 24, 2017

The latest innovation and the positive impact of Artificial Intelligence technologies from the Applied AI Conference, an event for people who are working, researching, building, and investing in Applied Artificial Intelligence technologies and products.

Panel Moderator: Jean-Baptiste Su, Principal Analyst, Atherton Research, and Forbes Technology Columnist

Speakers:
Parth Vasa, Head of Data Science, Bloomberg Engineering, Bloomberg LP
Massimo Mascaro, Director, Data Engineering and Data Science, Intuit
Sangeeta Chakraborty, Chief Customer Officer, Ayasdi
Mark Nelsen, Senior Vice President of Risk and Authentication Products, Visa

AI and the Future of Finance (IIF)

Published on Oct 15, 2017

Perspective from the Institute of International Finance.

IBM Watson on Cognitive Computing & Artificial Intelligence Are Transforming Financial Services (LendIt Conference)

Published on Mar 7, 2017

IBM Watson Group’s Brian Walter shows how ‘Cognitive Computing & Artificial Intelligence Are Transforming Financial Services’ at LendIt USA 2017 in New York City.

LendIt USA is the world’s biggest show in lending and fintech.

The Future of Corporate Finance: Automation powered by SAP Leonardo Machine Learning (SAP)

Published on Jul 20, 2017

Leverage next generation automation technologies to significantly increase the level of automation in your Shared Services Organization and drastically increase the efficiency of your Financial Shared Services staff. Smart automation with machine learning is self-learning and continuously improving, thus eliminating maintenance efforts. Staff can move away from daily routines and focus on strategic tasks such as growth & planning. SAP Leonardo Machine Learning can be a key enabler.


AI Centers Go Global, but SV Still Runs the Show

As artificial intelligence becomes increasingly important for business and AI data scientists become scarcer, there is growing international competition to create AI centers around the world. AI hubs bring together academic research and industry, feed innovation, and attract experts. Silicon Valley in the United States remains the leading contender, with its concentration of IT companies and venture capitalists. In the US, the likely runner-up is Boston, home to Harvard, MIT, and Boston Dynamics’ famed robots.

The US is not the only place that is focusing upon AI, however. China has expressed a desire to be number one in AI, a first mover with significant research and innovation. It is investing heavily in this sector, and time will tell whether it can achieve this goal, which would be difficult unless US tech industry dynamics change. Other countries have more modest goals, such as specializing in AI within niche markets (automobiles, for example). Some hubs are also centered around particular institutions, government centers, industrial research labs, or specific researchers. Notably, McKinsey Global Institute (MGI) studies show that in 2016 the US received 66% of external AI investment (40% in Silicon Valley alone), with China second at 17% and others more distant.

Following are some of the top and upcoming AI centers around the world.

China

China intends to become a dominant player in artificial intelligence, hoping to create a $150 billion industry by 2030. China’s focus is important due to its huge online population of over 750 million people, a tech-savvy citizenry, and a government interested in using AI to build efficiencies in its cities. China is already estimated to have the biggest AI ecosystem after the United States, although it is still dwarfed in spending and number of companies. With the US cutting back in many areas, China believes that it can surge ahead in technologies such as AI to gain an edge.

One important center for Chinese AI is Shanghai’s Xuhui District. Shanghai plans to build a number of AI development centers and recently hosted the 2017 Global Artificial Intelligence Innovation Summit. The Xuhui AI ecosystem and new innovation center are expected to be completed around 2020. The district currently has more than 120 scientific research facilities, 10 institutions of higher learning, and thousands of laboratories and R&D centers. Shanghai will also create AI development hubs in its Jingan, Yangpu, Jiading, and Baoshan districts. Its AI efforts will focus on big data, cloud computing, the Internet of Vehicles, and robots. Chinese companies pursuing AI ventures are led by the BAT companies of Baidu, Alibaba, and Tencent, plus branches of American companies such as Google, Microsoft, and IBM.

Canada

Canada is focusing upon artificial intelligence at the government level. The key city to watch at the moment is Montréal, though Toronto also has AI activity. Montréal has academic resources at the Université de Montréal and at McGill University, plus a growing range of companies locating AI facilities there to take advantage of local talent. Google has an AI lab in Montréal focusing on deep learning related to the Google Brain project. It is also investing more than $3 million USD in the Montreal Institute for Learning Algorithms (MILA), a research group created jointly by McGill and the Université de Montréal. Microsoft has also invested in Montréal’s burgeoning AI sector by supporting Element AI, a tech incubator from the Institute for Data Valorization (IVADO). Microsoft is growing research and development in Montréal and investing $7 million in Montréal’s academic community in pursuit of AI. The Université de Montréal has 100 researchers in deep learning and McGill has 50. Montréal has one of the biggest concentrations of university students of any major North American city and is a Canadian leader in university research.

Europe

In Europe, London is the top AI center, supporting a Google presence through the acquisition of British-based DeepMind in 2014. It was a DeepMind program that defeated a professional Go player, making headlines in 2016. London is also home to startups such as Tractable for visual recognition and VocalIQ, a self-learning dialogue API acquired by Apple. London also hosts the Leverhulme Centre for the Future of Intelligence (CFI), an AI research center opened by Stephen Hawking that is a collaboration between Cambridge University, Oxford, Imperial College London, and the University of California, Berkeley.

In the European Union, France is also pursuing AI. The French have more than 100 startups leveraging AI in a variety of applications. France has one of the world’s largest machine learning and AI communities, although its best and brightest are often hired by global tech firms. While London is still considered the leading European AI hub, Paris has been making inroads. Overseas interest includes Facebook’s global AI research center in Paris and Google’s acquisition of the machine-learning startup Moodstocks. French universities and research institutions involved with AI include the French Institute for Research in Computer Science and Automation (INRIA) and the French National Center for Scientific Research (CNRS). Available talent includes the 4,000 members of the Paris Machine Learning Group, and France is home to scikit-learn, the open source machine-learning library. Companies include Valeo, which has recently launched the first global research center in AI and deep learning dedicated to automotive applications.

Asia

In Asia, Singapore is also building an AI hub, matching its ongoing interest in advanced computing and networking. Singapore is building a smart city architecture based on IBM Watson, and IBM is working with the National University of Singapore (NUS) to offer a Watson-based cognitive computing education program. Singapore’s National Research Foundation (NRF) is investing up to $107 million USD over five years in AI.SG, a national program to promote AI adoption. Startups include Teqlabs, a new innovation lab focusing on initiatives such as API integration for the financial technology industry. A new artificial intelligence incubator has been announced by the private investment firm Marvelstone Group, which plans to build 100 AI startups per year and attract global AI talent to Singapore.

In the End…

Silicon Valley remains very much in the lead. These are just some of the active global AI centers in a dynamic that includes continuous poaching of AI talent and acquisition of promising startups. It is unlikely that this will change soon, even with current developments in global trade and immigration. But hubs outside of SV will continue to drive AI forward, developing talent and new ideas.


Machine Learning Nuts and Bolts, Wheels and Gears: The Video

Getting down to the nitty-gritty, it’s time to take a practical view of what is involved in setting up Machine Learning (ML) projects. There is a lot of material available, but it can be difficult to find accessible information that can guide you through the development and implementation maze. Here we look at how to access ML APIs, how to use TensorFlow, and common algorithms that you might wish to use for various purposes.

A more practical understanding gives you an edge in discussing possibilities, and can also set you on the right track if you wish to add these skills to your programming arsenal.

The videos are a mixture of talks, presentations and teaching material available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

Basic Machine Learning Algorithms Overview – Data Science Crash Course Mini-series (Hortonworks)

Published on Aug 1, 2017

A high-level overview of common, basic Machine Learning algorithms by Robert Hryniewicz

Hello World – Machine Learning Recipes (Google Developers)

Published on Mar 30, 2016

Six lines of Python is all it takes to write your first machine learning program! In this episode, we’ll briefly introduce what machine learning is and why it’s important. Then, we’ll follow a recipe for supervised learning (a technique to create a classifier from examples) and code it up.
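The recipe in this video uses scikit-learn; as a rough, dependency-free sketch of the same idea (building a classifier from labeled examples), a nearest-neighbor classifier fits in a few lines of plain Python. The fruit features and labels below are illustrative, not the video's exact code:

```python
# Minimal 1-nearest-neighbor classifier: a sketch of "supervised learning
# from examples". Features are [weight_grams, texture], texture 0=bumpy,
# 1=smooth -- made-up numbers echoing the video's fruit example.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train_features, train_labels, sample):
    """Label the sample with the label of its nearest training example."""
    best = min(range(len(train_features)),
               key=lambda i: distance(train_features[i], sample))
    return train_labels[best]

features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = ["apple", "apple", "orange", "orange"]

print(predict(features, labels, [160, 0]))  # a heavy, bumpy fruit -> orange
```

The "learning" here is just memorizing the training set; real classifiers generalize with far less data per prediction, but the interface (fit on examples, then predict) is the same.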

Deep Learning, Self-Taught Learning and Unsupervised Feature Learning (Stanford Graduate School of Business)

Published on May 13, 2013

Talk by Andrew Ng.

TensorFlow and Deep Learning without a PhD, Part 1 (Google Cloud)

Published on Mar 8, 2017

With TensorFlow, deep machine learning transitions from an area of research to mainstream software engineering. In this video, Martin Gorner demonstrates how to construct and train a neural network that recognises handwritten digits. Along the way, he’ll describe some “tricks of the trade” used in neural network design, and finally, he’ll bring the recognition accuracy of his model above 99%.

Content applies to software developers of all levels. For experienced machine learning enthusiasts, this video introduces TensorFlow through well-known models such as dense and convolutional networks. This is an intense technical video designed to help beginners in machine learning ramp up quickly.
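As a hedged illustration of what a single dense layer in such a network computes, here is the arithmetic in plain Python (this is not the TensorFlow code from the talk, and the weights and inputs are made-up numbers):

```python
import math

# One dense (fully connected) layer followed by softmax:
# outputs = softmax(weights . inputs + bias).

def dense(weights, bias, inputs):
    """Weighted sum of inputs plus bias, per output unit."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, bias)]

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

weights = [[0.2, -0.5], [0.8, 0.1], [-0.3, 0.4]]   # 3 output units, 2 inputs
bias = [0.0, 0.1, -0.1]
probs = softmax(dense(weights, bias, [1.0, 2.0]))

print(probs)       # three probabilities summing to 1 (up to float rounding)
```

Training consists of adjusting `weights` and `bias` to push these probabilities toward the correct labels; TensorFlow automates both that arithmetic and the gradient updates.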

TensorFlow and Deep Learning without a PhD, Part 2 (Google Cloud)

Published on Mar 10, 2017

Deep learning has already revolutionized machine learning research, but it hasn’t been broadly accessible to many developers. In this video, Martin Gorner explores the possibilities of recurrent neural networks by building a language model in TensorFlow. What this model can do will impress you.

Developers with no prior machine learning experience are welcome. We do recommend that you watch the previous video unless you already know about dense and convolutional networks and are only interested in recurrent networks.

This is an intense technical video designed to help beginners in machine learning ramp up quickly.


Challenges and Opportunities in the AI User Interface

While the general integration of artificial intelligence into business processes is likely to have a profound effect, one area common to all applications will be particularly impacted: the User Interface (UI). We have already seen the beginnings of AI in digital assistants such as Alexa (see Digital Assistants Coming of Age with Alexa) that interpret natural language commands and respond with speech or actions. While this type of Conversational Interface (CI) is likely to become ubiquitous, there are other significant UI impacts that may be at least as important.

Today, the capabilities of applications have grown geometrically, and complexity continues to increase. Meanwhile, mobile platforms have limited display space and fewer controls. This creates a new need for simplification. The choices presented to the user in menu upon menu within office products, for example, can be overwhelming and make it difficult to locate specific actions. Yet there is great demand for additional processing to meet the needs of new use cases.

AI to the Rescue 

Natural language processing provides a start toward simplifying access to the deeper complexities of programs by understanding commands and responding without a keyboard. But the conversational interface is also evolving as AI progresses. We are still at a rudimentary stage, forced to memorize commands much as with the old TTY interfaces. What is needed is selection of actions based upon context. This is the beginning of a new form of interface in which AI translates a human request for a desired result into a command sequence to one or more applications, performing tasks that meet the user’s ultimate goal.

An AI User Interface

For example, someone might ask “Should I go to the office today?”. The system might then assess health information to determine if there is an illness; check the weather for extremes; check databases of current natural disasters that might be applicable; check holiday schedules; check company instructions; and so forth, all in an instant. But to do this, the AI needs a broader range of knowledge and data than is commonly available to current AI applications, and the capacity to understand what such a request might entail. In fact, as with many such puzzles, this is the beginning of a requirement for an artificial general intelligence that can think across a wide range of territory rather than within the simple parameters of a question. The AI would think about thinking.
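A toy, rule-based sketch of that decision flow might look like the following. Every signal name and check here is a hypothetical illustration; the point of the passage above is precisely that a real AI UI would have to infer these checks from context rather than from a hard-coded list:

```python
# Toy rule-based version of "Should I go to the office today?".
# All context keys are made-up illustrations of signals the AI would gather.

def should_go_to_office(context):
    """Return (decision, reason) from a dict of contextual signals."""
    checks = [
        (context.get("user_is_ill", False), "you appear to be ill"),
        (context.get("severe_weather", False), "severe weather is forecast"),
        (context.get("natural_disaster", False), "a disaster alert is active"),
        (context.get("public_holiday", False), "today is a holiday"),
        (context.get("office_closed", False), "the office is closed"),
    ]
    for triggered, reason in checks:
        if triggered:
            return False, reason
    return True, "no blocking conditions found"

decision, reason = should_go_to_office({"public_holiday": True})
print(decision, "-", reason)  # False - today is a holiday
```

The gap between this sketch and the scenario in the text is the whole problem: the checks, their data sources, and their priorities would all need to be discovered by the AI, not enumerated by a programmer.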

Such an interface demands situational awareness and an understanding of the overall context in which a question is posed. The AI needs to define the specifics of where information would be found; it also needs to understand how to convey and/or act upon that intelligence, and make complex choices.

Implications of an AI UI

As software continues to grow in complexity, it is certain that AI will provide a bridging interface between human and machine. This interface will become more flexible and more powerful, and it will evolve to perform more duties. Just as simplification of the UI makes it possible to perform complex tasks on a computer with only a basic understanding of operating principles, people will begin to interact with computers conversationally and evolve their information requests toward increasingly sophisticated interaction. The growing sophistication of this interaction will feed the development of further AI UI capabilities.

Development of a sophisticated AI-based UI is not solely about natural language processing, however. All computer interactions can likely be improved by the addition of an AI layer. While conversation is a priority, AI will be able to reduce the clutter and confusion of menu-based interactions by providing selection based upon context, and the capability to interact based on desired results rather than tool selection. In effect, this parallels movements in software development such as Agile and DevOps: writing software to meet specific customer needs is much better than coding around the technology. The same rule must apply to the actions of programs themselves.
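A minimal sketch of context-based selection, assuming a hypothetical usage log of (context, action) pairs: actions used most often in the current context are surfaced first, shrinking the visible menu. The action names and log are illustrative only:

```python
from collections import Counter

# Context-driven menu simplification: rank actions by how often they were
# used in the current working context. Log entries are made-up examples.

usage_log = [
    ("editing", "bold"), ("editing", "bold"), ("editing", "italic"),
    ("reviewing", "add_comment"), ("reviewing", "track_changes"),
    ("editing", "insert_table"), ("reviewing", "add_comment"),
]

def top_actions(log, context, n=2):
    """Return the n actions most frequently used in the given context."""
    counts = Counter(action for ctx, action in log if ctx == context)
    return [action for action, _ in counts.most_common(n)]

print(top_actions(usage_log, "editing", 1))   # ['bold']
print(top_actions(usage_log, "reviewing"))    # ['add_comment', 'track_changes']
```

A real system would weight recency, document type, and the user's stated goal rather than raw counts, but the principle (selection by context, not by exhaustive menus) is the same.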

Into the Future

AI will also be applied to the process of developing artificial intelligence itself. We have already seen programs that read a user interface screen image and convert it directly into code. In the next iteration, we can expect AI solutions that turn actions into instructions feeding multiple AI systems to further a desired operation. While such a general intelligence creates its own issues, it will arrive in incremental steps. Increasing complexity of AI routines, increasing integration, and increasing componentization will open the way for AI to operate across a broader range of data and make decisions about thinking and about interaction that can be applied across all computer systems.


Skills for Data Science and Machine Learning: The Video

Here we look at the skills and employment opportunities that are surfacing as we move into an era of advanced analytics and ubiquitous AI.

The videos are a mixture of seminars and discussions available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

What Kaggle has Learned from Almost a Million Data Scientists (O’Reilly)

Published on May 25, 2017

A talk by Anthony Goldbloom from Kaggle. Kaggle is a community of almost a million data scientists who have built more than two million machine-learning models while participating in Kaggle competitions. Data scientists come to Kaggle to learn, collaborate, and develop the state of the art in machine learning. Anthony Goldbloom shares lessons learned from top performers in the Kaggle community and explores the types of machine-learning techniques typically used, some of the tricks he’s seen, and pitfalls to avoid. Along the way, Anthony discusses work habits and skills that help data scientists succeed.

Andrew Ng: Artificial Intelligence is the New Electricity (Stanford Graduate School of Business)

Published on Feb 2, 2017

On Wednesday, January 25, 2017, Baidu chief scientist, Coursera co-founder, and Stanford adjunct professor Andrew Ng spoke at the Stanford MSx Future Forum. The Future Forum is a discussion series that explores the trends that are changing the future. During his talk, Professor Ng discussed how artificial intelligence (AI) is transforming industry after industry.

What Are the Top Skills Needed to Be a Data Scientist? (SAS Software)

Published on Oct 26, 2015

Dr. Goutam Chakraborty of Oklahoma State University outlines the top skills you need to be successful as a data scientist. From SAS Academy for Data Science

Top Data Scientist D J Patil’s Tips to Build a Career in Data (FactorDaily)

Published on Jul 20, 2017

D J Patil, who along with top data scientist Jeff Hammerbacher coined the term “data scientist”, tells you what it takes to be a data scientist. During the Obama administration, Patil became the first chief data scientist ever appointed by the US government.


Coming Soon to Nagoya: RoboCup 2017: The Video

It’s almost time for this year’s RoboCup competition in Nagoya, Japan, July 27-30. This event has expanded to include more discussions and competitions in a diverse range of robotics activities. The robots demonstrate incremental developments in autonomous goal-directed behavior, improving a little each year. RoboCup is a springboard for discussion of critical areas of robotic development and also provides a showcase for university robotics programs and recruitment.

RoboCup has been held annually for 21 years. More than 3,500 dedicated scientists and developers from more than 40 countries are expected. The event features a number of activities:

RoboCup Soccer includes autonomous mobile robots separated into leagues: Humanoid, Standard Platform, Middle Size, Small Size and Simulation.

RoboCup Industrial includes RoboCup Logistics and RoboCup@Work. It is a competition between industrial mobile robots focusing on logistics and warehousing systems in anticipation of Industry 4.0.

RoboCup Rescue includes the Rescue Robot League and Rescue Simulation League. It employs technologies developed from RoboCup Soccer to promote simulations that will contribute toward development of autonomous robots for use in rescue efforts at disaster sites.

RoboCup@Home applies these technologies to people’s everyday lives, evaluated according to how well the robots cooperate with human beings to complete various tasks.

RoboCupJunior includes Soccer, Rescue and Onstage Competition to stimulate children’s curiosity and encourage them to participate in robot design and production.

Official RoboCup 2017 Video

Best Goals of RoboCup 2016 (CNET)

Published on Jul 14, 2016

Can Robots Beat Elite Soccer Players? (TEDxYouth)

Published on Apr 23, 2013

As Professor of Computer Science at UT Austin, Dr. Peter Stone’s interests run from competing in the international RoboCup soccer tournament to developing novel computing protocols for self-driven vehicles. His long-term research involves creating complete, robust, autonomous agents that can learn to interact with other intelligent agents in a wide range of complex and dynamic environments.

What Soccer-Playing Robots Have to do with Healthcare (TEDx Talks)

Published on Sep 29, 2012

Steve McGill is a second year PhD student at the University of Pennsylvania studying humanoid robotics and human robot interaction under Professor Dan Lee. As part of Team DARwIn, he captured the first place medal at the international RoboCup humanoid soccer competition in Istanbul, Turkey. Steve is interested in applying this robotic technology for deployment in the field for human intercommunication.


Affective Computing, Intersecting Sentiment and AI: The Video

Affective Computing is the combination of emotional intelligence with artificial intelligence. The role of emotion in AI is coming increasingly into focus as we attempt to integrate robots, digital assistants, and automation into social contexts. Interaction with humanity always involves sentiment, as demonstrated by the growing understanding that self-driving vehicles need to understand and react to their emotional surroundings, such as responding to aggressive driving. Meanwhile, sentiment analysis is growing independently in marketing as companies vie to create emotional responses to products and react to social media comments. And in the uneven understanding of this technology, some still separate human from cyber systems on the basis of emotion.

AI must use and respond to emotional cues, and this must be considered a component of the thought process itself. Companies are now beginning to focus upon this area and to combine it with the other elements of AI to build a more responsive and human-interactive technology.
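At its simplest level, "using emotional cues" can start with scoring the sentiment of text or speech transcripts. The following is a minimal, hypothetical lexicon-based sentiment scorer; the word lists and weights are invented for illustration and do not come from any commercial affective computing product.

```python
# Minimal, hypothetical lexicon-based sentiment scoring -- one of the
# simplest building blocks of affective computing. The words and weights
# below are illustrative only.

POSITIVE = {"love": 1.0, "great": 0.8, "happy": 0.9, "calm": 0.5}
NEGATIVE = {"hate": -1.0, "angry": -0.9, "aggressive": -0.7, "slow": -0.3}
LEXICON = {**POSITIVE, **NEGATIVE}

def sentiment_score(text: str) -> float:
    """Average the sentiment weights of known words; 0.0 if none match."""
    words = text.lower().split()
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("i love this great product"))            # positive score
print(sentiment_score("the driver seems angry and aggressive"))  # negative score
```

Real affective systems go far beyond word lists, combining facial expression, voice prosody, and context, but the basic pattern of mapping raw signals to an emotional estimate that downstream logic can act on is the same.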

Following are a few videos explaining where Affective Computing is heading. These are under standard YouTube license, and the descriptive information is, as usual, provided from the video landing page with minor edits.

The Human Side of AI: Affective Computing (Intel Software)

Published on Feb 13, 2017

Affective Computing can make us aware of our emotional state, helping us make better decisions; it can help us to help others, or help machines make decisions that enrich our lives. There is another exciting use for emotional data: machine learning, where data is collected so the machine refines its understanding and ultimately better personalizes your experiences.

Imagine if the environments where you live and interact could personalize your experience based on how you feel in that moment. Imagine being able to provide superior caregiving to the elderly, children, and people with limited abilities.

The introduction is provided below. Some additional videos:

Affective Computing Part 1: Interpreting Emotional States to Unleash New Experiences

Affective Computing Part 2: Global User Insights and Recommendations for Developers

Artificial Intelligence meets emotional intelligence – CEO Summit 2016 (Mindtree Ltd.)

Published on Nov 8, 2016

With Artificial Intelligence (AI) gaining credence, Mindtree’s Chairman KK talks about the evolving roles of people and the importance of fostering emotional quotient (EQ) to remain relevant. He elaborates upon how Mindtree is helping its retail, finance, travel and hospitality clients reimagine customer service, the area most touched by AI and automation.

How Virtual Humans Learn Emotion and Social Intelligence (Tested)

Published on Aug 26, 2016

At USC ICT’s Virtual Humans lab, we learn how researchers build tools and algorithms that teach AI the complexities of social and emotional cues. We run through a few AI demos that demonstrate nuanced social interaction, which will be important for future systems like autonomous cars.

Shot by Joey Fameli and edited by Tywen Kelly
Music by Jinglepunks

Stanford Seminar: Building Machines That Understand and Shape Human Emotion (stanfordonline)

Published on Feb 3, 2017

Jonathan Gratch, Research Professor of Computer Science and Psychology at the University of Southern California (USC)

Affective Computing is the field of research directed at creating technology that recognizes, interprets, simulates and stimulates human emotion. In this talk, I will broadly overview my fifteen years of effort in advancing this nascent field, and emphasize the rich interdisciplinary connections between computational and scientific approaches to emotion. I will touch on several broad questions: Can a machine understand human emotion? To what end? Can a machine “have” emotion, and how would this impact the humans that interact with it? I will address these questions in the context of several domains, including healthcare, economic decision-making and interpersonal-skills training. I will discuss the consequences of these findings for theories of intelligence (i.e., what function does emotion serve in human intelligence and could this benefit machines?) as well as their practical implications for human-computer, computer-mediated and human-robot interaction. Throughout, I will argue the need for an interdisciplinary partnership between the social and computational sciences around the topic of emotion.


 

 


Car Wars: Intel Bags Mobileye

Intel is acquiring autonomous driving company Mobileye in a deal valued at $15.3 billion, expected to close toward the end of this year. The acquisition of the Israeli firm, whose technology is used by 27 companies in the auto industry, raises a number of interesting questions in the self-driving vehicle technology race.

Intel has been pursuing autonomous vehicle technology, but this initiative, one of the 10 largest acquisitions in the tech industry, brings it front and center. The key to Mobileye’s autonomous solution lies in its silicon. Mobileye has developed its EyeQ® family of system-on-chip (SoC) devices, which support complex and computationally intense vision processing while still maintaining low power consumption. Mobileye is currently developing its fifth-generation chip, the EyeQ5, to act as the visual central computer for fully autonomous self-driving vehicles expected to appear in 2020. The EyeQ chips employ proprietary computing cores optimized for computer vision, signal processing, and machine learning tasks, including deep neural networks. These cores are designed specifically to address the needs of Advanced Driver Assistance Systems (ADAS).
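The vision and deep-learning workloads such chips accelerate are dominated by a few dense operations, 2D convolution chief among them. The sketch below, in plain Python for readability, illustrates the operation itself on a tiny synthetic frame; it is purely pedagogical and is not Mobileye's implementation, which runs on dedicated hardware cores.

```python
# Illustrative sketch of the 2D convolution at the heart of the vision and
# deep-neural-network workloads that vision SoCs accelerate in hardware.

def conv2d(image, kernel):
    """Valid-mode 2D convolution of a grayscale image with a small kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# A Sobel-style vertical-edge kernel applied to a frame that is dark on
# the left and bright on the right: the output responds at the boundary.
frame = [[0, 0, 10, 10] for _ in range(4)]
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
edges = conv2d(frame, sobel_x)
```

A single camera frame requires millions of these multiply-accumulate operations per kernel, which is why general-purpose CPUs are a poor fit and specialized low-power cores matter so much for ADAS.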

Intel’s traditional role has been as a chip developer providing the building blocks for technology, and it is now moving forcefully in this direction, partly in response to growing competition in embedded machine learning from the likes of Nvidia and Qualcomm, both of which are also moving into the autonomous vehicle area. Self-driving cars are the nexus of development in machine learning because of the huge profit expectations of the automobile, transportation, and logistics industries. The evolution of autonomous vehicles, particularly with deep learning capabilities in silicon, will increase the pressure on artificial intelligence talent across all industry sectors, while spurring innovation and speeding the development of these systems.

Intel intends to form an autonomous driving unit combining its current Automated Driving Group (ADG) and Mobileye. The group will be headquartered in Israel and led by Mobileye’s co-founder Amnon Shashua, currently Mobileye’s Chairman and CTO and a professor at Hebrew University.

According to the combined press release:

This acquisition will combine the best-in-class technologies from both companies, spanning connectivity, computer vision, data center, sensor fusion, high-performance computing, localization and mapping, machine learning and artificial intelligence. Together with partners and customers, Intel and Mobileye expect to deliver driving solutions that will transform the automotive industry.

The new organization will support both companies’ existing production programs and build upon the large number of relationships that Mobileye maintains with OEMs, automobile industry tier 1 suppliers, and semiconductor partners.

Intel’s interests in this deal are likely to be diverse. Among the potential benefits are:

  • Potentially four terabytes of data per hour to be processed, creating large-scale opportunities for Intel’s high-end Xeon processors and Mobileye’s latest generation of SoCs.
  • Basing the unit in Israel, where Intel already has a significant presence, potentially isolates its research and testing from the competitive hotbed of Silicon Valley, shielding employees from poaching. It also avoids current immigration issues.
  • Additional competitive advantages within AI and embedded deep learning, which are the next-generation targets of Intel’s silicon competition.
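The four-terabytes-per-hour figure is easier to appreciate as a sustained throughput requirement. A quick back-of-the-envelope check (assuming decimal terabytes, purely for illustration):

```python
# Back-of-the-envelope: what 4 TB of sensor data per hour means as a
# sustained processing rate (decimal units assumed for illustration).

tb_per_hour = 4
bytes_per_hour = tb_per_hour * 10**12
seconds_per_hour = 3600

gb_per_second = bytes_per_hour / seconds_per_hour / 10**9
print(f"{gb_per_second:.2f} GB/s sustained")  # prints "1.11 GB/s sustained"
```

Over a gigabyte per second of sensor data, per vehicle, for every hour of driving helps explain why both high-end server processors and specialized in-vehicle silicon figure in Intel's calculus.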

It is worth noting that this is a general boost to autonomous vehicles that will inevitably lead to greater concentration of resources in this field. Although relying upon a common supplier of autonomous systems makes sense economically, it also reduces competitive differentiation among automakers.

The number of companies involved continues to grow as the implications stretch across the entire transportation-related sector. We have covered a number of these systems in recent blogs here (Car Wars: DiDi Chuxing Roars into the Valley with 400 Million Users; Car Wars: Ford Adds Billion Dollar Investment Acquisition to its AI; All Things Need Autonomy: TomTom Grabs Autonomos; Uber Doubles Down on AI with Geometric Acquisition; Qualcomm Buys NXP for IoT and Cars). The net result will be a huge rush for talent in the machine learning space, as well as in all of the areas related to integration with automobile systems. This will speed the evolution of embedded AI, which will filter rapidly into other areas of business and process enablement, though impeded by the availability of talent.