Artificial Intelligence: Toward a Theory of Minds

One thing that differentiates humans from animals is a concept called “Theory of Mind”. Theory of Mind describes how individuals are able to understand what other humans are thinking and feeling so that they can respond appropriately. This idea is increasingly important for implementations of artificial intelligence. Theory of Mind is an understanding of the intellectual and emotional state, motivations, and probable actions of another individual, based upon internal hypotheses about how that individual must be thinking. It demands creating a model of another mind. To date, this concept has been considered mainly in the realm of developing humanoid robots, but it has important consequences across all of AI.

Why is this important? It has two components with different effects. First, it is about developing an AI Theory of Mind to promote more precise interaction with human beings. Intelligent machines will need to anticipate human responses to understand the broader context of questions and required actions. Second, it is about having a Theory of Mind for each AI that we, as humans, encounter (a Theory of AI Mind). This will become increasingly important for interactions between people and machines.

AI Theory of Minds

AI Theory of Human Mind

The next evolution of AI will require each AI to have a Theory of Human Mind, and an ability to understand and act according to human characteristics, thought patterns, and biases. This is part of current development efforts. It is difficult, however, since it requires machines to understand something that we strain to understand ourselves. Philosophers have struggled endlessly with our inability to externalize human thought or obtain an objective understanding of it. Still, as with the Turing test, the attribution of a Theory of Mind is proven if the machine is able to anticipate human thoughts, actions, or desires and act upon them in a way that signifies understanding rather than pre-programmed behavior.

Theory of Mind is likely to emerge as a clear differentiator between generations of AI, and an important milestone along the way to true general intelligence. It will also mark the possibility of full collaboration with humans, with each side capable of understanding the possible actions of the other.

Human Theory of AI Mind

The other side of the coin is the more immediately important human Theory of AI Mind. We are intuitively aware of this when we question a digital assistant. Before making a query, we consider the “intelligence” of the machine and its probable response, and frame questions to produce a useful answer. We then evaluate the response based on the same model. If it is a weak or very limited AI, this is taken into account; if it is a strong AI, then questions of capability, bias, focus, and domain come into play. We create a model of AI response. Theory of AI Mind is also important in considering responsibility for an action, a concept which will be extremely important for insurance in applications such as autonomous vehicles.

Theory of AI Mind will become increasingly important for user interfaces as we move toward ubiquitous AI. As machines grow “smarter”, we will expect more of them, and more differences between them. As with our understanding of human Theory of Mind and personality, we will hold mental models of specific AIs and interact with them according to those models.

Implications for Theories of General Intelligence

With further development of both sides of AI Theory of Mind will come a greater understanding of how people interact with other intelligent entities, including other humans. AI provides an intelligent “Other” that is not necessarily subject to the biases of human thought. People have inbuilt biases of many types and from many sources, including heredity, society, brain function, mental limitations, the conscious/subconscious split, and innumerable other factors. If we abstract intelligence from this matrix, it becomes more possible to understand the specific things which make human intelligence special. For the first time, we may be able to step aside from human capabilities and reach a broader understanding of intelligence itself.

From a technology perspective, Theory of Mind will become a competitive factor in AI development. Machines of limited intelligence, creativity, or capability will be perceived as such and will have caveats placed upon their operation. This is similar to human interactions with animals, which are perceived as having limited intelligence and fixed responses to many stimuli. Competition to meet such perceptual criteria will fuel the race toward artificial general intelligence. It will also drive development of intelligence toward more human-like interaction. Some of this is already happening as we move toward intelligent humanoids which are made to appear like rational creatures and satisfy some of the requirements of a Theory of Mind.

Conclusion

Theory of Mind is likely to become more critical in coming years. In a sense, it provides an evaluative concept for AI which steps beyond the Turing Test. In the Turing test, it is possible to create a nebulous interaction which appears on the surface to be somewhat human and is able to fool some individuals. But if a machine is able to interact on an intellectual level and provide the framework for an adequate Theory of Mind, then it becomes something more than just a machine with interesting and creative responses. Perhaps a Theory of Mind is the basis for consciousness, or a preliminary step in that direction. But in any case, this criterion, commonly used to differentiate between humans and animals, must be turned on its head to distinguish between humans and AI.

Quantum AI: The Video

The frequently touted advantages of quantum computing are coming closer to reality as early systems begin to appear and researchers begin to explore software. Since this developing technology focuses upon solutions for very large data sets and complex calculations, AI seems a natural application. In reality, the situation is more nuanced. There are, indeed, possibilities—particularly in machine learning—but they may demand a new approach and new types of models.

Quantum AI is now being actively pursued. It could have tremendous implications for solving complex and intractable problems, and could even bring researchers closer to a general AI. At the very least, it will provide competitive advantages to first movers who are able to harness its possibilities. It is still too early to determine how much can be gained over conventional and specialized neuromorphic processors, but recent developments are making it possible to explore this area, and interest is beginning to grow.

In the following videos, we explore quantum AI from a variety of viewpoints, from business to technical detail. The videos are a mixture of presentations and discussions available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

The Race to Quantum AI (Space And Intelligence)

Published on Mar 11, 2017

The initial enthusiasm over AI has faded and the sci-fi scenarios are mostly over. Even with the emergence of new machine-learning techniques, the ultimate goal of the field—some form of general AI—remains a distant vision. Still, powerful machine learning is spreading into new industries and areas of daily life and will heighten attention on the unintended consequences that may result.

Quantum AI: The New Frontier in Artificial Intelligence (Welcome To The Future)

Published on Aug 17, 2017

A talk by Dr. Colin P. Williams, Director of Strategy & Business Development, D-Wave Systems.

Universal Deep Quantum Learning (QuICS)

Published on Oct 6, 2015

A talk on universal deep quantum learning given by Seth Lloyd (MIT) at the QuICS Workshop on the Frontiers of Quantum Information and Computer Science. Quantum systems can generate non-classical correlations that can’t be generated by classical systems. This talk investigates whether quantum devices can recognize patterns that can’t be recognized classically. Deep quantum learning architectures are compared with deep classical learning architectures, and conditions are identified under which universal deep quantum learners can recognize patterns that can’t be recognized by classical learners.

Google’s Quantum AI Lab (The Artificial Intelligence Channel)

Published on Sep 4, 2017

Hartmut Neven talks about possible roles of quantum effects and subjective experience in Artificial Intelligence.

AI in Finance: The Video

Financial technology (FinTech) serves a traditionally conservative industry, but one that has been using machine learning for some time. The industry is now on the verge of a technology explosion, as AI and blockchain create unique challenges across the sector. AI can provide advantages in risk analysis, fraud detection, and marketing, as well as in predicting market behaviour. Every area of finance has specific interests and risks in moving forward with these projects, from ML-driven hedge funds with limited transparency to credit profiling with potential regulatory issues.

As with many other sectors, marketing and customer relations are the first areas in which AI will make its mark. In finance, however, there are numerous areas in which AI will have important repercussions. Following are several videos that look at aspects of FinTech AI.

Finance is one of several vertical industries that we will look at in this series, as we explore the ongoing issues when AI technologies are incorporated into businesses, studies, and lives.

The videos are a mixture of presentations and discussions available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

Why AI is the Future of FinTech (BootstrapLabs)

Published on Jul 24, 2017

The latest innovation and the positive impact of Artificial Intelligence technologies from the Applied AI Conference, an event for people who are working, researching, building, and investing in Applied Artificial Intelligence technologies and products.

Panel Moderator: Jean-Baptiste Su, Principal Analyst, Atherton Research & FORBES Technology Columnist

Speakers:
Parth Vasa, Head of Data Science, Bloomberg Engineering, Bloomberg LP
Massimo Mascaro, Director, Data Engineering and Data Science, Intuit
Sangeeta Chakraborty, Chief Customer Officer, Ayasdi
Mark Nelsen, Senior Vice President of Risk and Authentication Products, Visa

AI and the Future of Finance (IIF)

Published on Oct 15, 2017

Perspective from the Institute of International Finance.

IBM Watson on How Cognitive Computing & Artificial Intelligence Are Transforming Financial Services (LendIt Conference)

Published on Mar 7, 2017

IBM Watson Group’s Brian Walter shows how ‘Cognitive Computing & Artificial Intelligence Are Transforming Financial Services’ at LendIt USA 2017 in New York City.

LendIt USA is the world’s biggest show in lending and fintech.

The Future of Corporate Finance: Automation powered by SAP Leonardo Machine Learning (SAP)

Published on Jul 20, 2017

Leverage next generation automation technologies to significantly increase the level of automation in your Shared Services Organization and drastically increase the efficiency of your Financial Shared Services staff. Smart automation with machine learning is self-learning and continuously improving, thus eliminating maintenance efforts. Staff can move away from daily routines and focus on strategic tasks such as growth & planning. SAP Leonardo Machine Learning can be a key enabler.

AI Centers Go Global, but SV Still Runs the Show

As artificial intelligence becomes increasingly important for business and AI data scientists become scarcer, there is growing international competition to create AI centers around the world. AI hubs bring together academic research and industry, feed innovation, and attract experts. Silicon Valley in the United States remains the leading contender with its concentration of IT companies and venture capitalists. In the US, the likely runner-up is Boston, home to Harvard, MIT, and Boston Dynamics’ famed robots.

The US is not the only place that is focusing upon AI, however. China has expressed a desire to be number one in AI, a first mover with significant research and innovation. It is investing heavily in this sector, and time will tell whether it can achieve this goal, which would be difficult unless US tech industry dynamics change. Other countries have lesser goals, such as specializing in AI within niche markets (automobiles, for example). Some hubs are also centered around particular institutions, government centers, industrial research labs, or specific researchers. It is notable that McKinsey Global Institute (MGI) studies show that in 2016, the US received 66% of external AI investment (40% in Silicon Valley alone), with China second at 17% and others more distant.

Following are some of the top and upcoming AI centers around the world.

China

China intends to become a dominant player in artificial intelligence, hoping to create a $150 billion industry by 2030. China’s focus is important due to its huge online population of over 750 million people, its tech-savvy citizens, and a government interested in using AI to build efficiencies in its cities. China is already estimated to have the biggest AI ecosystem after the United States, although it is still dwarfed in spending and number of companies. With the US cutting back in many areas, China believes that it can surge ahead in technologies such as AI to gain an edge.

One important center for Chinese AI is Shanghai’s Xuhui District. Shanghai plans to build a number of AI development centers and recently hosted the 2017 Global Artificial Intelligence Innovation Summit. The Xuhui AI ecosystem and new innovation center are expected to be completed around 2020. The district currently has more than 120 scientific research facilities, 10 institutions of higher learning, and thousands of laboratories and R&D centers. Shanghai will also be creating AI development hubs in its Jingan, Yangpu, Jiading, and Baoshan districts. The focus of its AI efforts will include big data, cloud computing, the Internet of Vehicles, and robots. Chinese companies pursuing AI ventures are led by the BAT companies of Baidu, Alibaba, and Tencent, plus branches of American companies such as Google, Microsoft, and IBM.

Canada

Canada is focusing upon artificial intelligence at the government level. The key city to watch at the moment is Montréal, though Toronto also has AI activity. Montréal has academic resources at the Université de Montréal and McGill University, plus a growing range of companies locating AI facilities there to take advantage of local talent. Google has an AI lab in Montréal focusing on deep learning related to the Google Brain project. It is also investing more than $3 million USD in the Montreal Institute for Learning Algorithms (MILA), a joint research group created by McGill and the Université de Montréal. Microsoft has also invested in Montréal’s burgeoning AI sector, supporting a tech incubator called Element AI from the Institute for Data Valorization (IVADO). Microsoft is growing research and design in Montréal and investing $7 million in Montréal’s academic community in pursuit of AI. The Université de Montréal has 100 researchers in deep learning and McGill has 50. Montréal has one of the biggest concentrations of university students of all major North American cities and is a Canadian leader in university research.

Europe

In Europe, London is the top AI center, supporting a Google presence through the acquisition of British-based DeepMind in 2014. It was a DeepMind program that defeated a professional Go player and made headlines a few years ago. London is also home to startups such as Tractable for visual recognition and VocalIQ, a self-learning dialogue API acquired by Apple. London also hosts the Leverhulme Centre for the Future of Intelligence (CFI), an AI research center opened by Stephen Hawking that is a collaboration between Cambridge University, Oxford, Imperial College London, and the University of California at Berkeley.

In the European Union, France is also pursuing AI. The French have more than 100 startups leveraging AI in a variety of applications. France has one of the world’s largest machine learning and AI communities, although its best and brightest are often hired by global tech firms. While London is still considered the leading European AI hub, Paris has been making some inroads. Overseas interest includes Facebook’s global AI research center in Paris and Google’s acquisition of the machine learning startup Moodstocks. French universities and research institutions involved with AI include the French Institute for Research in Computer Science and Automation and the French National Center for Scientific Research (CNRS). Available talent includes 4,000 members of the Paris Machine Learning Group, home of the open source AI code library scikit-learn. Among companies, Valeo has recently launched the first global research center in AI and deep learning dedicated to automotive applications.

Asia

In Asia, Singapore is also building an AI hub, which matches its ongoing interest in advanced computing and networking. Singapore is building a smart city architecture based on IBM Watson, and IBM is working with the National University of Singapore (NUS) to offer a Watson-based cognitive computing education program. Singapore’s National Research Foundation (NRF) is investing up to $107 million USD over five years in AI.SG, a national program to promote AI adoption. Startups include Teqlabs, a new innovation lab focusing on initiatives such as API integration for the financial technology industry. A new artificial intelligence incubator has been announced by private investment firm Marvelstone Group, which plans to build 100 AI startups per year and attract global AI talent to Singapore.

In the End…

Silicon Valley remains very much in the lead. These are just some of the active global AI centers in a dynamic that includes continuous poaching of AI talent and acquisition of promising startups. It is unlikely that this will change soon, even with current developments in global trade and immigration. But hubs outside of SV will continue to drive AI forward, developing talent and new ideas.

AutoML: The Automation of Automation

Machine to Machine: AI makes AI

The next Big Thing in AI is likely to be the use of machine learning to automate machine learning. The idea that routine tasks involved in developing a machine learning solution could be automated makes perfect sense: machine learning development is a replicable activity with routine processes. Although total automation is improbable at the moment, even partial automation yields significant benefits. As the ranks of available machine learning experts grow thinner, the ability to automate some of their time-consuming tasks means that they can spend more time on high-value functions and less on the nitty-gritty of model building and reiteration. This will, in theory, free more data scientists to work on the vast number of projects envisioned in a ubiquitous AI environment, as well as making it possible for less proficient practitioners to use machine learning routines without extensive training.

Although automated machine learning (AutoML) is appealing, and startups have emerged with varying promise, it is clear that this capability is not yet fully developed. There are, however, numerous systems that are suitable now for use in selected environments and for solving specific problems. Among these programs are Google AutoML, DataRobot, Auto-WEKA, TPOT, and auto-sklearn. Many are open source or freely available. Major firms, including Google, Microsoft, Salesforce, and Facebook, are also rapidly developing AutoML routines, and the area is being approached with increasing urgency.

Current AutoML programs mainly take care of the highly repetitive tasks that machine learning requires to create and tune models. The chief current automation targets are selection of appropriate machine learning algorithms, tuning of hyperparameters, feature extraction, and iterative modeling. Hyperparameter tuning is particularly significant because it is critical to deep neural networks. Some AutoML routines have already demonstrated improvements over manual human performance in some of these areas. Other processes that could support AutoML, such as data cleaning, are aided by a separate body of machine learning tools that could be added to AutoML. In fact, AutoML itself exists in a larger context of applying ML to data science and software development, an effort that shows promise but remains at an early stage.
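
To make this concrete, here is a minimal sketch of automated algorithm selection and hyperparameter tuning using TPOT, one of the open-source tools named above. The dataset and search settings are illustrative assumptions rather than a recommended configuration.

```python
# Minimal AutoML sketch with TPOT: genetic programming searches over
# scikit-learn pipelines (preprocessing + estimator + hyperparameters),
# replacing much of the manual model selection and tuning loop.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Small, illustrative search budget; real runs use far more generations.
tpot = TPOTClassifier(generations=5, population_size=20,
                      verbosity=2, random_state=42)
tpot.fit(X_train, y_train)

print(tpot.score(X_test, y_test))       # accuracy of the best pipeline found
tpot.export('best_pipeline.py')         # emits the winning pipeline as plain scikit-learn code
```

Tools such as auto-sklearn expose a broadly similar fit/predict workflow, differing mainly in how they search the space of candidate pipelines.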

Even with the recent focus on AutoML, the capability of these programs has yet to reach the stage where they could be relied upon to achieve a desired result without human intervention. Data scientists will not lose their jobs in the near future; as others have pointed out, humans are still required to set the objectives and verify the results of any machine learning operation. Transparency is also of the utmost importance in determining whether a model is accurately selecting useful results or has settled upon an illusion of accuracy.

Currently, many AutoML programs have operated successfully in test cases, with problems emerging as the size of the data set rises or the required operation becomes more complicated. An AutoML solution must not only embrace a wide range of ML models and techniques, but at the same time handle the large amount of data that will be used for testing through innumerable iterations.

AutoML, and indeed other forms of automated data science, are likely to continue to advance. It makes sense that machines should add a layer of thinking about thinking on top of the specific task layer. A machine-driven approach to developing the automation of automation makes sense, not only in reducing the human component, but also in ensuring that there is capability to meet the demands of an ever-expanding usage of AI in business. Arguably, development of a more autonomous AutoML would be an important step toward Artificial General Intelligence.

Improvement in this area is likely to be swift, due to the urgency of the data scientist shortage at a time when all companies are advised to invest in AI. An ambitious DARPA program, Data-Driven Discovery of Models (D3M), aims to develop techniques that automate machine learning model building from data ingestion to model evaluation. It began in June 2016 and is furthering interest in AutoML approaches. Among AutoML startups, one standout is DataRobot, which recently raised $54 million, bringing its total funding to $111 million. Meanwhile, there is a growing body of research in academia, as well as within corporate research teams, focusing upon how to crack a problem that could create something like a user-friendly machine learning platform.

AI in Medicine: The Video

Applications for AI and machine learning have blossomed recently in the medical and healthcare sectors, providing new opportunities and possibilities across everything from medical image recognition to research and diagnostics. While covering this vast territory in brief is impossible, a small sample of current developments and thinking in this area is helpful in understanding the current state of AI.

Healthcare is one of several vertical industries that we will look at in this series, as we explore the ongoing issues when AI technologies are incorporated into businesses, studies, and lives.

The videos are a mixture of presentations, seminars, and discussions available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

Artificial Intelligence in Medical Research and Applications (IJCAI Video Competition)

Published on Aug 21, 2017

In various medical fields and healthcare, we are facing an astonishingly serious problem—that is, we are drowning in heterogeneous patient data while starving for expert knowledge for interpretations. To assist medical practitioners in detecting, diagnosing, and treating various medical conditions, groups of computer science researchers combine domain experts’ intelligence with artificial intelligence by building computational models for the Big Medical Data available. This video demonstrates the research and applications of artificial intelligence by showcasing three application domains: dermatology, cardiology, and psychology.

By Xuan Guo, Akshay Arun, Prashnna Gyawali, Sandesh Ghimire, Erin Coppola, Omar Gharbia, Jwala Dhamala. Rochester Institute of Technology.

Man, Machine, and Medicine: Mass General Researchers Using AI (NVIDIA)

Published on Dec 7, 2016

Researchers at Mass General Hospital are using artificial intelligence and deep learning to advance patient care.

Keynote Speech: Artificial Intelligence in Medicine (CoMST 學術分享頻道)

Published on Oct 24, 2017

Speaker: Leo Anthony Celi MD MS MPH
MIT Institute for Medical Engineering and Science
Beth Israel Deaconess Medical Center, Harvard Medical School.

Precision Medicine and Drug Discovery (AIMed)

Published on Jan 13, 2017

Smart Farming with AI and Robotics: The Video

Following up on the previous post about AI and robotics in agriculture (Agricultural Robots and AI in Farming: Promises and Concerns), it seemed appropriate to provide some videos on this fascinating and highly significant area. Agricultural robots face substantial challenges in handling an enormous variety of tasks; they also need to take special care in handling plants, produce, and animals. Agriculture is a critical area of development that often goes unnoticed in the Industry 4.0 story. But these solutions are exploding and are likely to have enormous effects upon employment, finance, and society.

The videos are a mixture of talks, presentations, teaching material and product demonstrations available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

The Future of Farming with AI: Truly Organic at Scale (Machine Learning Society)

Published on May 17, 2017

A talk by Ryan Hooks, CEO & Founder, Huxley. Weblink with slides at http://www.mlsociety.com/events/the-f…

As climate change and global demographics begin to put excessive strain on the traditional farming model, the need for an agriculturally intelligent solution is vital. By 2050, the world population will increase by over 2 billion people. Current crop yields and freshwater resources will not be sufficient to sustain a population of over 9 billion people.

On May 15th 2017, the Machine Learning Society hosted this event to showcase high tech farming techniques used in vertical and urban farming. Our keynote speaker is Ryan Hooks of Huxley. Huxley uses computer vision, augmented reality (AR), and A.I. to greatly improve yield, while driving down the cost and resource requirements. Huxley is creating an “operating system for plants” to grow with 95% less water, 2x the speed, 2/3 less carbon output, and half the nutrients needed.

Automation, Robotics & Machine Learning in Agriculture (Blue River Technology)

Published on May 13, 2016

Keynote presentation by Ben Chostner, VP Business Development of Blue River Technology, at the May 2016 Agri Investing Conference in Toronto.

Farmers: These ARE the Droids You’re Looking For (New Scientist)

Published on May 18, 2016

Autonomous robots created at the University of Sydney can count fruit on trees, spray weeds, and even herd cows.  All pictures courtesy of Professor Salah Sukkarieh, University of Sydney, Australia.

Robots Take Over the Dairy Farm (mooviechannel)

Published on Jan 8, 2015

Gareth Tape of Hardisworthy Farm in Devon calls the technology ‘life-changing’ – both for him and his cows. Watch the video to find out why.

Robots and Drones: Agriculture’s Next Technological Revolution, NHK Documentary (Japan Tokyo)

Published on Jul 9, 2017

While still a student, Shunji Sugaya started an IT company focused on artificial intelligence and robots for use on the farms of the future. Agriculture in Japan faces serious challenges like an aging population and shrinking workforce. Sugaya imagines robots and drones that reduce labor demands and farms that are run using big data. Today we look at Sugaya and the young engineers at his company in their efforts to shape the future of agriculture and fishing with cutting-edge technology.

Agricultural Robots and AI in Farming: Promises and Concerns

With constant attention given to Industry 4.0, autonomous vehicles, and industrial robots, there is one significant area of robotics that is often under-reported: the growing use of autonomous agricultural robots and AI-driven smart systems in agriculture. Although automation has been practiced on the farm for many years, it has not been as widely visible as its cousins on the shop floor. But the technology being deployed on farms today is likely to have far-reaching consequences.

We are on the edge of an explosion in robotics that will change the face of agriculture around the world, affecting labor markets, society, and the wealth of nations. Moreover, developments today are global in extent, with solutions being created in the undeveloped world as well as in the developed world, stretching across every form of agriculture from massive row crop agribusiness and livestock management, down to precision farming and crop management in market gardens and enclosed spaces.

Agriculture Robots Today

Agriculture is vital to the health of the ever-expanding human population, and to the wide range of interrelated industries that make up the agricultural sector. Processes include everything from planting, seeding, weeding, milking, herding, and gathering, to transportation, processing, logistics, and ultimately to the market—be it the supermarket or, increasingly, online retail. The UN predicts that world population will rise from 7.3 billion to 9.7 billion by 2050. Robots and intelligent systems will be critical in improving food supplies. Analyst firm Tractica estimates that shipments of agricultural robots will increase from 32,000 units in 2016 to 594,000 units annually by 2024, when the market is expected to reach $74.1 billion per year.

While automation has been in place for some time and semi-autonomous tractors are increasingly common, farms pose particular problems for robots. Highway travel is difficult enough for autonomous vehicles, but agricultural robots must also respond to unforeseen events and handle and manipulate objects within their environment. AI makes it possible to identify weeds and crops, discern crop health and status, and take action delicately enough to avoid damage in tasks such as picking. At the same time, these robots must navigate irregular surfaces and pathways, find locations on a very fine scale, and sense specific plant conditions across the terrain.
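
As an illustration of the kind of perception task involved, the sketch below fine-tunes a pretrained image classifier to distinguish crop from weed images with tf.keras. The directory name, image size, and training settings are hypothetical assumptions, and a field-ready system would need far more than this.

```python
# Hedged sketch: transfer learning for crop-vs-weed image classification.
# "field_images/" is an assumed folder with "crop/" and "weed/" subfolders.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "field_images/", label_mode="binary",
    image_size=(160, 160), batch_size=32)

# Reuse ImageNet features; only the small classification head is trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # crop (0) vs. weed (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=3)   # illustrative training budget only
```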

Agricultural robots using AI technologies are responding to economic pressures in the agricultural sector as well as to rising labor costs and immigration restrictions. The first areas of general impact are in large businesses, since robots have high investment and maintenance costs and there is a lack of skilled operators. Conditions will change as robots become cheaper, more widely available, and capable of performing more diverse tasks. This will require evolution of AI technologies, expansion of collaboration abilities among robots, and combined man-machine operations. The ability of robots to work with humans could be particularly significant due to the wide range of discrimination tasks involved in food safety, quality control, and weed and pest removal. Robots will be guided by human supervisors with skills in agriculture and knowledge of robotic and agricultural systems.

Opportunities and Growth

According to the International Federation of Robotics, agricultural robots are likely to be the fastest growing robotics sector by 2020. Different sectors of the agricultural market will respond differently. Large businesses with row crops are early adopters, since they have funds to invest and shrinking margins. For these companies, there are huge benefits in reducing labor costs and instituting more precise farming methods. As picking, weed-killing, and pest-removal systems become more widely available, citrus orchards and difficult-to-pick crops are likely to be next. Robots capable of picking citrus, berries, and other delicate fruit in difficult locations are already starting to appear. There are applications in virtually every part of the agricultural sector.

Other uses will appear as robots become more common and less expensive. Robots can make a difference not only in harvesting, but also in the precise application of water and fertilizer. In areas where water is contentious, such as California and the Middle East, more efficient watering will make it possible to grow larger crops with greater efficiency and less water, helping to avoid political and social crises.

In the developing world, opportunities are enormous but individual farmers have fewer resources. For this reason, smaller robots and robot clusters are likely to be more widely used, possibly with emerging Robot-as-a-Service (RaaS) operators providing labor on a per-usage or rental basis. Robotics will be able to save enormously on chemical costs and water used for irrigation, which will have significant economic impacts as well as environmental benefits.

Progress and Caution

As the use of agricultural robots continues to expand, they will take on increasingly complex tasks and replace a larger portion of agricultural labor, a critical component of global employment. In many countries, particularly in the developing world, this will create shifts in employment that will accelerate trends such as rural migration to cities and reduce the overall availability of labor-intensive jobs. More training will be needed by more people; this will impact education, socialization, and finance, particularly in countries with large populations.

There are many implications as AI and agricultural robots are deployed. New ideas are blossoming, startups are on the rise, and we can expect a wide range of consequences as next-generation agricultural robotics and AI continue to emerge.

Machine Learning Nuts and Bolts, Wheels and Gears: The Video

Getting down to the nitty-gritty, it’s time to take a practical view of what is involved in setting up Machine Learning (ML) projects. There is a lot of material available, but it can be difficult to find accessible information that can guide you through the development and implementation maze. Here we look at how to access ML APIs, how to use TensorFlow, and common algorithms that you might wish to use for various purposes.

A more practical understanding gives you an edge in discussing possibilities, as well as pointing you in the right direction if you wish to add these skills to your programming arsenal.

The videos are a mixture of talks, presentations and teaching material available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

Basic Machine Learning Algorithms Overview – Data Science Crash Course Mini-series (Hortonworks)

Published on Aug 1, 2017

A high-level overview of common, basic Machine Learning algorithms by Robert Hryniewicz.

Hello World – Machine Learning Recipes (Google Developers)

Published on Mar 30, 2016

Six lines of Python is all it takes to write your first machine learning program! In this episode, we’ll briefly introduce what machine learning is and why it’s important. Then, we’ll follow a recipe for supervised learning (a technique to create a classifier from examples) and code it up.
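For readers who want to see what such a minimal program looks like, here is a sketch in the spirit of the episode: a scikit-learn decision tree trained on a toy fruit dataset. The feature values and labels are illustrative assumptions, not the video’s exact code.

```python
# A minimal supervised-learning "recipe" with scikit-learn.
# Features are [weight in grams, texture (1 = smooth, 0 = bumpy)];
# labels are 0 = apple, 1 = orange. Values are made up for illustration.
from sklearn import tree

features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]

clf = tree.DecisionTreeClassifier()   # the classifier learns a rule from examples
clf = clf.fit(features, labels)

print(clf.predict([[160, 0]]))        # predicts 1 (orange) for a heavy, bumpy fruit
```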

Deep Learning, Self-Taught Learning and Unsupervised Feature Learning (Stanford Graduate School of Business)

Published on May 13, 2013

Talk by Andrew Ng.

TensorFlow and Deep Learning without a PhD, Part 1 (Google Cloud)

Published on Mar 8, 2017

With TensorFlow, deep machine learning transitions from an area of research to mainstream software engineering. In this video, Martin Gorner demonstrates how to construct and train a neural network that recognises handwritten digits. Along the way, he’ll describe some “tricks of the trade” used in neural network design, and finally, he’ll bring the recognition accuracy of his model above 99%.

The content applies to software developers of all levels. For experienced machine learning enthusiasts, this video introduces TensorFlow through well-known models such as dense and convolutional networks. It is an intense technical video designed to help beginners in machine learning ramp up quickly.
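
For readers who want to experiment before watching, the following is a compact sketch of a digit recognizer in the same spirit, written against the current tf.keras API rather than the lower-level TensorFlow code used in the talk; the architecture and hyperparameters are illustrative.

```python
# A small MNIST digit classifier with tf.keras (illustrative settings).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(200, activation='relu'),
    tf.keras.layers.Dropout(0.25),                    # regularization
    tf.keras.layers.Dense(10, activation='softmax'),  # one output per digit
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)   # prints loss and accuracy on held-out digits
```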

TensorFlow and Deep Learning without a PhD, Part 2 (Google Cloud)

Published on Mar 10, 2017

Deep learning has already revolutionized machine learning research, but it hasn’t been broadly accessible to many developers. In this video, Martin Gorner explores the possibilities of recurrent neural networks by building a language model in TensorFlow. What this model can do will impress you.

Developers with no prior machine learning experience are welcome. We do recommend that you watch the previous video unless you already know about dense and convolutional networks and are only interested in recurrent networks.

This is an intense technical video designed to help beginners in machine learning ramp up quickly.

Challenges and Opportunities in the AI User Interface

While the general integration of artificial intelligence into business processes is likely to have a profound effect, one area common to all applications will be particularly impacted: the user interface (UI). We have already seen the beginning of AI in digital assistants such as Alexa (see Digital Assistants Coming of Age with Alexa) that interpret natural language commands and respond with speech or actions. While this type of Conversational Interface (CI) is likely to become ubiquitous, there are other significant UI impacts that may be at least as important.

Today, the capabilities of applications have grown geometrically, and complexity continues to increase. Meanwhile, mobile platforms have limited display space and fewer controls. This leads to a new need for simplification. The choices presented to the user in menu upon menu within office products, for example, can be exhaustive and make it difficult to locate specific actions. Yet there is great demand for additional processing to meet the needs of new use cases.

AI to the Rescue 

Natural language processing provides a start for simplifying access to the deeper complexities of programs by understanding commands and responding without a keyboard. But the conversational interface is also evolving as AI progresses. We are still at a rudimentary stage, harking back to the need to memorize commands in old TTY interfaces. What is needed is selection of actions based upon context. This is the beginning of a new form of interface in which AI is able to translate a human request for a desired result into a command sequence to one or more applications to perform tasks that will meet the user’s ultimate goal.

An AI User Interface

For example, someone might ask, “Should I go to the office today?” The system might then assess health information to determine if there is an illness; check the weather for extremes; check databases of current natural disasters that might be applicable; check holiday schedules; check company instructions; and so forth, all in an instant. But to do this, the AI needs a broader range of knowledge and data than is commonly available to current AI applications, and a capacity to understand what such a request might entail. In fact, as with many such puzzles, this begins to require an artificial general intelligence that can think across a wide range of territory rather than within the simple parameters of a question. The AI would think about thinking.
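
A hypothetical sketch of what such a request decomposition might look like is shown below. Every source, field, and function name here is invented for illustration; no existing assistant API is implied.

```python
# Illustrative only: an assistant decomposes a high-level question into
# checks against several context sources, then maps the result to an answer.
from dataclasses import dataclass

@dataclass
class Context:
    user_is_ill: bool
    severe_weather: bool
    natural_disaster_nearby: bool
    public_holiday: bool
    office_closed: bool

def gather_context() -> Context:
    # In a real system each field would come from a separate service
    # (health tracker, weather API, alerts feed, calendar, company intranet).
    return Context(user_is_ill=False, severe_weather=False,
                   natural_disaster_nearby=False, public_holiday=False,
                   office_closed=False)

def should_go_to_office(ctx: Context) -> str:
    """Translate the aggregated context into a recommendation."""
    if ctx.user_is_ill:
        return "No - stay home and rest."
    if ctx.severe_weather or ctx.natural_disaster_nearby:
        return "No - conditions outside are unsafe."
    if ctx.public_holiday or ctx.office_closed:
        return "No - the office is closed today."
    return "Yes - nothing in your context suggests staying home."

print(should_go_to_office(gather_context()))
```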

Such an interface demands situational awareness and an understanding of the overall context in which a question is posed. The AI needs to define the specifics of where information would be found; it also needs to understand how to convey and/or act upon that intelligence, and make complex choices.

Implications of an AI UI

As software continues to grow in complexity, it is certain that AI will provide a bridging interface between human and machine. This interface will become more flexible and more powerful, and it will evolve to perform more duties. Just as simplification of the UI makes it possible to perform complex tasks on the computer with only a basic understanding of operating principles, people will begin to interact with computers in a conversational way and evolve their information requests toward an increasingly sophisticated interaction. The growing sophistication of this interaction will feed the development of further AI UI capabilities.

Development of a sophisticated AI-based UI is not solely about natural language processing, however. All computer interactions can likely be improved by the addition of an AI layer. While conversation is a priority, AI will be able to reduce the clutter and confusion of menu-based interactions by providing selection based upon context, and the capability to interact based on desired results rather than tool selection. In effect, this is much like trends in software development such as the growing Agile and DevOps movements: writing software to meet specific customer needs is much better than coding around the technology. This same rule must apply to the actions of programs themselves.

Into the Future

AI will also be applied to the processes of developing artificial intelligence. We have already seen programs that read a user interface screen image and convert it directly into code. In the next iteration, we can expect to see AI solutions which turn actions into instructions that may feed multiple AI systems to further a desired operation. While such a general intelligence creates its own issues, it will arrive in incremental steps. Increasing complexity of AI routines, increasing integration, and increasing componentization will open the way for AI to operate across a broader range of data and make decisions about thinking and about interaction that can be applied across all computer systems.