AutoML: The Automation of Automation

Machine to Machine: AI makes AI

The next Big Thing in AI is likely to be the use of machine learning to automate machine learning itself. The idea that routine tasks involved in developing a machine learning solution could be automated makes perfect sense: machine learning development is a replicable activity with routine processes. Although total automation is improbable at the moment, even partial automation yields significant benefits. As available machine learning experts grow scarcer, the ability to automate some of their time-consuming tasks means they can spend more time on high-value functions and less on the nitty-gritty of model building and iteration. This should, in theory, free more data scientists to work on the vast number of projects envisioned in a ubiquitous AI environment, and make it possible for less proficient practitioners to use machine learning routines without extensive training.

Although automated machine learning (AutoML) is appealing, and startups have emerged with various promises, it is clear that this capability is not yet fully developed. There are, however, numerous systems suitable for use now in selected environments and for specific problems. Among these programs are Google AutoML, DataRobot, Auto-WEKA, TPOT, and auto-sklearn; many are open source or freely available. Major technology firms, including Google, Microsoft, Salesforce, and Facebook, are also developing AutoML routines, and this area is being approached with increasing urgency.

Current AutoML programs mainly take care of the highly repetitive tasks required to create and tune machine learning models. The chief automation targets are selection of appropriate machine learning algorithms, tuning of hyperparameters, feature extraction, and iterative modeling. Hyperparameter tuning is particularly significant because it is critical to deep neural networks. Some AutoML routines have already demonstrated advances over manual human performance in some of these areas. Other processes that could support AutoML, such as data cleaning, are aided by a separate body of machine learning tools that could be added to AutoML. In fact, AutoML itself exists in the larger context of applying ML to data science and software development, an effort that shows promise but remains at an early stage.
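Hyperparameter tuning is the easiest of these targets to illustrate. The sketch below uses scikit-learn's GridSearchCV, the kind of building block tools such as auto-sklearn and TPOT extend, to try candidate settings for a random forest by cross-validation. The grid and dataset are illustrative choices, not drawn from any particular AutoML product:

```python
# A minimal sketch of automated hyperparameter search: exhaustively
# cross-validate a small grid of candidate settings and keep the best.
# Full AutoML tools also search over the choice of algorithm itself.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

param_grid = {"n_estimators": [10, 50], "max_depth": [2, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=3)
search.fit(X, y)

print("best settings:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```

An AutoML system runs loops like this, at much larger scale, over many algorithms and many more hyperparameters, which is why the compute cost of iteration dominates.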

Even with the recent focus on AutoML, the capability of these programs has yet to reach the stage where they can be relied upon to achieve a desired result without human intervention. Data scientists will not lose their jobs in the near future; as others have pointed out, humans are still required to set objectives and verify results for any machine learning operation. Transparency is also of the utmost importance in determining whether a model is actually selecting useful results or has settled upon an illusion of accuracy.

Many AutoML programs have operated successfully in test cases, with problems emerging as the size of the data set grows or the required operation becomes more complicated. An AutoML solution must not only embrace a wide range of ML models and techniques; it must at the same time handle the large amount of data that will be used for testing through innumerable iterations.

AutoML, and indeed other forms of automated data science, are likely to continue to advance. It makes sense that machines should add a layer of thinking about thinking on top of the specific task layer. A machine-driven approach to developing the automation of automation makes sense, not only in reducing the human component, but also in ensuring the capability to meet the demands of ever-expanding use of AI in business. Arguably, development of a more autonomous AutoML would be an important step toward Artificial General Intelligence.

Improvement in this area is likely to be swift, due to the urgency of the data scientist shortage at a time when all companies are advised to invest in AI. There is an ambitious DARPA program, Data-Driven Discovery of Models (D3M), aimed at developing techniques that automate machine learning model building from data ingestion to model evaluation. Begun in June 2016, it is furthering interest in AutoML approaches. Among AutoML startups, one standout is DataRobot, which recently raised $54 million, bringing its total funding to $111 million. Meanwhile, there is a growing body of research in academia, as well as within corporate research teams, focused on how to crack a problem that could create something like a user-friendly machine learning platform.

Machine Learning Nuts and Bolts, Wheels and Gears: The Video

Getting down to the nitty-gritty, it’s time to take a practical view of what is involved in setting up Machine Learning (ML) projects. There is a lot of material available, but it can be difficult to find accessible information to guide you through the development and implementation maze. Here we look at how to access ML APIs, how to use TensorFlow, and common algorithms you might wish to use for various purposes.

A more practical understanding gives you an edge in discussing possibilities, and heads you toward the right track if you wish to add these skills to your programming arsenal.

The videos are a mixture of talks, presentations and teaching material available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

Basic Machine Learning Algorithms Overview – Data Science Crash Course Mini-series (Hortonworks)

Published on Aug 1, 2017

A high-level overview of common, basic Machine Learning algorithms by Robert Hryniewicz

Hello World – Machine Learning Recipes (Google Developers)

Published on Mar 30, 2016

Six lines of Python is all it takes to write your first machine learning program! In this episode, we’ll briefly introduce what machine learning is and why it’s important. Then, we’ll follow a recipe for supervised learning (a technique to create a classifier from examples) and code it up.
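The six lines referenced are along these lines: a scikit-learn decision tree trained on toy fruit examples. The exact feature values below are illustrative stand-ins:

```python
from sklearn import tree

# Toy training data: [weight_in_grams, texture]
# texture 1 = smooth, 0 = bumpy; labels 0 = apple, 1 = orange.
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]

clf = tree.DecisionTreeClassifier()
clf = clf.fit(features, labels)   # learn a rule from the examples
print(clf.predict([[160, 0]]))    # a heavy, bumpy fruit -> [1] (orange)
```

The point of the episode is that the classifier is learned from examples rather than hand-coded, which is the essence of supervised learning.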

Deep Learning, Self-Taught Learning and Unsupervised Feature Learning (Stanford Graduate School of Business)

Published on May 13, 2013

Talk by Andrew Ng.

TensorFlow and Deep Learning without a PhD, Part 1 (Google Cloud)

Published on Mar 8, 2017

With TensorFlow, deep machine learning transitions from an area of research to mainstream software engineering. In this video, Martin Gorner demonstrates how to construct and train a neural network that recognises handwritten digits. Along the way, he’ll describe some “tricks of the trade” used in neural network design, and finally, he’ll bring the recognition accuracy of his model above 99%.

Content applies to software developers of all levels. Even experienced machine learning enthusiasts will find an introduction to TensorFlow through well-known models such as dense and convolutional networks. This is an intense technical video designed to help beginners in machine learning ramp up quickly.
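TensorFlow itself is beyond a short snippet, but the core idea of the talk, training a small neural network to classify handwritten digits, can be sketched with scikit-learn's bundled 8x8 digits dataset. This is a stand-in for illustration, not the MNIST/TensorFlow code from the video:

```python
# Train a one-hidden-layer neural network to recognize handwritten
# digits, the same task the video tackles with TensorFlow on MNIST.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)   # 1797 images of 8x8 grayscale digits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", round(clf.score(X_test, y_test), 3))
```

Reaching the 99%+ accuracy Gorner demonstrates requires the convolutional layers and training tricks the video walks through; a plain dense network like this one plateaus lower.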

TensorFlow and Deep Learning without a PhD, Part 2 (Google Cloud)

Published on Mar 10, 2017

Deep learning has already revolutionized machine learning research, but it hasn’t been broadly accessible to many developers. In this video, Martin Gorner explores the possibilities of recurrent neural networks by building a language model in TensorFlow. What this model can do will impress you.

Developers with no prior machine learning experience are welcome. We do recommend that you watch the previous video unless you already know about dense and convolutional networks and are only interested in recurrent networks.

This is an intense technical video designed to help beginners in machine learning ramp up quickly.


Challenges and Opportunities in the AI User Interface

While the general integration of artificial intelligence into business processes is likely to have a profound effect, one area common to all applications will be particularly impacted: the User Interface (UI). We have already seen the beginning of AI in digital assistants such as Alexa (see Digital Assistants Coming of Age with Alexa) that interpret natural language commands and respond with speech or actions. While this type of Conversational Interface (CI) is likely to become ubiquitous, there are other significant UI impacts that may be at least as important.

Today, the capabilities of applications have grown geometrically, and complexity continues to increase. Meanwhile, mobile platforms have limited display space and fewer controls. This creates a new need for simplification. The choices presented to the user, in menu upon menu within office products for example, can be exhausting and make it difficult to locate specific actions. Yet there is great demand for additional processing to meet the needs of new use cases.

AI to the Rescue 

Natural language processing provides a start toward simplifying access to the deeper complexities of programs by understanding commands and responding without a keyboard. But the conversational interface is also evolving as AI progresses. We are still at a rudimentary stage, returning to the need to memorize commands much as with the old TTY interfaces. What is needed is selection of actions based upon context. This is the beginning of a new form of interface in which AI translates a human request for a desired result into a command sequence to one or more applications, performing tasks that meet the user’s ultimate goal.

An AI User Interface

For example, someone might ask, “Should I go to the office today?” The system might then assess health information to determine if there is an illness; check the weather for extremes; check databases of current natural disasters that might be applicable; check holiday schedules; check company instructions; and so forth, all in an instant. But to do this, the AI needs a broader range of knowledge and data than is commonly available to current AI applications, and a capacity to understand what such a request might entail. In fact, as with many such puzzles, this hints at a requirement for an artificial general intelligence that would think across a wide range of territory rather than within the simple parameters of a question. The AI would think about thinking.
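One crude way to picture such an interface is a dispatcher that decomposes a single request into many contextual checks. Everything in this sketch, the checks and their answers, is invented for illustration; a real assistant would call live health, weather, disaster, calendar, and company-policy services:

```python
# Each check returns (ok, reason). The assistant stays with a "go"
# answer only if no contextual check objects.
def should_go_to_office(checks):
    """Run every contextual check; report all objections found."""
    reasons = []
    for check in checks:
        ok, reason = check()
        if not ok:
            reasons.append(reason)
    return len(reasons) == 0, reasons

# Invented stand-in checks with hard-coded answers.
checks = [
    lambda: (True, "no illness reported"),
    lambda: (False, "severe weather warning in effect"),
    lambda: (True, "no natural disasters nearby"),
    lambda: (True, "not a company holiday"),
]

go, reasons = should_go_to_office(checks)
print(go, reasons)   # False ['severe weather warning in effect']
```

The hard part, of course, is not the dispatch loop but knowing which checks the question implies, which is where the general-intelligence requirement enters.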

Such an interface demands situational awareness and an understanding of the overall context in which a question is posed. The AI needs to define the specifics of where information would be found; it also needs to understand how to convey and/or act upon that intelligence, and make complex choices.

Implications of an AI UI

As software continues to grow in complexity, it is certain that AI will provide a bridging interface between human and machine. This interface will become more flexible and more powerful, and it will evolve to perform more duties. Just as simplification of the UI makes it possible to perform complex tasks on a computer with only a basic understanding of operating principles, people will begin to interact with computers conversationally, and their information requests will evolve to meet an increasingly sophisticated interaction. The growing sophistication of this interaction will in turn feed the development of further AI UI capabilities.

Development of a sophisticated AI-based UI is not solely about natural language processing, however. All computer interactions can likely be improved by the addition of an AI layer. While conversation is a priority, AI will also be able to reduce the clutter and confusion of menu-based interactions by providing selection based upon context, and the capability to interact based on desired results rather than tool selection. In effect, this parallels trends in software development such as Agile and DevOps: writing software to meet specific customer needs is much better than coding around the technology. The same rule must apply to the actions of programs themselves.

Into the Future

AI will also be applied to the processes of developing artificial intelligence itself. We have already seen programs that read a user interface screen image and convert it directly into code. In the next iteration, we can expect to see AI solutions that turn actions into instructions, which may feed multiple AI systems to further a desired operation. While such a general intelligence creates its own issues, it will arrive in incremental steps. Increasing complexity of AI routines, increasing integration, and increasing componentization will open the way for AI to operate across a broader range of data and make decisions about thinking and about interaction that can be applied generally across all computer systems.


Intersection of Blockchain and AI: The Video

AI and Blockchain are both topics that have generated significant interest recently as these technologies are incorporated into business processes. As with all digital technologies, however, they are not discrete. Maturing technologies often converge to develop entirely new possibilities, and blockchain combined with AI could create some potent results in fintech, accountability, and in infrastructure.

The following videos look at this potential for convergence from several perspectives.

The videos are a mixture of talks, presentations and teaching material available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

The Convergence of Blockchain and Artificial Intelligence (Patrick Schwerdtfeger)

Published on Sep 9, 2016

The fields of Blockchain and Artificial Intelligence are converging, and they will intersect soon. Artificial Intelligence and Machine Learning require vast amounts of data. That’s how they learn. Meanwhile, Blockchain allows for decentralized autonomous organizations which will soon involve hundreds of millions of people. Furthermore, platforms built on Blockchain technology will soon be powerful enough to support AI applications. At that point, AI could evolve very quickly and become, effectively, an unstoppable utility for the world’s population. No one knows exactly how this convergence will play out. Certainly, I do not either, but I know this is an exciting time and I plan on following the developments as they emerge. The results will most definitely affect us all.

Blockchains for Artificial Intelligence (PyData)

Published on Jul 26, 2017

This talk by Trent McConaghy describes the various ways in which emerging blockchain technologies can be helpful for machine learning / artificial intelligence work, from audit trails on data to decentralized model exchanges.

In recent years, big data has transformed AI, to an almost unreasonable level. Now, blockchain technology could transform AI too, in its own particular ways. Some applications of blockchains to AI are mundane yet useful, like audit trails on AI models. Some appear almost unreasonable, like AI that can own itself — AI DAOs (decentralized autonomous organizations) leading to the first AI millionaires. All of them are opportunities. Blockchain technologies — especially planet-scale ones — can help realize some long-standing dreams of AI and data folks.
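The “audit trails on AI models” idea is the easiest to make concrete. The sketch below is a toy hash chain in plain Python, not any particular blockchain platform: each entry commits to its predecessor's hash, so tampering with an earlier training or deployment record becomes detectable.

```python
import hashlib
import json

def append_record(chain, record):
    """Append a record whose hash commits to the previous entry,
    giving a tamper-evident trail of model lifecycle events."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify(chain):
    """Recompute every link; any altered record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
append_record(chain, {"event": "train", "dataset": "v1"})
append_record(chain, {"event": "deploy", "model": "m1"})
print(verify(chain))                      # True
chain[0]["record"]["dataset"] = "v2"      # tamper with history...
print(verify(chain))                      # ...and verification fails: False
```

A real blockchain adds decentralized consensus on top of this structure, so no single party can quietly rewrite the whole chain.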

Numerai – A Revolutionary Hedge Fund Built on Blockchain and AI (Epicenter)

Published on Jul 11, 2017

Numerai Founder Richard Craib discusses his radical project to build a hedge fund with network effects. Numerai manages its portfolio by giving its data in encrypted form to data scientists who compete to create the best predictions and get paid with cryptocurrencies. Numerai expects to radically alter the structure of the hedge fund and asset management industry.

Topics discussed in this Epicenter episode include:

  • Quantitative trading and the role of AI in investing
  • How Numerai uses crowdsourcing and AI to manage its portfolio
  • The function of Numerai’s own token Numeraire

Code is Not the Law: Blockchain Contracts and Artificial Intelligence (Aliensyntax)

Published on Oct 28, 2016

A presentation by Adam Kolber (Brooklyn Law School) from The Ethics of Artificial Intelligence conference that took place October 14-15, 2016. It was hosted by the NYU Center for Bioethics in conjunction with the NYU Center for Mind, Brain and Consciousness. Published by Livestream.com.


Microservices, Platforms, and the Infrastructure of AI: The Video

AI and Machine Learning are often at their best as composite services performing as part of a coordinated platform. In this brief set of videos, we look at AI APIs, microservices, platforms, and how they can be brought together to achieve larger goals. These are recent discussions in a rapidly evolving territory where exciting visions continue to emerge.

The videos are a mixture of seminars and discussions available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

Machine Learning APIs by Example (Google Cloud)

Published on Mar 9, 2017

Think your business could make use of Google’s machine learning expertise to power and improve your business applications, but you get stuck on building and training your own custom model? Google Cloud Platform (GCP) offers five APIs: Google Cloud Vision API, Cloud Speech API, Cloud Natural Language API, Cloud Translation API and Cloud Video API. These APIs access pre-trained machine learning models with a single API call. In this video, Sara Robinson shares an overview of each API, then dives into the code with a live demo.

Fireside Chat: Crafting the Future of Technology – Microservices & Platforms (Zinnov Management Consulting)

Published on Mar 30, 2017

Kevin Prendeville, Managing Director – PE & LS, Accenture
Pankaj Chawla, MD & CTO – Products and Platforms Engineering, Accenture
Peter Schmutzer, Director Purchasing, Intel
Siddhartha Agarwal, VP, Product Management & Strategy, Oracle

We are in the midst of a revolution in artificial intelligence: the art and engineering of getting computers to perform tasks that, until recently, required natural intelligence — in other words, people. We can speak to our phones, software can identify faces, and algorithms can teach themselves to play Atari video games. Top-of-the-line sedans have the ability to drive autonomously on the open road. Progress has been driven by machine learning techniques, such as deep convolutional networks, dating to the 1980s, by Moore’s law, and by continual improvements in machine architecture. Progress is so dizzying that a few futurists, technologists, and philosophers have publicly mused about where the swift ascension of machines is taking us. In this session, we will have a sneak peek into the future of AI and what it holds for us.

APIs and Artificial Intelligence (Google Cloud)

Published on Mar 9, 2017

A fundamental goal for every business is to keep users’ attention and focus on its products or services. Combining APIs with AI leads to better stickiness.

Having Fun with Robots Using Microservices on Docker and Kubernetes (Devoxx)

Published on Nov 10, 2016

Controlling and building a single robot is already a challenge, but how would that work if you want to have a swarm of robots interact with each other? How do we control and interact with them whilst all robots are slightly different?


Software Eats World: Autonomous Agents in Business Process Management (BPM)

Shhh! The robots are moving back home to software! (Not that they ever left.) Concepts forged in the IoT will become part of every other system. Digital components are easily connected and share a multitude of underlying principles, so concepts quickly move between unrelated disciplines, and all related technologies tend to converge.

While considering the IoT we have looked at autonomy, and what is required for devices such as automobiles and industrial robots to operate safely on their own, coordinated with other devices and working with humans (Autonomy INFOGRAPHIC, Challenges of Autonomy: The Video, Where the AI Rubber Hits the NHTSA Road: Letter to GM Describes State of Autonomy). We have reviewed the special challenges of autonomy, and how they are being solved to create efficient and effective systems. These capabilities are now destined to enrich other areas.

A key issue is how to apply this learning to business processes themselves. Autonomous robots and vehicles are extensions of digital processes. These processes are defined by software and engage numerous other systems toward the performance of a given end.

The development of an autonomous device requires layer upon layer of intelligence performance (see The Composite Nature of Embedded Cognition). Devices must be able to sense the activities surrounding them; they must have the ability to interact with their surroundings; and they must be able to provide a wide variety of actions that can be flexibly fitted to the accomplishment of a mission. All of these details might also be applied to strictly software systems.

A cognitive approach does not simply glue artificial intelligence to existing processes with the assumption that AI will provide the required result. Instead, we will see multiple AI systems used in conjunction with each other to perfect and deliver software solutions. Each routine will access a range of sensor and processing capabilities which will offer autonomy at the process level. Autonomous agents—software systems capable of specific actions in a larger whole—will then perform their functions as needed to achieve a desired end result. This is in line with a growing understanding of the composite nature of artificial intelligence. It also demands new forms of orchestration and new ways of providing AI capability.

An autonomous agent business process management solution will be able to sense when processes are required rather than responding to a fixed call within another software program. This means that processes will anticipate requirements and act early to create an efficient solution. They will act with a project manager’s understanding of when specific data or tasks need to be completed. Autonomous agents will be able to interact with other programs and bring a catalog of analytic, machine learning, predictive, and sensor-driven capabilities. This range of functional autonomy will need brokerage, data sharing, and orchestration. A collaborative framework will be required to ensure that components do not block each other and that the priority of specific tasks is respected.
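As a rough illustration of this sense-and-respond pattern, the following toy sketch has agents watch shared state for trigger conditions and run in priority order. The agent names, triggers, and priorities are all invented; a real orchestration layer would add brokerage, data sharing, and conflict resolution:

```python
import heapq

class Agent:
    """A toy autonomous agent: it senses shared state via a trigger
    condition and requests work at a given priority when it fires."""
    def __init__(self, name, trigger, priority):
        self.name = name
        self.trigger = trigger      # callable: state -> bool
        self.priority = priority    # lower number = more urgent

def orchestrate(agents, state):
    """Let each agent sense the state, then dispatch triggered agents
    in priority order -- a stand-in for the orchestration layer."""
    queue = [(a.priority, a.name) for a in agents if a.trigger(state)]
    heapq.heapify(queue)
    order = []
    while queue:
        _, name = heapq.heappop(queue)
        order.append(name)          # a real system would invoke the agent here
    return order

# Invented business state and agents.
state = {"invoice_backlog": 12, "stock_level": 3}
agents = [
    Agent("reorder_stock", lambda s: s["stock_level"] < 5, priority=1),
    Agent("process_invoices", lambda s: s["invoice_backlog"] > 10, priority=2),
    Agent("close_books", lambda s: s.get("quarter_end", False), priority=1),
]
print(orchestrate(agents, state))   # ['reorder_stock', 'process_invoices']
```

Note that no caller asked for either task: the agents noticed the conditions themselves, which is the difference from a fixed call inside another program.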

With an autonomous agent-based process management solution, response in a complex environment will be much faster and more effective than with a fixed system. Similarly, the cognitive capabilities of such a composite system would likely create new possibilities in the overall management and furtherance of larger goals. It would become possible to orchestrate all business processes and micromanage them at an atomic level through the ability to immediately activate an autonomous response from coordinated process components.

The further development of digital business, AI, autonomy, and cloud computing all tend in the direction of componentized autonomous agents. However, if we look for a timeframe, this will occur well in the future. We are now at the stage of integrating tiny amounts of AI into small and disparate processes. Robotics is merely at the edge of achieving true autonomy. And the orchestration and synchronization of vast, independent, coordinated autonomous systems is at the moment beyond our grasp.

However, it is important to understand that in a digital business environment, all of the advances made in one field filter with little delay into all other sectors. As we develop cognition and autonomy for robotics and vehicles, these same processes become available to programs for recruitment, sales, finance, manufacturing, medicine, and everything else. We are moving into not just an artificial intelligence-driven world, but a composite artificial intelligence-driven world. The capability of developing layer upon layer of such cognition will create field effects that will ultimately change the nature of the combined process. Just as the human mind is entirely different from the neural mechanism of a single cell, the enormous multilayered possibilities of a galaxy of autonomous agents create a subtly new system whose capabilities cannot yet be adequately explored.

We are just at the beginning of this change, and the marketers are fierce in describing their products as the apex of this evolution. But we are nowhere near the ability to fully comprehend the requirements, capabilities, and consequences of such a cognitive software environment.

For business, taking a more complex view of AI in the enterprise is mandatory. The effects will require a shift in strategy. Software vendors will need to understand the subtle ways in which their programs will need to interact. This is a long-term movement, but preparing for it must begin now.


Software Modernization and your Smart Digital Future

Code modernization is essential in transitioning to digital business. Ancient code carries numerous liabilities, both in integration and in remaining secure. Fundamental to the problem is the fact that languages, programming approaches, and the surrounding IT all evolve even as the business environment evolves. This means that programs accumulate technical debt, leading to growing inefficiencies and maintenance costs over time. Continued accumulation of technical debt complicates any conversion effort. Yet, as we move into a future of newly designed smart processes and omnipresent digital interactions, it is certain that radical change and more invasive modernization will be necessary.

It is clear that a general approach is needed that leads both to effective conversion and to meeting the unknown requirements of the future. So, companies that wish to change need to centralize the modernization effort and discover the technologies which will be specifically applicable to the firm. In this context, it is important to consider the ROI of change efforts. Modernization must provide both for the current situation and for the unknown environment of the future.

One of the most persistent problems in modernization is migration of COBOL, which exists in millions of lines across critical applications in high-accuracy, high-volume areas such as finance and healthcare. These systems are particularly vulnerable as industry evolves to meet complex new requirements from clients and partners. While these systems have often operated for many years as “black boxes” around which new code is wrapped, this approach must eventually break down. It entails a growing maintenance burden and serious security issues when pathways are built into the code to enable API access to obscure routines. Familiarity with the code base disappears as employees retire, and there is a growing lack of talent and experience in working with older programs.

To make essential changes and build for a digital and interconnected future, there is a range of possible remedies. These include:

  • Continuing the black box. Since the software has continued to operate for many years and performed its functions, you can hope that nothing bad will happen and simply continue the maintenance cycle. You will be faced with increasingly expensive maintenance, and potentially serious security flaws as the years drag on. There will also be an opportunity cost as new technologies are increasingly unavailable due to the lack of malleability in access to critical code.
  • Off-the-shelf replacement. It is sometimes possible to replace critical programs originally built in-house with commercial software or SaaS solutions. This often requires considerable customization, and will be less flexible. Processes may need to be changed, licensing costs will be incurred and there may be integration issues and unforeseen consequences.
  • Greenfield replacement. Starting from scratch demands creating a project of at least the size of the original one. All of the lessons learned in the original coding will be lost, and there are likely to be huge over-runs in time and cost in both adding new features and making certain that critical functions continue to operate as expected.
  • Manual conversion. Manual conversion or refactoring of the original system can be a massive project, potentially larger and more expensive than the original development. It is possible to move to a modernized COBOL dialect or to later-generation code. Without specific knowledge of the original code and access to the programmer’s logic, much of the original functionality can be compromised. Such projects have very poor rates of on-time, successful completion. The same is true of many “lift and shift” efforts that convert an application and move it to the cloud.
  • Incremental conversion. Large programs could be split up, with only critical “must change” code subject to conversion. This provides short term benefits, but it also potentially adds to technical debt in the interfaces, and the original code that persists will continue as a potential source of future problems.
  • Automated model-based conversion. For some situations, an automated conversion based on modeling can provide a cost effective outcome, depending upon the technology in use. Here, the original code is converted to a semantic model which is then optimized and used to generate code in another language.
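A toy version of the model-based pipeline can be shown with Python's own ast module: source code is lifted into a semantic model (here, the abstract syntax tree), the model is transformed, and code is regenerated from it. Real COBOL converters use far richer semantic models and target other languages; this only shows the shape of the approach (requires Python 3.9+ for ast.unparse):

```python
import ast

source = """
def total(prices):
    s = 0
    for p in prices:
        s = s + p
    return s
"""

# Step 1: lift the source into a semantic model (the AST).
model = ast.parse(source)

# Step 2: transform the model. As a stand-in optimization, rewrite
# `s = s + p` into the augmented form `s += p`.
class Simplify(ast.NodeTransformer):
    def visit_Assign(self, node):
        target = node.targets[0]
        if (isinstance(target, ast.Name)
                and isinstance(node.value, ast.BinOp)
                and isinstance(node.value.left, ast.Name)
                and node.value.left.id == target.id):
            return ast.AugAssign(
                target=ast.Name(id=target.id, ctx=ast.Store()),
                op=node.value.op,
                value=node.value.right)
        return node

model = ast.fix_missing_locations(Simplify().visit(model))

# Step 3: generate code from the transformed model.
print(ast.unparse(model))
```

The crucial property is that the generated program is derived from the model, not patched line by line, so the conversion can be re-run, audited, and retargeted as requirements change.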

Each situation is likely to have different needs, and demand a different solution. This is part of the reason that conversion has become such an intractable problem.

There are numerous companies involved with modernization of code and with bringing older programs into the digital environment, with huge differences depending upon whether you are looking at a change of platform, a coding conversion, an update, a refactoring, or a rewrite of ancient routines. The most important issue is to determine what the overall modernization requirements are: what is absolutely critical, and what can be reserved for later. Modernization can be very expensive, but it also needs to be correct.