AutoML: The Automation of Automation

Machine to Machine: AI makes AI

The next Big Thing in AI is likely to be the use of machine learning to automate machine learning itself. The idea that the routine tasks involved in developing a machine learning solution could be automated makes perfect sense: machine learning development is a replicable activity with well-defined, repetitive processes. Although total automation is improbable at the moment, even partial automation yields significant benefits. As the ranks of available machine learning experts grow thinner, the ability to automate some of their time-consuming tasks means that they can spend more time on high-value functions and less on the nitty-gritty of model building and iteration. This should, in theory, release more data scientists to work on the vast number of projects envisioned in a ubiquitous AI environment, as well as making it possible for less proficient practitioners to use machine learning routines without extensive training.

Although automated machine learning (AutoML) is appealing, and startups have emerged with various promises, the capability is clearly not yet fully developed. There are, however, numerous systems suitable for use now in selected environments and for specific problems. Among these programs are Google AutoML, DataRobot, Auto-WEKA, TPOT, and auto-sklearn; many are open source or freely available. Major technology firms, including Google, Microsoft, Salesforce, and Facebook, are also rapidly developing AutoML routines, and the area is being pursued with increasing urgency.

Current AutoML programs mainly take care of the highly repetitive tasks that machine learning requires to create and tune models. The chief automation targets are selection of appropriate machine learning algorithms, tuning of hyperparameters, feature extraction, and iterative modeling. Hyperparameter tuning is particularly significant because it is critical to deep neural networks. Some AutoML routines have already demonstrated advances over manual human performance in some of these areas. Other processes that could support AutoML, such as data cleaning, are aided by a separate body of machine learning tools that could be added to AutoML. In fact, AutoML itself exists in a larger context of applying ML to data science and software development—an effort that shows promise but remains at an early stage.
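
To make these repetitive tasks concrete, here is a minimal sketch of automated algorithm and hyperparameter search using TPOT, one of the open-source tools named above. It assumes TPOT and scikit-learn are installed; the dataset and search settings are purely illustrative, not a recommendation.

```python
# A minimal sketch of AutoML with TPOT: genetic programming searches over
# scikit-learn algorithms and hyperparameters -- the repetitive work that
# would otherwise be done by hand. Settings here are illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(generations=5, population_size=20,
                      verbosity=2, random_state=42)
tpot.fit(X_train, y_train)           # runs the automated pipeline search
print(tpot.score(X_test, y_test))    # evaluate the best pipeline found

# Export the winning pipeline as ordinary scikit-learn code.
tpot.export('best_pipeline.py')
```

The exported pipeline is plain scikit-learn code, which is exactly the kind of artifact a human practitioner would otherwise have assembled and tuned manually.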

Even with the recent focus on AutoML, the capability of these programs has yet to reach the stage where they could be relied upon to achieve a desired result without human intervention. Data scientists will not lose their jobs in the near future; as others have pointed out, humans are still required to set the objectives and verify the results of any machine learning operation. Transparency is also of the utmost importance in determining whether a model is actually selecting useful results or has settled upon an illusion of accuracy.

Many AutoML programs have operated successfully in test cases, with problems emerging as the size of the data set rises or the required operation becomes more complicated. An AutoML solution must not only embrace a wide range of ML models and techniques; it must at the same time handle the large amount of data that will be processed through innumerable testing iterations.

AutoML, and, indeed, other forms of automated data science, are likely to continue to advance. It makes sense that machines should add a layer of thinking about thinking on top of the specific task layer. A machine-driven approach to developing the automation of automation makes sense, not only in reducing the human component, but also in ensuring the capacity to meet the demands of an ever-expanding use of AI in business. Arguably, the development of a more autonomous AutoML would be an important step toward Artificial General Intelligence.

Improvement in this area is likely to be swift, given the urgency of the data scientist shortage at a time when all companies are advised to invest in AI. An ambitious DARPA program, Data-Driven Discovery of Models (D3M), begun in June 2016, aims to develop techniques that automate machine learning model building from data ingestion to model evaluation, and is furthering interest in AutoML approaches. Among AutoML startups, one standout is DataRobot, which recently raised $54 million, bringing its total funding to $111 million. Meanwhile, a growing body of research in academia, as well as within corporate research teams, is focusing on the problems that must be cracked to create something like a user-friendly machine learning platform.


 


Smart Farming with AI and Robotics: The Video

Following up on the previous post about AI and robotics in agriculture (Agricultural Robots and AI in Farming: Promises and Concerns), it seemed appropriate to provide some video on this fascinating and highly significant area. Agricultural robots face substantial challenges in handling an enormous variety of tasks; they also need to take special care in handling plants, produce, and animals. Agriculture is a critical area of development that often goes unnoticed in the Industry 4.0 story. But these solutions are proliferating rapidly and are likely to have enormous effects upon employment, finance, and society.

The videos are a mixture of talks, presentations, teaching material and product demonstrations available under standard YouTube license, with landing page descriptions added. Minor edits have been made where necessary.

The Future of Farming with AI: Truly Organic at Scale (Machine Learning Society)

Published on May 17, 2017

A talk by Ryan Hooks, CEO & Founder, Huxley. Weblink with slides at http://www.mlsociety.com/events/the-f…

As climate change and global demographics begin to put excessive strain on the traditional farming model, the need for an agriculturally intelligent solution is vital. By 2050, the world population will increase by over 2 billion people, and current crop yields and freshwater resources will not be sufficient to sustain a population of over 9 billion.

On May 15th, 2017, the Machine Learning Society hosted this event to showcase high-tech farming techniques used in vertical and urban farming. Our keynote speaker is Ryan Hooks of Huxley. Huxley uses computer vision, augmented reality (AR), and A.I. to greatly improve yield while driving down cost and resource requirements. Huxley is creating an “operating system for plants” to grow with 95% less water, at twice the speed, with two-thirds less carbon output and half the nutrients needed.

Automation, Robotics & Machine Learning in Agriculture (Blue River Technology)

Published on May 13, 2016

Keynote presentation by Ben Chostner, VP Business Development of Blue River Technology, at the May 2016 Agri Investing Conference in Toronto.

Farmers: These ARE the Droids You’re Looking For (New Scientist)

Published on May 18, 2016

Autonomous robots created at the University of Sydney can count fruit on trees, spray weeds, and even herd cows. All pictures courtesy of Professor Salah Sukkarieh, University of Sydney, Australia.

Robots Take Over the Dairy Farm (mooviechannel)

Published on Jan 8, 2015

Gareth Tape of Hardisworthy Farm in Devon calls the technology ‘life-changing’ – both for him and his cows. Watch the video to find out why.

Robots and Drones: Agriculture’s Next Technological Revolution, NHK Documentary (Japan Tokyo)

Published on Jul 9, 2017

While still a student, Shunji Sugaya started an IT company focused on artificial intelligence and robots for use on the farms of the future. Agriculture in Japan faces serious challenges like an aging population and shrinking workforce. Sugaya imagines robots and drones that reduce labor demands and farms that are run using big data. Today we look at Sugaya and the young engineers at his company in their efforts to shape the future of agriculture and fishing with cutting-edge technology.


 

 


Blockchain Meets the IoT: The Video

Blockchain, the secure ledger system behind the Bitcoin cryptocurrency, is becoming increasingly important in key use cases throughout business and industry. It provides a persistent, unalterable record system, which makes it valuable wherever security matters; secure transactions are of critical importance in sensitive areas such as finance and health. But one of the most important applications now emerging is the connection with the Internet of Things (IoT). The surprising benefits of incorporating blockchain concepts in IoT development are creating intense interest as companies grapple with the needs of new infrastructure.
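
For intuition about why such a ledger is tamper-evident, the toy Python sketch below chains records by hash, so altering any earlier entry invalidates every later link. This is a deliberate simplification, not a production blockchain: there is no consensus, mining, or networking, and the sensor records are invented for illustration.

```python
# A toy hash chain illustrating the tamper-evident property of a ledger:
# each block commits to the hash of its predecessor, so changing any
# earlier record breaks verification of everything that follows.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
for reading in ["sensor A: 21.5C", "sensor A: 21.7C", "sensor B: door open"]:
    append_block(chain, reading)

print(verify(chain))                   # True
chain[0]["data"] = "sensor A: 99.9C"   # tamper with an early IoT record
print(verify(chain))                   # False -- the chain no longer validates
```

Real blockchains add distributed consensus and cryptographic signatures on top of this basic structure, but the hash chain is what makes the record effectively unalterable.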

In these videos, we provide several discussions on the intersection of blockchain and the IoT. The videos are under standard YouTube license, and the description provided is from each video’s landing page (with minor edits).

Blockchain and the Internet of Things explained (IBM Internet of Things)

Published on Nov 2, 2016

A blockchain ledger can create a tamper-resistant record when information needs to be shared among business partners without setting up a costly centralized IT infrastructure. Let’s look at how supply chains benefit when data is shared through a private blockchain.

Next Generation IoT Technologies Using The Block Chain (Samsung Developer Connection)

Published on Dec 1, 2014

Talk by Gurvinder Ahluwalia

An overview of how Samsung and IBM are thinking about the next generation of IoT infrastructure, and why they are using blockchain. This presents an approach to addressing the problems of cost, privacy, and longevity of smart devices on the Internet of Things.

Key Use Cases Intersecting Blockchain and IoT (IBM Internet of Things)

Published on Dec 9, 2016

Jerry Cuomo (Vice President Blockchain Technologies, IBM), Mika Lammi (Head of IoT Business Development, Kouvola Innovation), and James Murphy (Offering Manager, IBM Watson IoT Platform – Risk Management & Security) talk about the trends in business that are driving the pairing of Blockchain and IoT.

Blockchains for the Internet of Things – Solving the IoT’s Most Critical Problems (blockchainofthings.com)

Published on Jul 7, 2016

Blockchains are poised to revolutionize the Industrial Internet of Things (IIoT) by providing security, peer-to-peer device communications, and new functionality via smart properties. Andre De Castro, CEO of Blockchain of Things, Inc., discusses why blockchains make sense for the IIoT.

 


 


Artificial Realities: The Video

Virtual Reality, Augmented Reality, and Mixed Reality are about to enter the mainstream through a number of different routes, aligned with all of the technologies of an increasingly digital world. Exciting new possibilities are already becoming apparent, led by gaming and by the complex “hyper-reality” visions of science fiction. As yet, companies are struggling to find present-day applications for a technology that looks so much like tomorrow. But breakthroughs may be imminent as vendors such as Microsoft, Google, and a wave of startups begin to create the future in this territory.

Here we have assembled a mixed set of videos looking at different aspects of this technology. All are available under a standard YouTube license.

Windows Holographic: Enabling a World of Mixed Reality (Windows)

Published on Jun 1, 2016

Windows Holographic enables a world of mixed reality – where devices work together – regardless of whether they are developed for virtual reality, augmented reality, or anything in-between. Windows Holographic will help expand the Windows ecosystem far beyond the PC.

Keynote – Into the Future, with Magic Leap’s Graeme Devine (Games for Change)

Published on Jul 26, 2016

Magic Leap’s Chief Game Wizard Graeme Devine shares the startup’s vision for mixed reality in the classroom and making virtual objects appear in real life.

Intelligent Agents, Augmented Reality & the Future of Productivity (O’Reilly)

Published on Jun 13, 2016

A conversation between Satya Nadella, CEO of Microsoft, and Tim O’Reilly at the Next:Economy Summit 2015 in San Francisco, California.

MRO.AIR – Artificial Intelligent Reality (Connectar)

Published on Dec 9, 2015

Connectar presents its MRO.air product, described as

“An Artificial Intelligent reality that works like a digital aura to help, guard, and improve mechanics in maintenance and repair organizations. The promise is to increase efficiency by over 30% and to reduce errors by over 50%, and this is how.”

Augmented Reality (World Economic Forum)

Published on Apr 23, 2012

Augmented reality is bringing sweeping change to the way we communicate, democratising the way we produce and consume information.

 


Swarm Intelligence for Robots, Algorithms, and Humans: The Video

Swarm intelligence occurs in nature when social creatures such as ants combine their behavior in an integrated way for a collective purpose. Understanding and replicating swarm intelligence is of increasing interest as we move into a world of AI and clustered robots. Studying the emergent behavior of swarms also furthers concepts of autonomy and organization at every level of intelligence.
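
To see how collective behavior can emerge without any central controller, consider the deliberately simple Python sketch below: each simulated agent follows a single local rule, drifting toward the average position of its nearby neighbors. The rule, neighborhood radius, and rates are invented for illustration and do not model any of the systems in the videos that follow.

```python
# A minimal swarm sketch: agents on a line, each obeying one local rule
# (move toward the mean position of neighbors within a fixed radius).
# No agent sees the whole group, yet the group organizes itself.
import random

def step(positions, radius=2.0, rate=0.1):
    new = []
    for p in positions:
        neighbors = [q for q in positions if abs(q - p) <= radius]
        center = sum(neighbors) / len(neighbors)  # includes the agent itself
        new.append(p + rate * (center - p))       # purely local update
    return new

agents = [random.uniform(0, 10) for _ in range(30)]
for _ in range(200):
    agents = step(agents)

# After many steps the agents coalesce into one or a few tight clusters,
# an ordered outcome no individual rule explicitly asked for.
print(round(max(agents) - min(agents), 3))
```

Richer behaviors such as flocking, foraging, and collective construction come from layering a few more local rules of this kind, as the Kilobot work described below demonstrates at the scale of a thousand robots.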

Here we provide a few videos that look at the potential for applying swarm concepts to human thought, as well as to robotics. As usual, the videos here are available under a standard YouTube license.

What is Swarm Intelligence? (UNU videos)

Published on Feb 3, 2016

From Unanimous AI, a company focusing upon human swarms. According to the company website:

“Unlike traditional AI, which aims to replicate human intelligence, our Swarm A.I. technology builds intelligent systems that keep people in the loop, leveraging our natural human knowledge, insights, and intuitions. At Unanimous, we use technology to amplify human intelligence, not replace it.”

(The final video in this set is also from Unanimous.)

Taming the swarm – Collective Artificial Intelligence (TEDx Talks)

Published on Jan 14, 2016

by Radhika Nagpal

Radhika Nagpal is the Kavli Professor of Computer Science at Harvard University and a core faculty member of the Wyss Institute for Biologically Inspired Engineering. At Harvard, she leads the Self-organizing Systems Research Group (SSR) and her research combines computer science, robotics, and biology. Her main area of interest is how cooperation can emerge or be programmed from large groups of simple agents.

Can A Thousand Tiny Swarming Robots Outsmart Nature? (Deep Look)

Published on Jul 21, 2015

How does a group of animals — or cells, for that matter — work together when no one’s in charge? Tiny swarming robots, called Kilobots, work together to tackle tasks in the lab, but what can they teach us about the natural world?

How do you simultaneously control a thousand robots in a swarm? The question may seem like science fiction, but it’s one that has challenged real robotics engineers for decades. In 2010, the Kilobot entered the scene. Now, engineers are programming these tiny independent robots to cooperate on group tasks. This research could one day lead to robots that can assemble themselves into machines, or provide insights into how swarming behaviors emerge in nature.

In the future, this kind of research might lead to collaborative robots that could self-assemble into a composite structure. This larger robot could work in dangerous or contaminated areas, like cleaning up oil spills or conducting search-and-rescue activities.

DEEP LOOK is a short video series created by KQED San Francisco and presented by PBS Digital Studios. This is made available on a standard YouTube license.

Human Swarming (IEEE Blended Intelligence, 2015. UNU videos)

Published on Oct 7, 2015

Dr. Louis Rosenberg of Unanimous AI (same as first video) describes the basics of “Human Swarming”, discussing the natural models that inspire Human Swarms and the benefits of swarms over votes, polls, markets, and other methods for tapping the Collective Intelligence of groups.


 


Once More into the Breach: Times of Great Change Bring Opportunity

Times of great change are also times of great opportunity. We are all aware of the erroneous claim that the Chinese ideograms for “crisis” combine “danger” and “opportunity”. Politicians have pulled them out since 1938, most famously JFK. The saying remains popular because the sentiment is correct, even though the derivation is false. But beware!

Correct Vision, Wrong Characters

In a greater sense, the current upheavals in politics across the world should be understood as a natural result of the changes in technology that have ushered in the 21st century. We have long argued that recent developments such as the growth of social media, the development of stronger AI, the spread of the Internet, and all of the trappings of today’s online universe are having a revolutionary impact. But if you equate the Internet with Gutenberg and consider the technological results revolutionary, creating great opportunities for the future, you cannot ignore the inevitable political and social traumas that come with such radical change. The Gutenberg press helped create the Protestant Reformation by making it possible to distribute vast numbers of Bibles in regional languages. Wars were subsequently fought for hundreds of years; powers rose and fell; people were burned at the stake; and some moved across the seas to America.

The technical revolution we are facing today will take, perhaps, hundreds of years to be incorporated into the social fabric. Even as the technology continues to evolve, we have not yet come to grips with a digital world and the instantaneous communications it makes possible. As we add technologies such as big data analytics and AI to the equation, the situation becomes even more difficult. Human society must now adapt to a competing intelligence; not the robot revolution portrayed in science fiction, but, rather, the type of intelligence that will be incorporated into a wide range of human tools and activities. The complex result of this association will inevitably yield new visions of how people must live together and how the various tribes that populate the earth will cooperate—or not—going forward.

The changes that are occurring appear on the surface to be minor for most people, since lives continue, errands must be run, children must be schooled, and so forth. But the greater movements of society–jobs, economies, interactions, global relationships, political groupings, and all the rest that exists on a meta level–are in flux, needing to respond continuously to new situations.

All of this yields enormous uncertainties. While this creates great opportunity for those with foresight in areas subject to positive change, it also means that populations are reacting to unforeseeable consequences. This is a basis for international conflict, which, in a complex society, inevitably creates a maelstrom.

Recent political movements such as Brexit and the US election have certainly responded to changes in technology, and technology has also added to the uncertainty. Campaigns are now waged globally with big data, hacking, and extensive use of social networks. People are enraged by tweets, and real news clashes with fake news and disinformation so quickly that verification is impossible.

Is extreme conflict the new normal? Will we disengage our emotions from the constant barrage of new developments? Not likely. But one interesting possibility looms: if we reach the point where we can no longer adapt rapidly to changing situations, due to limited information and emotional disturbance, it may gradually become time to bring in automated leadership based upon big data and artificial intelligence. Some would say this is already happening. But such a result creates a new category of risk, indeed.


Choose Your Concepts Well: You May Need to Make Them Work

Technology is constantly accelerating, spinning off new terminologies and buzzwords that develop their own trajectories of meaning. Attempting to chart these trends has become almost a discipline in itself. But, with so much invested in this area, the problems are often misrepresented. Gartner’s “Hype Cycle,” for example, presents an occasionally dangerous picture of simple technological progression. It would be nice if things worked that way. In reality, however, buzzwords become marketing concepts, grow generalized, and are then fed back into innovation.

Unfortunately, inflated buzzwords and marketing terms can also be used to define products that don’t really exist. Such “vaporware” (as it was called in the 1980s) has always been with us. Today, it can even create “unicorns”.

In the end, it is important to follow the trends. But it is more important to make certain there is a sound business case, and that the technology has a practical objective.

As the fury of terminologies and “emerging trends” continues in AI, we would do well to remember the lessons of the past.


 


The Fungible Digital Universe: Convergence in a New Key

With digital transformation so prominent in enterprise thinking, relatively little time has been spent considering its implications. Digital transformation is often presented as a means of improving efficiency and creating processes that can be easily and flexibly integrated across the corporation. But one of its key issues is digital convergence.

In digital transformation, the vision is to transform all processes, designs, work products, services, and anything else that might be so converted into a digital form. Digital form makes instantaneous transmission over vast distances possible, integrates processes, and opens them to replication or posting within the cloud. The advantages are extraordinary, and digital transformation has been underway for a very long time. But there is another factor in this conversation: all digital streams can easily be subjected to the same or similar processes, opening the way for extreme innovation and cross-pollination as diverse and heretofore entirely discrete fields are brought together.

As we move into an era of embedded artificial intelligence and big data analytics, digital convergence becomes an issue of extreme importance. The same processes that analyze and interrogate one digital stream can easily be applied to another; the algorithms used in deep learning, for example, move readily between image recognition and voice. Similarly, because they work on the same digital formats, the same algorithms might be used to find patterns within programs, to analyze architectural drawings, to understand transactions, and to provide new forms of user access and data comprehension. Digital convergence becomes a kind of synesthesia, in which the boundaries between objects, activities, and functions become increasingly blurred.
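
The image-and-voice point can be made concrete. In the sketch below (written in PyTorch, which is assumed to be installed; the shapes and class counts are invented for illustration), a single small convolutional network classifies either grayscale images or audio spectrograms, because both arrive as two-dimensional grids of numbers once digitized.

```python
# One network, two digital streams: the same convolutional architecture
# accepts images and audio spectrograms alike, since both are 2-D tensors.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pools any input size down to 1x1
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

net = TinyConvNet(n_classes=10)

image_batch = torch.randn(8, 1, 28, 28)         # e.g. grayscale images
spectrogram_batch = torch.randn(8, 1, 64, 101)  # e.g. audio spectrograms

print(net(image_batch).shape)        # torch.Size([8, 10])
print(net(spectrogram_batch).shape)  # torch.Size([8, 10])
```

The AdaptiveAvgPool2d layer is what lets the identical architecture accept both input shapes; nothing about the network “knows” whether it is looking at a picture or a sound.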

Digital convergence as an issue first arose with telecommunications, and then with multimedia. These areas were early examples of how digitization erased the boundaries between dissimilar components. As everything became transmissible in digital form across the network, markets for multimedia changed; intellectual property rights became problematic; and the capability to copy and broadcast items such as movies and audio recordings became nearly infinite. Legal systems are still struggling to fit the new possibilities within social and legal frameworks and understandings. Now, with everything becoming digital, issues such as intellectual property, privacy, and segregation of one component of the universe from another become moot.

The world itself, as we know it, is a fabrication of knowledge whose definitions, patterns, and interactions we digest and share. It is an imperfect system, since it is overlaid upon an existing physical reality; the vertices become sharp and apparent, and logical argumentation becomes less precise, because definitions encompass our understanding of an item rather than the item itself. In a converged digital universe, the world we interact with is, in fact, the primary level. Language starts to stumble in its descriptions, because new concepts become terms whose meaning changes as swiftly as the territory shifts.

Digital transformation is the means by which we are moving to this more flexible, more fungible universe; it is an essential process for business, which will reap immediate benefits in being able to act far more swiftly than any non-digital process allows. But, in a greater sense, it will also change how we understand the universe and how future interactions will take place as human beings evolve toward assisted and hybrid man-machine thought processes.


 


Neuromorphic Chips and Silicon Brains: The Video

Here we have assembled a series of videos focusing upon the hardware side of AI. What happens when you start to create replicable silicon brains that emulate human thought processes? First, there can be a tremendous increase in efficiency and cost savings for ordinary processing. But these chips are also starting to carve out their own territory in pursuit of composite AI.

Most AI silicon is focused upon machine learning and is now being used for image and voice processing and other high-speed, real-time pattern-matching tasks. As these chips grow increasingly powerful, they are also helping to unlock the secrets of human thought. Although this is still a very early stage of development, specialized hardware systems have a tremendous advantage in processing speed over conventional systems and can perform tasks in robotics that would not otherwise be possible.
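
As a rough illustration of what such chips compute, here is a toy software model of a leaky integrate-and-fire neuron, the kind of spiking unit that neuromorphic hardware implements directly in silicon. This is a plain-Python sketch; the threshold and leak constants are invented for illustration and are not the parameters of any chip discussed below.

```python
# A toy leaky integrate-and-fire neuron: charge accumulates from input,
# leaks over time, and a spike is emitted when a threshold is crossed.
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires."""
    potential, spikes = 0.0, []
    for t, i in enumerate(input_current):
        potential = potential * leak + i   # leaky integration of input
        if potential >= threshold:
            spikes.append(t)               # fire...
            potential = 0.0                # ...and reset
    return spikes

# A steady drip of input charges the neuron into a regular spike train.
print(simulate_lif([0.3] * 20))   # [3, 7, 11, 15, 19]
```

Neuromorphic chips implement vast numbers of units like this in parallel, communicating by spikes rather than by clocked numeric computation, which is a large part of their power advantage.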

As usual, each video includes descriptive text from its publication site.

BrainScales to Human Brain Project: Neuromorphic Computing Coming of Age (IBM Research)

A presentation by Karlheinz Meier, Co-Director of the Human Brain Project, University of Heidelberg, Germany. The Human Brain Project (HBP) is a European Commission Future and Emerging Technologies Flagship project. This presentation is from the IBM Research Cognitive Systems Colloquium: Brain-Inspired Computing, held at IBM Research – Almaden in San Jose, CA. Published on Jun 11, 2015.

IBM’s TrueNorth Chip mimics the Human Brain (Rajamanickam Antonimuthu)

07 Aug 2014: Scientists from IBM unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses, and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70mW—orders of magnitude less power than a modern microprocessor. A neurosynaptic supercomputer the size of a postage stamp that runs on the energy equivalent of a hearing-aid battery, this technology could transform science, technology, business, government, and society by enabling vision, audition, and multi-sensory applications.

Samsung TrueNorth: A Neural Chip to Create a Camera that Emulates the Human Retina (Aban Tech)

Samsung has taken this brain chip and adapted it to work with its Dynamic Vision Sensor, a photo sensor that works similarly to the human retina. The result has been spectacular: a camera capable of processing digital images at breakneck speed, recording video at 2,000 frames per second, where conventional digital cameras typically record at up to 120 fps. This makes the system well suited to creating three-dimensional maps, improving gesture recognition systems, and even helping autonomous cars better detect hazards. Published on Aug 15, 2016

GTC 2016: NVIDIA DGX-1, World’s First Deep Learning Supercomputer (NVIDIA)

NVIDIA CEO Jen-Hsun Huang introduces the NVIDIA DGX-1, the world’s first deep learning supercomputer, at the GPU Technology Conference. Providing 170 teraflops of performance in one box, DGX-1 is over 12x faster than its predecessor from one year ago. Published on Apr 6, 2016

Qualcomm Zeroth (Qualcomm Korea)

For the past few years, Qualcomm Research and Development teams have been working on a new computer architecture that breaks the traditional mold. The company wanted to create a new computer processor that mimics the human brain and nervous system, so devices can have embedded cognition driven by brain-inspired computing—this is Qualcomm Zeroth processing.


Ring the Welkin: Everything is Related, and a New Era has Begun

We cannot ignore the outside world as technology continues to press forward. Technology affects economies, jobs, and public opinion. It will continue to inform aspirations, fears, and political dialogues around the world.

The impact of recent technology has been so profound that society has not really caught up with it. This is the “Cartoon Cliff” effect: people continue forward just as they always have, until the moment when they look down and discover the earth is so very far below. Perhaps, with the US election, that moment has arrived.

Society reacts. Hope for the best, plan for the worst, and remember that the outcome is far from certain. The past is not necessarily a guide to the future. But it is important to remain optimistic. Adjustments are always necessary, no matter how painful they might be. There will be new opportunities and new possibilities. Even as some doors begin to close, others will open. Changes in the geopolitical lineup will hasten trends that have been building for years.