AI and the Election: Are We There Yet?

The previous Presidential election was guided by big data analytics, used to help predict outcomes, gauge where effort needed to be spent, and organize the campaign workforce. Since then, interest in and usage of advanced analytics and artificial intelligence have surged. With their predictive capabilities, AI techniques are a natural fit for election strategists. Big data analytics has already been demonstrated by Nate Silver’s FiveThirtyEight blog and others, showing continuing improvements in accuracy. AI brings something new to the field, but what it can do, and its current capabilities in this area, are very much in debate.

With so much at stake, it would be expected that AI would play a role in election forecasting. Indeed, a recent prediction from an AI system called MogIA, built by Sanjiv Rai, founder of an Indian AI startup serving the healthcare industry, used 20 million data points from public platforms to predict that Donald Trump would win the election in a landslide. This program has correctly predicted the winners of the last three U.S. presidential elections. Evidence on election day makes that prediction dubious, at best. It could still happen. But it does emphasize the strengths and weaknesses of current AI and cognitive approaches, as well as the directions in which they are likely to develop.

The MogIA prediction was based upon social media interaction. Engagements with tweets and Facebook Live videos were used as predictors, among other data. While this had worked in previous elections, where social media was less of a factor, it should have been clear that social media engagement would be a poor predictor in this election. Donald Trump’s campaign was based upon negative tweets; these came so furiously and provoked such enormous response, both positive and negative, that engagement was not necessarily going to lead to a vote. In fact, engagement with Donald Trump’s tweets could well have capsized his electability.

Post-Election Update: Trump won the election, against all expectations and contrary to poll results. Was MogIA right? Possibly. But it also indicates a new role for social media in US elections. As this role changes, the model will need to be re-evaluated, since social media exists within the context of a much larger and more complex system.

This points to the problem of data selection and interpretation in feeding a predictive system. Artificial intelligence can draw conclusions from data and find patterns, but its ability to define those patterns depends upon how they are described and the relationships that are initially drawn in the prediction model.
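This dependence can be seen in a toy sketch. All figures and weights below are hypothetical; the point is only that the same set of social media engagements supports opposite predictions depending on whether the model counts raw volume or weights each engagement by its sentiment.

```python
# Hypothetical engagement data: (candidate, engagement count, sentiment)
# sentiment: +1 for a broadly positive response, -1 for a broadly negative one
engagements = [
    ("A", 900, -1),   # a provocative post: huge but largely negative response
    ("A", 300, +1),
    ("B", 400, +1),
    ("B", 200, -1),
]

def predict_by_volume(data):
    """Naive model: whoever generates more total engagement wins."""
    totals = {}
    for cand, count, _ in data:
        totals[cand] = totals.get(cand, 0) + count
    return max(totals, key=totals.get)

def predict_by_sentiment(data):
    """Alternative model: weight each engagement by its sentiment."""
    totals = {}
    for cand, count, sentiment in data:
        totals[cand] = totals.get(cand, 0) + count * sentiment
    return max(totals, key=totals.get)

print(predict_by_volume(engagements))     # "A": raw engagement favors A
print(predict_by_sentiment(engagements))  # "B": the sentiment-weighted view flips
```

Both models see identical data; only the relationships drawn in the model differ, and so does the prediction.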

MogIA was a sentiment-driven social media analysis approach. Artificial intelligence has also been engaged in this contest in a variety of other ways. TechRepublic teamed with Unanimous A.I. to create “swarm AI” sessions. This is a hybrid strategy using a crowdsourcing approach, in which voters used an online platform that employed AI techniques to help the group come to a decision. It used a crowd-controlled pointer and focused upon a set of specific issues related to the economy and technology to create a final measure. The sample was very small, but the result went to Clinton. While the result is somewhat irrelevant here due to the size of the sample, it does point to the use of a hybrid human-AI approach to prediction, which short-circuits some of the problems of an entirely AI-based system. Such an approach adds a ready understanding of issues, which can be used to create an immediate group response with greater potential for predictive success. Eliminating potential bias in such a system does present difficulties, however.
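A minimal sketch of the swarm idea, under the assumption (ours, not Unanimous A.I.’s published mechanics) that participants continuously pull a shared pointer toward their preferred option, and the option the pointer settles nearest to wins:

```python
# Illustrative swarm-style group decision. Options sit at positions on a
# one-dimensional decision axis; participant preferences are hypothetical.
options = {"Clinton": 0.0, "Trump": 1.0}
preferences = ["Clinton", "Clinton", "Trump", "Clinton", "Trump", "Clinton"]

pointer = 0.5  # the shared pointer starts midway between the options
for _ in range(100):
    # each participant exerts a small pull toward their preferred option
    pull = sum(options[p] - pointer for p in preferences) / len(preferences)
    pointer += 0.1 * pull

# the pointer settles nearest the option preferred by the group
winner = min(options, key=lambda o: abs(options[o] - pointer))
print(winner)
```

In this simplified form the swarm reduces to an average of preferences; the appeal of the real systems is that participants see the pointer move and adjust their pull in response, which a static average cannot capture.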

In another approach, the Washington Post is attempting to improve election coverage of every race by using AI in a program called Heliograf, which the company built in-house. Heliograf permits massive data collection and analysis and is used to collect and report on data that has been missed or not extensively covered. Its technique is to use templates to automatically update stories, a technique previously used in the company’s coverage of the Rio Olympics. This again is a hybrid human-AI approach—not to prediction, but as an aid in understanding an event by sorting the signal from the noise. The ability to glean important information and assemble it to the point where unknown or invisible connections might be discerned, or the significance of buried data to current news headlines revealed, creates an opportunity for improving human awareness and understanding.

Another AI approach being developed to understand the hugely complex information relating to elections is Electome, a project of the Laboratory for Social Machines at the MIT Media Lab. Run in partnership with the Knight Foundation, Twitter, the Washington Post, CNN, Bloomberg and others, it provides real-time analytics on public opinion related to the election. Electome ingests public opinion data, focusing upon Twitter. It classifies conversations by issue; segments them by demographics, tone and other factors; and provides a deeper understanding of occurrences. Access to Electome is provided to journalists, and it was also used by the Commission on Presidential Debates to brief moderators.

AI is also being used in this hybrid context by Amazon’s Alexa, the assistant behind the company’s Echo smart speaker devices. The election has brought a wide range of new concepts and new information requirements for home users. Questions posed by Alexa’s users need to be anticipated so that an immediate response can be given. This demands a hybrid approach, with a team of experts screening potential content and developing concepts that can be referred to Alexa. Human fine-tuning is essential in these cases, where meaning is critical and understanding needs to be instant.

Finally, the election itself is creating new avenues of thought for AI. The recently unveiled Election Algorithm (EA) is an optimization and search technique inspired by the election. It is a form of genetic algorithm that works with a population of data clustered as candidates and voters. Candidates form parties and advertise, and can confederate to increase their success. On election day, the candidate cluster with the most votes is announced as the winner. The new algorithm competes successfully with other genetic algorithms on certain kinds of prediction.
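The flavor of such an election-inspired search can be sketched loosely. This is an illustrative analogue, not the published EA: voter dynamics are abstracted into random “campaign” perturbations, and the objective function and all parameters below are assumptions chosen for demonstration.

```python
import random

def fitness(platform):
    """Toy objective: public support peaks when the platform sits at 3.0."""
    return -(platform - 3.0) ** 2

def election_optimize(n_candidates=5, generations=100, seed=7):
    rng = random.Random(seed)
    # initial population of candidate solutions ("platforms") on one axis
    candidates = [rng.uniform(-10.0, 10.0) for _ in range(n_candidates)]
    for _ in range(generations):
        # campaign phase: each candidate tweaks its platform (mutation),
        # keeping the change only if it attracts more support
        for i, c in enumerate(candidates):
            proposal = c + rng.gauss(0.0, 0.5)
            if fitness(proposal) > fitness(c):
                candidates[i] = proposal
        # coalition phase: the weakest candidate merges toward the strongest
        best = max(candidates, key=fitness)
        worst_i = min(range(n_candidates), key=lambda i: fitness(candidates[i]))
        candidates[worst_i] = (candidates[worst_i] + best) / 2.0
    # election day: the candidate with the most support is announced the winner
    return max(candidates, key=fitness)

print(round(election_optimize(), 2))  # converges near the optimum at 3.0
```

The election metaphor—campaigns, coalitions, a declared winner—organizes familiar mutation and selection steps; swapping in a different fitness function retargets the search.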

Presidential elections are entering a new type of predictive era that involves deep analysis of social media, external news sources and strategy. This combination reveals some issues with AI techniques of today, and the need to incorporate a human crowdsource capability to ensure an accurate result in many real-world situations. How such interactions are supported will be the subject of ongoing research.

It is also clear that, as with the very earliest computer systems, “Garbage In, Garbage Out” (GIGO) still applies. But the definition of what constitutes garbage is increasingly complex. In many cases, the bad data of today is data that has not been sufficiently explored and understood. Complex data will require independent analysis before being subjected to an analytic process. The ETL of yesterday’s fixed databases will need to be extended with an analytic approach—which is already happening on one level with Natural Language Processing (NLP).

Four Talks on Artificial Intelligence: The Video

Sometimes video is the avenue of choice for delivering information. Videos are the core of most online courses and can start important conversations that encourage debate on significant topics. In the area of AI and related technologies, a range of discussions, course material and training has become available in video form. These materials are available freely on the web as an important resource in the goal of continuous learning, a critical component of the changing technological environment.

In this video feature, we provide four different views of progress in artificial intelligence, including a lecture on AI and the future by Demis Hassabis; Noam Chomsky’s speech on AI and its definition; a TED talk on evolutionary algorithms by Keith Downing; and finally, a panel on progress in AI from Microsoft.
The implications and definition of AI are in a state of flux, as they often have been. The more we understand the processes of thought, the more we know of its complexities. New definitions are emerging constantly as the ability to replicate thought processes lifts the curtain on the nature of these processes: their diversity, their similarities, and what binds them together.

Only a replication of a complex behavior makes it possible to understand the inherent limitations in its components. The Turing test is incomplete; a truly sentient intelligence would not only mimic human behavior but be capable of self-invention. Such an intelligence would be able to cast its own algorithms and assemble the components of thought in a manner best able to meet the evolving demands of an unconstrained external environment.

Video provides an excellent mode for making these complex topics accessible.

Demis Hassabis: Artificial Intelligence and the Future (Singularity Lectures)

Noam Chomsky lecture on artificial intelligence (Imaginarium)

Evolutionary computation: Keith Downing at TEDxTrondheim (TEDx Talks)

Panel: Progress in AI: Myths, Realities, and Aspirations (Microsoft Research)

The Composite Nature of Embedded Cognition

As cognitive approaches become embedded in business processes and devices they will become indispensable. Processes modeled upon thought will simply become part of the landscape of helper applications expected in the digital environment. We have already seen discrete intelligent processes becoming commonplace and even expected. Examples include spell checking and grammar checking in documents; machine translation in webpages and search results; packet filtering in communications; and personal photo enhancement. These technologies offer useful services. The underlying mechanism is often complex but, as with an automobile or any other advanced machine, the actual mechanics need not be known to the user.

Artificial intelligence even in its most profound forms is likely to follow this same path. We can expect smart processes to be brought together to make operations more efficient, friendlier, or easier to use. Embedded artificial intelligence is likely to become increasingly important.

Local intelligence, or the ability to make choices automatically at the individual operation level, will become normal. When intelligence is embedded at the process level, many functions that might have required expert intervention, or paused for user selection, will be routinely and automatically performed. Leading the way will be further development of the user interface. This is the most visible element of IBM’s Watson Analytics program: the ability to understand and respond to natural language requests, framing those requests so that they can be submitted to databases and analytics to produce a guided result. The most advanced intelligence need not be in the assembly, analysis, or predictive capabilities of big data, but rather in the understanding of the request itself and the ability to formulate an adequate response.

Embedded cognition provides a new range of challenges and opportunities. Its most visible impact is in robotics and in Industry 4.0. It will be inherently important to all autonomous systems. Embedded intelligence will not be online all the time, nor will it be capable of being adapted or updated on a real-time basis. This means it must be secure. Autonomous intelligence must also cooperate and communicate with other systems. Standardization will make the routines and operations in this category into commodity parts to be assembled in creating a greater whole. Any machine which has complex parts today demonstrates how this will work. Automobiles have transmissions whose technology may be swapped between vehicles. Principles of operations remain the same. Multiple complex parts may be assembled to create a machine of the next level of complexity. People will interact with this environment through personal devices as well as through monitoring of their own behavior.

In looking at embedded cognition it is already apparent that there are vast differences among the various smart processes being embedded in processes and devices. Some are for visual recognition and perception; some are for speech recognition; some are for decision-making under uncertainty; and some are for translation between languages. While similar principles are involved their specific requirements are not the same. Each defines a particular utility which may be brought together with other smart components to create a more complex machine. Embedded natural language processing can be linked to big data analysis to provide answers to questions; machine learning and pattern recognition can be applied separately to issues such as fraud. Pattern recognition capabilities can be embedded in equipment used to search for cellular components, or symptoms of a disease. They may become a part of an appliance which includes artificial intelligence in the same way that it includes the use of electricity. The AI capability is simply applied at the level where it is required.

While artificial intelligence does not presume to replace the human brain, it does provide a next level of flexibility and machine control, making it possible to respond to an evolving matrix of real-time data. This can provide finer and more efficient guidance of individual processes than might be available with manual or semi-automated controls.

The applications of embedded cognition in industry and robotics are patently obvious. But it is significant that this technology will be available piecemeal and often taken for granted. Already we see elements of this appearing around the home with intelligent control mechanisms and voice-actuated modules capable of understanding and responding to limited commands. These commands might go out to a mechanism such as a multimedia device which itself contains intelligence for responding to the mood of the user. Or, it might query the refrigerator for information on user behavior that can be used to produce shopping lists or provide guidance in the purchase of food.

All of this is in a nascent state of development. The key issue is that artificial intelligence will develop for specific tasks and these tasks can then be combined in a modular fashion to create a broader and more effective process or mechanism based upon cognition in all components. Assembling such a composite appliance or process becomes a matter of orchestration—which may itself be handled by an intelligent machine.

The ability to create innumerable separate “smart” processes will act like algorithms in programming. Once established, they will become part of the language of a cognitive device universe. Combining these components will lead to the ability to create unique mechanisms with specific intelligence that will bring machine capability to a higher level of complexity and effectiveness. The beginnings of this are visible in autonomous social robots.

Companies need to monitor development of the building blocks of cognitive processes, and understand the growing capabilities. There will be no artificial brain, but there will be many tiny interactive minds that could create unforeseeable consequences as different capacities are drawn together. This will have implications for security, as well as for the future development of the IoT.