AI and Algorithms vs Fake News: The Video

Fake news is a huge and growing problem for elections, as recently demonstrated in the US. But fake news and false facts have been spreading through social media in every sector. With active disinformation campaigns on the rise, and news items increasingly reduced to “fact bites” in an anarchic global conversation, determining what is true and what is false is not only critical, it is also extraordinarily difficult.

AI has been cited as a possible route to fact checking in this environment, but this is now being understood as a potentially monumental task. There are routes to success, but they will require a close examination of false data and false narratives, a deeper understanding of underlying mechanisms, and a willingness to examine the meaning of “truth.”

Pushing back the frontiers of fake news may, in the last analysis, demand an artificial general intelligence. It could even prove a tipping point in the creation of artificial understanding.

Here is a set of videos discussing this subject, all available under the standard YouTube license; each is accompanied by its explanatory material to provide context.

Fake News Challenge on WTAE Pittsburgh (Dean Pomerleau)

Published on Jan 5, 2017

WTAE Pittsburgh reporter Michelle Wright interviews Dr. Dean Pomerleau of Carnegie Mellon University on his effort to fix the problem of fake news via a competition called the “Fake News Challenge”.

What started out as a bet with some friends has grown to become a large effort by teams of volunteers around the world, who will be competing to combine AI, machine learning and natural language processing to build systems that can help to identify fake news stories.

Pomerleau hopes the challenge will help to restore credibility to news organizations and civility to online discourse by quickly identifying fake news, hoaxes, and propaganda.

The results of the Fake News Challenge could be used by Facebook, social media companies, news outlets or fact checkers to quickly identify and stop the spread of fake news.

Tech&Check: The Future of Automated Fact-Checking (ThePoynterInstitute)

Published on Apr 4, 2016

“Tech & Check,” co-hosted by the Duke Reporters’ Lab and Poynter’s International Fact-Checking Network, was the first conference to explore the promise and challenges of automated fact-checking.

How Bots are Automating Fact-Checking (SXSW 2017)

Published on Mar 17, 2017

As news organizations and researchers explore how they can automate journalism, fact-checkers are breaking new ground with some intriguing experiments. At the University of Texas at Arlington, computer scientists have built ClaimBuster, a tool that can do the work of a dozen college interns by sifting through massive amounts of text to find claims to fact-check. At Duke, researchers have created iCheck, which automates the time-consuming process of checking claims about a candidate’s voting record. These and other tools show promising new ways that fact-checking can be automated. Also on the horizon: instant “pop-up” fact-checking on live TV.
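The sifting step that tools like ClaimBuster perform (scoring sentences for “check-worthiness” before handing them to fact-checkers) can be illustrated with a toy heuristic. This sketch is purely illustrative, not ClaimBuster’s actual model, which is trained on labeled sentences; the cue words and weights here are invented:

```python
import re

def claim_score(sentence: str) -> float:
    """Score how 'check-worthy' a sentence is, using simple surface cues:
    numbers, percentages, comparatives, and factual verbs raise the score."""
    score = 0.0
    if re.search(r"\d", sentence):
        score += 0.4  # contains a number or statistic
    if re.search(r"%|percent", sentence):
        score += 0.2  # explicit percentage claim
    if re.search(r"\b(most|least|highest|lowest|more|fewer)\b", sentence, re.I):
        score += 0.2  # comparative language
    if re.search(r"\b(voted|increased|decreased|spent|cut)\b", sentence, re.I):
        score += 0.2  # verbs typical of factual claims
    return min(score, 1.0)

def find_claims(text: str, threshold: float = 0.4):
    """Split text into rough sentences and keep those above the threshold."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if claim_score(s) >= threshold]
```

In practice a trained classifier replaces the hand-tuned weights, but the pipeline shape (split into sentences, score, rank) is the same.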

AI Can Tell Real News from the Fake (Infosys)

Published on Mar 19, 2017

People who get their news from social media sites are particularly susceptible to fake news. Oftentimes readers don’t even realize that what appears on social media may not be legitimate news. It is time for large news platforms to assess the current situation and figure out which AI-enabled security technology could best be wrapped around their proprietary, consumer-facing offerings. Not only should content providers be proactive about protecting the authenticity of information on their platforms; they must also acknowledge that for the IoT to succeed, robust, AI-powered security measures must be in place.




Car Wars: Intel Bags Mobileye

Intel is acquiring autonomous driving company Mobileye in a deal valued at $15.3 billion, expected to close toward the end of this year. The acquisition of the Israeli firm, whose technology is used by 27 companies in the auto industry, raises a number of interesting issues in the self-driving vehicle technology race.

Intel has been pursuing autonomous vehicle technology, but this initiative, one of the 10 largest acquisitions in the tech industry, brings that pursuit front and center. The key to Mobileye’s autonomous solution lies in its silicon. Mobileye has developed its EyeQ® family of system-on-chip (SoC) devices, which support complex and computationally intense vision processing while maintaining low power consumption. Mobileye is currently developing its fifth-generation chip, the EyeQ5, to act as the visual central computer for the fully autonomous self-driving vehicles expected to appear in 2020. The EyeQ chips employ proprietary computing cores optimized for computer vision, signal processing, and machine learning tasks, including deep neural networks. These cores are designed specifically to address the needs of Advanced Driver Assistance Systems (ADAS).

Intel’s traditional role as a chip developer has been to provide the building blocks for technology, and it is moving forcefully in this direction, partly in response to growing competition in embedded machine learning from the likes of Nvidia and Qualcomm, both of which are also moving into the autonomous vehicle area. Self-driving cars have become the nexus of machine learning development because of the huge profits anticipated by the automobile, transportation, and logistics industries. The evolution of autonomous vehicles, particularly with deep learning capabilities in silicon, will put additional pressure on artificial intelligence skills across all industry sectors, while also spurring an explosion of innovation and speeding the development of these systems.

Intel intends to form an autonomous driving unit combining its current Automated Driving Group (ADG) and Mobileye. The group will be headquartered in Israel and led by Mobileye co-founder Amnon Shashua, currently Mobileye’s Chairman and CTO and a professor at Hebrew University.

According to the combined press release:

This acquisition will combine the best-in-class technologies from both companies, spanning connectivity, computer vision, data center, sensor fusion, high-performance computing, localization and mapping, machine learning and artificial intelligence. Together with partners and customers, Intel and Mobileye expect to deliver driving solutions that will transform the automotive industry.

The new organization will support both companies’ existing production programs and build upon the large number of relationships that Mobileye maintains with OEMs, tier 1 automotive suppliers, and semiconductor partners.

Intel’s interests in this deal are likely to be diverse. Among the potential benefits are:

  • Potentially four terabytes of data per hour to be processed, creating large-scale opportunities for Intel’s high-end Xeon processors and Mobileye’s latest generation of SoCs.
  • Moving to Israel, where Intel already has a significant presence, potentially isolates its research and testing from the competitive hotbed of Silicon Valley, shielding employees from poaching. It also avoids current immigration issues.
  • Additional competitive advantages within AI and embedded deep learning, which are the next generation targets of Intel’s silicon competition.

It is worth noting that this deal is a general boost to autonomous vehicles and will inevitably lead to greater concentration of resources in the field. Although relying upon a common supplier of autonomous systems makes sense economically, it also reduces competitive differentiation.

The number of companies involved continues to grow as the implications stretch out through the entire transportation-related sector. We have covered a number of these systems in recent blogs here (Car Wars: DiDi Chuxing Roars into the Valley with 400 Million Users; Car Wars: Ford Adds Billion Dollar Investment Acquisition to its AI; All Things Need Autonomy: TomTom Grabs Autonomos; Uber Doubles Down on AI with Geometric Acquisition; Qualcomm Buys NXP for IoT and Cars). The net result will be to create a huge rush for talent in the machine learning space, as well as in all of the areas related to integration with automobile systems. This will increase the speed of evolution for embedded AI, which will filter rapidly into other areas of business and process enablement, though impeded by the availability of talent.


Artificial Realities: The Video

Virtual Reality, Augmented Reality, and Mixed Reality are about to enter the mainstream through a number of different routes, aligned with all of the technologies of an increasingly digital world.  Exciting new possibilities are already becoming apparent, led by gaming and by the complex “hyper-reality” visions of science fiction. As yet, companies are struggling to find applications for today in a technology that looks so much like tomorrow. But breakthroughs may be imminent as vendors such as Microsoft, Google, and exciting startups begin to create the future in this territory.

Here we have assembled a mixed set of videos looking at different aspects of this technology. All are available under a standard YouTube license.

Windows Holographic: Enabling a World of Mixed Reality (Windows)

Published on Jun 1, 2016

Windows Holographic enables a world of mixed reality – where devices work together – regardless of whether they are developed for virtual reality, augmented reality, or anything in-between. Windows Holographic will help expand the Windows ecosystem far beyond the PC.

Keynote – Into the Future, with Magic Leap’s Graeme Devine (Games for Change)

Published on Jul 26, 2016

Magic Leap’s Chief Game Wizard Graeme Devine shares the startup’s vision for mixed reality in the classroom and making virtual objects appear in real life.

Intelligent Agents, Augmented Reality & the Future of Productivity (O’Reilly)

Published on Jun 13, 2016

A conversation with Satya Nadella, CEO of Microsoft, and Tim O’Reilly at the Next:Economy Summit 2015 in San Francisco, California.

MRO.AIR – Artificial Intelligent Reality (Connectar)

Published on Dec 9, 2015

Connectar presents its MRO.air product, described as

“An Artificial Intelligent reality that works like an digital aura to help, guard and improve mechanics in maintenance and repair organizations. The promise is to increase the efficiency with over 30% and to reduce errors with over 50% and this is how.”

Augmented Reality (World Economic Forum)

Published on Apr 23, 2012

Augmented reality is bringing a new sweeping change in the way we communicate, democratising the way we produce and consume information.



Car Wars: DiDi Chuxing Roars into the Valley with 400 Million Users

Chinese ride-hailing and transportation company DiDi Chuxing has just opened its own AI vehicle lab in Silicon Valley. DiDi Labs in Mountain View, CA, will focus on AI-based security and intelligent driving technologies, hoping to grab talent in the region as the battle for AI skills continues (see Car Wars: Ford Adds Billion Dollar Investment Acquisition to its AI Arsenal).

According to Cheng Wei, the company’s founder and CEO, “As we strive to bring better services to broader communities, DiDi’s international vision now extends to building the best-in-class international research network, advancing the global transportation revolution by leveraging innovative resources. The launch of DiDi Labs is a landmark in creating this global nexus of innovation.”

Hidden from the spotlight, DiDi has developed a huge amount of experience in China, which provides a substantial data source for its AI efforts. DiDi acquired Uber China in August 2016. The company’s size and reach are truly impressive compared with its rivals. It offers a full range of technology-based services for nearly 400 million users across more than 400 Chinese cities. These include taxi hailing, private car hailing, Hitch (social ride-sharing), DiDi Chauffeur, DiDi Bus, DiDi Minibus, DiDi Test Drive, DiDi Car Rental, and DiDi Enterprise Solutions. As many as 20 million rides were completed on DiDi’s platform daily in October 2016, making DiDi the world’s second largest online transaction platform, next only to China’s Alibaba-owned shopping website, Taobao.

It is also important to note that DiDi is estimated to be worth about US$35 billion, and its investors reportedly include all three of China’s Internet giants—Alibaba, Tencent, and Baidu—as well as Apple. The current initiative is centered on security issues, but location and stated intent make the acquisition of Valley AI talent a clear objective. Uber’s mass hiring from Carnegie Mellon’s robotics lab in 2015 made every company in the automotive intelligence sector nervous about talent. DiDi has also recently faced a range of regulatory issues in its Chinese market; the current move may deflect some of that pressure while playing into the Chinese government’s push to develop innovation.

DiDi Labs will be led by Dr. Fengmin Gong, Vice President of DiDi Research Institute, with a number of prominent researchers including top automobile security expert Charlie Miller, known as one of the scientists who famously hacked a Jeep Cherokee in 2015. Current projects span the areas of cloud-based security, deep learning, human-machine interaction, computer vision and imaging, as well as intelligent driving technologies.

DiDi has been involved with autonomous vehicles for some time, and this move largely brings that involvement into the open. DiDi previously established the DiDi Research Institute to focus on AI technologies including machine learning and computer vision. It also has ongoing research in big data analytics based on the 70TB of data, 9 billion routing requests, and 13 billion location points provided by its platforms.

In founding the new lab, the company also launched the DiDi-Udacity Self-Driving Car Challenge, an open-source self-driving competition in which player teams are invited to create an Automated Safety and Awareness Processing Stack (ASAPS) to improve general safety metrics for human and intelligent driving scenarios based on real data.

According to DiDi CTO Bob Zhang, “In the next decade, DiDi will play a leading role in innovation at three layers: optimization of transportation infrastructure, introduction of new energy vehicles and intelligent driving systems, and a shift in human-automotive relationship from ownership to shared access.”


Amazon Gives Alexa a PhD Boost

Amazon has just announced the Alexa Fund Fellowship program, a year-long doctoral program at four universities to take on technology problems that can be solved with Amazon’s NLP home assistant, Alexa. Alexa has become increasingly popular in recent years through its provision of cloud-based “skills” that enable people to interact with devices through voice commands in an intuitive way. This has created numerous alliances and connections with devices across the IoT, as we have discussed in a previous post, Digital Assistants Coming of Age with Alexa.

Amazon is a strong competitor in a tough field that includes various overlaps with Google Home, Microsoft Cortana, Apple Siri, IBM Watson, and others. Competition is not only about market share and market direction; it is also about available skills, alliances, and the formation of new research streams that focus upon favorable technology.

The Fellowship Program

According to Amazon’s developer site:

The Amazon Alexa Fund is establishing an Alexa Fund Fellowship program to support academic institutions leading the charge in the fields of text-to-speech (TTS), natural language understanding (NLU), automatic speech recognition (ASR), conversational artificial intelligence (AI), and other related engineering fields. The goal of the Alexa Fund Fellowship program is to educate students about voice technology and empower them to create the next big thing.

The four participating universities are Carnegie Mellon, the University of Waterloo, the University of Southern California (USC), and Johns Hopkins. The program provides cash funding, Alexa-enabled devices, and mentorship from the company’s Alexa Science teams in the development of a graduate or undergraduate class curriculum related to Alexa technology.

The first two announced programs are from Carnegie Mellon and Waterloo. Carnegie’s Dialog Systems course “will teach participants how to implement a complete spoken language system while providing opportunities to explore research topics of interest in the context of a functioning system.” Waterloo’s Fundamentals of Computational Intelligence will “introduce students to novel approaches for computational intelligence based techniques including: knowledge-based reasoning, expert systems, fuzzy inferencing and connectionist modeling based on artificial neural networks.”

It is interesting to note that the funded courses might include broader agendas, furthering general progress in AI–but in an Alexa context.

The Investment Context

This Fellowship is part of a spate of recent R&D, startup, and academic investments in Alexa. It comes on the heels of Amazon’s September announcement of the Alexa Prize, a competition between 12 selected university teams to build a socialbot that can converse coherently and engagingly with humans on popular topics for 20 minutes, using Alexa technology.

The Alexa Fund itself was started in June 2015, with goals described on its landing page:

The Alexa Fund provides up to $100 million in venture capital funding to fuel voice technology innovation. We believe experiences designed around the human voice will fundamentally improve the way people use technology. Since introducing Alexa-enabled devices like the Amazon Echo, we’ve heard from developers, device-makers, and companies of all sizes that want to innovate with voice. Whether that’s creating new Alexa capabilities with the Alexa Skills Kit (ASK), building devices that use Alexa for new and novel voice experiences using the Alexa Voice Service (AVS), inventing core voice-enabling technology, or developing a product or service that can expand the boundaries of voice technology, we’d love to talk to you.
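As a rough illustration of what building with the Alexa Skills Kit involves, here is a minimal, hypothetical custom-skill handler in Python. The routing logic and intent name are invented for the example; the JSON response shape follows the ASK custom-skill format:

```python
def handle_request(event: dict) -> dict:
    """Minimal Alexa Skills Kit-style handler: route on the request type
    and return a spoken reply in the ASK JSON response format."""
    req_type = event.get("request", {}).get("type", "")
    if req_type == "LaunchRequest":
        text = "Welcome! Ask me anything."
    elif req_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]  # e.g. a custom intent
        text = f"You invoked the {intent} intent."
    else:
        text = "Goodbye."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```

A real skill would register its intents in an interaction model and typically deploy the handler as an AWS Lambda function; the sketch shows only the request/response contract.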

In past years, the Fund has focused upon entrepreneurship, with investments at various funding stages in 23 companies. Although many amounts are undisclosed, the top recent known investments have been in Thalmic Labs, a wearables company; ecobee, a provider of Wi-Fi enabled smart thermostats; and Ring, a wireless video doorbell company.

Recently, the Fund has also created the Alexa Accelerator initiative in cooperation with global entrepreneurial ecosystem company, Techstars. The Accelerator is a 13-week startup course for 10 companies running from July to September 2017. According to Fund spokesperson Rodrigo Prudencio:

We will seek out companies tackling hard problems in a variety of domains—consumer, productivity, enterprise, entertainment, health and wellness, travel—that are interested in making an Alexa integration a priority. We’ll also look for companies that are building enabling technology such as natural language understanding (NLU) and better hardware designs that can extend or add to Alexa’s capabilities.

They are Not Alone

Of course, Amazon is not unique in creating and funding programs that support its AI initiatives within both the academic and entrepreneurial communities. There is a lot of funding available, and this will support progress in these technologies. Such initiatives also respond to the need to educate AI specialists in a hurry, and may provide an edge for one company’s interpretation of the AI and robotics future.


 


Deconstructing Machine Learning: Three Easy Pieces

This is a medley of pieces that help to define machine learning in its various forms. Rather than this blog’s usual “The Video” approach, it includes a video, a graphic, and a video series. The video, by Pedro Domingos, attempts to classify the main approaches to machine learning (and there is a link to the slides); the graphic is a chart from the Asimov Institute that attempts to diagram the various neural network architectures; and the last piece is a set of videos from Welch Labs that takes you through the process of building and training a neural network in Python (with a link to the supporting code).

The Five Tribes of Machine Learning (Association for Computing Machinery (ACM))

Published on Dec 29, 2015

Trying to categorize the growing range of machine learning techniques can be difficult, but understanding the different approaches will be critical. Pedro Domingos of the University of Washington has been working on a classification scheme for some time. According to the presenter:

There are five main schools of thought in machine learning, and each has its own master algorithm – a general-purpose learner that can in principle be applied to any domain. The symbolists have inverse deduction, the connectionists have backpropagation, the evolutionaries have genetic programming, the Bayesians have probabilistic inference, and the analogizers have support vector machines. What we really need, however, is a single algorithm combining the key features of all of them. In this webinar I will summarize the five paradigms and describe my work toward unifying them, including in particular Markov logic networks. I will conclude by speculating on the new applications that a universal learner will enable, and how society will change as a result.

Presenter: Pedro Domingos, University of Washington in Seattle

Slides are available.
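To make one of the five paradigms concrete: the Bayesians’ master algorithm, probabilistic inference, is at heart repeated application of Bayes’ rule. A minimal sketch, with numbers invented for the example:

```python
def posterior(prior: float, likelihood: float, evidence_given_not: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood * prior
    denominator = numerator + evidence_given_not * (1.0 - prior)
    return numerator / denominator

# Suppose a hypothesis has a 10% prior; the observed evidence appears 80% of
# the time when the hypothesis is true, and 5% of the time when it is false.
p = posterior(prior=0.10, likelihood=0.80, evidence_given_not=0.05)
# Weak prior, strong evidence: the posterior rises to 0.64.
```

Each tribe's master algorithm generalizes differently; a full Bayesian learner would chain such updates over many parameters and observations.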

A Mostly Complete Chart of Neural Networks (Asimov Institute)

From The Neural Network Zoo
Posted on September 14, 2016 by Fjodor Van Veen

The Asimov Institute developed a chart that provides a quick overview of Neural Network architectures. According to Fjodor Van Veen:

With new neural network architectures popping up every now and then, it’s hard to keep track of them all. Knowing all the abbreviations being thrown around (DCIGN, BiLSTM, DCGAN, anyone?) can be a bit overwhelming at first. So I decided to compose a cheat sheet containing many of those architectures. Most of these are neural networks, some are completely different beasts. Though all of these architectures are presented as novel and unique, when I drew the node structures… their underlying relations started to make more sense.

The chart:

Video Series: Neural Networks Demystified (Welch Labs)

7 videos, Oct 2, 2015

Published on Nov 4, 2014

This short series builds and trains a complete Artificial Neural Network in Python.

Part 1: Data + Architecture
Part 2: Forward Propagation
Part 3: Gradient Descent
Part 4: Backpropagation
Part 5: Numerical Gradient Checking
Part 6: Training
Part 7: Overfitting, Testing, and Regularization
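The seven parts above map onto a short program. Here is a rough NumPy sketch in the series’ spirit, not its exact code; the tiny dataset and network shape are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Part 1: data + architecture (2 inputs -> 3 hidden units -> 1 output)
X = np.array([[3.0, 5.0], [5.0, 1.0], [10.0, 2.0]]) / 10.0  # normalized inputs
y = np.array([[0.75], [0.82], [0.93]])                      # targets in (0, 1)
W1 = rng.standard_normal((2, 3))
W2 = rng.standard_normal((3, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, W2):
    # Part 2: forward propagation
    a2 = sigmoid(X @ W1)     # hidden-layer activations
    yhat = sigmoid(a2 @ W2)  # network output
    return yhat, a2

# Parts 3-6: train by gradient descent using backpropagated error signals
lr = 1.0
for _ in range(10000):
    yhat, a2 = forward(X, W1, W2)
    delta3 = (yhat - y) * yhat * (1 - yhat)   # output-layer error
    delta2 = (delta3 @ W2.T) * a2 * (1 - a2)  # Part 4: backpropagation
    W2 -= lr * (a2.T @ delta3)
    W1 -= lr * (X.T @ delta2)

loss = float(np.mean((forward(X, W1, W2)[0] - y) ** 2))  # mean squared error
```

Part 5’s numerical gradient checking would compare these analytic gradients against finite differences, and Part 7’s regularization would add a weight penalty to the loss to curb overfitting.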

Supporting Code:
https://github.com/stephencwelch/Neur…..