Swarm Intelligence for Robots, Algorithms, and Humans: The Video

Swarm intelligence occurs in nature when social creatures such as ants combine their individual behaviors in an integrated way for a collective purpose. Understanding swarm intelligence and replicating it is of increasing interest as we move into a world of AI and clustered robots. Understanding emergent behavior in swarms also sharpens our concepts of autonomy and organization at every level of intelligence.

Here we provide a few videos that look at the potential for applying swarm concepts to human thought, as well as to robotics. As usual, the videos here are available on a standard YouTube license.

What is Swarm Intelligence? (UNU videos)

Published on Feb 3, 2016
From Unanimous AI, a company focusing upon human swarms. According to the company website:
Unlike traditional AI, which aims to replicate human intelligence, our Swarm A.I. technologies builds intelligent systems that keep people in the loop, leveraging our natural human knowledge, insights, and intuitions. At Unanimous, we use technology to amplify human intelligence, not replace it.
(The final video in this set is also from Unanimous.)

Taming the swarm – Collective Artificial Intelligence (TEDx Talks)

Published on Jan 14, 2016

by Radhika Nagpal

Radhika Nagpal is the Kavli Professor of Computer Science at Harvard University and a core faculty member of the Wyss Institute for Biologically Inspired Engineering. At Harvard, she leads the Self-organizing Systems Research Group (SSR) and her research combines computer science, robotics, and biology. Her main area of interest is how cooperation can emerge or be programmed from large groups of simple agents.

Can A Thousand Tiny Swarming Robots Outsmart Nature? (Deep Look)

Published on Jul 21, 2015

How does a group of animals — or cells, for that matter — work together when no one’s in charge? Tiny swarming robots–called Kilobots–work together to tackle tasks in the lab, but what can they teach us about the natural world?

How do you simultaneously control a thousand robots in a swarm? The question may seem like science fiction, but it’s one that has challenged real robotics engineers for decades. In 2010, the Kilobot entered the scene. Now, engineers are programming these tiny independent robots to cooperate on group tasks. This research could one day lead to robots that can assemble themselves into machines, or provide insights into how swarming behaviors emerge in nature.
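
Controlling a thousand robots one by one would be hopeless; the trick the Kilobot work illustrates is giving every robot the same simple local rules and letting the group behavior emerge. The short Python sketch below is a toy illustration of that principle, not the actual Kilobot algorithms: each simulated agent sees only nearby neighbors and applies two hypothetical rules (move toward them, but keep a little distance), and clusters form with no central controller. All parameters are arbitrary choices for demonstration.

```python
# Minimal sketch of decentralized swarm behavior (not Kilobot firmware):
# every agent runs the same local rules with no global view, yet the
# population drifts into clusters. Parameters are arbitrary.
import math
import random

NUM_AGENTS = 100        # population size (hypothetical)
SENSE_RADIUS = 0.15     # how far an agent can "see", in arena units
STEP = 0.01             # distance moved per tick
SEPARATION = 0.03       # preferred minimum spacing between agents

def neighbors(i, agents):
    """Indices of agents within sensing range of agent i (local view only)."""
    xi, yi = agents[i]
    return [j for j, (xj, yj) in enumerate(agents)
            if j != i and math.hypot(xj - xi, yj - yi) < SENSE_RADIUS]

def step(agents):
    """One synchronous update: every agent applies the same two local rules."""
    new_positions = []
    for i, (x, y) in enumerate(agents):
        nbrs = neighbors(i, agents)
        if not nbrs:
            new_positions.append((x, y))
            continue
        # Rule 1 (cohesion): head toward the centroid of visible neighbors.
        cx = sum(agents[j][0] for j in nbrs) / len(nbrs)
        cy = sum(agents[j][1] for j in nbrs) / len(nbrs)
        dx, dy = cx - x, cy - y
        # Rule 2 (separation): back away from any neighbor that is too close.
        for j in nbrs:
            d = math.hypot(agents[j][0] - x, agents[j][1] - y)
            if 0 < d < SEPARATION:
                dx += (x - agents[j][0]) / d * SEPARATION
                dy += (y - agents[j][1]) / d * SEPARATION
        norm = math.hypot(dx, dy) or 1.0
        new_positions.append((x + STEP * dx / norm, y + STEP * dy / norm))
    return new_positions

if __name__ == "__main__":
    random.seed(1)
    swarm = [(random.random(), random.random()) for _ in range(NUM_AGENTS)]
    for _ in range(200):
        swarm = step(swarm)
    # Rough summary: clustering pulls agents closer to their nearest neighbors.
    nearest = [min(math.hypot(x - a, y - b)
                   for j, (a, b) in enumerate(swarm) if j != i)
               for i, (x, y) in enumerate(swarm)]
    print("mean nearest-neighbor distance:", sum(nearest) / len(nearest))
```

Swapping in different local rules (gradient following, edge detection, shape formation) is how researchers coax richer collective behaviors out of the same decentralized setup.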

In the future, this kind of research might lead to collaborative robots that could self-assemble into a composite structure. This larger robot could work in dangerous or contaminated areas, like cleaning up oil spills or conducting search-and-rescue activities.

DEEP LOOK is a short video series created by KQED San Francisco and presented by PBS Digital Studios. This is made available on a standard YouTube license.

Human Swarming (IEEE Blended Intelligence, 2015. UNU videos)

Published on Oct 7, 2015

Dr. Louis Rosenberg of Unanimous AI (same as first video) describes the basics of “Human Swarming”, discussing the natural models that inspire Human Swarms and the benefits of swarms over votes, polls, markets, and other methods for tapping the Collective Intelligence of groups.




Animated Humor: Okay Rodney on the IoT, Cloud and Security

These are part of a set of cartoon animations that I created several years ago to enliven discussion of these subjects. All are under a standard YouTube license and were created very simply, using text-to-speech and PowerPoint graphics exclusively.

Okay Rodney: The Internet of Things (IoT)

Okay Rodney: The Mobile Cloud

Okay Rodney: Security




Leading EU Bank Forges Ahead with AI Investment

A leading EU bank, Banco Santander SA, has just invested in the AI companies Personetics Technologies and Gridspace, highlighting recent moves in the financial services industry to embrace AI across a growing swathe of operations after initial reluctance. The size of the investment is as yet unknown, but it promises to be fairly large, and it is also notable that both are recently established global companies with major industry clients.

The investments were made through Santander’s venture arm, which has joined a relatively small number of investment ventures focusing upon bringing innovation to financial technologies (fintech). The focus is upon startups that challenge traditional financial institutions and that, in the process, are bringing AI to the financial industry as a whole.

According to the company, Santander InnoVentures is based in London and maintains a global reach. It builds on a philosophy of collaboration and partnership with small and start-up companies.

We launched our $100 million fund in July 2014 to get closer to the wave of disruptive innovation in the FinTech space. We aim to support the digital revolution to make sure Santander customers around the world benefit from the latest know-how and innovations across the Banking Group’s geographies.

The fund is part of the Santander Group’s broader innovation agenda, in which we help FinTech companies grow from a very early stage (i.e. seed) to a more mature stage.

While the two new investments are focused specifically on customer service, they open the way for increased involvement in more sophisticated AI capable of operating across a broad spectrum of financial services. Personetics creates “chatbots” that respond to customer questions through social media, focusing specifically on finance, while Gridspace is used to monitor call center conversations.

Gridspace is a collaboration between SRI International (developer of Siri) and a multidisciplinary engineering team.

From the Gridspace web site:

Gridspace is the leading platform for Conversational Intelligence. It enables companies to analyze and operationalize the conversational speech and text inputs others can’t. It provides everything you need to make your company more aware, customer-friendly, profitable, and secure. Get communications that talk back.

Personetics presents a more fintech-specific profile. From the Personetics web site:

Personetics enables the world’s leading financial institutions to transform the way they engage and serve their customers in the digital age. We bring a unique combination of financial services domain expertise, tightly embedded into a cognitive application framework using AI, predictive analytics, and Machine Learning technologies to deliver a personalized experience that help customers better manage their financial lives.

Combining built-in financial proficiency with advanced cognitive capabilities, our solutions enable financial institutions to understand and anticipate individual customer behavior and needs, communicate in a conversational and personalized manner, and continuously learn and improves from each interaction.




Once More into the Breach: Times of Great Change Bring Opportunity

Times of great change are also times of great opportunity. We are all aware of the erroneous claim that the Chinese ideograms for “crisis” combine “danger” and “opportunity.” Politicians have pulled the notion out since 1938, most famously JFK. It remains popular because the sentiment rings true, even though the derivation is false. But beware!

Correct Vision, Wrong Characters

In a greater sense, the current upheavals in politics across the world should be understood as a natural result of the changes in technology that have ushered in the 21st century. We have long argued that recent developments such as the growth of social media, the development of stronger AI, the spread of the Internet, and all of the trappings of today’s online universe are having a revolutionary impact. But when you equate the Internet with Gutenberg and calmly treat the technological results as revolutionary, creating great opportunities for the future, you cannot ignore the inevitable political and social traumas that come with such radical change. The Gutenberg press made the Protestant Reformation possible by allowing vast numbers of Bibles to be distributed in regional languages. Wars were subsequently fought for hundreds of years; powers rose and fell; people were burned at the stake; and some moved across the seas to America.

The technical revolution we are facing today will take, perhaps, hundreds of years to be fully incorporated into the social fabric. Even as the technology continues to evolve, we have not yet come to grips with a digital world and the instantaneous communication it makes possible. As we add technologies such as Big Data Analytics and AI to the equation, the situation becomes even more difficult. Human society must now adapt to competing intelligence: not the robot revolution portrayed in science fiction, but rather the type of intelligence that will be incorporated into a wide range of human tools and activities. The complex result of this association will inevitably yield new visions of how people must live together and how the various tribes that populate the earth will cooperate, or not, going forward.

The changes that are occurring appear on the surface to be minor for most people, since lives continue, errands must be run, children must be schooled, and so forth. But the greater movements of society, including jobs, economies, interactions, global relationships, political groupings, and all the rest that exists on a meta level, are in flux, with a need to respond continuously to new situations.

All of this yields enormous uncertainties. While this creates great opportunity for those with foresight in areas subject to positive change, it also means that populations react to unforeseeable consequences. This is the basis for international conflict, which, in a complex society, inevitably creates a maelstrom.

Recent political movements such as Brexit and the US election have certainly responded to changes in technology, and technology has also added to the uncertainty. Campaigns are being waged globally with big data, hacking, and extensive use of social networks. People are enraged by tweets, and real news clashes with fake news and disinformation so quickly that verification is impossible.

Is extreme conflict the new normal? Will we disengage emotions from the constant barrage of new developments? Not likely. But, one interesting possibility looms. If we reach the point that we can no longer adapt rapidly to changing situations due to limited information and emotional disturbance, then it may gradually become time to bring in automated leadership based upon big data and artificial intelligence. Some would say that this is already happening. But such a result creates a new category of risk, indeed.


Verizon Grabs Skyward to Help Feed the Drones

Verizon has just acquired drone operations management company Skyward to add to its growing concentration in the IoT. Drones have enormous potential, and management will be of increasing importance in expanding deployment opportunities.

Drones have been operating at the edges of the technological horizon for several years now, but recent developments have made them more interesting and more important for business. Today’s drones come in innumerable sizes and shapes, and miniaturization of technology means that they can carry more sensors and provide more useful information about the areas over which they pass. They have also been given new ways to interact, such as the capability to manipulate controls or to pick up and release objects. This is creating innumerable opportunities, particularly in areas such as agriculture, aerial photography, mapping, and surveying. Regulations are also being gradually adjusted to accommodate new concepts of drone usage.

As the importance of drones continues to grow, concerns have been raised regarding air safety and interference with other devices. Regulations have limited operations to line-of-sight, and there is a range of rules that must be followed. Additionally, operating drones can be complex, particularly if individuals with piloting experience are not available or the mission demands extensive coordination. As drones continue to develop, we are beginning to see hive operations involving innumerable drones, spectacularly illustrated recently by Intel’s Super Bowl lighting display, along with various levels of autonomy and the precise coordination issues they raise.

Within this mix, huge opportunities exist, and this is the objective of Verizon’s Skyward acquisition. Verizon itself has been developing an LTE-based network to allow operation of unmanned aerial vehicles (UAVs) beyond visual line of sight (BVLOS). As line-of-sight restrictions are lifted, remote operation of drones will become significantly more common, and drone fleets will need to be managed.

Verizon began work on its in-flight LTE operations in 2014 and expanded operations in 2016 by engaging American Aerospace (AATI) to test connectivity between aerial platforms and Verizon’s 4G LTE network. Verizon’s airborne LTE operations (ALO) initiative has now undergone technical trials across the country in a combination of unmanned and manned aircraft using the company’s 4G LTE network.

Entering this area at an early stage permits Verizon to build the network connections and experience it will require to offer services for complex drone operations involving many devices. But operating and managing such networks demands that devices remain within FAA parameters, meet regulations, and function in accordance with agreements, such as insurance coverage, that may be tied to specific task environments. Skyward is an operations management solution for commercial drone businesses: a cloud-based platform that integrates a drone airspace map with flight planning tools, fleet and equipment management, and a digital system of record. This makes it possible to coordinate a complex drone operation with confidence, making the technology more accessible and expanding the opportunity horizon for drone usage.
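
Skyward’s actual platform and APIs are proprietary and are not detailed here, so the following Python sketch is purely illustrative of what “operations management” means in practice: an automated pre-flight check that blocks a mission plan until it satisfies basic constraints. Every field, function name, and rule below is a hypothetical stand-in (the 400-foot ceiling echoes the well-known FAA Part 107 limit for small drones, in simplified form).

```python
# Hypothetical sketch of a pre-flight compliance check of the kind a drone
# operations platform automates. None of these names come from Skyward's
# actual API; the rules are simplified stand-ins.
from dataclasses import dataclass

MAX_ALTITUDE_FT = 400  # simplified echo of the FAA Part 107 ceiling for small UAS

@dataclass
class FlightPlan:
    pilot_certified: bool      # remote pilot certificate on file
    insured: bool              # insurance agreement covers this task environment
    max_altitude_ft: float     # planned ceiling for the mission
    waypoints: list            # (lat, lon) pairs
    restricted_zones: list     # (center_lat, center_lon, radius_deg) no-fly circles

def check_plan(plan: FlightPlan) -> list:
    """Return a list of human-readable violations; an empty list means cleared."""
    violations = []
    if not plan.pilot_certified:
        violations.append("no certified remote pilot assigned")
    if not plan.insured:
        violations.append("mission not covered by an insurance agreement")
    if plan.max_altitude_ft > MAX_ALTITUDE_FT:
        violations.append(f"planned altitude {plan.max_altitude_ft} ft exceeds the "
                          f"{MAX_ALTITUDE_FT} ft ceiling")
    for lat, lon in plan.waypoints:
        for zlat, zlon, radius in plan.restricted_zones:
            # Crude flat-earth distance check; a real platform would use airspace data.
            if ((lat - zlat) ** 2 + (lon - zlon) ** 2) ** 0.5 < radius:
                violations.append(f"waypoint ({lat}, {lon}) falls inside a restricted zone")
    return violations

if __name__ == "__main__":
    plan = FlightPlan(
        pilot_certified=True,
        insured=False,
        max_altitude_ft=450,
        waypoints=[(45.52, -122.68)],
        restricted_zones=[(45.52, -122.67, 0.05)],
    )
    for problem in check_plan(plan):
        print("BLOCKED:", problem)
```

The value of centralizing checks like these grows quickly once a fleet has many aircraft, many pilots, and missions that cross different regulatory zones.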

Verizon is taking a leadership role in this area and is betting on the further development of drones and IoT devices as an additional service opportunity for the company. It already draws over $1 billion in revenue from the IoT space, and drone services are a natural outgrowth of this concentration.

According to Verizon’s news release, Mike Lanman, senior vice president of Enterprise Products and IoT, said:

“Last quarter we announced our strategy to drive innovation and widespread adoption for in-flight wireless connectivity through our Airborne LTE Operations (ALO) initiative, a new service to simplify certification and connectivity of wireless drones. This acquisition is a natural progression of our core focus on operating in innovative, high-growth markets, leveraging our network, scale, fleet management, device management, data analytics and security enablement capabilities and services to simplify the drone industry and help support the adoption of IoT.”

Undoubtedly AI will play a part in creating more autonomous drone systems; but these operations, given their complexity, must remain within the bounds of regulation and human legal constraints.

In a greater sense, operating a centralized solution that manages a regulatory environment is very similar to what is occurring within the governance, risk, and compliance (GRC) space. As new technology evolves and needs to be fitted into human society and its legal and economic boundaries, regulatory management platforms must be put into place to ensure that behavior, no matter how autonomous or operator-controlled, remains within parameters that allow harmonious operation of all components of our increasingly complex technological world.


Car Wars: Ford Adds a Billion-Dollar Investment to Its AI Efforts

Ford has just invested $1 billion in a startup called Argo AI that will operate as a subsidiary, focusing upon autonomous vehicles and AI. Given the recent wave of mergers and acquisitions in the AI area, this should come as no surprise. Autonomous vehicles represent the cutting edge of a number of AI and machine learning technologies, and we have considered the problem of autonomy in several posts (Autonomy INFOGRAPHIC, Challenges of Autonomy: The Video, Autonomous Social Robots: The Video). In the automobile and transportation sector, the pursuit of autonomous technologies is particularly aggressive and well funded.

Argo AI’s brief, as stated on the startup website:

We founded Argo AI to tackle one of the most challenging applications in computer science, robotics and artificial intelligence — self-driving vehicles. While technology exists today to augment the human driver and automate the driving task up to a certain level of capability, replacing the human driver remains an extremely complex challenge. Advances in artificial intelligence, machine learning and computer vision are required to solve it. These technologies will eventually lead us to a new generation of the automobile — a vehicle that is connected, intelligent, and able to safely operate itself alone or as part of a shared fleet. The potential of these shared fleets of self-driving vehicles will be one of the most transformative advancements in this century.

It is a transformative vision that fits well with Ford’s recent moves, as well as with initiatives throughout the transportation industries.

In recent months we have seen Google spin off Waymo (its autonomous vehicle unit), GM tie up with IBM Watson, TomTom grab Autonomos, and Uber acquire Geometric Intelligence. In 2015, Uber hired away much of the staff of Carnegie Mellon’s robotics center. Noting that Argo AI draws upon personnel from Uber and Waymo, it is clear that the battle for AI talent, particularly in deep learning, is in full swing and is likely to have a continuing impact on how AI progresses, even as skills become more widely available in the next few years.

Ford’s investment does have an interesting twist. In attempting to navigate the skills shortage, Ford has created Argo as a majority-owned subsidiary with the possibility of equity sharing for the AI employees who come aboard. This could be attractive to less well-known but proficient practitioners wishing to develop advanced skills in this area. The fact that Uber was able to hire away Carnegie Mellon’s AI team sent shock waves through the industry and caused many to re-evaluate employment policies and skills acquisition. The numerous startups in this area demonstrate that offering equity is a good place to start.

Another issue in automotive AI acquisitions is the growing realization that autonomy will change the industry so profoundly that companies will need to completely re-tool their business models in order to survive. Ford has been making aggressive moves in this direction through other acquisitions.

In the company’s own words:

Ford invested in Velodyne, the Silicon Valley-based leader in LiDAR (Light Detection and Ranging) sensors, to move quickly towards mass production of a more affordable, automotive-grade LiDAR sensor… We’re acquiring SAIPS, a machine learning and artificial intelligence start-up based in Israel, which will play a key role in image and video processing, object detection, signal processing and deep learning capabilities to help autonomous vehicles learn and adapt to their surroundings.
We’re forming an exclusive licensing agreement with Nirenberg Neuroscience, founded by neuroscientist Dr. Sheila Nirenberg, who cracked the neural code that the eye uses to transmit visual information to the brain…
And we’re investing in Civil Maps, helping us develop 3D, high-resolution maps of our autonomous vehicles’ surroundings.
These four new partnerships build on a recent Ford investment in Pivotal, which is helping accelerate the software needed to support autonomous vehicles.
Plus, we’re working with a long list of universities around the world, including Stanford University, MIT, the University of Michigan and Aachen University in Germany.

That was from August 2016. In the same announcement the company promised to have fully autonomous vehicles in commercial operation for a ride-sharing service beginning in 2021.

As we have pointed out before, Ford’s efforts are by no means unique. Every company remotely related to transportation is frantically trying to move in a similar direction, drawing from a very limited pool of talent having both the skills and the practical experience to spin up autonomy in its latest garb at the speed companies need to stay in this game.

Let the Car Wars begin! Seriously: the competition will vastly accelerate development and adoption of advanced AI and analytics technologies across every facet of business.




Digital Transformation at the World Economic Forum: The Video

Digital Transformation and the convergence of digital technologies will become increasingly critical in the years to come, and it is important to understand what this means for people and industries–and how it is likely to impact you, both on the job and at home. It is an essential part of planning for business and government, and dominates business reorganization conversations.

Digital Transformation is also a very broad area, with enormous implications worldwide, and innumerable nuances both in understanding and in implementation. A video summation is a good way to focus on implications that might not have been considered, and the World Economic Forum (WEF) has recently delivered a set of interesting examples.

This is a topic that we have covered frequently in this blog. Recent posts include:

The Fungible Digital Universe: Convergence in a New Key

Digital Transformation: The Video

Guest Blog: Lagging In Digital Transformation? 30% Of Your Senior Executives Are Going To Leave Within A Year

The Art of Digital Transformation INFOGRAPHIC

Guest Blog: A Very Short History of Digitization

The WEF has been following the impact of Digital Transformation for some years now, and has interesting insights from its global developmental perspective. The main video and animation series are excellent guides to where this stands in early 2017.

All of the following videos were released under the standard YouTube license.

Digital Transformation Initiative (Digital Transformation of Industries)

Uploaded on Jan 15, 2017

The World Economic Forum (WEF) launched its Digital Transformation Initiative (DTI) in 2015. The initiative offers insights into the impact of digital technologies on business and wider society over the next decade. DTI research supports collaboration between the public and private sectors focused on ensuring that digitalization unlocks new levels of prosperity for both industry and society.

Following is the launch video from the WEF site.

The WEF “Challenges of Digital Transformation” Animation Series

A series of explanatory animations based upon WEF scripts was produced in support of the initiative by Mair Perkins and released in January 2017.

According to Perkins:

“Between November 2016 and January 2017 I worked on a series of animations for the World Economic Forum. The animations showcase some of the WEF’s research into the challenges of the digital transformation. It’s full of interesting facts about technology and predictions for the future.”

Research and script writing by the World Economic Forum;
Art Direction and storyboard design by Mair Perkins;
Illustration by Mair Perkins;
Animation by Mair Perkins, Sam Glyn Davies, Jack Pearson and Adam Horsepool;
Music and sound design by Ben Haynes.

Ep 1 of The Challenges of Digital Transformation – Animation for the World Economic Forum

Ep 2 – How Can Businesses Stay Ahead of Digital Disruption – Animation for the World Economic Forum

Ep 3: A Robot Will Take Your Job – Animation for the World Economic Forum

Ep 4 – Ensuring Digital Transformation Benefits Everyone – Animation for the World Economic Forum




Choose Your Concepts Well: You May Need to Make them Work

Technology is constantly accelerating, spinning off new terminologies and buzzwords that develop their own trajectories of meaning. Attempting to chart these trends has become almost a discipline in itself. But, with so much vested in this area, the problems are often misrepresented. Gartner’s “Hype Cycle,” for example, presents an occasionally dangerous picture of simple technological progression. It would be nice if things worked that way. In reality, however, buzzwords become marketing concepts, become generalized, and are then fed back into innovation.

Unfortunately, inflated buzzwords and marketing terms can also be used to define products that don’t really exist. Such “vaporware” (as it was called in the 1980s) has always been with us. Today, it can even create “unicorns”.

In the end, it is important to follow the trends. But it is more important to make certain there is a sound business case, and that the technology has a practical objective.

As the fury of terminologies and “emerging trends” continues in AI, we would do well to remember the lessons of the past.




The Fungible Digital Universe: Convergence in a New Key

Although digital transformation looms large in enterprise planning, relatively little time has been spent considering its implications. Digital transformation is often presented as a means of improving efficiency and creating processes that can be easily and flexibly integrated across the corporation. But one of the key issues in digital transformation is digital convergence.

In digital transformation, the vision is to convert all processes, designs, work products, services, and anything else that can be so converted into digital form. Digital form makes instantaneous transmission over vast distances possible, integrates processes, and opens them to replication or posting within the cloud. The advantages are extraordinary, and digital transformation has been underway for a very long time. But there is another factor in this conversation: all digital streams can easily be subjected to the same or similar processes, opening the way for extreme innovation and cross-pollination as diverse and heretofore entirely discrete fields are brought together.

As we move into an era of embedded artificial intelligence and big data analytics, digital convergence becomes an issue of extreme importance. The same processes that analyze and interrogate one digital stream might easily be applied to another; the algorithms used in deep learning, for example, easily move between image recognition and voice. Similarly, because they embody the same digital format, the same algorithms might be used to find patterns within programs; to analyze architectural drawings; to understand transactions; and to provide new forms of user access and data comprehension. Digital convergence becomes a kind of synesthesia, where the boundaries between objects, activities and functions become increasingly blurred.
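
To make that point concrete, here is a toy Python sketch, not any production system: one generic feature-extraction routine is applied unchanged to two very different digital streams, a grayscale image and a voice spectrogram, simply because both arrive as 2-D arrays of numbers. The filter, array shapes, and random data are placeholders.

```python
# Toy illustration of digital convergence: the same code path processes an
# "image" and a "voice spectrogram" because both are just 2-D arrays.
import numpy as np

def convolve2d(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Sliding-window filter (a 'convolution' in the deep learning sense,
    i.e., cross-correlation), agnostic about what the array represents."""
    kh, kw = kernel.shape
    sh, sw = signal.shape
    out = np.zeros((sh - kh + 1, sw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(signal[i:i + kh, j:j + kw] * kernel)
    return out

def extract_features(stream_2d: np.ndarray) -> np.ndarray:
    """One generic 'layer': filter, rectify, pool. Works on any 2-D stream."""
    edge_kernel = np.array([[1.0, 0.0, -1.0],
                            [2.0, 0.0, -2.0],
                            [1.0, 0.0, -1.0]])   # classic Sobel-style filter
    activated = np.maximum(convolve2d(stream_2d, edge_kernel), 0.0)  # ReLU
    return activated[::2, ::2]                   # crude 2x downsampling ("pooling")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grayscale_image = rng.random((64, 64))       # stand-in for a photo
    voice_spectrogram = rng.random((128, 40))    # stand-in for time x frequency bins
    # The identical code path handles both digital streams.
    print("image features:", extract_features(grayscale_image).shape)
    print("audio features:", extract_features(voice_spectrogram).shape)
```

Real deep learning frameworks exploit exactly this indifference to what the numbers represent, which is why the same architectures keep reappearing across vision, speech, and other converged digital domains.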

Digital convergence as an issue first arose with telecommunications, and then with multimedia. These areas were early examples of how digitization erased the boundaries between dissimilar components. As everything became transmissible in digital form across the network, markets for multimedia changed; intellectual property rights became problematic; and the capability to copy and broadcast items such as movies and audio recordings became nearly infinite. Legal systems are still struggling to fit the new possibilities within social and legal frameworks and understandings. Now, with everything becoming digital, issues such as intellectual property, privacy, and segregation of one component of the universe from another become moot.

The world itself, as we know it, is a fabrication of knowledge whose definitions, patterns, and interactions we digest and share. It is an imperfect system since it is overlaid upon an existing physical reality; the vertices become sharp and apparent, and logical argumentation becomes less precise because definitions encompass our understanding of an item rather than the item itself. In a converged digital universe, the world we interact with is, in fact, the primary level. Language starts to stumble in its description, because all of those new concepts become terms whose meaning changes as swiftly as the territory shifts.

Digital transformation is the means by which we are moving to this more flexible, more fungible universe; it is an essential process for business, which will reap immediate benefits from being able to act far more swiftly than any non-digital process or comprehension allows. But, in a greater sense, it will also change how we understand the universe and how future interactions will take place as human beings evolve toward assisted and hybrid man-machine thought processes.




Robot Invasion Begins This Week: The Video

London’s Science Museum is putting on perhaps the largest and most significant robot exhibition ever, featuring both the cutting-edge present and examples from the distant past. While this blog’s “The Video” offerings are generally not posted back-to-back, this time the event seems newsworthy, lasting, and genuinely in need of video: most news coverage fails to offer an adequate picture.

Robotic history, particularly focusing on humanoids, is important in demonstrating evolution of the robot concept, as well as the gradual development of increasingly sophisticated capabilities. Mankind has always wanted to create a Golem; but the capabilities imagined depend upon the mud with which it is wrought.

To remedy the coverage gap, we have assembled a set of videos showing different aspects of the event. As usual, they are under a standard YouTube license and are provided with identifying information clipped from their landing pages.

Robots: 500 Years in the Making (Science Museum)

Published on Feb 7, 2017

From the dawn of mechanized human forms to cutting-edge technology fresh from the lab, curator Ben Russell looks at Robots and reveals the astonishing 500-year quest to make machines human.

Seven Must-See Robots (Science Museum)

Published on Feb 7, 2017

Join curator Ben Russell for the seven robots you must see.

Robots Through the Ages go on Show in London (AFP news agency)

Published on Feb 7, 2017

From an 18th-century clockwork swan to a robot quoting Shakespeare, a new exhibition at the Science Museum in London charts the 500-year history of machines that fascinate and terrify in equal measure.

Backstage at Science Museum’s Robots Exhibition: ‘You can always unplug them’ (Guardian Science and Tech)

Published on Feb 7, 2017

The Guardian’s design critic, Oliver Wainwright, goes behind the scenes at the Science Museum’s robots exhibition with the curator, who introduces him to some of the most advanced humanoid robots in the world.

From a lifelike baby to robots without conscience, the curator explains where the technology is at, who may use it and how far it has to go.