Shhh! The robots are moving back home to software! (Not that they ever left.) Concepts forged in the IoT will become part of every other system. Digital components are easily connected and share a multitude of underlying principles, so concepts quickly move between unrelated disciplines, and all related technologies tend to converge.
While considering the IoT we have looked at autonomy, and what is required for devices such as automobiles and industrial robots to operate safely on their own, coordinated with other devices and working with humans (Autonomy INFOGRAPHIC, Challenges of Autonomy: The Video, Where the AI Rubber Hits the NHTSA Road: Letter to GM Describes State of Autonomy). We have reviewed the special challenges of autonomy, and how they are being solved to create efficient and effective systems. These capabilities are now destined to enrich other areas.
A key issue is how to apply this learning to business processes themselves. Autonomous robots and vehicles are extensions of digital processes. These processes are defined by software and engage numerous other systems toward the performance of a given end.
The development of an autonomous device requires layer upon layer of intelligent capability (see The Composite Nature of Embedded Cognition). Devices must be able to sense the activities surrounding them; they must have the ability to interact with their surroundings; and they must be able to provide a wide variety of actions that can be flexibly fitted to the accomplishment of a mission. All of these capabilities can also be applied to purely software systems.
A cognitive approach does not simply glue artificial intelligence to existing processes with the assumption that AI will provide the required result. Instead, we will see multiple AI systems used in conjunction with each other to perfect and deliver software solutions. Each routine will access a range of sensor and processing capabilities which will offer autonomy at the process level. Autonomous agents—software systems capable of specific actions in a larger whole—will then perform their functions as needed to achieve a desired end result. This is in line with a growing understanding of the composite nature of artificial intelligence. It also demands new forms of orchestration and new ways of providing AI capability.
An autonomous agent business process management solution will be able to sense when processes are required rather than responding to a fixed call within another software program. This means that processes will anticipate requirements and act early to create an efficient solution. They will act with a project manager's understanding of when specific data or tasks need to be completed. Autonomous agents will be able to interact with other programs and bring a catalog of analytic, machine learning, predictive, and sensor-driven capabilities. This range of functional autonomy will require brokerage, data sharing, and orchestration. A collaborative framework will be required to ensure that the components do not block each other and that the priority of specific tasks is respected.
With an autonomous agent-based process management solution, response in a complex environment will be much faster and more effective than with a fixed system. Similarly, the cognitive capabilities of such a composite system would likely create new possibilities in overall management and furtherance of larger goals. It would become possible to orchestrate all business processes and micromanage them on an atomic level through the ability to immediately activate an autonomous response from coordinated process components.
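To make the idea concrete, here is a minimal sketch of what such a sense-and-act agent loop might look like. Everything in it is hypothetical illustration (the `Agent` class, the `orchestrate` function, and the restock/fulfil tasks are invented for this example, not an existing product or API): each agent senses for itself when its task is required instead of waiting for a fixed call, and a toy orchestrator runs the agents that report they are needed in priority order, so higher-priority tasks are never blocked by lower-priority ones.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass(order=True)
class Agent:
    """A hypothetical autonomous agent: it senses its environment and
    decides for itself when its task is required, rather than waiting
    for a fixed call from another program."""
    priority: int  # lower number = more urgent
    name: str = field(compare=False)
    needed: Callable[[Dict], bool] = field(compare=False)   # sensing: is action required now?
    action: Callable[[Dict], None] = field(compare=False)   # the task the agent performs

def orchestrate(agents: List[Agent], environment: Dict) -> List[str]:
    """A toy orchestrator: poll every agent's sensor, then run the agents
    that report they are needed, in priority order, so that high-priority
    tasks are never blocked by low-priority ones."""
    ready = sorted(a for a in agents if a.needed(environment))
    fired = []
    for agent in ready:
        agent.action(environment)
        fired.append(agent.name)
    return fired

if __name__ == "__main__":
    # A contrived business environment: low stock and pending orders.
    env = {"stock": 3, "orders_pending": True}
    agents = [
        Agent(priority=2, name="restock",
              needed=lambda e: e["stock"] < 5,
              action=lambda e: e.update(stock=20)),
        Agent(priority=1, name="fulfil",
              needed=lambda e: e["orders_pending"],
              action=lambda e: e.update(orders_pending=False)),
    ]
    print(orchestrate(agents, env))  # ['fulfil', 'restock']
```

A real collaborative framework would of course add the brokerage, data sharing, and conflict handling described above; the point of the sketch is only the inversion of control, from "process called when scheduled" to "process activates itself when sensed conditions require it."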
The further development of digital business, AI, autonomy, and cloud computing all tend in the direction of componentized autonomous agents. However, if we look for a timeframe, this will occur well in the future. We are now at the stage of integrating tiny amounts of AI into small and disparate processes. Robotics is only at the edge of achieving true autonomy. And the orchestration and synchronization of vast numbers of independent yet coordinated autonomous systems is at the moment beyond our grasp.
However, it is important to understand that in a digital business environment, advances made in one field filter with little delay into all other sectors. As we develop cognition and autonomy for robotics and vehicles, these same processes become available to programs in recruitment, sales, finance, manufacturing, medicine, and everything else. We are moving into not only an artificial intelligence-driven world, but a composite artificial intelligence-driven world. The capability of developing layer upon layer of such cognition will create field effects that will ultimately change the nature of the combined process. Just as the human mind is entirely different from the neural mechanism of a single cell, the enormous multilayered possibilities of a galaxy of autonomous agents create a subtly new system whose capabilities cannot yet be adequately explored.
We are just at the beginning of this change, and the marketers are fierce in describing their products as the apex of this evolution. But we are nowhere near the ability to fully comprehend the requirements, capabilities, and consequences of such a cognitive software environment.
For business, taking a more complex view of AI in the enterprise is mandatory. The effects will require a shift in strategy. Software vendors will need to understand the subtle ways in which their programs will need to interact. This is a long-term movement, but preparing for it must begin now.
Here is a mixed collection of sales and academic presentations on the topic of machine learning for embedded vision. Where available, descriptions are provided from the web source.
Real-time image processing is absolutely critical for most autonomous systems, and understanding current capabilities is important in developing new applications for business and consumer technology. Deep learning is proving essential to making these systems work.
Visual Intelligence in Computers – Fei Fei Li (Stanford Vision Lab)
As director of one of the top Artificial Intelligence (AI) labs in the world, Dr. Fei-Fei Li is leading the next wave of AI that is rapidly being integrated into companies, governments and the lives of individual consumers. The way we work, drive, entertain and live our lives will never be the same. Dr. Li heads a team of the world’s top scientists and students who enable computers and robots to see and think, as well as conduct cognitive and neuroimaging experiments to understand how our brains function. She is a world-renowned expert on computer vision, machine learning, artificial intelligence, cognitive neuroscience and big data analytics. She directs both the Stanford Artificial Intelligence Lab (SAIL) and the Stanford Vision Lab, as well as teaching Computer Science at Stanford University.
CEVA’s Jeff VanWashenova interviewed at AutoSens 2016 (AutoSens)
CEVA is a leader in developing DSP technologies for image recognition. Here, Alex Lawrence-Berkeley interviews CEVA’s Jeff VanWashenova at AutoSens 2016, held at AutoWorld in Brussels, Belgium.
Computer Vision System Toolbox Overview (MATLAB)
Design and simulate computer vision and video processing systems using Computer Vision System Toolbox™. The Toolbox provides algorithms, functions, and apps for designing and simulating computer vision and video processing systems. You can perform feature detection, extraction, and matching; object detection and tracking; motion estimation; and video processing. For 3-D computer vision, the system toolbox supports camera calibration, stereo vision, 3-D reconstruction, and 3-D point cloud processing. With machine learning based frameworks, you can train object detection, object recognition, and image retrieval systems.
Movidius Demonstration of Its Machine Intelligence Technology (Embedded Vision Alliance)
Jack Dashwood, Marketing Communications Director at Movidius, demonstrates the company’s latest embedded vision technologies and products at the May 2016 Embedded Vision Summit. Specifically, Dashwood demonstrates the Fathom neural compute framework, running an image classifier in an ultra-low power embedded environment (under 1W), and enabling a whole new class of miniature robot overlords of which to be fearful.
The recent Dyn server DDoS attack using a botnet of cameras and other devices points to a growing security problem with the IoT. The attack exploited known vulnerabilities in common internet-connected devices such as cameras, watches, and lighting systems. Taking down a few popular websites was an important warning on network and device vulnerabilities. But this also draws attention to a much more significant issue: What happens when devices become intelligent and autonomous? Without adequate attention to security beginning in the earliest stages of device development, and without security standards, security issues could easily develop beyond our capacity to control them.
Currently, devices tend to have limited processing capability and lack adequate security measures. Consumers tend to leave them unprotected. These defects make it possible to create attack systems using thousands of devices. But these attacks are part of a larger problem:
- The growing availability of attack software. Shortly after the Mirai software online attack on Dyn, Mirai was released to the public domain, making it possible for less sophisticated hackers to create effective DDoS attacks using devices. Mirai itself incorporates concepts from predecessors such as Lizardstresser which used a botnet of home routers. These and similar programs are now evolving to create new threats.
- The increasing sophistication of DDoS attacks, and the ability to inflict damaging outcomes using relatively unsophisticated software. Large-scale DDoS attacks are growing with available software and bandwidth. According to Akamai’s most recent State of the Internet Security Report, the first quarter of 2016 marked an all-time high in the number of attacks peaking at more than 100 Gbps.
- The proliferation of devices for popular use, with no effective standards in place for security. Device makers need to respond rapidly to a growing market for consumer devices, and security is often an afterthought. Gartner forecasts that 6.4 billion connected things will be in use worldwide in 2016, reaching 11.4 billion by 2018.
- The failure of consumers, businesses, and device manufacturers to emphasize security in the release and installation of internet-connected devices. Devices using default passwords continue to offer the greatest vulnerability to attack.
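The default-password weakness in the last point is what Mirai exploited to recruit its botnet, and it is also the easiest to audit for. Below is a minimal defensive sketch, with an entirely hypothetical device inventory and a tiny sample credential list (Mirai itself shipped with roughly sixty such factory-default pairs): flag any device still running a known default login before it is exposed to the network.

```python
# A small sample of factory-default (username, password) pairs of the kind
# Mirai scanned for; a real audit list would be much longer.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
}

def audit_devices(devices):
    """Return the hostnames of devices whose login is a known factory
    default and should be changed before network exposure."""
    return [d["host"] for d in devices
            if (d["username"], d["password"]) in DEFAULT_CREDENTIALS]

if __name__ == "__main__":
    # Hypothetical inventory: one camera left on its default login,
    # one router with a changed password.
    inventory = [
        {"host": "camera-01", "username": "admin", "password": "admin"},
        {"host": "router-01", "username": "ops",   "password": "x7#kQ2"},
    ]
    print(audit_devices(inventory))  # ['camera-01']
```

Trivial as it is, a check like this against a current default-credential list, run whenever a device joins the network, would have removed the single largest pool of recruits available to the Mirai botnet.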
IoT DDoS attacks are only the beginning of a new and more complex cyber threat environment. Symantec’s 2016 Internet Security Threat Report describes multiple vulnerabilities in 50 commercially available devices, including a ‘smart’ door lock that could be opened remotely online without a password; vulnerabilities in medical devices such as insulin pumps, x-ray systems, CT-scanners, medical refrigerators, and implantable defibrillators; vulnerabilities in Internet-connected TVs and connection vulnerabilities in thousands of everyday devices, including routers, webcams, and Internet phones due to networking issues. This is before even considering industrial controls and devices that can be used in complex compound attacks.
At the moment, the chief concerns are with the potential for havoc with networks. The Spiceworks 2016 IoT Trends survey of 440 IT professionals found one of the top security concerns to be the fact that IoT devices create more entry points into the network (84%) and that IoT manufacturers aren’t implementing sufficient security measures (about 75%). Proliferation of network entry points creates potential for dangerous backdoors, and unsecured devices provide unlimited opportunity for intrusion. While a DDoS attack requires control of thousands of devices, a network break-in requires only one.
As devices get smarter, the dynamics are likely to change and issues will become more serious. Potential for more botnet attacks using strategies well beyond DDoS certainly exists. The range of vulnerabilities will inevitably involve Industrial Control Systems (ICS), as well as an increasing range of autonomous vehicles and robots.
At the present time, the security issues are manageable. Reviewing vulnerabilities in this area and ensuring that adequate measures are in place is the best way to start. It is important to consider security practices, device access, and network isolation of devices that might be easily compromised. Device management policies need to be in effect, and need to be constantly adjusted to growing threats. We are beginning to see use of big data and machine learning as part of the solution, but this needs to be incorporated in a comprehensive program that also focuses upon security awareness and vigilance.
The IoT remains loosely defined, and is growing to include an increasing variety of things, both for consumers and for industry. This can increase confusion regarding possible security issues. Devices include the personal items of which we are all aware; but they also include routers and modems, network equipment, autonomous systems, and industrial equipment and Industrial Control Systems. Each type of device brings its own security threat, and each unsecured device is a potential recruit to a botnet device brigade. It is important to consider carefully how devices will interact with your business and personal life.
Device makers need to consider security issues from the earliest stages of development, and better standards need to be put in place to ensure that more complex devices do not become a new problem for our increasingly complex networks of connected things.
The latest attacks point to the growing threat from connected devices. But DDoS attacks are the tip of the iceberg. There are direct physical threats to automobile and home systems developing as well. The IoT must be secured; this is of growing importance to business, government, and the individual consumer.
Google has just released its autonomous car project as a separate company called Waymo, operating under Alphabet. This move has been in progress for at least a year, and it represents the maturation of the company’s interests in this technology. From an AI standpoint, it means that the new company will have to turn a profit. It also means that it is likely to pursue automotive industry partnerships in its drive toward a completely autonomous vehicle rather than attempting to develop all facets of the technology itself.
About a year ago, John Krafcik, an ex-Hyundai North America executive, was put in charge of the project, and he continues as CEO of Waymo. During his tenure, there was a shakeup of engineers and executives, several of whom went on to create AI startups such as Otto (now acquired by Uber), Nuro, and an as-yet-unnamed autonomous vehicle company from former project head Chris Urmson, who left in August.
As a preface to the move, Krafcik blogged:
Waymo may be a new company, but we’re building on advanced self-driving technology developed over many years at Google. On October 20, 2015, we completed the world’s first fully self-driven car ride. Steve Mahan rode alone in one of our prototype vehicles, cruising through Austin’s suburbs. Steve is legally blind, so our sensors and software were his chauffeur. His route reflected the way millions of people could use a self-driving car in everyday life: riding from a park to a doctor’s office and through typical neighborhoods.
Google’s autonomous vehicles have the experience of millions of driving miles under the hood, and this has given the company an advantage. But now, as competition grows fiercer, it is time to make the technology into a product, and find a niche for it that can fit Google’s needs and capabilities. Building cars is difficult, as other tech companies, such as Apple, have discovered. And Tesla has early mover advantage in pursuing a high tech/high visibility approach to every aspect of personal transportation.
The Google project had been gradually extending its partnerships in pursuit of autonomy through a range of strategic hookups, such as a partnership with Fiat Chrysler that has recently morphed into a ride-sharing service using semi-autonomous Pacifica vans, to be available by the end of 2017.
It is clear that autonomy is becoming more important as a distinct branch of AI, fueled by billions of research dollars and development by an increasing number of automobile and transportation companies, software developers, and niche-market startups. Autonomy has a set of unique problems, as we explored in our recent Autonomy Infographic. As a digital technology, the concepts developed can easily be stretched to new fields, and combined with evolving capabilities in other fields. Transportation and logistics are obvious uses, but autonomous robots will bring whole new issues to the fore as we struggle to mold society to its autonomous robotic future.
Education and training are absolutely critical to the new economy. Yet we are having increasing difficulties in satisfying these needs. As huge populations demand higher education, institutions are stretched. While the continued movement of education online with MOOCs has created new possibilities, these training programs have suffered from low completion rates and have not proven adequate to meeting the growing crisis. Artificial Intelligence is now opening new avenues for improving the availability and effectiveness of education through a more interactive online experience and more effective processes for meeting educational goals.
The use of AI in education is a natural. Educational institutions have long led research across a variety of topics in technology. In a case of “physician heal thyself,” AI research can be turned to solving the problems of higher education. Jobs are becoming more knowledge- and skills-focused; vast numbers of new workers are joining the workforce globally; barriers are falling constantly as network access makes it possible to perform jobs as easily from Chicago as from Bangalore; and growing demands for success and prosperity, fueled by access to greater information about the rest of the world in every village, are creating a huge and previously untapped market for learning.
The jobs of tomorrow will require new training. Meanwhile, educational institutions are becoming increasingly expensive. Aspirations of the billions of people who need higher learning to achieve success will not be thwarted. As with every other sphere, the needs of the many will drive the market away from the few.
AI offers ways of improving the educational process; of improving interaction with students; of determining appropriate curricula; of tailoring education to personal and institutional needs; and of collaborating with teachers and schools in extending their efforts to a wider community. As we move into a new era, all of these issues need to be explored. AI will not replace teachers or schools; it will simply make it possible for everyone to achieve the learning results that they need to prosper in an increasingly complex and knowledge-driven world.
Here we will look at a set of videos on this topic, with their published explanations.
Artificial intelligence & the future of education systems (TEDx Talks)
Dr. Bernhard Schindlholzer is a technology manager working on Machine Learning and E-commerce. In this talk he gave at TEDx FHKufstein, Bernhard Schindlholzer contemplated the implications of ephemeralization – the ability of technological advancement to do “more and more with less and less until eventually you can do everything with nothing” – through artificial intelligence and machine learning. He explores the challenges that this technological approach poses to our economy and, furthermore, how they could be addressed by questioning established norms of our education systems.
Dr. Bernhard Schindlholzer is a technology manager working on Machine Learning and E-commerce.
This talk was given at a TEDx event using the TED conference format but independently organized by a local community.
Artificial Intelligence & Education: Lifelong Learning Dialogue (TEDx Talks)
In this talk, Prof. Iiyoshi goes head to head with an AI questioning the fate of education and lifelong learning!
Toru Iiyoshi was previously a senior scholar and Director of the Knowledge Media Laboratory at the Carnegie Foundation for the Advancement of Teaching (1999-2008), and Senior Strategist in the Office of Educational Innovation and Technology at Massachusetts Institute of Technology (2009-2011). He is the co-editor of the Carnegie Foundation book, “Opening Up Education: The Collective Advancement of Education through Open Technology, Open Content, and Open Knowledge” (MIT Press, 2008) and co-author of three books including “The Art of Multimedia: Design and Development of The Multimedia Human Body” and numerous academic and commercial articles. He received the Outstanding Practice Award in Instructional Development and the Robert M. Gagne Award for Research in Instructional Design from the Association for Educational Communications and Technology. Currently, he is the director and a professor of the Center for the Promotion of Excellence in Higher Education (CPEHE) at Kyoto University.
This talk was given at a TEDx event using the TED conference format but independently organized by a local community.
Neuroscience, AI and the Future of Education | Scott Bolland (TEDx Talks)
Currently around 63% of students are disengaged at school, meaning that they withdraw either physically or mentally before they have mastered the skills that are required to flourish in later life. In this talk Scott Bolland explores the science of learning, the mismatch between how we teach and how the brain naturally learns, and the important role that artificial intelligence could take in addressing the limitations of our current education system.
Dr Scott Bolland is the founder of New Dawn Technologies, a high-tech software company aiming to revolutionise education through the use of artificial intelligence. He has spent the last 20 years actively researching and teaching in the field of cognitive science – the scientific study of how the mind works – which spans disciplines such as psychology, philosophy, neuroscience, artificial intelligence and computer science. He holds a PhD in this field, as well as a university medal for outstanding academic scholarship.
This talk was given at a TEDx event using the TED conference format but independently organized by a local community.
Dr. James Lester on Artificial Intelligence in Education (Vimeo)
Dr. James Lester is a Distinguished Professor of Computer Science at North Carolina State University, where he is Director of the Center for Educational Informatics.
Ride sharing company Uber has been moving into the world of self-driving cars, autonomous operations, and AI for a while. It is now acquiring Geometric Intelligence, a company focusing upon Deep Learning and Machine Intelligence, that will provide the core for the newly announced Uber AI lab. The lab will be headed by Geometric Intelligence co-founders Gary Marcus and Zoubin Ghahramani, both well-known for their work in this area, and will have 15 founding members.
Uber has a vision of autonomous transportation that could have revolutionary implications for the movement of people and things. Self-driving cars are the focal point of today’s work on autonomy, given the tremendous financial opportunities at stake. They will affect every business model in the transportation industry, from personal travel to freight delivery, including fueling, routing, and optimization. There is also an almost desperate search for top-level talent as major transportation companies from Tesla to Toyota and data companies seek to crack the autonomy puzzle. Autonomy is difficult, but competition is accelerating its development.
Geometric’s four founders are Gary Marcus, Zoubin Ghahramani, Doug Bemis and Ken Stanley. According to the company web site, Gary Marcus is a scientist, bestselling author, and entrepreneur. His published works include The Algebraic Mind: Integrating Connectionism and Cognitive Science and The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought. He is also Professor of Psychology and Neural Science at NYU.
Zoubin Ghahramani is a world leader in the field of machine learning and Professor of Information Engineering at the University of Cambridge. He is known in particular for fundamental contributions to probabilistic modeling and Bayesian nonparametric approaches to machine learning systems, and to the development of approximate variational inference algorithms for scalable learning.
Doug Bemis has served as CTO for several other startups, including Syntracts LLC and Windward Mark Interactive. Doug also received a PhD from NYU in neurolinguistics, for work using magnetoencephalography to investigate the neural bases of semantic composition. Subsequently, he worked with Stanislas Dehaene at Neurospin in France.
Ken Stanley is an associate professor of computer science at the University of Central Florida. He is a leader in neuroevolution (combining neural networks with evolutionary techniques), where he helped invent prominent algorithms such as NEAT, CPPNs, HyperNEAT, and novelty search. His ideas have also reached a broader audience through the recent popular science book, Why Greatness Cannot Be Planned: The Myth of the Objective.
The Uber AI team seems well-balanced, and heavily weighted in Machine Learning through a variety of approaches. The stakes are high, and this bodes well for future advances in autonomy that will be of benefit across the AI and robotics areas. Although details of the Geometric approach are sketchy at this point, its model appears to focus upon combining a rules-based approach with machine intelligence inference. Such hybrid approaches could offer the best of all possible worlds.
According to the release from Uber’s Jeff Holden, “In spite of notable wins with machine learning in recent years, we are still very much in the early innings of machine intelligence. The formation of Uber AI Labs, to be directed by Geometric’s Founding CEO Gary Marcus, represents Uber’s commitment to advancing the state of the art, driven by our vision that moving people and things in the physical world can be radically faster, safer and accessible to all.”
While looking for solid but basic explanations of important cognitive algorithms, we recently came across a video series by Deep Learning TV. This excellent set of videos describes how these algorithms work and fits them into the big picture. The videos combine good presentation with well-considered material, and that stands out from the ordinary range of presentations on this subject.
According to the providers:
There’s nothing wrong with technical explanations, and to go far in this field you must understand them at some point. However, Deep Learning is a complex topic with a lot of information, so it can be difficult to know where to begin and what path to follow.
The goal of this series is to give you a road map with enough detail that you’ll understand the important concepts, but not so much detail that you’ll feel overwhelmed. The hope is to further explain the concepts that you already know and bring to light the concepts that you need to know. In the end, you’ll be able to decide whether or not to invest additional time on this topic.
Deep Learning TV is available on Facebook at https://www.facebook.com/De…
Boeing is acquiring Liquid Robotics, developer of the Wave Glider autonomous ocean surface robot, strengthening pursuit of the Digital Ocean. The Digital Ocean concept seeks to bring extensive monitoring and surveillance capabilities to largely unobserved ocean spaces through use of unmanned vehicles and advanced sensors. Possibilities include a range of environmental, commercial, and defense missions that could be supported by AI, big data, and autonomous operation in a largely untapped sphere of operations.
The acquisition follows on from a successful teaming agreement begun in 2014, resulting in extensive integration of Boeing’s advanced sensors with the Sensor Hosting Autonomous Remote Craft (SHARC), a version of the Wave Glider. The SHARC is designed to connect intelligence, surveillance and reconnaissance capabilities ranging from satellites to manned and unmanned aircraft to sub-surface craft. The Wave Glider is a wave and solar-powered autonomous ocean vehicle developed in 2007 and now with more than one million nautical miles traveled.
The 2014 agreement with Boeing focused initially on developing total integrated solutions for anti-submarine warfare, maritime domain awareness and other maritime defense applications. The agreement combined Boeing Defense, Space & Security’s experience developing and fielding multi-layered intelligence, surveillance and reconnaissance solutions with Liquid Robotics’ autonomous ocean technology.
At that time, Gary Gysin, President and CEO of Liquid Robotics said “We look forward to teaming with Boeing to expand domestic and international opportunities that combine Boeing’s expertise in aircraft systems and integrated defense solutions with Liquid Robotics’ expertise in persistent unmanned ocean vehicles. Together, Boeing and Liquid Robotics will provide customers an integrated, seafloor-to-space capability for long duration maritime defense.”
The Boeing acquisition is likely to further the defense capabilities of the Liquid Robotics platform. While Liquid Robotics has other missions as an ocean drone manufacturer with a goal of instrumenting the ocean, its military applications, such as the ability to detect submarines and other marine threats, have been highlighted recently and are likely to be driving the acquisition.
Just a few months ago Liquid Robotics and Boeing demonstrated their autonomous maritime warfare capabilities at the British Royal Navy’s Unmanned Warrior demonstration. “Together, Liquid Robotics and Boeing achieved a groundbreaking milestone in unmanned maritime warfare,” said Gysin. “We proved that SHARCs can augment the tedious and dangerous task of continuous maritime surveillance by our war fighters and provide critical real-time intelligence to commanders.”