Recruiters: Mind Your T’s and Q’s

Finding the right person for the growing range of new technologies can be difficult. There are skills shortages in a number of areas, and there is a rush to fill them with the most obvious talent. In this rush, however, it is important to remember that specific skills are of only transitory value, and requirements will shift as the next new thing comes along.

Digitization is rapidly bringing traditionally disparate processes and technologies together, creating a need for a new kind of worker. By focusing constantly on specific technical skills, we may be weakening the ability to understand the broader context that must fuel innovation. There needs to be input from the Arts, from global experience, and from the imagination. This demands a different type of learning.

In the late nineties, “T-shaped skills” were introduced, with the vertical bar representing depth of skill and the horizontal bar the ability to work across disciplines. This was useful in a structured and deterministic world. But we are now in a time of vast change, shifting skills requirements, and new pressures from robotics and AI. It’s time for a new, and complementary, concept.

Some years ago, I drew a cartoon about recruitment fads (https://bjdooleytoons.wordpress.com/?s=hiring+fad) in reaction to “T-shaped” assumptions. In it, I introduced “Q-shaped” skills as “roundness of knowledge with a squiggly bit underneath.”

Although this was partly in jest, it raises a significant point. In a converged world, “T-shaped” is no longer enough. Just as Steve Jobs drew on calligraphy in creating the personal computer, exposure to a much wider range of knowledge is increasingly essential for innovation. Imagination and ingenuity are also at a premium.

Certainly, “T-shaped” skills will continue to be important. But handling the growing possibilities of digital convergence creates a need for the nuanced “Q-shaped” skills that focus upon the big picture and its imaginative possibilities.

We have already seen how over-emphasis upon rote learning and tests can increase “T” and diminish “Q” skills. Companies lacking in the former will have trouble meeting the needs of the moment; companies lacking in the latter will fail to envision the opportunities of the future.

We need to increase our “Q” skills to create the Total Quality workforce of tomorrow.


AI and Risk Management: The Video

Risk Management is growing in importance for all companies interested in survival. It has particular relevance to the financial industries, where risk is at the heart of every investment strategy and is the basis of insurance. For other industries, calculating risk and prioritizing options is central to resilience, in both financial and operational areas. It is also tied up with regulatory risk in Governance, Risk and Compliance (GRC).

As a calculation, risk would seem an early candidate for AI and Big Data approaches. It has lagged, however, partly due to inherent conservatism and partly due to broader questions of prediction that might require an Artificial General Intelligence. There are many niche possibilities in this area, such as fraud detection and portfolio management, and interest seems destined to grow. Risks are growing, regulations are becoming more complex, and data is exploding. Companies need better risk management to improve resilience, and the financial industry would benefit in innumerable ways.
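
To make the idea concrete, here is a minimal sketch of one of those niche possibilities: scoring transactions for anomalies with an off-the-shelf isolation forest. The features, figures, and threshold are invented for the example and are not drawn from any vendor discussed below.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features: [amount, hour_of_day, merchant_risk_score]
transactions = np.array([
    [25.0, 14, 0.1],
    [40.0, 11, 0.2],
    [32.5, 16, 0.1],
    [28.0, 13, 0.2],
    [36.0, 15, 0.1],
    [9800.0, 3, 0.9],   # unusually large, late at night, risky merchant
])

# contamination sets the expected share of anomalies (an assumption here)
model = IsolationForest(n_estimators=100, contamination=0.2, random_state=42)
model.fit(transactions)

scores = model.decision_function(transactions)   # lower = more anomalous
flags = model.predict(transactions)              # -1 = flag for human review
for row, score, flag in zip(transactions, scores, flags):
    print(row, round(float(score), 3), "REVIEW" if flag == -1 else "ok")
```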

We have assembled a small set of videos on this subject, all with the standard YouTube license, and with descriptions from their landing pages.

Why Automate GRC Management Systems? (360factors)

Published on Oct 26, 2016

Governance, Risk and Compliance (GRC) software based on artificial intelligence technology automates the compliance functions.

2016-2017 AI trends in Financial Services (Jose Allan Tan)

Published on Jun 13, 2016

Baker & McKenzie partner, Astrid Raetze, believes that financial institutions are looking at the various applications of artificial intelligence from risk management to credit assessment. AI can process an enormous amount of data very quickly. This will enable financial institutions to improve efficiency and customer service.

Applying Robotic Process Automation (RPA) in Finance and Risk (Accenture)

Published on Oct 12, 2016

Accenture Finance & Risk Practice is helping our financial services clients better manage the onslaught of data and regulations with the use of robotic process automation (RPA). Robots can drive cost, time and accuracy efficiency and work 24/7 around key tasks such as anti-money laundering and order-to-cash. Ultimately, this frees up valuable employees to focus on higher value work that only humans can do.

Intelligent Automation for Risk Management, Fraud Prevention, and Security Compliance (Cloud Raxak)

Published on Dec 19, 2016

July 2016 Webinar: Cloud Raxak, Gartner Cool Vendor in IT Automation, hosted a panel discussion on how intelligent automation is enabling regulated industries like financial services to leverage the cloud, while effectively managing risk, fighting digital fraud and money laundering, and maintaining security compliance.

Former executives from Bank of America, JP Morgan Chase, Silicon Valley Bank, and the Canadian Imperial Bank of Commerce provided insights on:
— Banking and finance industry regulatory compliance and fraud management challenges.
— How analytics and machine learning can streamline fraud risk management.
— How automation can reduce the cost and complexity of security compliance.


Shhh! Your Things Might Be Listening!

Recent discussions highlight the fact that the domestic devices in the blossoming IoT will be listening and interacting with you as part of their normal function. This could have a number of privacy consequences, and could even spark unanticipated actions depending upon how these devices are installed and integrated.

As proposed in BJ Dooley’s IT Toons, there are many ways this could go wrong, the least of which might be a coffee overdose.

Recently, a prosecutor attempted to gain access to Amazon Alexa devices and cloud storage to obtain any recordings that might have been inadvertently made during an incident inside the suspect’s home. Regardless of whether this is permitted, it does point to the problem of recorded interactions and privacy. What audio and video is being recorded, and for how long? Who can access it, and when?

In a multi-device household, there will also be several systems contending for attention, any of which might be accidentally triggered by a mistaken keyword. Device coordination could become a problem. Devices will be making increasingly sophisticated decisions, and they could easily come into conflict with each other or with a user’s wishes.
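
As a thought experiment, a first pass at that coordination problem might look like the sketch below: each device reports its wake-word confidence, and a simple arbiter lets only the most confident one respond. The device names, threshold, and protocol are all hypothetical, not any vendor's scheme.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    device: str
    confidence: float  # wake-word confidence, 0.0 to 1.0

def arbitrate(detections, threshold=0.6):
    """Return the single device that should respond, or None."""
    candidates = [d for d in detections if d.confidence >= threshold]
    if not candidates:
        return None
    return max(candidates, key=lambda d: d.confidence).device

heard = [Detection("kitchen_speaker", 0.72),
         Detection("living_room_tv", 0.65),
         Detection("coffee_maker", 0.35)]
print(arbitrate(heard))  # only one device wakes: "kitchen_speaker"
```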

While it is still very early, this all points once again to the social nature of the IoT, and the need to integrate devices with the human context–as well as with the burgeoning thingaverse. If your digital assistant talks to your television, what will it say? And will the television talk back?

All of this will open up a market for personal device management, which will draw from and feed enterprise mobility management solutions that have been available for some time. The consumer versions will need to be less expensive and more user friendly, and they will feed right back to business systems–as is generally the case these days.

Devices will also need to be designed with the social context in mind. This will raise the bar a bit, and inevitably demand at least a primitive AI.

Meanwhile, the devices will keep talking. We only hope that they do not conspire against us.


Digital Transformation, an Evolving Vision: The Video

Digital Transformation is viewed by many companies as a strategic necessity these days. It can energize a business, build efficiency, reduce cost, and open the way for innovation. As with many concepts, however, it has become diffuse in meaning, and marketing strategies tend to focus upon a few simple areas in which digitization can help. What is missing is a greater story of the impact, implications, and actual consequences of undertaking such a transformative process.

Previous blogs on this subject are: Digital Transformation at the World Economic Forum: The Video; Digital Transformation: The Video; and Guest Blog: Lagging In Digital Transformation? 30% Of Your Senior Executives Are Going To Leave Within A Year.

We have now reached a point where most companies understand that this is coming and have programs in place for digitization of some aspects of their business. Now we need to look more closely at “Digitization 2.0,” where we begin to consider the opportunities and effects that have emerged from the innumerable successes and failures in this realm.

Here we have assembled a few videos on broader aspects of Digital Transformation. Most are under the standard YouTube license, with explanations drawn from their landing pages. The CEB/Gartner video is a share from the company’s site, with the description from that page.

The case for Digital Reinvention (McKinsey & Company)

Published on Feb 28, 2017

Digital technology, despite its seeming ubiquity, has only begun to penetrate industries. As it advances, the implications for revenues, profits, and opportunities will be dramatic. Here we explore the results of our survey into digitization across industries and detail the case for digital reinvention.

The New IT Operating Model for Digital (CEB Global – Gartner)

Two-thirds of business leaders believe that their companies must pick up the pace of digitization to remain competitive. The pace and volatility of digitization opportunities, as well as blurred technology responsibility across the enterprise, make it more difficult for IT leaders to help their organizations exploit emerging digital opportunities.

To meet these challenges head on, progressive IT leaders are changing IT’s operating model. We have identified nine features of the new operating model that will position IT teams for digital success.

The Digital Transformation Playbook (Columbia Business School)

Published on Jun 3, 2016

BRITE founder, author, and Columbia faculty member David Rogers talks at BRITE ’16 about how businesses need to transform by understanding that now: 1) customers are part of a network, 2) competition comes from platforms more than products, 3) data is a strategic asset, 4) innovation is driven by small experiments and scaling, 5) value is dynamic and adaptable. Get further insights and tools to make this transformation by reading his new book, “The Digital Transformation Playbook: Rethink Your Business for the Digital Age.”

The BRITE conference on brands, innovation and technology is hosted by the Center on Global Brand Leadership at Columbia Business School.

Digital Transformation of Society, Business (Gerd Leonhard)

Published on Apr 5, 2017

A talk by futurist Gerd Leonhard.

“This is the edited version of my keynote at DST Systems Advance conference in Phoenix Arizona. This talk is about digital transformation (challenges and opportunities) – the next 5 years. You can download the slidedeck use in this talk via this link (PDF) http://gerd.fm/2ncx0Gc”


Evolving the Cognitive Man-Machine

Human intelligence and artificial intelligence will increasingly interact as we extend the range of mechanical cognition to include sensory interpretation, role playing, and sentiment (Affective Computing, Intersecting Sentiment and AI: The Video; Shifting the Boundaries of Human Computer Interaction with AI: The Video). Such advancement will both create new conflicts and increase our understanding of the human mind by providing an objective platform for comparison.

But the cross-pollination of human and machine understanding does not stop there. As digital assistant roles progress, robots will need to understand and influence people. They will need to win negotiations, and devise strategies of engagement. AI will need to become increasingly cognizant of human thought patterns and social characteristics. This will make them a part of the greater “human conversation.”

As cognitive systems are assigned roles in which they must take the lead or suggest actions, AI will be playing a human game with human pieces. Social interaction is a construct: Knowing the rules, anyone can play. This will lead to competition and friction between automata and humans across a wide range of activities.

Even as AI continues to advance, human capabilities will be amplified through integrated advisers, prostheses, and avatars that will vastly increase our ability to process information, remember and assemble concepts, travel to remote locations, and communicate–all at the speed of light.

Robots and mankind are locked in a co-evolution that will ultimately lead to hybridization. We can add new robotic capabilities much faster than we can evolve them on our own. Simple toolmaking was the first step along this path; the final step will be where the intersection of humanity and machine becomes blurred, and finally, almost invisible.

Organisms adapt to fill a niche; when they can no longer adapt, their cousins take over. Evolution is about survival of the fittest, not of the strongest or the largest or even the smartest. Technology is an evolution of tools to fit a world defined by humans, one that will continue to be shaped by human thought. Hybridization is inevitable, because it will augment human capability. Technology can evolve and be adapted much more quickly than native biology, so further evolution of the species will be based on technology.

At present, we are barely on the doorstep of hybridization. We have clumsy “wearables”; limited but promising smart prostheses; the beginnings of AR concepts from Google Glass to HoloLens; social industrial robots that can work with people; digital assistants that can insert themselves into social settings; and an increasing range of smart devices that bridge the human context and the IoT.

In a somewhat distant future, we will likely view this as simply “making better tools.” The alarming possibilities we envision today will be the commonplace realities. As with the unknown Chinese inventor of printing blocks for text, we will ignore revolutionary change and create a narrative in which everything is consistently normal.

Twilight of the Gods? Perhaps. For the present, we are faced with the problem of understanding these changes and applying new technologies in a way that keeps society benefiting and fills the multitude of interstices. This will create great opportunity, but it will also demand innovation directed specifically toward human-machine interaction.

Ultimately, of course, this solves the problem of “the Singularity,” and a robotic Apocalypse. To quote Walt Kelly’s Pogo comic strip, “we have met the enemy and he is us.”


Cisco Adds MindMeld for Conversational Assistance

Cisco is buying AI digital assistant startup MindMeld as part of a string of May acquisitions. Cisco will be using the technology to improve its collaboration suite by adding conversational interfaces, beginning with Cisco Spark.

MindMeld is a relatively small company, but it is a recognized player in the conversational interface area. It provides a flexible platform called “Deep-Domain Conversational AI,” which can be used to add knowledge and expertise around any custom content domain. This allows companies to amplify the capabilities of natural language conversational interfaces. MindMeld is currently used by Spotify and Samsung, among others.

MindMeld brings 10 patents in its domain. It was founded in 2014 by Tim Tuttle, a former AI researcher from MIT and Bell Labs, and Moninder Jheeta.

Capabilities included in the MindMeld offering are broad vocabulary natural language understanding, question answering across any knowledge graph, dialog management and dialog state tracking, and large scale training data generation and management.

The value proposition of MindMeld’s offering has always been in enabling conversations within specific knowledge domains–a notable weakness of general-purpose assistants, particularly as AI moves to respond to business needs.
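
For a sense of what domain-scoped conversation means in practice, the toy sketch below scores an utterance against intents defined for a single collaboration domain. This is not MindMeld's actual API or approach; it is a generic keyword-overlap illustration with made-up intents.

```python
import re

# Made-up intents and keyword sets for one hypothetical collaboration domain
DOMAIN_INTENTS = {
    "check_meeting_schedule": {"meeting", "schedule", "calendar", "when"},
    "start_video_call": {"call", "video", "join", "start"},
    "share_document": {"share", "send", "document", "file"},
}

def classify(utterance: str) -> str:
    """Pick the intent whose keywords best overlap the utterance."""
    tokens = set(re.findall(r"[a-z]+", utterance.lower()))
    scores = {intent: len(tokens & keywords)
              for intent, keywords in DOMAIN_INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify("When is my next meeting?"))      # check_meeting_schedule
print(classify("Start a video call with Sam"))   # start_video_call
```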

According to Cisco’s head of M&A and venture investment, Rob Salvagno, writing on the Cisco blog,

I’m excited for the potential represented by the MindMeld team and their technology, coupled with Cisco’s market-leading collaboration portfolio, to enable us to create a user experience that is unlike anything that exists in the market today. Together, we will work to create the next generation collaboration experience. The MindMeld team will form the Cognitive Collaboration team and report into the IoT and Applications group under Jens Meggers, senior vice president and general manager.

Of course, acquisition of this company also brings significant AI skill sets, which will now be directed toward enhancing Cisco’s efforts. It is part of the continuing movement to find a place for digital assistants in business that will match their blossoming in the consumer realm (Digital Assistants Coming of Age with Alexa).


Outsourcing versus Robots: The Video

Business Process Outsourcing (BPO) is currently under threat not only from nativism, job loss, and immigration issues, but also from Artificial Intelligence. Robotic Process Automation (RPA) uses AI to handle a growing range of routine tasks, the very tasks that outsourcing companies have handled since the beginning of this century. Outsourcing firms are struggling to compete with the new technology while also attempting to move up the value chain. This means that they are becoming more invested in AI, both to provide RPA as a service and to help other companies develop their own automated solutions.
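
The flavor of work RPA takes over is easy to picture with a deliberately simplified sketch: a bot approves invoices that match their purchase orders and queues only the exceptions for a human. The file name, fields, and tolerance here are illustrative assumptions, not any vendor's implementation.

```python
import csv

def reconcile(invoice_file: str, po_amounts: dict, tolerance: float = 0.01):
    """Auto-approve invoices that match their PO; queue the rest for review."""
    approved, exceptions = [], []
    with open(invoice_file, newline="") as f:
        for row in csv.DictReader(f):
            po_id = row["po_id"]
            amount = float(row["amount"])
            expected = po_amounts.get(po_id)
            if expected is not None and abs(amount - expected) <= tolerance:
                approved.append(po_id)
            else:
                exceptions.append((po_id, amount, expected))
    return approved, exceptions

# Hypothetical usage: the bot runs unattended, around the clock, and only the
# exceptions list ever reaches a human operator.
# approved, exceptions = reconcile("invoices.csv", {"PO-1001": 250.00})
```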

The giant Indian IT outsourcers are particularly affected. This is already leading to tectonic changes affecting businesses around the globe.

Following is a selection of four videos on this subject featuring presentations and a discussion. One is an embed from Vimeo, and the rest are provided under standard YouTube license, with explanations from their landing pages.

Automation’s Impact on the Economy and the Outsourcing Marketplace (IRPA AI)

Published on Vimeo. A conversation with Raheem Hasan, IRPA (Institute for Robotic Process Automation); Joe Hogan, HCL; and Martin Ford, author and futurist.

Ford Hogan from IRPA AI on Vimeo.

Approach to Cognitive vs Traditional RPA (EdgeVerve)

Published on Feb 9, 2017

Beyond Robotic Process automation – How AI based systems are ushering the next wave of efficiencies and transformation. Critical process questions before embarking on a RPA journey.

EdgeVerve Systems is a wholly owned subsidiary of Infosys that develops software offered on-premises or as cloud-hosted business platforms.

Dr. Vishal Sikka, CEO, Infosys, introduces Infosys Nia (Infosys)

Published on Apr 26, 2017

Dr. Vishal Sikka, CEO, Infosys, introduces Infosys Nia, the next-generation artificial intelligence platform from Infosys.

How Robotic Process Automation and Artificial Intelligence Will Change Outsourcing (Mayer Brown)

Published on Jun 14, 2016


Google Adds a Parliament of Owls to its VR Team: Acquisition of Owlchemy

Google is moving a bit further into the VR space with its acquisition of “absurd” VR gaming company Owlchemy Labs. This could accelerate VR innovation, bringing extra attention to the user experience (UX), and a concentration of skills for Google that could provide an edge as VR and AR progress.

Owlchemy Labs began as a conventional games designer and then became one of the first companies to work with the Oculus Rift VR platform, releasing the VR game Job Simulator. Job Simulator now has over $3 million in sales, and the company has grown from a team of 4 in 2010 to a team of 23. According to the Owlchemy press release:

This means Owlchemy will continue building high quality VR content for platforms like the HTC Vive, Oculus Touch, and PlayStation VR. This means continuing to focus on hand interactions and high quality user experiences, like with Job Simulator. This means continuing our mission to build VR for everyone, and doing all of this as the same silly Owlchemy Labs you know and love.

Job Simulator was an overnight success with its particular focus on tracked head and hands to enhance the gaming experience. The VR game was launched with HTC Vive, Oculus + Touch, and PlayStation VR and won multiple awards for gameplay and interaction. Recently, the company has pioneered a new way to show VR footage with its “Spectator Mode” and it continues to improve its VR presentation.

Google’s blog announcement shows where the new acquisition fits into the Googleverse:

We care a lot about building and investing in compelling, high-quality, and interactive virtual reality experiences and have created many of our own—from YouTube, Street View, and Photos on Daydream to Google Earth VR and Tilt Brush. And, we work with partners and support developers and creators outside of Google to help bring their ideas to VR. …

Together, we’ll be working to create engaging, immersive games and developing new interaction models across many different platforms to continue bringing the best VR experiences to life. There is so much more to build and learn, so stay tuned!

Google has been actively pursuing VR for some time now, from before its almost accidental rollout of the Cardboard VR headset. Unlike many other operators in this space, it has the resources to explore a very broad array of VR issues. The company’s Daydream Labs, for example, is focusing upon social aspects of the VR user experience as well as content. Usability and controls for the VR experience are of particular importance in building new uses for these platforms, and this is an area in which Owlchemy excels.

As with AI, this acquisition also brings more VR experience and skills into Google. Competition includes Microsoft HoloLens and Facebook Oculus, among others. As has always been the case, the next generation of technologies is being actively explored in gaming before being put to practical use. The intersection of AI with VR (On the Intersection of AI and Augmented Reality) will become particularly important as a means of building and enhancing virtual and augmented realities, and interacting with their components.

The Duke of Wellington is famously credited with the remark, “The battle of Waterloo was won on the playing fields of Eton.” It is quite possible that the battle of Consumer and Enterprise VR will be won on the gaming headsets of FAMGA (Facebook, Apple, Microsoft, Google, Amazon).


Affective Computing, Intersecting Sentiment and AI: The Video

Affective Computing is the combination of emotional intelligence with artificial intelligence. The role of emotion in AI is coming increasingly into focus as we attempt to integrate robots, digital assistants, and automation into social contexts. Interaction with humanity always involves sentiment, as demonstrated by the growing understanding that self-driving vehicles need to understand and react to their emotional surroundings–such as responding to aggressive driving. Meanwhile, sentiment analysis is growing independently in marketing as companies vie to create emotional responses to products and react to social media comments. And in the uneven understanding of this technology, some still separate human from cyber systems on the basis of emotion.

AI must use and respond to emotional cues. This must be considered a component of the thought process. Companies are now beginning to focus upon this area, and combine it with the other elements of AI to build a more responsive and human-interactive technology.
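
As a toy illustration of emotion as an input to machine behavior, the sketch below has a hypothetical support assistant pick its response based on a crude sentiment score. The keyword lexicon stands in for a real sentiment model and is purely illustrative.

```python
import re

# Tiny illustrative lexicon; a real system would use a trained sentiment model.
NEGATIVE = {"angry", "frustrated", "terrible", "broken", "useless"}
POSITIVE = {"great", "thanks", "love", "perfect", "happy"}

def sentiment(text: str) -> int:
    """Toy score: count of positive words minus count of negative words."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def respond(text: str) -> str:
    score = sentiment(text)
    if score < 0:
        return "I'm sorry this has been frustrating. Let me get a specialist involved."
    if score > 0:
        return "Glad to hear it! Is there anything else I can help with?"
    return "Thanks for the details. Here is what I suggest next."

print(respond("This is useless and I'm frustrated."))
print(respond("Thanks, that worked great."))
```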

Following are a few videos explaining where Affective Computing is heading. These are under standard YouTube license, and the descriptive information is, as usual, provided from the video landing page with minor edits.

The Human Side of AI: Affective Computing (Intel Software)

Published on Feb 13, 2017

Affective Computing can make us aware of our emotional state, helping us take better decisions, can help us to help others, or help machines make decisions to enrich our lives. There is another exciting use for emotional data: Machine Learning. This is where data is collected so the machine refines its understanding, to ultimately better personalize your experiences.

Imagine if the environments where you live and interact could personalize your experience based on how you feel in that moment. Imagine being able to provide superior care-giving to elderly, children and people with limited abilities.

The introduction is provided below. Some additional videos:

Affective Computing Part 1: Interpreting Emotional States to Unleash New Experiences

Affective Computing Part 2: Global User Insights and Recommendations for Developers

Artificial Intelligence meets emotional intelligence – CEO Summit 2016 (Mindtree Ltd.)

Published on Nov 8, 2016

With Artificial Intelligence (AI) gaining credence, Mindtree’s Chairman KK talks about the evolving roles of people and the importance of fostering emotional quotient (EQ) to remain relevant. He elaborates upon how Mindtree is helping its retail, finance, travel and hospitality clients reimagine customer service, the area most touched by AI and automation.

How Virtual Humans Learn Emotion and Social Intelligence (Tested)

Published on Aug 26, 2016

At USC ICT’s Virtual Humans lab, we learn how researchers build tools and algorithms that teach AI the complexities of social and emotional cues. We run through a few AI demos that demonstrate nuanced social interaction, which will be important for future systems like autonomous cars.

Shot by Joey Fameli and edited by Tywen Kelly
Music by Jinglepunks

Stanford Seminar: Building Machines That Understand and Shape Human Emotion (stanfordonline)

Published on Feb 3, 2017

Jonathan Gratch, Research Professor of Computer Science and Psychology at the University of Southern California (USC)

Affective Computing is the field of research directed at creating technology that recognizes, interprets, simulates and stimulates human emotion. In this talk, I will broadly overview my fifteen years of effort in advancing this nascent field, and emphasize the rich interdisciplinary connections between computational and scientific approaches to emotion. I will touch on several broad questions: Can a machine understand human emotion? To what end? Can a machine “have” emotion, and how would this impact the humans that interact with them? I will address these questions in the context of several domains, including healthcare, economic decision-making and interpersonal-skills training. I will discuss the consequences of these findings for theories of intelligence (i.e., what function does emotion serve in human intelligence and could this benefit machines?) as well as their practical implications for human-computer, computer-mediated and human-robot interaction. Throughout, I will argue the need for an interdisciplinary partnership between the social and computational sciences around the topic of emotion.