Analytics of Where You Are: The Video

Location analytics continues to grow in importance as technologies such as machine learning and mobile sensing converge and new use cases are discovered. Advertising, e-commerce, and performance management applications have been proliferating, and they are now being joined by security, location-based restriction, and an increasing range of possibilities in robotics and the IoT.

Location analytics matters whether it operates at the macro level of geolocation or at the micro level of internal process control. As with robotics, there are some great visual stories that encapsulate the possibilities, but they tend to be buried in a cloud of PowerPoint decks. Understanding the importance of location is a decidedly visual thing.

We’ve assembled a set of four videos, created as advertising, that provide a good visual exploration of the subject. As with some of the finer robotics issues, you really have to see it to believe it! Compelling video helps you to visually identify concepts and place them in the context of everyday life.

Note that we are not endorsing these companies or their methods; we just like their videos! Feel free to comment or suggest analytics videos that make a compelling case or provide an appealing representation of the technology.

Proximus Analytics

QlikMaps

SweetIQ Analytics

ESRI Location Analytics


Guest Blog: A Very Short History of Digitization

Original blog link:

A Very Short History of Digitization

Milestones in the story of the adoption and proliferation of today’s most widely spoken language, the computer’s binary code.

by Gil Press

Ones and zeros are eating the world. The creating, keeping, communicating, and consuming of information are all being digitized, turned into the universal language of computers. All types of enterprises, from small businesses to large corporations to non-profits to government agencies, are going through a “digital transformation,” turning digitization into new processes, activities, and transactions.

From the 1950s on, with a distinct bounce in the 1990s due to the advent of the Web, digitization has changed the way we work, shop, bank, travel, educate, govern, manage our health, and enjoy life. The technologies of digitization enable the conversion of traditional forms of information storage such as paper and photographs into the binary code (ones and zeros) of computer storage. A subset of this is the process of converting analog signals into digital signals. But much larger than the translation of any type of media into bits and bytes is the digital transformation of economic transactions and human interactions.

The expression of data as ones and zeros facilitates its generation, replication, compression, and dissemination (see A Very Short History of Big Data); its analysis (see A Very Short History of Data Science); and its organization (see A Very Short History of the Internet and the Web). It also encourages the replacement or augmentation of the physical with the virtual or online presence (see A Very Short History of the Internet of Things).

Here are a few milestones in the story of the adoption and proliferation of today’s most widely spoken language.

1679 Gottfried Wilhelm Leibniz develops the modern binary number system and, in 1703, publishes Explication de l’Arithmétique Binaire (Explanation of Binary Arithmetic), linking it to ancient China.
1755 Samuel Johnson publishes A Dictionary of the English Language and includes an entry for “Binary arithmetick,” quoting Ephraim Chambers’ Cyclopaedia: “A method of computation proposed by Mr. Leibnitz, in which, in lieu of the ten figures in the common arithmetick, and the progression from ten to ten, he has only two figures, and uses the simple progression from two to two. This method appears to be the same with that used by the Chinese four thousand years ago.”
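
For readers who want to see the "simple progression from two to two" in action, here is a minimal Python sketch (not part of the original timeline; the sample values are arbitrary) that rewrites a few decimal numbers in binary:

```python
def to_binary(n: int) -> str:
    """Return the binary representation of a non-negative integer."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # the remainder is the next binary digit
        n //= 2                    # Leibniz's "progression from two to two"
    return "".join(reversed(digits))

for value in (2, 10, 1703):
    print(f"{value} (decimal) = {to_binary(value)} (binary)")
# 2 (decimal) = 10 (binary)
# 10 (decimal) = 1010 (binary)
# 1703 (decimal) = 11010100111 (binary)
```
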
1847 George Boole introduces Boolean algebra in The Mathematical Analysis of Logic, creating the field of mathematical logic, leading eventually to universal computation. In 1854, he writes in An Investigation of the Laws of Thought: “The respective interpretation of the symbols 0 and 1 in the system of logic are Nothing and Universe.”
1937 Claude Shannon submits his master’s thesis at MIT, establishing the theoretical underpinnings of digital circuits. Shannon showed how Boolean algebra could optimize the design of systems of electromechanical relays then used in telephone routing switches.
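
Shannon’s insight can be illustrated with a toy example of my own (not Shannon’s): Boolean algebra shows that a two-relay circuit computing (A AND B) OR (A AND NOT B) can be replaced by the single contact A, and an exhaustive check confirms the equivalence:

```python
from itertools import product

# Toy illustration, not Shannon's example: the two-relay circuit
# (A AND B) OR (A AND NOT B) reduces, by Boolean algebra, to the single contact A.
original   = lambda a, b: (a and b) or (a and not b)
simplified = lambda a, b: a

for a, b in product([False, True], repeat=2):
    assert original(a, b) == simplified(a, b)
print("The simplified circuit matches the original for every input.")
```
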
1938 Alec Reeves conceives of the use of pulse-code modulation (PCM) for voice communications, digitally representing sampled analog signals. It was not used commercially until the 1950s, when the invention of the transistor made it viable. PCM has become the standard form of digital audio in computers, compact discs, digital telephony and other digital audio applications.
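
To make “digitally representing sampled analog signals” concrete, here is a toy Python sketch of PCM. The 8 kHz sample rate, 8-bit depth, and test tone are illustrative assumptions roughly in line with later digital telephony, not Reeves’s own parameters:

```python
import math

# A rough sketch of pulse-code modulation (PCM): sample an analog signal at
# regular intervals, then quantize each sample to a fixed number of bits.
def pcm_encode(signal, sample_rate_hz=8000, duration_s=0.001, bits=8):
    """Sample and quantize a signal given as a function of time (in seconds)."""
    levels = 2 ** bits
    samples = []
    for i in range(int(sample_rate_hz * duration_s)):
        t = i / sample_rate_hz
        x = signal(t)                                          # analog value in [-1.0, 1.0]
        samples.append(round((x + 1.0) / 2.0 * (levels - 1)))  # quantize to 0 .. levels-1
    return samples

# Example: a 1 kHz test tone sampled at 8 kHz with 8 bits per sample.
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
print(pcm_encode(tone))
```
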
1940 John V. Atanasoff writes in Computing Machine for the Solution of Large Systems of Linear Algebraic Equations, a paper describing the electronic digital calculating machine he has built with Clifford Berry: “…for mechanized computation, the base two shows a great superiority… a card of a certain size used with the base-two recording system will carry more than three times as much data as if used with the conventional base-ten system.”
1943 The SIGSALY secure speech system performs the first digital voice transmission, used for high-level Allied communications during World War II.
June 25, 1945 John von Neumann’s A First Draft of a Report on the EDVAC is distributed to 24 people working on the development of the EDVAC, one of the earliest computers. It documents the key decisions made in the design of the EDVAC, among them the decision to use binary to represent numbers, thus reducing the number of components required compared to its predecessor, the ENIAC, which used the decimal system. The document became the technological basis for all modern computers.
1948 Claude Shannon publishes “A Mathematical Theory of Communication” in the July and October issues of the Bell System Technical Journal. Shannon: “If the base 2 is used [for measuring information] the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey. A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of information.”
Summer 1949 Claude Shannon lists in his notebook the storage capacity in bits of a number of items. He estimated that a punch card has just under 10³ bits and a single-spaced typed page 10⁴ bits. Four years before the discovery of the double-helix structure of DNA, Shannon estimated that the “genetic constitution of man” is about 10⁵ bits. The largest holder of bits he could think of was the Library of Congress which he estimated to hold 10¹⁴ bits of information (p. 232 in The Information by James Gleick).
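
Here is a back-of-envelope recalculation in the spirit of Shannon’s notebook entry; the character counts and alphabet sizes are assumptions for illustration, not Shannon’s own figures:

```python
import math

def bits(symbols: int, alphabet_size: int) -> float:
    """Capacity of `symbols` positions, each holding one of `alphabet_size`
    equally likely values: symbols * log2(alphabet_size)."""
    return symbols * math.log2(alphabet_size)

punch_card = bits(symbols=80 * 12, alphabet_size=2)  # 80 columns x 12 rows, hole or no hole
typed_page = bits(symbols=3000, alphabet_size=64)    # ~3,000 characters from a ~64-symbol set

print(f"punch card: about {punch_card:.0f} bits (order 10^3)")
print(f"typed page: about {typed_page:.0f} bits (order 10^4)")
```
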
1954 General Electric’s Major Appliance Division plant in Louisville, Kentucky, installs the UNIVAC I computer, the first business use—payroll processing and manufacturing control programs—of a computer in the United States. “The Univac I was also hooked up with speakers, and the operator had the machine playing classical music each evening,” recalls Burton Grad who designed and wrote (in machine language) a manufacturing control program for GE’s Dishwasher and Disposer Department.
1955 John Hancock Mutual Life Insurance Co., a pioneer in digitizing customer information, digitizes 600 megabytes of two million life-insurance policies.
September 4, 1956 IBM announces the 350 Disk Storage Unit, the first computer storage system based on magnetic disks and the first to provide random access to stored data. It came with fifty 24-inch disks and a total capacity of 5 megabytes, weighed 1 ton, and could be leased for $3,200 per month; its first customer was United Airlines’ reservations system.
September 14, 1956 IBM announces the 305 RAMAC and the 650 RAMAC (Random Access Memory Accounting) which incorporated the 350 Disk Storage Unit. It promised, as the IBM press release said, “that business transactions will be completely processed right after they occur. There will be no delays while data is grouped for batch processing… Random access memory equipment will not only revolutionize punched card accounting but also magnetic tape accounting.”
When it was exhibited at the 1958 Brussels World’s Fair, visitors could query “Professor RAMAC” using a keyboard and get answers in any of ten languages. The RAMAC became obsolete within a few years of its introduction as the vacuum tubes powering it were replaced by transistors. But disk drives, invented in a search for faster access to information, are still used as the containers for almost all digital information today.
1960 American Airlines’ Sabre flight-reservation system digitizes a process developed in the 1920s, processing 84,000 telephone calls per day and storing 807 megabytes of reservations, flight schedules and seat inventory.
1962 The term database is mentioned in print for the first time, according to the Oxford English Dictionary, quoting a Systems Development Corporation technical memo: “A ‘data base’ is a collection of entries containing item information that can vary in its storage media and in the characteristics of its entries and items.”
1963 Charles Bachman, at GE’s computer division, develops the Integrated Data Store (IDS), one of the first database management systems using what came to be known as the navigational database model in the Manufacturing Information and Control System (MIACS) product.
April 19, 1965 Gordon Moore publishes “Cramming more components onto integrated circuits” in Electronics magazine, the first formulation of what came to be known as “Moore’s Law.” The observation of the constant doubling of the number of transistors that can be “crammed” into an integrated circuit became the rallying cry for manufacturing process innovations that have reduced the price and increased the power of electronic components and driven a constant expansion of the scope and reach of digitization.
1968 U.S. libraries begin using Machine Readable Cataloging (MARC) records.
1969 Willard Boyle and George E. Smith at AT&T Bell Labs invent the charge-coupled device (CCD), transforming light into electric signals. The CCD has played a major role in the development of digital imaging in general and the development of digital cameras and medical imaging in particular. Boyle and Smith were awarded the 2009 Nobel Prize in Physics.
June 1970 Edgar F. (“Ted”) Codd publishes “A relational model of data for large shared data banks,” in the Communications of the ACM, presenting the theoretical basis for relational databases, which became the dominant type of databases from the 1980s to around 2000.
1971 Arthur Miller writes in The Assault on Privacy that “Too many information handlers seem to measure a man by the number of bits of storage capacity his dossier will occupy.”
July 4, 1971 Michael Hart launches Project Gutenberg with the goal of making copyright-free works electronically available by entering the text of the U.S. Declaration of Independence into the mainframe he was using at the University of Illinois.
1972 Pulsar, the world’s first all-electronic digital watch and the first to use a digital display, is launched.
1973 Charles Bachman is awarded the Turing Award. From The Programmer as Navigator, Bachman’s Turing Award lecture: “Copernicus presented us with a new point of view and laid the foundation for modern celestial mechanics… A new basis for understanding is available in the area of information systems. It is achieved by a shift from a computer-centered to the database-centered point of view. This new understanding will lead to new solutions to our database problems and speed our conquest of the n-dimensional data structures which best model the complexities of the real world… The availability of direct access storage devices laid the foundation for the Copernican-like change in viewpoint… From this point, I want to begin the programmer’s training as a full-fledged navigator in an n-dimensional data space.”
December 1975 The first digital camera, invented by Steven Sasson at Eastman Kodak, takes 23 seconds to capture its first image. The camera weighed 8 pounds, recorded black-and-white images to a compact cassette tape, and had a resolution of 0.01 megapixels.
1977 Citibank installs its first ATM. By the end of the year, all the bank’s New York branches had at least two machines operating 24 hours a day, seven days a week, ensuring 24-hour access in case one failed. When a huge blizzard hit New York in January 1978, banks were closed for days and ATM use increased by 20%. Within days, Citibank had launched its “The Citi Never Sleeps” ad campaign. A decade later, the bank’s ATM network stored 450 megabytes of electronic transactions.
1979 Federal Express launches COSMOS (Customers, Operations, and Services Master Online System), digitizing the management of people, packages, vehicles, and weather scenarios in real time, with a computer storage capacity of 80 gigabytes.
April 1980 I.A. Tjomsland gives a talk titled “Where Do We Go From Here?” at the Fourth IEEE Symposium on Mass Storage Systems, in which he says “Those associated with storage devices long ago realized that Parkinson’s First Law may be paraphrased to describe our industry—‘Data expands to fill the space available.’”
1981 Edgar F. (“Ted”) Codd is awarded the Turing Award for his fundamental and continuing contributions to the theory and practice of database management systems—“whenever anyone uses an ATM machine, or purchases an airline ticket, or uses a credit card, he or she is effectively relying on Codd’s invention.”
In his Turing Award Lecture, Codd notes that “As it stands today, relational database is best suited to data with a rather regular or homogeneous structure. Can we retain the advantages of the relational approach while handling heterogeneous data also? Such data may include images, text, and miscellaneous facts. An affirmative answer is expected, and some research is in progress on this subject, but more is needed.” The challenge of heterogeneous data or “big data” will be addressed almost three decades later but not with a relational database approach.
July 9, 1982 The movie Tron, in which the Jeff Bridges character is digitized by an experimental laser into a mainframe where programs are living entities appearing in the likeness of the humans who created them, is released.
August 17, 1982 The first commercial compact disc (CD) is produced, a 1979 recording of Claudio Arrau performing Chopin waltzes.
1984 8.2% of all U.S. households own a personal computer, the U.S. Census Bureau finds in its first survey of computer and Internet use in the United States. In 2013, 83.8% of U.S. households reported computer ownership, with 74.4% reporting Internet use.
February 1985 The Whole Earth ‘Lectronic Link (WELL), one of the first “virtual communities,” is established.
1988 More compact discs (CDs) are sold than vinyl records.
June 1990 General Instruments, an American manufacturer of cable television converters and satellite communications equipment, upsets the race to build the television of the future by announcing it has succeeded in squeezing a digital HDTV signal into a conventional broadcast channel. Up until then all the companies preparing proposals for an HDTV standard were working on analog systems.
1991 The first 2G cellular network is launched in Finland. 2G networks used digital signals rather than analog transmission between mobile phones and cellular towers, increasing system capacity and introducing data services such as text messaging.
July 1992 Tim Berners-Lee posts the first photo uploaded to the Web, showing the all-female parody pop group Les Horribles Cernettes (LHC), consisting of four of his colleagues at CERN.
May 1993 O’Reilly Digital Media group launches the Global Network Navigator (GNN), the first commercial web publication and the first website to offer clickable advertisements.
1994 Teradata has the largest commercial database at 10 terabytes.
Summer 1994 A large pepperoni, mushroom and extra cheese pizza from Pizza Hut is ordered online, possibly the first transaction on the Web.
October 1994 HotWired is the first web site to sell banner ads in large quantities to a wide range of major corporate advertisers.
1995 After a five-year pilot project, the National Digital Library program begins digitizing selected collections of Library of Congress archival materials.
June 1995 The Norwegian Broadcasting Corporation (NRK) launches the world’s first Digital Audio Broadcasting (DAB) channel.
November 22, 1995 Toy Story opens in U.S. theaters, the first feature film to be made entirely with computer-generated imagery (CGI).
1996 Brewster Kahle establishes the Internet Archive to preserve and provide access to nearly every site on the Web; it later evolves into a comprehensive digital library. Other Web archiving projects launched in 1996 include the National Library of Australia’s PANDORA Project and the Royal Library of Sweden’s Kulturarw Heritage Project.
1996 Digital storage becomes more cost-effective for storing data than paper.
1996 E-gold is launched, becoming the first successful digital currency system to gain a widespread user base and merchant adoption.
1998 Jim Gray is awarded the Turing Award for seminal contributions to database and transaction processing research and technical leadership in system implementation.
1998 Production of analog cameras peaks at almost 40 million as they are replaced by digital cameras.
1998 Digital Television transmission commences in the U.K. and the U.S., launching the process of converting and replacing analog television broadcasting with digital television.
March 25, 1998 Microsoft patents ones and zeroes, says The Onion.
October 23, 1998 The Last Broadcast is the first feature-length movie shot, edited and distributed digitally via satellite download to 5 theaters across the United States.
December 1998 Nicholas Negroponte writes in Wired: “Like air and drinking water, being digital will be noticed only by its absence, not its presence.”
1999 Wal-Mart has the largest commercial database at 180 terabytes.
2000 The number of photos preserved on film annually peaks at 85 billion, rapidly replaced in subsequent years by digital photos.
September 2000 MP3 player manufacturer i2Go launches a digital audio news and entertainment service called MyAudio2Go.com that enabled users to download news, sports, entertainment, weather, and music in audio format. In February 2004, Ben Hammersley writes in the Guardian: “Online radio is booming thanks to iPods, cheap audio software and weblogs… But what to call it? Audioblogging? Podcasting? GuerillaMedia?”
January 1, 2001 The Electronic Product Code (EPC) is defined at MIT as a replacement for the Universal Product Code (UPC, or ‘bar code’).
2002 Digital information storage surpasses non-digital for the first time.
2003 More digital cameras than traditional film cameras are sold in the U.S. for the first time.
2003 Electronic payments in the U.S. surpass the use of cash and checks for the first time.
June 2003 The DVD format (launched in the late 1990s) becomes more popular than VHS in the U.S.
October 2003 The Check 21 Act makes check images a legal transfer medium in the U.S., allowing financial institutions to create a digital version of the original check. Over 50 billion paper checks were processed in the U.S. in 2003.
2004 Google announces it is working with the libraries of Harvard, Stanford, the University of Michigan, and the University of Oxford, as well as the New York Public Library, to digitally scan books from their collections. The Internet Archive starts a similar effort, the Million Book Project.
2007 94% of the world’s information storage capacity is digital, a complete reversal from 1986, when 99.2% of all storage capacity was analog.
2008 More music is sold by iTunes than by Wal-Mart.
October 2008 Satoshi Nakamoto publishes “Bitcoin: A Peer-to-Peer Electronic Cash System,” describing the first decentralized digital currency. In October 2015, The Economist stated that blockchain, the technology behind bitcoin, “could transform how the economy works.”
2010 Online advertising ($26 billion) in the United States surpasses newspaper advertising ($22.8 billion) for the first time.
2010 Production of digital cameras peaks at just over 120 million as they are replaced by smartphones.
January 2011 Jean-Baptiste Michel, et al., publish “Quantitative Analysis of Culture Using Millions of Digitized Books” in Science. On the basis of a corpus of digitized texts containing about 4% of all books ever printed, they investigate linguistic and cultural phenomena reflected in the English language between 1800 and 2000, calling their field of study “culturomics.”
2011 Amazon.com sells more Kindle books than print books.
2012 U.S. consumers pay more for online movies than for DVDs and Blu-ray discs, for the first time.
2012 180 petabytes (180 million gigabytes) are added annually to Facebook’s data warehouse which has grown 2500x in the past four years.
December 2012 Annual e-commerce sales top $1 trillion worldwide for the first time.
2014 Streaming revenue from services like Spotify and Pandora overtake CD sales for the first time.
February 2014 45% of Internet users ages 18-29 in serious relationships say the Internet has had an impact on their relationship.
Summer 2014 The number of Internet users worldwide reaches 3 billion.
2015 Michael Stonebraker is awarded the Turing Award for fundamental contributions to the concepts and practices underlying modern database systems.
2015 Every minute, Skype users make 110,040 calls, Twitter users send 347,222 tweets, YouTube users upload 300 hours of new videos, Pinterest users pin 9,722 images, Netflix subscribers stream 77,160 hours of video, Snapchat users share 284,722 snaps, and Facebook users like 4,166,667 posts.
2015 Digital America: A tale of the haves and have-mores, a McKinsey Global Institute (MGI) report, is the first major attempt to measure the ongoing digitization of the U.S. economy at a sector level. It introduces the MGI Industry Digitization Index, which combines dozens of indicators to provide a comprehensive picture of where and how companies are building digital assets, expanding digital usage, and creating a more digital workforce. Because the less digitized sectors are some of the largest in terms of GDP contribution and employment, MGI concludes that the U.S. economy as a whole is reaching only 18% of its digital potential, and it estimates that digitization could add up to $2.2 trillion to annual GDP by 2025.
Gil Press

Gil Press is Managing Partner at gPress, a marketing, publishing, research and education consultancy. Previously, he held senior marketing and research management positions at NORC, DEC and EMC. He was Senior Director, Thought Leadership Marketing at EMC, where he launched the Big Data conversation with the “How Much Information?” study (2000, with UC Berkeley) and the Digital Universe study (2007, with IDC). He blogs at http://whatsthebigdata.com/ and http://infostory.com/. Twitter: @GilPress


Software Modernization and your Smart Digital Future

Code modernization is essential to the transition to digital business. Aging code carries numerous liabilities, both in integration and in security. Fundamental to the problem is the fact that languages, programming approaches, and the surrounding IT are all evolving even as the business environment changes. As a result, programs accumulate technical debt, leading to growing inefficiencies and maintenance costs over time. Continued accumulation of technical debt complicates any conversion effort. Yet, as we move into a future of newly designed smart processes and omnipresent digital interactions, it is certain that radical change and more invasive modernization will be necessary.

It is clear that a general approach is needed, one that leads both to effective conversion and to meeting the unknown requirements of the future. Companies that wish to change need to centralize the modernization effort and identify the technologies that are specifically applicable to the firm. In this context, it is important to consider the ROI of change efforts. Modernization must provide both for the current situation and for the unknown environment of the future.

One of the most persistent problems in modernization is the migration of COBOL, which exists in millions of lines across critical applications in high-accuracy, high-volume areas such as finance and healthcare. These systems are particularly vulnerable as industry evolves to meet complex new requirements from clients and partners. While these systems have often operated for many years as “black boxes” around which new code is wrapped, that approach eventually breaks down. It entails a growing maintenance burden and serious security issues when pathways are built into the code to enable API access to obscure routines. Familiarity with the code base disappears as employees retire, and there is a growing shortage of talent and experience in working with older programs.

To make essential changes and build for a digital and interconnected future, there is a range of possible remedies. These include:

  • Continuing the black box. Since the software has operated for many years and performed its functions, you can hope that nothing bad will happen and simply continue the maintenance cycle. You will be faced with increasingly expensive maintenance and potentially serious security flaws as the years drag on. There is also an opportunity cost: new technologies become increasingly unavailable because critical code cannot be flexibly accessed.
  • Off-the-shelf replacement. It is sometimes possible to replace critical programs originally built in-house with commercial software or SaaS solutions. This often requires considerable customization, and the result will be less flexible. Processes may need to change, licensing costs will be incurred, and there may be integration issues and unforeseen consequences.
  • Greenfield replacement. Starting from scratch demands a project at least the size of the original. The lessons learned in the original coding will be lost, and there are likely to be huge overruns in time and cost, both in adding new features and in making certain that critical functions continue to operate as expected.
  • Manual conversion. Manual conversion or refactoring of the original system can be a massive project, potentially larger and more expensive than the original. It is possible to move to a modernized COBOL dialect or to a later-generation language, but without specific knowledge of the original code and access to the programmer’s logic, much of the original functionality can be compromised. Such projects have a poor record of finishing on time and delivering adequate results. The same is true of many “lift and shift” efforts that convert the application and move it to the cloud.
  • Incremental conversion. Large programs can be split up, with only critical “must change” code subject to conversion. This provides short-term benefits, but it potentially adds technical debt at the interfaces, and the original code that persists remains a potential source of future problems.
  • Automated model-based conversion. For some situations, an automated conversion based on modeling can provide a cost-effective outcome, depending upon the technology in use. Here, the original code is converted to a semantic model, which is then optimized and used to generate code in another language (a simplified sketch appears below).
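
As a rough illustration of what converting code to a semantic model and regenerating it in another language can look like, here is a deliberately tiny Python sketch. The statement format, node type, and target language are invented for illustration; production modernization tools handle full grammars, data layouts, control flow, and optimization.

```python
from dataclasses import dataclass

# A toy model-based conversion pipeline: parse a legacy statement into a
# language-neutral semantic model, then generate equivalent code in a newer
# language from that model. Purely illustrative; not a real modernization tool.

@dataclass
class Assignment:          # one node type in the semantic model
    target: str
    left: str
    op: str
    right: str

def parse_legacy(line: str) -> Assignment:
    # Handles only one COBOL-style form: "ADD <x> TO <y> GIVING <z>"
    words = line.split()
    return Assignment(target=words[-1], left=words[3], op="+", right=words[1])

def generate_python(node: Assignment) -> str:
    # Emit an equivalent statement in the target language
    return f"{node.target.lower()} = {node.left.lower()} {node.op} {node.right.lower()}"

model = parse_legacy("ADD TAX TO PRICE GIVING TOTAL")
print(generate_python(model))  # -> total = price + tax
```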

Each situation is likely to have different needs and to demand a different solution. This is part of the reason that conversion has become such an intractable problem.

There are numerous companies involved in modernizing code and bringing older programs into the digital environment, and their approaches differ widely depending on whether you need a change of platform, a coding conversion, an update, a refactoring, or a rewrite of ancient routines. The most important step is to determine the overall modernization requirements: what is absolutely critical, and what can be reserved for later. Modernization can be very expensive, but it also needs to be correct.


Apple Chases Indian AI Talent, Buys Tuplejump

Apple has furthered its interest in machine learning with a new acquisition: Indian big data firm Tuplejump. The company specializes in managing big data, and its most visible work is the open source FiloDB project, which applies machine learning and analytics to large and complex streamed data sets.

This is Apple’s third recent AI acquisition, after Turi Inc. and Emotient. Apple is not commenting, and there is not a lot of information available. It appears that the initial deal may have been struck as early as June, though it was only recently reported by TechCrunch (Apple acquires another machine learning company: Tuplejump). Apple’s most visible use of AI is in its Siri virtual assistant product line, but the company is investing in a wide range of IoT projects that will use machine learning, including watches and automobiles.

A key issue with small startup purchases in these areas is the “acqui-hire” value, where the real object of acquisition is the personnel. This is particularly critical in technology areas where there is an existing or perceived skills shortage. Most of Tuplejump’s 16 employees were already located on the US West Coast, and the principals had already been working with Apple, so the acquisition points more to how the battle for AI and Analytics talent is being waged than to the specifics of the Tuplejump solution.


Etsy Bets on AI, Acquires Blackbird Technologies

The popular online handmade-items store Etsy has a problem. Its catalog contains over 40 million items, and it has 26.1 million active buyers. As a marketplace for highly customized goods, its product list can only grow. But searching this vast catalog and matching buyers to sellers pose some extraordinary challenges. Etsy’s answer? A machine learning acquisition. The company has just acquired AI search startup Blackbird Technologies to perform this critical function (Etsy Acquires Blackbird Technologies to Enhance Search Capabilities).

Blackbird is a deep-learning company that takes a GPU-based approach to search, built on the Theano deep learning framework. According to Etsy’s release, Blackbird adds:

  • Machine learning that analyzes user behavior, unstructured data, and other variables to suggest relevant and personalized search recommendations;
  • Natural language processing to understand complex search queries;
  • Deep Learning-based image recognition techniques to index and catalog photos, which help provide more relevant search results; and
  • Spelling correction and predictive type-ahead to make searching faster and more intuitive.
Blackbird’s talented employees, including its co-founders, who serve as the company’s CEO and CTO, will join Etsy. The team possesses deep expertise in artificial intelligence, search, and distributed systems, and has direct experience working in these areas for some of the largest technology companies in the world.
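
As a simple illustration of the kind of feature listed above, here is a toy Python sketch of predictive type-ahead built on a prefix index. It only sketches the general idea, with made-up example queries; it does not represent Blackbird’s or Etsy’s actual implementation.

```python
from collections import defaultdict

# A toy predictive type-ahead: index past queries by every prefix, then look
# up suggestions as the shopper types. Purely illustrative.

def build_prefix_index(queries):
    """Map each prefix to the past queries that start with it."""
    index = defaultdict(list)
    for q in queries:
        for i in range(1, len(q) + 1):
            index[q[:i]].append(q)
    return index

past_queries = ["ceramic mug", "ceramic planter", "cedar box", "wool scarf"]
index = build_prefix_index(past_queries)

print(index["ce"])   # ['ceramic mug', 'ceramic planter', 'cedar box']
print(index["cer"])  # ['ceramic mug', 'ceramic planter']
```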

The addition of experienced analytics employees who are versed in deep learning is a notable plus for Etsy’s future. This is likely to become increasingly important as the company grows in the area of 3D printing. 3D-printed goods are now making a strong appearance on the platform and are capable of expanding the “handmade” catalog geometrically.

In any case, AI-based search is a good addition, and it helps Etsy navigate an increasingly competitive environment.


Human-Centered Robotics–The Video

Human-Centered Robotics is a broadly defined area that examines interactions between AI-driven robots and humans. Robots and humans must coexist in society, in the home, and in the workplace. An issue of particular urgency is how process robots, particularly in manufacturing, will operate efficiently without harming people and without requiring complex programming skills to function in a changing environment.

Robotics companies and academic institutions are working to ease the way for an interactive robotics future. Here are a few videos of the current state of the human-robot interaction art:

Automatica 2016

University of Texas at Austin

DIAG Robotics Lab, Sapienza University of Rome

Nao Robot Task Learning


Tremors in the Workplace: the Dawn of a New HR

Human Resources technology has come a long way in the past several years. Without a lot of fanfare, it has been subject to many of the trends that are active throughout today’s enterprise IT. There is change based on the use of analytics, on mobility in new roles, on social media usage, and on cloud solutions that democratize access and act as a centralization point for mobility.

HR activities and systems are shifting from maintaining records to actively managing skills deployment and development, and to enabling continuous monitoring of job performance. There are key changes afoot as we move into this new era; they include the end of rigid performance reviews; new opportunities for training, such as MOOCs and inexpensive online courses; and, overall, much greater opportunity for employee input and engagement in innovation and performance improvement.

Traditional HR software is being challenged by startups from outside of HR. Of particular interest is the intersection with analytics. A significant area of development is employee engagement and feedback, where we see companies such as TINYPulse, Glint, CultureIQ, Culture Amp and others. Other vendors, such as Humanyze, are bringing location monitoring to this sector. Korn Ferry provides a recruitment tool that generates profiles for open executive positions based on benchmark data from people who have performed in that role. In social media, LinkedIn offers a tool providing insight into the social activity of candidates, showing who they may know and how they are interacting.

Opportunities are appearing for:

  • Mining of social media to determine suitability of candidates;
  • Analysis of needed positions and job requirements gained from mining internal data;
  • Process change based on real-time monitoring of employee actions;
  • Creating a more dynamic and plastic environment in which HR technology is a component of resilience and agility; and
  • Permitting companies to respond swiftly and to optimize outcomes across all activities.

As HR technology continues to evolve, there will be a significant impact on the workplace. Within an environment of rapidly shifting skills and job descriptions, there is a need for greater engagement with employees and for more attention to the specifics of everyday roles.

There are also downsides to this technology. Employees may rebel against over-monitoring and over-direction, for example. On the whole, however, next-generation employees are likely to be more comfortable with the trade-offs between privacy and opportunity. Still, there is a risk of reducing flexibility by forcing employees to think constantly about their actions. Answers might come from gamification, which can motivate employees and ease concerns about control.

Assuming that we have learned the lessons of early 20th-century “Scientific Management,” there is great promise that these technologies will create better integration between employees and the evolving IT and robotics environment.

As we move into an era of continuous AI and Big Data analysis, it is important to understand that behavioral expectations for employees will change. These changes will enable us to think faster, react more flexibly, and perform more comfortably within a technology environment. But the effects upon culture and thought processes are yet to be determined.


Facebook Adds Nascent to its Thingaverse

Facebook is bolstering the less-publicized hardware side of its business with the acquisition of Nascent Objects, a company that has developed a modular consumer electronics platform. According to Nascent’s website:

Nascent Objects was founded on the principle that product development shouldn’t be so hard. That’s why we created the world’s first modular consumer electronics platform – to make product development fast, easy and accessible. By combining hardware design, circuitry, 3D printing and modular electronics, our technology allows developers to go from concept to product in just weeks, much faster and less expensive than traditional methods.

Nascent joins Oculus and other experiments in Facebook’s “Building 8,” a hardware lab set up by the company in April of this year. Facebook is ramping up its virtual reality, 3D printing, and modular development initiatives to compete in the new world of the IoT. Competitors and potential competitors such as Apple, Google, and Amazon are all vying for an edge in an area where many of the opportunities are only dimly understood.

Modular rapid-prototyping may be a good fit for a company wishing to rapidly expand hardware offerings in potentially explosive markets.


They’re Here! Five Humanoids of Today

Robotics and AI have been developing rapidly, and capacities that have been worked on for decades are beginning to bear fruit. Bringing everything together in a fully humanoid machine is still some years away, but recent progress is stunning. It is clear that we need to prepare the way for intelligent machines. As usual with advancing technology, science fiction provides some guidance, with Asimov’s laws of robotics being seminal in this area. But there will be repercussions in law, regulation, economics, lifestyle, and social interaction.

For the moment, progress demonstrates how concepts evolve under a relentless international cooperative effort. As humans, we want humanoids as much as we wanted photographs and video; it is part of our effort to understand humanity. Yes, someday they might even be useful!

Here are five recent humanoids from around the globe, shown in current videos. A few are popular; a few have rarely been viewed. But they are all from this year and represent different aspects of development from around the world. Everyone likes a good robot video, right?

Atlas

Sophia

Asimo (2016)

Nadine

Alter