Guest Blog: Lagging In Digital Transformation? 30% Of Your Senior Executives Are Going To Leave Within A Year


by Gil Press

Companies lagging in their digital transformation, or not even trying to become digital, face the risk of losing substantial portions of their sales, IT leadership, and senior management. About 30% of senior vice presidents, vice presidents, and director-level executives who lack adequate access to the resources and opportunities to develop and thrive in a digital environment plan to leave their company within a year.

This is one of the key findings of a new research report, Aligning the Organization for Its Digital Future. It is based on a worldwide survey of 3,700 business executives, managers, and analysts, conducted for the fifth year in a row by MIT Sloan Management Review in collaboration with Deloitte.

There is remarkable across-the-board agreement about digital disruption: 87% of those surveyed believe it will impact their industry. This is considerably up from last year's survey, in which only 26% said that digital technologies present a threat of any kind. Yet despite the much-increased anticipation of digital disruption, only 44% think their organizations are adequately preparing for it. Similarly, a recent Gartner survey of IT professionals found that 59% said their IT organization is unprepared for the digital business of the next two years.

“Digital” has a strong external orientation, judging by the reported objectives of the surveyed organizations' digital strategies: 64% “strongly agree” that improving customer experience and engagement is a key objective, while only 41% cite “fundamentally transform business processes and/or business model.”

While the orientation of companies' digital strategy is primarily external, the perceived obstacles to digital success are primarily internal. The biggest barrier impeding organizations from taking advantage of digital trends is too many competing priorities, followed by lack of organizational agility. “Disruption,” to these respondents, begins at home, not with the startups promising to disrupt their industry.

Understanding technology is a required but not the most important skill for success in a digital workplace. Says the report: “In an open-ended question, respondents said that the ability to steer a company through business model change is the most important skill, cited by 22%.” They also think that there are not enough people with the right skills. Only 11% say that their company’s current talent base can compete effectively in the digital economy.

The report goes beyond the raw data to assess “companies’ sophistication in their use of digital technologies.” Explaining the methodology for this assessment, it says:

“For the past two years, we have conducted surveys in which we asked respondents to “imagine an ideal organization transformed by digital technologies and capabilities that improve processes, engage talent across the organization, and drive new value-generating business models.” We then asked them to rate their company against that ideal on a scale of 1 to 10. Respondents fall into three groups: companies at the early stages of digital development (rating of 1-3 on a 10-point scale, 32% of respondents), digitally developing companies (rating of 4-6, 42% of respondents), and businesses that are digitally maturing (rating of 7-10, 26% of respondents).”
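The bucketing described in the quoted methodology is simple to state precisely. A minimal sketch (the function name is mine; the cutoffs and respondent shares come from the passage above):

```python
def maturity_group(rating: int) -> str:
    """Map a 1-10 self-rating against the 'ideal digital organization'
    to the report's three maturity groups."""
    if not 1 <= rating <= 10:
        raise ValueError("rating must be between 1 and 10")
    if rating <= 3:
        return "early"        # early stages of digital development (32% of respondents)
    if rating <= 6:
        return "developing"   # digitally developing (42% of respondents)
    return "maturing"         # digitally maturing (26% of respondents)

print(maturity_group(2))   # early
print(maturity_group(8))   # maturing
```

Note that the group a company lands in depends entirely on this single self-reported rating, which is exactly the limitation discussed below.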

The assessment of whether a company is digitally mature is a subjective one, made by the respondents rather than by outside observers applying objective criteria. It may well be that the respondents who rated their companies low on the digital maturity scale are simply unhappy with their current employer: not enough opportunities to develop, generally incompetent leaders, too much hierarchy, and not enough collaboration.

Notwithstanding the issue of how digitally mature companies were identified, the report’s conclusion—and prescription—is that to succeed in a digital world you must adopt a digital culture. It says:

“A key finding in this year’s study is that digitally maturing organizations have organizational cultures that share common features…The main characteristics of digital cultures include: an expanded appetite for risk, rapid experimentation, heavy investment in talent, and recruiting and developing leaders who excel at “soft” skills.”

This sounds very much like the prescriptions for business success emanating from business schools for at least half a century, long before “digital” became a set of new technologies, processes, and attitudes companies must invest in and take advantage of to stay competitive.

The importance of becoming digital today is a good enough reason to read the report carefully and take note of how business executives in 131 countries and 27 industries answered the questions posed to them. The Sloan Management Review and Deloitte should be commended for conducting a large annual survey probing the state of the art of digital transformation.

But for a more convincing assessment of what constitutes “digital maturity,” we will have to wait until Sloan and Deloitte (or someone else) conduct research that objectively compares companies that have invested heavily in “digital” with companies that have invested only lightly in this new new thing. A difficult research challenge, no doubt, as very few companies willingly admit to falling behind the times.

The findings will be even more meaningful if the research also objectively compares successful companies (e.g., profitable ones) not investing in digital with not-so-successful companies (e.g., those losing money or market share) that have totally embraced digital. Aren't there companies out there today that are hierarchical, risk-averse, and do not invest in talent or digital, but still make a ton of money? Can we be absolutely confident that these will not be the characteristics of (at least some) successful companies in the future?

Gil Press

Gil Press is Managing Partner at gPress, a marketing, publishing, research and education consultancy. Previously, he held senior marketing and research management positions at NORC, DEC and EMC. He was Senior Director, Thought Leadership Marketing at EMC, where he launched the Big Data conversation with the “How Much Information?” study (2000, with UC Berkeley) and the Digital Universe study (2007, with IDC). He blogs at and Twitter: @GilPress

Guest Blog: Machines Are Taking All The Jobs? What Decision-Makers Say And Do


by Gil Press

A new PwC survey provides fresh and illuminating data on the burning questions of the day: Are machines going to take over our jobs? And how much do we rely (or over-rely) today on machines, automation, and algorithms?

Experts are confident that machines are going to replace many workers. A much-quoted report from Oxford University has estimated that “about 47% of total US employment is at risk” of being fully automated. The machine threat to employment is even greater in developing economies—a recent report from Oxford estimates that 77% of jobs in China and 69% of jobs in India are “at high risk of automation.”

But maybe estimating the type of jobs that the machines are going to replace is the wrong approach. Tom Davenport, who just published a book on strategies for coping with automation, Only Humans Need Apply: Winners and Losers in the Age of Smart Machines (co-authored with Julia Kirby), told the Wall Street Journal recently: “Computers don’t tend to replace whole jobs; they replace specific tasks.”

The McKinsey Global Institute (MGI) agrees: “…a focus on occupations is misleading. Very few occupations will be automated in their entirety in the near or medium term. Rather, certain activities are more likely to be automated, requiring entire business processes to be transformed, and jobs performed by people to be redefined.”

MGI estimates that 45% of work-related tasks can be automated. This finding does not bode well for knowledge workers who were sure their cognitive skills could not be automated and that they would always outrun the machines. Even CEOs, according to MGI, spend over 20% of their time on activities that can be automated with current technology.

What has been missing from this discussion is data on how much we rely (or don't) on machines today, rather than estimates based on experts' assessments of how automation-prone various occupations and activities are. Specifically, has the era of big data and increasingly sophisticated algorithms changed the nature of business decision-making? To what extent do business executives rely on machines today when they make strategic decisions?

A new PwC survey of more than 2,100 business decision-makers across more than 10 countries and 15 industries sheds new light on these questions. It frames the discussion as follows: “Executives who once relied firmly on their intuition and experience are now face-to-face with machines that can learn from massive amounts of data and inform decisions like never before.”

59% of the decision-makers PwC surveyed say that the analysis they require relies primarily on human judgment rather than machine algorithms. That means that 41% already tend to rely more on algorithms than their own experience, judgment, and intuition. “We are not talking about pricing a seat on an airline,” says (via email) Dan DiFilippo, Global & US Data and Analytics Leader at PwC. “We are talking about big, strategic decisions that almost certainly involve some combination of human and machine, but clearly we see a significant involvement of the machine.”

The most interesting findings are about the types of decisions that tend to be assisted by machine algorithms and those that rely more on human judgment. In the chart above, “respondents who answered closest to zero are nearest to the survey's overall average reliance on analysis from machine algorithms and human judgment. The farther away from the center point, the greater reliance on either mind or machine,” says PwC.

“Shrinking existing business” was deemed by survey respondents the type of decision that relies most on human judgment, and “Investment in IT” the one relying most on algorithms. “Investment in IT,” says DiFilippo, “can cover many areas including shop floor automation, CRM systems, HR systems, risk management systems, etc., all of which have varying degrees of machine algorithms and can be assessed by machine algorithms.”

The breakdown of results by country offers a striking juxtaposition of China and Japan, with the former as the country/region relying more than others on machine algorithms and the latter second only to Central and Eastern Europe in its reliance on human judgment. One would think that China and Japan would have similar attitudes toward, and use of, algorithms in decision-making, but this is apparently not the case. It's possible, however, that the results are due to different interpretations of the survey questions. Says DiFilippo: “We don't have a precise answer or explanation for this—we are still working to gather more on this front.”

Finally, the breakdown of results by industry shows that different economic sectors differ in the degree to which decision-makers rely on their own judgment vs. machine algorithms. Concludes DiFilippo: “Involving the machine can help reduce/eliminate bias (at the individual, department or organization level), add more accuracy and/or more computing power to crank through a high volume of scenarios that human can't do (or can't do in a timely manner), and importantly—and the data supports this—there is a sense that the machine can help de-risk the strategic decision… we see that those who had a high degree of machine algorithms felt a high degree of managed and known risks.”

So should we search for the right mix of minds and machines in the context of a specific decision or should we succumb to a universal McAfee’s Law and agree that “as the amount of data goes up, the importance of human judgment should go down”? What’s your experience with trusting machine algorithms rather than your own judgment?


Guest Blog: Only Humans Need Apply Is A Must-Read On AI For Facebook Executives


by Gil Press

Under pressure to remove alleged human bias from its “Trending Topics” section, Facebook in August fired the editors who were selecting and writing headlines for the stories, explaining that this “will make the product more automated.” The results of trusting algorithms more than humans have continued to make headlines ever since, with the Trending “product” promoting a fake news story about Fox News' Megyn Kelly, a conspiracy article claiming the 9/11 twin towers collapsed because of “controlled demolition,” and Apple's Tim Cook announcing that Siri will physically come out of the phone and do all the household chores (a story from an Indian satirical website, Faking News, that was Trending's top story on the day of the iPhone 7 launch event), to mention just a few of the more embarrassing machine failures.

Silicon Valley has never displayed much love for fallible humans, but it has shown a lot of confidence in the continuous improvement, and now self-improvement, of machines. Do humans still have an important role to play in our automated lives, which are increasingly controlled by sophisticated algorithms and seemingly smarter machines?

In Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, knowledge work and analytics expert Tom Davenport and Julia Kirby, a contributing editor for the Harvard Business Review, offer optimistic, upbeat and practical answers to this much-debated question. “The upside potential of the advancing technology is the promise of augmentation—in which humans and computers combine their strengths to achieve more favorable outcomes than either could do alone,” they write.

There is not much difference, contend Davenport and Kirby, between technologies of automation and technologies of augmentation. The difference lies in the goals and attitudes behind the application of these technologies. Automation is unidirectional and focuses “primarily or exclusively on cost reduction” via the elimination of human labor. In contrast, “augmentation approaches tend to be more likely to achieve value and innovation” and they are bidirectional, making “humans more capable of what they are good at” and “machines even better at what they do.”

It is a shortsighted (and short-term) strategy for companies to favor automation over augmentation: “If the goal is to provide truly exceptional or differentiated products and services at scale, only an augmentation arrangement can accomplish that,” write Davenport and Kirby. They advocate workplaces that “combine sophisticated machines and humans in partnerships of mutual augmentation” and mutual benefit.


Guest Blog: A Very Short History of Digitization


Milestones in the story of the adoption and proliferation of today’s most widely spoken language, the computer’s binary code.

by Gil Press

Ones and zeros are eating the world. The creating, keeping, communicating, and consuming of information are all being digitized, turned into the universal language of computers. All types of enterprises, from small businesses to large corporations to non-profits to government agencies, are going through a “digital transformation,” turning digitization into new processes, activities, and transactions.

From the 1950s on, with a distinct bounce in the 1990s due to the advent of the Web, digitization has changed the way we work, shop, bank, travel, educate, govern, manage our health, and enjoy life. The technologies of digitization enable the conversion of traditional forms of information storage, such as paper and photographs, into the binary code (ones and zeros) of computer storage. A subset is the process of converting analog signals into digital signals. But much larger than the translation of any type of media into bits and bytes is the digital transformation of economic transactions and human interactions.
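To make that analog-to-digital subset concrete: pulse-code-style digitization samples a continuous signal at discrete instants and rounds each sample to one of a fixed number of levels, after which the signal is just integers, i.e., ones and zeros. A minimal sketch (illustrative only; the sample rate and bit depth are arbitrary toy values, not any standard):

```python
import math

def digitize(signal, sample_rate=8, bits=3):
    """Sample a continuous signal (a function of time over 0..1 s)
    and quantize each sample to one of 2**bits levels."""
    levels = 2 ** bits
    samples = []
    for n in range(sample_rate):
        x = signal(n / sample_rate)              # sample at a discrete instant
        x = max(-1.0, min(1.0, x))               # clip to the representable range
        q = round((x + 1.0) / 2 * (levels - 1))  # quantize to an integer code
        samples.append(q)
    return samples

# One cycle of a sine wave becomes a short list of small integers,
# each of which is written out in binary on the storage medium.
print(digitize(lambda t: math.sin(2 * math.pi * t)))
```

Real PCM systems differ only in scale (e.g., CD audio uses 44,100 samples per second at 16 bits), not in principle.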

The expression of data as ones and zeros facilitates its generation, replication, compression, and dissemination (see A Very Short History of Big Data); its analysis (see A Very Short History of Data Science); and its organization (see A Very Short History of the Internet and the Web). It also encourages the replacement or augmentation of the physical with the virtual or online presence (see A Very Short History of the Internet of Things).

Here are a few milestones in the story of the adoption and proliferation of today’s most widely spoken language.

1679 Gottfried Wilhelm Leibniz develops the modern binary number system and, in 1703, publishes Explication de l’Arithmétique Binaire (Explanation of Binary Arithmetic), linking it to ancient China.
1755 Samuel Johnson publishes A Dictionary of the English Language and includes an entry for “Binary arithmetick,” quoting Ephraim Chambers’ Cyclopaedia: “A method of computation proposed by Mr. Leibnitz, in which, in lieu of the ten figures in the common arithmetick, and the progression from ten to ten, he has only two figures, and uses the simple progression from two to two. This method appears to be the same with that used by the Chinese four thousand years ago.”
1847 George Boole introduces Boolean algebra in The Mathematical Analysis of Logic, creating the field of mathematical logic, leading eventually to universal computation. In 1854, he writes in An Investigation of the Laws of Thought: “The respective interpretation of the symbols 0 and 1 in the system of logic are Nothing and Universe.”
1937 Claude Shannon submits his master’s thesis at MIT, establishing the theoretical underpinnings of digital circuits. Shannon showed how Boolean algebra could optimize the design of systems of electromechanical relays then used in telephone routing switches.
1938 Alec Reeves conceives of the use of pulse-code modulation (PCM) for voice communications, digitally representing sampled analog signals. It was not used commercially until the 1950s, when the invention of the transistor made it viable. PCM has become the standard form of digital audio in computers, compact discs, digital telephony and other digital audio applications.
1940 John V. Atanasoff writes in Computing Machine for the Solution of Large Systems of Linear Algebraic Equations, a paper describing the electronic digital calculating machine he has built with Clifford Berry: “…for mechanized computation, the base two shows a great superiority… a card of a certain size used with the base-two recording system will carry more than three times as much data as if used with the conventional base-ten system.”
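Atanasoff's “more than three times” figure is straightforward information arithmetic: a hole position that distinguishes two states stores one bit, while a card that encodes decimal digits conventionally (one punched hole per ten-position column) spends ten positions to store a single digit's log2(10) ≈ 3.32 bits. A sketch with a hypothetical position count of my choosing:

```python
import math

positions = 960   # hypothetical number of hole positions on a card

# Base two: each position is an independent 0/1, so one bit per position.
bits_base_two = positions * 1.0

# Conventional base ten: ten positions encode one decimal digit,
# and a decimal digit carries log2(10) bits of information.
bits_base_ten = (positions // 10) * math.log2(10)

ratio = bits_base_two / bits_base_ten
print(f"base-two card holds {ratio:.2f}x as much data")  # roughly 3.01x
```

The ratio 10 / log2(10) ≈ 3.01 holds whatever the card size, matching Atanasoff's “more than three times.”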
1943 The SIGSALY secure speech system performs the first digital voice transmission, used for high-level Allied communications during World War II.
June 25, 1945 John von Neumann’s First Draft of a Report on the EDVAC is distributed to 24 people working on the development of the EDVAC, one of the earliest computers. It documents the key decisions made in the design of the EDVAC, among them the decision to use binary to represent numbers, thus reducing the number of components required compared to its predecessor, the ENIAC, which used the decimal system. The document became the technological basis for all modern computers.
1948 Claude Shannon publishes “A Mathematical Theory of Communication” in the July and October issues of the Bell System Technical Journal. Shannon: “If the base 2 is used [for measuring information] the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey. A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of information.”
Summer 1949 Claude Shannon lists in his notebook the storage capacity in bits of a number of items. He estimated that a punch card has just under 10³ bits and a single-spaced typed page 10⁴ bits. Four years before the discovery of the double-helix structure of DNA, Shannon estimated that the “genetic constitution of man” is about 10⁵ bits. The largest holder of bits he could think of was the Library of Congress, which he estimated to hold 10¹⁴ bits of information (p. 232 in The Information by James Gleick).
1954 General Electric’s Major Appliance Division plant in Louisville, Kentucky, installs the UNIVAC I computer, the first business use—payroll processing and manufacturing control programs—of a computer in the United States. “The Univac I was also hooked up with speakers, and the operator had the machine playing classical music each evening,” recalls Burton Grad who designed and wrote (in machine language) a manufacturing control program for GE’s Dishwasher and Disposer Department.
1955 John Hancock Mutual Life Insurance Co., a pioneer in digitizing customer information, digitizes 600 megabytes of two million life-insurance policies.
September 4, 1956 IBM announces the 350 Disk Storage Unit, the first computer storage system based on magnetic disks and the first to provide random access to stored data. It came with fifty 24-inch disks and a total capacity of 5 megabytes, weighed 1 ton, and could be leased for $3,200 per month; its first customer was United Airlines’ reservations system.
September 14, 1956 IBM announces the 305 RAMAC and the 650 RAMAC (Random Access Memory Accounting) which incorporated the 350 Disk Storage Unit. It promised, as the IBM press release said, “that business transactions will be completely processed right after they occur. There will be no delays while data is grouped for batch processing… Random access memory equipment will not only revolutionize punched card accounting but also magnetic tape accounting.”
When it was exhibited in the 1958 Brussels World’s Fair, visitors could query “Professor RAMAC” using a keyboard and get answers in any of ten languages. The RAMAC became obsolete within a few years of its introduction as the vacuum tubes powering it were replaced by transistors. But disk drives, invented in a search for faster access to information, are still used as the containers for almost all digital information today.
1960 American Airlines’ Sabre flight-reservation system digitizes a process developed in the 1920s, processing 84,000 telephone calls per day and storing 807 megabytes of reservations, flight schedules and seat inventory.
1962 The term database is mentioned in print for the first time, according to the Oxford English Dictionary, quoting a Systems Development Corporation technical memo: “A ‘data base’ is a collection of entries containing item information that can vary in its storage media and in the characteristics of its entries and items.”
1963 Charles Bachman, at GE’s computer division, develops the Integrated Data Store (IDS), one of the first database management systems using what came to be known as the navigational database model in the Manufacturing Information and Control System (MIACS) product.
April 19, 1965 Gordon Moore publishes “Cramming more components onto integrated circuits” in Electronics magazine, the first formulation of what came to be known as “Moore’s Law.” The observation of the constant doubling of the number of transistors that can be “crammed” into an integrated circuit became the rallying cry that has guided manufacturing process innovations, reducing the price and increasing the power of electronic components and driving a constant expansion of the scope and reach of digitization.
1968 U.S. libraries begin using Machine Readable Cataloging (MARC) records.
1969 Willard Boyle and George E. Smith at AT&T Bell Labs invent the charge-coupled device (CCD), transforming light into electric signals. The CCD has played a major role in the development of digital imaging in general and the development of digital cameras and medical imaging in particular. Boyle and Smith were awarded the 2009 Nobel Prize in Physics.
June 1970 Edgar F. (“Ted”) Codd publishes “A relational model of data for large shared data banks,” in the Communications of the ACM, presenting the theoretical basis for relational databases, which became the dominant type of databases from the 1980s to around 2000.
1971 Arthur Miller writes in The Assault on Privacy that “Too many information handlers seem to measure a man by the number of bits of storage capacity his dossier will occupy.”
July 4, 1971 Michael Hart launches Project Gutenberg with the goal of making copyright-free works electronically available by entering the text of the U.S. Declaration of Independence into the mainframe he was using at the University of Illinois.
1972 Pulsar, the world’s first all-electronic digital watch and the first to use a digital display, is launched.
1973 Charles Bachman is awarded the Turing Award. From The Programmer as Navigator, Bachman’s Turing Award lecture: “Copernicus presented us with a new point of view and laid the foundation for modern celestial mechanics… A new basis for understanding is available in the area of information systems. It is achieved by a shift from a computer-centered to the database-centered point of view. This new understanding will lead to new solutions to our database problems and speed our conquest of the n-dimensional data structures which best model the complexities of the real world… The availability of direct access storage devices laid the foundation for the Copernican-like change in viewpoint… From this point, I want to begin the programmer’s training as a full-fledged navigator in an n-dimensional data space.”
December 1975 The first digital camera, invented by Steven Sasson at Eastman Kodak, takes 23 seconds to capture its first image. The camera weighed 8 pounds, recorded black-and-white images to a compact cassette tape, and had a resolution of 0.01 megapixels.
1977 Citibank installs its first ATM. By the end of the year, all the bank’s New York branches had at least two machines operating 24 hours a day, seven days a week, ensuring 24-hour access in case one fails. When a huge blizzard hit New York in January 1978, banks were closed for days and ATM use increased by 20%. Within days, Citibank had launched its “The Citi Never Sleeps” ad campaign. A decade later, the bank’s ATM network stored 450 megabytes of electronic transactions.
1979 Federal Express launches COSMOS (Customers, Operations, and Services Master Online System), digitizing the management of people, packages, vehicles, and weather scenarios in real time, with a computer storage capacity of 80 gigabytes.
April 1980 I.A. Tjomsland gives a talk titled “Where Do We Go From Here?” at the Fourth IEEE Symposium on Mass Storage Systems, in which he says “Those associated with storage devices long ago realized that Parkinson’s First Law may be paraphrased to describe our industry—‘Data expands to fill the space available.’”
1981 Edgar F. (“Ted”) Codd is awarded the Turing Award for his fundamental and continuing contributions to the theory and practice of database management systems—“whenever anyone uses an ATM machine, or purchases an airline ticket, or uses a credit card, he or she is effectively relying on Codd’s invention.”
In his Turing Award Lecture, Codd notes that “As it stands today, relational database is best suited to data with a rather regular or homogeneous structure. Can we retain the advantages of the relational approach while handling heterogeneous data also? Such data may include images, text, and miscellaneous facts. An affirmative answer is expected, and some research is in progress on this subject, but more is needed.” The challenge of heterogeneous data or “big data” will be addressed almost three decades later but not with a relational database approach.
July 9, 1982 The movie Tron, in which the Jeff Bridges character is digitized by an experimental laser into a mainframe where programs are living entities appearing in the likeness of the humans who created them, is released.
August 17, 1982 The first commercial compact disc (CD) is produced, a 1979 recording of Claudio Arrau performing Chopin waltzes.
1984 8.2% of all U.S. households own a personal computer, the U.S. Census Bureau finds in its first survey of computer and Internet use in the United States. In 2013, 83.8% of U.S. households reported computer ownership, with 74.4% reporting Internet use.
February 1985 The Whole Earth ’Lectronic Link (WELL) is established, one of the first “virtual communities.”
1988 More compact discs (CDs) are sold than vinyl records.
June 1990 General Instruments, an American manufacturer of cable television converters and satellite communications equipment, upsets the race to build the television of the future by announcing it has succeeded in squeezing a digital HDTV signal into a conventional broadcast channel. Up until then all the companies preparing proposals for an HDTV standard were working on analog systems.
1991 The first 2G cellular network is launched in Finland. 2G networks used digital signals rather than analog transmission between mobile phones and cellular towers, increasing system capacity and introducing data services such as text messaging.
July 1992 Tim Berners-Lee posts the first photo uploaded to the Web, showing the all-female parody pop group Les Horribles Cernettes (LHC), consisting of four of his colleagues at CERN.
May 1993 O’Reilly Digital Media group launches the Global Network Navigator (GNN), the first commercial web publication and the first website to offer clickable advertisements.
1994 Teradata has the largest commercial database at 10 terabytes.
Summer 1994 A large pepperoni, mushroom and extra cheese pizza from Pizza Hut is ordered online, possibly the first transaction on the Web.
October 1994 HotWired is the first web site to sell banner ads in large quantities to a wide range of major corporate advertisers.
1995 After a five-year pilot project, the National Digital Library program begins digitizing selected collections of Library of Congress archival materials.
June 1995 The Norwegian Broadcasting Corporation (NRK) launches the world’s first Digital Audio Broadcasting (DAB) channel.
November 22, 1995 Toy Story opens in U.S. theaters, the first feature film made entirely with computer-generated imagery (CGI).
1996 Brewster Kahle establishes the Internet Archive to preserve and provide access to nearly every site on the Web, later evolving into a comprehensive digital library. Other Web-archiving projects launched in 1996 include the National Library of Australia’s PANDORA Project and the Royal Library of Sweden’s Kulturarw Heritage Project.
1996 Digital storage becomes more cost-effective for storing data than paper.
1996 E-gold is launched, becoming the first successful digital currency system to gain a widespread user base and merchant adoption.
1998 Jim Gray is awarded the Turing Award for seminal contributions to database and transaction processing research and technical leadership in system implementation.
1998 Production of analog cameras peaks at almost 40 million as they are replaced by digital cameras.
1998 Digital Television transmission commences in the U.K. and the U.S., launching the process of converting and replacing analog television broadcasting with digital television.
March 25, 1998 Microsoft patents ones and zeroes, says The Onion.
October 23, 1998 The Last Broadcast is the first feature-length movie shot, edited and distributed digitally via satellite download to 5 theaters across the United States.
December 1998 Nicholas Negroponte writes in Wired: “Like air and drinking water, being digital will be noticed only by its absence, not its presence.”
1999 Wal-Mart has the largest commercial database at 180 terabytes.
2000 The number of photos preserved on film annually peaks at 85 billion, rapidly replaced in subsequent years by digital photos.
September 2000 MP3 player manufacturer i2Go launches a digital audio news and entertainment service that enables users to download news, sports, entertainment, weather, and music in audio format. In February 2004, Ben Hammersley writes in the Guardian: “Online radio is booming thanks to iPods, cheap audio software and weblogs… But what to call it? Audioblogging? Podcasting? GuerillaMedia?”
January 1, 2001 The Electronic Product Code (EPC) is defined at MIT as a replacement for the Universal Product Code (UPC or ‘bar code’).
2002 Digital information storage surpasses non-digital for the first time.
2003 More digital cameras than traditional film cameras are sold in the U.S. for the first time.
2003 Electronic payments in the U.S. surpass the use of cash and checks for the first time.
June 2003 The DVD format (launched in the late 1990s) becomes more popular than VHS in the U.S.
October 2003 The Check 21 Act makes check images a legal transfer medium in the U.S., allowing financial institutions to create a digital version of the original check. Over 50 billion paper checks were processed in the U.S. in 2003.
2004 Google announces it is working with the libraries of Harvard, Stanford, the University of Michigan, and the University of Oxford, as well as The New York Public Library, to digitally scan books from their collections. The Internet Archive starts a similar effort, the Million Book Project.
2007 94% of the world’s information storage capacity is digital, a complete reversal from 1986, when 99.2% of all storage capacity was analog.
2008 More music is sold by iTunes than by Wal-Mart.
October 2008 Satoshi Nakamoto publishes “Bitcoin: A Peer-to-Peer Electronic Cash System,” describing the first decentralized digital currency. In October 2015, The Economist stated that blockchain, the technology behind bitcoin, “could transform how the economy works.”
2010 Online advertising ($26 billion) in the United States surpasses newspaper advertising ($22.8 billion) for the first time.
2010 Production of digital cameras peaks at just over 120 million as they are replaced by smartphones.
January 2011 Jean-Baptiste Michel, et al., publish “Quantitative Analysis of Culture Using Millions of Digitized Books” in Science. On the basis of a corpus of digitized texts containing about 4% of all books ever printed, they investigate linguistic and cultural phenomena reflected in the English language between 1800 and 2000, calling their field of study “culturomics.”
2011 Amazon sells more Kindle books than print books.
2012 U.S. consumers pay more for online movies than for DVDs and Blu-ray discs, for the first time.
2012 180 petabytes (180 million gigabytes) are added annually to Facebook’s data warehouse, which has grown 2,500-fold over the past four years.
December 2012 Annual e-commerce sales top $1 trillion worldwide for the first time.
2014 Streaming revenue from services like Spotify and Pandora overtakes CD sales for the first time.
February 2014 45% of Internet users ages 18-29 in serious relationships say the Internet has had an impact on their relationship.
Summer 2014 The number of Internet users worldwide reaches 3 billion.
2015 Michael Stonebraker is awarded the Turing Award for fundamental contributions to the concepts and practices underlying modern database systems.
2015 Every minute, Skype users make 110,040 calls, Twitter users send 347,222 tweets, YouTube users upload 300 hours of new videos, Pinterest users pin 9,722 images, Netflix subscribers stream 77,160 hours of video, Snapchat users share 284,722 snaps, and Facebook users like 4,166,667 posts.
2015 Digital America: A tale of the haves and have-mores, a McKinsey Global Institute (MGI) report, is the first major attempt to measure the ongoing digitization of the U.S. economy at a sector level. It introduces the MGI Industry Digitization Index, which combines dozens of indicators to provide a comprehensive picture of where and how companies are building digital assets, expanding digital usage, and creating a more digital workforce. Because the less digitized sectors are some of the largest in terms of GDP contribution and employment, MGI concludes that the U.S. economy as a whole is only reaching 18% of its digital potential and it estimates that digitization could add up to $2.2 trillion to annual GDP by 2025.
Gil Press

Gil Press is Managing Partner at gPress, a marketing, publishing, research and education consultancy. Previously, he held senior marketing and research management positions at NORC, DEC and EMC. He was Senior Director, Thought Leadership Marketing at EMC, where he launched the Big Data conversation with the “How Much Information?” study (2000 with UC Berkeley) and the Digital Universe study (2007 with IDC). He blogs and is on Twitter at @GilPress.