A viral video released in February showed Boston Dynamics’ new bipedal robot, Atlas, performing human-like tasks: opening doors, tromping about in the snow, lifting and stacking boxes.
Shortly thereafter, White House economists released a forecast that calculated whom Atlas and other forms of automation are going to put out of work. Most occupations that pay less than $20 an hour are likely to be, in the words of the report, “automated into obsolescence.”
In other words, the so-called Fourth Industrial Revolution has found its first victims: blue-collar workers and the poor.
The general response in working America is disbelief or denial. A recent Pew Research Center survey found 80 percent of Americans think their job will still exist in 50 years, and only 11 percent of today’s workers are worried about losing their job to automation. Some – like my former CIA colleagues – insist their skills and knowledge can’t be replaced by artificial intelligence. That is, until they see plans for autonomous drones that don’t require a human hand and automated imagery analysis that outperforms human eyes.
Human workers of all stripes desperately claim that they’re irreplaceable. Bus drivers. Bartenders. Financial advisors. Speechwriters. Firefighters. Umpires. Even doctors and surgeons. Meanwhile, corporations and investors are spending billions to make all those jobs replaceable. Why? Robots and computers don’t need health care, vacation days or even salaries.
Powerhouse consultancies like McKinsey & Co. forecast that 45 percent of today’s workplace activities could be done by robots, AI or some other already demonstrated technology. Some professors argue that we could see 50 percent unemployment in 30 years.
Deniers of the scope and scale of this looming economic upheaval point to retraining programs, and insist that there always will be a need for people to build and service these machines (even as engineers focus on developing robots that fix themselves or each other). They believe such shifts are many decades away, even as noted futurist Ray Kurzweil, a director of engineering at Google, says AI will equal human intelligence by 2029. Deniers also talk about the new jobs that will be created during this Fourth Industrial Revolution. Alas, a report from the 2016 World Economic Forum calculated that the technological changes underway likely will destroy 7.1 million jobs around the world by 2020, with only 2.1 million replaced.
With the future value of human labor (read: our incomes) in doubt, what do we do?
One way to cushion the economic blow is to reclaim something we’ve been giving away for free: our personal data.
Companies that sell personal data should pay a percentage of the resulting revenue into a Data Mining Royalty Fund that would provide annual payments to U.S. citizens, much as the Alaska Permanent Fund distributes oil revenues to Alaskans. This payment scheme would start with traditional data but would extend to future forms of data like our facial expressions and other biometrics. If Google, Facebook or others were profiting from any other public resource, it would be illegal and immoral for them not to pay for it. The same logic should apply to our data.
Profound changes lie ahead with implications beyond our paychecks. Ethicists and philosophers already are debating what a world without work might look like. It’s clear no one will escape the outcomes – negative and positive – of this revolution.
A Data Mining Royalty Fund isn’t about helping just the unemployed factory worker who used to earn $20 an hour. It’s about taking steps to guarantee some minimum income to your family, or the one down the block, before any of us are automated into obsolescence.
Bryan Dean Wright, a former CIA covert operator, resides in Oregon. Twitter: @BryanDeanWright.