With our growing dependence on artificial intelligence (AI), the Internet of Things (IoT), and autonomous products, we face an erosion of basic skills as well as many deep questions about how this technology affects humankind. To explore this paradigm shift, NetApp’s VP of Brand and Influence, Emily Miller, recently sat down for a conversation with CEO of The Futures Agency and author of Technology Vs. Humanity, Gerd Leonhard. They discussed the future of digital transformation, technology companies’ ethical responsibilities, and the importance of maintaining our humanity in the face of automation.
Science Fiction Becomes Science Fact
As technologies give us new capabilities, we become more and more dependent on them and, according to Leonhard, risk “‘amputating’ the skills we once had.” Miller agreed, describing a recent trip on which she couldn’t find anyone to give her local directions. Instead, they all suggested that she use Google Maps. “What do we need to do to make sure that we’re not amputating?” Miller asked. “Having GPS is wonderful, but I still need to know how to navigate.”
In 10 years, technology will be limitless, said Leonhard—and potentially dangerous if left unchecked. Society frowns on certain technological innovations, such as autonomous weapons. Other innovations, like the first gene-edited babies—an advance that Leonhard discussed in his NetApp® Insight® 2018 Barcelona keynote—are debatable. Miller wondered about the role that ethics plays, and will continue to play, in these scenarios, given that different societies approach these capabilities from different points of view.
“Defining right and wrong isn’t necessarily easy,” Leonhard said. “For example, if you lose both of your legs in a car accident, of course you should have [the option to get] prostheses. But what if you voluntarily say you would lose your legs to get new ones that are better? That’s probably not good, and who would decide? That’s Supreme Court material.”
With Great Power Comes Great Responsibility
“Ethics is knowing the difference between what you have a right to do and what is right to do.” – Justice Potter Stewart
Leonhard calls himself a tech optimist, acknowledging technology’s potential to solve humanity’s most pressing problems. He gave examples, including solving energy issues, increasing agricultural production through vertical farming, and preventing diseases such as diabetes and cancer.
For Leonhard, the ethical dilemma comes into play when innovative benefits are not fairly distributed. “If we have achieved things like cheap energy through solar energy, we probably [ethically] have to license it to other countries very cheaply, and we’re not doing that,” he said.
Leonhard argued that the current model, where the companies that own the technology are the only ones to realize the benefits, is unfair. He believes that we need to ensure that these benefits are shared, whether through an automation tax or another yet-to-be-determined solution.
The Future of Human Skills
Leonhard pointed out that looking at skills through the lens of humanity can help us preserve our abilities instead of ceding them to technology. Is driving a car a skill that makes us human? No. Handwriting a letter? Yes.
He noted that if technology companies invested as much money in humanity as they do in technology, they could simultaneously innovate, propel success, and preserve valuable skills. This investment would include diversifying their workforces and hiring employees with high emotional quotients (EQs). Leonhard believes that these practices are as valuable to tech companies as software.
Maintaining Humanity in a Connected World
Regardless of technological advances, Leonhard asserts that all people have the right to retain their humanity. In addition to an international digital ethics treaty, he proposes these five human rights for the digital age:
- The right to remain neutral
- The right to be inefficient
- The right to disconnect
- The right to be, and remain, anonymous
- The right to employ or engage people instead of machines
Miller and Leonhard delved further into the right to be inefficient, which Leonhard considers core to our humanity. He explained that technology gives us the idea that everything has to be efficient, and argued that if we’re no longer allowed to be our fundamentally inefficient human selves, the result is reductionism.
“If the world becomes so efficient that our inefficiency—if we’re tired or we just didn’t have a good day, or we had some bad incident—if that no longer has any room, then that’s reductionism: we are being reduced,” Leonhard noted. “We’re saying, ‘okay, don’t worry about doing something that takes a long time because you can use this technology to cut [your time investment] down to 5 percent.’ So no musician ever learns how to play a guitar. They all learn how to play the iPad and it only takes 10 hours, not 10,000 hours.”
Given our digitally connected world, it’s difficult to avoid reductionism. Leonhard suggested that we encourage a balance in technological consumption the way we would balance any other substance: “You can drink 50 cups of coffee, but you aren’t going to be very happy.”
As technology continues to advance and we debate the ethics of its impact, we need to keep humanity and Leonhard’s proposed human rights at the top of our minds. “The human brain is wired for experiences. If we take out relationships and meaning, we have nothing,” he stated. “We could be an amazing machine, but there’d be no purpose.”
See how other visionaries are revolutionizing technology’s relationship with data as they journey through digital transformation.