Don’t Fear The Bots–Fear The Humans Who Control Them
Eleonore Pauwels, a research fellow on emerging cyber-technologies at the United Nations University Centre for Policy Research, recently published a report highlighting the threats and opportunities AI presents for global policymakers.
The threats should scare you. The danger is not that the machines will take over; it is that people can use AI maliciously, in ways we cannot detect. For instance, AI can alter a CT scan to convincingly show a cancerous tumor in a healthy person. Imagine what could be done to your credit rating or your bank account – or to everyone’s credit ratings and bank accounts at once.
Conversely, she says, AI can be used to improve peace and conflict resolution efforts. By using AI-trained translation tools to monitor public radio communications in local languages, threats can be identified and addressed earlier.
Interview with Eleonore Pauwels, Research Fellow on Emerging Cyber-technologies at United Nations University’s Centre for Policy Research.
The following is the pre-interview with Eleonore Pauwels. Be sure to watch the recorded interview above.
Expert Insights:
Insight 1: The Deception Machine: AI Convergence, Data Poisoning And New Global Security Risks
As artificial intelligence (AI) increasingly combines with complex dual-use technologies, we face an era of technological convergence that is simply too powerful for humankind to refuse.
Earlier this month, researchers created AI-driven malware that can hack hospital CT scans, generating false cancer images that deceived even the most skilled doctors. If such malware were introduced into today’s hospital networks, healthy people could be treated with radiation or chemotherapy for non-existent tumors, while early-stage cancer patients could be sent home with false diagnoses. Today’s medical intelligence about the treatment of cancers, blood clots, brain lesions, and viruses could be manipulated, corrupted and destroyed. This is just one example of how “data-poisoning” – the manipulation of data to deceive – poses a risk to our most critical infrastructures. Without a common understanding of how AI is converging with other technologies to create new and fast-moving threats, far more than our hospital visits may turn into a nightmare.
Data attacks are the nuclear weapon of the 21st century. Far more important than who controls territory, whoever controls data has the capacity to manipulate the hearts and minds of populations. AI-driven algorithms can corrupt data to influence beliefs, attitudes, diagnoses, and decision-making, with an increasingly direct impact on our day-to-day lives. Data-poisoning is a new and extremely powerful tool for those who wish to sow deception and mistrust in our systems.
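To make the idea of data-poisoning concrete, here is a minimal sketch – a toy stand-in, not the CT-scan attack described above, which used generative models on medical images – showing how silently flipping a fraction of training labels degrades a simple classifier. The synthetic dataset and scikit-learn model are illustrative assumptions.

```python
# Toy label-flipping "data-poisoning" demo (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker silently flips 30% of the training labels.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

# Model trained on poisoned data: same code, corrupted inputs.
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy, clean training data:   ", clean.score(X_test, y_test))
print("accuracy, poisoned training data:", poisoned.score(X_test, y_test))
```

The point is not the specific numbers but the mechanism: the victim’s code is untouched, and only the data has been corrupted.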
The risk is amplified by the convergence of AI with other technologies: data-poisoning may soon infect country-wide genomics databases, and potentially weaponize biological research, nuclear facilities, manufacturing supply chains, financial trading strategies and political discourse. Unfortunately, most of these fields are governed in silos, without a good understanding of how new technologies might, through convergence, create system-wide risks at a global level.
Policymakers need to start working with technologists to better understand the security risks emerging from AI’s combination with other dual-use technologies and critical information systems. If they do not, they must prepare for large-scale economic and social harms inflicted by new forms of automated data-poisoning and cyber-attacks. In an era of increasing AI-cyber conflicts, our multilateral governance system is needed more than ever.
Insight 2: The Promises Of AI For Conflict Prevention: Can We Save More Lives Using Unstructured, Complex And Too Often Forgotten Data?
Across the globe, there is outrage over the missed opportunity: complex, unstructured and strategic data that too often goes unused.
At the United Nations, we have been exploring completely new scenarios for AI: its potential to be used for the noble purposes of peace and security. This could revolutionize the way we prevent and resolve conflicts globally.
Two of the most promising areas are Machine Learning and Natural Language Processing. Machine Learning involves computer algorithms detecting patterns from data to learn how to make predictions and recommendations. Natural Language Processing involves computers learning to understand human languages.
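As a toy illustration of that definition – an algorithm detecting patterns in past data to predict a new case – consider the sketch below; the features, labels and numbers are all invented placeholders, not real indicators.

```python
# Minimal "patterns -> predictions" sketch with invented numbers.
from sklearn.neighbors import KNeighborsClassifier

# Each row: [internet coverage %, food price index]; label: 1 = unrest reported.
X = [[80, 100], [75, 105], [30, 180], [25, 175], [85, 95], [20, 190]]
y = [0, 0, 1, 1, 0, 1]

model = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(model.predict([[28, 185]]))  # -> [1]: this case resembles past unrest
```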
How can these emerging technologies be deployed for the good of humanity to de-escalate violence and increase international stability?
Firstly, overcoming cultural and language barriers. By teaching computers to understand human language and the nuances of dialects, we can not only better link what people write on social media to local contexts of conflict, but also more methodically follow what people say on radio and TV. As part of the UN’s early warning efforts, this can help us detect hate speech in places where the potential for conflict is high. This is crucial because the UN often works in countries where internet coverage is low, and where the spoken languages may not be well understood by many of its international staff.
Natural Language Processing algorithms can help to track and improve understanding of local debates, which might well be blind spots for the international community.
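A minimal sketch of what such monitoring might look like in code: a bag-of-words classifier that flags potentially inflammatory language in transcribed snippets. The snippets, labels and English-only setup below are invented placeholders; a real early-warning system would need carefully curated, locally validated training data in the relevant languages and dialects.

```python
# Toy text classifier for flagging inflammatory language (placeholder data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "community leaders met to discuss the harvest",
    "they are vermin and must be driven out",
    "the market reopens on friday after repairs",
    "take up arms against our neighbors before they strike",
]
labels = [0, 1, 0, 1]  # 1 = potential hate speech / incitement

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(snippets, labels)

# Score a new transcribed snippet.
new = ["drive them out of the village"]
print("probability of incitement:", model.predict_proba(new)[0][1])
```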
Secondly, anticipating the deeper drivers of conflict. We could combine new imaging techniques – whether satellites or drones – with automation. For instance, many parts of the world are experiencing severe groundwater withdrawal and water aquifer depletion. Water scarcity, in turn, drives conflicts and undermines stability in post-conflict environments, where violence around water access becomes more likely, along with large movements of people leaving newly arid areas.
By combining these imaging techniques with Machine Learning, the UN can work in partnership with governments and local communities to anticipate future water conflicts and begin working proactively to reduce their likelihood.
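As a rough sketch of the idea, the code below flags districts whose groundwater levels show a steep downward trend, as might be derived from satellite gravimetry; all data, units and thresholds here are synthetic assumptions for illustration.

```python
# Flag districts with steep groundwater decline (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2010, 2024)

# Hypothetical water-table depths (metres) per district, with noise.
districts = {
    "district_a": 50 - 1.8 * (years - 2010) + rng.normal(0, 0.5, len(years)),
    "district_b": 48 - 0.1 * (years - 2010) + rng.normal(0, 0.5, len(years)),
}

for name, levels in districts.items():
    slope = np.polyfit(years, levels, 1)[0]  # metres per year
    if slope < -1.0:  # assumed depletion threshold
        print(f"{name}: declining {slope:.2f} m/yr -> early-warning flag")
    else:
        print(f"{name}: roughly stable ({slope:+.2f} m/yr)")
```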
Thirdly, advancing decision making. In the work of peace and security, it is surprising how many consequential decisions are still made solely on the basis of intuition.
Yet complex decisions often need to navigate conflicting goals and undiscovered options, against a landscape of limited information and political preference. This is where we can use Deep Learning – where a network absorbs huge amounts of public data, tests it against the real-world examples on which it was trained, and applies probabilistic modeling. This mathematical approach can help us generate models of our uncertain, dynamic world with limited data.
With better data, we can eventually make better predictions to guide complex decisions. Future senior peace envoys charged with mediating a conflict would benefit from such advances to stress-test elements of a peace agreement. Of course, human decision-making will remain crucial, but it would be informed by more robust, evidence-driven analytical tools.
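To show what probabilistic modeling under uncertainty can look like at its very simplest, here is a Monte Carlo sketch for “stress testing” one hypothetical element of an agreement; every distribution and probability below is an invented assumption, far simpler than the Deep Learning approach described above.

```python
# Monte Carlo "stress test" of one hypothetical agreement element.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # simulated scenarios

# Uncertain inputs as distributions, not point estimates (invented values).
p_ceasefire = rng.beta(8, 3, n)   # belief about ceasefire compliance
p_aid       = rng.beta(6, 4, n)   # belief about aid delivery logistics

ceasefire_holds = rng.random(n) < p_ceasefire
aid_delivered = rng.random(n) < p_aid

# The element "succeeds" only if both conditions hold in a scenario.
success = ceasefire_holds & aid_delivered
print(f"estimated success probability: {success.mean():.2f}")
print(f"scenarios where only the ceasefire fails: "
      f"{(~ceasefire_holds & aid_delivered).mean():.2f}")
```

Outputs like these do not replace an envoy’s judgment; they make the assumptions behind a decision explicit and testable.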
Doing the above inside the UN will require training staff and senior leaders in new approaches and trusting in their competence. It will also require collaborating with university researchers and forging close partnerships with leading private AI and technology firms.
Insight 3: The Perils Of The Internet Of Bodies, Genomes And Minds
The all-encompassing capture of our personal information – the quirks that help define who we are and trace the shape of our lives – will increasingly be used for various purposes without our direct knowledge or consent. On an individual level, what this means is that our privacy is receding, and we are being exposed. The evolution of AI is occurring in parallel with technical advances in other fields, such as genomics, epidemiology, and neuroscience. That means not only are your coffee maker and your plane’s engine sending information to the cloud, but so are facial-recognition cameras, personal assistants, keyboards, wearable sensors like Fitbits, intelligent implants inside and outside our bodies, brain-computer interfaces, and even portable DNA sequencers.
When optimized using AI, this trove of data provides information superiority to fuel truly life-saving innovations in precision medicine, suicide prevention, and epidemic detection. Never before has our species been equipped to monitor and sift through human behaviors and physiology on such a grand scale. I call this set of networks the “Internet of Bodies, Genomes and Minds.”
There is great promise here, but also great peril, especially when it comes to ownership and control of our most intimate data. When computer codes analyze not only shopping patterns and dating preferences, but our genes, cells, and vital signs, the entire story of you takes its place within an array of fast-growing and increasingly interconnected databases of faces, genomes, biometrics, and behaviors. The digital representation of your characteristic data could help create the world’s largest precision medicine dataset – or it could render everyone more vulnerable to exploitation and intrusions than ever before.
What might governments seek to do with such information and capabilities? How might large corporations, using their vast computing and machine-learning platforms, try to commodify these streams of information about humans and ecosystems? Indeed, behavioral and biological features are beginning to acquire a new life on the internet, often with uncertain ownership and an uncertain future.
At the end of the electrifying 1970s in France, Michel Foucault coined the term “biopower” to describe how nation-states rely on an “explosion of numerous and diverse techniques for achieving the subjugation of bodies and the control of populations.” The ongoing digital and AI revolution magnifies his concerns. While we are not entering an Orwellian world or a dystopian episode of Black Mirror just yet, we cannot and should not ignore that the weakening boundary – and the weakening distinction – between “private” and “public” is a reality.
Consider the Chinese students whose pictures and saliva samples have been collected on campus to feed a database of faces and genomes. One Chinese facial recognition software company, Cloud Walk, is developing AI technology that tracks individuals’ movements and behavior to assess their chances of committing a crime. Chinese police forces have debuted AI-augmented glasses to identify individuals in real time. Notably, however, Chinese citizens are also beginning to resist such breaches of personal privacy.
There are other examples in which governments and companies are tapping into this Internet of Bodies, sometimes without informed consent or democratic deliberation. The National Institution for Transforming India, also called NITI Aayog, is helping the Indian government aggregate private and public data on projects ranging from the optimization of agriculture to healthcare. The Indian government has also mandated enrollment in Aadhaar, a country-wide biometric identification database. What India intends to do if and when it applies AI technology to such a database is uncertain.
What is certain is that national and international governance structures are not well-equipped to handle the concerns over privacy, ownership, and ethics that are already beginning to emerge.
WEF: From drone swarms to modified E. coli: say hello to a new wave of cyberattacks
More about United Nations University’s Centre for Policy Research:
Twitter: @AI_RRI_Ethics
Website: https://cpr.unu.edu/author/pauwels
The Centre for Policy Research at United Nations University in New York is an independent think tank within the UN system. We combine research excellence with deep knowledge of the multilateral system to generate innovative solutions to current and future global public policy challenges.
Eleonore Pauwels. Photo Credit: United Nations University
Eleonore Pauwels’s bio:
Twitter: @AI_RRI_Ethics
Linkedin: https://www.linkedin.com/in/eleonore-pauwels-b5a0a11b/
Eleonore Pauwels is the Research Fellow on Emerging Cyber-technologies at United Nations University Centre for Policy Research. Pauwels held the position of Director of the Anticipatory Intelligence (AI) Lab with the Science and Technology Innovation Program at the Woodrow Wilson International Center for Scholars. She is a former official of the European Commission’s Directorate on Science, Economy and Society.
Pauwels is a writer and international science policy expert, who specializes in the governance and democratization of converging technologies. She is the author of a landmark report for the United Nations University, titled “The New Geopolitics of Converging Risks: The UN and Prevention in the Era of AI.”
Pauwels’ research analyzes and compares how emerging technologies, such as artificial intelligence and cyber- and bio-technologies, raise new opportunities and challenges for health, security, economics and governance in different geopolitical contexts. She examines the promises and perils that will likely arise with the development of civil and military AI technologies, the Internet of Bodies, Genomes and Minds, and the convergence of cyber- and bio-security.
Pauwels is a member of the Council on Extended Intelligence, an adviser to the AI Initiative at Harvard Kennedy School and to the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and an expert for the World Economic Forum.
Pauwels regularly testifies before U.S., European and international authorities, including the U.S. Department of State, the U.S. National Academy of Sciences, the U.S. National Institutes of Health, the U.S. National Intelligence Council, the European Commission, the Organization for Economic Co-operation and Development, and the United Nations. Pauwels is also well-versed in communicating complex and novel scientific developments to lay audiences (see her TEDxCERN talk on CRISPR, her PBS interview on the dual nature of AI, and her AI Media interview). Bilingual in French and English, she frequently writes for Nature, The New York Times, The Guardian, Scientific American, Le Monde, UN News, The UN Chronicle, The South China Morning Post, Axios, Slate and The World Economic Forum.