Us, the Machines, and Democracy

CIPE Insight | Louisa Tomar

Artificial intelligence (AI) and machine learning are evolving quickly as computing power and the volume of available data continue to grow. These technologies already have major implications for society, business, governance, rule of law, and human rights.

Globally, norms and standards for AI governance remain in their infancy despite a proliferation of “AI principles,” which to some suggests a crisis of legitimacy. In the absence of governing consensus, there has been a push for “ethical AI” in the public, private, and academic spheres. While this is an important stop-gap, it remains unclear what ethics means in the context of AI at a global level, or what accountability entails when those ethics are violated. Achieving a shared approach to AI governance requires a common understanding of terms, of technical and ethical considerations, and of meaningful solutions for managing risks and harms.

AI, defined as “the science and engineering of making intelligent machines,” is often designed to improve the accuracy and, more importantly, the speed of tasks that humans are innately good at. Examples include understanding natural language, identifying pictures, or solving a multi-faceted problem such as winning a game of chess. Machine learning is a subfield of AI involving computer systems that “can improve their perception, knowledge, thinking, or actions based on experience or data” without a human explicitly programming them to achieve a known result. According to MIT researchers, “machine learning refers to a process that starts with a body of data and then tries to derive rules or procedures to explain the data or predict future data.” Common examples include email spam filters; product, newsfeed, and content recommendations; chatbots; and autonomous vehicles. Machine learning is quickly becoming the largest field of AI, especially in terms of commercial use, with innovations in deep learning using neural networks that learn through iteration and analysis of successive layers of data to produce insights that their human engineers cannot fully explain.
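
To make the quoted definition concrete, the toy sketch below starts with a small body of labeled messages, derives a simple rule from them, and then uses that rule to predict whether new messages are spam. It is purely illustrative: the messages and the word-counting rule are invented assumptions rather than any real spam filter, but nothing about spam is explicitly programmed in; the “rule” comes entirely from the data.

```python
# Toy illustration of the quoted MIT definition: start with a body of labeled
# data, derive a rule from it, then use that rule to predict future data.
# The training messages below are hypothetical.

from collections import Counter

# Hypothetical training data: (message, label) pairs.
training_data = [
    ("win a free prize now", "spam"),
    ("limited offer click now", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch with the project team", "ham"),
]

def learn_spam_words(examples):
    """Derive the 'rule': words that appear more often in spam than in ham."""
    spam_counts, ham_counts = Counter(), Counter()
    for text, label in examples:
        (spam_counts if label == "spam" else ham_counts).update(text.split())
    return {w for w, c in spam_counts.items() if c > ham_counts.get(w, 0)}

def predict(message, spam_words):
    """Apply the learned rule to unseen data."""
    hits = sum(word in spam_words for word in message.split())
    return "spam" if hits >= 2 else "ham"

spam_words = learn_spam_words(training_data)
print(predict("click now to win a prize", spam_words))     # -> spam
print(predict("agenda for the team meeting", spam_words))  # -> ham
```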

This emerging technology has vast implications for individuals and society. While many machine learning use cases are commercial, governments increasingly seek access to privately held data, presumably to build their own AI and machine learning systems without having to purchase huge data sets on their citizens from brokers. Rather than buying or collecting the data, investing in computing power, or hiring the engineers and data scientists needed to build and maintain complex algorithms, governments are looking to the private sector for access to data gathered through existing digital products and services. This trend would be concerning even if AI were used only to improve public service delivery; so far, however, government use of AI tends toward surveillance. There are many alarming examples of governments leveraging AI for surveillance, especially in global “swing states” – countries with both democratic and autocratic features, often with weak data privacy regimes and poor human rights records. Even in mature democracies, AI use by law enforcement agencies, particularly facial recognition technology, frequently undermines due process and the rule of law.

What is more, in the absence of global norms and standards around data and AI, debates over privacy, data localization, and internet governance continue with limited consensus. A few years ago, many of these debates focused on “big data”; today they are about how all that data is being used, and reused, primarily for AI. For example, the Organisation for Economic Co-operation and Development (OECD) developed its AI Principles to build a common approach to trustworthy AI. These principles highlight several considerations to ensure trust, uphold rights, and reduce bias; they also encourage cross-border data flows. Nonetheless, debates over data localization and, increasingly, “data sovereignty” remain unresolved.

To think through the implications of AI, let us assume some communities are less likely to have their data included in the ever-growing big data sets that inform AI products. Indeed, lack of internet access and the digital exclusion of marginalized communities, especially in developing countries and among women and girls, already contribute to this gap. Moreover, bias in AI can reflect both long-standing prejudice against certain communities, often minority populations that have faced historical and ongoing discrimination, and the absence of data that accurately represents groups such as women. Whether individuals’ data is missing from the AI learning process because they leave a smaller digital footprint, because of government policy, or, at the opposite end of the spectrum, because they are empowered to protect their data, AI systems ultimately learn from whatever data is available for them to analyze. While we must be mindful of the harms that come from misuse of private or identifiable data, exclusion brings its own set of challenges.

Exclusion from big data sets may appear to protect privacy, but it does not necessarily reduce potential harms. For example, if algorithms developed to identify diseases based on symptoms are trained only on data representative of diseases most commonly affecting men or found in wealthy countries, and only on light skin tones, unrepresented communities may experience lower quality and accuracy of care. Algorithmic bias is ultimately a reflection of the biases and prejudices in our societies as well as a lack of representative data. While this may seem far-fetched now, it is reasonable to expect AI to be used for health diagnostics in a growing number of settings, regardless of whether the algorithm in question is inclusive of (or trained on) the target patient population.

What this necessitates is smart public policy, clear data privacy standards, an accountable commitment to ethical and inclusive practices, and safeguards to protect human rights. In research and commercial settings, some interesting approaches are being leveraged to help reduce bias. One of them, the Data Shapley method, builds on game theory to assign each piece of training data a value reflecting its contribution to the model, so that data whose frequency or influence on the algorithm is too high or too low can be re-weighted. In the health diagnostic illustration above, for example, Data Shapley would allow researchers to assign a higher value to data on a disease symptom found more often in women, and a lower or negative value to symptoms found more often in men, knowing that the model includes, say, 20 percent more sample data from men.
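
As a rough illustration of the idea behind Data Shapley, the sketch below estimates each training point’s value with a Monte Carlo approximation: it averages the point’s marginal effect on validation accuracy across many random orderings of the data. This is a heavily simplified sketch of the underlying Shapley-value idea, not the method’s actual implementation; the tiny data set and the crude threshold “model” are invented for illustration. A point that hurts accuracy, such as the deliberately mislabeled one, should receive a low or negative estimated value, which a researcher could then use to down-weight or remove it, or to up-weight underrepresented data, much as the 20 percent example above describes.

```python
# Heavily simplified sketch of the Shapley-value idea behind Data Shapley:
# estimate each training point's contribution to model quality by averaging
# its marginal effect on validation accuracy over random orderings of the data.
# The data, the toy threshold model, and the sample sizes are all illustrative.

import random

# Hypothetical training set: (feature, label) pairs, e.g. a symptom score and
# whether a disease is present. The last point is deliberately mislabeled.
train = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 0)]
valid = [(0.15, 0), (0.25, 0), (0.75, 1), (0.85, 1)]

def accuracy(subset):
    """Fit a crude threshold 'model' on a subset and score it on validation data."""
    zeros = [x for x, y in subset if y == 0]
    ones = [x for x, y in subset if y == 1]
    if not zeros or not ones:
        return 0.5  # no usable model yet: chance-level accuracy
    threshold = (max(zeros) + min(ones)) / 2
    return sum((x > threshold) == (y == 1) for x, y in valid) / len(valid)

def shapley_values(data, n_permutations=2000):
    """Monte Carlo estimate: average marginal gain in validation accuracy when
    each point is added to a random prefix of the training data."""
    values = [0.0] * len(data)
    for _ in range(n_permutations):
        order = random.sample(range(len(data)), len(data))
        prefix, prev_acc = [], accuracy([])
        for idx in order:
            prefix.append(data[idx])
            acc = accuracy(prefix)
            values[idx] += acc - prev_acc
            prev_acc = acc
    return [v / n_permutations for v in values]

for i, value in enumerate(shapley_values(train)):
    print(f"point {train[i]}: estimated value {value:+.3f}")
```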

Detecting such a bias in an algorithm in the first place, when researchers are not already aware of it, is likely soon to be done primarily by AI as well. An “AI auditor is an algorithm that systematically probes the original machine-learning model to identify biases in both the model and the training data,” which researchers can then correct for. An AI audit might conclude that the algorithm’s inaccuracy in identifying disease symptoms in women is likely the result of the training data containing fewer examples from women. In that case, a researcher might decide to increase the examples of symptoms found in women by 20 percent (or apply the Data Shapley method to the same effect). This is certainly an oversimplification, but reducing AI bias often requires knowing more about the data and/or adding more data to compensate for a failure, bias, or inaccuracy. The question is, can we as individuals be fairly represented by algorithms if we (a) do not know what data is out there about us; (b) cannot confirm its accuracy; and (c) have no say over whether it is included in the datasets on which algorithms are trained?
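
The quoted description of an AI auditor is high level, so the sketch below is only an assumption of what one basic auditing step might look like, not the specific algorithm the researchers describe: it probes a trained model’s accuracy for each demographic group in held-out data and checks each group’s share of the training data, flagging gaps a researcher could then correct for. The function names, thresholds, and data layout are all invented for illustration. In the diagnostic example above, such an audit might report both lower accuracy for women and a smaller share of training data from women, pointing the researcher toward adding or re-weighting the missing examples.

```python
# Minimal, assumed sketch of one bias-auditing step (not the specific
# AI-auditor algorithm quoted above): compare a model's accuracy across
# demographic groups and each group's share of the training data.

from typing import Callable, Dict, List, Sequence, Tuple

Record = Tuple[Dict[str, float], int, str]  # (features, true_label, group)

def audit(model: Callable[[Dict[str, float]], int],
          test_data: Sequence[Record],
          training_groups: Sequence[str],
          max_accuracy_gap: float = 0.10,
          max_representation_gap: float = 0.10) -> List[str]:
    """Return human-readable findings about group-level accuracy and data share."""
    findings: List[str] = []
    groups = sorted({g for _, _, g in test_data})

    # 1. Accuracy per group on held-out test data.
    accuracy = {}
    for g in groups:
        rows = [(x, y) for x, y, grp in test_data if grp == g]
        accuracy[g] = sum(model(x) == y for x, y in rows) / len(rows)
    best, worst = max(accuracy, key=accuracy.get), min(accuracy, key=accuracy.get)
    if accuracy[best] - accuracy[worst] > max_accuracy_gap:
        findings.append(
            f"accuracy gap: {worst} at {accuracy[worst]:.0%} vs {best} at {accuracy[best]:.0%}"
        )

    # 2. Representation of each group in the training data, assuming (as a
    #    simplification) that groups should appear in roughly equal shares.
    for g in groups:
        share = training_groups.count(g) / len(training_groups)
        if abs(share - 1 / len(groups)) > max_representation_gap:
            findings.append(f"representation gap: {g} is {share:.0%} of training data")

    return findings or ["no gaps above the chosen thresholds"]
```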

Communities should not have to choose between privacy and exclusion. The complexity of managing AI bias, risks, and potential harms without consistent data privacy standards that protect human rights is daunting. The efficiency and convenience of machine learning products have made many of us complacent about the wider implications of these technologies, as has the lore around robot sentience in popular culture and science fiction. Talk of AI’s disruption to society often invokes robots taking over jobs – which is likely to happen to some extent, as new technology often does (think tractors and farming). However, a more nuanced understanding of AI and how it is used is needed to ensure that it actually benefits us, humans.

Despite the global nature of the internet and technology, many of the decisions that will determine how AI impacts our lives will be left up to governments, and having a say in those decisions – through voting, policy dialogue, transparency, media, and so forth – is key. According to Dr. Nicholas D. Wright, an expert whose work combines neuroscientific, behavioral, and technological insights to understand decision making in politics, “as artificial intelligence (AI) and AI-related technologies potentially unlock the value of large-scale data collection, authoritarian regimes stand ready to manipulate the development of global surveillance to serve their own interests,” and “thus the challenge for democracies and democratic civil society is to build digitized systems that enable economic and social development but do not afford a shift to authoritarianism.” To address the breadth of change to governance, the economy, and society writ large, governments, businesses of all sizes, academia, and civil society must work together to prepare for these disruptions and seek to establish rules that uphold democratic values and human rights.

Business and trade associations and chambers of commerce have an important role to play, both in contributing to public policy and smart regulatory solutions and in promoting accountable and ethical AI, trust, and best-practice conventions across the private sector in the absence of global standards. In emerging markets and developing countries in particular, AI may feel like a far-off concern. However, decisions being made today about data privacy, data protection, data sovereignty, and digital rights at the national level will shape how these technologies affect local communities. A robust civil society, including an engaged private sector, empowered to work cooperatively with government for the widest possible benefit, is essential to technology’s governance and a natural advantage of democratic civic space. Let us leverage the democratic values and ideals that embody humanity at its best to shape our shared digital future with machines.

Published Date: June 17, 2022