Artificial Intelligence Future

Privacy, Policy, and AI: The Need for a Global Framework for a Globalized World

Introduction

There is already a considerable body of literature on the technicalities behind AI, its enormous benefits, and its risks. While touching on the interdependence of AI, policy, and privacy, this paper focuses on the challenges and, most importantly, on possible solutions to address them at scale.

One of the main characteristics of futures thinking is backcasting, which allows us to start with a probable, possible, or desired future and work our way backward. With the AI advancements we see today (driven mainly by the private sector), the ambiguous nature of privacy, and policymakers struggling to catch up, adopting this kind of thinking is becoming a must. Moreover, the need for a global framework has never been more urgent.

Before picturing a future with AI, in both its socially beneficial and harmful senses, let us first try to define it. The concept of artificial intelligence originated in philosophy hundreds of years ago, when philosophers tried to describe human intelligence as a mechanical process, which subsequently inspired scientists to consider the possibility of an electronic brain. Today, according to Tesler’s Theorem, “AI is… whatever has not been done yet.” Tomorrow, thanks to big data, artificial intelligence might surpass human-level intelligence.

Big data is simply the information that marketing organizations, government agencies, corporations, regulators, and others collect about us, mainly through our online activities (e.g., browsing the web, shopping online, posting a selfie, sending a direct message, using a personal device, or interacting with a voice assistant). With so many data touchpoints, we leave a trail that reveals a great deal about us: who we are, what we like, what we dislike, what we buy, where we go, and even who we are likely to become. So what does privacy mean in this digital era? And how can a framework address these issues at scale?

As individuals, we each have our own expectation of privacy, and with the rise of new devices enabling constant communication and ever more information sharing, much of what used to be private may now be considered public.

As societies, we view and value privacy based on influences such as our history, culture, and social norms.
As corporations, we want to provide our customers with exceptionally personalized experiences and get ahead of our competition, while conducting our activities discreetly and without exposure to legal jeopardy.

While privacy has never been vaguer, people have never cared more about their data. This trend is likely to become more prevalent, particularly in an AI-driven world where the use of personal data is becoming the heartbeat of many organizations across industries, forcing governments to think about new forms of policy.

A policy is a law, a regulation, a process, or a procedure designed to provide guidelines on why and how to do things. Our approach to policy analysis with regard to AI gives a central role to future risk avoidance as the primary indicator of success: risks such as algorithmic bias leading to discrimination, privacy violations, the proliferation of deepfakes, socioeconomic inequality, weapons automation, misalignment between our goals and the machines’, and much more.

Policymakers now have to think in a multi-faceted way, since policy faces technical, ethical, and philosophical challenges at once, and given the technological pace we are seeing, the challenge has never been more significant. Furthermore, in a global economy where 1% of vendors will control the vast majority of pre-trained AI models (according to Gartner), businesses, and particularly multinational corporations already struggling with the complexity of global governance, will be at the center of the debate, as they will have to address the risk of widespread bias and security flaws when leveraging these algorithms. This points to an urgent need for a global structure capable of addressing issues at scale.

Opened for signature in 1968, the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) has three simple pillars of equal importance: non-proliferation of nuclear weapons, nuclear disarmament, and the peaceful use of nuclear technology. Today, 191 states have become parties to the treaty, and while many critics argue that the NPT is discriminatory for accepting five nuclear powers and freezing out others, even its critics concede that it recognized reality and contributed to stopping a deadly trend.

Today, the question is whether, and how, a framework can be created for artificial intelligence that complies with international business law, human rights law, and principles of egalitarianism, while considering the challenges mentioned above and without undermining the industry as a whole.

First, why do we need a global framework?
Through product recommenders, robots, autonomous cars, virtual assistants, and many more, users, customers, business organizations, trade unions, and government organizations interact daily with some form of AI. A clear guideline that protects the rights and reinforces the responsibilities of these social partners will not only address ambiguities but also allow companies to accelerate the rate of innovation that enhances and elevates the human condition. Furthermore, when approaching AI from a global perspective, we face a Collingridge dilemma: enormous positive and negative possibilities, coupled with uncertainty about what might unfold over time, especially when asking complex ethical and technological questions.

So how can a dynamic pyramid framework help?
The world is one complex interconnected network. On the one hand, the risk of widespread misuse of AI could have irreparable global ramifications. On the other hand, especially when dealing with ethical concerns, different countries have different rules, laws, and norms reflecting their cultures, adding another layer of complexity to the equation. Moreover, in a constantly changing world, human beings evolve, re-appropriate, and redefine meanings and practices in order to adapt and understand themselves, a phenomenon that forces us to be flexible in how we approach the future.

The Dynamic Pyramid Framework recognizes that building a global framework is a challenge too big for any one nation or institution, and that it requires constant adjustment to meet uncertainty; hence the word "dynamic."

Structurally, similar to the NPT, it starts at the top of the pyramid as a global framework with simple pillars designed to help us scale AI safely, sustainably, and responsibly. As we move down the pyramid, and with the help of ethicists, philosophers, scientists, engineers, NGOs, and corporations, governments can get more granular, designing sub-frameworks that allow their nations to harness the benefits of AI equally while adhering to the global vision and complying with the guidelines of the global framework.

In summary, AI is a black box, and depending on each nation’s principles and values, it can have a constructive or a destructive outcome. By creating a global framework for scaling AI safely, sustainably, and responsibly, in simple terms that all of us, as a society, can understand, we collectively ensure that our world will be safe not only for us but for generations to come.

“The potential benefits of AI are huge. So are the dangers.”

Dave Waters