Tuesday, May 9, 2023

AI Principles

Triggered by OpenAI's ChatGPT and Microsoft's Bing chatbots, the Artificial Intelligence scare has arrived at our doorstep. It has infected not only regular folk, but even the best minds out there, including those directly involved in AI's development.

What are they afraid of?

World Economic Forum says: Unemployment, Inequality, Mistakes, Racism, Security, Robot rights, Staying in control.
Wired says: Control, Loss of jobs, Bias, Explainability, Ethical decision making.
Tech Target says: Distribution of harmful content, Copyright and legal exposure, Data privacy violations, Sensitive information disclosure, Amplification of existing bias, Data provenance, Lack of explainability and interpretability.

That just about summarizes it all.

Virtually all of these worries are unfounded, because they fail to take into account the human factor behind AI. This failure stems from the collectivist view prevalent in the world today. It is accepted, as an indisputable fact, that it is society that must act to protect itself against AI. From this false premise it follows directly that governments must act promptly to fulfill society's wish and, at the cost of billions, pass dozens of new laws regulating the development of AI, slowing it down with unnecessary oversight and with confusing, contradictory rules. All of this can easily be avoided by not letting emotions (fear, in this particular case) overwhelm us, and by simply acknowledging one obvious fact, the one and only AI Principle:

Artificial Intelligence is a tool, only a tool, and nothing but a tool.

Like any other tool, an artificial intelligence system (AIS) is not liable for its functionality. The humans behind it are. If humans decide to use this tool, whether to act upon the information it provides or to allow it to act by itself based on its own decisions, it is they, the humans, who are responsible for the consequences. They cannot blame negative outcomes on their tools. Whether the responsibility falls on the manufacturer, the distributor, the service provider, or the end user is a (simple) legal issue, as it is with any other tool currently in use. It is not the amorphous, unidentifiable entity we call "society" that must watch over AI, but the concrete, individual humans who are behind it.

This approach applies to every possible scenario involving AI, from displaying information on a screen, to the handling of autonomous vehicles, to the policing of Robocops, to the launching of nuclear missiles. It makes no difference whether a car veered into a tree because AI mistook it for a street or because one of its wheels fell off. This natural, common-sense approach to the "ethical problem" of AI should silence all fear-mongers and quell all attempts to hinder its development. As long as humans are held accountable for what they produce and use, no tool will ever rebel against its maker. This approach does not need a new vision for this new type of tool; it only needs to adapt the existing one to its particularities. Its cost is negligible, and it provides all the safety we need.

A few other issues:

Unemployment. First, this is not an AI issue but a socio-economic one. If employers choose to replace humans with AIS, so be it. Second, as I wrote in another post, this is not even a socio-economic problem, but a socio-economic salvation. This is the centuries-old problem of mechanization, automation, robotization and now AI-ization. The increase in productivity from replacing humans with AIS will make the number of jobs lost to AI only a fraction of the number of jobs it creates.

Explainability. Traceability is a commonly accepted requirement for an AIS. It is needed as feedback, for bug fixing, improvements and legal defense. The opponents of AI claim, correctly, that as the product of a complex neural network, a decision cannot be fully traced, and therefore the decisions made by AI are impossible to fully predict or explain. But then they conclude that this argument is enough to stop, or at least slow down, the development of AI. That is false. The black-box paradigm applies in the AI realm as it does in any other. AIS developers produce systems, not neural nets; they are responsible for the system as such, as a whole. Just as a manufacturer of nuts and bolts is responsible for the nuts and the bolts, and does not have to find the particular atom of aluminum that failed first, AIS developers do not need to pinpoint the "neuron" that misfired or the weight that was assigned to a specific node. The impossibility of fully tracing a decision is a technical problem, not a social one. From a social perspective, a failure needs to be traced only far enough to establish accountability, as the sketch below illustrates.
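
As an illustration only, here is a minimal sketch in Python (all names and the logging scheme are hypothetical, not taken from any real AIS) of what accountability-level traceability could look like: the opaque model is wrapped so that every decision is recorded at the system boundary, which is enough to establish who deployed what and when, without ever inspecting the network's internals.

    # Hypothetical sketch: "black-box" accountability via an audit trail
    # kept at the system boundary -- input, output, model version, time --
    # with no attempt to trace individual neurons or weights.
    import json
    import time
    import uuid

    class AuditedAIS:
        """Wraps an opaque model and records an audit trail of its decisions."""

        def __init__(self, model, model_version, log_path="ais_audit.log"):
            self.model = model                # the black box; internals are not traced
            self.model_version = model_version
            self.log_path = log_path

        def decide(self, request):
            decision = self.model(request)    # whatever the opaque model produces
            record = {
                "id": str(uuid.uuid4()),
                "timestamp": time.time(),
                "model_version": self.model_version,
                "request": request,
                "decision": decision,
            }
            with open(self.log_path, "a") as f:
                f.write(json.dumps(record) + "\n")
            return decision

    # Usage: accountability attaches to the logged record and the identified
    # provider, not to any particular node inside the network.
    ais = AuditedAIS(model=lambda req: "approve" if "refund" in req else "deny",
                     model_version="demo-0.1")
    print(ais.decide("refund request #42"))

The point of the sketch is that the audit record lives outside the black box: it tells you which system, run by whom, produced which decision, and that is all a court or a customer needs.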

Safety "Guns don't kill people, people kill people". Bad actors will always be there, ready to use whatever is available to them to harm others. The only legitimate purpose of governments is to protect us against aggressors. Whether those aggressors use baseball bats, guns, viruses, nuclear missiles or AI is irrelevant. Governments need to adapt, find the threats and annihilate them. The developers of AIS should NOT be concerned with the misuse, or the malevolent use, of their products. More importantly, the governments should NOT force them to be. Any governmental intervention would only create obstacles and slow down the advancement, which would create a great advantage for bad actors, including nations, who have no such restrictions and scruples. The only way to stop a bad guy with a gun is a good guy with a better gun. Vladimir Putin knows this best.

So, then, what about AI Ethics? If AI is not liable for anything it does, is there such a thing as AI Ethics? Yes, there is. A huge thing. However, it concerns only developers, not "society"; it is strictly a technical issue, not a social one. Before considering Ethics, the AI community must first deal with epistemological notions such as senses, perceptions and concepts. But that's another story, for another day.

2 comments:

  1. Fair enough, AI is simply a tool, but it is not a simple tool. And it is not necessarily a safe tool either. Just like with any complex and potentially dangerous technology, there should be a rule book that provides guidelines for how the tool is to be developed and used in order to reduce or minimize societal risks. It is the very existence of these guardrails and regulations that restricts bad actors' access to nuclear tech, keeps planes from being flown into buildings, and makes flying safer than walking. Relying on a bigger/better-gun approach would only lead to an arms race which, yes, would probably accelerate progress, but would also increase the risks exponentially.

    IMO, a set of safety rules is indeed needed. The challenge is about striking the right balance between progress, giving developers and users free rein, and risk mitigation through guardrails and regulations. A few principles should be respected in this balancing act:

    - AIS should be human-aligned: so that it stays true to its original/intended objective.

    - AIS should be self-identifying: so that the user can discern between human and machine knowledge.

    - AIS should be non-self-replicating: so that its training and advancement stay under human control.

  2. Here's a good example. The chatbot is nothing but a tool in the hands of Air Canada, which is responsible for what the bot does.

    https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/
