Triggered by OpenAI's ChatGPT and Microsoft's Bing chatbots, the Artificial Intelligence scare has arrived at our doorstep. It has infected not only regular folk, but even the best minds out there, including those directly involved in AI's development.
What are they afraid of?
Wired says: Control, Loss of jobs, Bias, Explainability, Ethical decision making.
Tech Target says: Distribution of harmful content, Copyright and legal exposure, Data privacy violations, Sensitive information disclosure, Amplification of existing bias, Data provenance, Lack of explainability and interpretability.
That just about summarizes it all.
Virtually all of these worries are unfounded, because they fail to take into account the human factor behind AI. That failure stems from the collectivist view prevalent in the world today. It is accepted, as an indisputable fact, that it is society that must act to protect itself against AI. From this false premise it follows directly that governments must act promptly to fulfill society's wish and, at a cost of billions, make dozens of new laws to regulate the development of AI, slowing it down with unnecessary oversight and with confusing, contradictory rules. All of this can easily be avoided by not letting emotions (fear, in this particular case) overwhelm us, and by simply acknowledging one obvious fact, the one and only AI Principle:
Artificial Intelligence is a tool, only a tool, and nothing but a tool.
Like any other tool, an artificial intelligence system (AIS) is not liable for its functionality. It is the humans behind it who are. If humans decide to use this tool, whether to act upon the information it provides or to allow it to act by itself based on its own decisions, it is they, the humans, who are responsible for the consequences. They cannot blame negative outcomes on their tools. Whether the responsibility falls on the manufacturer, the distributor, the service provider, or the end user is a (simple) legal issue, as it is with any other tool currently in use. It is not the amorphous, unidentifiable entity we call "society" that must watch over AI, but the concrete, individual humans who are behind it.
This approach applies to every possible scenario involving AI, from displaying information on a screen, to the handling of autonomous vehicles, to the policing of Robocops, to the launching of nuclear missiles. It makes no difference whether a car veered into a tree because AI mistook it for a street or because one of its wheels fell off. This natural, common-sense approach to the "ethical problem" of AI should silence all fearmongers and quell all attempts to hinder its development. As long as humans are held accountable for what they produce and use, no tool will ever rebel against its maker. This approach does not need a new vision for this new type of tool; it only needs to adapt the existing one to its particularities. Its cost is negligible, and it provides all the safety we need.
A few other issues:
- Unemployment First, this is not an AI issue, but a socio-economic one. If employers choose to replace humans with AIS, so be it. Second, as I wrote in another post, this is not even a socio-economic problem, but a socio-economic salvation. This is the centuries-old problem of mechanization, automation, robotization and now AI-ization. The increase in productivity from replacing humans with AIS will make the number of jobs lost to AI only a fraction of the number of jobs it creates.
- Explainability Traceability is a commonly accepted requirement for an AIS. It is needed as feedback, for bug fixing, improvements and legal defense. The opponents of AI claim, correctly, that as the product of a complex neural network, an AIS's decisions cannot be fully traced, and therefore cannot be fully predicted or explained. But they then conclude that this argument is enough to stop, or at least slow down, the development of AI. That is false. The black-box paradigm applies in the AI realm as it does in any other. AIS developers produce AISs, not neural nets. They are responsible for the system as such, as a whole. Just as a manufacturer of nuts and bolts is responsible for the nuts and the bolts, and does not have to find the particular atom of aluminum that failed first, AIS developers don't need to identify the precise "neuron" that misfired, or the weight that was assigned to a specific node. The impossibility of fully tracing a decision is a technical problem, not a social one. From a social perspective, failure needs to be traced only far enough to establish accountability.
- Safety "Guns don't kill people, people kill people". Bad actors will always be there, ready to use whatever is available to them to harm others. The only legitimate purpose of governments is to protect us against aggressors. Whether those aggressors use baseball bats, guns, viruses, nuclear missiles or AI is irrelevant. Governments need to adapt, find the threats and annihilate them. The developers of AIS should NOT be concerned with the misuse, or the malevolent use, of their products. More importantly, the governments should NOT force them to be. Any governmental intervention would only create obstacles and slow down the advancement, which would create a great advantage for bad actors, including nations, who have no such restrictions and scruples. The only way to stop a bad guy with a gun is a good guy with a better gun. Vladimir Putin knows this best.
So, then, what about AI Ethics? If AI is not liable for anything it does, is there such a thing as AI Ethics? Yes, there is. A huge thing. However, it concerns only developers, not "society"; it is strictly a technical issue, not a social one. Before considering Ethics, the AI community must first deal with epistemological notions such as senses, perceptions and concepts. But that's another story, for another day.