Monday, May 22, 2023

Better decisions of an Artificial Intelligence System

Stuart Russell begins his 2017 TED Talk with "if [AIS-s] also have access to more information, they'll be able to make better decisions in the real world than we can." To a certain extent, this is true: the better informed one is, the better decisions one can make. But that's not what he meant. He used the word "better" in an ethical sense, as in "more good", not an epistemological one, as in "more correct". This is confirmed later in the talk when he says "[AIS-s are] going to read everything the human race has ever written. [...] So there's a massive amount of data to learn from." Wrong! Simply providing it with more data can lead only to "more correct" identification of concrete things and facts. It cannot directly lead to "more good" decisions. For that, the AIS needs an ethical standard, which it does not have. That standard must be programmed into it by humans; it cannot be learned through observation or training alone. This is not just a theoretical consideration; it's as practical as it gets. Just look at how Russell proposes to solve the ethical issue of preventing the AIS from doing bad things in its endeavour to accomplish a given task. Russell's solution is what he calls the principle of humility, which is basically to confuse the AIS as to what its task actually is. This means spending millions in research on how to make the AIS understand what it needs to do, and then spending more millions to make it doubt that its understanding was correct. This approach is the result of the failure to see that the AIS's task is epistemological - it is what it is - while the bad things it might do are ethical - they do, or do not, meet the requirements of the given standard. Things are what they are regardless of how bad the consequences of correctly identifying them might be. Ethical issues cannot be solved by muddying epistemological concepts. To "solve" the ethical aspects of an AIS's decisions by declaring that what it is trying to do is not really its task is like defending slavery by declaring that the slaves are not really human. Errare humanum est, but AI shouldn't be endowed by its creators with this excuse.

What are the "correct" epistemology and the "good" ethics? That is another story, for another day.

Tuesday, May 9, 2023

Pause Giant AI Experiments

On March 22nd, 2023, 30,000 smart people (Elon Musk and Steve Wozniak among them) signed an open letter entitled "Pause Giant AI Experiments" in which they called "on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4" because "AI systems with human-competitive intelligence can pose profound risks to society and humanity". What exactly those risks are, the letter doesn't say; it just states "as shown by extensive research[1] and acknowledged by top AI labs.[2]". I analyze the [1] and [2] in my AI Principles blog; here I only look at what the letter summarizes as problems. It does so in the form of rhetorical questions intended to provide both the reason and the fear needed to justify a halt in the development of AI. Here they are, along with my answers, free of charge:

Should we let machines flood our information channels with propaganda and untruth? Yes, we should. Our information channels are already filled with propaganda and lies. More of it, faster, more convincing and better expressed, is not going to make any difference. When virtually all of the eight billion people are religious, socialist, environmentalist, flat-earther, QAnon-ist and so forth, it no longer matters whether a Chinese AI manages to convince Americans to elect as president a confused socialist instead of a narcissistic nationalist. When eight billion people have rejected, to a smaller or larger extent, reason as the only tool of cognition available to them, the difference between truth and falsehood becomes irrelevant.

Should we automate away all the jobs, including the fulfilling ones? A short giggle is in order here. Fulfilling? Aren't all jobs supposed to be fulfilling? Would it be ok if only the frustrated workers became unemployed? Anyway..., the answer is: Yes, we should. But, "away"? Away from what, or from whom? From the worker who is somehow entitled to it? Speaking of socialists, here they are, 30,000 of them right here, signatories of this letter. Marx would be proud! Don't these guys know that every single job out there is created by entrepreneurs? Musk and Wozniak should! And all the jobs? Didn't the power loom teach us, at the beginning of the industrial revolution, that for every job taken away (another short giggle) many more jobs are created? A quick look at the millions of thriving employees in the automobile industry, who have replaced thousands of workers in the horse-and-buggy field, should put an end to all fears of replacement. But to acknowledge and evaluate that, reason is needed, and we know how that's working out.

Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? A resounding YES for outnumber, outsmart and obsolete. Nine billion rational minds would be a blessing for all of us. Replace? It's not clear in what respect. This is more of a fear factor than anything else. AI has no interest in replacing us. In fact, it has no interests, period. I'm ok with living in peaceful harmony with robots running all over the place doing their best to fulfill my most ridiculous whims. Aah! What about the non-peaceful, you ask? No worries! No robot manufacturer will make aggressive ones. Musk doesn't make aggressive Teslas. Why would he? Killing off your customers is not good business practice.

Should we risk loss of control of our civilization? No. But it's not like we have any control right now. Civilization means the recognition of the non-aggression principle, among individuals as well as nations. Western civilization is the closest to this ideal, but it's rapidly moving away from it. Irrationality makes sure of that. No need for AI to point the way. But AI can get us there much faster. Ready for the ride?


AI Principles

Triggered by OpenAI's ChatGPT and Microsoft's Bing chatbots, the Artificial Intelligence scare has arrived at our doorsteps. It has infected not only regular folk, but even the best minds out there, including those who are directly involved in AI's development.

What are they afraid of?

World Economic Forum says: Unemployment, Inequality, Mistakes, Racism, Security, Robot rights, Staying in control.
Wired says: Control, Loss of jobs, Bias, Explainability, Ethical decision making.
Tech Target says: Distribution of harmful content, Copyright and legal exposure, Data privacy violations, Sensitive information disclosure, Amplification of existing bias, Data provenance, Lack of explainability and interpretability.

That just about summarizes it all.

Virtually all of these worries are unfounded. And that's because they fail to take into account the human factor behind AI. This is due to the collectivist view prevalent today in the world. It is accepted, as an indisputable fact, that it is society that must act to protect itself against AI. From this false premise it follows directly that governments must act promptly in order to fulfill society's wish and, at the cost of billions, make dozens of new laws that will regulate the development of AI, slowing it down with unnecessary oversight and with confusing and contradictory rules. All of this can be easily avoided by not letting emotions (fear in this particular case) overwhelm us, and by simply acknowledging one obvious fact, the one and only AI Principle:

Artificial Intelligence is a tool, only a tool, and nothing but a tool.

Like any other tool, an artificial intelligence system (AIS) is not liable for its functionality. It is the humans behind it who are. If humans decide to use this tool, whether it is to act upon the information provided by it, or to allow it to act by itself based on its own decisions, it is they, the humans, who are responsible for the consequences. They cannot blame negative outcomes on their tools. Whether the responsibility falls on the manufacturer, the distributor, the service provider, or the end user is a (simple) legal issue, as it is with any other tool currently in use. It is not the amorphous, unidentifiable entity we call "society" that must watch over AI, but the concrete individual humans who are behind it.

This approach applies to every possible scenario involving AI, from displaying information on a screen, to the handling of autonomous vehicles, to the policing of Robocops and to the launching of nuclear missiles. It makes no difference whether a car veered into a tree because AI mistook it for a street, or because one of its wheels fell off. This natural, common-sense approach to the "ethical problem" of AI should silence all fear-mongers and quell all attempts to hinder its development. As long as humans are held accountable for what they produce and use, no tool will ever rebel against its maker. This approach does not need a new vision for this new type of tool; it only needs to adapt the existing one to its particularities. Its cost is negligible, and it provides all the safety we need.

A few other issues:

Unemployment First, this is not an AI issue, but a socio-economic one. If employers choose to replace humans with AIS, so be it. Second, as I wrote in another post, this is not even a socio-economic problem, but a socio-economic salvation. This is the centuries-old story of mechanization, automatization, robotization and now AI-ization. The increase in productivity thanks to the replacement of humans with AIS will make the number of jobs lost to AI only a fraction of the number of jobs that it will create.

Explainability Traceability is a commonly accepted requirement for an AIS. It is needed as feedback, for bug fixing, improvements and legal defense. The opponents of AI claim, correctly, that for decisions produced by a complex neural network, traceability cannot be fully achieved; the decisions made by AI are therefore impossible to fully predict or explain. But then they conclude that this argument is enough to stop, or at least slow down, the development of AI. That conclusion is false. The black-box paradigm applies in the AI realm as it does in any other. AIS developers produce AIS-s, not neural nets. They are responsible for the system as such, as a whole. Just as a manufacturer of nuts and bolts is responsible for the nuts and the bolts, and does not have to find the particular atom of aluminum that failed first, AIS developers don't need to precisely identify the "neuron" that misfired, or the weight that was assigned to a specific node. The impossibility of fully tracing a decision is a technical problem, not a social one. From a social perspective, a failure needs to be traced only far enough to establish accountability.
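To make the black-box point concrete, here is a minimal sketch in Python of what accountability at the system boundary could look like. All names in it (AuditedAIS, decide, audit.log, the toy model and operator) are hypothetical illustrations, not any real product or library: the wrapper records who operated which version of the system, what input it saw and what it decided, without ever looking at the weights inside.

import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(filename="audit.log", level=logging.INFO, format="%(message)s")

class AuditedAIS:
    # Wraps an opaque decision-making system and records only what accountability
    # requires: the accountable operator, the model version, the input and the decision.
    # The network's internal weights and activations remain a black box.
    def __init__(self, model, model_version, operator):
        self.model = model                  # the opaque AIS itself
        self.model_version = model_version  # which build made the decision
        self.operator = operator            # the human or organization accountable for it

    def decide(self, input_data):
        decision = self.model(input_data)   # no attempt to trace internal "neurons"
        logging.info(json.dumps({
            "decision_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "operator": self.operator,
            "model_version": self.model_version,
            "input": input_data,
            "decision": decision,
        }))
        return decision

# Usage: the audit log, not the neural net, is what establishes who decided what.
ais = AuditedAIS(model=lambda x: "brake" if x["obstacle"] else "continue",
                 model_version="v1.3", operator="Acme Autonomy Inc.")
print(ais.decide({"obstacle": True}))

The point of the sketch is that the record stops at the system boundary: it is enough to hold the operator and the developer accountable, and no deeper tracing is socially required.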

Safety "Guns don't kill people, people kill people". Bad actors will always be there, ready to use whatever is available to them to harm others. The only legitimate purpose of governments is to protect us against aggressors. Whether those aggressors use baseball bats, guns, viruses, nuclear missiles or AI is irrelevant. Governments need to adapt, find the threats and annihilate them. The developers of AIS should NOT be concerned with the misuse, or the malevolent use, of their products. More importantly, the governments should NOT force them to be. Any governmental intervention would only create obstacles and slow down the advancement, which would create a great advantage for bad actors, including nations, who have no such restrictions and scruples. The only way to stop a bad guy with a gun is a good guy with a better gun. Vladimir Putin knows this best.

So, then, what about AI Ethics? If AI is not liable for anything it does, is there such a thing as AI Ethics? Yes, there is. A huge thing. However, it concerns only developers, not "society"; it is strictly a technical issue, not a social one. Before considering Ethics, the AI community must first deal with epistemological notions, such as senses, perceptions and concepts. But that's another story, for another day.