Monday, May 22, 2023

Better decisions of an Artificial Intelligence System

Stuart Russell begins his 2017 TED Talk with "if [AIS-s] also have access to more information, they'll be able to make better decisions in the real world than we can." To a certain extent, this is true: the better informed one is, the better decisions one can make. But that's not what he meant. He used the word "better" in an ethical sense, as in "more good", not an epistemological one, as in "more correct". This is confirmed later in the talk when he says "[AIS-s are] going to read everything the human race has ever written. [...] So there's a massive amount of data to learn from." Wrong! Simply providing an AIS with more data can lead only to "more correct" identification of concrete things and facts. It cannot directly lead to "more good" decisions. For that, the AIS needs an ethical standard, which it does not have. That standard must be programmed into it by humans; it cannot be learned through observation or training alone.

This is not just a theoretical consideration; it's as practical as it gets. Just look at how Russell proposes to solve the ethical issue of preventing the AIS from doing bad things in its endeavour to accomplish a given task. Russell's solution is what he calls the principle of humility, which is basically to confuse the AIS as to what its task actually is. This means spending millions in research on how to make the AIS understand what it needs to do, and then spending more millions to make it doubt that its understanding was correct. This approach is the result of the failure to see that the AIS's task is epistemological - it is what it is - while the bad things it might do are ethical - they do, or do not, meet the requirements of the given standard. Things are what they are regardless of how bad the consequences of correctly identifying them might be. Ethical issues cannot be solved by muddying epistemological concepts.
To "solve" the ethical aspects of an AIS's decisions by declaring that what it is trying to do is not really its task is like defending slavery by declaring that the slaves are not really human. Errare humanum est, but AI shouldn't be endowed by its creators with this excuse.

What are the "correct" epistemology and the "good" ethics? That is another story, for another day.
