Tuesday, May 9, 2023

Pause Giant AI Experiments

On March 22nd, 2023, 30,000 smart people (Elon Musk and Steve Wozniak among them) signed an open letter entitled "Pause Giant AI Experiments" in which they called "on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4" because "AI systems with human-competitive intelligence can pose profound risks to society and humanity". What exactly those risks are, the letter doesn't say; it just states "as shown by extensive research[1] and acknowledged by top AI labs.[2]". I analyze the [1] and [2] in my AI Principles blog; here I only look at what the letter summarizes as problems. It does so in the form of rhetorical questions intended to provide both the reason and the fear to justify a halt in the development of AI. Here they are, along with my answers, free of charge:

Should we let machines flood our information channels with propaganda and untruth? Yes, we should. Our information channels are already filled with propaganda and lies. More of it, faster, more convincing and better expressed, is not going to make any difference. When virtually all of the eight billion people are religious, socialist, environmentalist, flat-earther, QAnon-ist and so forth, it no longer matters whether a Chinese AI manages to convince Americans to elect as president a confused socialist instead of a narcissist nationalist. When eight billion people have rejected, to a greater or lesser extent, reason as the only tool of cognition available to them, the difference between truth and falsehood becomes irrelevant.

Should we automate away all the jobs, including the fulfilling ones? A short giggle is in order here. Fulfilling? Aren't all jobs supposed to be fulfilling? Would it be OK if only the frustrated workers became unemployed? Anyway... the answer is: Yes, we should. But "away"? Away from what, or from whom? From the worker who is somehow entitled to it? Speaking of socialists, here they are, 30,000 of them right here, signatories of this letter. Marx would be proud! Don't these guys know that every single job out there is created by entrepreneurs? Musk and Wozniak should! And all the jobs? Didn't the power loom teach us, at the beginning of the Industrial Revolution, that for every job taken away (another short giggle) many more jobs are created? A quick look at the millions of thriving employees in the automobile industry, who replaced thousands of workers in the horse-and-buggy trade, should put an end to all fears of replacement. But to acknowledge and evaluate that, reason is needed, and we know how that's working out.

Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? A resounding YES to outnumber, outsmart and obsolete. Nine billion rational minds would be a blessing for all of us. Replace? It's not clear in what respect. This is more of a fear factor than anything else. AI has no interest in replacing us. In fact, it has no interests, period. I'm OK with living in peaceful harmony with robots running all over the place doing their best to fulfill my most ridiculous whims. Aah! What about the non-peaceful ones, you ask? No worries! No robot manufacturer will make aggressive ones. Musk doesn't make aggressive Teslas. Why would he? Killing off your customers is not good business practice.

Should we risk loss of control of our civilization? No. But it's not as if we have any control right now. Civilization means the recognition of the non-aggression principle, among individuals as well as nations. Western civilization is the closest to this ideal, but it's rapidly moving away from it. Irrationality makes sure of that. No need for AI to point the way. But AI can get us there much faster. Ready for the ride?


2 comments:

  1. I believe that the main concern raised is that generative AI and (soon-to-be?) general AI are not being developed within a framework that makes them "safe" for humanity. They are calling for a temporary halt on training future, more powerful systems AND for all the actors to jointly develop and implement a set of safety protocols.

    Irrespective of what the correct answer to those questions is, the authors claim: "Such decisions must not be delegated to unelected tech leaders."

    Corporate incentives (tech in this case) are not always aligned with safety goals. Corporations need to drive profits. And that's OK, but not at the expense of public safety, IMO. Take the tobacco industry's example: it misled the public for decades into thinking that smoking was safe. With AI, the risk is probably orders of magnitude greater, as it can evolve (in the wrong direction) much faster, causing a lot more harm. One could argue that in the end all of this will self-regulate. Possibly, but at what cost?

    The second concern is around the lack of understanding of how these systems work and their emergent capabilities, which could become uncontrollable. The authors are calling for transparency and alignment of these systems with human goals and values. And it seems that, so far, they're far from it. The tech is a black box even to its own creators, with emergent properties adding insult to injury if those capabilities end up altering the model's original objectives.

    Yes, lies can abound, and jobs can be lost and replaced. But losing control of a tech that we don't understand (and which is therefore risky), or allowing it to fall into the wrong hands, is something worth trying to avoid.

    The challenge is getting ALL the players involved, which is rather utopian. The very (bad) actors we most need to regulate will not be joining the effort.

    Replies
    1. It's not fair! Your comment is almost as long as my post! Now I have to think about it 😟