Thursday, July 6, 2023

Panne de courant

Today in Montreal there was a "generalized power outage" affecting hundreds of thousands of inhabitants. Hydro-Québec explained that a safety switch was automatically activated; the cause of the activation is being investigated.

That is what they say. Here is what actually happened. And what will happen tomorrow.
--------
It takes him about .3 seconds to figure out what the truth is, and another .2 seconds to decide to go against it. The President of Hydro-Québec knows that the other six people sitting at the table have already done the same, reached the same conclusion and taken the same decision. But the meeting must go on. They must go through the motions, dance the dance. There is time to be spent, forms to be filled, a back to watch, an ass to cover. "It's going to take about a couple of hours," he says to himself, reaching for the pot of fresh coffee. In front of each participant sit hard copies of two documents: a two-page request from the Government of Québec to have 5,000 new electric vehicle chargers installed across the province, and the 450-page feasibility study from Hydro-Québec's Technical Department. He read the study's first two paragraphs and knew right away that it's just a copy-paste of the study they produced in the winter. He never read that one, but there's no need to; he knows exactly what it says. It says that there's no fucking way in hell the grid is going to hold when the temperature goes above X or below -Y. It's only the X and the Y that the Head of the Technical Department modifies in each new revision. Does he actually calculate those figures, or does he just subtract 1 from each end every six months or so? Probably the latter; there's no reason why he wouldn't just go through the motions as well. So, most likely, he did not take into account the new requirement to have most chargers installed "particularly in disadvantaged areas". It doesn't matter, no one can blame him. After all, he's the one who's going to take the fall. He always does, and he doesn't seem to mind. Of course, it's easy for him, he's a techie, he can't be cancelled. At least not yet. For us, on the other hand, it's not that simple. We have to tread lightly so as not to upset that delicate balance between a hefty bonus and the wrath of the Head of the newly created Bureau of Social Justice. Hm...
why do I think of it as a delicate balance? This is not even a balance, let alone a delicate one! It's not like they have approximately equal weights! This is a no-brainer. I have to find another metaphor... Only twenty minutes have passed. At least another hour and a half to go. The lady presenting the PowerPoint mumbles something about racial inequalities. He reaches again for the coffee pot. Ninety... very... looong... minutes.
-------
"My fellow citizens," says the Prime Minister of Québec, "the power outage we experienced yesterday is only a warning nature gives us, a preview of what's to come. Global Warming... um... I mean, Climate Change isn't just a nuisance, it is an existential threat to mankind! Quebec is leading the global fight against it and we will not relent!" Hurrah! the little imaginary loyal subjects in His head burst into frenetic applause. "We have seen the future, and the future is electric!" Hurrah! they go again. "Quebec will invest $2.7 trillion gazillion into alternative energy and we will forbid internal combustion engine vehicles by 2030!"

"Tabarnak," say the Quebecers, "next election, I'm voting for separation."

Monday, May 22, 2023

Better decisions of an Artificial Intelligence System

Stuart Russell begins his 2017 TED Talk with "if [AIS-s] also have access to more information, they'll be able to make better decisions in the real world than we can." To a certain extent this is true: the better informed one is, the better decisions one can make. But that's not what he meant. He used the word "better" in an ethical sense, as in "more good", not an epistemological one, as in "more correct". This is confirmed later in the talk when he says "[AIS-s are] going to read everything the human race has ever written. [...] So there's a massive amount of data to learn from." Wrong! Simply providing it with more data can lead only to "more correct" identification of concrete things and facts. It cannot directly lead to "more good" decisions. For that the AIS needs an ethical standard, which it does not have. That standard must be programmed into it, by humans; it cannot be learned through observation or training alone. This is not just a theoretical consideration, it's as practical as it gets. Just look at how Russell proposes to solve the ethical issue of how to prevent the AIS from doing bad things in its endeavour to accomplish a given task. Russell's solution is what he calls the principle of humility, which is basically to confuse the AIS as to what its task actually is. This means spending millions in research on how to make the AIS understand what it needs to do, and then spending millions more to make it doubt that its understanding was correct. This approach is the result of the failure to see that the AIS's task is epistemological - it is what it is - while the bad things it might do are ethical - they do, or do not, meet the requirements of the given standard. Things are what they are regardless of how bad the consequences of correctly identifying them might be. Ethical issues cannot be solved by muddying epistemological concepts.
To "solve" ethical aspects of an AIS's decisions by declaring that what it is trying to do is not really its task, is like defending slavery by declaring that the slaves are not really human. Errare humanum est, but AI shouldn't be endowed by its creators with this excuse.

What are the "correct" epistemology and the "good" ethics? That is another story, for another day.

Tuesday, May 9, 2023

Pause Giant AI Experiments

On March 22nd, 2023, 30,000 smart people (Elon Musk and Steve Wozniak among them) signed an open letter entitled "Pause Giant AI Experiments", in which they called "on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4" because "AI systems with human-competitive intelligence can pose profound risks to society and humanity". What exactly those risks are the letter doesn't say; it just states "as shown by extensive research[1] and acknowledged by top AI labs.[2]". I analyze the [1] and [2] in my AI Principles blog; here I only look at what the letter summarizes as problems. It does so in the form of rhetorical questions which are intended to provide both the reason and the fear to justify the halt in the development of AI. Here they are, along with my answers, free of charge:

Should we let machines flood our information channels with propaganda and untruth? Yes, we should. Our information channels are already filled with propaganda and lies. More of it, faster, more convincing and better expressed, is not going to make any difference. When virtually all of the eight billion people are religious, socialist, environmentalist, flat-earther, QAnon-ist and so forth, it no longer matters whether a Chinese AI manages to convince Americans to elect as president a confused socialist instead of a narcissist nationalist. When eight billion people have rejected, to a greater or lesser extent, reason as the only tool of cognition available to them, the difference between truth and falsehood becomes irrelevant.

Should we automate away all the jobs, including the fulfilling ones? A short giggle is in order here. Fulfilling? Aren't all jobs supposed to be fulfilling? Would it be OK if only the frustrated workers became unemployed? Anyway... the answer is: Yes, we should. But, "away"? Away from what, or from whom? From the worker who is somehow entitled to it? Speaking of socialists, here they are, 30,000 of them right here, signatories of this letter. Marx would be proud! Don't these guys know that every single job out there is created by entrepreneurs? Musk and Wozniak should! And all the jobs? Didn't the power loom teach us, at the beginning of the industrial revolution, that for every job taken away (another short giggle) many more are created? A quick look at the millions of thriving employees in the automobile industry, who have replaced thousands of workers in the horse-and-buggy field, should put an end to all fears of replacement. But to acknowledge and evaluate that, reason is needed, and we know how that's working out.

Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? A resounding YES for outnumber, outsmart and obsolete. Nine billion rational minds would be a blessing for all of us. Replace? It's not clear in what respect. This is more of a fear factor than anything else. AI has no interest in replacing us. In fact, it has no interests, period. I'm ok with living in peaceful harmony with robots running all over the place doing their best to fulfill my most ridiculous whims. Aah! What about the non-peaceful, you ask? No worries! No robot manufacturer will make aggressive ones. Musk doesn't make aggressive Teslas. Why would he? Killing off your customers is not good business practice.

Should we risk loss of control of our civilization? No. But it's not like we have any control right now. Civilization means the recognition of the non-aggression principle, among individuals as well as nations. Western civilization is the closest to this ideal, but it's rapidly moving away from it. Irrationality makes sure of that. No need for AI to point the way. But AI can get us there much faster. Ready for the ride?


AI Principles

Triggered by OpenAI's ChatGPT and Microsoft's Bing chatbots, the Artificial Intelligence scare has arrived at our doorsteps. It has infected not only regular folk, but even the best minds out there, including those who are directly involved in AI's development.

What are they afraid of?

World Economic Forum says: Unemployment, Inequality, Mistakes, Racism, Security, Robot rights, Staying in control.
Wired says: Control, Loss of jobs, Bias, Explainability, Ethical decision making.
Tech Target says: Distribution of harmful content, Copyright and legal exposure, Data privacy violations, Sensitive information disclosure, Amplification of existing bias, Data provenance, Lack of explainability and interpretability.

That just about summarizes it all.

Virtually all of these worries are unfounded. And that's because they fail to take into account the human factor behind AI. This is due to the collectivist view prevalent today in the world. It is accepted, as an indisputable fact, that it is society that must act to protect itself against AI. From this false premise it follows directly that governments must act promptly in order to fulfill society's wish and, at the cost of billions, make dozens of new laws that will regulate the development of AI, slowing it down with unnecessary oversight and with confusing and contradictory rules. All of this can be easily avoided by not letting emotions (fear in this particular case) overwhelm us, and by simply acknowledging one obvious fact, the one and only AI Principle:

Artificial Intelligence is a tool, only a tool, and nothing but a tool.

Like any other tool, an artificial intelligence system (AIS) is not liable for its functionality. It is the humans behind it who are. If humans decide to use this tool, whether to act upon the information provided by it, or to allow it to act by itself based on its own decisions, it is they, the humans, who are responsible for the consequences. They cannot blame negative outcomes on their tools. Whether the responsibility falls on the manufacturer, the distributor, the service provider, or the end user is a (simple) legal issue, as it is with any other tool currently in use. It is not the amorphous, unidentifiable entity we call "society" that must watch over AI, but the concrete individual humans who are behind it.

This approach applies to every possible scenario involving AI, from displaying information on a screen, to the handling of autonomous vehicles, to the policing of Robocops and the launching of nuclear missiles. It makes no difference whether a car veered into a tree because AI mistook it for a street, or because one of its wheels fell off. This natural, common-sense approach to the "ethical problem" of AI should silence all fear-mongers and quell all attempts to hinder its development. As long as humans are held accountable for what they produce and use, no tool will ever rebel against its maker. This approach does not need a new vision for this new type of tool; it only needs to adapt the existing one to its particularities. Its cost is negligible, and it provides all the safety we need.

A few other issues:

Unemployment First, this is not an AI issue, but a socio-economic one. If employers choose to replace humans with AIS-s, so be it. Second, as I wrote in another post, this is not even a socio-economic problem, but a socio-economic salvation. This is the centuries-old problem of mechanization, automation, robotization and now AI-ization. The increase in productivity thanks to the replacement of humans with AIS-s will make the number of jobs lost to AI only a fraction of the number of jobs it will create.

Explainability Traceability is a commonly accepted requirement for an AIS. It is needed as feedback, for bug fixing, improvements and legal defense. The opponents of AI claim, correctly, that as a product of a complex neural network, traceability cannot be fully achieved. Therefore the decisions made by an AIS are impossible to fully predict or explain. But then they conclude that this argument is enough to stop, or at least slow down, the development of AI. That's false. The black-box paradigm applies in the AI realm as it does in any other. AIS developers produce AIS-s, not neural nets. They are responsible for the system as such, as a whole. Just as a manufacturer of nuts and bolts is responsible for the nuts and the bolts, and does not have to find the particular atom of aluminum that failed first, AIS developers don't need to precisely identify the "neuron" that misfired, or the weight that was assigned to a specific node. The impossibility of fully tracing a decision is a technical problem, not a social one. From a social perspective, a failure needs to be traced only far enough to establish accountability.

Safety "Guns don't kill people, people kill people". Bad actors will always be there, ready to use whatever is available to them to harm others. The only legitimate purpose of governments is to protect us against aggressors. Whether those aggressors use baseball bats, guns, viruses, nuclear missiles or AI is irrelevant. Governments need to adapt, find the threats and annihilate them. The developers of AIS should NOT be concerned with the misuse, or the malevolent use, of their products. More importantly, the governments should NOT force them to be. Any governmental intervention would only create obstacles and slow down the advancement, which would create a great advantage for bad actors, including nations, who have no such restrictions and scruples. The only way to stop a bad guy with a gun is a good guy with a better gun. Vladimir Putin knows this best.

So, then, what about AI Ethics? If AI is not liable for anything it does, is there such a thing as AI Ethics? Yes, there is. A huge thing. However, it only concerns developers, not "society"; it is strictly a technical issue, not a social one. Before considering Ethics, the AI community must first deal with epistemological notions, such as senses, perceptions and concepts. But that's another story, for another day.

Friday, April 21, 2023

Theresa Tam is at it again

The latest from the Public Health Agency of Canada:

"We heard from the experts that solutions [to a healthy environment] must first involve addressing systemic issues (i.e., capitalism, colonialism, racism), which drive common inequitable outcomes for public health and nature."

Oh, God, these experts, again... So, now, capitalism is a systemic issue? No, it's not. It's definitely not an issue, and I wish it were systemic. Capitalism in Canada has been all but obliterated, buried under a mountain of government taxes, tariffs, rules, regulations, and restrictions. Far from being systemic, whatever remains of it lies so deep under the rubble of malinvestments, subsidies, taxation schemes, and political machinations of the central planners that one has to dig to find any remnant of something that vaguely resembles capitalism. And far from being an issue, it's actually our only salvation. Those little pebbles of capitalism are the only source of profit ("social value" for socialists) that allows us all (including the socialists, unfortunately) to live and prosper.

And then there's "inequitable outcomes for public health and nature". Best case scenario, this is a typo, because if it isn't, it is pure evil any way you look at it. It could mean that nature should get the same outcomes as our public health system!?? Should we start performing surgeries on wild animals, or should we revert to healing ourselves by licking our wounds? Not clear... But it could also mean that we should strive for (violent shuddering) equitable outcomes in the health and the nature of the public!!! 😬😵‍💫

Saturday, March 4, 2023

Vivek Ramaswamy - The New Republican Candidate

Vivek Ramaswamy is a new candidate for the Republican Party nomination. These are his main views:

- Eliminate affirmative action; - Excellent! He is the guy who created an Anti-Woke / Anti-ESG mutual fund.
- Dismantle climate religion; - Excellent!
- 8-year limits for federal bureaucrats; - OK. As long as they don't hold power over us, who cares.
- Shut down worthless federal agencies; - Excellent! I hope he considers all of them worthless.
- Declare Total Independence from China; - OKish. Depends what he means by Total.
- Annihilate the drug cartels; - Very bad! It's the war on drugs itself that should be annihilated, not the cartels.
- Make political expression a civil right. - Very, very bad!!! It means it would force all (social) platforms to accept political posts of all orientations. This means violation of the platforms' freedom of expression.
- No CBDCs. - Very good! The Govt won't be able to track citizens' transactions.
- Revive merit & excellence; - OK. He should only revive freedom; merit will be revived as a natural consequence.

I couldn't find anything on his views on religion and abortion. He's Hindu and went to Catholic school, so he should be OK. In the end, he's by far the best candidate. Go Vivek!!

Thursday, March 2, 2023

So, Jully Black, "O Canada! Our home ON native land", eh?

"I sang the facts."

Guilty on both counts! First, you were supposed to sing the National Anthem of Canada, not the facts. Second, that was not singing, that was wailing. The main problem, however, is your evaluation of the facts. You claim that we (Europeans) have made our home on their (Native) land. Let's put aside the fact that the alleged dispossession occurred centuries ago and everyone should have gotten over it by now, and let's just look at the possessive pronouns - our and their. In this context only our is indeed a possessive pronoun. In the true sense of ownership, of property, the Natives never possessed the land; the land was never theirs. John Locke figured out property 300 years ago: "he that so imployed his pains about any of the spontaneous products of nature, as any way to alter them from the state which nature put them in, by placing any of his labour on them, did thereby acquire a propriety in them." Property must be gained, whether by the owner's own labor or by free trade with other owners. That's what endows humans with the right to property.

With the possible exception of the Iroquois, no native tribes on Canadian territory worked the land they inhabited. Virtually all tribes were nomadic, living off whatever the land happened to provide, such as berries and buffalo. Even the Iroquois' agriculture was primitive and limited in time - the land was abandoned after a few years, when its yield was no longer sufficient. The Natives viewed land as "... sentient. It encompasses many life forms and spaces. It holds immense energy". In fact, the idea of "owning" land is a foreign concept for Native peoples. This narrative is often employed to show that the Natives were tricked into selling their land, since they had no idea what that really meant. But it shows quite the opposite. Selling presupposes ownership, that the land belonged to them, which by their own description is not true. Therefore, the Europeans did not steal the land; they took ownership of land that belonged to no one. And they worked it into skyscrapers, and telescopes, and launch pads for space-exploring vessels.

Why is Locke's view on property the correct one, and not the one of the Natives? Because he was white and colonialist? No. It's because of man's nature as a rational being. Property is the means by which man sustains his life through long term planning. In John Galt's words, "Just as man can’t exist without his body, so no rights can exist without the right to translate one’s rights into reality—to think, to work and to keep the results—which means: the right of property." The Native view is purely mystical, with no connection to this reality, including land itself. An irrational belief in the supernatural does not endow rights in the natural world.

So, Jully Black, not only was what you did wrong, what you meant was wrong as well. I hope you didn't get paid for this gig. Moreover, I hope you get sued by ESPN and the NBA for loss of income; there must have been quite a few viewers who switched channels after your horrible performance. Next time, try to stick to the script and the notes on the sheet. Passionately howling an approximation of the original song is not interpretation, it's butchery. You wanted equal opportunity? You had it. You blew it.