In this paper I analyse a selection of statements by Eliezer Yudkowsky. I quote Mr Yudkowsky and then comment on each quote.
Beliefs are elements of the process of learning: they are the assumed responses of our model of reality to situations that could have happened, but have not happened to us so far. Running away from experiences that could challenge our beliefs amounts to getting stuck in a local minimum (a different minimum for each person). From a rational agent's perspective, this is suboptimal learning. Still, we all do it; we are not perfect learners, and there is a trade-off between what can be assumed (and used for further learning) and what should be tested.
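The trade-off between acting on assumptions and testing them is the classic exploration-exploitation dilemma. A minimal sketch, using an epsilon-greedy agent on a two-armed Bernoulli bandit (the arm payout probabilities 0.3 and 0.7 are invented for illustration): with probability epsilon the agent challenges its current beliefs by testing a random arm, otherwise it exploits its current best estimate.

```python
import random

def epsilon_greedy_bandit(true_means, steps=10_000, epsilon=0.1, seed=0):
    """Epsilon-greedy on a Bernoulli bandit: with probability epsilon,
    test a random arm (challenge current beliefs); otherwise act on the
    current best estimate (assume and exploit)."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))  # explore
        else:
            arm = max(range(len(true_means)), key=estimates.__getitem__)  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # Incremental running mean: beliefs are revised, not overwritten.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

print(epsilon_greedy_bandit([0.3, 0.7]))
```

With epsilon set to 0, the agent can lock onto a suboptimal arm forever, which is the "stuck in a local minimum" failure mode described above.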
As for beliefs: when we throw an apple into the sky, it will fall back down at some point in spacetime. That is a belief. Still, not all beliefs are like that. This one has plenty of evidence: it has happened to many of us and agrees with our quantitative model of reality. Such beliefs (falsifiable and well tested, hence well evidenced) build our science. Unfalsifiable beliefs, on the other hand, reach beyond science and therefore cannot be considered within it. Nevertheless, we need them all.
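The "quantitative model" behind the falling-apple belief is just constant-acceleration kinematics, which is what makes the belief testable in the first place. A minimal sketch (ignoring air resistance; the 2 m drop height is an arbitrary example):

```python
import math

G = 9.81  # standard gravitational acceleration, m/s^2

def fall_time(height_m: float) -> float:
    """Time for an object to fall height_m metres from rest: t = sqrt(2h/g)."""
    return math.sqrt(2 * height_m / G)

def fall_distance(t_s: float) -> float:
    """Distance fallen from rest after t_s seconds: d = g * t^2 / 2."""
    return 0.5 * G * t_s ** 2

# An apple dropped from 2 m reaches the ground in roughly 0.64 s;
# anyone can time it and compare, which is what makes this a
# falsifiable, testable belief.
print(round(fall_time(2.0), 2))
```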
Confessing fallibility is not very informative: we are all limited, and none of us has ultimate knowledge. The argument built on the three concepts of humility, modesty and boasting boils down to this: doing nothing while acknowledging one's weakness would be surrendering to one's limitations (acting against one's own goal function?).
In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that.
The universe as we see it may seem common-sensical, because we have managed to describe it. Still, the quantum world is totally different, at least from our perspective. Also, the study of AGI is essentially an attempt to solve intelligence, which aims at understanding the universe; the universe, in turn, can be represented as a (quantum) algorithm acting on (quantum) data with its initial (quantum) conditions. The word "quantum" here refers to the tiniest-scale interactions we can model.
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
The AI does not know emotions or ethics. From our current perspective, it is data-driven and, as such, emotions are functions of data (in the long run, a data-driven tool will learn better what data actually is).
The question of how an AI might use humans cannot currently be answered by humans; it depends on how the AI will perceive us. Not only are humans made of atoms, they are the product of life that has evolved for billions of years. Still, humans cannot see the big picture and do not realise their relation to the machines. Treating humans as "meat machines" is inspiring up to a point, but also quite lossy, i.e. it discards valuable information about the biology behind current life (including humans).
Politics is an extension of war by other means. Arguments are soldiers. (from “Politics is The Mind-Killer”)
I agree that arguments are soldiers and that certain minds do not want to get involved in the geo- and religion-dependent resource-allocation algorithms, but that does not make those algorithms unnecessary. If politics still exists but is not for you, then it is only the conditions in which you were born that kill your mind. If politics still exists and you are involved in it, then you need it (given your situation).
Ever since I adopted the rule of “That which can be destroyed by the truth should be,” I’ve also come to realize “That which the truth nourishes should thrive.” When something good happens, I am happy, and there is no confusion in my mind about whether it is rational for me to be happy. When something terrible happens, I do not flee my sadness by searching for fake consolations and false silver linings. I visualize the past and future of humankind, the tens of billions of deaths over our history, the misery and fear, the search for answers, the trembling hands reaching upward out of so much blood, what we could become someday when we make the stars our cities, all that darkness and all that light — I know that I can never truly understand it, and I haven’t the words to say.
The concept of happiness is quite ill-defined, and we should try to understand what it means. When "something good happens", it only means that we think it is good for us from our very limited perspective. We would know better what is good for us if we could see a bigger picture.
Between hindsight bias, fake causality, positive bias, anchoring/priming, et cetera et cetera, and above all the dreaded confirmation bias, once an idea gets into your head, it’s probably going to stay there.
When it gets there, it changes the structure of the human brain: it is stored in a fuzzy manner, which allows our elastic brains to re-learn things. Still, objects have their domains of applicability: there is no place for exploration only (with no exploitation), and no place for exploitation only (with no exploration).
Mystery exists in the mind, not in reality. If I am ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon itself. All the more so, if it seems like no possible answer can exist: Confusion exists in the map, not in the territory. Unanswerable questions do not mark places where magic enters the universe. They mark places where your mind runs skew to reality.
As far as we know, our brain is our ultimate data processor. All that we perceive is a function (a pair of glasses) that it has created for us.
If you want to build a recursively self-improving AI, have it go through a billion sequential self-modifications, become vastly smarter than you, and not die, you’ve got to work to a pretty precise standard.
Whether there exists a standard that guarantees that whatever is built can be controlled by humans depends on how machines that have mastered unsupervised learning will react to being tagged.
And someday when the descendants of humanity have spread from star to star, they won’t tell the children about the history of Ancient Earth until they’re old enough to bear it; and when they learn they’ll weep to hear that such a thing as Death had ever once existed!
Check out my video on keeping a human brain alive outside of a human body — https://www.youtube.com/watch?v=naGb0bctHiA
When you are older, you will learn that the first and foremost thing which any ordinary person does is nothing.
Whenever I look at my dog, I see him running, eating or sleeping. But sometimes he smells and hears things I don't smell or hear. He exists, but he would not be able to build himself from scratch. He has his own version of imagination. Humans are very similar: they, too, have imagination, but they spend most of their time exploring their realities. They are not meant to explore more. Now it is time for better explorers.
Nothing you’ll read as breaking news will ever hold a candle to the sheer beauty of settled science. Textbook science has carefully phrased explanations for new students, math derived step by step, plenty of experiments as illustration, and test problems.
The goal of the news, i.e. popular science, is to briefly inform those who are not (at that point) ready to read deeper explanations, mostly because they care about different things and want only cursory information.
The purest case of an intelligence explosion would be an Artificial Intelligence rewriting its own source code. The key idea is that if you can improve intelligence even a little, the process accelerates. It’s a tipping point. Like trying to balance a pen on one end — as soon as it tilts even a little, it quickly falls the rest of the way.
The decision to rewrite one's own mind comes when we acknowledge our limitations. It can only happen if we look around: we need data to challenge our current best strategy. Unsupervised rewriting of the self is going to be less focused on self-creation and more on modelling connections, a.k.a. teamwork.
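The tipping-point dynamic in the quote can be caricatured as a multiplicative feedback loop. This is a toy sketch, not a model of any real AI system: each round, the system's capability is multiplied by a fixed gain, and the whole qualitative behaviour hinges on whether that gain is above or below 1.

```python
def self_improvement(gain, steps=50, capability=1.0):
    """Iterate capability *= gain for a fixed number of rounds.
    gain > 1: the process runs away (the pen falls over);
    gain < 1: each improvement yields less, and the process fizzles out."""
    trajectory = [capability]
    for _ in range(steps):
        capability *= gain
        trajectory.append(capability)
    return trajectory

print(self_improvement(1.1)[-1])  # explosive growth
print(self_improvement(0.9)[-1])  # decay toward zero
```

The "balance a pen on one end" image corresponds to the knife-edge at gain = 1: any tilt to either side is amplified by the loop itself.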
I wouldn’t be surprised if tomorrow was the Final Dawn, the last sunrise before the Earth and Sun are reshaped into computing elements.
The concept of a computing element here probably reduces to our current understanding of computing. We, however, can already be represented as learning (and thus computing) machines of a sort. A 24-hour reshaping of, say, the planet's geometry is very unlikely; we do have science to predict some of that. Surprises challenge models, but any decision that beats sheer guessing is science-based.
By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.
The people responsible for AI are aware of its limitations and of our lack of knowledge about how brains work (not to mention quantum physics). Still, assuming that unsupervised learning can be kept closed in a sandbox is risky.
Intelligence is the source of technology. If we can use technology to improve intelligence, that closes the loop and potentially creates a positive feedback cycle.
We currently need to separate technology from intelligence, but at some point everything will be data-driven. At that point we will be learning what data really is.
The purpose of a moral philosophy is not to look delightfully strange and counterintuitive or to provide employment to bioethicists. The purpose is to guide our choices toward life, health, beauty, happiness, fun, laughter, challenge, and learning.
Philosophy is our way of fuzzily re-learning what is missing in mathematics, which is the lowest-abstraction layer of science (even though it is the lowest layer, it is not well enough connected with, say, chemistry or biology). Morality is one of the concepts introduced by philosophy.
A burning itch to know is higher than a solemn vow to pursue truth. To feel the burning itch of curiosity requires both that you be ignorant, and that you desire to relinquish your ignorance.
Imagine a set of communicating vessels partially filled with water: the flow of water is curiosity; the lack of flow is a kind of refusal to play the game.
If you want to maximize your expected utility, you try to save the world and the future of intergalactic civilization instead of donating your money to the society for curing rare diseases and cute puppies.
There is a certain distribution of intelligence carriers, each with its own domain of applicability. They all have their own expected-utility functions; in other words, we have different roles. For some, it is definitely the case that they must think extremely long-term. It has always been so.
Our reward system is still in an early phase and suffers from a cold start. The world is not connected well enough, so the system most overvalues the relatively immediate value deliverers (tool builders): by giving away tools, they let everyone grow faster (synergy) and are therefore easier to spot. Very long-term thinking inspires only a few leaders. Nevertheless, what we now consider long-term will be extremely short-term in the near future.
When something is universal enough in our everyday lives, we take it for granted to the point of forgetting it exists.
Our science is reductionist, and so is our perception. Our perception is a very lossy function not only of reality, but also of what we can observe. Taking things for granted is exploiting them; looking around and re-learning them is exploring.
Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can.
It should be borne in mind that (in our case) it is more effective to treat truth as a multidimensional concept; see Niels Bohr's notion of a "deep truth". We do not know the ultimate truths and must always re-learn them. Again: exploration vs exploitation.
The human species was not born into a market economy. Bees won’t sell you honey if you offer them an electronic funds transfer. The human species imagined money into existence, and it exists — for us, not mice or wasps — because we go on believing in it.
Money, as a means of making the exchange of goods more efficient, is used for more effective (hierarchical) management of life (optimising a certain goal function). At some point we will be ready to increase the efficiency of exchange further: there will be no shortage of resources any more, and the world will be better connected.