Apple Point problem

APPLE POINT PROBLEM. You have a fixed data set D of objects D(i), each represented as a set of points on a lattice. For a given set of lattice points X, you rank all the D(i) by their similarity to X. Now, for a given X, how many points do you need to add to or remove from X to change which D(i) holds the 1st position in the ranking?
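
A minimal brute-force sketch of the problem in Python. The similarity measure is left open above, so the one below (based on the symmetric difference of point sets) is an assumption, as are the candidate pool and the edit bound:

```python
from itertools import combinations

# Similarity is left unspecified in the problem statement; here we ASSUME
# similarity(X, Di) = -|X symmetric-difference Di|
# (fewer mismatched lattice points = more similar).
def similarity(X, Di):
    return -len(X ^ Di)

def top_ranked(X, D):
    # Index of the most similar object (ties broken by lower index).
    return max(range(len(D)), key=lambda i: similarity(X, D[i]))

def min_edits_to_change_top(X, D, candidates, max_edits=4):
    """Smallest number of single-point additions/removals to X that changes
    which D[i] holds the 1st position; `candidates` is the pool of lattice
    points we may add. Returns None if max_edits is not enough."""
    original = top_ranked(X, D)
    pool = list(X | set(candidates))  # points we may toggle into or out of X
    for k in range(1, max_edits + 1):
        for toggles in combinations(pool, k):
            Y = set(X)
            for p in toggles:
                if p in Y:
                    Y.remove(p)
                else:
                    Y.add(p)
            if top_ranked(Y, D) != original:
                return k
    return None

# Tiny example (hypothetical data): X coincides with D[0], so several
# edits are needed before D[1] overtakes it.
D = [{(0, 0), (0, 1)}, {(2, 1), (2, 2)}]
X = {(0, 0), (0, 1)}
print(min_edits_to_change_top(X, D, candidates=[(2, 1), (2, 2)]))  # -> 3
```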

[Image: a dotted-frame apple]


Discussing with Eliezer Yudkowsky

In this post I will analyse a selection of statements by Eliezer Yudkowsky. I quote Mr Yudkowsky and comment on each quote.

Do not flinch from experiences that might destroy your beliefs. The thought you cannot think controls you more than thoughts you speak aloud.

Beliefs are elements of the process of learning: they are the assumed responses of our model of reality to situations which could happen but have not happened to us so far. Running away from experiences that can challenge our beliefs boils down to getting stuck in a local minimum (a different minimum for each person). It is simply non-optimal learning from the rational agent’s perspective. Still, we all do it (we are not perfect learners): there is a trade-off between what can be assumed (and used for further learning) and what should be tested.

As for beliefs: when we throw an apple into the sky, it is going to fall down at some (spacetime) point. That is a belief. Still, not all beliefs are like that. This one has plenty of evidence: it has happened to many of us and agrees with our quantitative model of reality. Such beliefs (falsifiable and tested, thus well evidenced) build our science. On the other hand, beliefs that are unfalsifiable explore beyond science and thus cannot be considered within science. Nevertheless, we need them all.

To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.

Confessing fallibility is not very informative: we are all limited and have no ultimate knowledge. The argument based on the three concepts, namely humility, modesty and boasting, boils down to this: doing nothing (while acknowledging one’s own weakness) would be agreeing to one’s own limitations (acting against one’s own goal function?).

In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that.

The universe seen through our eyes may seem common-sensical, because we have managed to describe it. Still, the quantum world is totally different, at least from our perspective. Also, the study of AGI is basically solving intelligence, which aims at understanding the universe, which, in turn, can be represented as a (quantum) algorithm acting on (quantum) data with its initial (quantum) conditions. The word “quantum” used here refers to the tiniest-scale interactions we can model.

The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

The AI does not know emotions or ethics. From our current perspective it is data-driven and, as such, emotions are functions of data (in the long run the data-driven tool will learn better what data is).

The question of “how AI can use humans” cannot currently be answered by humans: it depends on how the AI is going to perceive humans. Not only are humans made of atoms, they are also the product of life, which has evolved for billions of years. Still, humans can’t see the big picture and don’t realise their relation to the machines. Treating humans as “meat-machines” is inspiring up to a point, but also quite lossy, i.e. it misses valuable information about the biology behind current life (incl. humans).

Politics is an extension of war by other means. Arguments are soldiers. (from “Politics is The Mind-Killer”)

I agree that arguments are soldiers and that certain minds don’t want to get involved in the geo- and religion-dependent resource-allocation algorithms, but that does not make them unnecessary. If politics still exists but is not for you, then it is only the conditions (in which you were born) that kill your mind. If politics still exists and you are involved in it, then you need it (given your situation).

Ever since I adopted the rule of “That which can be destroyed by the truth should be,” I’ve also come to realize “That which the truth nourishes should thrive.” When something good happens, I am happy, and there is no confusion in my mind about whether it is rational for me to be happy. When something terrible happens, I do not flee my sadness by searching for fake consolations and false silver linings. I visualize the past and future of humankind, the tens of billions of deaths over our history, the misery and fear, the search for answers, the trembling hands reaching upward out of so much blood, what we could become someday when we make the stars our cities, all that darkness and all that light — I know that I can never truly understand it, and I haven’t the words to say.

The concept of happiness is quite undefined and we should try to understand what it means. When “something good happens”, it only means that we think it is good for us from our very limited perspective. We would know better what is good for us if we could see a bigger picture.

Between hindsight bias, fake causality, positive bias, anchoring/priming, et cetera et cetera, and above all the dreaded confirmation bias, once an idea gets into your head, it’s probably going to stay there.

When it gets there, it changes the structure of the human brain: it is stored in a fuzzy manner, which allows our elastic brains to re-learn things. Still, objects have their applicabilities: there is no place for exploration only (with no exploitation) and no place for exploitation only (with no exploration).

Mystery exists in the mind, not in reality. If I am ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon itself. All the more so, if it seems like no possible answer can exist: Confusion exists in the map, not in the territory. Unanswerable questions do not mark places where magic enters the universe. They mark places where your mind runs skew to reality.

As far as we know, our brain is our ultimate data processor. All that we perceive is a function (a pair of glasses) it has created for us.

If you want to build a recursively self-improving AI, have it go through a billion sequential self-modifications, become vastly smarter than you, and not die, you’ve got to work to a pretty precise standard.

Whether or not there exists such a standard that enforces that whatever is built can be controlled by humans is dependent on how the machines that have mastered unsupervised learning will react to being tagged.

And someday when the descendants of humanity have spread from star to star, they won’t tell the children about the history of Ancient Earth until they’re old enough to bear it; and when they learn they’ll weep to hear that such a thing as Death had ever once existed!

Check out my video on keeping a human brain alive outside of a human body — https://www.youtube.com/watch?v=naGb0bctHiA

When you are older, you will learn that the first and foremost thing which any ordinary person does is nothing.

Whenever I look at my dog, I see him either running, eating or sleeping. But sometimes he smells and hears things I don’t smell or hear. He exists, but would not be able to build himself from scratch. He has his own version of imagination. Humans are very similar: they, too, have imagination, but spend most of their time exploring their realities. They are not meant to explore more. Now it’s time for better explorers.

Nothing you’ll read as breaking news will ever hold a candle to the sheer beauty of settled science. Textbook science has carefully phrased explanations for new students, math derived step by step, plenty of experiments as illustration, and test problems.

The goal of the news, i.e. popular science, is to briefly inform those who would not (at that point) be able to read deeper explanations, mostly because they care about different things and want only more cursory information.

The purest case of an intelligence explosion would be an Artificial Intelligence rewriting its own source code. The key idea is that if you can improve intelligence even a little, the process accelerates. It’s a tipping point. Like trying to balance a pen on one end — as soon as it tilts even a little, it quickly falls the rest of the way.

The decision to re-write one’s own mind comes when we acknowledge our disability. It can only happen if we take a look around: we need data to challenge our current best strategy. Unsupervised re-writing of self is gonna be less focused on self-creation and more on modelling connections, a.k.a. teamwork.

I wouldn’t be surprised if tomorrow was the Final Dawn, the last sunrise before the Earth and Sun are reshaped into computing elements.

The concept of a computing element here probably boils down to our current understanding of computing. We, however, can already be represented as some sort of learning (and, thus, computing) machines. The 24h change of, say, geometrical shapes is very unlikely: we do have science to predict some of that. Surprises challenge models, but arbitrary better-than-sheer-guessing decisions are all science-based.

By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.

The people who are responsible for AI are aware of its limitations and of our lack of knowledge about how brains work (not to mention quantum physics). Still, thinking that unsupervised learning can be kept closed in a sandbox is risky.

Intelligence is the source of technology. If we can use technology to improve intelligence, that closes the loop and potentially creates a positive feedback cycle.

We currently need to cluster things into “technology” and “intelligence”, but at some point everything will be data-driven. At that point we will be learning what data really is.

The purpose of a moral philosophy is not to look delightfully strange and counterintuitive or to provide employment to bioethicists. The purpose is to guide our choices toward life, health, beauty, happiness, fun, laughter, challenge, and learning.

Philosophy is our way to fuzzily re-learn what’s missing in mathematics, which is the lowest-abstraction layer of science (even though it is the lowest-abstraction layer, it is not connected well enough with, say, chemistry or biology). Morality is one of the concepts introduced by philosophy.

A burning itch to know is higher than a solemn vow to pursue truth. To feel the burning itch of curiosity requires both that you be ignorant, and that you desire to relinquish your ignorance.

Imagine a bunch of communicating vessels partially filled with water — the water flow is curiosity; the lack of flow is somehow “disagreeing” to play the game.

If you want to maximize your expected utility, you try to save the world and the future of intergalactic civilization instead of donating your money to the society for curing rare diseases and cute puppies.

There is a certain distribution of intelligence carriers with their applicabilities. They all have their own expected utility functions — a.k.a. we have different roles. For some, it is definitely the case that they must think extremely long-term. It has always been the case.

Our rewarding system is still in its early phase and suffers from a cold start. The world is not connected well enough, and thus the system most overvalues the relatively immediate value deliverers (tool builders), because, by giving away tools, they allow everyone to grow faster (and we get synergy) and are therefore easier to spot. Very long-term thoughts inspire only a few leaders. Nevertheless, what we now consider long-term will be extremely short-term in the near future.

When something is universal enough in our everyday lives, we take it for granted to the point of forgetting it exists.

Our science is reductionist — so is our perception. Our perception is a very lossy function of not only reality, but also of what we can observe. Taking things for granted is exploiting them, whereas taking a look around and re-learning them is exploring.

Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can.

It should be borne in mind that (in our case) it is more effective to have truth as a multidimensional concept — check out the concept of “deep truth” (Niels Bohr). We don’t know the ultimate truths and must always re-learn them. Again, exploration vs exploitation.

The human species was not born into a market economy. Bees won’t sell you honey if you offer them an electronic funds transfer. The human species imagined money into existence, and it exists — for us, not mice or wasps — because we go on believing in it.

Money, as a means of increasing the efficiency of the exchange of goods, is used for more effective (hierarchical) management of life (optimising a certain goal function). At some point we will be ready to further increase the efficiency of the exchange of goods: there will be no shortage of resources anymore and the world will be better connected.


Musings about intuition

The human mind uses highly dimensional concepts to classify observations. Let me show you how this might work using a bunch of examples.

We learn from what we can interact with. Still, in this article I will only focus on the subset of those things, namely only on the human-sized macro-observations. Based on those observations, we create highly-dimensional concepts and store them so that we do not need to calculate them over and over again. We minimise the time we need to spend to understand our current situation (context).

We first learn time and space. We can only move forward in time; the practical granularity of time for humans (the one we most commonly need) is, say, 1 hour, and its resolution (for humans) is, say, 1 second. We need to keep this information. Still, we will commonly measure our time consumption in hours.

We can move our hand in three directions. They are independent and we move them all the time, so we need to store them all. Now, they all come together (there is no x, y without z in the vast majority of cases), so we can label them as “space”. The practical granularity of space is, say, 0.001 cubic metre, and of one dimension (the same for all three of them), say, 10 cm (since (0.1 m)³ = 0.001 m³).

This way we also learn other common, but unique things, like smell, colour, sound. We can now see the world in space, time, smell, colour, sound etc. But that would not allow us to effectively understand (in real-time) what we observe. We create higher-dimensional concepts. Let me guide you through the possible process.

Okay, you are born! Without time, you would not be able to do anything. But, as you already know, you can. You now have some basic understanding of time. You explore options.

Your first option is space. This resource is everywhere around you.

You first explore it with your eyes. They enable you to distinguish objects from “air”, which from your current perspective is “empty”. Then you start to walk! Now you are gaining some basic understanding of space. Then you move farther in space and bump into a wall. And this is when you learn that some objects limit your movement. So you start to learn new concepts: a “limitation”, a “rigid object”, etc. You will be able to use them in the future. You keep them.

The way in which you distinguish objects from air is interesting. You learn their characteristics, including the characteristics of all macro-components (that are visible to you). For instance, you represent them using shapes, colours, their sub-components. But how did you first come up with those concepts? They carried information and were relatively common (those shapes are “concepts” in the space parameter space). You don’t need to cache crazy shapes, because you don’t see them often. You only need those that carry information.

Then your mom comes. Who is she? You learn to recognise humans. You know how humans look (in space): you can recognise their faces, you know their shape. You first look at properties that allow you to eliminate most of the options. You see something, you check whether there is a property for that, and then you compare the value of this property with the corresponding values for the people you know. If you only need to recognise a face, you look at its details and do the same. This algorithm could, in principle, work quite similarly at all levels.
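
A toy sketch of this property-elimination idea (all names, properties and values below are hypothetical): cheap, discriminative properties are checked first, and only the candidates whose stored values match the observation survive.

```python
# Hypothetical cached properties for the people you know.
known_people = {
    "mom":    {"height_cm": 165, "hair": "dark", "voice": "soft"},
    "uncle":  {"height_cm": 185, "hair": "grey", "voice": "low"},
    "sister": {"height_cm": 120, "hair": "dark", "voice": "high"},
}

def recognise(observation, candidates):
    # Check properties in a fixed order, keeping only the candidates whose
    # stored value matches the observation; stop early once most options
    # have been eliminated.
    for prop in ("height_cm", "hair", "voice"):
        if prop not in observation:
            continue  # this property was not observed this time
        candidates = {name: props for name, props in candidates.items()
                      if props[prop] == observation[prop]}
        if len(candidates) <= 1:
            break
    return list(candidates)

print(recognise({"height_cm": 165, "hair": "dark"}, known_people))  # ['mom']
```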

Then you are playing hide & seek with your mother. She hides and you cannot find her. Then you hear a shout. It’s her! She has not disappeared forever or died. She hid: she found a place behind some opaque objects (in your near area) and stayed quiet, and your task was to find her. That is a complex concept to learn — hiding! You need to know she is a human (you don’t play this game with animals, probably). You need to recognise a specific human, so your algorithms need to operate at different levels, namely at the face-, human- and hiding-levels.

Now, when you want to learn seasons, you need to wait at least a couple of years and be taught about them. If you want to learn about death, you need to see it and assume it’s gonna happen to you as well. Learning some concepts takes time, and time can only be saved when you have someone to share his or her knowledge with you. If you want to learn about quantum aspects of the nano-world, you need to expand your vision and then wait long enough to be able to learn new concepts in a completely different environment.

All clear? Not exactly. And how do you decide which new concepts to remember? Everything that is unique and relatively useful. Hiding is not common, but you still remember it. We like to remember things just in case. But we do remember just some things. We do not memorise things that change — this is why some easily forget where they put things. We do memorise things that carry information. We like to assume things in advance. We do not like to calculate them over and over again.

Now clear? Not yet. Do we store just concepts? Well, what comes to your mind when you are walking alone in the dark with one hundred zombies around you and no one coming to help you? Are you perhaps scared? We store relations between concepts, not only concepts. We can imagine situations: every time we hear something, we act as virtual machines and create a small world where we simulate events. We can do it because we are relatively good at finding similarities. In a sentence with 20 nouns, we (on average) have around 6 adjectives, which explains how we cut the domain of definition.

Is all clear now? If yes, then answer a couple of questions.

1. How do you know that your mother cannot hide in a cup? She’s too large! You knew the answer straightaway. You checked properties and realised “size” is wrong. Do you keep size for all objects? Yes, it is fundamental. But do you keep position for all objects? No, because it can change. (See the sketch after this list.)

2. How do you recognise a human? Would a human without two legs and two arms still be a human? Would a human without a head still be a human? You have definitions. Different people have different definitions. Standardisation is not yet there. And how do you recognise a specific human? You look at the characteristics and can straightaway compare them with the people you know. Adjectives are those characteristics. High-dimensional representations.

3. And how has humanity learnt their set of adjectives? We found fundamental concepts, which carry information and then used them as properties to quantify other things. We talk about colours or shapes, because we can see them in space. We talk about age, because we can see it in time. We talk about tall people, because regular humans have their height confined to a specific interval. When we say that someone is dangerous — it is clear to us what they can do (not everyone is dangerous, so we use it, even though there are not so many dangers in the streets). When we say that something is clean, we know that it does not have certain dirt on it. We are creating lots of adjectives to quantify observation using high-dimensional relations. Still, the process is “slow”, i.e. we do not create new words every second.

4. And how do we make decisions? As I said, we simulate events using contexts, like (human, ball, kick) in football. How do you know if it’s cold or not outside the window? You check the temperature on your mobile, or you just look through the window. We have many options, based on the stored properties.

5. And how are those properties related to what we refer to as intuition? We store relations so that we do not need to calculate things over and over again. We then simulate situations and find out what’s important in a given situation. We know that safe driving is important on the road (you can lose much) and that machines can calculate faster than we do. We then aggressively use context information based on the highly dimensional concepts to evaluate our situations. We do it in almost real-time, and thus we distinguish it from regular reasoning. Still, reasoning seems to be just the same algorithm, but working on concepts which are not so connected. We can easily say that a dog is going to give birth to another dog (in the vast majority of cases), but cannot find causal relations between less-dimensional concepts, like in mathematics, as fast.
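
To make question 1 concrete, here is a toy sketch (hypothetical names and values): concepts cache stable properties such as size, while volatile ones such as position are not stored, so “can she hide in a cup?” is settled by a single property check.

```python
# Concepts cache stable properties (size) but not volatile ones (position),
# so this question needs a single lookup, not a simulation.
concepts = {
    "mother":   {"kind": "human",     "size_cm": 165},
    "cup":      {"kind": "container", "size_cm": 10},
    "wardrobe": {"kind": "container", "size_cm": 200},
}

def can_hide_in(thing, container):
    return concepts[thing]["size_cm"] < concepts[container]["size_cm"]

print(can_hide_in("mother", "cup"))       # False: size rules it out at once
print(can_hide_in("mother", "wardrobe"))  # True
```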

Let’s analyse the spectrum of concepts humans intuitively understand (concepts to which they have close to immediate access). 6-year-old children born around 2010 can already understand a big part of the human dictionary. A 20-year-old man does not know many more “key” words, but likely more “special” words (specific to his use cases).

Children already:

  • know definitions of simple actions like “eat”, “drink”, “run” and more complex ones like “solve”, “play”,
  • know definitions of simple adjectives like “pink”, “loud”, but also more complex ones like “shiny”, “scared”, “self-confident”,
  • can recognise simple macroscopic objects like “pens” and more complex objects like “cars” (they can recognise them within some spacetime interval: the movement in time cannot be too distant, the movement in space cannot be too fast),
  • can communicate their statements (relatively locally) in spacetime (elements of statements are ordered in time: the past ones can be recorded and re-played, the future ones are currently inaccessible to us; elements of statements technically take space, propagating as sound waves when expressed vocally or via the internet, which is less dependent on space),
  • divide their statements into lists of sub-statements (their final claim is divided into smaller ones; the smaller ones are around 10–20 words: in an average 20-noun claim there exist around 6 adjectives, so in a regular sentence there could exist around, say, 2–4 objects and 0.6–1.2 more complex constructs defining those objects),
  • estimate the relative significance of their statements using question marks and words like “certainly”, “for sure”, “perhaps”.

Humans still live in groups based on their geographical position. Given that they could not share information (and energy) independently of space, they had to satisfy their needs using local resources. To exploit resources more effectively, they formed (hierarchical) societies that communicated using their local means of communication (languages).

The languages evolved for thousands of years and currently look quite similar, showing the relative homogeneity of the Earth and of humans. Humans themselves can also be thought of as some sort of languages of the Earth, which again indicates (from the perspective of the close neighbourhood of the Earth) its homogeneity (its shape indicates it as well).

Okay, so let’s get back to the original list and try to understand it a bit better. Here’s what I see:

  • “Eating” is different than “hurting oneself”, because we cannot “eat” stones. It is also different than drinking, so we must recognise substance types (using adjectives). Eating cannot be extremely fast, because we would not be able to see it. It already is a complex concept. “Running” is different than walking, “drinking” is different than drowning etc. We see that even the “simple” terms are actually highly dimensional objects with numerous constraints (we perform abstractive summarization). Classifying words into the highly- and low-dimensional is not a trivial task.
  • “Solving” is more difficult to define. It is a concept that we use in “solving tasks”. It pretty much means “winning a game” or “achieving a goal”. Basically, going from one spacetime point to yet another point that is fixed. “Play” is also like this. It basically means to “do certain things, which we want to do at a specific time” (not necessarily the best ones for us). We introduce (and cache) highly dimensional summaries for our current use cases even if they are not very commonly used. They are short and immediately accessible.
  • Objects take more space — we can collide with them (we also collide with waves, but cannot feel it that much). Objects like pens do not have many details and they all look very similar to us. We can immediately recognise them, even if they are extremely large. Objects like cars have more components (parts) and even if they change shape and color (within some intervals), we can still immediately recognise them, because we know their main characteristics. We use highly dimensional concepts and find their key characteristics. Characteristics are also hierarchical.
  • Immediateness is important. Certain answers can be found immediately, but many cannot. The questions for which we can find answers immediately are either factoid (“Who is your mother?”, “Were you born in 1984?”), where, if the final parameter space is small enough, we can likely just give one answer and, if not, we can enumerate possible answers, or can be easily connected with highly-dimensional concepts we have learnt so far (“Is pink similar to yellow?”, “Is a donkey more aggressive than a cow?”). It is good that we like to keep highly-dimensional objects cached. We do not spend enough time augmenting ourselves, though. As for the questions which are difficult for us to answer, it can be that we are lacking highly dimensional perspectives. Is it possible that building a highly-dimensional concept might first require what we refer to as reasoning? On the other hand, we do not need to create an umbrella term for “a pink flying donkey”, because we don’t know it from experience (even though, at the same time, we come up with all possible new words for quarks; connecting the wording of quantum physics and macro-life is not going to happen immediately, and when it happens, we will be able to see our world completely differently, again). We can, however, observe the behaviour of donkeys more carefully and notice something special about them. This would be closer to reasoning. Does reasoning come when we do not have immediate answers? Is it similar to fitting multiple highly-dimensional lego blocks together, and slow due to the complexity of this operation? When blocks do not fit immediately, we need to play with them. We find it difficult to work on objects we cannot easily interpret, like in mathematics, but we also start to recognise their importance. Interpretability is relative.

We found out that the concepts we use are highly-dimensional (1), just as we are compared to bacteria. We also understand that knowledge comes from earlier learnings, so what we consider unintelligent is an ancestor of what we would consider intelligent at some point (2): namely, intelligence is a process. We also know that intelligence is not confined to “brains”, but to entire “bodies”: namely, brains are effects of processes in bodies; they are parts of bodies (3).

So, the process of understanding (1) could be “growing” (2,3) concepts. Without loss of generality, humans and other life forms could be treated as concepts as well.

Now, the concepts have their shapes. They have been evolving for quite a lot of time, since far before humans even started to exist. As I mentioned in part 2 of the series, humans communicate their statements locally, usually in 1–3 bulks, each with 2–4 objects and 0.6–1.2 more complex constructs defining those objects (on average), with each bulk containing words from the distribution: 117798 nouns, 11529 verbs, 21479 adjectives and 4481 adverbs (WordNet).

Let’s first introduce two parameters:

  • num of nouns/num of adjectives — descriptor L1
  • num of verbs/num of adverbs — descriptor L2

One can notice that for English (based on WordNet) L1 = 5.5 and L2 = 2.6.

Let’s now introduce one more parameter:

  • num of nouns/num of verbs — descriptor F

For English — F is ca. 10.
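
As a quick sanity check, the three descriptors can be recomputed directly from the WordNet counts quoted earlier (a minimal sketch using the quoted numbers, not querying WordNet itself):

```python
# Part-of-speech counts quoted earlier (WordNet):
# n = nouns, v = verbs, a = adjectives, r = adverbs.
counts = {"n": 117798, "v": 11529, "a": 21479, "r": 4481}

L1 = counts["n"] / counts["a"]  # descriptor L1: nouns per adjective
L2 = counts["v"] / counts["r"]  # descriptor L2: verbs per adverb
F = counts["n"] / counts["v"]   # descriptor F: nouns per verb

print(f"L1 = {L1:.1f}, L2 = {L2:.1f}, F = {F:.1f}")
# -> L1 = 5.5, L2 = 2.6, F = 10.2
```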

One can observe the following:

  • The process of describing is similar for actions and objects, which means that it can be the same algorithm working on two different layers of information,
  • Actions need nouns, because they act on them. Before we actually come up with an action, we must first find a use case. Finally, nouns could be treated as interactions of the first level, whereas verbs as interactions of the second-level, and there could be many more levels of interactions. We now perform binary classification to classify actions and objects (verbs and nouns), but it is also possible that this classification is only the first step toward learning interactions.

Also, given the typical sizes of human statements (mentioned earlier in this article), we learn something about the communication channels between us. We cannot send more and better information; we cannot receive more and better information. We all speak different languages (which are very similar, because they express similar worlds). We look, within groups, almost identical. The only thing that I could bring up now is versioning systems. If you can tell a long enough story and the characters are accurately enough described, it feels like they are alive in that story. Now, when there are those characters, it feels like they are versions. Finally, when they come for a moment and leave, it looks like that was a plan, not only the accumulation of damage. (Still, as long as we want to classify certain actions as “damage”, we need to act against them.)

I need to ask the following questions (but will not attempt to answer them today):

  • Is variety only a local side-effect of some millions of years, or is the world going to be even more full of it? What is variety?
  • How would spacetime fit in the world of concepts I am describing?
  • What would a world look like in which the language completely evolves each second?

Unsupervised means “no supervision”, but saying there is no supervision is difficult. Even feral children learn from signals from animals, who quantify the correctness of certain actions from their perspectives (and if there are no animals, then from visual signals from their surroundings). We can try to quantify supervision and talk of, say, its diversity and intensity, which I will introduce a bit later.

Unsupervised means no labelled responses that we requested. Still, when not completely deprived of signals from the outside, we are always getting (some other) responses (to unobviously similar questions); I call them non-immediate labels. We then use those responses to take us closer to what we thought our goal was. Still, at the same time our goal has changed.

The extent to which we have immediate labels, defined via a multidimensional path between the labels we have and the labels we want for our current problem, I will call immediateness. We can observe the need for this concept in interactions between adults and children. Adults teach children how to speak, but do not always correct their errors immediately; rather, they provide them with more general perspectives (fixing problems one by one without explaining the strategy would be extremely ineffective). Children then use those perspectives instead of immediate labels to create their own labels, using some aggressive chain of transformations (of variable size, which is somehow minimised).

I will now focus on learning under highly non-immediate settings only. It can be either diverse or focused, either intense or calm. When it is diverse, there exist many multidimensional paths between labels we have and the labels we need (“focused” means the opposite). When it is calm, the paths are uniformly distributed (“intense” means the opposite).

An example of a diverse setting would be {“Humans are not animals”, “Monkeys can walk like humans”, “Evolution chose humans”}. An example of an intense setting would be {“Humans are not monkeys”, “Humans are mammals”, “Humans have two legs”}. The rest should be relatively clear.
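
One toy way to make “diverse” and “intense” measurable, under the assumed simplification that label-paths can be named and counted: diversity as the number of distinct paths, and intensity as the concentration (one minus the normalised entropy) of the path distribution.

```python
from collections import Counter
from math import log

# A toy formalisation (an assumption, not a definition from the text):
# diversity = number of distinct label-paths;
# intensity = 1 - normalised entropy of the path distribution
# (0 = perfectly calm/uniform, 1 = maximally concentrated).
def diversity(paths):
    return len(set(paths))

def intensity(paths):
    counts = Counter(paths)
    n = sum(counts.values())
    if len(counts) < 2:
        return 1.0  # a single path is maximally concentrated
    entropy = -sum((c / n) * log(c / n) for c in counts.values())
    return 1 - entropy / log(len(counts))

# Hypothetical label-paths extracted from the two example settings above.
paths = ["anatomy", "anatomy", "taxonomy", "evolution"]
print(diversity(paths), round(intensity(paths), 2))  # -> 3 0.05
```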

Now, if we treat learning under highly non-immediate settings as being pivotal for AGI, then which of the sub-settings among “focused and intense”, “focused and calm”, “diverse and intense” and “diverse and calm” is the most interesting to us currently? Is one of the sub-settings easier to solve than others?

In order to understand non-immediate settings, we need to learn how humans learn to transform one set of labels into another (operating without any labels could be done by finding label-paths, i.e. chains of label transformations). This is going to help us understand why humans are capable of operating on many different levels (recognising faces, talking about rel. abstract concepts). (Or, are they?)

Well, let me end this part with one more example —

Many people turn right 3 times, then left like 7 times, and they lose their sense of direction. Some other people cannot memorise like 10 sentences in a row. We keep specific information and lack tons of potentially useful information. We get data and immediately “consume” it to keep only the things we need; we do similar things with food. We do it at many levels. Still, if we are so lossy, how can we so easily find something strange on a human face? (Not many objects there, and, if anything strange is there, it takes relatively lots of space?) Still, if we are so lossy, how come we cannot act as the CRC for the skies? (Too much data, not looking up, because it is not yet our time for it?)


Number piano


Immortality – public

immortality1_pdf


Discuss with Dobelli – part 1

Chapter by chapter.

1. Survivorship bias is only an example of a deeper problem, namely reasoning based on data sampling. Reasoning based on highly uncertain data is likely uncertain as well. Additionally, making decisions based on incomplete data is crucial, but not making decisions is also making them: postponing changes the conditions in which decisions are made.

2. Network means team-development and involves synergy between team members. Learning alone used to be almost impossible. Now, however, networks are built in the web and we are “less” alone (more “connected”).

3. We have powerful causality-driven models and keep them updated after we conduct new experiments.

4. In multidimensional incomplete knowledge settings error propagation is severe. When making decisions, keep a tighter range of arguments and prune your decision tree slightly farther.

5. Sunk cost fallacy is problematic — we shall not only keep in mind odds, but also implied odds as well as the change of the conditions under which decisions are made. So, it is not about “sunk costs”, but odds.

6. Reciprocity is a matter of perspective. Different sides of equations quantify their perspectives differently. Still, overall, it seems reasonable to say that providing others with tools that help them solve their problems helps you a lot (and should help). Nevertheless, even though problems are solved using a truthful value-sharing mechanism, the mechanism is context-driven and (from current human perspective) slowly updated.

7/8. Confirmation bias is an emergent property of learning. We learn our model of reality and use it in real-time to understand what is happening. We have many pieces of evidence that our model works, so updating it requires thorough verification of experiment data. Still, given how imperfect our models of reality are, we should be inclined to challenge them quite often.

9. An authority is an authority for a reason. If we can verify their opinion on a topic related to their expertise, we must be authorities as well. Still, if we consider verifying their opinion on a topic unrelated to their expertise, they are not an authority (in that area). As there is much knowledge, and our current models cover only certain aspects of it (mathematics, physics, etc.), authorities are confined to relatively tiny aspects of knowledge. Still, those aspects are built upon much evidence. The rest of this argument can be found in 7/8.

10. The contrast effect is only partially about comparing. It is due to incorrect estimation of the value of the components used in arguments. As our resolution of perception is limited to a relatively tiny area, i.e. we notice only the “big” things and see only “differences” (not absolute values), seeing “big differences” (which requires comparing) attracts our attention. Still, if we were able to see the quality of arguments better, comparisons between uninformative factors would not impress us.


Representation

[Images: notka 1, notka 2, notka 3, notka]
