## Learn from others! (1)

Question from raxacoricofallapatorius: Why don’t orbits eventually decay?

Response from anna v:

You are right, the planetary model of the atom does not make sense when one considers the electromagnetic forces involved. The electron in an orbit is accelerating continuously and would thus radiate away its energy and fall into the nucleus.

One of the reasons for “inventing” quantum mechanics was exactly this conundrum.

The Bohr model was proposed to solve this, by stipulating that the orbits were closed and quantized and no energy could be lost while the electron was in orbit, thus creating the stability of the atom necessary to form solids and liquids. It also explained the lines observed in the spectra from excited atoms as transitions between orbits.
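That last claim is easy to check numerically: the Bohr energy levels $E_n = -13.6/n^2$ eV reproduce the observed hydrogen lines. A minimal sketch (the constant values and the function name are mine, not from the answer):

```python
RYDBERG_EV = 13.6     # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84    # h*c expressed in eV*nm

def line_wavelength_nm(n_lower, n_upper):
    """Wavelength of the photon emitted in a hydrogen transition
    n_upper -> n_lower, using the Bohr levels E_n = -13.6/n^2 eV."""
    e_photon = RYDBERG_EV * (1 / n_lower**2 - 1 / n_upper**2)
    return HC_EV_NM / e_photon
```

The Balmer H-alpha transition (3 → 2) comes out near 656 nm, matching the familiar red line of the hydrogen spectrum.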

Question from anna v: How does uranium from supernovae explosions end up in mineral veins in a planet?

Response from Martin Beckett:

Mostly because they are heavy.

Rocks erode, putting their constituents into solution; the heavy stuff settles out in river and sea beds, and metals are heavy.

For many metals, hydrothermal processes are more important. Superheated water deep in the earth dissolves the rock containing the minerals; it moves along cracks in the rock and cools, depositing the salts and metals as veins in the rock.

In an asteroid with no geological processes, the metals are found in their raw state, having cooled directly from the original ball of primeval gas.

Question from jcw: Why don’t metals bond when touched together?

Response from Hasan:

I think that mere touching does not bring the surfaces close enough. The surface of a metal is usually not perfect; it may have an oxide layer that resists any kind of reaction. If the metal is extremely pure and you bring two pieces of it extremely close together, then they will join. This is called cold welding.

Question from Nogwater: How does gravity escape a black hole?

Response from Vagelford:

Well, the information doesn’t have to escape from inside the horizon, because it is not inside. The information is on the horizon.

One way to see that is from the fact that from the perspective of an observer outside the horizon of a black hole, nothing ever crosses the horizon. It asymptotically gets to the horizon in infinite time (as it is measured from the perspective of an observer at infinity).
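That asymptotic approach can be illustrated numerically. A toy sketch, assuming Schwarzschild geometry with $G = c = 1$ and radial free fall from rest at infinity (the function name and step count are mine); the coordinate time needed to reach a radius $r$ grows without bound as $r \to 2M$:

```python
import math

def coordinate_time_to_radius(r_end, M=1.0, r_start=10.0, steps=200000):
    """Schwarzschild coordinate time elapsed while falling radially
    from r_start down to r_end, for free fall from rest at infinity.
    Uses dt/dr = -1 / ((1 - 2M/r) * sqrt(2M/r)), integrated with the
    midpoint rule; the integral diverges as r_end -> 2M."""
    t = 0.0
    dr = (r_start - r_end) / steps
    r = r_start
    for _ in range(steps):
        rm = r - dr / 2   # midpoint of the current step
        t += dr / ((1 - 2 * M / rm) * math.sqrt(2 * M / rm))
        r -= dr
    return t
```

Integrating ever closer to the horizon at $r = 2M$ yields ever larger coordinate times, which is the "never crosses, as seen from outside" statement in the answer.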

How big would an air bubble have to be for us to live inside it? http://physics.stackexchange.com/questions/67970/surviving-under-water-in-air-bubble

Why do airplanes fly? (seriously) http://home.comcast.net/~clipper-108/lift.pdf

Is the Sun’s light in phase, or not in phase? http://physics.stackexchange.com/questions/69929/stupid-yet-tricky-question-why-do-we-actually-see-the-sun

Event horizon vs black hole: http://physics.stackexchange.com/questions/95366/why-does-stephen-hawking-say-black-holes-dont-exist

## My questions to others! (1)

My problem:

Let $G(n,k)$ be the $n$-th $k$-almost prime. Prove that for every $n \in \mathbb{N}$ there exist infinitely many $k \in \mathbb{N}$ satisfying $2\,G(n,k) = G(n,k+1)$.

And a solution from Ross Millikan:

$G(n,1)=p_n$, the $n^{\text{th}}$ prime.
$G(n,k) \le 2^{k-1}p_n$, because we can exhibit $n-1$ smaller $k$-almost primes: $2^{k-1}$ times each of the primes below $p_n$.
Given $n$, we can find $m$ such that $3^m > 2^{m-1}p_n \ge G(n,m)$.
Then for all $k \ge m$, $2G(n,k)=G(n,k+1)$.
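The claim is easy to probe numerically. A brute-force sketch (the helper names `big_omega` and `G` are mine), assuming the standard definition that a $k$-almost prime has exactly $k$ prime factors counted with multiplicity:

```python
def big_omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

def G(n, k):
    """The n-th k-almost prime (1-indexed), by exhaustive search."""
    found, m = 0, 1
    while True:
        m += 1
        if big_omega(m) == k:
            found += 1
            if found == n:
                return m
```

For example $G(2,2)=6$ and $G(2,3)=12$, so doubling holds there; by the argument above it holds for all sufficiently large $k$.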

My question:

Every program $P$ built of a function sequence (order counts) $F_1,\ldots,F_n$, where $F_i$ returns $R_i$ and $F_{i+1}$ takes $R_i$ as an argument, can be written as $F_n(F_{n-1}(\ldots(F_1)\ldots))$, i.e. we do not need to store intermediary program states. Can every program be transformed into such a form, without intermediary states?

Yes, as long as the semantics of the underlying programming language are effective and we allow higher-order functions.

Assume for the sake of the argument that we are given big-step semantics for the underlying programming language, using the notation $s_1 \to s_2$, where $s_1$ and $s_2$ are program states. Suppose furthermore that the underlying programming language is “roughly imperative”: We have several base commands (e.g., assignment) that are joined into a sequence. Then the execution of a sequence $c_1; c_2; \ldots c_n$ of commands corresponds to executing the big-step reductions for $c_1, c_2, \ldots, c_n$ in sequence. Since we assumed that the semantics are effective, in the sense that there are recursive functions implementing the reduction rules, this boils down to applying the corresponding functions implementing the semantics in sequence.

If we add control-flow constructs like if or while, things get a bit more interesting. Consider for example the following simple if command: $\text{if } e \text{ then } c$, where $e$ is an expression and $c$ is a command.
The semantics could be given by the two big-step reduction rules
$$\frac{e(s_1)=1\quad s_1 \to_c s_2}{s_1 \to_{\text{if } e \text{ then } c} s_2} \qquad \frac{e(s_1)=0}{s_1 \to_{\text{if } e \text{ then } c} s_1}$$
This gives rise to an evaluation function of the form $E_\text{if}(E_e,E_c,s)$ which takes two functions as arguments, namely the evaluation functions for $e$ and $c$, and the execution state $s$. $E_\text{if}$ is clearly recursive. Since the evaluation functions for $E_e$ and $E_c$ can be derived from the program being considered, we can treat them as parameters and therefore get an
implementation function $E_{\text{if } e \text{ then } c}$ that maps execution states to execution states, as above.
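The construction can be sketched concretely. A toy model in Python, assuming program states are dictionaries of variable bindings (all names here are illustrative, not from the answer): each command denotes a pure state-to-state function, sequencing is function composition, and $E_\text{if}$ is a higher-order function built from the evaluation functions for $e$ and $c$.

```python
def assign(var, expr):
    """Base command: returns a state -> state function that evaluates
    expr on the current state and binds the result to var."""
    def run(s):
        s = dict(s)          # copy, so commands stay pure
        s[var] = expr(s)
        return s
    return run

def seq(*commands):
    """Sequencing c1; c2; ...; cn as left-to-right composition."""
    def run(s):
        for c in commands:
            s = c(s)
        return s
    return run

def if_(e, c):
    """'if e then c' as a higher-order E_if, mirroring the two
    big-step rules above: run c when e holds, else keep the state."""
    def run(s):
        return c(s) if e(s) else s
    return run

# x := 3; if x > 2 then y := 2 * x
prog = seq(
    assign("x", lambda s: 3),
    if_(lambda s: s["x"] > 2,
        assign("y", lambda s: 2 * s["x"])),
)
```

Running `prog` on an empty state threads the state through the whole sequence as one composed function, with no mutable intermediary storage.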

My question: What is the rigorous definition of the Aufbau principle and the mathematical model used for its description?

The Aufbau principle isn’t rigorous because it’s based upon the approximation that the electron-electron interaction can be averaged into a mean field. This is called the [Hartree–Fock][1] or self-consistent-field method. The centrally symmetric mean field results in a set of atomic orbitals that you can populate two electrons at a time.

The trouble is that the electron correlations mix up the atomic orbitals so that distinct atomic orbitals no longer exist. Instead you have a single wavefunction that describes all the electrons and does not factor into parts for each electron. For example, this is explicitly done in the [configuration interaction][2] technique for improving the accuracy of Hartree–Fock calculations.
[1]: http://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method
[2]: http://en.wikipedia.org/wiki/Configuration_interaction

My question: Unstable atomic nuclei will spontaneously decompose to form nuclei with higher stability. What is the algorithm for deciding what sort of decay occurs (alpha, beta, gamma, etc.)? Also, given that alpha and beta emission are often accompanied by gamma emission, what is an algorithm for deciding the distribution of the radiation?

Gamma emission is emission of a photon upon a nucleus transitioning from an excited state to a lower or ground state **of the same nucleus**. The number of neutrons and protons in the nucleus is exactly the same before and after the gamma photon is emitted.

Beta decay results from a nucleus having too few or too many neutrons, relative to the number of protons, to be stable. If there are too many neutrons, a neutron becomes a proton, an electron and an electron antineutrino. If there are too few neutrons, a proton may become a neutron by positron emission or electron capture. Whether beta decay is favorable can be calculated from the energies of the parent and daughter nuclei, as well as the energies of the other particles.
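That energy bookkeeping is a one-line calculation. A sketch assuming tabulated atomic masses (the helper name and the specific mass values are mine; for $\beta^-$ decay the electron masses cancel when atomic, rather than nuclear, masses are used):

```python
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, MeV

def beta_minus_q(m_parent_u, m_daughter_u):
    """Q-value (MeV) of beta-minus decay from atomic masses in u.
    Decay is energetically allowed iff Q > 0."""
    return (m_parent_u - m_daughter_u) * U_TO_MEV

# Example: C-14 -> N-14 + e- + antineutrino, atomic masses from tables.
q = beta_minus_q(14.003242, 14.003074)   # ~0.156 MeV > 0: allowed
```

The positive Q-value is why carbon-14 beta-decays to nitrogen-14 and not the other way around.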

Alpha decay is only observed in heavy nuclei, with at least 52 protons. Iron (26 protons, 30 neutrons) is the most stable nucleus. The [semi-empirical mass formula][1] may be used to determine whether alpha decay is energetically favorable, but even if it is, the rate of decay may be extremely slow. There is a potential energy barrier to the alpha particle’s escape from the nucleus. See [this reference][2] for further information.
[1]: http://en.wikipedia.org/wiki/Semi-empirical_mass_formula
[2]: http://www.astro.uwo.ca/~jlandstr/p467/lec8-alpha_reactions/
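A rough version of that check can be coded directly from the semi-empirical mass formula. A sketch with one common coefficient fit (values differ slightly between sources), taking the measured alpha-particle binding energy of 28.3 MeV as given:

```python
def semf_binding_energy(Z, A):
    """Approximate nuclear binding energy in MeV from the
    semi-empirical mass formula (one common coefficient fit)."""
    aV, aS, aC, aA, aP = 15.8, 18.3, 0.714, 23.2, 12.0
    N = A - Z
    B = (aV * A                               # volume term
         - aS * A ** (2 / 3)                  # surface term
         - aC * Z * (Z - 1) / A ** (1 / 3)    # Coulomb repulsion
         - aA * (A - 2 * Z) ** 2 / A)         # asymmetry term
    if Z % 2 == 0 and N % 2 == 0:             # even-even: extra binding
        B += aP / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:           # odd-odd: less binding
        B -= aP / A ** 0.5
    return B

def alpha_q_value(Z, A):
    """Q of alpha decay: B(daughter) + B(alpha) - B(parent), with the
    alpha binding energy taken as the measured 28.3 MeV."""
    return semf_binding_energy(Z - 2, A - 4) + 28.3 - semf_binding_energy(Z, A)
```

This reproduces the qualitative picture: the estimated $Q_\alpha$ is positive for uranium-238 but negative for iron-56, so alpha decay is energetically possible only for the heavy nucleus, though the formula says nothing about the (often enormous) tunneling lifetime.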

## Learn from Bertrand Russell!

- “I have lived in the pursuit of a vision, both personal and social. Personal: to care for what is noble, for what is beautiful, for what is gentle; to allow moments of insight to give wisdom at more mundane times. Social: to see in imagination the society that is to be created, where individuals grow freely, and where hate and greed and envy die because there is nothing to nourish them. These things I believe, and the world, for all its horrors, has left me unshaken.”, Me: a physical body’s life-term goal lives as long as the physical body works, and a goal beyond is just beyond; to “see in imagination” means to imagine “what it should be” (at his current level of understanding); the big goal, “to care for what is noble”, is cursory; still, it highlights seeking explicit features

- “The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt.”, Me: the “stupid” have limited observation, hence must be more “sure” of many claims; given the Gaussian distribution of people (assuming naive genetics behind the birth of new beings), optimizing for quality requires optimizing for a tight range; still, let’s remember that we still understand barely anything, so it is also a good idea to seek in others talents different from intelligence (understood as the ability to effectively solve problems)

- “Love is something far more than desire for sexual intercourse; it is the principal means of escape from the loneliness which afflicts most men and women throughout the greater part of their lives.”, Me: but why do people want to escape from loneliness? can they not talk to arbitrary people in the street? why is love so often understood as a tight (selective) feature? what really is loneliness?

- “I would never die for my beliefs because I might be wrong.”, Me: we just see and model the observable (our observable)

- “The good life is one inspired by love and guided by knowledge.”, Me: love would be just “love for life”, i.e. appreciation of the given, and knowledge would be “learning about the black box” through the (arbitrarily effective) walk in the parameter space; here Russell adds: “Three passions, simple but overwhelmingly strong, have governed my life: the longing for love, the search for knowledge, and unbearable pity for the suffering of mankind.”, so he names three features, i.e. includes suffering, but I would remove suffering from here, understanding it as one of the things we experience when learning

- “Conventional people are roused to fury by departure from convention, largely because they regard such departure as a criticism of themselves.”, Me: a fixed set of unchallenged rules, or more generically, ceasing to challenge one’s own rules, keeps many from effective learning; Russell adds more here: “The world is full of magical things patiently waiting for our wits to grow sharper.”, i.e. by looking at learning with more happiness, we learn to live better; why do we die instead of living forever, learning?

- “One should respect public opinion insofar as is necessary to avoid starvation and keep out of prison, but anything that goes beyond this is voluntary submission to an unnecessary tyranny.”, Me: as noted before, when learning, use tight range for choosing, and constantly challenge claims

- “Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.”, Me: we remember Russell’s paradox, and we understand how many axioms and assumptions are introduced into mathematics; we need to think more and more about how to revise assumptions using knowledge from experiments

- “Life is nothing but a competition to be the criminal rather than the victim.”, Me: important words, but I need time to investigate them deeply

- “We are faced with the paradoxical fact that education has become one of the chief obstacles to intelligence and freedom of thought.”, Me: remember that to learn you need to perceive (hear, see) and then understand at your level of understanding; you often learn without understanding; secondly, it is misleading to many that “school” ends; in fact, “school” is all the time; for more, also refer to Warren Buffett’s words quoted in one of my former posts

- “Science is what you know, philosophy is what you don’t know.”, Me: science describes the perceived using our models, philosophy or thinking in general challenges the models, and dreams of a larger picture

- “It is preoccupation with possessions, more than anything else, that prevents us from living freely and nobly.”, Me: I hope that it is not even possessions, but the fact that people are afraid to go for what they like because they need to survive, and they need food before self-development; an intelligent being will never focus on implicit and misleading information, such as money, but that does not imply one will not have money; an intelligent being will focus on important features, going beyond physical life, and will crave and do things that are worth doing

- “Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric.”, Me: you have to define what “eccentric” means; if it means an opinion different from that of the majority, then by the Gaussian distribution and the naive assumption about the birth rate, we will have the answer

- “The only thing that will redeem mankind is cooperation.”, Me: about taking the picture of the group rather than one being; in such a case, we do not optimize what is good for one being; however, what is not “good” for a being will not be “good” for mankind; still, being good in a 2-year perspective is different from a 20-year perspective, and different from a 1M-year perspective

More to come

## Widening the range of the perceived

Increasing the perceived (see: Gödel) allows us to iteratively improve our model of the universe. We can also use it for negative proofs, e.g. proofs of non-existence. The ranges of the perceived cluster into: sight, smell, taste, hearing, touch, the vestibular sense, and proprioception. Sight, smell, taste, hearing, touch: all of these boil down to sensing certain features of data. As for the vestibular sense and proprioception, the former is about sensing positioning, the latter is about knowing the structure of our body.

So, all in all, senses are about knowing certain features of the given at the perceived level. This means we could have more perspectives on how to observe our data set, and therefore more senses. In this post I aim to learn more about the features determining the senses. I want to investigate and cluster current methods for enhancing the known senses. And finally, I also want to touch on the problem of finding new features.

Sight. Recognized and observed by Leonardo da Vinci: “The function of the human eye … was described by a large number of authors in a certain way. But I found it to be completely different.”. From Gestalt theory (regarding eyes) we should distinguish between: proximity, similarity, closure, symmetry, common fate (i.e. common motion), and continuity. But from our perspective, in the 21st century, these are just different, but correlated, perspectives on the visual data. The question is why visual data is seen and not heard. Why can we not hear visual data, and see speech? We model both with waves. So theoretically, we would be able to extract information from arbitrary observable data in any way we want. The key is that we should aim at having a larger picture of things rather than focusing on correlated perspectives.

Hearing. The famous 20 Hz–20 kHz range. Also modeled with frequencies. Still, we non-rigorously allow for “many” frequencies, not as if we used integers; this might change due to ongoing quantum research. The visual spectrum (sight), at 430–790 THz, peaks much faster than our hearing frequencies. Taste and olfaction are defined with cations and ion channels.
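One crude way to “hear” the visual band mentioned above is to remap its frequency range onto the audible one. A toy sketch (the mapping choice and function name are mine; a linear map is the simplest option, not a perceptually meaningful one):

```python
def light_to_sound(f_thz):
    """Linearly remap a visible-light frequency (430-790 THz) into
    the audible band (20 Hz - 20 kHz). A toy sonification mapping."""
    lo_light, hi_light = 430.0, 790.0      # THz, visible band
    lo_sound, hi_sound = 20.0, 20000.0     # Hz, audible band
    t = (f_thz - lo_light) / (hi_light - lo_light)
    return lo_sound + t * (hi_sound - lo_sound)
```

Under this mapping the red end of the spectrum lands at the low end of hearing and the violet end at the high end; any serious design would have to decide what such a correspondence should preserve, which is exactly the question this section raises.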

There are very many more senses mentioned in related articles. The question is to find correlations between them, as well as to simplify the model of description. Another question is whether our model would enable us to build a generic sensor for learning about data, with this data rendered and further analyzed. That could enable us to widen the range of the perceived, as well as face the real issues in building such a tool.

So, the initial question would be: how would we “see” sound and other “frequencies”? Which of the “frequencies” would be correlated? How could we automate learning from newly acquired data? How could we employ computers to widen the range of perception for us? We have microscopes for transforming the nano scale to a larger scale, we have telescopes for transforming the macro scale to a lower scale, and for now we still learn from these apparently different pictures and only occasionally draw big conclusions. But, at the very same time, we should always try to learn what is there rather than what is visible.

A more general question: how would we build a general sensor for learning from the entire spectrum? And finally, what is this entire spectrum? What does it look like? Is our model with waves satisfactory for the game at scale?

## Elaborating on programming paradigms in the context of the observable

We perceive the objects in the universe as if they had intrinsic properties. Those intrinsic properties last a moment (“how long is now?”, on one of Berlin’s houses). For this moment they have a value. If we “omit” now, then we say “flow”. “Flow” indicates that the value of a property changes in every moment and therefore, as such, can be modeled the way we do it in hydrodynamics, with connected pipes. Constant flow. So, we either have objects with properties whose values characterize the “now”, or there is only the connection of pipes, and the logic behind it. If we have the latter, then in every moment we have objects with properties in current states, unless a moment in the given sense does not exist (but is still a good approximation for many of our use cases). But even if we have the objects with states, we might be interested in the flow rather than the objects and their states, and then we may omit storing information about states. The big thing behind this is that when we forget about states, we focus on the flow. If we focus on the flow, we focus on the bigger mechanism of interaction. We can then focus on more complex flows.

Focusing on the flow requires us to understand the structure of the problem we deal with. A bigger picture is required before we start to deal with a problem. But the solutions are also more like sculptures. For now, we will go in both directions, but finally the better solutions will be chosen. As long as we are less interested in current object properties and more in the logic behind the problem, we will focus on the flow.

A couple of examples. Focusing on processes is secondary, since it is based on implicit feedback, i.e. processes are derived from the fact that some problems have a data structure that can be handled with parallel processing. A procedure is the general idea that an action can be encapsulated. But, as I mentioned before, either we care about the result or not; if not, then we care about what comes next. Is it possible to turn every program that uses states into a program with no intermediary states?
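The question can be illustrated on a tiny example: the same computation written once with mutable intermediary state, and once as a single composed expression in which the state is threaded implicitly. A sketch (both function names are mine):

```python
from functools import reduce

def sum_of_squares_imperative(xs):
    """State-based version: the intermediary state lives in a
    mutable variable that is updated on every iteration."""
    total = 0
    for x in xs:
        total += x * x
    return total

def sum_of_squares_flow(xs):
    """'Flow' version: one composed expression; the accumulator is
    threaded through reduce, never stored in a named variable."""
    return reduce(lambda acc, x: acc + x * x, xs, 0)
```

Both compute the same value; the second stores no named intermediary state, which is the flow-centric view described above.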

If yes, can we automatically learn about the flow of a problem based on its data structure, then heuristically model the solution, and finally iteratively arrive at a final solution without states? If this is possible, then finding the formula for prime numbers would first involve learning about the prime data set, learning about its informative features. Then re-learning about these informative features until we arrive at truly informative ones that enable us to see the final true picture. And then, would we need states in between?

And now, what is the direction for augmented-reality-featured brains exploiting automated learning about data sets? And how do we develop better learning methods so that we arrive faster at what is really informative? One thing is that we can lie with statistics a lot, since most people just see pictures “going up” or “going down”, or “clusters”. Iterative and automated learning for decreasing the number of states used in between could be an interesting idea.