My questions to others! (1)

My problem:

Let G(n,k) be the n-th k-almost prime. Prove that for every n \in N there exist infinitely many k \in N satisfying 2G(n,k) = G(n,k+1).

And a solution from Ross Millikan:

G(n,1)=p_n, the n^{\text{th}} prime.
G(n,k) \le 2^{k-1}p_n, because we can exhibit n-1 k-almost primes that must be smaller: 2^{k-1} times each of the smaller primes p_1, \ldots, p_{n-1}.
Given n, we can find m such that 3^m > 2^{m-1}p_n \ge G(n,m), since (3/2)^m eventually exceeds p_n/2.
Then for all k \ge m, 2G(n,k)=G(n,k+1). By induction G(n,k) < 3^k: any odd (k+1)-almost prime has all its prime factors at least 3 and so is at least 3^{k+1}, hence every (k+1)-almost prime below 3^{k+1} is even, i.e. twice a k-almost prime. The first n (k+1)-almost primes are therefore exactly the doubles of the first n k-almost primes, giving G(n,k+1) = 2G(n,k) < 2 \cdot 3^k < 3^{k+1}.
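The claim is easy to check numerically. A brute-force sketch (function names are my own; a real computation would sieve):

```python
def num_prime_factors(x):
    """Omega(x): number of prime factors of x, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= x:
        while x % d == 0:
            x //= d
            count += 1
        d += 1
    if x > 1:
        count += 1
    return count

def G(n, k):
    """The n-th k-almost prime, by brute force."""
    found, x = 0, 1
    while found < n:
        x += 1
        if num_prime_factors(x) == k:
            found += 1
    return x
```

For n = 3, the bound gives m = 3, and indeed G(3,3) = 18, G(3,4) = 36 = 2·G(3,3).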

My question:

Every program P that is built from a sequence of functions (order matters) F_1, \ldots, F_n, where F_i returns R_i and F_{i+1} takes R_i as an argument, can be written as the composition F_n(F_{n-1}(\ldots F_1 \ldots)), i.e. we do not need to store intermediate program states. Can every program be transformed into such a form, without intermediate states?

Answer from Johannes Kloos:

Yes, as long as the semantics of the underlying programming language are effective and we allow higher-order functions.

Assume for the sake of the argument that we are given big-step semantics for the underlying programming language, using the notation s_1 \to s_2, where s_1 and s_2 are program states. Suppose furthermore that the underlying programming language is “roughly imperative”: We have several base commands (e.g., assignment) that are joined into a sequence. Then the execution of a sequence c_1; c_2; \ldots c_n of commands corresponds to executing the big-step reductions for c_1, c_2, \ldots, c_n in sequence. Since we assumed that the semantics are effective, in the sense that there are recursive functions implementing the reduction rules, this boils down to applying the corresponding functions implementing the semantics in sequence.
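This sequencing argument can be sketched directly: each base command becomes a state-to-state function, and sequencing is function composition. A minimal illustration, with states modeled as dictionaries of variable bindings (my own modeling choice, not part of the answer):

```python
from functools import reduce

def assign(var, expr):
    """Big-step semantics of `var := expr` as a state-to-state function."""
    def step(state):
        new = dict(state)
        new[var] = expr(state)
        return new
    return step

def seq(*commands):
    """Sequencing c_1; c_2; ...; c_n is just function composition."""
    return reduce(lambda f, g: (lambda s: g(f(s))), commands, lambda s: s)

# x := 1; y := x + 2; x := x * y
program = seq(
    assign("x", lambda s: 1),
    assign("y", lambda s: s["x"] + 2),
    assign("x", lambda s: s["x"] * s["y"]),
)
```

Calling `program({})` runs all three assignments with no intermediate state stored anywhere outside the composed function.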

If we add control-flow constructors like if or while, things get a bit more interesting. Consider for example the following simple if command: \text{if } e \text{ then } c, where e is an expression and c is a command.
The semantics could be given by the two big-step reduction rules
$\frac{e(s_1)=1\quad s_1 \to_c s_2}{s_1 \to_{\text{if } e \text{ then } c} s_2}  \qquad  \frac{e(s_1)=0}{s_1 \to_{\text{if } e \text{ then } c} s_1}  $
This gives rise to an evaluation function of the form E_\text{if}(E_e,E_c,s), which takes two functions as arguments, namely the evaluation functions for e and c, and the execution state s. E_\text{if} is clearly recursive. Since the evaluation functions E_e and E_c can be derived from the program being considered, we can treat them as parameters and therefore get an implementation function E_{\text{if } e \text{ then } c} that maps execution states to execution states, as above.
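Under the same state-as-dictionary modeling, a sketch of this construction (names are illustrative):

```python
def E_if(E_e, E_c, s):
    """Mirror of the two big-step rules: run c when e evaluates to 1,
    otherwise return the state unchanged."""
    return E_c(s) if E_e(s) == 1 else s

def make_if(E_e, E_c):
    """Fixing e and c yields a state-to-state implementation function."""
    return lambda s: E_if(E_e, E_c, s)

# Illustrative instance: `if x > 0 then x := x - 1`.
dec_if_positive = make_if(
    lambda s: 1 if s["x"] > 0 else 0,
    lambda s: {**s, "x": s["x"] - 1},
)
```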

My question: What is the rigorous definition of the Aufbau principle and the mathematical model used for its description?

Answer from John Rennie:

The Aufbau principle isn’t rigorous because it’s based upon the approximation that the electron-electron interaction can be averaged into a mean field. This is called the [Hartree-Fock][1] or self-consistent field method. The centrally symmetric mean field results in a set of atomic orbitals that you can populate two electrons at a time.

The trouble is that the electron correlations mix up the atomic orbitals so that distinct atomic orbitals no longer exist. Instead you have a single wavefunction that describes all the electrons and does not factor into parts for each electron. For example, this is done explicitly in the [configuration interaction][2] technique for improving the accuracy of Hartree-Fock calculations.
[1]: http://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method
[2]: http://en.wikipedia.org/wiki/Configuration_interaction
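For what it’s worth, the filling order that the Aufbau principle prescribes is usually summarized by the empirical Madelung (n + l) rule. A minimal sketch (the function and its scope are my own, and the rule is only an approximation, as the answer explains):

```python
def aufbau_order(max_n=4):
    """Subshell filling order per the Madelung (n + l) rule: fill in order
    of increasing n + l, breaking ties in favor of smaller n."""
    letters = "spdf"
    shells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    shells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{letters[l]}" for n, l in shells]
```

This reproduces the familiar 4s-before-3d ordering: the list starts 1s, 2s, 2p, 3s, 3p, 4s, 3d.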


My question: Unstable atomic nuclei will spontaneously decay to form nuclei with higher stability. What is the algorithm for deciding which sort of decay occurs (alpha, beta, gamma, etc.)? Also, given that alpha and beta emission are often accompanied by gamma emission, what is an algorithm for deciding the distribution of the radiation?

Answer from DavePhD:

Gamma emission is emission of a photon upon a nucleus transitioning from an excited state to a lower or ground state **of the same nucleus**. The number of neutrons and protons in the nucleus is exactly the same before and after the gamma photon is emitted.

Beta decay results from a nucleus having too few or too many neutrons, relative to the number of protons, to be stable. If there are too many neutrons, a neutron becomes a proton, an electron and an electron antineutrino. If there are too few neutrons, a proton may become a neutron by positron emission or electron capture. Whether beta decay is favorable can be calculated from the energies of the parent and daughter nuclei, as well as the energies of the other particles involved.
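Minimizing the semi-empirical mass formula in Z at fixed A gives a rough "valley of stability" and hence a sketch of the decision rule. This uses the common approximation Z ≈ A/(2 + 0.0155·A^{2/3}); the coefficient and the ±0.5 tolerance are illustrative choices of mine:

```python
def most_stable_Z(A):
    """Approximate Z of the beta-stability valley at mass number A,
    from minimizing the semi-empirical mass formula in Z."""
    return A / (2 + 0.0155 * A ** (2 / 3))

def beta_decay_mode(Z, A):
    """Too many neutrons -> beta-minus; too many protons -> beta-plus
    or electron capture. The 0.5 tolerance is an illustrative cutoff."""
    z_star = most_stable_Z(A)
    if Z < z_star - 0.5:
        return "beta-minus"
    if Z > z_star + 0.5:
        return "beta-plus / electron capture"
    return "near stability"
```

For instance, it predicts beta-minus for Cs-137 (Z=55, A=137) and beta-plus/electron capture for Na-22 (Z=11, A=22), matching the observed modes.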

Alpha decay is only observed in heavy nuclei, with at least 52 protons. Iron (26 protons, 30 neutrons) is the most stable nucleus. The [semi-empirical mass formula][1] may be used to determine whether alpha decay is energetically favorable, but even if it is, the rate of decay may be extremely slow: there is a potential energy barrier to the alpha particle’s escape from the nucleus. See [this reference][2] for further information.
[1]: http://en.wikipedia.org/wiki/Semi-empirical_mass_formula
[2]: http://www.astro.uwo.ca/~jlandstr/p467/lec8-alpha_reactions/
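As an illustration of the energetic criterion, here is a sketch of the semi-empirical mass formula applied to alpha decay. The coefficients are one common fit and vary slightly between textbooks; the He-4 binding energy is the measured value:

```python
def binding_energy(Z, A):
    """Semi-empirical mass formula binding energy in MeV.
    Coefficients are one common fit; textbooks differ slightly."""
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        pairing = aP / A ** 0.5       # even-even: more tightly bound
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -aP / A ** 0.5      # odd-odd: less tightly bound
    else:
        pairing = 0.0
    return (aV * A
            - aS * A ** (2 / 3)
            - aC * Z * (Z - 1) / A ** (1 / 3)
            - aA * (A - 2 * Z) ** 2 / A
            + pairing)

B_ALPHA = 28.3  # measured binding energy of He-4, in MeV

def q_alpha(Z, A):
    """Q-value of alpha decay; positive means energetically favorable
    (though the decay may still be very slow, as noted above)."""
    return binding_energy(Z - 2, A - 4) + B_ALPHA - binding_energy(Z, A)
```

For U-238 this gives a Q-value of a few MeV (the experimental value is about 4.27 MeV), while for Fe-56 it is negative, i.e. alpha decay is forbidden.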


About misha

Imagine a story that one can't believe. Hi. Life changes here. Small things only.
This entry was posted in Mathematics. Bookmark the permalink.
