Let's think about what makes a constant beautiful. It is its ability to capture intrinsic information about the structure of our numbers. The deeper that information goes, the better the constant. One important note: constants are defined within our number system, and as such they sometimes don't stand a chance of teaching us about anything beyond it.
I will mention a couple of interesting ones. The pictures of them come from an article on Wikipedia: http://en.wikipedia.org/wiki/Mathematical_constants_and_functions
Golden ratio (beautiful due to the spiral coming out of it, though it might also be overrated; maybe there are better-looking spirals?):
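As a quick sketch of why the golden ratio keeps turning up: it is both the closed-form value (1 + sqrt(5))/2 and the limit of ratios of consecutive Fibonacci numbers, the sequence whose nested squares trace the familiar spiral. A minimal check in Python:

```python
import math

# Closed form of the golden ratio.
phi = (1 + math.sqrt(5)) / 2

# It is also the limit of ratios of consecutive Fibonacci numbers.
a, b = 1, 1
for _ in range(30):
    a, b = b, a + b
ratio = b / a

print(phi)    # 1.618033988749895
print(ratio)  # agrees with phi to many digits
```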
Dottie number (shown here due to its recursiveness and association with fixed points):
(fixed points: http://en.wikipedia.org/wiki/Fixed_point_(mathematics) )
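The recursiveness is easy to see in code: the Dottie number is the unique real fixed point of cos, so simply iterating cos from any real starting value converges to it. A small sketch:

```python
import math

# The Dottie number is the unique real fixed point of cos: iterating
# x -> cos(x) from any real starting point converges to it.
x = 1.0
for _ in range(200):
    x = math.cos(x)

print(x)  # ~0.7390851332151607
```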
Complex numbers (complex numbers never existed among the regular integers, and neither did the distance from (0,0) that these "new", "complex" numbers carry; I will still spend more time understanding the foundations of these numbers):
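That "distance from (0,0)" is the modulus sqrt(a^2 + b^2), and Python's built-in complex type makes the point concrete; a small illustration:

```python
# The "distance from (0,0)" of a complex number a + bi is the modulus
# sqrt(a^2 + b^2), a notion absent among the plain integers.
z = 3 + 4j
modulus = abs(z)
print(modulus)  # 5.0

# The modulus is multiplicative, one hint of the structure these
# "new" numbers carry:
w = 1 + 1j
print(abs(z * w), abs(z) * abs(w))  # both sqrt(50), up to float rounding
```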
Square root of 2 (the concept of irrational numbers is yet another one which, like the concept of complex numbers, goes beyond many earlier ideas):
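One way to feel the gap irrationality opens: the Babylonian (Newton) iteration for sqrt(2) can be run in exact rational arithmetic, producing rationals that approach sqrt(2) quadratically yet can never equal it. A sketch with the stdlib Fraction type:

```python
from fractions import Fraction

# Babylonian (Newton) iteration for sqrt(2), run in exact rational
# arithmetic: every iterate is a rational number, yet the target is
# irrational, so the squares get ever closer to 2 without reaching it.
x = Fraction(1)
for _ in range(6):
    x = (x + 2 / x) / 2

print(float(x))    # 1.4142135623730951
print(x * x == 2)  # False, and it never will be
```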
Polya random walk constant (random walks drive parameter-space exploration and sit close to so many statistics-based ideas in machine learning; it is important to show what mathematics such a mechanism could use):
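Polya's theorem says a simple random walk on the integer grid returns to the origin with probability 1 in one and two dimensions, but only with probability about 0.3405 in three; that value is the Polya random walk constant. A Monte Carlo sketch (an estimate, not an exact computation; the step cap truncates long-returning walks, so it slightly underestimates):

```python
import random

# Monte Carlo sketch: estimate the probability that a simple random
# walk on Z^3 ever returns to the origin.  Polya's theorem: return is
# certain in 1D and 2D, but in 3D the probability is only about 0.3405.
random.seed(0)

STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def returns_to_origin(max_steps=500):
    """Does one truncated 3D walk come back to (0, 0, 0)?"""
    x = y = z = 0
    for _ in range(max_steps):
        dx, dy, dz = random.choice(STEPS)
        x, y, z = x + dx, y + dy, z + dz
        if x == y == z == 0:
            return True
    return False  # truncation slightly undercounts returns

trials = 5000
estimate = sum(returns_to_origin() for _ in range(trials)) / trials
print(estimate)  # close to 0.34, a bit low due to truncation
```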
Liouville number (for its tremendous value in disclosing transcendental numbers; in which direction should we go to define what a number really is? see some of my previous posts)
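Liouville's constant, the sum over k of 10^(-k!), was the first number proven transcendental: rationals approximate it "too well" for it to be algebraic. Computing the first few terms exactly with the stdlib Decimal type shows the pattern of 1s at the factorial positions 1, 2, 6, 24:

```python
from decimal import Decimal, getcontext
from math import factorial

# Liouville's constant sum_{k>=1} 10^(-k!), truncated to four terms.
# The 1s sit exactly at factorial decimal positions: 1, 2, 6, 24.
getcontext().prec = 50
L = sum(Decimal(10) ** -factorial(k) for k in range(1, 5))
print(L)  # 0.110001000000000000000001
```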
Pi (for connecting 2D distance with the n-sphere, and for being transcendental; I think there is still much to learn about Pi)
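The 2D-distance-to-n-sphere connection can be made concrete: the volume of the n-dimensional unit ball is pi^(n/2) / Gamma(n/2 + 1), so the same constant that measures the disk governs every dimension. A short sketch:

```python
import math

# Pi links 2D distance to every dimension: the volume of the n-ball
# of radius r is pi^(n/2) / Gamma(n/2 + 1) * r^n.
def ball_volume(n, r=1.0):
    """Volume of the n-dimensional ball of radius r."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n

print(ball_volume(2))  # pi, the area of the unit disk
print(ball_volume(3))  # 4*pi/3, the volume of the unit 3-ball
```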
Euler's number (what does ((n+1)/n)^n tend to for big n's? the unbelievable thing is that it is e)
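The limit in question can be checked numerically: ((n+1)/n)^n, i.e. (1 + 1/n)^n, creeps toward e as n grows:

```python
import math

# ((n+1)/n)^n = (1 + 1/n)^n approaches e as n grows.
for n in (10, 1000, 100000):
    print(n, (1 + 1 / n) ** n)

print(math.e)  # 2.718281828459045
```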
But the constants themselves are just single shots at a data set. Others try to capture a larger picture by iterative approximation and modeling. Now I will show how to speak the language of distributions. The list is here: http://en.wikipedia.org/wiki/List_of_probability_distributions. I will briefly address only those that are of interest to this post.
Bernoulli distribution (for a single binary 0/1 random variable; a potentially infinite sequence of such variables forms a Bernoulli process)
Binomial distribution (the number of successes in n of these binary 0/1 experiments)
Beta-binomial distribution (a binomial whose success probability itself varies, drawn from a Beta distribution)
Hypergeometric distribution (the number of successes in the first m draws out of n 1/0 items, when the total number of successes among the n is known)
Poisson distribution (the probability of a given number of events occurring in a fixed time interval)
Or the Zeta distribution, for learning about the world with Zeta glasses.
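The combinatorial character of these distributions is visible if one writes a few probability mass functions directly from the counting arguments; a minimal stdlib sketch (the function names are my own, not from any particular library):

```python
import math

# Probability mass functions built directly from combinatorics.

def bernoulli_pmf(k, p):
    """P(X = k) for one binary 0/1 trial with success probability p."""
    return p if k == 1 else 1 - p

def binomial_pmf(k, n, p):
    """P(exactly k successes in n independent 0/1 trials)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """P(exactly k events in an interval with expected count lam)."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

print(binomial_pmf(2, 4, 0.5))  # 0.375
print(poisson_pmf(0, 1.0))      # about 0.3679
```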
Now, the point is that all these distributions model the world given what we know about numbers. They are distributions based on combinatorics and Borel's probability measure. Based on our knowledge we model processes and build the universe's "building blocks" from our models, like the one called the "amplituhedron", which is supposed to be a higher-level abstraction for prior learning in physics.
In the same way that we model processes, we model more granular structures, such as particles, using e.g. the notions of a simplex or a polytope. Given the amount of data in the model, some cry out that everything must be non-deterministic. This is clearly not necessarily the case, since one would have to provide a non-existence proof for any deterministic relation in the data set. Given that we do not even have the entire data set, but only the part observable at our level, some of us should focus on revising the most fragile foundations of the number system.
We model and will keep modeling iteratively. We will use distributions and n-dimensional notions, but those notions are going to evolve. A larger abstraction is required for the prior, and those abstractions will come out of unsupervised, free learning. To do so we have to investigate what is possible and capture limits and constants like the ones I mentioned at the very beginning.