## Prime notion

As I am dying, I would like to cross-investigate the notion of a prime. Prime notion yields the notion of a prime.

A number defines the quantity of objects, i.e. the cardinality of the set of objects. To discuss a prime, we therefore have to talk about sets of objects. As for the notion of a set, we observe that if we talk about a specific set, then we change the relation between the objects within this set and all other objects outside the set, and here we need to assume that this relation does not influence the choice of the set.

If such a relation were itself able to influence the choice of a set, then what would define the choice of relations, and sets of relations, and so on? After all, there might be a predefined structure defining how the data is to be aggregated. In this sense, the idea of decomposing the information of quantity into a multidimensional vector using prime numbers is appealing.

The decomposition itself, though, is not about geometry as such, but about spreading the information into a vector while preserving the character of the quantity-based information within the number. So now, again, back to the clue, i.e. the character of the information hidden in the number.

A number contains information about quantity. There is a 1-1 correspondence between the quantity of elements and the number. However, the objects need to be of the same type. Going deeper into the objects, we observe that the objects consist of yet other objects, and also that the connection is not a “set” type of connection, but rather a more sophisticated structure. That shows that counting the objects may not be as simple as using the integers.

Then, we notice that numerous number systems and number types have been introduced, like complex or rational numbers. These numbers have certain characteristics and ought to describe 1-1 the relation that is hidden in the real world. Still, in order to achieve that we would have to understand what is there to be modelled, i.e. what type of information we need to address.

Rational numbers address the idea that the line starting from point A and ending at point B might consist of an infinite number of points. So, the idea that there is this infinity between two points. But then we would need to have this infinity somehow tackled.

Complex numbers address the idea that we might use the unit i, which does not exist among the integers, to look at the numbers on a 2d plane. There, we also notice that all of the points on a circle have the same distance to the center point, i.e. (0,0). Such an approach could be used to put the numbers into an infinite number of planes; for all of the planes we could have circles, and then generalized circles for multidimensional spaces.

Still, is it this notion of having the same distance from a certain point that should shape the rationale for a number? Does it allow enough connection between these numbers? If not, have we thought about a notion of number where the numbers would be very strongly interconnected, i.e. interconnected in a way that would resemble some sort of multidimensional structure, as if we were looking at a crystal structure, but multidimensional?

Now, let us suppose that we number the elements by “incrementing” this generalized notion of quantity, i.e. counting along the geometries of objects consisting of yet other objects, and so on. What could such numbers look like?

Now, think again about prime numbers. If you multiply a number by a prime (a prime that is not a divisor of that number), then you somehow save the information about that object and give it a specific character. That could be used for handling an object that has some structure, i.e. using the notion of a prime we might model the geometrical (not for the eyes) structure of objects and the connections between objects. In this sense, objects with common divisors should be connected.
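
As a toy sketch of this connectedness (assuming a hypothetical encoding where each object's features are tagged with distinct primes; the object names and feature primes are made up for illustration), objects sharing a prime divisor count as connected:

```python
from math import gcd

# Hypothetical encoding: each object is a product of primes,
# one prime per feature or relation it participates in.
objects = {
    "a": 2 * 3,   # features 2 and 3
    "b": 3 * 5,   # features 3 and 5
    "c": 7,       # feature 7 only
}

def connected(x, y):
    """Objects are connected iff their codes share a prime divisor."""
    return gcd(objects[x], objects[y]) > 1

print(connected("a", "b"))  # True  (shared prime 3)
print(connected("a", "c"))  # False (no common divisor)
```

Multiplying an object's code by a fresh prime adds a feature without destroying the existing divisibility information, which is the "saving" described above.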

To even talk about such an ordering, we would have to start from a low level, i.e. the particle level. Still, it is very likely that our first attempts would not be good enough, which means we would have to make the numbering scalable and allow changes within its structure. For such modelling we could use a generalized notion of graphs, assigning primes to relation types and nodes. In the final round we would have it with primes only and could forget about the notion of a graph.

This new enhanced notion of graph could be created with prime numbers only. A few questions arise:

- are we capable of making it fully scalable and allowing changes to its structure?

- what would counting then look like, i.e. adding objects? It would probably have to look like addition in chemistry, i.e. adding H2 to O2 would give water in some sense?
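
The prime-labelled graph idea above can be sketched as follows; the node names, the relation type, and the product encoding are hypothetical illustrations, not a worked-out theory:

```python
from itertools import count

def primes():
    """Naive prime generator (trial division against earlier primes)."""
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

gen = primes()
# Assign a distinct prime to every node and every relation type.
node_prime = {name: next(gen) for name in ["H", "O"]}   # H -> 2, O -> 3
rel_prime = {"covalent": next(gen)}                     # covalent -> 5

# An edge is encoded as the product node * relation * node.
edge = node_prime["H"] * rel_prime["covalent"] * node_prime["O"]

# The structure is recoverable from divisibility alone, so the
# graph notion itself can be dropped, keeping only primes.
assert edge % node_prime["H"] == 0
assert edge % rel_prime["covalent"] == 0
print(edge)  # 2 * 5 * 3 = 30
```

Scalability comes for free in this sketch: new nodes and relation types simply consume fresh primes without disturbing the existing codes.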

We observe that having an infinite number of primes allows us to develop this system indefinitely. Building this system must be more like sketching and constant improvement rather than making everything depict the universe perfectly from the beginning. It has been addressed in one of the previous posts, though, that low-level strategies for understanding the functioning of the universe are unlikely to bring true results. Low-level strategies are more interconnected, having graphs in mind, with low-level adaptations such as algorithms, whereas higher-level strategies are more about physics and end products, more about engineering and experimentation. Connecting two dots, one of a very low mathematical level, let's say a prime, and one of very high abstraction, like the expansion of the universe, is unlikely to hold: the mathematical-level theorem would just give the physical level a background and, as such, might also often be developed without that much low-level abstraction.

New ideas of how to connect high-level dots in compact systems of high parameter space involve a lot of decomposition and machine learning, and still require a lot from mankind. Don’t be too proud, humans.

## Knowledge of the perceived only

1. Based on the model of time and space which we currently use, the universe expands. Here, our apparatus includes “G=T” and topology, therein partial, Lie and covariant derivatives, and multiple operators and products (e.g. tensor, wedge, exterior, etc.). We can no longer simply think that mass generates a gravitational field according to Poisson’s equation and that a gravitational field creates acceleration. That would mean that a geometrical n->1 (information-losing) notion of mass would allow us to understand the change in movement (velocity). At the same time, keep in mind that the definitions used in this sentence are arguable.

2. It is built of information-carrying elements. Two arbitrary elements are potentially different, i.e. to learn about all of them we need O(n), assuming we might enumerate them in at most O(n). For that we have the numbering (integers), and therefore everything that comes with the integers, including the primes. A prime allows picturing an arbitrary element in a (brain-understandable) geometry that is, in a sense, not arbitrary, as it allows no loss of generality. The question would be whether the notion of an integer itself results in a loss of information.

3. In problem solving, due to the parameter space (which we now understand even better from machine learning), the process of creating a model of the perceived (everything we might access, either by eyes, or by a set of senses, brain, etc., may be referred to as the perceived) should involve a possibly strict picture of what we perceive as correct (from axioms to lemmas and theorems in mathematics). Then, the strategic approach would be to find the places where the theorems most likely connect to build new knowledge. Building working products rather than low-level algorithms then involves connecting the dots in engineering rather than low-level dots, due to the parameter space. Finding the places of dense connections should also be as effective as possible, and that has been thoroughly studied in machine learning.

4. All elements of engineering and low-level development of abstract thinking may be automated. One of the current advantages of humans over machines boils down to building models of high resolution and then applying fuzzier logic based on pattern recognition, where two “similar” patterns might be only slightly connected (inspiration) or a bit more, still slightly (rough guessing). Therefore, using newly implemented pattern recognition, we might also automate higher-level knowledge building through fuzzier pattern recognition.

5. Knowledge is confined to the perceived, as it is built on the perceived. Also, it cannot be built on anything other than the perceived. The “good” side of this is that it allows learning about the entire perceivable. The “bad” side is the same, i.e. it is only the perceivable that might be learnt. Defining what is perceivable and what is not remains an open question. For now, we could start by saying that everything we can imagine is perceivable. Therein, assuming that we could imagine a human, we could create one ourselves after the process of learning how to do it. To make it clearer: if we can imagine the notion of a human (or, in general, the notion of a being, if a human is one of the beings not distinguished by having a non-perceivable element), then we will be able to create it ourselves.

6. Claiming the existence of the non-perceivable cannot be supported, by its very definition, i.e. we assume that we can imagine an object, therefore we might be able to create it. Any development within the perceivable exists in the perceivable, hence our current methodology of knowledge building. Claiming the existence of the non-perceivable might make sense, but cannot help us develop better unless we also contain an element of the non-perceivable, being non-perceivable ourselves, i.e. being capable of understanding the non-perceivable, a contradiction.

7. From 6. we learnt that there might exist new ways of cognition, which could let us learn about an element that does not belong to the plane containing our potential learnings. Therefore, the non-perceivable would be non-perceivable only in the context of the learning methodology. Improving it might allow us to go further.

8. Going further than in 7., what if we could develop our cognition to learn about the things that stay beyond our current perception? Imagine that you are blind, thinking about how the world must look. You see it black, but then you fall or bump into a wall, perceive different temperatures throughout the day, become more tired after a long period of running. You then assign a model for describing new vectors of information: temperature, time, etc. The model involves a number of parameters which then turn out to be somewhat connected. We are that blind, i.e. this is exactly our situation.

9. Based on the learnings from 8., we observe that finding more description vectors and their connections, as well as automation, might allow a faster development of the unperceived in the context of “previous” learning capabilities. Assuming that there is nothing globally unperceivable, we could learn everything. Otherwise, if we assume an element of the globally unperceivable within us, then we cannot perceive the entire “I”, and therefore cannot take responsibility for “I”. So, we assume no globally unperceivable within us.

## Elaboration on two known issues in the context of finding an automated approach

Each post marked with “known” in the title is about known results and concerns my notes; many of the notes are not thoroughly checked and the solutions are non-rigorous. In this post, we are going to start with the problems, have an informal look at them as well as at their surroundings, and then try to elaborate on a potentially interesting direction in problem solving. The key point will be to use a multi-variable approach, i.e. the conversion of the variables of the original task into a multivariable task, then introduce strategies that would take advantage of contradiction search, and then involve 2nd-price-auction problem solvers bidding for their chance, which would finally lead to more structured problem solving. This post is very informal and remains part of my internal analysis of certain problems, and the reader might find it difficult to read.

### $\cos 1^\circ$ is irrational

As many of you know the answer to this problem, I will make a short introduction on how to handle problems in a generic way. What we currently know is that we will be dealing with a function. Will we be analyzing its values or only one value? We will be analyzing its values in general unless we notice something special about the value for the argument equal to $1^\circ$. Here, we are dealing with a known function, and students know that the cosine function also has irrational values. Assuming that we will not pursue the path claiming that the argument $1^\circ$ is in some way special, we will focus on taking a closer look at more values of the cosine function.

Knowing that we will focus on more values of the cosine function, let us think about what to do next. Here we have the case that we know much about the function itself; in generic cases that is not so and we should learn more about the function. That is indeed the very first thing, as it might show us that, for instance, the argument $1^\circ$ is a special case. Here, we don’t claim it.

As we know that the cosine function has irrational values, we might try to build a chain of implications based on the character of the cosine function to connect $\cos 1^\circ$ with $\cos 30^\circ$, whose value, $\sqrt{3}/2$, is irrational. If that were possible, we would be able to prove the task.

Let us now assume, for the sake of contradiction, that $\cos 1^\circ$ is rational. From the character of the cosine function we have $\cos(2x)=2\cos^2(x)-1$, which leads to the conclusion that $\cos 2^\circ$ is also rational. Knowing that $\cos\big((n+1)^\circ\big)+\cos\big((n-1)^\circ\big)=2\cos n^\circ\cos 1^\circ$, we see that for $n=2$ we obtain a rational value of $\cos 3^\circ$. Iterating, we obtain a rational value of $\cos 30^\circ=\sqrt{3}/2$, which is a contradiction, making our assumption false, q.e.d.
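
The two identities driving this chain can be checked numerically; this is only a sanity check of the algebra, not part of the proof:

```python
import math

def cosd(x):
    """Cosine of x degrees."""
    return math.cos(math.radians(x))

# cos(2x) = 2 cos(x)^2 - 1
assert abs(cosd(2) - (2 * cosd(1) ** 2 - 1)) < 1e-12

# cos(n+1) + cos(n-1) = 2 cos(n) cos(1), for every step of the chain up to 30
for n in range(2, 30):
    assert abs(cosd(n + 1) + cosd(n - 1) - 2 * cosd(n) * cosd(1)) < 1e-12

# The chain terminates at cos 30 degrees = sqrt(3)/2, a known irrational.
print(round(cosd(30), 6))  # 0.866025
```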

What we managed to do was to find a chain of implications based on the character of the cosine function, showing that rationality of $\cos 1^\circ$ would imply rationality of $\cos 30^\circ$. But why did we choose this solution?

Let’s take a closer look at the problem again. We have a special case. If we don’t say it is special indeed, we must know how to connect the values of $\cos 30^\circ$ and $\cos 1^\circ$. At the same time, we don’t know offhand whether $\cos 1^\circ,\cos 2^\circ,\ldots,\cos 89^\circ$ are rational or not. If we were not able to connect these two values, we would have to find something special about $\cos 1^\circ$.

Reductio ad absurdum is, as such, an assumption of the opposite, so what matters in using it is finding the place where the logical chain leads to a contradiction with the very assumption. So, before diving into the line of the proof, we need to be able to tell how we could find the contradiction.

A good example of this is the proof of the irrationality of $\pi$ or $e$. In the first proof, (for the sake of contradiction) we assume $\pi =\frac{p}{q}$, where $p, q$ are coprime integers, and then consider $I_n =\int_0^{p/q}\frac{x^n(p-qx)^n}{n!}\sin x \, dx$, $n\in \mathbb{N}$, showing that $I_n$ is an integer, $I_n \ge 1$, and that $I_n$ converges to 0, which is a contradiction. In the second proof, for any natural number $n$ we have $e =\sum_{k=0}^n\frac {1}{k!}+\int_0^1\frac{(1-t)^n}{n!}e^t dt$ and, for any $n$, $0 < \int_0^1\frac{(1-t)^n}{n!}e^t dt <\frac {3}{(n+1)!}$. To conclude, in both cases we made assumptions of the opposite as we had a strategy of finding a contradiction during the analysis of a function (another presentation of the same value) that takes advantage of the element in question, namely either $\pi$ or $e$. Having that in mind, before diving into the very line of contradiction, we should think about how to embed the element in question into an environment in which we feel more comfortable, i.e. one where we can foresee potential contradictions. To find the contradiction, we just analyze different aspects of the function in the given environment.
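
The remainder bound used in the second proof can be verified numerically for small $n$ (again only a sanity check of the stated inequality):

```python
import math

# e = sum_{k=0}^{n} 1/k!  +  R_n,  with  0 < R_n < 3/(n+1)!.
# Check the bound for n = 1..14 (beyond that, float rounding dominates).
for n in range(1, 15):
    partial = sum(1 / math.factorial(k) for k in range(n + 1))
    remainder = math.e - partial
    assert 0 < remainder < 3 / math.factorial(n + 1)
```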

### Prove that $ab+cd$ is not a prime: an example of looking at the same thing from different perspectives

Let $a, b, c, d$ be integers with $a>b>c>d>0$. Suppose that $ac+bd =k(b+d+a-c)$ for some $k \in \mathbb{Z}$. Prove that $ab+cd$ is not prime. Before going further with the proof, I will address a couple of issues.

Firstly, we don’t know what the most crucial elements of a certain task are. Even if such elements do not exist, the more we know about the problem, the more likely it is that a reasonably interesting solution or new theory will be created. New perspectives enable different decompositions of the problem, and a new decomposition allows new attack strategies. To increase the amount of information we have about a specific problem, we are going to use computers for gathering information and for deduction. In the long run computers will also be used even more for deduction, thus the need to redefine the position of humans shows up on the horizon.

Secondly, in our task we do have a certain equation we are used to, thus the very first idea for many is to rearrange it. That, combined with the fact that the equation contains an additional parameter, suggests that we are going to be dealing with a proof with multiple cases. Still, such proofs are rarely elegant, as instead of a logical chain they represent a set of “if” clauses handled with smaller chains. For automated theorem proving such an approach would have the potential to give good results due to the computational capabilities of computers, while in practice it won’t, because heuristics are often enough for computers to assume associations without the need for a legitimately rigorous proof.

Those rigorous proofs are still required for the sake of developing strategic thinking, while it is often the case that computers might take advantage of assumptions, which in certain cases turn into reductio ad absurdum. For problems in the area of automated theorem proving or computational complexity, automating the conjecturing of assumptions cuts the domain of definition effectively enough to enable further heuristics. Given a certain degree of pure iteration, we arrive at the very useful assumptions and a decision tree with certain branches handled by proofs by contradiction (those branches are pruned later on).

Let’s now get back to our initial question. Firstly, in our task there are many trivial cases; those will be omitted in the analysis. We notice that the number we will be checking for primality is a sum. We don’t have any tools for verifying sums for primality. I have made a proposition of research in the field of number spectral analysis and the R-sequence, but it is omitted here. So, for now, we don’t have any method for verifying the primality of a sum of two numbers. Now, another set of words from the given task: “not prime”. To show that an integer is not prime, we need to show that it has more than two divisors, i.e. its number of divisors belongs to the set $\{3,4,5,\ldots\}$. Let’s now assume the number in question is prime. Since $ab+cd=(a+d)c+(b-c)a$, the number $g=\gcd(a+d,b-c)$ divides $ab+cd$, so $ab+cd=mg$ for some $m\in \mathbb{N}$, and primality leaves two cases: $m=1$ or $g=1$. For $m=1$, we have $ab+cd=\gcd(a+d,b-c)<a+d$, which contradicts $ab+cd>a+d$. For the case $\gcd(a+d,b-c)=1$, the hypothesis gives $b+d+a-c\mid ac+bd$, i.e. there exists $p$ such that $ac+bd=p(b+d+a-c)$. Also, $ac+bd=(a+d)b-(b-c)a=p(b+d+a-c)=p(a+d)+p(b-c)$, thus $(b-p)(a+d)=(a+p)(b-c)$. From the assumption for this case, $\gcd(a+d,b-c)=1$, we have $a+p=(a+d)k \rightarrow b-p=k(b-c)$ $\rightarrow a+b=k(a+b+d-c)$. We also see that if $k=1$ then $c=d$, which is a contradiction again. And for $k\ge 2$ we have $a+b\ge 2(a+b+d-c) \rightarrow 2c\ge a+b+2d$, which again is a contradiction. Having covered all the cases, we are done.
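
A brute-force search (a sketch; the bounds are arbitrary) confirms the statement on small quadruples satisfying the divisibility hypothesis:

```python
def is_prime(n):
    """Trial-division primality test, enough for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

hits = []
for a in range(4, 41):
    for b in range(3, a):
        for c in range(2, b):
            for d in range(1, c):
                # the hypothesis: (b+d+a-c) divides ac+bd
                if (a * c + b * d) % (b + d + a - c) == 0:
                    hits.append((a, b, c, d, a * b + c * d))

assert hits, "no quadruples found in range"
# Every ab+cd arising from the hypothesis is composite in this range.
assert all(not is_prime(v) for (*_, v) in hits)
print(len(hits))
```

One such quadruple is $(a,b,c,d)=(13,11,9,1)$: here $ac+bd=128=8\cdot 16$ and $ab+cd=152=8\cdot 19$, composite as claimed.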

From the perspective of automated theorem proving, we would have many special cases and verification of contradictions. So, a computer could do our thing here, and this was the reason why I chose this task. Another thing is to try to understand what the equation might mean. Geometrically, if we have the sum of two areas (a carrot per lattice point) equal to the area on the RHS, i.e. the product, we already see that the LHS’s area can be shown as a 2d area over the integers. And knowing that this equation is true, we have to prove that yet another sum of two areas can also be shown in 2d (it is enough, as not being prime means having a nontrivial factorization, i.e. exhibiting two factors is already enough).

And now, knowing the result from the proof with many special cases, we already know the answer, and based on that we might try to notice geometrical connections. These could be used for looking at such proofs from yet another perspective, for instance, seeing $p_1p_2$ as the area of a rectangle. Like in the example where a rectangle with sides $a$ and $b$ is arbitrarily cut into $n$ squares with sides $x_{1},\ldots,x_{n}$, and we need to show that $\frac{x_{i}}{a}\in \mathbb{Q}$ and $\frac{x_{i}}{b}\in \mathbb{Q}$ for all $i\in\{1,\ldots, n\}$. At first sight we see that it is an interesting result connecting geometry and numbers. The generally known solution to this task is to express the intersections in the square tiling as a grid, and then use a basis of the set of lengths of the squares as a representation for constructing two lemmas regarding the grid structure, to finally be able to deduce the answer to the question while computing the area of the rectangle by adding up all the squares from the grid.

### The point

Now, if we take a closer look at both issues, we notice that for building long implication chains while at the same time keeping the argument quite self-contained, it might be an effective approach to introduce more variables that describe the known definition somewhat differently and then look for potential contradictions. In the case of $\cos 1^\circ$, it was relatively easier than in the case of either $\pi$ or $e$. Still, in the general case we would have a situation quite similar to the case of the latter, also in many variables. Combining that with a visual representation described by the newly introduced variables would give us a chance to deal with the problem twofold, i.e. by visually “noticing” contradictions (using a finite computational resource, either our head or a computer) or through finding the contradictions as indicated in the first examples and many known results. That, combined with a more computational approach to set theory, would let us use finitary formulations of infinitary statements and quite a different approach to mathematical objects (one is more finitary than the other, oracles), and that could be used for automated problem solving using problem decomposition (from the examples), contradiction search, as well as convex optimization for minimizing the time in which the task is solved.

So, for yet another problem, we would introduce a model based on new variables that describe the known facts and at the same time might allow contradiction search. Such a multi-variable approach would also allow more effective problem handling using a computer. Then, we would have different problem-solving strategies ready, based on different problem decompositions. The strategies would bid based on the decomposition of the problem, as not all might have enough resources to work simultaneously, and would bid truthful values to ensure the dominant strategy of a k-th price auction, taking the required resources into consideration. Then, we’d let the winner work first, trying to look for contradictions based on the pre-programmed knowledge. The ideas for doing that we can derive from these examples, i.e. to deduce the most trivial knowledge about the character of the two sides of the equalities of a specific problem.
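
A minimal sketch of the bidding step, assuming hypothetical strategy names and bid values; with a second-price (Vickrey) rule, bidding one's true estimated value is a dominant strategy, which is the property relied on above:

```python
def second_price_auction(bids):
    """bids: {strategy_name: truthful value}. Returns (winner, price paid).

    The highest bidder wins but pays only the second-highest bid,
    so no strategy gains by misreporting its value.
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

# Hypothetical solver strategies bidding for the right to work first.
bids = {"contradiction-search": 9.0, "case-split": 6.5, "geometric": 4.0}
winner, price = second_price_auction(bids)
print(winner, price)  # contradiction-search 6.5
```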

Also, we should consider the relative construction of geometrical objects, so that we could decompose such problems too, or decompose other problems into such problems, so that our human brain would also be more likely to assist in finding vulnerable points. It is crucial to note that, as of now, human brains are better at finding patterns than computers, due to the fact that knowledge based on numerous senses is, by default, far more structured and the patterns are likely more blurred, i.e. the comparison of two extensive branches of a tree is less accurate but gives useful hints and approximations. Such approximations could be used to build the problem-solving strategy before launching the contradiction search process, due to the parameter space.

## From the idea of M-tree to number M transformation

Firstly, a short elaboration on the decimal system and similar systems (e.g. binary) in general. A primitive of the number shown in such a system carries information about its value (quantity related), i.e. it does not carry information about the connections between this and other primitives (digits). Currently, from what we understand about digits, everything we learn from them is absolute quantitative information. Still, say, Alice has 4 eggs and Bob has 3 eggs. Could we learn more?

Take 2838195719. Divide the digits into subgroups: (28)(38)(19)(57)(19). For all subgroups find the prime-power divisor list (excluding 1): (2, 2^2, 7)(2, 19)(19)(3, 19)(19). See if you can find a connection between the divisors of the subgroups across the entire number.
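
A quick script for experimenting with such subgroupings (the two-digit split is just the example above; other cluster sizes can be tried the same way):

```python
def prime_factors(n):
    """Prime factorization of n as (prime, exponent) pairs, by trial division."""
    out, d = [], 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            out.append((d, e))
        d += 1
    if n > 1:
        out.append((n, 1))
    return out

n = "2838195719"
groups = [int(n[i:i + 2]) for i in range(0, len(n), 2)]
print(groups)  # [28, 38, 19, 57, 19]
print([prime_factors(g) for g in groups])
# [[(2, 2), (7, 1)], [(2, 1), (19, 1)], [(19, 1)], [(3, 1), (19, 1)], [(19, 1)]]
```

Note how the prime 19 recurs across four of the five subgroups, the kind of cross-subgroup connection the question asks about.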

For instance, here the M-transformation returns as output the path built from the numbers connected with a red line. Can this information help us decide about primality?

Let $f_k(x,y,z,t) =2^x+3^y+5^z+7^t$, $x,y,z,t \in \mathbb{N}$, where $k=4$ defines the number of consecutive primes used in the function, i.e. here: $\{2,3,5,7\}$. For every prime $p$ find all $(x,y,z,t)$ such that $f_4(x,y,z,t)=p$ (if any exist).

An additional question: how does this change for $f_2(x,y)=2^x+3^y$ (with the rest of the task changed accordingly)?
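
A brute-force sketch for the $k=4$ case (exponents are assumed to start at 1 here, which is one reading of $\mathbb{N}$; the exponent bound is arbitrary):

```python
from itertools import product

def representations(p, max_exp=10):
    """All (x, y, z, t) with exponents in 1..max_exp and 2^x+3^y+5^z+7^t = p."""
    hits = []
    for x, y, z, t in product(range(1, max_exp + 1), repeat=4):
        if 2 ** x + 3 ** y + 5 ** z + 7 ** t == p:
            hits.append((x, y, z, t))
    return hits

# 2 + 3 + 5 + 7 = 17 is the smallest attainable value, so the
# representation of the prime 17 is unique.
print(representations(17))  # [(1, 1, 1, 1)]
```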

## Logical struggles regarding the vagueness and partial knowledge

Some of my recent struggles concern logic. Therein, embedded in the logic that we know, we deal with a binary assessment of a logical value based on the environment of claims surrounding the sentence in question. What does that mean? We could say that a certain sentence is right given certain knowledge about the universe that surrounds it. However, gaining more knowledge about the universe could make the same sentence false. In this context we could mention the set theory axioms that faced numerous contradictions in the past. An example would be, for instance, the enumeration of all integers (countable). The enumeration takes place in an environment where we can afford to enumerate all of them and the structure of the universe does not contradict that capability. If we think of this enumeration as enumeration in naive set theory, i.e. the enumeration of a set of integers, it might be the case that not all the integers are in that set, etc.

Dr. Tao refers to “infinitary” and “more infinitary” sets as well as their corresponding computational requirements, which clearly indicates the problem behind the word “naive” that should be used for all the theories that we know. Under very specific conditions a certain theory might be true, but at the same time it might be the case that a larger picture would allow us to find a leak within the very theorem. Yet another thing is that the level of formality used to build the model used for the proof of the theorem might involve low-level problems that we fail to see. For instance, in a standard axiom system in set theory (ZFC), we currently don’t know of any paradoxes. At the same time, it is possible that some exist. Enumerating them is also a function of tree traversal and space partitioning (of the problem space) and therefore has its own corresponding computational requirements.

Based on the aforementioned, it is often the case that objects that we create cannot really exist and exist only due to vagueness. Detecting this vagueness requires in-depth verification of each object used in a proof. Each object is constructed using a language that is very likely to leak more when we get to know more about the universe. And it is the set, i.e. the construction of “multiple” primitives carried in a bag that exists next to the primitives (numbers) and is widely used in the language of mathematics, where we have found so many problems. The same applies to understanding the structure of numbers.

All that, combined with Tarski’s undefinability theorem and impredicativity, allows more space for rethinking how numbers as primitives and sets as aggregates of primitives should describe the universe as we know it now, i.e. the one we know so little about, so that we don’t have to rewrite too much in case of fundamental contradictions in our understanding of the universe. An interesting approach to thinking about sets has been presented by Dr. Tao, who put forward the Banach-Tarski paradox and Cantor’s theorem in a “finitary” manner, using the notion of an oracle. It might be the case that we should think about a mathematical tool for finding contradictions within the mathematics that we currently use, i.e. an analyser of the formal language used for constructing mathematical objects and proofs.

Question list:

1. Is there a way to describe the movement of planets, assuming that there are $k\ge 2$ planets in the example, using GTR?

2. From Mr Lipton’s blog:

“GLL: How can the same system be complete and incomplete? How can making something stronger make it incomplete?

Gödel: Ach—your words in English are too short. We have longer words so we think more before finishing them. Less confusion.”

The question is: could we redefine all the words and use only very accurate wording for whatever we want to state?

3. What are good examples of proofs that take advantage of induction and at the same time use an ordering other than the integer ordering (n->n+1)? What about p(n)->p(n+1)?

4. Could we run an induction using a Turing machine built out of logical statements, and then use induction with an ordering defined by how deep we are in the tree (using a tree traversal rather than the known sequential traversal)?

5. In the case of a complex game where it is difficult to define the goal and there is a certain number of constraints that keep us alive in the game (e.g. life), should we always look for implied odds rather than the odds for a certain decision?

6. How is it now possible for a single person to conduct real astronomical research which would connect mathematical modelling with large amounts of data?

The second part of this post follows, as I am a little bit tired of logic now.

Whenever I think of the theorems regarding inequalities (Muirhead, Jensen, WPM, Hölder, Rearrangement, Chebyshev, Schur, Maclaurin, majorization, Bernoulli, etc.), I think about three different things:

- how a certain function acts given a specific “extensive” argument,

- what the types of interesting “extensive” arguments are (this “extensive” argument could be either $A = x_1+x_2+\dots+x_n$ or $B = w_1 x_1 + w_2 x_2 + \dots + w_n x_n$); then we could think about whether we can generalize the relation between $f(A^B)$ and $f(B^A)$,

- ways to settle the generalizations about the function $f(C) = {{f(A)}\over{f(B)}}$ (or similar) using (human) analytic capabilities, automated (computational) analytic capabilities, or heuristics.

What comes to mind is that for creating such theorems we are using:

- analytic (non-combinatoric) analysis of $f(C)$; from there we can proceed in the area of a more automated analytic approach using the problem solvers; here we would also use some sort of theory-creating tool that would describe a feature of the function and then investigate it for $f(C)$,

- permutations; our brain will only succeed in finding the easiest-to-spot connections, whereas what we need is a tool that will permute and test.

When it comes to the analytic approach, we do have manifold notions, including smoothing (unsmoothing), convexity (concavity), extrema, constrained extrema, the derivative test, the Hessian test, etc. When it comes to permutations, we have majorization, symmetric sums, etc. It might also be the case that it is possible to reduce the dimensionality of the problem, i.e. decrease the number of variables used in the problem.
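
The “permute and test” idea can be sketched on the rearrangement inequality: among all pairings of two sorted sequences, the identically sorted pairing maximizes the sum of products (the numbers below are arbitrary toy data):

```python
from itertools import permutations

a = [1, 3, 5, 7]
b = [2, 4, 6, 8]

# Exhaustively permute b and test every pairing against a.
best = max(sum(x * y for x, y in zip(a, p)) for p in permutations(b))
sorted_pairing = sum(x * y for x, y in zip(a, b))

print(best, sorted_pairing)  # 100 100
```

This is exactly the brute-force companion to the analytic theorems listed above: the machine permutes and tests, and the human looks for the pattern in which pairings win.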

## Measuring number symmetry and introduction of M(k)-tree

Starting from the last digit of every prime number (above 5, this is always 1, 3, 7 or 9), we might iteratively append digits to the left of the last digit (the first one added). This would be a way to create all odd numbers not divisible by 5. Now, to decrease the probability of constructing a non-prime number, we might think of a new way to handle divisibility. We notice that divisibility by 3 and 9 is connected with sums of digits. Due to the character of the sum, we notice that for a number with digits abcdefg, when we cluster (ab)(cdef)(g) or (abc)(defg), the remainders of the clusters behave the same way: they cannot add up to a number divisible by 3 or 9 unless the number itself is. Not only the remainders but potentially also the primality of the clusters matters. Or symmetry. This post is just an indication that I would like to think about the symmetry of numbers. How could we measure the symmetry of a number in order to say whether it is prime or not? What could be interesting clusterings of a number?
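The cluster-remainder observation can be checked directly: since $10^k \equiv 1 \pmod 9$, any clustering of a number's digits into consecutive blocks sums, as blocks, to the same residue mod 9 (and hence mod 3) as the number itself. A minimal sketch in Python (the helper name `clusters` is my own):

```python
# Any digit clustering preserves the residue mod 9, because each block's
# positional weight 10^k is congruent to 1 mod 9. So if n is not divisible
# by 3, no clustering's block values can add up to a multiple of 3.

def clusters(n, sizes):
    """Split the decimal digits of n into consecutive blocks of given sizes."""
    s = str(n)
    assert sum(sizes) == len(s)
    out, i = [], 0
    for k in sizes:
        out.append(int(s[i:i + k]))
        i += k
    return out

n = 1042341
for sizes in [(2, 4, 1), (3, 4), (1, 1, 1, 1, 1, 1, 1)]:
    blocks = clusters(n, sizes)
    print(sum(blocks) % 9 == n % 9)  # True for every clustering
```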

As for symmetry, I would like to introduce yet another abstract idea that came out of my dream. Take 1042341 and let's paint its M(1)-tree.

1042341

We start from the last digit, as this is the digit that starts the number and might already tell us that we are not dealing with a prime. Let's start subtracting: (n+1)-th digit minus n-th digit, counting from the right.

1. We have 1. (black)

The next digit is 4, so 4 - 1 = 3 (green, as positive).

2. So, we have 3.

Then, we have 3 - 4 = -1 (red, as negative).

3. So, we have -1. Continuing: -1, 2, -4, 1.

4. All in all: 1 black, 3 green, -1 red, -1 red, 2 green, -4 red, 1 green.

Painting it here:

We could paint this for any number; for M(1) the tree's length is the length of the number minus 1. In the general case, we could cluster the digits into clusters of the same length and then apply the same subtraction to the clusters, treating each cluster as a number. For instance, for 1042341 we have only one option, as the number of digits is prime, but for 10423411 we might have (10)(42)(34)(11) or (1042)(3411). In the first case the result would be 23, 8, -32. In the second one: -2369.
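The M(k) construction can be sketched in a few lines of Python (the function name `m_tree` and the exact output format are my own choices): split the digits into blocks of length k, read the blocks from the right, keep the first block, then take successive differences.

```python
# M(k)-tree sketch: blocks of k digits, read right-to-left; the first
# block is kept ("black"), then each step records next block minus the
# previous one (positive = "green", negative = "red").

def m_tree(n, k):
    s = str(n)
    if len(s) % k:
        raise ValueError("digit count must be divisible by k")
    blocks = [int(s[i:i + k]) for i in range(0, len(s), k)]
    blocks.reverse()                      # start from the last block
    start = blocks[0]
    diffs = [b - a for a, b in zip(blocks, blocks[1:])]
    return start, diffs

print(m_tree(1042341, 1))   # (1, [3, -1, -1, 2, -4, 1])
print(m_tree(10423411, 2))  # (11, [23, 8, -32])
print(m_tree(10423411, 4))  # (3411, [-2369])
```

The three printed cases reproduce the hand-computed examples above.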

And the question: does the primality of the number of digits of a certain number influence the probability of that number being prime?
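A tiny empirical probe of this question is possible, though only as a sketch: among d-digit numbers, the fraction of primes falls roughly like $1/(d \ln 10)$ by the prime number theorem, whether or not d itself is prime, so detecting any extra effect of d's primality would need a far larger experiment than this one. All names below are my own.

```python
# Prime density per digit length d, with a flag for whether d is prime.
# At this scale the density simply decreases with d; this is only a probe.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for d in range(1, 5):                      # digit lengths 1..4
    lo, hi = 10 ** (d - 1), 10 ** d
    density = sum(is_prime(n) for n in range(lo, hi)) / (hi - lo)
    print(d, is_prime(d), round(density, 3))
```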

## Primes modulo 3 are either 1 or 2, i.e. concatenating for prime finding

Divisibility by 3, 5, 7, 11, etc. (for many numbers) is something that is not new to us; we understand it. For the concatenated quarcs to be prime, the number cannot be divisible by any of those numbers, in particular by 3. For this case, we should understand that we have two types of primes (above 3): those that give 1 mod 3 and those that give 2 mod 3. When concatenating, that must be borne in mind. That's a good start. Also, the last quarc factor cannot be divisible by 5, i.e. the last digit of the last quarc factor cannot be 5, etc. These are the rules. Without a generalization of divisibility rules, one cannot do much with this to find big primes by hand. Therefore, yet another type of approach is suggested.
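The mod-3 bookkeeping under concatenation is simple to state: the digit sum of a concatenation is the sum of the parts' digit sums, so the residue mod 3 of the concatenated number is just the sum of the parts' residues mod 3. A minimal check (the helper `concat` is my own name):

```python
# Residues mod 3 add under decimal concatenation, because a number is
# congruent mod 3 to its digit sum and concatenation adds digit sums.

def concat(parts):
    return int("".join(str(p) for p in parts))

parts = [2, 11]            # residues mod 3: 2 and 2
n = concat(parts)          # 211
print(n % 3 == sum(p % 3 for p in parts) % 3)  # True
```

So a concatenation whose parts' residues sum to 0 mod 3 can never be a prime above 3, which is exactly the constraint to bear in mind when building candidates.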

211, 311 and 811 are primes. 211 = (2)(11) = (21)(1): in the first split both parts are prime; in the second, there is 1 and the composite 21. For 311, we have 311 = (3)(11) = (31)(1), i.e. in the first split both parts are prime and in the second there is 1 and the prime 31. For 811, we have 811 = (8)(11) = (81)(1): the composite 8 with the prime 11, and the composite 81 with 1. 1091, 1559 and 2053 are all prime; 1091 = (109)(1) = (10)(91), etc.
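The two-part splits above are easy to enumerate mechanically. Here is a small helper along those lines (the function names and output format are my own; trial division is enough at this scale):

```python
# List every two-part split of a number's decimal digits and report
# whether each part is prime, reproducing the (2)(11) / (21)(1) analysis.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def splits(n):
    s = str(n)
    out = []
    for i in range(1, len(s)):
        a, b = int(s[:i]), int(s[i:])
        out.append((a, b, is_prime(a), is_prime(b)))
    return out

for row in splits(211):
    print(row)
# (2, 11, True, True)
# (21, 1, False, False)
```

Note that a split like (10)(91) of 1091 silently drops any leading zeros of the right part when it is read back as an integer.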

And now, we could do the following:

- for different pattern-based number construction models, check if a number is prime,

- (A) cluster the results,

- (B) use matrix factorization with kernelling to disclose the pattern,

- conjecture a linear problem based on the results,

- try out convex optimization to redefine the clusters and go back to point (B).

Below is just a poetic infographic of my thoughts. Mostly, it is about choosing one's own goals and passing the needy in the streets.

As for the lessons from Euler, who was indeed mathematically gifted for his age, I will refer to things in a longer book. That'd be more than referring to all his papers. As for the letters in the infographic, it says GOEDEL, as it was his thought to use contradictions to get rid of “wrong paths” within the thought process. Therefore, we should decide about the structure of life. Indeed, we could start from memento mori. But it is right there that we notice that we cannot really be dead, as we have influenced the world, and that influence is also part of us, not only the physical body. For those that focus on physical “being”, such being, as it is, would require us to accept its gradual loss.