1. According to the model of space and time we currently use, the universe expands. Here, our apparatus includes “G = T” and topology, and therein partial, Lie, and covariant derivatives, along with multiple operators and products (e.g. tensor, wedge, exterior, etc.). We can no longer think that mass generates a gravitational field according to Poisson’s equation and that a gravitational field creates acceleration. That would mean that a geometrical n→1 (information-losing) notion of mass would allow us to understand the change in movement (velocity). At the same time, keep in mind that the definitions used in this sentence are arguable.
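For reference, the “G = T” shorthand and Poisson’s equation mentioned above can be written out in their standard textbook form (conventional notation, not specific to this text):

```latex
% Newtonian picture: mass density sources the potential, the potential
% gradient gives acceleration (Poisson's equation)
\nabla^2 \phi = 4\pi G \rho, \qquad \mathbf{a} = -\nabla \phi

% Relativistic picture: the "G = T" shorthand stands for the Einstein
% field equations, relating spacetime curvature to stress-energy
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

The point of the contrast is that the Newtonian picture compresses all information about matter into a single scalar density ρ, while the relativistic one keeps a full tensor of information on the right-hand side.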
2. The universe is built of information-carrying elements. Two arbitrary elements are potentially different, i.e. to learn about all of them we need O(n) work, assuming we can enumerate them in at most O(n). For that we have the numbering (the integers), and therefore everything that comes with the integers, including the primes. A prime allows picturing an arbitrary element in a geometry understandable to the brain, one that is, in a sense, not arbitrary, as it allows no loss of generality. The question would be whether the notion of the integer itself results in a loss of information.
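The two claims above can be sketched concretely: visiting n potentially-distinct elements costs O(n), and numbering them brings the integers, and hence the primes, along for free. This is a minimal illustrative sketch, not part of the original argument:

```python
# Sketch of point 2: since any two elements may differ, learning about
# all n of them requires touching each once (O(n)); the act of numbering
# them yields the integers, and with them the primes.

def enumerate_elements(elements):
    """Assign an integer to every element: one pass, O(n) work."""
    return {i: e for i, e in enumerate(elements)}

def primes_up_to(n):
    """Sieve of Eratosthenes: the primes that come 'for free' with the integers."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

print(enumerate_elements(["a", "b", "c"]))  # {0: 'a', 1: 'b', 2: 'c'}
print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```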
3. In problem solving, the size of the parameter space matters (something we now understand even better from machine learning). The process of creating a model of the perceived (everything we might access, whether by the eyes, a set of senses, the brain, etc., may be referred to as the perceived) should involve as strict a picture as possible of what we perceive to be correct (from axioms to lemmas and theorems, as in mathematics). The strategic approach would then be to find the places where the theorems most likely connect, and build new knowledge there. Because of the size of the parameter space, building working products rather than low-level algorithms means connecting the engineering dots rather than the low-level ones. Finding the places of dense connections should likewise be doable efficiently, and that has been thoroughly studied in machine learning.
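The idea of “finding the places of dense connections” can be sketched as a toy graph problem: treat theorems (or engineering components) as nodes, shared concepts as edges, and rank nodes by how many connections they have. The graph below is entirely hypothetical, chosen only for illustration:

```python
# Toy sketch: nodes with the most edges are candidate places to
# connect the dots and build new knowledge. The edge list is made up.

from collections import defaultdict

edges = [
    ("pythagoras", "triangles"), ("pythagoras", "distance"),
    ("distance", "metric_spaces"), ("metric_spaces", "topology"),
    ("distance", "inner_products"), ("inner_products", "fourier"),
]

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Sort nodes by connection count, densest first.
densest = sorted(degree, key=degree.get, reverse=True)
print(densest[0])  # 'distance' is the best-connected node in this toy graph
```

Real systems would use richer centrality measures than raw degree, but the strategy is the same: look where many established results meet.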
4. All elements of engineering and the low-level development of abstract thinking may be automated. One current advantage of humans over machines boils down to building high-resolution models and then applying fuzzier logic based on pattern recognition, where two “similar” patterns might be only slightly connected (inspiration) or a bit more, though still slightly (rough guessing). Therefore, using newly implemented pattern recognition, we might also automate higher-level knowledge building through fuzzier pattern recognition.
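The “slightly connected” patterns above can be given a minimal operational sketch: represent patterns as vectors and call them slightly connected when their cosine similarity clears a loose threshold. The vectors and the threshold here are arbitrary choices for illustration, not a claim about how inspiration actually works:

```python
# Minimal sketch of fuzzy pattern matching via cosine similarity.
# High similarity ~ "inspiration"; low similarity ~ no useful link.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

pattern_a = [1.0, 0.9, 0.1]   # a known pattern
pattern_b = [0.9, 1.0, 0.3]   # a "similar" pattern
pattern_c = [0.0, 0.1, 1.0]   # a dissimilar one

print(cosine(pattern_a, pattern_b))  # high: the patterns are connected
print(cosine(pattern_a, pattern_c))  # low: no useful connection
```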
5. Knowledge is confined to the perceived, since it is built on the perceived; it cannot be built on anything else. The “good” side of this is that it allows learning about the entire perceivable. The “bad” side is the same, i.e. it is only the perceivable that can be learnt. Defining what is perceivable and what is not remains an open question. For now, we could start by saying that everything we can imagine is perceivable. Hence, assuming we could imagine a human, we could create one ourselves after learning how to do it. To make this clearer: if we can imagine the notion of a human (or, in general, the notion of a being, if the human is a being not distinguished by having a non-perceivable element), then we will be able to create it ourselves.
6. Claiming the existence of the non-perceivable cannot be supported, by its very definition, i.e. we assume that we can imagine an object and therefore might be able to create it. Any development within the perceivable exists in the perceivable; hence our current methodology of knowledge building. Claiming the existence of the non-perceivable might make sense, but it cannot help us develop further unless we ourselves contain an element of the non-perceivable, being non-perceivable ourselves, i.e. being capable of understanding the non-perceivable, which is a contradiction.
7. From 6. we learnt that there might exist new ways of cognition, which could let us learn about an element that does not belong to the plane containing our potential learnings. The non-perceivable would then be non-perceived only in the context of the current learning methodology. Improving that methodology might allow us to go further.
8. Going further than in 7., what if we could develop our cognition to learn about things that lie beyond our current perception? Imagine that you are blind and thinking about what the world must look like. You see it as black, but then you fall or bump into a wall, perceive different temperatures throughout the day, and become more tired after a long period of running. You then build a model describing new vectors of information: temperature, time, etc. The model involves a number of parameters which then turn out to be somewhat connected. We are that blind, i.e. this is exactly our situation.
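The blind-observer model above can be sketched numerically: record several “vectors of information” and discover that the parameters are connected by measuring their correlation. The data below is fabricated purely for illustration:

```python
# Sketch of point 8: two separately recorded observation vectors
# (time of day, perceived temperature) turn out to be somewhat
# connected, revealed by their Pearson correlation.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hour        = [0, 4, 8, 12, 16, 20]     # time of day (made-up samples)
temperature = [10, 9, 14, 20, 18, 12]   # perceived warmth (made-up samples)

r = pearson(hour, temperature)
print(r)  # moderately positive: the two parameters are "somewhat connected"
```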
9. Based on the learnings from 8., we observe that discovering more description vectors and their connections, as well as automation, might allow faster development of what is unperceived relative to our “previous” learning capabilities. Assuming there is nothing globally unperceivable, we could learn everything. Otherwise, if we assume an element of the globally unperceivable within us, then we cannot perceive the entire “I” and therefore cannot take responsibility for “I”. So we assume there is no globally unperceivable within us.