Increasing the range of the perceived (cf. Gödel) allows us to iteratively improve our model of the universe. We can also use it for negative proofs, e.g. proofs of non-existence. The range of the perceived is usually clustered into: sight, smell, taste, hearing, touch, the vestibular sense, and proprioception. Sight, smell, taste, hearing, and touch all boil down to sensing certain features of data. As for the vestibular sense and proprioception: the former is about sensing position and balance, the latter about knowing the structure and posture of our own body.
So, all in all, senses are about knowing certain features of what is given at the perceived level. This means we could have more perspectives on how to observe our data set, and therefore more senses. In this post I aim to learn more about the features that determine the senses. I want to investigate and cluster current methods for enhancing the known senses. And finally, I also want to touch on the problem of finding new features.
Sight. Recognized and observed by Leonardo da Vinci: “The function of the human eye … was described by a large number of authors in a certain way. But I found it to be completely different.” From Gestalt theory (regarding vision) we should distinguish between: proximity, similarity, closure, symmetry, common fate (i.e. common motion), and continuity. But from our perspective, in the 21st century, these are just different, though correlated, perspectives on the visual data. The question is why visual data is visualized and not heard. Why can we not hear visual data, and see speech? We model both with waves. So theoretically, we should be able to extract information from arbitrary observable data in any way we want. The key is that we should aim at having a larger picture of things rather than focusing on correlated perspectives.
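The claim that we model both sound and light with waves can be made concrete: a pure tone and a monochromatic light source are described by the same sinusoid, differing only in frequency. A minimal sketch (the function name `wave` and the example frequencies are illustrative, not from any particular library):

```python
import math

def wave(frequency_hz, t_seconds, amplitude=1.0, phase=0.0):
    """Instantaneous value of a sinusoidal wave. The same formula
    describes a pure audible tone and a monochromatic light wave."""
    return amplitude * math.sin(2 * math.pi * frequency_hz * t_seconds + phase)

concert_a = 440.0    # audible: the pitch A4, 440 Hz
red_light = 430e12   # visible: red light, roughly 430 THz

# Sampled exactly one period in, both waves return (approximately) zero;
# only the timescale differs, by eleven orders of magnitude.
print(wave(concert_a, 1 / concert_a))
print(wave(red_light, 1 / red_light))
```

The point of the toy is that the model itself is modality-agnostic; only the frequency band, and the organ we happen to own for that band, differs.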
Hearing. The famous 20-20,000 Hz range, also modeled with frequencies. Still, we non-rigorously allow for “many” frequencies: we treat the range as continuous rather than as a set of integers. This might change with ongoing quantum research. The visual spectrum (sight), at 430-790 THz, oscillates far faster than anything in our hearing range. Taste and olfaction, in turn, are defined in terms of cations and ion channels.
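Given the two ranges above, “hearing the visual data” can be sketched directly: map a light frequency to a sound frequency by preserving its relative position within the band, on a logarithmic scale (log, because both pitch perception and color perception are closer to logarithmic than linear). The mapping below is one possible choice, not an established standard:

```python
import math

# Ranges taken from the text: audible 20 Hz - 20 kHz, visible 430-790 THz.
AUDIBLE = (20.0, 20_000.0)
VISIBLE = (430e12, 790e12)

def transpose(freq_hz, src=VISIBLE, dst=AUDIBLE):
    """Map a frequency from one band to another on a log scale,
    preserving its relative position within the band."""
    lo_s, hi_s = src
    lo_d, hi_d = dst
    # Position of freq_hz within the source band, in log space (0..1).
    t = (math.log(freq_hz) - math.log(lo_s)) / (math.log(hi_s) - math.log(lo_s))
    return math.exp(math.log(lo_d) + t * (math.log(hi_d) - math.log(lo_d)))

# Red light (~430 THz) lands at the bottom of the audible band,
# violet (~790 THz) at the top.
print(round(transpose(430e12)))  # 20
print(round(transpose(790e12)))  # 20000
```

The mapping is invertible, so the same function with `src` and `dst` swapped would let us “see” a sound frequency as a color.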
Many more senses are mentioned in related articles. The question is how to find correlations between them, and how to simplify the model that describes them. Another question is whether our model would enable us to build a generic sensor for learning about data, whose output could then be rendered and further analyzed. That could let us widen the range of the perceived, as well as face the real issues involved in building such a tool.
So, the initial questions would be: How would we “see” sound and other “frequencies”? Which of the “frequencies” would be correlated? How could we automate learning from newly acquired data? How could we employ computers to widen the range of perception for us? We have microscopes for transforming the nano-scale to a larger scale, and telescopes for transforming the macro-scale to a smaller one, yet for now we still learn from these apparently different pictures and only occasionally draw big conclusions. But, at the very same time, we should always try to learn what is there rather than what is visible.
A more general question: how would we build a general sensor for learning from the entire spectrum? And finally, what is this entire spectrum? What does it look like? Is our wave-based model satisfactory for the game at scale?