Thought parsing: Information, communication, and intelligence

Is information just an attributional property of the interaction between space-time (non-exclusive to matter, but rather anything that exists1) and perception?

Matter (within the bounds and constraints of space-time in the observable universe) and space itself have certain qualitative attributes—in the form of physical properties, broadly speaking—like mass, energy, and density (on the elemental side), and colour, texture/surface variation, and the like (on the secondary, or emergent-property, side). The vacuum of space is, well, a vacuum. It contains no substance, has no atmosphere—in other words, there is an absence of matter. Science notably uses these qualities—this information—to draw conclusions about things (e.g., their past, present, and future states), and about the relationships between such things and other things. I would argue that the status of “information” is in many ways analogous to the question of whether a tree makes a sound when it falls but no one is around to hear it. Quantum theories, of course, would say that the tree neither fell nor didn’t until the moment an observer happened upon it and, in perceiving it, forced an outcome. Information is much the same—consider the observer effect: information—e.g., photon behaviour—can be altered depending on whether it is being measured or not. This brings us back to the original suggestion: that information can only be conceived as such if there is in fact an “informee”, an observer or perceiving entity, to be informed. One might counter that we could have information but be unable to make use of it, such as if one received a letter written in a foreign language, with an altogether unfamiliar orthography. While it is true that you might not be able to understand the intended semantic content (if indeed there was any to begin with—but let’s assume that this letter was written by a real person, in their—real—native language, so there was), you are still able to access the informational output (the symbols); you simply don’t have the key to decode the author’s meaning.
On the other hand, we could consider information to simply be anything that is—specifically, anything that can either be measured or observed, or be inferred from other measurements and observations. Information is…stuff. Or patterns caused by stuff. Which itself is effectively stuff (though perhaps qualitatively different stuff, via a human understanding). While this definition doesn’t explicitly necessitate the existence of a perceiver, it implicitly demands one.

Language as a basis for communication of information

We tend to think that language represents an objective reality, at least to some degree, because functionally, it would appear that way. You can input an address into a GPS and arrive at your destination flawlessly (in theory) solely by following verbal directions provided by the disembodied robot-woman voice projecting from the device. Proof—language reflects objective reality. This, like so many other examples of exquisite cooperation and task execution made possible exclusively by language, is compelling evidence that language can consistently and effectively transmit information. Yet there are infinite things that we do not have words for. Language, therefore, reflects but a tiny sliver of That Which Exists, and in order to better understand ourselves—human cognition, reasoning, learning—we must ask: which parts of perceivable reality do we attend to, such that they achieve lexicalized status in our collective vocabulary? Which concepts, which forms, which configurations of matter—and which relationships between forms and configurations—are salient to humans such that we linguistically (and by extension conceptually) materialize their existence? Of course the answer is right in front of us: those which are linguistically materialized, as lexical-semantic entries.

And so, we might approach this from a slightly different angle: given the linguistic-conceptual distribution observed (cross-linguistically2, as well), what are the features of reality that human beings attend to? More abstractly, how do humans attend to, identify, classify and categorize, and prioritize information available from their environment (not necessarily in the order listed), and specifically, how do they do so in order to acquire and use language, and then further use the language they have to acquire new, often more complex, information and concepts? Are the two processes (i.e., the initial learning involved in language acquisition and the learning facilitated by language literacy) analogous? On a slightly divergent note, there is a wealth of literature and previous work on language concreteness, which distinguishes between concrete and abstract language, and significant evidence supports a differentiation between the two (child acquisition, comprehension/processing, LCM). However, a “chair”, despite representing a concrete thing (as opposed to, say, “justice”, a markedly abstract label in terms of what it represents), is unavoidably abstract—it is arbitrary (there is no correspondence in features between the label and the thing it represents; cf. Saussure), and it also requires that we have a stored conceptual prototype for that which constitutes a chair. How do we differentiate a chair from a table? From a stool? From a short bench? Broadly speaking, the answer is by their—differential—salient features. But let us pause and consider: what does “salient” actually mean here? Salience in this context—as in most—refers to perceptual salience. Perceptual salience presupposes a perceiver—and naturally, we, humans, are the perceivers (of course we could include other animals under this umbrella without altering the argument in any significant way). So, more precisely, the answer to the question of how we differentiate (e.g., functionally similar) objects is actually: by their perceptually salient distinguishing features.
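The idea that we tell functionally similar objects apart by weighting their perceptually salient features can be made concrete with a toy prototype model. Everything below (the feature dimensions, the prototype values, and the salience weights) is a hypothetical illustration, not a claim about how human categorization actually works:

```python
# Toy prototype-based categorization: objects are feature vectors, and a
# salience weighting determines which differences matter when comparing
# an object to stored prototypes. All values are invented for illustration.

PROTOTYPES = {
    # features: (has_back, seat_height_m, seat_length_m)
    "chair": (1.0, 0.45, 0.45),
    "stool": (0.0, 0.70, 0.35),
    "bench": (0.0, 0.45, 1.50),
    "table": (0.0, 0.75, 1.20),
}

# Perceptual salience profile: a back is highly diagnostic here,
# height and length less so.
SALIENCE = (3.0, 1.0, 1.0)

def categorize(obj):
    """Return the prototype label with the smallest salience-weighted distance."""
    def dist(proto):
        return sum(w * (a - b) ** 2 for w, a, b in zip(SALIENCE, obj, proto))
    return min(PROTOTYPES, key=lambda label: dist(PROTOTYPES[label]))

# A backed, low, short-seated object lands closest to the "chair" prototype.
print(categorize((0.9, 0.5, 0.5)))  # -> chair
```

Changing SALIENCE changes which differences count, and therefore which prototype can win; that is one way of phrasing the point above: the categorization outcome depends on the perceiver’s salience profile, not on the features alone.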

The problem of artificial intelligence (AI) primitiveness, in many ways, is likely linked to the (largely) defining properties of general intelligence (GI) and categorization ability in humans. AI is capable of far surpassing human computational and predictive ability in many respects, and of modeling data (extracting patterns from and organizing information) in ways a human being simply cannot. Yet humans still outperform AI on nearly every domain-general skill and especially, famously, natural language use (and it goes without saying that language use draws on domain-general skills, if you even want to consider the two separately, as, e.g., “domain-general cognitive ability” and “language”, rather than as intertwined and interdependent). In perceiving, humans are critically, selectively, and preferentially attending to certain specific patterns and pieces of information that AI algorithms are not. Is this because they (the AI) are attending equally to all information? Or selectively attending to different pieces of information? Or selectively attending to the same information, but in a categorically different way? Or is it because an AI algorithm is only exposed to a slice of data, compared to the virtually infinite information—and potential loci of focus—available to humans from the moment of birth onward through their existence? To be precise, it isn’t the information per se that is infinite, but rather the possibilities arising from innumerable potential interpretations of it, in countless different conceptualizations. And it is equally interesting, is it not, that despite this overwhelming quantity of data, of sensory stimuli that we must constantly, ceaselessly sift through and process, we collectively settle on remarkably consistent interpretations.
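The contrast between attending equally to all information and attending selectively can also be sketched in miniature. In this toy example (all feature values and attention weights are invented for illustration), the very same input yields a different “most salient” dimension depending on the attention profile applied to it:

```python
# The same perceptual input scored under two attention profiles.
# Feature values and weights are invented purely for illustration.

features = {"shape": 0.6, "colour": 0.9, "motion": 0.5, "texture": 0.8}

uniform_attention = {"shape": 1.0, "colour": 1.0, "motion": 1.0, "texture": 1.0}
selective_attention = {"shape": 2.5, "colour": 0.2, "motion": 3.5, "texture": 0.2}

def most_salient(values, attention):
    """Return the feature with the highest attention-weighted value."""
    return max(values, key=lambda f: values[f] * attention[f])

print(most_salient(features, uniform_attention))    # -> colour
print(most_salient(features, selective_attention))  # -> motion
```

In this framing, whether a system weights all inputs equally or carries a profile like the one above is precisely the difference between the first two possibilities raised in the paragraph.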

This would appear to be a function less of the inherent salience of any particular feature or facet of information, and more of the human condition. This epistemological, Kantian quandary poses a unique problem for the puzzle of AI, with a twist: if we can only know the world and reality as we subjectively perceive them via our senses (and by extension our cognition), how can we expect an artificial (i.e., not naturally occurring, as in biological life forms created via sexual reproduction) intelligence to perform complex, human-centric tasks (some of which, like language, as well as music, are fundamental and universal in our species) with a high degree of efficacy unless it is imbued with the same motivations, capacities, and tools that humans require to carry out such functions? Additionally, the problem of stunted artificial intelligence invites us, in a rather meta way, to revisit the concept-label mapping of the term “artificial intelligence” itself, along with its corresponding conceptual realization. It is interesting that the bar for what counts as intelligence artificially is apparently much lower than the criteria used to evaluate natural intelligence. This discordance—awareness of it, and action arising from it—may be the key to novel breakthroughs and discovery. Perhaps through no ill intent, the term AI has been misappropriated in an excited haste to christen a still-nascent field (or industry, rather) with a title it has not yet earned. Intelligence must be globally operationalized (i.e., defined) if we are to make greater strides toward machines that can competently carry out complex tasks at a human-like (or human-superior) level of ability—including learning and generalizing knowledge.
This last point raises the important and nontrivial concern of whether we should strive to create a system with human-based or human-like intelligence at all, and it thus demands rigorous, careful, and holistic ethical consideration before we proceed with active attempts to create such an entity. If we don’t fully understand the basis of consciousness, but have good reason to believe, given what we currently know, that it is linked with general intelligence, emotion, and reasoning abilities, should that—a complete understanding of consciousness—not ultimately be the priority before we create a thing that possesses these abilities?

1in this discussion, “exists” is shorthand for “is anything measurable within the bounds and constraints of the physics of the observable universe” (including emergent properties and relationships between things)

2this would additionally require a typological investigation and comparison to see whether there are any differences in this distribution as a function of language or language family; for now, I assume that the distribution is similar cross-linguistically