Fig. 7.5. Confabulation architecture evaluation process (see text)
A good analogy for this system is a child learning a human language. Young children need not have any formal knowledge of language or its structure in order to generate it effectively. Consider what this architecture must "know" about the objects of the world (e.g., their attributes and relationships) in order to generate these continuations, and what it must "know" about English grammar and composition. Is this the world's first AI system? You decide.
Note that in the above examples the continuation of the second sentence in context was conducted using an (inter-sentence, long-range context) knowledge base educated via exposure to meaning-coherent sentence pairs selected by an external agent. When tested with context, using completely novel examples, it then produced continuations that are meaning-coherent with the previous sentence (i.e., the continuations are rarely unrelated in meaning to the context sentence). Think about this for a moment. This is a valuable general principle with endless implications. For example, we might ask: How can a system learn to carry on a conversation? Answer: simply educate it on the conversations of a master human conversationalist! There is no need or use for a "conversation algorithm." Confabulation architectures work on this monkey-see/monkey-do principle. If these statements upset you, then you are one of those exquisite few who actually delve into details. Your reward is to now understand how profoundly alien confabulation theory is in the context of the panorama of classical information-processing and systems neuroscience. Again: Conversation involves no algorithms whatsoever.
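The monkey-see/monkey-do education described above can be sketched in a few lines of code. The sketch below is an illustrative toy only, not the book's implementation: it "educates" an inter-sentence knowledge base by counting co-occurrences across meaning-coherent sentence pairs (a crude stand-in for the p(source | target) link strengths of confabulation theory), then selects the continuation symbol with the greatest total support (cogency) from the context sentence. All example sentences and the candidate words are invented.

```python
from collections import defaultdict

class KnowledgeBase:
    """Toy inter-sentence knowledge base, educated on sentence pairs."""

    def __init__(self):
        # counts[src][tgt]: co-occurrences of a context word src
        # with a continuation word tgt across training pairs.
        self.counts = defaultdict(lambda: defaultdict(int))
        self.target_totals = defaultdict(int)

    def educate(self, context_sentence, continuation_sentence):
        # Strengthen a link from every context word to every continuation word.
        for src in context_sentence.split():
            for tgt in continuation_sentence.split():
                self.counts[src][tgt] += 1
                self.target_totals[tgt] += 1

    def link_strength(self, src, tgt):
        # Crude estimate of p(src | tgt): how strongly the assumed-fact
        # symbol src supports the candidate conclusion tgt.
        total = self.target_totals[tgt]
        return self.counts[src][tgt] / total if total else 0.0

    def confabulate(self, context_sentence, candidates):
        # Winner-take-all: the conclusion with the greatest total support
        # from the context sentence's symbols wins.
        def cogency(word):
            return sum(self.link_strength(src, word)
                       for src in context_sentence.split())
        return max(candidates, key=cogency)

kb = KnowledgeBase()
kb.educate("the storm flooded the valley", "rescue crews arrived quickly")
kb.educate("the storm destroyed the bridge", "rescue teams worked all night")
kb.educate("the chef seasoned the soup", "diners praised the flavor")

# A completely novel context sentence still yields a meaning-coherent winner.
winner = kb.confabulate("the storm damaged the town", ["rescue", "diners"])
print(winner)  # rescue: stronger links from the storm-related context words
```

Note that nothing resembling a "conversation algorithm" appears here: the behavior comes entirely from the educated link strengths.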
This sentence continuation example reveals the true nature of cognition: it is based on ensembles of properly phased confabulation processes mutually interacting via knowledge links. Completed confabulations provide assumed facts for confabulations newly underway. Contemporaneous confabulations achieve mutual "consensus" via rapid interaction through knowledge links as they progress (thus the term multiconfabulation). There are no algorithms anywhere in cognition; only such ensembles of confabulations. This illustrates the starkly alien nature of cognition in comparison with existing neuroscience, computer science, and AI concepts.
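The mutual "consensus" among contemporaneous confabulations can be illustrated with a minimal sketch, under invented assumptions: two adjacent word "slots" hold candidate symbols with excitation levels, and each slot's symbols are repeatedly re-excited by support arriving through knowledge links from the other slot's candidates. The symbols, link strengths, and update rule below are all hypothetical simplifications for illustration.

```python
# Hypothetical knowledge links between candidates in two adjacent slots.
link = {
    ("heavy", "rain"): 0.9, ("heavy", "sunshine"): 0.1,
    ("bright", "rain"): 0.1, ("bright", "sunshine"): 0.9,
}

slot_a = {"heavy": 0.6, "bright": 0.4}    # initial excitation levels
slot_b = {"rain": 0.7, "sunshine": 0.3}

for _ in range(5):
    # Slot A's symbols are re-excited by support from slot B's candidates...
    slot_a = {a: sum(link[(a, b)] * slot_b[b] for b in slot_b) for a in slot_a}
    # ...and slot B's symbols by support from slot A's updated candidates.
    slot_b = {b: sum(link[(a, b)] * slot_a[a] for a in slot_a) for b in slot_b}

winner_a = max(slot_a, key=slot_a.get)
winner_b = max(slot_b, key=slot_b.get)
print(winner_a, winner_b)  # the mutually supporting pair: heavy rain
```

The point of the sketch is that neither slot decides alone: the pair of winners emerges from the interaction, which is what the term multiconfabulation is meant to capture.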
In speech cognition (see Sect. 7.4), elaborations of the architecture of Fig. 7.3 can be used to define expectations for the next word that might be received (which can be used by the acoustic components of a speech-understanding system), based upon the context established by previously transcribed sentences and by the earlier words of the current sentence. For text generation (a generalization of sentence continuation, in which the entire sentence is composed with no starter), the choices of words in the second sentence can be influenced by the context established by the previous sentence. The architecture of Fig. 7.3 thus generalizes to exploiting larger bodies of context in a variety of cognitive processes.
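The next-word expectations mentioned above can be sketched as follows. Instead of forcing a single winner-take-all conclusion, the module admits every symbol whose total support from the context exceeds a threshold, yielding an expectation set that an acoustic front-end could use to bias transcription. The vocabulary, link strengths, and threshold below are invented for illustration.

```python
# Hypothetical link strengths from context symbols to candidate next words.
link = {
    "storm":    {"rain": 0.8, "wind": 0.7, "menu": 0.0},
    "forecast": {"rain": 0.6, "wind": 0.5, "menu": 0.1},
}

def expectation_set(context_words, vocabulary, threshold=0.5):
    """Return every word whose summed support from the context
    exceeds the threshold, with its excitation level."""
    expectations = {}
    for word in vocabulary:
        support = sum(link.get(c, {}).get(word, 0.0) for c in context_words)
        if support >= threshold:
            expectations[word] = support
    return expectations

result = expectation_set(["storm", "forecast"], ["rain", "wind", "menu"])
print(result)  # rain and wind survive; "menu" falls below threshold
```

Returning a set of plausible symbols rather than one winner is what lets the architecture feed graded expectations downstream to the acoustic components.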
Even more abstract levels of representation of language meaning are possible. For example, after years of exposure to language and co-occurring sensory and action representations, modules can form that represent sets of commonly encountered lower-abstraction-level symbols. Via the SRE mechanism (a type of thought process), such symbols take on a high level of abstraction, as they become linked (directly, or via equivalent symbols) to a wide variety of similar-meaning symbol sets. Such symbol sets need not be complete to be able (via confabulation) to trigger activation of such high-abstraction representations. In language, these highest-abstraction-level symbols often represent words! For example, when you activate the symbol for the word joy, this can mean joy as a word, or joy as a highly abstract concept. This is why in human thought the most exalted abstract concepts are made specific by identifying them with words or phrases. It is also common for these most abstract symbols to belong to a foreign language. For example, in English-speaking lands, the most sublime abstract concepts in language are often assigned to French, or sometimes German, words or phrases. In Japanese, English or French words or phrases typically serve in this capacity.
High-abstraction modules are used to represent the meaning content of objects of the mental world of many types (language, sound, vision, tactile, etc.). However, outside of the language faculty, such symbols do not typically have names (although they are often strongly linked with language symbols). For example, there is probably a module in your head with a symbol that abstractly encodes the combined taste, smell, surface texture, and masticational feel of a macaroon cookie. This symbol has no name, but you will surely know when it is being expressed!