Artificial Intelligence Theory Journal, Part Three of Three
(1978 - 1979)
Standard Technical Report Number: MENTIFEX/AI-3
by Arthur T. Murray
Mentifex Systems
Post Office Box 31326
Seattle, WA 98103-1326 USA
(Not copyrighted; please distribute freely.)
13 NOV 1978
This evening at Vaierre I am trying to do some more work on bringing
the whole Nommultic system together out of the various components. I am
trying to attach the language-work of last year to the recent overview of an
experiential and motor system.
I am beginning to suspect that I don't need a centrally located,
pyramidal-type language decoder with the "ultimate-tags" as denominated on
1.OCT.1977. It looks like it may work just to let the known and
recognizable words exist in the vastly long "pipeline" of auditory memory.
Idea of the moment: Node-slices could be kept thin and compact by
having just axons within the slice. Then cell bodies could be arranged in
tiers above the "pipeline." Who knows, maybe "carrier" neurons could send
their axons lengthwise through the pipe, while "nodular" neurons would
introduce perpendicular axons. A single such perpendicular cell might be
able to have many, many "yes-or-no" synapses within a slice. In fact, such
a perpendicular cell, or groupings of them, might CONSTITUTE THE VERY ACTUAL
ASSOCIATIVE TAG. But you sure would have to have flat synaptic branching
from a cell body.
I am beginning to realize lately how passive the experiential side and
how active the motor side must be. I suppose the habituation and learning
mechanisms must all be on the motor side. The whole perceptual and passive
apparatus exists just to serve the motor side in its decision-making
deliberations.
I've had a possible insight tonight on how the motor mind may perhaps
reflect on things: through variation by the aid of engram fatigue. When
the motor mind first associates towards a passive memory, such as a visual
image slice, the following scenario may take place. The accessed image has
been fetched through its associative tag. The nodular image slice becomes
"energized" within the visual "pipeline." Up and down the pipeline, similar
images become stimulated. One such image wins out and is the first to send
out a signal over its own associative tag. This signal keeps the
associative process rolling. It may evoke further information from any
perceptual sense or from language-memory. At any rate, the consciousness
has accessed visual memory and has had a certain output. Now here's where
"engram fatigue" or "neuron fatigue" enters in. The motor mind may get
shunted back to the same original visual memory slice. In this second
instance, however, the fatigue of the formerly first responding engram may
allow a competing different engram to respond, with the process being
repeated many times over. Thus the same neuronal input can yield a wide
variety of successively different outputs.
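A minimal sketch of this engram-fatigue idea, in Python (the class, the
numbers, and the image labels are all invented for illustration; the
journal describes neurons, not code):

    class Engram:
        def __init__(self, name, similarity):
            self.name = name                # label for a stored image slice
            self.similarity = similarity    # how strongly it resonates with the cue
            self.fatigue = 0.0              # recent-use penalty

    def respond(engrams, recovery=0.5):
        """Return the engram that wins this round of competition."""
        # The most excitable engram (similarity minus fatigue) answers first.
        winner = max(engrams, key=lambda e: e.similarity - e.fatigue)
        winner.fatigue += 1.0               # the responder is now fatigued
        for e in engrams:
            if e is not winner:
                e.fatigue = max(0.0, e.fatigue - recovery)  # the others recover
        return winner

    slices = [Engram("cat-on-mat", 0.9), Engram("cat-on-sofa", 0.8),
              Engram("dog-on-mat", 0.7)]
    # The same cue, presented repeatedly, yields a succession of different images.
    print([respond(slices).name for _ in range(4)])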
You know, it may be that verbal thought really does take place in the
passive experiential auditory pipeline, and that each word or grammar-ending
just always happens to have a habituated motor sequence attached to it. So
that when we feel we are hearing our own verbal thought, we are really just
remembering what it sounded like when we formerly spoke each sound.
Likewise if we pronounce in our minds a seen combination of alphabetic
letters, we are quickly just joining together the remembered motor sounds of
phonemes.
I guess we have to have pipeline word-decoding, or else there could
never be any variations from a strictly spoken standard for a word.
(Nolarbeit) AI Theory Journal 17 NOV 1978
Remembrance of Verbs to Describe Actions
Our current impasse is at a point where we are trying to bring together
all the accumulated subsystems of our intended automaton. Recently we have
tended to simplify several elaborate designs of ours, so that for the final
design we luckily end up with a choice of either the elaboration or the
simplification, which might be too simple to work. We have tended to
simplify the volition system from last March and the verbal decoding system
from a year ago. The impasse rests most pointedly in such questions of a
grammar system as how the automaton will observe actions and then recall
verbs to name the actions.
That verb-problem stands out because we see rather readily now how we
can at least find nouns to go with perceived objects. Actually, the verb-
problem grew out of a narrower problem from several days ago, when I was
trying to figure out how the automaton would assign the concept of plurality
to perceived objects, so as to be able to form noun-plurals. I was making a
little progress on the plurality-question. For instance, I realized that to
perceive plurality is not a one-step process, because just to perceive the
unity of one entity is a conceptual step in itself. To recognize two
creatures, for instance, a mind can recognize first the one and then the
other, but not both at the same time. So I was recently tending towards the
conclusion that use of the concept of plurality involves (the processing of)
multiple slices of perception.
However, that quasi-conclusion caused me trouble because a system of
unitary associative tags from percept to word didn't seem sufficient to
handle plurality. I even started hypothesizing that our minds project
plurality onto things, with greater or lesser success.
Then late the night of 13NOV1978, I got the idea that maybe there
should be an additional memory alongside the others (sensory and motor), a
memory which held perceptual content but not sensory content, a memory which
would handle conceptual associations beyond the linear scope of the purely
sensory memories: an abstract memory.
So for several days I revelled in this possibility of a new insight,
but meanwhile I came to focus on this problem of assigning verbs to
perceived actions.
I sensed an analogy here with the work of 16.OCT.1978 on the skin-
surface. When we verbally name a perceived action, we automatically tend to
select the most aptly differentiated verb available to us. In so doing, we
automatically pass over many less apt verbs which would nevertheless have
correctly described the action. For example, "He ruined it" is correctly
within the meaning of "He destroyed it."
Another idea which I have been getting is that the remembrance of verbs
is perhaps the function of a rather elevated "abstract language-domain."
(Nolarbeit) AI Theory Journal FRI 19 JAN 1979
More on the Verb Problem
Perhaps a verb can be viewed as follows. A verb is a punctiform
expression of relationships ramifying from the logical punctum of the verb
itself.
We can then think of a non-specified verb, which, in the history of its
being known to us, has developed some very highly ramified main branches
beneath the punctum.
A verb will describe an event of greater or lesser complexity. In
accordance with its complexity, a verb residing in a mind's semantic
knowledge will have "main branches" as divisions leading to the (probably
quite numerous) minuscule ramifications.
When we perceive an event and recognize its nature as being properly
described by a certain verb, what we do is find and connect logical
categories which satisfy the main-branch logical-input requirements of the
specific verb.
When two verbs are somewhat similar in their semantic meanings, we
discriminate between the two verbs by means of differences between the two
groups of main branches.
We must have thousands of categories into which we can classify things
when we perceive them. Perhaps we even classify things into multiple
categories as a prelude to the connection of main branches.
The same sort of system could perhaps serve to assign prepositions
according to perceived relationships.
When we perceive a thing, our mind seeks to attach to it both a name
and a set of one or more semantic categories. The greater the
discrimination we achieve in our attaching of semantic categories, the
greater the discrimination we can also achieve in the selection of verbs.
Finding a semantic category for a thing is not the same as finding a noun
with which to name the thing.
Right now I am mentally deep into the present subject, the verb
problem. Some strange possibilities are opening up. Since this Nolarbeit
Theory Journal is ipso facto a journal, I think I will override my tendency
to keep personally out of my discussion, and instead just ramble on with
topical thoughts.
The strange possibility is also a slightly disconcerting one. It
points to that "abstract memory" which I was mentioning in the work of
17NOV1978. Suppose there had to be an extra memory like a long bundle of
minuscule fibers. Each fiber would represent one of the semantic categories
presently under discussion. The disconcerting idea is that our minds might
be limited by such a system in which we can access verbs only through this
intermediate subsystem of semantic category-fibers. In other words, we are
always limited in our ability to describe the events which we perceive,
inasmuch as we must break down each percept into a set of minuscule semantic
categories coming from a larger set already resident in our mind, after
which breakdown we can then re-assemble the information-flow via "main
branches" or "semantic trunks" to fetch a specific and (hopefully)
appropriate verb.
For weeks or months now I have been stewing on this problem of how we
recall appropriate verbs. I have been pondering this problem while keeping
in mind my 12APR1978 diagram of a visual-memory channel. I was telling
myself that I had enough theory to recall appropriate nouns for things, but
I could not even begin to figure out how multiple image-slices, taken in
succession, would lead a mind to recall a verb. It was as if I was trying
to imagine extra hardware and extra processes into the system of the
diagram. But I knew I wanted to do it all with just the associative tags
coming from the image-slices as originally planned.
Even now I may not have the solution, but I will describe how I began
today's writing. In my search for verb-recall, I was picturing an entity
lying on a surface. If I were asked to describe with a verb the action of
the entity while lying there, I would say that it is lying there. Now, how
do I arrive at that verb from just a still picture? Obviously, I am
detecting a relationship between the entity and the surface. I would
recognize the entity all by itself, but in this case its side is orientated
to the surface in such a way as to help me recall the verb "lying." My
pondering mind seized upon the idea of the side of the entity as being in a
special, semantic category. From the notion of concentrating upon the side
of the entity, as opposed to the total entity, I got the idea of the leg of
a system in which multiple legs had to be "satisfied" so that a common
summit would be reached where a verb stood.
I also got the idea that the semantic legs (trunks, main branches)
could have very many minuscule categories attached to them, but that it
would take only one activated category per leg to satisfy the recall-
requirements for a given verb.
It was at around that point in my thinking that I began writing the
body of today's work. It is always thus; I usually wait until I have the
rudiments of a solution before I start writing down thoughts. But I have
been so stymied by this problem of verbs that today (on Seattle's Pier 51) I
have gone back and written down even my preliminary thought. Now I can go
on.
If the broad "trunk" requirements for selection of a typical verb can
be satisfied on each semantic trunk by any one of many numerous semantic
categories, then obviously a verb is typically a very generalized notion.
Highly specific verbs would probably tend to ramify into relatively few
categories, but, on the other hand, some categories must be so general that
they encompass the trunks of almost all verbs.
At any rate, we have posited today a practice of "intermediation"
between percepts and verbs. Verbs are to be visualized as like an octopus
or a furcated carrot. A percept can summon a verb only by generalizing into
semantic categories and then un-generalizing along semantic trunks to reach
a specific verb.
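That intermediation can be pictured as a two-step lookup, sketched below
in Python; the categories, trunks, and verbs are invented stand-ins, not
anything specified in this journal. A percept is generalized into
semantic categories, and a verb is fetched only when each of its trunks
is satisfied by at least one active category.

    # Each verb lists its "main branches": per trunk, the categories that satisfy it.
    VERB_TRUNKS = {
        "lie":     [{"animate", "object"}, {"horizontal-contact"}],
        "destroy": [{"animate"}, {"object"}, {"loss-of-form"}],
        "ruin":    [{"animate"}, {"object"}, {"loss-of-form", "loss-of-value"}],
    }

    def fetch_verb(active_categories):
        """Return the most aptly differentiated verb whose every trunk is satisfied."""
        candidates = []
        for verb, trunks in VERB_TRUNKS.items():
            if all(trunk & active_categories for trunk in trunks):
                candidates.append((len(trunks), verb))
        # Prefer the verb with the most trunks, i.e. the most discriminating one.
        return max(candidates)[1] if candidates else None

    # A percept of a creature resting on a surface, generalized into categories:
    print(fetch_verb({"animate", "horizontal-contact"}))   # -> "lie"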
Scratch-Leaf
- Time division: verbs in infancy vs. in maturity.
- Verbs become categorized?
- How do we recognize that someone is sitting, or lying down?
It's a relational thing.
- A conceptualization as legerdemain.
12 MAR 1979
Ways of Approaching the Verb Problem
- Look at how the first verbs are learned in infancy.
- Infancy learning of first verbs.
- Future learning of new verbs.
- Look at how verbs are recalled to describe perceptions.
- Look at how verbs are used for internal mental states.
- Look at the tie-in of motor memory to verbs.
- Avalanche the problem by writing down all possible ideas.
- Study the transformation of nouns into verbs, as in "booking" a flight, or
"chaining" things together.
- Treat the problem as that of a chain of complexities. As the perception
of action is processed in the visual channel, what complex transformations
can the process go through without losing the full information necessary
to reach an appropriate verb?
- [26MAR1979] Consider how modal, auxiliary verbs work.
As I further ponder the verb problem, all kinds of preliminary
propositions come to mind. With so many of them, they can't all be very
correct, but from enough of them true directions should eventually emerge.
As visual images come down the visual memory channel in infancy,
objects are perceived, and we can easily imagine how nouns are learned and
recalled for such discrete objects. The noun-words are learned
phonetically, and then linked up with the visual images.
When things are in a class, they all share a relationship, namely their
mutual belonging to that class.
Our perception system for visual images attaches nouns to perceived
objects. "Other than nouns it does not attach" - dare I say that? Because
I don't think the out-tagging system can handle just jumbles of visual
haphazardness. When perceiving a scene or image, we either relate it
through one of its ingredients to a previously tagged item, or else we learn
a new word or make a new association so that the novel image can itself
become an archetype to serve in the recognition of re-occurrences of such an
image. But probably all such novel tagging is done in the early phases of
language-acquisition, so that subsequent tagging and word-learning probably
amount to re-groupings and re-classifications of previous archetypes rather
than to the novel formation of new archetypes.
After infancy, we learn many new nouns and verbs, but not new
archetypes of visual perception.
There must be a classification process which goes on in an area which
can be thought of as perpendicular to the visual memory channel.
When we read a storybook, our mind conjures up its own image to go with
each noun or verb. Numerous individual examples of each noun have been
classified as expressions of each particular noun. Now when our mind
encounters the nouns in a story, it uses the whole class behind the noun for
understanding the story, but our "mind's eye" conjures up some specific
visual example which happens to present itself most fittingly for our
interpretation of the story. In fact, it may be that particular example
that yields access to the logical associativity of the whole class behind
the noun. So the bare noun reminds us of a specific example, but each
specific example has full access to the whole class.
Now I think that there are some psycholinguistic classes that are so
abstract that they go beyond visual images.
Every perceptual recognition yields access to at least that one class
of which the recognition is being made. For instance, perception of one
mouse yields access to the psycholinguistic class of "mouse," and then in
turn to the class of "animal," and so on through myriad other classes.
I suspect that oftentimes access is gained to some logicoconceptual
classes that are so abstract that they lead not to nouns but rather to
logical conditions and functions, such as the condition of plurality and the
function of subject of a verb. It may be that these logicoconceptual
classes can not be traced backwards to specific examples, as the word
"mouse" could.
Perhaps every slice of perception must lead at some level to (at least)
one of these logicoconceptual classes, of which, understandably, there might
be a relatively small number: certainly under a thousand and probably at
least half a dozen. In fact, the possible binary permutations of the full
number of logicoconceptual classes might give a hint as to the upward limit
on the number of verbs which we could possibly access. However, that large
number might in turn be obviated if the unitary logicoconceptual classes can
be used more than once in the formulation of the recall-apparatus for a
verb.
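The arithmetic behind that hint is simply the count of on/off patterns:
N logicoconceptual classes allow 2**N distinct activation constellations,
so even a modest N gives an enormous ceiling (illustrative Python only):

    # N binary classes allow 2**N activation patterns, a crude upper bound on
    # how many distinct verb-recall constellations such a system could address.
    for n in (10, 20, 30):
        print(n, "classes allow", 2 ** n, "patterns")
    # prints 1024, then 1048576, then 1073741824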
It's possible that there must be a "re-affirmation" mechanism within the
memory channels to keep these logicoconceptual classes valid and vigorous.
Suppose a slice of visual perception entails recall through a previous slice
in the distant past, a slice which grants access to a line running parallel
to the visual memory channel, which line constitutes a logicoconceptual
class. At the moment of new perception, the new slice first gains access to
the logicoconceptual class by the roundabout recall route. There should
perhaps be a "re-affirmation" mechanism of creating a new direct tag from
the new slice to the parallel line representing or constituting the
logicoconceptual class.
If there is non-retraceability, if a logicoconceptual line can not go
backwards to activate its afferents, then it is perfectly fine if all new
perceptions find their (roundabout) way to a direct tie-in with one or more
of the logicoconceptual classes.
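A rough sketch of this re-affirmation, in Python (the recognition test
and the slice labels are invented): a new slice reaches a class only by
the roundabout route through an old, already-tagged slice, and is then
given its own direct tag; the class lines never fire backwards into the
perception channel.

    class LogicoconceptualLine:
        def __init__(self, name):
            self.name = name
            self.direct_tags = set()        # slices tagged straight onto this line

    def recognizes(new_slice, old_slice):
        # Crude stand-in for recognition: any shared feature.
        return bool(set(new_slice.split("-")) & set(old_slice.split("-")))

    def perceive(new_slice, old_slices):
        """old_slices maps a remembered slice to the class lines it already reaches."""
        reached = set()
        for old, lines in old_slices.items():
            if recognizes(new_slice, old):   # roundabout recall route
                reached |= lines
        for line in reached:                 # re-affirmation: a new direct tag,
            line.direct_tags.add(new_slice)  # one-way, never back into perception
        return reached

    plurality = LogicoconceptualLine("plurality")
    animal = LogicoconceptualLine("animal")
    memory = {"two-mice": {plurality, animal}, "one-stone": set()}
    perceive("two-birds", memory)
    print(sorted(l.name for l in (plurality, animal) if "two-birds" in l.direct_tags))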
In fact, this theory being developed may be equivalent to saying that
we forcibly attach logicoconceptual interpretation to each and every
perception, and that we can not do otherwise. To perceive is to interpret.
This present theory may be solving two problems at once: both the
action-to-verb problem and the problem of accessing function-lines in the
operation of grammar rules.
19 MAR 1979
A brain cannot even begin to recall a verb without first seizing upon
some entity within its perception-slice as the subject or object of the
verb-to-be-recalled. And the brain cannot seize upon a perceived entity
unless it successfully makes a comparison with an old slice. It's as if to
say that we never really perceive anything new, just recombinations of
elemental old things.
So a first step in fetching a verb is to recognize an entity by making
a connection with a stored record. According to the theory that is
presently being developed, such connection is how access is gained (and
maintained) to "psycholinguistic classes." A constellation of accessed
classes will approximate the recall-requirements for a stored verb. I say
"approximate" because the process does not have to be, and perhaps even
cannot be, exact and certain - it just selects the most likely, the most apt
and fitting verb.
The reader or re-reader of these notes may begin to suspect that the
theory is calling for a great superabundance of class-structures.
So when a verb-related entity is perceived, it could activate a
plethora of class-structures. However, any action perceived and calling for
a verb will probably involve several or many separately perceived entities.
For argument, let's say that five perceived entities are necessary for the
recall of a particular verb. Each of the five entities being perceived
might individually activate dozens of class-structures, but only the
congruence of five specifically required classes would fetch the verb.
Remember, from perception these class structures operate only forwards and
not backwards.
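Both candidate recall schemes can be put side by side in a small Python
sketch (the classes and verbs are invented): a "strict" scheme demands
that every required class be active, while a summation scheme merely
picks the verb with the largest overlap.

    REQUIRED = {
        "lie":  {"entity", "surface", "side-contact", "stillness", "support"},
        "sit":  {"entity", "surface", "base-contact", "stillness", "support"},
        "fall": {"entity", "surface", "motion", "downward", "impact"},
    }

    def strict_vote(active):
        # A verb is fetched only if all five of its required classes are active.
        hits = [verb for verb, req in REQUIRED.items() if req <= active]
        return hits[0] if hits else None

    def summation_vote(active):
        # The verb with the greatest number of active required classes wins.
        return max(REQUIRED, key=lambda verb: len(REQUIRED[verb] & active))

    active = {"entity", "surface", "side-contact", "stillness", "support"}
    print(strict_vote(active), summation_vote(active))     # both yield "lie"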
Of course, it is not yet clear whether this process operates by
"strict" voting or by summation-type "voting."
23 MAR 1979
A "re-affirmed" (See NTJ 12MAR1979.) perception slice can feed into
even a large number of psycholinguistic classes. These classes themselves
do not feed back into the perception channel. If the perception channel did
not have an ulterior purpose, obviously these classes in the "abstract
memory" would be useless.
I suppose that a main function of the abstract memory is to achieve
"intermediation" between raw percepts and such complexities of language as
verbs. When the first verbs are learned by an infant, probably the
connection between the raw perception and the learned verb is originally a
very tenuous one, but that doesn't matter very much, because there are not
many verbs to cause confusion in the child's vocabulary.
If semantic inputs to a verb can be called "radices," then each early
verb of a child might be learned with just one radix. For a more mature
speaker, a more discriminating assembly of radices would be required.
By the function of the abstract memory, the classes of the
intermediation "vote" for which of all (however loosely) connected verbs
will be fetched for recall.
We can make a case now for the need of adept speakers to teach a
neophyte. The neophyte's internal selection of the correct verb is not
ratified internally, but rather by the approval of the teaching speakers.
27 MAR 1979
In selecting a verb for recall, it is obvious that a mental mechanism,
while perceiving a stream of input, must initially seize upon one
significant percept as the linguistic subject of the verb to be recalled.
Such is probably the case even when we use impersonal expressions like "It
is raining."
This selection of tentative verb-subjects may be a function of an
attention-mechanism. At any rate, it matters indeed to theorize that such
selection occurs. To coin a phrase, the "nominator-mechanism" which seizes
upon tentative verb-subjects can also serve to provide a logic-line for the
grammar area which differentiates between subjects and objects of verbs.
Once something has been perceived as a tentative verb-subject, the mind can
go to work expressing that verb-subject in the proper grammatical form.
It is quite likely that, for selection as a verb-subject, all a thing
has to do is be noticed first in a series. Or there could be a level-of-
associativity trigger-mechanism which fires when a percept is significant
enough. When things are expressed initially as direct objects, it would
probably not be in the course of raw visual perception, but rather in a
more abstract situation where the subject or verb or both are already
understood.
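A minimal sketch of such a nominator-mechanism in Python (the trigger
level and the percept stream are hypothetical): either the first percept
significant enough to fire the trigger, or simply the first thing
noticed, is seized as the tentative verb-subject.

    def nominate_subject(percepts, trigger=0.8):
        """percepts: list of (name, associativity) pairs in the order noticed."""
        # First preference: any percept associative enough to fire the trigger.
        for name, associativity in percepts:
            if associativity >= trigger:
                return name
        # Otherwise, simply the first thing noticed in the series.
        return percepts[0][0] if percepts else None

    stream = [("sky", 0.2), ("dog", 0.9), ("ball", 0.5)]
    print(nominate_subject(stream))           # -> "dog"
    print(nominate_subject([("sky", 0.2)]))   # -> "sky"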
Later, in the evening, I've had an insight into what it means to say
that we "know" something. Knowledge is a composite of both the original
acquisition and the subsequent consideration of information.
The preceding sentence suggests how knowledge or information is stored
in the mind. Suppose we first hear a piece of knowledge as a statement
through our ears. That linguistic statement is laid down in our auditory
memory channel, where it remains as a record both of experience and of
knowledge. However, true possession of the information as knowledge comes
from the subsequent processing we do of the information. In accordance with
how much we believe the information and are affected by it, we develop
traces of the information in the memory channel of our own internal
reflections. The more we tend to believe a statement, the more broadly we
will associate it with the main corpus of our knowledge and
belief. Therefore significant knowledge becomes widely anchored within a
mind, because it reverberates so deeply in our memory channels.
28 MAR 1979
I have been reviewing the NTJ work of 9NOV1978 on motor memory and I
have had an insight or two concerning volition. Instead of having prolonged
associativity constitute inhibition of motor initiative, I would now like to
reverse that notion and argue instead that positive (i.e., any)
associativity above a threshold level actually causes motor initiation.
Indeed, the theory is becoming quite clear right now. The passive,
experiential side of the mind knows (from experience) its own motor
capabilities. To contemplate any such capability is to "nudge" the
threshold of its execution. My first of two insights is that the passive
mind can't actually look ahead and feel or foresee each motor initiation.
No, the mind is just blindly confident that the motor initiations are
available. So the so-called "desire" to activate emerges on the passive
side in the context of belief and knowledge as discussed yesterday.
My second insight concerns the relative natures of verbal and non-
verbal volition. Non-verbal volition works fine, as in reaction to sudden
danger. Enough compelling association towards an action simply causes it.
What's more, I would like to place the activation-thresholds at the point
where the "Motor Memory Activation Channel" enters the motor habit tagging
system. Thus there need not be an elongated threshold system in between the
passive and active sides of the mental automaton.
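The reversed notion, that enough association simply causes motor
initiation where the activation channel meets the habit-tagging system,
can be sketched as follows in Python (thresholds and amounts invented):

    class MotorHabit:
        def __init__(self, name, threshold):
            self.name = name
            self.threshold = threshold      # sits where the activation channel
            self.activation = 0.0           # enters the motor habit tagging system

        def associate(self, amount):
            """Passive-side association toward this capability 'nudges' it."""
            self.activation += amount
            if self.activation >= self.threshold:
                self.activation = 0.0
                return "initiate " + self.name   # enough association causes the act
            return None

    flinch = MotorHabit("flinch", threshold=0.5)   # reaction to sudden danger
    for nudge in (0.2, 0.2, 0.3):
        result = flinch.associate(nudge)
        if result:
            print(result)                   # fires on the third nudge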
Verbal volition, however, can be much more refined, precise and
delicate, because there are such intricate pathways of verbal cogitation.
The ego, referring to itself in English as "I," wanders amid its verbal
memory and feels confidently in command of its motor options.
Now I am getting an insight on how generation of sentences may actually
occur in the passive experiential side instead of within the motor system.
If so, this theory would mean that the motor system habituates the basic
phonemic sequences of the words and that the passive side manipulates all
the grammatical changes worked upon words.
Nolarbeit Theory Journal 28 MAR 1979
M u s c l e s
o o o o o o o o o o
.---. .------------. \ \ \ \ \ \ \|_|_|_
(< EYE >) '--. EAR ,---' \ \ \ \ \ \/ |
\"---"/ \ / | \ \ \ \ \/ |
"""// \ ( \ \ \ \/ |
// \ \ | \ \ \/ |
// \ \ \ \/ |
// \ \ | \/ |
___/(______ \ \ / Cerebellum |
/ \ \ \ | /_____________|
| | \ \ // \\
| | \ \ | // \\
| | \ \ // \\
\ / _______) \__ | _______/(__ _______)\_____
| | | | | | | |
| | | | | | | | Motor |
| | | | | | | |
| Visual | | Auditory | | | | | Memory |
| | | | | | | |
| Memory | | Memory | | | Concept | | Activation |
| | | | | | | |
| Channel | | Channel | | | Nodes | | Channel |
| | | | | | | |
4 APR 1979
The system diagram of our automaton changed considerably between
10SEP1977 and 28MAR1979. In September of 1977 we were glad just to have our
first system-wide diagram. It gave us a holistic basis against which to
react, and we have reacted so thoroughly that the diagram (28MAR1979) is
really in a state of high flux.
The main difference is that the system has become simplified,
streamlined, and highly orthogonal. The perception and motor channels are
now seen as running in parallel. In the pristine diagram of 10SEP1977,
heavy black lines represented unknown, "black-box" processing-channels. In
the recent new diagram of 28MAR1979, no such lines have been drawn in,
because broad interaction is envisioned at right angles all across the
various perception and motor channels. In the diagram of 10SEP1977, there
had been separate boxes set up to organize the elongated memory channels, as
if perceptions would be assigned associative tags as they were filtered
through such separate, modular tagging-systems. Now the perception channel
itself is seen as paramount, with orthogonal tagging going on all along the
length of the perception channel.
A process of theorizing has perhaps become clear, that of
"dimensionalizing" complex systems. The visual channel of 12APR1978,
despite all its complexity, becomes just one linear dimension within our
total system diagram of 28MAR1979. When we lay down all the perception and
motor channels in parallel, our notion of "dimensionality" suggests that in
the subsequently orthogonal direction we can include as many different
sensory and motor channels as are feasible. For example, if it were
possible to have a "sixth sense" that registered dangers and perils, we
could just lay it down in a groove alongside the other channels of our
automaton. Such a sense might be used only rarely, but it would not
overcomplicate the host system, because its dimensionality fits right in
with the host design. In futuristic automata, we could have some really
exotic senses present.
6 APR 1979
The Acquisition and Function of Grammar Rules
Now that on 28MAR1979 a new system-wide diagram has been developed, the
problem of grammar has returned to the eminence it held in fall of 1977. At
that time I did develop a complex system of grammar, but I ended up with the
feeling that my grammar system was too much in isolation from the (as yet
undeveloped) other portions of the automaton. The more sweeping the grammar
rules I allowed for in my system, the more I had to conjure up extraneous
inputs necessary for the function of my grammar system. I felt that I was
ending up trying to tackle the problems just by transposing the problems
outwards to the perceptual system. Since the perceptual system had not been
designed, the new grammar system stood in isolation while great burdens had
been heaped upon an almost non-existent perceptual system. However, the
feeling of accomplishment in the grammar area did set the stage for the
perceptual work of April 1978. Meanwhile, in March of 1978 work had also
been done on the motor side of the automaton. It remained to simplify the
volition system in November of 1978. From then on, enough major subsystems
had been roughly designed that we could gain a new look at the total system.
As I tried to integrate the perception subsystem with the total system, I
focussed on the verb problem on 17NOV1978. Over the winter I felt stymied
by the verb problem, but I am holding in abeyance the tentative solution
through an abstract memory. Before designing in detail that abstract
memory, I want to go back to the isolated grammar work of late 1977 and
modify my design which used extraneous inputs from the perceptual system.
There are two forms of habituation necessary in the language system:
the habituated linking of phonemes to form words, and the habituation of
grammar rules. In my present work I have been tending to physically
separate those two domains of habituation: to put phoneme habituation into
the VMHTS "cerebellum" and to let grammar-rule habituation develop right
within the auditory memory channel itself. I am nudged towards using the
auditory memory channel because of the problem of how to enable the mind to
"hear itself think." In the system diagram of 10 SEP 1977, there was an
"Internal Verbal Perception Line" as a sort of internal return loop so that
the motor mind could hear its own output and simply make a choice as to
whether or not that output would actually be spoken. Now, however, it may
prove radically simpler to let the very sound-volition system be hearable
unto itself and serve as its own self-perception system. The way to achieve
that self-perception would simply be to have the rather wide-spread
sentence-generation process deposit its pantothenic results in the succinct
form of an utterance-capsule at the freshest extremity of the self-
lengthening auditory memory channel. Indeed, it sometimes seems to me as if
people like R.D. Palmer have a well-developed "pre-elocution register,"
because they seem to hold before their mind's eye their intended utterance
with the additional ability of altering it quickly in the light of their own
flashing reaction to it. The trick of such a pseudo-register would be as
follows. The "pantothenic" procedure is probably sufficient to generate
sentences for immediate, unpremeditated utterance followed by immediate
deposition at memory-extremity. Such is probably the glib manner in which
young children speak. However, mature speakers with the "pre-elocution"
skill can perhaps let their intended utterances go into memory-extremity
before exceeding volition-thresholds for actual speaking. Thus the
sentence-formulations can be rapidly adjusted several times in the
brief moment before utterance. Of course, the mechanism being described is
by no means simple. There still has to be passage of control lines from the
auditory memory channel to the "effatory" motor channel. I haven't decided
whether those would be old "pantothenic" lines or newly re-affirmed memory-
extremity lines, although the old pantothenic lines are necessary to
generate the reaffirmations. In describing the system in this paragraph, I
haven't yet described what mechanism would be generating those sentences
right within the self-perceiving auditory memory channel.
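A toy sketch of that pre-elocution register in Python (the pressure
counter and the revision rule are invented purely for illustration): the
intended utterance sits at the memory-extremity and can be revised
several quick times while it remains below the volition threshold for
actual speech.

    class PreElocutionRegister:
        def __init__(self, speak_threshold=3):
            self.utterance = None
            self.pressure = 0               # rising urge toward actual speech
            self.speak_threshold = speak_threshold

        def deposit(self, sentence):
            """A generated sentence lands at the freshest memory-extremity."""
            self.utterance = sentence

        def revise(self, new_sentence):
            # Revision is possible only while the volition threshold is not exceeded.
            if self.pressure < self.speak_threshold:
                self.utterance = new_sentence
                return True
            return False

        def tick(self):
            """One moment passes; speak once the threshold is exceeded."""
            self.pressure += 1
            if self.pressure >= self.speak_threshold:
                return "spoken: " + self.utterance
            return None

    register = PreElocutionRegister()
    register.deposit("I goed there")
    register.revise("I went there")         # quick self-correction before utterance
    for _ in range(3):
        spoken = register.tick()
    print(spoken)                           # -> spoken: I went there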
The beauty of today's proposal is that several things fit together
quite nicely. The habituated grammar knowledge resides ubiquitously within
the auditory memory channel and generates sentences from verbal material
interspersed amid itself. Simultaneously the auditory memory channel
perceives and understands the sentences being generated within itself. The
auditory memory channel thus becomes an arena through which all the
conscious knowledge of a mind can interact for such purposes as equilibrium,
synthesis, and communication.
It is beginning to look as if "deep structure" is going to be little or
nothing more than the classifications which arise in the abstract memory
channel. Just how flexible and "habituable" is this channel? Its inputs
from perception are constantly variable through the mechanism of "re-
affirmation." Its outputs to the auditory memory channel are variable with
respect to their destinations within the auditory memory channel. In other
words, there can be a two-tiered process, which nicely keeps the abstract
memory isolated between perception and cogitation. The first tier is the
perceptual re-affirmative inputs. The second tier, that of outputs to
auditory destinations, will probably function by a normal associative
tagging mechanism. That is to say, abstract memory lines will
perpendicularly acquire their output destinations on the basis of
associative tagging through simultaneity. No, I take that back. The
assigning of their destinations must probably occur by conscious learning.
You see, ordinary simultaneity would create too much of a jumble. The
abstract memory is supposed to be aloof and isolated.
Perception fetches no theta-word without simultaneously conveying
through the abstract memory all the concomitant grammatical influences upon
that word.
Perception fetches a theta-word directly, but the abstract concomitance
governs the form and syntax in which the word will emerge to be thought or
spoken.
Suppose a nonsense-word like "kred" is to be pluralized into "kreds"
from perception. The basic theta-word is fetched by recognition. At the
same time the abstract line for plurality is accessing the extremely
frequently used suffix for plurality in the auditory memory channel. But at
the same time this procedure must be being organized by lines governing
syntax. Of course, syntax is part of the "concomitance." That is to say,
syntax arises from the perception itself. Rules of syntax have been
learned. This "learning" has been habituated through the process of
establishing to which destinations the abstract-memory outputs will go.
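The "kred" example can be walked through in a small Python sketch (the
stored forms and the suffix rule are placeholders): recognition fetches
the stored theta-word, and the concomitant plurality line supplies the
habituated suffix from auditory memory.

    STORED_WORDS = {"kred": "kred"}         # theta-words fetched by recognition
    PLURAL_SUFFIX = "s"                     # the much-used suffix in auditory memory

    def pluralize(percept_word, plurality_active=True):
        stem = STORED_WORDS[percept_word]   # recall-vector: the basic stored form
        if plurality_active:                # the abstract line for plurality
            return stem + PLURAL_SUFFIX     # accesses the habituated suffix
        return stem

    print(pluralize("kred"))                # -> "kreds"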
Now, I have been getting the idea that each whole line for syntax will
actually be a node upon a whole "tree" of syntax nodes. In other words,
each line will be truly elongated and not punctiform, but also as a node each
line will lead up from itself to a particular sentence-delta at the summit
of the tree, from whence branches will go down leading to other nodes crying
for imposition of satisfactory "fillers," be they words or concepts.
Now, it is rather clear how a low node-line for a particular percept
can come into play simply as a result of logicoconceptual classification.
It's not so clear how a sentence-delta will come into play. I would like to
make it hinge upon the total state of mind of the speaker. For example, a
playful person could express all of his declarative observations in the form
of questions. Of course, there would not be very many sentence-deltas to
choose from, anyway.
The sentence-delta could come from perceptual classification, but at a
time possibly either equal or prior to the low node-line.
9 APR 1979
The Evolution of Mind. The new system diagram of 28MAR1979 leads me to
speculate that now there is an obvious possibility for the origin of mind.
Since the channels in the diagram are mainly in parallel and coming from the
joint area of perception and motor function, I tend to see that stimulus-
response area as a source both in the diagram and possibly also in
evolution. Until recently I had never viewed the evolution of mind as
possibly so simple a process. But now I can imagine certain steps.
An early step would be the differentiation of cells necessary for
stimulus and response. With such cells, many sorts of complex systems can
arise even before memory is introduced.
Now, I don't claim to know how instinct functions, unless it is a form
of quasi-memory pre-established genetically. But the next step in the
evolution of mind might be the appearance of memory capabilities. Two
subdivisions in this memory-stage might be instinctual memory and learning-
memory. My diagram of 28MAR1979 suggests to me that to add memory to a
neuronal system is to open a real floodgate of possibilities. It makes
sense to add memory only in enormous quantities suitable for the whole
natural lifetime of the organism. Of course, with all these parallel
memories there must also be the associative cross-linkage.
The third step in the evolution of mind might necessarily be the
development of an abstract, logicoconceptual memory, as opposed to merely
sensory and motor memories. In this regard it is perhaps significant that,
in my design of the abstract memory, I have had to derive it from what we
might call the "apex" of the perceptual memory channels. Logicoconceptual
classification does not go directly sideways, but occurs "apically" through
the roundabout route of recognition. Contemplating the 28MAR1979 diagram, I
receive the strong impression that the whole works is just an outgrowth in
temporal extension of the original stimulus-response apex.
Prototype Construction. Although consideration of actual hardware
construction should not be allowed to influence the theoretical design of
our automaton, construction insights may be included in this journal. For a
long time I have hoped to be able to use variable loops of memory channels
in a prototype so that, on the one hand, I could have a real-time machine
without absurd simulations of time or its environment. Now it (again) looks
as though it may be possible to use loops to recirculate the various
parallel channels, and to add increments as they become available. Of
course, the sideways associative networks would also have to be
recirculated.
11 APR 1979
The Abstract Memory
It should be possible to use the 4APR1979 method of "dimensionalizing"
the abstract memory channel.
The abstract channel stands out alongside so many "concrete" sensory
and motor channels.
The first delineation of it is that its inputs probably come only from
sensory channels, and not from motor channels. The second delineation is
that its outputs are only to the auditory memory channel as the vehicle of
language. Therefore, in terms of inputs and outputs, the abstract memory
channel is an organizer of all sensory perception channels for special
presentation to the auditory perception channel. Significantly, the
abstract memory channel can have both input from and output to the auditory
memory channel. Although I have been designing the abstract memory in terms
of the visual memory channel, a blind person can certainly learn the use of
language, and therefore the abstract memory must be open to most or all of
perception.
The possible need for an abstract memory arose in the effort to solve
the problem of access to verbs. I may now use the abstract memory to carry
or mediate the whole grammar system of language. For grammar, the question
is, must abstract memory lines be simple and unconnected, or can they feed
into one another and cause structures to arise within the abstract memory
itself? In other words, what is the "dimensionality" of the abstract
memory? To start out with, it is at least two-dimensional because it has
the length and width of its group of multiple lines.
Each "planar" line represents a classification of elements from
perception. These are not classifications made consciously by the mind;
they are automatic.
It seems to me that the most difficult classifications are the ones
that fetch verbs. But it is not enough just to find a root verb in storage;
for Indo-European languages, the proper modification has to be worked upon
the form of the verb. Such modifications are almost always worked according
to standard rules; otherwise, specific irregularities are recalled. But the
rationale for the functioning of rules has to be present already when the
fetch-order is made for a verb, so I refer to the extra request-structure as
the "concomitance."
I would now like to suggest that the total verb-fetching structure is
itself a concomitance to the perception-driven generation of a noun phrase
perceived as the subject of an incipient utterance.
The mind unconsciously knows that noun phrases call for verb phrases to
form sentences. So perhaps a "first-filler" mechanism latches on to a
subject-function noun phrase and then tries to fill in the role of the verb
phrase. Perception or selection of a subject noun-phrase initiates a
coordinated process.
It is not hard to imagine how a classification-line in the abstract
memory can get hold of a subject noun-phrase. There can be a kind of
"primacy-hook" so that an attempt is made to force any new discrete percept
into the role of a subject noun-phrase. The fact that thereafter begins the
search for the concomitance means that "primocapture" (or the "primacy-
hook") has broken into a kind of absorption-structure.
19 APR 1979
Ideas on Grammar Habituation
- The bridge from a perception to auditory memory involves three things:
word, syntax, and inflection.
- In a way, a salient feature of syntax is probably already present when a
percept is seized upon as one meant to lead to an initial word.
- It is important to distinguish between the two domains which give rise to
the generation of sentences: external perception and internal reflection.
It is possible that all sentence-generation occurs only under the control
of internal reflection through a consciousness-continuity-mechanism. Such
a notion is attractive because it allows a syntax-mechanism to be always
dominant. The idea is, let nothing be perceived (or thought) unless the
syntax-mechanism is attentive to it and ready to latch onto it.
- We expect the more convoluted, more contorted sentences (such as these
journal sentences) to arise from the domain of internal reflection. There
syntax can become quite complex because of the way in which one thought
(or reflection) leads to another. External perception we expect to give
rise to short, simple, direct sentences made in observation of the
external world. This discussion matters significantly, because we are
faced with the question of deciding what causes the lead-off to a
sentence, or what causes the initial syntactic assertion. You see, we may
want there to be a natural tendency for subjects and nominative case to be
the first to assert themselves in the domain of external perception. The
oblique cases we expect to be much more likely to start sentences (such as
this one) in the domain of internal reflection. Such expectations are
reasonable, because the continuity of internal thought can cause each
internal sentence in a chain to pass a syntactic departure onto the
succeeding sentence. Oblique departures can allow internal sentences to
begin with or hinge upon oblique constructions. On the other hand,
observations made about the external domain can be expected to spring so
directly from "prime movers" as subjects that oblique constructions would
not be called for.
- In the auditory memory channel, an "onset-tag" would serve to fetch a
word, while an ultimate-tag would serve to recognize a word. Such close
bifurcation to a word gives the idea of looping through a word before
returning to pronounce a word.
27 APR 1979
More on Grammar
The bridge by which a word crosses from perception to auditory memory
involves three things beyond the word itself: part of speech, syntax, and
inflection. The part of speech is a spontaneous concomitance of the word.
That is, each perception is originally channeled as a particular part of
speech. So a perception heads towards a word along two vectors: the parse-
vector towards syntax and the recall-vector towards the basic, stored form
of the word. The part of speech (plus perhaps also the "syntactic
departure") leads to the syntax, and then the syntax governs the inflection.
Nolarbeit Theory Journal 28 APR 1979
diagram number one
_______
/ \
/syntactic\
\ model /
\_______/
__
round-about connection /\
/--------------------------------\ /
| | /part-of-speech
| | /vector
| | /
| | /
__|__ ___V___ / _____
/ \ re-affirmation line / \/ recall-vector / \
/percept\<----------------------/abstract \-------------------->/stored \
\ / \ memory / \ word /
\_____/ \_______/ \_____/
28 APR 1979
For sentence-trees, we want a structure which is like a filter in
abstract memory. A percept has various vectors in its concomitance, such as
part-of-speech vector, function-vector, recall-vector, and any of various
modification-vectors for such things as plurality, negation, or
conditionality. An S-structure (or sentence-structure) in abstract memory
looks like a ladder which has slots instead of rungs, and furthermore there
are multiple apparitions of the ladder for multiple levels below the "S."
When the concomitance vectors of a percept "address" any slot on any ladder-
level of any S-structure, then the very re-affirmation line quickly wends
its way up through the S-structure to the apical S. Well, perhaps not;
let's investigate further, because meanwhile I have drawn today's diagram
number two.
Nolarbeit Theory Journal 28 APR 1979
diagram number two
\ \ \
\ \ \ \
\ \ \ \ ...
\ \ \ \ = =
\ \ \ \ \ \ \ \ = SM =
\ \ \ \ \ ... \ ... \ \ =...=
\ \ \ \ \ = = \ = = \ \
\ \ \ \ \= SM = \ = SM = \ \ ...
\ \ \___ \ \=...= \ =...= \ \ = =
\ \ / \ \ \___ \ ... \ \ = NP =
\ \< SM > \ / \ \ = = \ \ =...=
\ \\___/ \ < NP > \ = NP = \ \
* \___ \ \___/ \ =...= \ \___
S / \ \ \ \ / \
< Nuc > \ \ \ < V >
\___/ \___ \___ \ \___/
/ \ / \ \
< VP > < MV > \
\___/ \___/ \___
/ \
< NP >
\___/
I need a word to describe that roundabout passage of a percept back
through old recognitions in a cross-over to elongations in the abstract
memory channel. We could call it "anocatamnesis" for "up-and-down memory,"
that is, "supratraversial" memory which flows up, across, and then down
again in an "anocatamnemic" way to achieve an "anocatothenic" tie-in to an
abstract memory line. Forsooth, I need such words so as to think more
readily.
Anyway, in today's diagram number two we see the various slots in the
various levels of an S-structure. The concomitance-vectors of a percept
will reach any such slot supratraversially. It is readily obvious from this
diagram that certain superior (abstract-memory) lines must feed into certain
inferior lines. In fact, the whole rationale of addressing the
anocatothenic S-line is to generate a sentence out of parts tagged by
perception. The question is, what taps the S-line? Perhaps it does not
matter whether the S-line is tapped by perception or internal reflection.
Nevertheless, a subapical percept gets to its slot through the anocatamnemic
filter. Notice that "NP" for "noun phrase" occurs twice in today's diagram
number two, but that the two noun phrases differ greatly with respect to
their function, which is "subject of verb" in the superior instance and
"direct object of verb" in the inferior instance. Obviously, as part of the
anocatamnemic filter, a function-vector has to help select which
anocatothenic noun phrase will be activated. So the concomitance of
perception filters through to the proper slots in the S-structure. (Let
each elongated abstract-memory line be an "anocat.") We might have to think
of the S-anocat as freezing or steadying mental activity in such a way as to
permit all available and relevant percepts to percolate through the supra-
filter to their proper slots in the S-structure. After all, these anocats
are actually controlling or manipulating the recall-vectors of the words
tagged to the percolating percepts. So when a word is being recalled, say,
a noun or a verb, the source-percept (or a series of them) is like a
stanchion to which the various filter-vectors are tied. In fact, it is
possible to glimpse a scene and then close one's eyes and generate a
sentence about the scene. The various percepts become the stanchions.
I just got a possible insight on the inflection-mechanism. Perhaps it
functions because the word-recall-mechanism is two-tiered. The first tier
is indirect through anocatamnesis, and then the second tier is direct
through the re-affirmation mechanism. The first tier could be subconscious,
and the second tier, with the inflections added, could be conscious.
Anyway, in the case of a momentarily "frozen" S-structure, a percept-
vel-word such as a subject NP has activated its proper slot. The system now
has the time of the "freeze," during which all possible percepts can drive
vectors through the supra-filter to reach and fill up the appropriate slots
in the S-structure. Meanwhile, the S-structure effortlessly functions
perpendicularly (to the anocats) to drive slot-fillers away from deep-
structure and out towards the ready-made sentence of surface structure.
However, what does each slot have hold of? Each anocat-slot has been
arrived at via the concomitance-vectors of a percept-stanchion. Somehow the
anocat has to keep hold of the underlying word-recall-vector. So far, the
anocat-slot "knows" what part of speech the word is, what its grammatical
function is, and, per se, where syntax will put it in the sentence. Aha,
what it does not know is what the inflection of the word will be. But the
inflection is highly dependent upon the function-vector, which was used, for
instance, to choose which of several possible NP-slots would be selected in
the S-structure. Now, do we want the function-vector to continue up in the
supra-filter, or to be subsumed in the branchings away from any relevant
anocat-slot? Remember, we still haven't figured out how the anocat-slot has
hold of the word-recall-vector.
The time-order of perception is not necessarily going to be the same as
that of the resulting utterance. The S-structure creates a specific time-
order for the utterance.
The question is, at what point do the words mesh with the anocat-
clusters of information concerning the words?
In general, manipulative control over a word must probably be
considered to operate upon that word as it sits there in the auditory memory
channel. Then the control of inflections of that word is a separate but
related process.
The S-structure that we see in today's diagram number two - its sole
purpose is to generate syntax, temporal word-order. It is not concerned
with the forms of the words, the inflections, or the intonations and
accents. But since the S-structure is indeed declaring the syntax, it has
hold of the words while other processes determine the inflections.
Perhaps the supra-filter fetches the stem of a word plus a vectored
approach to its possible inflections. If so, there is an interstitial
return from the auditory memory channel to the abstract memory channel.
Perhaps the stem of the word is fetched by an onset-tag, and then an
ultimate-tag leads back to the abstract processing-mechanism which vectors
towards the proper inflection. Note that when the ultimate-tag leads to the
abstract channel, it (the tag) has to join up with a function-vector if it
is going to lead to the proper inflection in the auditory channel.
Perhaps the S-structure actually decrees which recall-vector will first
fetch a stem from the auditory memory channel.
When we say that the S-structure has control of the underlying recall-
vectors, perhaps what we mean is that signals from the anocat-slots have to
be AND-gated to the processing-vectors if their progress is to proceed.
I suspect that initial word-order hinges upon perception, and that
subsequent word-order to round out the sentence comes from the S-structure.
Here's a possible scenario. An active component signal flashes at the
primordial point of the surface-structure syntax. This signal cannot re-
trace backwards, so it has to have been "escorted" by an AND-gated vector to
the word-fetch (word-recall) line. So therefore let us say that the word
recall-line in the supra-filter has to go through its own quasi-ladder-
levels to reach the auditory memory channel. A specific percept will pass
through each quasi-ladder-level only if it is AND-gated from the S-
structure. Even, or finally, at the level of surface-structure a fetch-line
(recall-line) has to get the go-ahead of an AND-gate from the S-structure.
If the first item in an utterance gets its surface-structure go-ahead, then
a stem is accessed in the auditory memory channel. However, accessing that
stem could possibly send an ultimate-tag back into the abstract-processor in
search of an inflection. Now, it is obvious that the ultimate-tag would
lead to an abstract area specific to the stem just fetched. For instance,
it might lead to a decision-juncture of five possible case-endings. The
proper selection would then be done by AND-gating with the proper function-
vector back along the recall-path.
A word recall-line, then, is a "knotty" path with abstract AND-nodes
along it. The AND-nodes correspond to quasi-ladder-levels. When input goes
into a cluster of recall-lines that will coalesce into a sentence, each
recall-line has to be helped along by successive go-ahead AND-signals from
the S-structure. It may be that all but the surface-structure go-aheads are
very quickly and haphazardly forthcoming, so that sentence-generation
proceeds quickly and in a manner of parallel processing.
However, the release of surface-structure items is a serial operation.
(Stuttering may be due to a foul-up in this operation.) When a point on the
surface-structure releases its go-ahead AND-signal, we might say that that
point is de-potentiated. It summons a stem, and the stem activates an
ultimate-tag. The ultimate-tag AND-gates with the function-vector to
address an appropriate inflection-engram. Perhaps the inflection-engram
also activates an ultimate-tag to send serial control back to the S-
structure. In fact, control may pass straight to the S-anocat, whence it
quickly percolates to the serially next undepotentiated surface-structure
item. During this process, all accessed engrams of the auditory channel are
being thought or spoken.
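The whole scenario (parallel AND-gated preparation, then serial release
of surface-structure items, with each inflection addressed by the
ultimate-tag AND-gated with the function-vector) can be caricatured in
Python; every name and table below is an invented stand-in for the
journal's lines and tags, not a specification.

    # Surface-structure slots in utterance order; each holds a stem fetched
    # from auditory memory and the function-vector of its percept-stanchion.
    SLOTS = [
        {"stem": "dog",   "function": "subject"},
        {"stem": "chase", "function": "finite-verb"},
        {"stem": "ball",  "function": "direct-object"},
    ]

    INFLECTIONS = {                 # addressed by (stem, function-vector)
        ("chase", "finite-verb"): "chases",
    }

    def release_surface_structure(slots, go_ahead):
        """Serial release: a slot fires only on its AND-signal from the S-structure."""
        utterance = []
        for slot in slots:                   # serial, left to right
            if not go_ahead(slot):           # a foul-up here would resemble stuttering
                continue
            stem = slot["stem"]              # stem accessed in the auditory channel
            # The stem's ultimate-tag returns to the abstract processor, where it
            # is AND-gated with the function-vector to pick the proper inflection.
            word = INFLECTIONS.get((stem, slot["function"]), stem)
            utterance.append(word)           # control passes to the next slot
        return " ".join(utterance)

    print(release_surface_structure(SLOTS, go_ahead=lambda slot: True))
    # -> "dog chases ball"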
2 MAY 1979
In the work of 27APR1979 I was theorizing that the grammar-bridge
involves three vectors: part of speech, syntax, and inflection, above and
beyond the recall-vector. I would now like to pare that number of four down
to just two: a function-vector and a recall-vector.
Syntax doesn't have to be a vector, because the S-structure is there
waiting for the perceptions. Part-of-speech, and inflection, are really
just aspects of a word's function, so a function-vector should cover both
those aspects. The grammar system really doesn't care what part of speech a
word is; it just cares what function it serves. We scholars have
artificially classified the parts of speech. As for inflections, the
function-vector will go into the grammar system, and then it will encounter
a mechanism which takes care of inflections.
It is good to pare down the number of bridge-vectors, so that the S-
structure of syntax will have to keep track of fewer underlying items in
tandem.
So therefore, when a word is going to be recalled from, say, visual
perception to auditory memory, the process has to bridge the chasm filled
with lines of abstract memory. The abstract memory is operationally divided
into parts. The logicoconceptual "cable" allows rudimentary percepts to
join together to fetch such complex items as verbs. This "L-C cable"
(logicoconceptual cable) probably also works to fetch such work-words as all
conjunctions and prepositions. It would be safe to theorize that nouns and
adjectives also can be mediated by the LCC (L-C cable). So word-fetching
recall-vectors are rooted in the LCC of the abstract memory.
The problem is, we need a grammar system that will handle both classes
and specifics.
(Later, in the evening.)
There could be a way of increasing the number of logicoconceptual lines
in an abstract memory cable if bunched lines were allowed to subdivide or if
free, unused lines were allowed to join with dedicated lines and then
unjoin, leaving only dedicated lines. Then the dedications could diverge.
To pursue the syntax question, let's assume that the L-C cable had as
its righthand "wall" a plane of around twenty thousand L-C lines each
representing a word-stem onset-tag in the auditory memory channel.
Obviously, I am establishing a syntax-gulf between the L-C cable and the
auditory memory channel. Instead of directly accessing the auditory
engrams, I am positing lengthy lines over which the syntax-system can
exercise some control.
Strangely enough, my concept of AND-gating is re-emerging. I want a
way for the syntax system to control each logicoconceptual word line whether
as a class or as a specific. The problem is that each of the 20,000 L-C
lines will probably have multiple, even multitudinous outputs over to
historical engrams in the auditory channel. Therefore it becomes imperative
for syntax to control the vertical L-C line itself rather than the numerous
horizontal fetch-lines. So we must set up a system where, sure, the L-C
line can be accessed and be ready to fire over into auditory memory, but
where an enabling (and sequencing) input from syntax must be present if the
L-C line is to fire.
Felicitously, once we require an AND-gate-type input from syntax, we
allow the process also to work backwards, in a way. Looking for a subject
or object of a verb, by blanket AND-gating, syntax can prime the L-C cable
to fire the L-C line of whatever word is solely or most decidedly active.
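Again purely as an illustration, the two directions of this AND-gating can
be sketched as follows; the words, the classes, and the activation numbers
are all invented.

    # Sketch of AND-gating on a logicoconceptual (L-C) line.  An L-C line
    # fires over to auditory memory only when BOTH a perceptual activation
    # and an enabling input from syntax are present.  Working "backwards,"
    # syntax can blanket-prime a whole class and let whichever line is most
    # decidedly active from perception be the one that fires.

    class LCLine:
        def __init__(self, word, word_class):
            self.word = word
            self.word_class = word_class
            self.activation = 0.0        # set by perception
            self.enabled = False         # set by syntax

        def fires(self):
            return self.enabled and self.activation > 0.0   # the AND-gate

    def blanket_prime(lines, word_class):
        """Backwards-blanketing: enable every line of one class, then fire
        only the most strongly active line."""
        candidates = [ln for ln in lines if ln.word_class == word_class]
        for ln in candidates:
            ln.enabled = True
        active = [ln for ln in candidates if ln.fires()]
        return max(active, key=lambda ln: ln.activation) if active else None

    lines = [LCLine("bird", "noun"), LCLine("train", "noun"), LCLine("fly", "verb")]
    lines[0].activation = 0.9            # perception has primed "bird"
    print(blanket_prime(lines, "noun").word)    # -> bird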
Nolarbeit Theory Journal 3 MAY 1979
___________________
M O---| / _________________________________________
___| Cerebellum / |
u O __| / |
/ _| /-------'
s O / | /--------,
/ / \ / | Motor Memory Activation Channel
c O / / \ /\ |_________________________________________
/ / / \ /\ \
l O / / / \ / \ \
/ / / / \ / \ \ _________________________________________
e O / / / / \ / \ \ |
/ / / / \/ \ \____|
s O / / / \_____
/ / / |
O / / | Concept Nodes
/ / |_________________________________________
O /
/
O _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/\
| | _________________________________________
| | |
| |____ |
| \_________________|
| EAR_____________________
| | |
/ / | Auditory Memory Channel
|__/ |_________________________________________
_____________
___/ \
___/ |
___/ ______ ______ |
___/ | | | | |
___/ /| MV |---| NP | |
syntax cable / _____ / |______|\ |______| |
==========================| | |/ \ ______ |
==========================| | S | \| | |
==========================| |_____|\ ______ | V | |
\___ \ | | |______| |
\___ \| NP |\ ______ |
\___ |______| \ | | |
\___ \| NP | |
\___ |______| |
\_____________/
========================================================\
======================================================== \
======================================================== |
======================================================== |
======================================================== /
\ logicoconceptual cable \ /
'---------------------------------------------------------'
__ ___________
/ \\ / \______________________________________________
/ \\ |
| EYE |\_____|
\ //\_____
\__// | Visual Memory Channel
| ______________________________________________
\___________/
3 MAY 1979
The firing of an L-C line may not have to be strict AND-gating with
strictly two inputs. Instead, it may be a kind of summation or integration
process where the second input just pushes it to threshold, but where even
strong primary inputs might suffice to cause firing apart from syntax.
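A toy numerical sketch of this summation-to-threshold alternative follows;
the threshold and the weight of the syntax enable are arbitrary choices.

    # Firing by summation to threshold rather than strict AND.  A strong
    # perceptual input alone may exceed threshold; a moderate one needs the
    # syntax enable to push it over.

    THRESHOLD = 1.0

    def lc_line_fires(perceptual_input, syntax_enable, enable_weight=0.5):
        return perceptual_input + (enable_weight if syntax_enable else 0.0) >= THRESHOLD

    print(lc_line_fires(0.6, True))    # True:  pushed to threshold by syntax
    print(lc_line_fires(0.6, False))   # False: moderate input alone fails
    print(lc_line_fires(1.2, False))   # True:  strong primary input fires alone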
One way to achieve the sequencing desired of syntax or an S-structure
would be to have a system in which control-lines fed into one another. That
is to say, each line could be both a control line for the L-C cable and an
input line to its successor.
I am now beginning to visualize a hierarchy of the "syntax-cable" lying
above and dominating the L-C cable. However, the L-C cable is different in
that it does not have internal sequences. Fibers of the syntax cable would
use "backwards-blanketing" to prime for firing whole classes such as nouns
or verbs in the L-C cable.
So we can visualize a surface-structure level of the syntax cable
resting right above the L-C cable. This surface structure holds a series
of, say, seven syntax lines. Each such syntax-line is in itself an
elongated abstract memory fiber. It has multitudinous historical one-way
connections to the L-C cable class which it governs by backwards-blanketing.
It is not important now whether it goes directly to all the members of the
class or to an intermediate collective fiber. The seven syntax lines are
chained together in a successive way. Now, I don't want to rush the firing-
sequence, so I think I should introduce a moderating mechanism to let each
word get sent into pronunciation and actually get activated before the next
word is crammed down the pipeline. It is one task to activate the words in
the proper sequence, and it is another, equally important task to await the
successful activation of each preceding word before initiating the
activation of each succeeding word.
I would like the ready-for-next-word signal to come out of the accessed
auditory memory channel and into the syntax-cable, perhaps in the form of an
ultimate-tag fed onto a special-purpose abstract bus coming from all
ultimate-tags. For the moment, I am ignoring the interstitial problem of
inflection.
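Under the same illustrative assumptions as in the earlier sketches, the
chaining of syntax-lines that each await the ready-for-next-word signal off
the ultimate-tag bus might be sketched like this.

    # Sketch of the chained surface-structure: each syntax-line both controls
    # the L-C cable and feeds its successor, but the successor fires only
    # after a ready-for-next-word signal returns from the ultimate-tag bus.
    # The words and the bus are stand-ins for whatever the real channels do.

    class SyntaxLine:
        def __init__(self, word):
            self.word, self.successor = word, None

        def fire(self, spoken):
            spoken.append(self.word)          # word sent into pronunciation
            self.on_ultimate_tag(spoken)      # bus reports successful activation

        def on_ultimate_tag(self, spoken):
            if self.successor:                # only now is the next word released
                self.successor.fire(spoken)

    lines = [SyntaxLine(w) for w in ["bird", "flies", "south"]]
    for a, b in zip(lines, lines[1:]):
        a.successor = b

    spoken = []
    lines[0].fire(spoken)
    print(spoken)     # -> ['bird', 'flies', 'south']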
Anyway, I see the need now to hierarchize the insides of the syntax-
cable. Let the bottom level, just above the L-C cable, be the properly
sequenced surface-structure. Then let other levels pyramid upwards to go to
the S-apex of the sentence-tree.
Note that I have finally found a way to fit in the dimensionality of a
transformational-grammar tree-structure. First the logicoconceptual cable
rests like a flat wall of around twenty thousand fibers between, say, visual
perception and the auditory memory channel. But when I want to pyramid the
S-structure of the syntax-cable over the flatness of the logicoconceptual
cable, I take the liberty of imagining that the flatness of the L-C cable is
rotated through ninety degrees so as to interface with the bottom plane of
the S-structure. But actually these cables can just as well be thought of
as round and as internally jumbled, so I make up the following Nommultic
guideline: The internal dimensionality of a quasi-neuronal cable does not
matter with respect to the external dimensionalities which the cable can
enter into.
My reasons for hierarchizing the syntax-cable are not yet totally
clear. It is convenient to build in the tree-structure right now. I
suspect that the syntax cable must be so hierarchized so that later I can
plan in how transformations will occur. For the moment, I want to theorize
that the ready-for-next-word signal will come in at the apical S-level and
go down all branches at once so as to move the firing locus over by one
line.
For weeks now I have been imagining a syntax-tree and trying to fit it
into my general scheme of things based on the diagram of 28MAR1979. So now
I see it as a pyramidal epiphysis to a rotated plane.
4 MAY 1979
Preparing to Assault the Inflection Problem
This morning on Seattle's Pier 51 I have been reading back over the NTJ
from 4APR1979 onwards. I want now to record certain side-ideas without
necessarily following a specific track of thought.
Yesterday it became especially clear that my present theorizing calls
for elements of verbal thought to surface in consciousness in the following
way. We consciously perceive a lot of quick sideways loops made by the
grammar system through or in and out of the auditory memory channel. The
grammar system strings together word-stems and inflections to form
sentences. Those phonetic elements, the word-stems and inflections, are
actually stored and operated upon within the auditory memory channel. All
the complex linguistic processing goes on within the grammar system, and yet
the focus of linguistic processing slips in and out of auditory memory as
the sentence-elements are strung together.
Having thought up the idea of a "linguistic focus" in the preceding
paragraph, I get the idea of an imaginary little ball or bead being moved
around as the focus of linguistic control. For instance, when the control-
focus moves into the auditory memory channel to flow through a word-stem,
the rest of the process has to wait until the control-focus comes back out,
perhaps in the form of an ultimate-tag designating the end of the engram.
But of course, no little black bead is bouncing back and forth in and out of
the auditory channel. Instead, the grammar system is undergoing various
internal states, one of which involves accessing an auditory engram and then
waiting for a return-signal before resumption of operation. Of course, it
is not yet certain that return-signals exit the auditory channel, but it is
presently convenient to think so.
This recent work on the grammar system has been quite novel.
Previously I was designing systems of information-flow where one flow of
information did not govern another. For my perception and motor channels, I
was simply deciding what the basic automatic routing would be. Now for the
grammar system I have begun to design systems of control, where one
mechanism (syntax) must govern the procedures of other mechanisms. I have
had to adjust my thinking to handle combinatorial flows where widely
disparate inputs yield hybrid outputs, as for instance in the case of syntax
governing word-recall. At some times, I have had to think about a line
representing a specific, and at other times about a line representing a
whole class. I have had to think about ways to control both types of lines
without violating their logical integrity.
My long-standing "pull-string" theory of transformational grammar
probably has to do with the selection of which transformation will operate.
By "pull-string" I meant that the very route of access to the sentence-
structure would determine or select the transformation. The basic criteria
are the elements in a normal, untransformed sentence-structure. When the
mind seizes upon any such element, the element can possibly act as a "pull-
string" to yank even the weirdest syntactic transformation into operation.
For instance, if the mind seizes upon an element which it is presently
regarding as a direct object, such a seizure might yank the passive
transformation into operation. Haste or urgency in expression might compel
many kinds of deletion-transformations.
5 MAY 1979
(Function-Cable Plus Ultimate-Tag) Times
Inflection-Cable Yields Appropriate Inflection
It is shortly after midnight and I have decided to sit down and try to
figure out at least a rudimentary instance of inflection. Let's say that in
the first declension of a Latin-type language we are dealing with inflection
as to whether a noun is subject or direct object of a verb.
First of all, I suspect that my so-called "function-vector" is a
function of the syntax system. That is to say, the S-structure, not
perception, determines whether a noun is construed as subject or object. If
a noun is going to be direct object instead of subject, then it will be led
into by a different transformation, a different form of the S-structure.
I'm not absurdly saying that perception plays no role at all; I'm saying
that the information (of subject or object) is transmitted more by how we
perceive than by what we perceive.
Let me review how the concatenated surface-structure will operate.
Each node on it "backwards-blankets" a whole class of potential fillers.
Theoretically, an individual filler thus gets released over to the auditory
memory channel, where it will activate an engram. The grammar system is
meanwhile poised to receive an ultimate-tag out of the auditory channel,
which tag will send a blanket "next-signal" down from the S-apex.
It therefore follows that all ultimate-tags feed into an abstract
memory line which can pass the control-focus back to the syntax-system.
Such a state of affairs is not difficult or unpleasant to imagine. In fact,
it may be a way of causing sentences to arise in the first place.
However, such ultimate tags must also carry the burden of seeking out
inflection. Now, I don't yet know what my hoped-for solution is going to
be, but I do know the available inputs for this black-box mechanism. One
input is the nature of the syntax-node that has prompted the engram-fetch.
Somehow that information has to be kept available so that it can influence
what happens inflection-wise to the fetched engram. The other available
input is inherent in the nature of the fetched engram. Somehow, over the
history of the organism, it must have slowly or quickly come to be the case
that, from engrams of the particular fetched stem, ultimate-tags would not
only activate the return-control bar, but they would also prime for
activation a declension-peculiar slate of possible case-endings to be
selected depending upon the "function-vector" emanating from the syntax-
node. In other words, all ultimate-tags from a certain declension have to
go and "half-activate" or "poise-activate" first a collective abstract line
and through it a set of abstract lines, each holding multitudinous specific
cross-over lines into the auditory memory channel, where they activate an
inflection-engram.
Now the problem remains of how to get the syntax-node function-vector
information over to the single line in the poise-activated cluster of
abstract inflection-lines. Note that each abstract line has multitudinous
concrete cross-over lines.
So far that function-vector is just kind of hanging there. It doesn't
go into the L-C cable, because it isn't needed there.
Of course, we could set up an abstract "function-cable." The syntax-
cable would be "trained" to set the status of the function-cable each time
the syntax-cable fires one of its function-valent nodes, or
even any node. Yes, the surface-structure nodes could become "hard-wired"
to trip the function-cable with each node-firing. Then the function-cable
in turn sends out specific-case blanket-signals to all possible declension-
clusters of case-lines. Well now, dare I cry "eureka"?
If this tentative solution works, it has been achieved by some
methodologically noteworthy inclusions in the system. It seems as though I
just throw in one or several abstract lines whenever I want to create an
isolation-buffer between two mechanisms. I don't think that I would want
the syntax-nodes directly to access and trip the declensional case-clusters.
Anyway, it makes more sense first to centralize and then centrally to
distribute the control lines.
It is also noteworthy that I have had two outputs issue from a single
syntax-node. One output blanket-accesses part of the L-C cable so as to
flush out a word-recall. The other output now sets the abstract function-
cable, which in turn trigger-blankets all associated case-lines regardless
of declension. There is perhaps a third output from the surface-structure
syntax-node telling the next node that it will fire when co-triggered by the
signal percolating in from the ultimate-tag return-control bar.
Let's go over now how words will be fetched and their inflectional
endings be put on. The syntax-node of the surface-structure sends out two
signals. The first signal, a kind of recall-vector, blankets a portion of
the logicoconceptual cable containing all the abstract vocabulary-lines for
a certain part of speech. If one of those abstract lines has been pre-
poised from perception, it now tries to activate all or the freshest of its
concrete cross-over lines. At least one concrete cross-over line succeeds
in activating an engram in the auditory memory channel. That engram is now
blipped consciously throughout the auditory memory channel. Without pause,
its ultimate-tag is activated and outputs to two destinations. No, wait a
minute; it does not go to the return-control bar; it goes only to the
cluster of abstract case-ending-lines peculiar to its declension. It can
not fire a case-line yet, because it does not know which case is
appropriate; so it merely pre-poises all its cases. Meanwhile, the second
signal from the syntax-node has unerringly gone into the "function-cable"
and activated an abstract collective bar which in turn branches out
distributively to pre-poise all appropriate case-lines of all declensions.
Of course, it is the grammatically same case in all the declensions. Once
the two pre-poisings from syntax and ultimate-tag come together, they co-
operate to select one specific abstract case-ending line, from which the
concrete cross-over lines now reach into the auditory channel to activate
the inflectional engram. At the end of the inflectional engram, now finally
the ultimate-tag outputs solely to the return-control bar, to let the
syntax-cable know that the next node can now generate the next word in the
utterance.
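Here, for illustration only, is a sketch of the co-operation of the two
pre-poisings, using a toy Latin-like fragment of declensions and endings;
none of the particular words or tables is claimed by the theory.

    # The stem's ultimate-tag pre-poises every case-line of its own
    # declension; the function-cable pre-poises the grammatically same case
    # across all declensions; the single line standing in both sets is the
    # one that fires the inflectional engram.

    CASE_LINES = {                        # (declension, case) -> ending engram
        (1, "nominative"): "a",  (1, "accusative"): "am",
        (2, "nominative"): "us", (2, "accusative"): "um",
    }

    def select_inflection(stem_declension, function_case):
        poised_by_tag = {k for k in CASE_LINES if k[0] == stem_declension}
        poised_by_function = {k for k in CASE_LINES if k[1] == function_case}
        (selected,) = poised_by_tag & poised_by_function   # exactly one survives
        return CASE_LINES[selected]
        # the inflection-engram's ultimate-tag would now hit the return-control bar

    print("domin" + select_inflection(2, "accusative"))    # -> dominum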
It has been convenient to make groupings of related lines in the
abstract memory and call the groupings "cables." I can group together the
various clusters of abstract inflectional lines and call them the
"inflection-cable." The inflection-cable is totally under the selective
pre-poise control of the function-cable, which is itself directed or set by
the node-lines of the syntax-cable.
By the way, does it seem true that perpendicular concrete-lines are
necessary to interconnect abstract lines? Does it also seem true that the
multitudinous concrete-lines with their re-affirmation lines are the vehicle
of the mechanism of habituation-learning?
I feel bold enough to start theorizing that even plurality-signals
could work their way through the grammar system by much the same mechanisms
as I have already employed today. Number-signals would come from the L-C
cable and they would "start-poise" either the function-cable or the
inflection-cable. Of course, with number you also get the problem of
agreement between subject and verb. Maybe it would be good to have the
number-vector go from the L-C cable to the function-cable, where it could
perhaps be used to affect the number of both subjects and verbs.
8 MAY 1979
/^^^^^^^^^^^^\ /^^^^^^^^^^^^^^\
/ Visual \ ______ / Auditory \
| Memory Channel | / \ ultimate-tag | Memory Channel |
| | / syntax \<---------------|---------------\ |
| | \ node / ______ | | |
| | \______/------>/ \ | | |
| round-about| connection | /function\ | | |
| /-----------|-------------\ |flush \ cable / | | |
| | | | |vector \______/ | ______ | |
| | | _V__V_ | | / \ | |
| _|___ | / \ recall-vector| | / stored \ | |
| / \ re-af|firmation /logico- \-------------|----|-->\ stem / | |
| /percept\<-----|-------->/conceptual\ (onset-tag)| | \______/ | |
| \ / line | \ cable / | | | | |
| \_____/ | \________/ ______V_ | ultimate|-tag | |
| | / \<-|---------' | |
| | /inflection\ | ________ | |
| | \ cable /-|--->/ \ | |
| | \________/ | / stored \| |
| | | \inflection/ |
| | | \________/ |
9 MAY 1979
Ideas on the Habituation of Grammar
It may be premature for me to start attacking the problem of grammar
habituation, because I still have important problems in deciding just what
there is that should be habituated. For instance, I have the problem of
habituating the various transformations of sentence-structures. But I may
be able to list the various ingredients and then devise a kind of "universal
habituator" for abstract memory channels.
I will probably have to get down into quasi-neuronal function to
achieve habituation. In my model, I don't want to use the growth of any
portions of neurons, unless I am forced to. I would rather use the post-
growth logical interconnection of neurons that have already grown into a
sort of tabula-rasa network.
The unusual thing here is going to be the tenuousness of the
habituating influence: language-patterns of experienced, adept speakers as
perceived by a neophyte.
I may try first to plan the habituation of a simple, normal syntactic
structure as found in the diagram of 3MAY1979.
I should keep in mind three important functions of neurons: positive
transmission, inhibition, and frequency-coding. Indeed, I was most eager to
mention inhibition just now, but then frequency-coding occurred to me as
also worthy of mention. Then the idea hit me, after I wrote the first
sentence of this paragraph, that I could perhaps use frequency-coding to let
emphasis on a pattern serve as a means of habituating that pattern.  I
should just keep this idea in mind.
I must also consider the right degree of interplay between genetic hard-
wiring and habituated learning.
When an infant is first saying nouns to refer to percepts, I can
imagine a primitive S-structure causing each instance of the enabling of a
firing-link from a perception channel over to the auditory engram. In other
words, the pyramidal S-structure acts like an element of volition, perhaps
only by operating whenever attention is directed to things that will be
verbalized. For present purposes, we can assume that the S-structure either
always tries to operate or is involved subconsciously by some such mechanism
as attention or volition.
In establishing the logicoconceptual cable, it seems safe to theorize
that first the perpendicular recall-lines develop, going from perception to
the auditory channel. We can then think of a mass of L-C fibers pervading
and inhibiting the recall-lines. The L-C fibers could come in for all nouns
as a part-of-speech class. It would then be the normal dormant function of
the L-C fibers to inhibit the noun-recall lines. Activation of the L-C
fibers would disinhibit, but not activate, the noun-recall-lines.
The recall-lines and the L-C fibers would be running at right angles to
each other. Each recall-line would adopt the L-C fiber or fibers closest to
and inhibiting the recall-line as the incipient logicoconceptual line for
that recall-line and its related percepts, both historical and future.
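As a sketch only, this resting-state inhibition and its removal might look
as follows; the words, the class, and the control path are invented for the
sake of the example.

    # Control by disinhibition.  A noun-recall-line positively tends to fire
    # whenever its percept is active, but an L-C fiber holds it inhibited in
    # the resting state; activating the fiber lifts the inhibition without
    # itself activating anything.

    class RecallLine:
        def __init__(self, word):
            self.word = word
            self.percept_active = False    # driven from the perception channel
            self.inhibited = True          # resting-state inhibition by the L-C fiber

        def fires(self):
            return self.percept_active and not self.inhibited

    def disinhibit_noun_class(recall_lines):
        """Activation of the class fiber (e.g. from above) merely lifts
        the inhibition on every member of the class."""
        for line in recall_lines:
            line.inhibited = False

    bird, train = RecallLine("bird"), RecallLine("train")
    bird.percept_active = True             # the infant sees a bird
    disinhibit_noun_class([bird, train])
    print([l.word for l in (bird, train) if l.fires()])   # -> ['bird']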
Now we run into the problem of the re-affirmation lines. A re-
affirmation line from visual perception is a normal associative tag that
goes from the visual channel over to the L-C cable but not necessarily
directly over to the auditory channel. We could think of the L-C cable for
nouns as being essentially flat. The recall-line associative lines are
already present but dormant before the advancing front of visual perception
and memory gets down the channel to those lines. A new percept through
anocatamnesis stimulates a pristine logicoconceptual line, during the
absence of inhibition. It is rather easy for us to say by way of design
that the extremity-fresh associative-tag recall-line will now form a rather
permanent connection with whichever pristine logicoconceptual line is now
being supratraversially stimulated. Since the fresh associative tag flows
over a theoretically flat array of logicoconceptual lines, it can easily
bond with the proper line among all the lines. Furthermore, this idea of
re-affirmation is not necessary for the one-time function of the grammar
system, but rather it serves constantly to update the system and perhaps
even to allow gradual changes.
Now we have made clear a technique of "habituating" the
logicoconceptual lines based on the idea that the recall-line associative
tags are single-shot, use-only-once bonding-devices. It's a habituation
system which is essentially pre-hard-wired.
The logicoconceptual fibers for nouns can be controlled as a class. So
far, we are saying that they work by resting-state inhibition. That is, the
L-C fibers will not access auditory memory unless disinhibited.
The same sort of associative-tag system that reaches from the visual
channel to the L-C cable should also in turn reach from the L-C cable into
auditory memory.
Lines that go from percepts to fetch words are "onset-tags," because
they fetch the word at its beginning. If a disinhibited L-C line fetches a
word, it is through anocatamnesis. Now, the auditory channel is a serial
memory. We might as well envision it as flat, so that extremity-fresh,
unused onset-tags can be bonded from the flat L-C cable over to the flat
auditory channel. We plan that this system shall re-affirm a word each time
it is thought. The information of the phonemic series of the word shall
duplicate itself at the freshest extremity of the auditory memory as a
temporal series of nodes. Of course, our grand design from 5MAY1979 for
inflections specifies that word-engrams shall have ultimate-tags. These
ultimate-tags would be needed just for recognizing heard words, let alone
facilitating grammar. The question is, during re-affirmation, can an old
ultimate-tag be passed forward to become a new ultimate-tag? If it can be,
it would not be done in isolation; that is, the new ultimate-tag would be
created (bonded) only if it had a place to go to outside of the auditory
channel. The trouble is, we are now getting into re-affirmation lines that
have to be link-bonded at both ends, although I suppose even the visual ones
were that way.
What I suppose we could do is to have the ultimate-tag be re-affirmed
only after its destination had been re-affirmed. In other words, wherever
the historical ultimate-tag went, it would be to some kind of abstract-
memory line, such as in an inflection cable or a return-control bar. With
regard to the auditory channel, the ultimate-tag is an output line. It is a
juncture from the last phoneme in an engram to an outgoing associative tag.
I have thought of a reason why it may be better to have ultimate-tags
be re-affirmed through their destinations rather than through their sources.
If we did it through their sources in the auditory memory channel, then we
might get interference from interstitial ultimate-tags residing somewhere
higher in the channel at the end of syllables that chanced to be components
of a word that we are dealing with.
Still, I am uneasy with the prospect of slippage of the exact bonding-
point where the ultimate-tag terminates an engram.  There may have to be a
kind of ultimate-tag generating-system in which an ultimate-tag issues forth
from any phoneme followed by a pause, a non-phonemic interlude. Actually,
every temporal node-level of the auditory memory channel could have a
tabula-rasa potential ultimate-tag, but perhaps only re-affirmed ones would
get into the grammar system. Anyway, if we ignore the prospect of slippage
for a while, we can proceed while counting on re-affirmation by destination.
When a fiber of the auditory memory channel is serving as a departure-
point for a pristine ultimate-tag, that tagging might cause that fiber to
remain activated long enough for the bonds of re-affirmation to form at both
ends of the new extremity-fresh ultimate-tag.
10 MAY 1979
Ideas on a "Universal Class-Habituator"
Actually, what I started trying to design yesterday is a "Universal
Class-Habituator," or "UCH." The idea is to habituate classes of things
into a syntactic structure, and then specific elements activated within the
classes will follow the habituated pattern so as to create a grammatical
sentence.
I think I might be able to devise a science of universal habituators as
a help in designing one. (If I may speak facetiously, it would be nice even
to have a "meta-habituator.") The science can develop if I specify a lot of
things about habituators and their design.
The term "classes," in the sort of automaton which I am designing,
refers to elongated abstract-memory lines, each of which can singly
represent a whole class of specifics. So, even though these abstract lines
might be hierarchized in a two-dimensional array, their elongation means
that they always have that extra dimension which is that of time.
One idea which I have gotten in this field of habituator-science is
that of a certain reciprocity between a class and its specific elements. I
want to use this reciprocity to make a rather novel suggestion about how a
mind might habituate a pattern of classes: A mind sets up a proper example
of a target pattern using specific elements from the classes, and then
proceeds to habituate the pattern through a process of re-affirmative
repetition. This suggestion is based upon the reciprocity-notion that, in a
way, you can get at each class by getting at any one of its elements. In my
work of the earlier days of this month, I developed the idea that all active
vocabulary items peculiar to a language are bundled into separate classes
according to their possible grammatical functions, so that they can be
controlled by syntactic nodes. If the items are bundled, it may be possible
temporarily to use a specific element to get hold of the whole bundle in
habituation.
Another habituator-science idea is that it may be necessary to solidify
or consolidate each habituated instance of abstract interconnection before
proceeding to a new habituation. Or, more generously, it may be necessary
just to solidify each level of habituation before proceeding to a higher
level.
It may be a general rule in habituator-science that inter-abstract
habituation must both occur and be solidified by means of the so-called
multitudinous perpendicular concrete lines, otherwise known as "associative
tags" or "re-affirmation lines." But we then have a problem figuring out
how the first concrete line gets strung from one abstract line to another:
it evokes the image of a spider trying to spin a web.
A couple of other, distant possibilities arise for first habituations:
the splitting of a single abstract line to yield two divergent but related
ones; and the joining of abstract lines first by (educated) chance and then
subsequently by re-affirmation.
I would now like to return to yesterday's meandering ideas while
keeping in mind the above ideas on "habituator science."
It may be necessary to theorize that the auditory channel has the
ability to "fuse together" a reasonably short series of sounds. A fused
series could then be fetched by an onset tag, be activated in full, and then
come to a natural halt, with a built-in tendency to bond-generate an
ultimate-tag. With such fusion, we might avoid interference from
interstitial ultimate-tags. The fusion might be inaugurated whenever
sufficiently intense sounds are recorded in a rapid and unbroken series. Or
the fusion might be controlled by an attention-mechanism. Such fusion might
allow nominative cases to be treated differently and more directly than
oblique cases. The fusion could work by concatenating node-slices of the
channel. Recall could cause a new fusion at the extremity of the channel.
Since yesterday, we have been trying to design how an infant would
habituate first nouns and then verbs. We used re-affirmative associative
tags to let noun-recall-lines accrete onto logicoconceptual fibers in a flat
array of abstract memory.
We can now envision a single, rudimentary abstract line serving as the
primordial line of the first nascent sentence-structure. Without worrying
about how thick or thin that line is, let us envision it as the first
linguistic control-line employed by the infant. We know how infants like to
point at a thing and blurt out the word that they have learned as the name
of the thing. Such use of isolated nouns is what we are presently trying to
describe and explain.
We might as well draw up plans for a two-tiered volition system for
verbalization. One volition line will access a sentence-structure to
generate verbal thought, and the other will both generate thought and cause
it to be spoken aloud. In other words, what I am trying to do is refrain
from designing the S-structure in a vacuum. If it is part of our
"habituator-science" that single abstract lines must be accessed by
multitudinous concrete lines, then we had better make provisions for that
process at the pyramidal top of any large or small S-structure.
It is beginning to look as though the physical flow of "candidate"
abstract lines will be provided genetically. Thus the infant's first quasi-
syntactic control-line will already be available in the general vicinity
where it is needed.
What I have not yet decided is whether or not there will be epitaxial
layering of the flat arrays in the abstract memory channel. This decision
may matter, if it matters whether the infant's first control-line is at the
highest or lowest level.
Anyway, we want to use a kind of "random dynamics" (q.v.) method to
start the infant blurting out its first nouns. I suppose that we can first
acknowledge and then ignore that random-dynamics mechanism by shoving it off
into the motor areas. But it will cause the first laying-down of the
concrete-fibers going from the speech-control and thought-control (mirabile
dictu) volition-lines over to the S-structure.
I suppose it is perfectly harmless to imagine the infant's first noun-
control-line as being at a middle level, rather than at a highest or lowest
level. Then we have the option of building either upwards or downwards.
Notice that we have required the S-structure to be subject input-wise
to a flat associative-tag array, just as we did in the case of the visual
and auditory memory channels. Two days ago I was reading in the Engineering
Library of the University of Washington that the cortex of the brain, if
laid out flat, would measure approximately fifty centimeters by fifty
centimeters by two millimeters. If the human brain can be thought of as
flat, then I am encouraged when I find myself designing such a flat
automaton.
The class of logicoconceptual lines for nouns develops control of all
the recall-lines for nouns. Although it might have been easy to let the
first noun-recall-lines operate individually in motor-control of nouns, we
want to shift the control upwards to the rudimentary S-structure. So we are
speculating that the first control from above operates downwards to inhibit
the function of the recall-lines below. As I recall, such operation is
considered to be typical in the brain: lower functions tend positively to
operate, unless inhibited from above.
So we might have to envision a kind of "spread-out cell" ("SOC") that
operates vertically so as to let a superior abstract-line exercise
widespread collective control over all the logicoconceptual noun-lines in
the flat array below. Actually, the SOC-cells above could also come from a
(superior) flat array, as long as they always acted as a group.
If we design a built-in tendency for downwards inhibition to occur,
then we have a natural, unawkward control-link which the rudimentary S-
structure can easily gain access to. So when the infant's rudimentary S-
structure fires downwards through the SOC-lines, it disinhibits the flat
array of logicoconceptual noun-lines.
We have next to figure out how verbs would come into the infant's
system. We want the infant to learn to say such things as "Bird fly" or
"Train go."
Verbs will have one ingress into the auditory memory channel, in that
the infant will certainly hear a lot of verbs, even if he or she doesn't
understand them. Verbs will start to be learned when some sort of initial
link-up is made between perception and the stored sound of a verb.
Over this past winter I did a lot of theorizing on the verb-problem.
Here is where I want the logicoconceptual cable to really do its work.
Perhaps I can create a new flat array in the L-C cable, as a place for
verbs. Let's say that the verb-array lies below the noun-array. However,
the verb-array has no concrete lines coming in directly from perception.
Instead, the verb-array gets its inputs through multiple contacts with the
noun-array above it. Now, the verb-array is a flat array of abstract lines.
Each abstract verb-line develops associative-tag recall-lines over to verb-
words stored in the auditory channel. Instead of a concrete-line reaching
"leftwards" into perception, each verb-line will have a kind of concrete
"trailer" line that goes underneath the (superior) noun-array for its whole
breadth. Of course, each verb-line will have multitudinous such "trailers,"
as re-affirmative lines along the time-dimension. Now, each inferior
trailer-line can make contact with any number of superior noun-lines. Thus
the noun-lines can "vote" to select a verb-line. The most appropriate verb
can be selected, perhaps through frequency-coded propagation, by whichever
verb-line got the most positive-votes and the fewest inhibitions. You see,
we could design it so that a trailer-line, getting some but not all of its
requisite nodes filled, would be inhibited somewhat by each unfilled node.
Thus a simple verb of few, but fully filled, inputs could override a complex
verb of many, but some unfilled, inputs. Of course, a complex verb of fully
filled, many inputs would override a simple verb of fully filled, but few,
inputs.
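The voting could be sketched as follows; the verb frames and their
requisite noun-line contacts are invented solely for illustration.

    # Verb selection by "voting" from the noun-array.  Each verb-line's
    # trailer touches a set of requisite noun-lines; filled contacts count
    # as positive votes and each unfilled contact counts as an inhibition.
    # A fully filled simple verb beats a partially filled complex verb, and
    # a fully filled complex verb beats a fully filled simple one.

    VERB_TRAILERS = {                       # verb -> requisite noun-line contacts
        "fly":  {"bird", "wings"},
        "go":   {"train"},
        "soar": {"bird", "wings", "updraft"},
    }

    def select_verb(active_nouns):
        def score(verb):
            needed = VERB_TRAILERS[verb]
            filled = len(needed & active_nouns)
            unfilled = len(needed - active_nouns)
            # sort key: no unfilled nodes first, then the most positive votes
            return (unfilled == 0, filled, -unfilled)
        return max(VERB_TRAILERS, key=score)

    print(select_verb({"bird", "wings"}))            # -> fly  (beats partially filled "soar")
    print(select_verb({"bird", "wings", "updraft"})) # -> soar (complex, fully filled)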
Now, the need for these trailers to have access to the full superior
noun-array poses some question as to where the actual logicoconceptual verb-
lines should be placed so as not to get in the way of the trailers.
Notice that this system seems to allow the neophyte over time to learn
refinements in the selection of verbs, via the re-affirmative function.
Wherever the logicoconceptual verb-lines are located, we next have the
problem of how to control them as a class.
11 MAY 1979
More Ideas on Grammar Habituation in Infants
The idea that higher-level functions in the human brain typically
govern lower-level functions by inhibiting them may prove useful if we think
of this phenomenon as an evolutionary building-block in the genesis of
complex systems. Specifically, we have the idea that percepts will tend to
recall words. Then we are going to inhibit the recall from on high. Once
we establish inhibition as the normal state of affairs, we acquire positive
control through negation of the inhibition.
Yesterday I was getting the idea that processes which I have been
thinking of as from top to bottom may actually be upside-down in the cortex
of the human brain. That is, I have been designing syntactic control
structures as governing flat arrays of data-flow located on lower levels.
All these putative structures may actually be upside-down in the human
cortex, so that the relatively more significant control structures may
occupy an inner, more centralized position as opposed to the relatively less
significant associative arrays. This is a tentative prediction of the
Nommultic theory that is possibly susceptible to empirical verification.
Anyway, for the nonce I will continue in my present mode so that I can think
of structures as being both superior in control and superior in location.
Reading over yesterday's work, I get the idea that the logicoconceptual
cable may be stratified into as many layers as there are parts of speech.
It might not be good for me to lend credence too soon to such a formulation,
but I can always re-arrange it. Even if there are part-of-speech strata,
the visual and auditory channels would probably not have multiple strata.
If there are part-of-speech strata, I suppose that superior disinhibition-
lines would send down disinhibition "runners" from above. It is not yet
clear whether the disinhibition "descender-runners" would dynamically access
only a distinct stratum while physically permeating all the strata.  Growth
would have happened locationally, while learning would happen dynamically.
It may be advisable to establish a notion that various neuronal
networks occupying a given volume can be highly transparent and permeable to
one another. Although I have been relying upon orthogonality in my design,
I have tended to design networks as physically separate, lest they interfere
with one another. Thus I planned certain arrays as being flat, and then
perhaps stratified, while I kept in mind how things would fit together. But
now I have to keep in mind that such caution might not be necessary if the
arrays and networks can physically intermingle while preserving their
logical integrities.
Anyway, I am on the verge of adding in the control of verbs in infants.
Yesterday I designed how the logicoconceptual verb-array could be a layer
right below the noun-array (or even a more general array). If we assume
that the noun-array was set up first temporally, then we can have it over
and done with when it is time to establish superior control over the verb-
array.
The noun-array came to be controlled as follows. Horizontal volition
runners activate the rudimentary, single S-line. This S-line rests amid the
SOC-cells that descend to counterinhibit the logicoconceptual noun-lines.
Let's say the infant gets pretty adept at blurting out nouns and is
starting to learn verbs. The activation of counterinhibiting SOC-cells down
to the verb-array will be causing verbs to be spoken. We can imagine that,
from the original S-line for nouns, to which volitional fibers had been
reaching, now additional associative fibers will be advancing further in the
same direction as the volitional fibers had been going, and these newly
activated associative fibers will be bond-attaching to a collective cable
that abstractly flows through and controls the SOC-cells for verbs.
So now we have designed that horizontal volition will activate first
the noun S-line and then the verb S-line. The verb S-line develops as
something which automatically follows the noun S-line in time. But now I
want to do something which differentiates the volition signals for the noun
S-line and the verb S-line. You see, the mind must try to say both types of
words, nouns and verbs, immediately at once. So the volition system will
try to slow down and differentiate. My insight here is that the flat,
horizontal, volitional sentence-inputs will wander up one level higher and
that in response a higher S-line will be established. The original, lower
volitional fibers will cease to be true volition and will now function as
return-lines carrying the information that the subject-noun has been
expressed and that it is now the turn of the predicate-verb to be expressed.
So therefore those return-fibers will go to the verb S-line, which has
remained low while the controlling noun S-line has shifted up one level
along with the volitional fibers.
These arrangements may not be exactly correct, but the general idea of
the insight is clear: Let volition wander upwards and let the former
volition-fibers become informational return-lines.
Today's earlier work was done on Pier 51. The last several paragraphs
actually took some mental effort and I was pacing around a bit trying to
hold on to the train of thought and seeking the right sentences to express
my thought. But I finally had enough written down so that I felt I could
safely leave the continuance for this evening or tomorrow morning. However,
after I left off writing, the "insight" deepened considerably and now, in
the evening, I feel that I may have a major structural development.
The deeper insight involves closing today's loop of control-lines on
top, counterinhibition-lines going down the side, recall-lines back across
the bottom, and then ultimate-tags back up to the top again, so that a very
dynamic square or rectangle is created. I have not thought out this hollow-
square scheme in great detail, but, once the idea occurred to me at the
conclusion of this afternoon's writing, I began to visualize several
possible major features of the scheme.
The idea is to have the whole generative grammar system operate as a
kind of hollow square. (My next sentence may sound facetious, but there is
sincerity in it.) If we visualize looking at the square from the future
back into the past, the flow is counterclockwise. The logicoconceptual
cable is at the bottom left, and the auditory memory channel is at the
bottom right. Habituation of syntactic sentence-structures is to take place
across the top of the square. The control-nodes will be at the top left,
and volition will be coming in at the top right.
Nolarbeit Theory Journal 14 MAY 1979
_____ __________
/ \ | |
/ \<--------------------------------| Volition |
\ S / |__________|
Sentence \_____/
Structure |
| _____ _____
| / \ B | | return-control
+------------->/ Verb \<-------------| R-C | line
| \ / |_____|
| \_____/ /|\
A | | |
_V____ | |
/ \ | __|__
/ Noun \<-----------+ | |
\ Phrase /<-----------|------------------| R-C |
\______/ C | |_____|
| | /|\ /|\ /|\ /|\ /|\
| | | | | | |
Noun O O Verb | | | | |
Spread- /|\ / \ Spread- | | | | |
Out Cells / | \ O O Out Cells | | | | |
/ | \ | | | | | | |
O O O O O | | | | |
/ / \ \ | | | | | | |
O O O O O O | | | | |
/ / \ \ | | | | | | |
O O O O O O | ultimate-tags |
/ / \ / \ \ | | | | | | |
O O O O O O O O | | | | |
/W\ /W\ /W\ /W\ /W\ /W\ | | | | | | |
||||||||||||||||||||||||| /W\ /W\ | | | | |
logicoconceptual noun-lines ||||||||| recall-lines ______________________
ooooooooooooooooooooooooooo========================/ \
===========================ooooooooooo=============\______________________/
verb-lines auditory memory channel
17 MAY 1979
The Comprehension of Natural Language
When a sentence comes in to be understood, it is, so to speak,
"captured" by the auditory memory channel. The mind will attempt to
understand the sentence immediately, but, if an error is made, the sentence
still remains in memory for additional attempts at understanding it.
Usually, the mind tries to understand a sentence as it comes in,
without waiting for the end of the sentence before processing the initial
elements. I personally have had many sporadic instances in which I have
raced ahead too quickly in taking meaning out of sentences, with the result
that I have had to readjust my comprehension as the rest of the sentence
came in. I have even made erroneous comprehensions by quickly processing
just the first few syllables of a word, such as a compound noun. I assert
that I was correctly processing what I heard up to each point, but I was
often surprised when a sentence kept coming in in a way that invalidated my
initial comprehension of the incomplete sentence. In several surprising
instances I have comprehended the (incomplete) utterance and started
thinking out a reaction to it, seemingly in the natural interval as a
speaker was poised between syllables of a word. Actually, I was probably
allowed the interval measurable by the amount of time it took for the
corrective syllables to register.  My point is that processing advances as
each morpheme comes in and is recognized. Some morphemes therefore convey a
temporary ambiguity for the brief time that those morphemes have been spoken
but not the immediately following morphemes. Of course, the falsely
comprehended, incomplete utterance can be completely free of ambiguity. For
now, I will call this ambiguity "subset-ambiguity," because, during a very
brief interval, a subset of the morphemic string is erroneously but validly
processed as if it were a complete and independent utterance.
The following scenario shows how subset-ambiguity might operate.
Suppose a traveler returns to his home town after a period of being
incommunicado. Hungry for news, he is greeted upon arrival by an
acquaintance whose countenance seems to grow sad and troubled at seeing the
traveler. The acquaintance says, "Oh, my poor friend, I was so very sorry
to hear about the sudden death of your brother Tom's racehorse." This
example is rather extreme, but it illustrates two things. Firstly, various
initial subsets of the sentence can obviously be comprehended in such a way
as to describe a catastrophe, namely the death of the brother. Secondly,
the example shows how expectancy or uncertainty may play a role in how an
utterance is comprehended.
It will become necessary to reflect upon whether expectancy plays only
a psychological and epistemological role, or also a role in syntax and
grammar.
Anyway, I am at the point now where I want to start theorizing about
how syntax and grammar operate in the comprehension of language. I have a
general guideline in mind. Whatever language-comprehension is, it involves
the establishing of the proper associative connections among the elements of
a perceived utterance. Merely to deposit a sentence in memory is not to
know the information contained within the sentence: witness a talking
parrot.
Is it possible that expectancy or urgency may use frequency-coding to
strengthen the associative "valence" of an important sentence? Frequency-
coding might be used to enhance permanently the associative bonding of an
associative tag.
An incoming sentence can drastically change and update our knowledge
about the world or the universe. Let's discuss change in our knowledge of
familiar elements from the external world. Our associative knowledge is
clustered about the words naming the elements. Thus the same word for an
element may exist at many different points within the stream of the auditory
memory channel. However, those different points are accessible through
internal associativity up and down the auditory channel itself. Of course,
that internal associativity establishes only identity (or perhaps also
similarity), not the information-rich relationships among separate words.
Other than by temporal contiguity, separate words can not be associatively
related within the auditory channel alone. Within my Nommultic theory as
developed so far, separate words can be related only by the associative
tags, of which there are two types in the auditory memory channel: onset-
tags and ultimate-tags. I am now reserving judgment about all the various
features of the onset-tags and ultimate-tags.
I am getting the idea that there may be two ways of achieving
extraparietal association among separate words of the auditory channel. The
first way would be through the avenue of one of the other senses, such as
vision. For example, a visual scene could serve to associate many separate
words describing things within the scene. The second way would be through
associations brought about in language-comprehension. The question arises,
can the comprehension-mechanism manipulate associations without getting
entangled in them? I suspect that it can, by being abstract. When a
sentence, either fresh or stale, is run through the comprehension-mechanism,
associations are generated which remain as a momentary part of the
historical record of that moment in the history of the organism. The
sentence itself, whenever re-activated, goes through the comprehension-
mechanism and re-establishes the "momentary but permanent" associations.
(Wouldn't it be awful if we had to re-activate each original sentence of
knowledge in order to make use of the information in the sentence?) The
more important a sentence is, the more frequently we are likely to run it
through the comprehension-mechanism so as to associate it quite broadly with
our general knowledge.
The next question is, can the comprehension-mechanism lay down
associations among separate words in a word-to-word fashion so that it is
not immediately necessary to go into the other sensory channels or even into
the auditory channel in a nonverbal way? (For all I know, perhaps this
wonderful performance is the only thing it can do.)  Although it may thus string
words together into relationships, the meaning (and subsequent logical
generativity) of the string depends upon the associativity of each
constituent word out among the total sensorium. If words are strung
together but there is a paucity, a dearth, of sensory information about the
words, then it will be difficult for richly specific mentation to ensue.
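As a toy sketch only, such word-to-word association-laying might look like
this; the bond store and the device of repeated passes are merely
illustrative stand-ins for re-affirmation and frequency-coding.

    # Each pass of a sentence through the comprehension-mechanism bonds its
    # words pairwise; running an important sentence through repeatedly
    # strengthens the bonds.

    from collections import defaultdict
    from itertools import combinations

    associations = defaultdict(int)      # (word, word) -> bond strength

    def comprehend(sentence, passes=1):
        words = sentence.lower().split()
        for _ in range(passes):          # more passes for more important sentences
            for a, b in combinations(sorted(set(words)), 2):
                associations[(a, b)] += 1

    comprehend("the bird flies over the train", passes=3)
    print(associations[("bird", "flies")])    # -> 3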
A clue is developing now as to what provides the "motivation" or
impetus for sentence-generation (thought-generation). An angular,
crankshaft-like process operates. When one sentence, whether from outside
or inside, goes through the comprehension-mechanism, associative
relationships are established among words regardless of the prior
associative import of the words. However, that prior import can immediately
come into play as a kind of logical tension which prompts the generation of
new sentences. Thus the mechanism of syntax and grammar mediates thought
all up and down the system, and the verbal product of thought is deposited
at the freshest extremity of memory.
18 MAY 1979
As the first word of an utterance comes into the auditory memory
channel, the mind commences processing that word through its grammar-filter.
Obviously, just as I did with sentence-generation, I will want the incoming
sentence to submit to the control of a syntactic structure. I will
disregard the problem of psychological expectancy by imagining that the
incoming sentence is being perceived in relative isolation, so that the
hearer does not know what to expect.
If each word in the vocabulary of the language could serve as only one
part of speech, then, clearly, we could use that part of speech as a
criterion or selector for entry into any of multiple syntactic structures.
We could then concentrate on the inter-related concerns of function and
inflection.
Perhaps I should establish a principle of widest-possible spread-out in
the through-filtering of a perceived word into the comprehension-mechanism.
I have in mind a pair of English sentences such as the following:
"This man likes music."
"This man I like."
In accordance with the (presently emerging) "saturation principle," an
initial English noun-phrase, such as "this man" above, should try to run
through all fitting and available syntactic structures, in a process of
being impeded on branches where things don't fit. A syntactic path which
absorbs the whole utterance "wins the day," so to speak, because
associativity saturates and continues from the winning path, while it dies
out along any obstructed branches.
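By way of illustration, the saturation principle might be sketched as
follows; the lexicon and the two syntactic paths are toy stand-ins keyed to
the pair of example sentences given above.  A path that absorbs the whole
utterance "wins"; if more than one path did so, more than one entry would
survive in the returned list.

    LEXICON = {"this": {"DET"}, "man": {"NOUN"}, "likes": {"VERB"},
               "music": {"NOUN"}, "i": {"NOUN"}, "like": {"VERB"}}

    PATHS = {
        "S -> NP V NP":        ["DET", "NOUN", "VERB", "NOUN"],
        "S -> NP(topic) NP V": ["DET", "NOUN", "NOUN", "VERB"],
    }

    def saturate(utterance):
        words = utterance.lower().split()
        winners = []
        for name, slots in PATHS.items():
            # a path is obstructed at the first element that does not fit
            if len(slots) == len(words) and all(
                    slot in LEXICON.get(w, set()) for slot, w in zip(slots, words)):
                winners.append(name)      # this path absorbed the whole utterance
        return winners

    print(saturate("This man likes music"))   # -> ['S -> NP V NP']
    print(saturate("This man I like"))        # -> ['S -> NP(topic) NP V']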
I would now like to suggest that ambiguity can be seen in the light of
the saturation-principle. An utterance is ambiguous as long as it fully
traverses two separate syntactic paths unto naturally ending destinations.
Notice that this description of ambiguity also covers the "subset-ambiguity"
mentioned yesterday, because subset-ambiguity is fleetingly ambiguous, until
post-subset morphemes carry the true and full utterance to the end of the
true path. Now, is there a problem of going back and negating or
invalidating the momentary subset-ambiguity? I suspect that such a problem
does not exist, because there is only a momentary disturbance insofar as the
hearer's psychological belief-structure is momentarily shaken by the
erroneous comprehension. The belief-structure is a self-adjusting network
of extremely free associativity. An erroneous comprehension stemming from
ambiguity can therefore just ripple away or be cancelled out within the
"panpsychic" belief-structure.
In the language-learning of an organism, comprehension must actually be
learned before generation can be learned. In recent weeks I have worked on
generation first, because it lent itself to analysis. I knew that I had
much to finish in the theory of generation, but yesterday I suddenly
realized that I might as well move into comprehension, if it seemed
presently easier. So yesterday I started with sweeping generalizations,
because I hoped thereby to move deftly in upon the quasi-neuronal switching
functions at the heart of comprehension. I expect to make the
comprehension-process more or less a reversal of the generation-process. I
am not yet sure whether I will have to create the analog of a two-way street
with separate lanes, or whether the generation/comprehension flow will be
bidirectional along the same transmission lines. It is quite possible that
the various associative recall-tags will be bidirectional, but that there
will have to be separate, co-mirroring syntactic structures for generation
and comprehension. Whereas inflectional endings had been added on
by syntax-nodes in the generation-process, inflectional endings in the
comprehension-process will probably serve to guide words to the proper
syntax-nodes, during the saturation-process.
I may use "neuron-fatigue" as a way of shifting through the variant
possibilities for comprehension of an ambiguous utterance. Any
malappropriate, initial explication of an ambiguity would slow down in its
firing on secondary or postsecondary comprehension-passes, so as eventually
to yield in favor of a fresher, perhaps more appropriate explication. You
see, it is important that comprehension not have to be consciously labored
at.
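A tiny illustrative sketch of such fatigue-driven shifting; the competing
readings and the numbers are arbitrary.

    # Whichever reading fires slows down on the next pass, so a fresher
    # competing reading can win a later comprehension-pass.

    readings = {"reading-A": 1.0, "reading-B": 0.9}    # initial responsiveness

    def next_reading(fatigue=0.5):
        winner = max(readings, key=readings.get)
        readings[winner] -= fatigue          # the responder fatigues
        return winner

    print([next_reading() for _ in range(3)])
    # -> ['reading-A', 'reading-B', 'reading-A']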
6 JUN 1979
Disinertiativity through Transabstractivity
I need two clear and succinct terms to describe two important processes
in the mind.
I think the first term will be "transabstraction." A non-thinking
central nervous system can be quite efficient at receiving inputs and
bridging them over to the various outputs. However, depending upon the
relative level of evolution of the CNS, the bridging of inputs to outputs
will probably have to be rather direct and in strict registry. On the other
hand, in a mind capable of abstract thought, the conscious mind has all its
outputs freely at its disposal. Even if the outputs are kept in registry at
lower levels (of reflex or habituation), the conscious mind has an
overriding particularistic control of the outputs. The mind can prolongedly
contemplate its options for output, and then freely select and initiate the
desired motor activity. Likewise the conscious mind has free access to its
permanent experiential memory stretching back through its lifetime. The
concept of ego, ensconced within the conscious mind, wanders freely up and
down the memory channels, creates new verbal thought, and freely reviews its
options for motor initiation. Perhaps I should use the word
"transabstractivity" to refer to the free linkability of practically all
data and information within the thinking conscious mind.
The second term which I need will probably be "disinertiativity," to
describe the mind's process of potentiating the action of previously inert
objects. Many things, such as the surface of the moon of Earth, would
remain extremely inert, were it not for the transabstractional or
transabstractive disinertiativity of the mind. This process was discussed
in the Nolarbeit of 28JUN1975. Transabstractivity in disinertiation
releases the logical tensions potentially existing between inert objects.
14 JUL 1979
The Implicit Mechanism of Attention
The distinction between inherent and solely operational qualities.
When we build a transabstractive mind, as described so far by the Nolarbeit,
many of its functions are designed explicitly into the system. For
instance, it shall be able to associate memories and to habituate rules of
grammar. However, I am beginning to suspect that there may be additional
functions which become available even though we did not try to design them
into the system. My first idea of what to call them is "implicit
functions," but I wish I had a more descriptive term. Anyway, the idea here
is that these unexpected implicit functions spring into being just because
we have built such a powerful transabstractive system. Although the
examples which I am going to present may be erroneous, the idea may still be
valid. First of all, there may be some really unexpected implicit
phenomena, such as the ability to dream, to sleepwalk, or to be hypnotized.
But what I had in mind first was the possible explanation of a mechanism of
attention.
In trying to design the uppermost intellectual level of a conscious,
thinking mind, I have operated on a plane where there must be freedom and
nothing but freedom. That is to say, the fact that the design was of the
very highest level of a thinking mind has perhaps certain consequences.
One consequence is perhaps that the design at the top is actually much
simpler and less complicated than many systems operating at lower levels,
such as perhaps in the cerebellum. [See Albus in BYTE, July, 1979.] Such
simplicity would perhaps be due to the rationale that freely thinking
consciousness must of necessity be quite isolated and protected from
possible interference from lower levels. If lower levels must interfere,
they should interfere only in special or extreme circumstances, such as
those of instinctual drives or serious dangers.
It may be that a mechanism of attention results unexpectedly and
implicitly from the design of the topmost, transabstractive level.
Attention would work in the following manner. We all know that we can be
attending to one voice among many, and end up really hearing and
understanding only that voice to which we "paid" attention. Well, part of
Nommultic theory says that we actually heard and retained as engrams the
whole milieu of sounds, but that only those sounds were remembered to which
access was gained and maintained via associative tags. (Now, as ideas are
beginning to dawn on me, will it be the idea of selective fixation of
associative tags, or the idea of the importance of the employment of
reaffirmation tags?) This explanation of attention may have to go hand-in-
hand with development of ideas about language-comprehension. Anyway, when
we are listening to one speaker among, say, three speakers, we came to be
listening to that speaker by a process of association, and we remain
listening to and attentive to that speaker by a process of association.
Perhaps, then, any vortex of associativity actually constitutes a so-called
"mechanism of attention."
If we have been attending intensely to a phenomenon in our perception,
then far back into our memory channels there are instances of associativity
going on with respect to aspects of our attention. As each new slice of
perception is perceived, it is associated, not just with other incoming
perceptions by simultaneity, but also with recall-fetched memories that have
quickly moved to the freshest extremity of the memory channel. Thus a past
engram-slice becomes duplicated at the freshest extremity, and this process
enables attended percepts quickly to become associated with vast past
experience. After all, each duplicated engram-slice is now like a node
present in two places at once.
Since association is happening so intensively to each previous slice of
the attended perception, each fresh slice is avidly taken up by the same
state of affairs. Of course, if we hear something shocking or surprising,
we may stop paying attention as our thinking drifts away through
associations from that which shocked or surprised us.
How can I say that associative attention puts us into a state of
expectancy, expecting more of the same from whatever has seized our
attention? Well, I can make use of my old friend, the concept of neuron-
fatigue.
As fresh and old engram-slices are being associated with what we are
(probably with one main sensory channel) paying attention to, many old
engram-slices are going into semi-activation but then losing out to those
engram-slices which reach full activation. Then neuron-fatigue eliminates
the fully activated engram-slices, but leaves many semi-activated memories
in a sufficient state of residual activation as to constitute,
psychologically, a state of expectancy. The mind is expecting certain
associations simply because it is readier to make them than others. So a
mechanism of attention is implicit when you build a conscious, intelligent,
associative, transabstractive mind.
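One way this residual-activation picture could be caricatured, assuming
arbitrary activation numbers and an invented threshold:

    # Implicit attention: associating with each fresh percept semi-activates
    # many old engram-slices; the fully activated ones fatigue away, and the
    # residual semi-activation left behind constitutes the "expectancy."
    def attend(engrams, percept_similarity, threshold=0.8):
        """engrams: dict name -> residual activation (the running expectancy)."""
        for name in engrams:
            engrams[name] += percept_similarity.get(name, 0.0)
        fully_active = [n for n, a in engrams.items() if a >= threshold]
        for n in fully_active:
            engrams[n] = 0.0           # neuron-fatigue eliminates the full responders
        return fully_active, engrams    # anything left above zero is the expectancy

    state = {"voice A": 0.0, "voice B": 0.0, "doorbell": 0.0}
    recalled, state = attend(state, {"voice A": 0.9, "voice B": 0.3})
    print(recalled, state)
    # ['voice A'] {'voice A': 0.0, 'voice B': 0.3, 'doorbell': 0.0}
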
21 NOV 1979
Comprehension
We can simplify the attack on language-comprehension by imagining a
society of minds that enjoy the generativity of only one syntactic tree:
subject - verb - direct object. We can say that the language contains only
nouns and verbs.
Actually, language-comprehension will start whenever the perceiving
mind seizes upon an initial morphemic word as fitting into any one of
perhaps many syntactic lead-offs.
From now on here I want to posit a logicoconceptual cable for purposes
of comprehension. Let us reflect as though the comprehension L-C cable were
separate from the generation L-C cable, and then later we can see if we
would want both L-C cables to be one and the same.
When the initial word of a sentence is perceived, it goes to the
freshest extremity of the auditory channel. As it filters down through the
auditory channel, it will stimulate some one ultimate-tag, or group of same-
word ultimate-tags, most strongly. That one ultimate-tag, with or without
its fellows, will "supratraversially" access and activate a conceptual fiber
in the logicoconceptual cable for comprehension. Reaffirmation will then
occur as the new ultimate-tag of the fresh auditory engram, through
simultaneity, attaches itself to the same abstract fiber which was
supratraversially stimulated in the L-C cable. Thus updating, and perhaps
even initial language-learning, occur.
Now, what good does it do the incoming word to have stimulated an
abstract fiber in the comprehension L-C cable? Well, that process
tentatively establishes the part of speech of the word, because the abstract
fibers of the L-C cable are layered (bundled) according to part of speech.
Note that nothing has been established as to the grammatical function of the
word, such as subject or direct object. Function will be determined either
by interpretation (based perhaps on word order) or by clues from inflection.
Now let's say that a second word, such as a verb, comes in. The
incoming verb will access and reaffirm an abstract fiber in the verb-layer
of the L-C cable. Then a second noun, a direct object of the verb, will
access and reaffirm a second abstract fiber in the noun layer of the L-C
cable. Of course, the comprehension-processing of the first noun as subject
will have to have taken place quickly already so that no confusion occurs
between the two nouns.
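A small sketch of this reaffirmation step, assuming a toy L-C cable with only
a few hypothetical fibers and treating an ultimate-tag as a simple label:

    # Comprehension L-C cable, caricatured: each incoming word stimulates the
    # abstract fiber in the layer for its part of speech, and "reaffirmation"
    # ties the fresh auditory engram onto that same fiber.
    LC_CABLE = {                        # fibers bundled (layered) by part of speech
        "noun": {"man": [], "music": []},
        "verb": {"likes": []},
    }

    def comprehend_word(word, part_of_speech, engram_tag, cable=LC_CABLE):
        fiber = cable[part_of_speech][word]   # supratraversial access to the fiber
        fiber.append(engram_tag)              # reaffirmation: fresh ultimate-tag attached
        return fiber

    for t, (w, pos) in enumerate([("man", "noun"), ("likes", "verb"), ("music", "noun")]):
        comprehend_word(w, pos, engram_tag=("auditory", t))
    print(LC_CABLE["noun"]["man"])    # [('auditory', 0)]
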
Now, this sort of sentence states a relationship involving two nouns
and a verb. The sentence can convey knowledge of the relationship. Mere
activation and reaffirmation of L-C cable abstract fibers do not set up the
internal representation of the relationship carried in the sentence. The
relationship is carried by the syntactic structure of the sentence, and
therefore the relationship must be reconstructed internally through
activation and reaffirmation of a syntactic structure residing in the
"syntax cable" of the abstract memory channel serving comprehension.
Let's look at that dynamic bundle of potential meaning and
information, the verb. So far we have merely accessed an abstract
conceptual fiber representing the concept of the verb. In the experiential
history of the organism, knowledge of the meaning of the verb must
henceforth (either long or briefly) be associated both to the subject and
the direct object. In other words, the direct object acquires some of the
semantic data underlying the verb. But the direct object is epitomized only
as a morphemic word occurring here and there in the experiential stream of
the auditory memory channel. Those happenstance morphemic-word loci are not
going to be the fastening point to which semantic data coming from a verb
are now henceforth tied as a result of comprehension of the sentence. No,
those auditory engrams are too phantom-like; instead, the abstract
conceptual fiber for the direct object will be the steadfast point to and
from which all semantic connections are made. The auditory memory engrams
are just a means of rapid access to the abstract conceptual fiber.
The abstract conceptual fiber for a noun can lead semantically to data
stored in all five sensory memory channels. Thus, when the semantic
background of a noun is invoked, perhaps during time-extended or lingering
comprehension, all sorts of associations fan out into the memory channels of
all the senses. Then, within the individual memory channels, the
comparator-effect allows circuitous re-entry back into the abstract domain.
In the case of verbs, there is an escalation away from the five raw
sensory memory channels. The abstract conceptual fiber for a verb has its
fanning-out "feeler-apparatus" which has hold of the semantic bundles
underlying the verb.
Dare I say the following? When semantic contact is to be made from a
verb to its subject and direct object, the quasi-reaffirmation process
causes the feeler-apparatus of the verb to connect to its subject and to its
direct object. Of course, the reaffirmation process works only when the two
things to be affirmed have been stimulated separately. It is the syntactic
structure of the syntax cable which will proffer abstract fibers as
candidate elements for the feeler-apparatus to take hold of and affirm. It
is semantically essential that the syntax cable guide these affirmations,
because other plausible guides, such as juxtaposition of word-order, are not
reliable or strict enough. So inflection guides the syntax cable, and the
syntax cable guides the affirmative constructions of relationship, and the
resulting structures constitute the new condition of knowledge engendered by
the comprehension of the sentence.
It is possible, during reflective thought, that an abstract conceptual
fiber may perhaps "vibrate" in concert with its dually sensory and semantic
background out among the various sensory memory channels. Suppose that one
concept is momentarily of supreme importance and interest to the mind. The
abstract fiber of that concept will be undergoing heavy use. It will not be
important of itself, but rather its situational ambience will be making it
important. Thus, over and over again, the conceptual fiber will
semantically project out into the psychic ambience, and the psychic ambience
will circuitously rush back to the important fiber. Meanwhile, all sorts of
words will be coming to mind in a brainstorm of verbal mentation. Some of
the words and psychic currents will trigger generation of sentences of
thought. In fact, when a sentence is generated, it immediately goes through
the above described comprehension-process, so that its peculiar semantic
relationship can be reaffirmed through syntactically guided structuring
within the logicoconceptual cable of comprehension, which cable may or may
not be the same cable as that of generation.
The generating of a sentence is the verbal expressing of a
relationship. Before generation of the sentence, the relationship is only
pregnantly available to the mind. If the mind is generating a sentence
about what it is perceiving externally, then the relationship is coming from
without the mind. Such an external relationship can be very clearly
perceived and yield a strong formulation as a sentence. However, suppose
that a sentence is being generated as a result of internal meditation or
reflection. Then a new relationship is about to be discovered and is about
to come into expression in the sentence. Or perhaps merely an old
relationship is about to be reiterated. At any rate, the about-to-emerge
relationship is pregnantly available within the interior psychic ambience.
Logical "tension" builds up and finds release in the generation of the new
sentence. Such logical tension can come from one new external fact entering
a mind. If one bombshell of a fact enters a mind, then the equilibria of
many old relationships can be disturbed, so that a vortex of thought quickly
flares up and slowly subsides.
On the one hand, generation of a sentence sets up a relationship, but
it is the immediate comprehension of the new sentence that affirms the
relationship and leaves a structured memory trace of it. The fortuitous
network of associations that generates the sentence does not have to show up
again unchanged as that network which remains structured in memory after
comprehension of the sentence. Typically, a very tenuous network might
generate the sentence, and then, after immediate comprehension, there might
remain a network as a structure carrying a broader and fuller semantic
impact. That is to say, a tenuous, fleeting relationship finds expression
in a sentence, but thenceforth the full impact of the relationship stands
formalized in the sentence, ready to operate strongly each time the sentence
is run through comprehension. The shift in semantic impact between
generation and comprehension of a sentence is perhaps where mental
creativity arises. The fortuitous network that generates a sentence is not
in itself creative, or is it? Well, we could say that the initial,
fortuitous seizing upon the relationship is creative because the
relationship was perhaps never perceived before. And each time that the new
sentence is run through the mind, the success of the creativity can show
itself, because new aspects of the relationship can be realized with each
comprehension.
23 NOV 1979
In the months that I have been thinking now and then about language-
comprehension, I have been concerned with determining just what "trace" is
left after a sentence has been comprehended. Quite early it was clear that
there would have to be an episodic trace of the sentence as a sequence of
sounds in auditory experiential memory. However, it is now seeming clear
that the purely sensory recording of the sentence in auditory memory
probably plays little role in the organism's structure of knowledge. Even
if the words and morphemes of the sentence have their full complement of
onset-tags and ultimate-tags, those tags just lead to abstract conceptual
fibers in the abstract memory. The import of the sentence is in the
relationship which it asserts among the concepts. We might say that a
sentence adjusts the background of all its concepts. Although primitive or
rudimentary concepts may be rooted in sensory memory, more complex or
abstract concepts are probably rooted in a structure of relationships to
other concepts.
The concept of a transitive verb is probably strictly furcated into a
set of relationships with subjects and a set of relationships with direct
objects. During comprehension, each set is probably attached on the basis
of syntax or inflection. Each transitive verb implies attachments based on
relationship. A verb as a concept consists in its relational history.
The temporal progression of conceptual knowledge is constantly being
"woven" as verbal thought momentarily specifies the varying relationships
among concepts.
A comprehensional relationship of a concept cannot ultimately be
distinguished from its semantic "definition." However, the semantic
definition is like the statistical "average" of all the previous
relationships of the concept. When an incoming sentence asserts a
relationship, the heavy preponderance of past relationships is used in the
process of comprehending the sentence, and at the same time the gist or
scope of past semantic definition can henceforth be slightly altered if
there is any novelty in the newly asserted relationship. The
comprehensibility of a concept grows each time the concept is newly
comprehended. Either a standard meaning is reinforced once again, or a
shift in meaning is adduced.
If someone tries to use a verb in an improper way, then the asserted
relationship will not be believed. A believed relationship will be affirmed
in subsequent associative thought, but a discredited relationship will die
out for lack of reassertion.
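A possible caricature of that averaging, assuming a relationship can be
flattened into hypothetical feature weights:

    # Semantic "definition" as the running average of all past relationships
    # of a concept: each new, believed relationship slightly shifts the
    # stored gist, while a discredited relationship leaves no trace.
    def update_definition(definition, count, new_relationship, believed=True):
        """definition and new_relationship map feature names to weights."""
        if not believed:
            return definition, count       # disbelieved assertions die out
        count += 1
        for feature in set(definition) | set(new_relationship):
            old = definition.get(feature, 0.0)
            new = new_relationship.get(feature, 0.0)
            definition[feature] = old + (new - old) / count   # incremental average
        return definition, count

    gist, n = {}, 0
    gist, n = update_definition(gist, n, {"animate": 1.0, "musical": 1.0})
    gist, n = update_definition(gist, n, {"animate": 1.0})
    print(gist["animate"], gist["musical"])   # 1.0 0.5
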
The picture which I am painting here is perhaps hard to believe.
Thousands of unitary abstract fibers hold the concepts in a mind. The
rudimentary concepts are defined in terms of sensory memory, and the complex
concepts are defined in terms of one another, or of rudimentary concepts, or
of sensory memory. The actual attachments of relationship to a concept are
effected by means of the myriad concrete associative tags which are singly
available to the conceptual fiber at each pulsed moment of ratiocination.
The length, the long dimension of the abstract conceptual fiber signifies
its temporal aspect and its changeability over time.
Any new relationship is understood in terms of potentially all previous
relationships.
26 NOV 1979
It is really quite a claim to state that intelligent consciousness
operates not in the sensory memory channels, but in an immense conduit of
single-concept fibers. It is quite a jump from orderly, even rigid, sensory
memory into the stick-forest of conceptual fibers. At first blush, it might
sound ridiculous to assert that a stick-forest of fibers can think. But the
thinking results from or as the interconnections among the fibers.
In this light, I would like to mention copulative verbs such as "be" or
"become." It looks as though verbs are going to have to have "portals" or
partitions for at least subject and direct object, if not also for semantic
attributes of the action expressed by the verb. It is possible that there
could be two main classes of the semantic tags attached to a verb-fiber:
subject-linkable tags and object-linkable tags. When a verb-concept is
invoked, those two linkage-classes have to be assigned. When the action of
a verb upon a direct object is comprehended, what supposedly happens is that
the object-linkable tags become linked to whatever concept is momentarily
the direct object.
The momentariness of interconceptual taggings is very important. It
may be that the tags are extraordinarily strong only when first established.
When a non-transitive, copulative verb such as "be" or "become" is
used, what we probably get is the unimpeded through-linkage of the subject-
portal to the quasi-object-portal. The various forms of the English verb
"to be" are limited in number, and each form probably has its own concept
fiber. During comprehension, the syntax cable probably causes tagging of
the subject-fiber to the verb-fiber and of the verb-fiber to the quasi-
object-fiber. Once these tags have been set up, the subject-fiber is
recentissime linked, via the rather meaningless verb-form, to its predicate
nominative as a quasi-object. Now, the linking by the syntax cable is
momentary and transitory, but these affirmatory links are permanent.
Henceforth, invocation of that one-time subject-fiber must also tend to
cause invocation of that one-time predicate nominative. But I think I see
now that the residual cross-linkage must not go via the verb-fiber of the
form of "to be." No, the verb-fiber only established the linkage.
Thereafter, the verb-fiber must be free to service other accounts, so to
speak.
Suppose that the mind comprehends and believes the sentence, "John is a
midget." Henceforth, the concept-fibers for "John" and "midget" should both
be capable of invoking each other. Each of the two major fibers has become
like a subtag to the other. There is no interference from the verb-fiber
for the word "is," because it did not really attach anything of its own
nature to either the subject or the predicate nominative. Thus forms of the
verb "to be" serve as manipulative or copulative instruments of the syntax
cable.
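A minimal sketch of that copulative cross-linking, with invented fiber
objects standing in for the abstract conceptual fibers:

    # Copulative verb as a manipulative instrument: the form of "to be" sets
    # up a direct, permanent cross-link between the subject-fiber and its
    # predicate nominative, then drops out of the link entirely.
    class ConceptFiber:
        def __init__(self, name):
            self.name = name
            self.links = []             # permanent affirmatory cross-links

    def comprehend_copula(subject, predicate_nominative):
        # the verb-fiber only *establishes* the linkage; no trace of it remains
        subject.links.append(predicate_nominative)
        predicate_nominative.links.append(subject)   # each becomes a subtag of the other

    john, midget = ConceptFiber("John"), ConceptFiber("midget")
    comprehend_copula(john, midget)
    print([f.name for f in john.links])     # ['midget']
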
The next question is, what associative linkage-tags are established by
a transitive verb? I just got an interesting idea about the complementary
roles perhaps played by abstract memory and sensory memory. I would hazard
a guess that a transitive verb does not establish a tripartite link
involving subject, verb, and direct object. During comprehension, the
syntax cable establishes various links through simultaneity. However, it
looks as though the links to the verb have to be directional in nature.
Suppose that the only abstract link established between subject and
direct object is that of simultaneity. In other words, the subject is
linked to the subject-portal of the verb, and the object-portal of the verb
is linked to the direct object, but there is no direct, permanent link, via
the verb, between subject and direct object. There ought not to be any link
via the verb, if the verb is to operate independently. But the various
words are linked to the auditory memory channel by simultaneity.
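A sketch of the portal idea, again with invented names; the deliberate
omission of any subject-to-object link is the point being illustrated:

    # Transitive verb-fiber with "portals": the subject links only to the
    # subject-portal and the direct object only to the object-portal, so no
    # permanent subject-object link runs through the verb, and the verb
    # stays free to service other sentences.
    class VerbFiber:
        def __init__(self, name):
            self.name = name
            self.subject_portal = []    # subject-linkable tags
            self.object_portal = []     # object-linkable tags

    def comprehend_transitive(subject, verb, direct_object):
        verb.subject_portal.append(subject)
        verb.object_portal.append(direct_object)
        # deliberately no direct subject-to-object link here: only simultaneity
        # in the auditory memory channel ties the two nouns to the same episode

    likes = VerbFiber("likes")
    comprehend_transitive("this man", likes, "music")
    print(likes.subject_portal, likes.object_portal)   # ['this man'] ['music']
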
We can view the comprehension-process in the light of dimensionality.
The work of the last year or two has tended to map out a very simple
dimensionality, namely: that past experience is along one dimension, while
all present experience takes place on a plane or slice which processes
information along pathways lying at right angles to the dimension of the
past. This idea of a single plane or slice for present experience keeps
things simple by allowing an indefinite number of slices of present
experience to be added on as time passes. Each slice of present processing
can be interacting with memory and control traces lying anywhere along the
past dimension, but past slices are never altered, and new connections are
made only in the plane of each fresh, momentary slice. Information can flow
in and out of past slices, but past pathways are basically unalterable,
except for the possibility that neuron-fatigue can shift the routing of
information.
So there we have the picture. The preterite mind is a firm structure
of belief and knowledge, and only the present extensions of that structure
can admit of change. When we comprehend a sentence in the present, we are
just reweaving the ties among fibers parallel to the past dimension.
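The append-only character of this dimensionality might be caricatured as
follows (the slice contents are invented):

    # Dimensionality of memory: an append-only sequence of slices along the
    # past dimension; processing and any new connections happen only in the
    # freshest slice, while old slices can be read but never rewritten.
    class MemoryChannel:
        def __init__(self):
            self.slices = []                        # the past dimension, never altered

        def new_moment(self, connections):
            self.slices.append(dict(connections))   # only the fresh slice is writable

        def read(self, index):
            return dict(self.slices[index])         # information flows out of past slices

    channel = MemoryChannel()
    channel.new_moment({"percept": "voice A", "tags": ["fiber 17"]})
    channel.new_moment({"percept": "voice A", "tags": ["fiber 17", "fiber 92"]})
    print(len(channel.slices), channel.read(0)["tags"])   # 2 ['fiber 17']
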
The grammar of a sentence operates through the syntax cable to tell us
what connections to make among the concepts named in the sentence.
28 NOV 1979
Probably the most substantial concept in a mind is the concept of self
or ego. It is almost taxing to concede or realize that this present theory
of the mind requires the concept of ego to be physically located as a
unitary fiber in the abstract memory channel. We can try to justify the
elongation of the "punctum" of ego by saying that a concept has to remain
relatively constant over time. So even if we did not think of a concept as
a unitary point, we would be forced into positing and attaching a unitary
line or fiber to achieve continuity or constancy over time. For instance,
if we tried to think of a concept as a ringlet of points joined into a rough
circle, we would still have to attach a unitary fiber somewhere so as to
carry the concept out of the past and into each new moment of the present.
So the concept of ego is one fiber among many in the abstract memory
channel.
The ego-fiber has both experiential and linguistic associations. In my
writing of this year I have distinguished between the single abstract fiber
and its numerous "concrete" associative tags. In a way, the tags are really
"concrete" because they are laid down by a concrete happening in present
experience. At any rate, I may have to introduce a concept of
"neuroresistance" to keep an abstract fiber separate and buffered from its
concrete tags. Let us say that an abstract fiber has no neuroresistance and
that each concrete tag has some small but important measure of resistance
to signal-propagation. The idea is not to slow signals down, but to gate
them. We don't yet care whether the resistance is in the lines or at the
synapses.
Once we have the idea of neuroresistance, we can develop the idea of
"proximolocality." Whenever several associative lines happen to converge on
any point or locus, a process of summation gives added or enhanced
significance to that point. For instance, a signal reaching it will branch
out in multiple directions. If associative lines had no neuroresistance due
to either time or distance, then any point in a network would be logically
the same as any other point. Indeed, that situation is what we want for an
abstract fiber, which is just an elongated point. But for associative links
we want differences to arise in the conceptual import of various points.
Ratiocination proceeds under the constraint of time. By dint of
"proximolocality," a dense, concentrated point can briefly function as a
unit appreciably distant from its surrounding points. Now, I am describing
a topography as if it were in a flat plane. It is hard to imagine the
physical accretion of indefinitely many associative links onto a thereby
dense point in a single slice of topography. So we use myriad successive
slices that function logically as if they were, in many but not all ways, a
single slice. The abstract fibers ensure that there is no resistance
between the slices, and the associative links introduce enough resistance so
that points can become conceptually distinct.
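A toy numerical sketch, assuming arbitrary resistance values for fibers and
tags:

    # Neuroresistance and proximolocality: a signal crosses an abstract fiber
    # with no loss, loses a little at each concrete associative tag, and a
    # locus where several lines converge sums their arrivals into an
    # enhanced conceptual significance.
    FIBER_RESISTANCE = 0.0     # an abstract fiber is just an elongated point
    TAG_RESISTANCE = 0.1       # each concrete tag gates the signal slightly

    def propagate(signal, hops):
        """hops: sequence of 'fiber' or 'tag' crossings."""
        for hop in hops:
            signal -= TAG_RESISTANCE if hop == "tag" else FIBER_RESISTANCE
        return max(signal, 0.0)

    # three associative lines converging on one locus: summation = proximolocality
    arrivals = [propagate(1.0, ["fiber", "tag"]),
                propagate(1.0, ["tag", "tag"]),
                propagate(1.0, ["fiber", "fiber", "tag"])]
    print(round(sum(arrivals), 2))   # 2.6 -- the convergent locus gains significance
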
Now, there is a certain complementarity having to do with whether a
concept is associated to abstract fibers or sensory memory fibers. An
ultimate measure of the power of a neuronal mind lies in its ability to
discriminate, to detect differences. The linguistic use of a large
vocabulary relies upon the ability to discriminate. "Quaeritur": Does
abstract ratiocination tend to merge concepts or to differentiate them?
The primary and ultimate differentiation comes from the sensory memory
channels. Many of the concepts in the abstract memory channel are
associated with words stored in the auditory memory channel. Those unique
words themselves are a means of sharp discrimination. The conceptual word-
fibers receive discriminating information from all the sensory memory
channels. Thus words of a reasonably concrete nature are sporadically kept
differentiated by the senses.
However, highly abstract words have no direct reference to the sensory
memory channels. The abstract notions are not perceived directly by the
senses, they are only comprehended in thought. Since we assume that all
thought has its roots in the senses, we expect to find that even the most
abstract of concepts are just built up through association with the more
concrete concepts.
A baby could assemble a concept of ego around the concept-fiber for its
own name. Then, for sentence-generativity, it could gradually link its
self-concept to the abstract fiber controlling the word "I" in the English-
speaking auditory memory channel. Thus we have a rationale by which the
acme of conceptual density can gradually travel among the points of the
stick-forest of the abstract conceptual plane. No matter what abstract
fiber the baby originally uses to gather information about itself, the
incipient use of language will formalize a specific linguistic fiber as the
center of the ego-concept. Subsequent reaffirmations will consolidate the
primacy of the linguistic fiber.
Is it possible that some concepts can operate only via syntactic
relationships? Take an abstract concept such as "honesty" or "courage."
What associations will lead away from the word-fiber? For one thing, there
will be associations to specific episodes when the word was used. But it
was used for an idea rather than a direct perception.
We do not necessarily have to design into a mind the stipulated ability
to entertain an abstract concept such as "honesty." Instead, a linguistic
mind may have the implicit ability to handle such concepts. Suppose that we
designed a linguistic mind to handle concrete perceptions and words naming
them. What would such a mind do with words naming abstract concepts? For
one thing, the mind would implicitly be able to record each word in audition
and to set up an abstract conceptual fiber to govern the stored word.
My claim that abstract concepts can operate only on the
transabstractional level is reinforced if we agree that such concepts are
never introduced to the individual by any other route than the quasi-
transabstractional route of language. I mean, abstract concepts are
introduced through the domain of language, and that domain is where they
always remain. How primitive people would first develop and name such
concepts is another question.
3 DEC 1979
The Vault of the Mind
Lately I've been taking pains to specify notions which I should keep in
mind as I continue my design for artificial intelligence. For instance,
there is the notion of neuronal prodigality, by which I should never shy
away from positing a structure just because it seems to use an awful lot of
neurons. Prodigality is both legitimate and mandatory if there is no other
way to get the job done.
Today I want to discuss the notion of the vault of the mind. Over the
last two years I have often entertained the notion that the thinking mind
seems to be separated from the physical universe by an almost impenetrable
chasm.
Among the floating ideas in the academic literature is the idea of how
small the largest number is which the mind can know directly. I suppose
that that number is less than eight, and around three. It has a bearing on
how the mind perceives aggregates with many parts. My feeling is that that
small number of knowable elements serves us as an indispensable bridge
between the vault of the mind and the universe-at-large.
I have lately had the following line of thought about concepts. A
certain amount of perception of the physical world is necessary to get
language going in the mind. Language involves variety, so there must be
variety in that physical experience. But aspects of language operate both
away from and towards variety. When language makes a concept out of things
and names them, it is merging variety into unity. The environment ought to
contain the raw material for a variety of concepts, so that not everything
merges into unity under the scrutiny of mind.
Now, somehow in the acquisition of language, the mind becomes capable
of thinking up new concepts and thereby increasing internally the variety of
its concepts, the basic set of which had had originally to come from
external sources. I want to link this notion of conceptual fecundity with
the notion of "proximolocality," the notion that local densities arise on
the conceptual plane. We want to examine the mechanism by which conceptual
variety is increased within the mind.
I hypothesize that variety is increased by the formulation of sentences
of thought. A concept, although unitary in its comprising nature, is
nothing more than a focus of relationships. To establish or change a
concept, you must establish or change the relationships. When logical
tension builds up, a sentence flashes into being as a statement of a
possibly old or possibly new relationship.
Now, what is logical tension? Suppose that associative activity is
frequently accessing two (or more) loci at the same time on the conceptual
plane. Those loci thereby organize themselves as ingredients for a nascent
sentence. If the two points can be thought of as points emanating tension,
we can think of a diffuse percolation of associativity between the two
points. The diffusion touches points satisfying the input requirements for
selection of a verb. But once the verb is activated in a sentence of
thought, the relationship is no longer diffuse; it is now direct via the
verb.
Now, a few days back I was unwilling to say that any recorded link from
subject to direct object actually went through the verb. However, we might
use the notion of simultaneity-freezing to make sure in the future that each
verb gets associated with its one-time (same-time) subject and direct
object. If the simultaneity-freeze establishes one direct link from subject
to verb and another direct link from verb to object, then any subsequent
associative quasi-recall of the event should cause the verb to spring to
mind properly embedded in a sentence of thought somehow paralleling the
original sentence.
I say "quasi-recall" because we are not in the sensory memory channels
where true recall occurs. We are making "quasi-recall" of assimilated
information. The verb is still free and independent. However, if the verb
is accessed by a subset of the elements of the simultaneity, then all of the
elements will receive an impetus towards coming into play.
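A minimal sketch of such a simultaneity-freeze and of quasi-recall by a
subset of its elements (the stored triple is just the earlier toy sentence):

    # Simultaneity-freeze: the one-time sentence is frozen as a recorded
    # simultaneity, and a later quasi-recall by any subset of its elements
    # gives all of them an impetus towards coming into play together.
    FROZEN = []    # list of (subject, verb, direct_object) simultaneities

    def freeze(subject, verb, direct_object):
        FROZEN.append((subject, verb, direct_object))

    def quasi_recall(cue_elements):
        impetus = set()
        for triple in FROZEN:
            if set(cue_elements) & set(triple):    # a subset of the simultaneity...
                impetus.update(triple)             # ...pulls in all of its elements
        return impetus

    freeze("man", "likes", "music")
    print(sorted(quasi_recall({"music"})))   # ['likes', 'man', 'music']
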
Notice that this discussion bypasses those phonetic records of thought
laid down in the auditory memory channel. The phonetic records are unwieldy
and unnecessary. The thought occurred, and henceforth the conceptual
mappings are altered. The phonetic channel can serve to communicate the
thought or to record the exact wording, if necessary.
The conceptual plane is a vast topography containing tens of thousands
of concepts. The mind is pregnant with innumerable possible thoughts.
Suppose that the conceptual plane mirrors relationships actually existing or
potentially existing in the external world. The mental expression of all
possible relationships cannot happen all at once. As the mind cogitates,
it produces some relationships which can then serve as the building-blocks
for further relationships. Variety need not be reduced through cogitation,
it can be enhanced and embellished.
So a certain initial population of concepts has to arise along with the
use of language. But then the mind with language becomes free within its
"vault" to think transabstractively about any available element of
knowledge. When I say "free," I mean that the broad flows and forces are
free to converge in passing at any conceptual point on the network of the
vast topography. The mind could even set up for itself the task of
examining all available points one after another.
The notion of the vault of the mind has to do with the presence or
absence of certain concepts within the mind. The initial concepts are
probably the least easy to establish, because the mind is so alien to the
physical world. But, as the mind quickens within its vault, it operates
ever more freely within its own realm of ideas. Words, a form of code,
belong truly to the immaterial world of ideas rather than to the physical
world-at-large.
Once the vault of the mind has been established during infancy,
incoming communications via language tend to reinforce the abstract
uniqueness of each concept. Suppose that the mind hears a sentence
purporting to define or explain a certain concept. That sentence relates the
target concept to various other concepts. The sentence is itself an
abstraction, and it manipulates abstractions, namely, its component words.
Being a statement of the relationships among concepts, a sentence is an
abstraction of abstractions. However, each abstraction as a concept has a
dynamic interactive potential. Belief and knowledge have accreted onto each
concept.
During comprehension, the semantic import of a sentence is absorbed by
the conceptual topography of the comprehending mind. The credence granted
to a sentence during comprehension is a function of the very process of
associative absorption of the sentence. A mind is free to accept or reject
any statement. A few weeks ago I was having difficulty figuring out the
differences in comprehension of sentences believed and not believed. I was
wondering how a mind could knowingly take in a lie, comprehend it, and not
suffer damage to the conceptual apparatus of the mind. But a
transabstractive mind tends to guarantee full dissemination of information.
At any time that the comprehension of an obvious lie is operating somewhere
on the conceptual plane, at the same time the massive operation of
verisimilitude is operating elsewhere on the plane and in contravention of
the lie. The lie does not enter the conceptual plane in an isolated way; if
it did so, it might not be recognized as a lie. Whatever logic brands a
statement as untrue also maintains the status quo of belief during
assimilation of the semantic content of the statement.
5 DEC 1979
Phrases, Methods, and Categories
For some time I have been wondering how the mind might use some sort of
analog of "forward looking radar" to work the proper modifications upon the
initial elements of phrases which are going to culminate in a noun at the
end. For instance, in English, how does the mind know to put "this" or
"these" in front of some adjectives followed by a noun? Yesterday I had a
possible insight based on the German phrase, "An der schoenen, blauen
Donau." The problem has long been quite poignant to me with respect to
German, because the mind has to select the proper gender for an article well
in advance of the utterance of the noun. Then adjectival endings depend
upon whether or not the article was even used. From my own speaking of
German acquired in my teenage years, I feel that my mind unconsciously
launches into the correct form of article for an upcoming noun.
Yesterday's explanation involves the slice of conceptual topography.
Suppose that a German prepositional phrase is going to be constructed. The
main element must necessarily be the noun at the end. The transitory,
associative "valence" of that noun looms large in the topography. That is
to say, the ongoing thought processes of association are just about to push
that noun into conscious realization in a sentence of thought. However, the
noun is not being associated to in isolation. The very nature or character
of the association is about to be expressed by the preposition. The article
and the adjectives are also being associated to in conjunction with the
associative build-up of valence on the conceptual noun-node. What I have in
mind is that the syntax cable will seize upon the noun-node as a starting
point but that there will first be a backwards motion away from the noun and
in search of a definitely primal element from which the process can then
turn around and go forwards through conscious activation of the properly
modified words in the auditory memory channel. In other words, the article-
node of the syntax cable has to be addressed or dealt with before the
forward swing of the sentence-construction can proceed. There is a mixture
of inflectional information-flow along two routes. Only declensional
information goes through ultimate tags in the auditory memory channel.
Information as to the case, number, and gender of, say, an article has to be
dealt with before the point is reached where auditory ultimate tags are
involved. The gender of a German noun is somehow closely associated with
the concept-node of the noun. The singularity or plurality of number is a
pervasive concept that comes into play in the associative imbroglio of the
nascent phrase. The case required by the preposition is either fixed
conceptually or chosen conceptually. We end up with a group of conceptual
variables that are all going to interact before the conscious forward swing
ensues. The fact that some of these concepts are pervasive, while others
are unique, will bear upon the dimensionality of my solution to the problem.
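A toy sketch of that backward-then-forward swing for the Danube phrase,
assuming a one-noun lexicon and handling only the dative case with a
definite article:

    # Backward step: the noun-node supplies gender and the preposition fixes
    # the case; only then does the forward swing emit the properly inflected
    # article, adjectives, and noun.
    GENDER = {"Donau": "f"}
    DATIVE_ARTICLE = {"f": "der", "m": "dem", "n": "dem"}

    def german_pp(preposition, adjectives, noun):
        gender = GENDER[noun]                       # fetched from the concept-node
        article = DATIVE_ARTICLE[gender]            # settle the article before speaking
        inflected = [adj + "en" for adj in adjectives]   # weak endings after an article
        return " ".join([preposition, article] + inflected + [noun])

    print(german_pp("an", ["schoen", "blau"], "Donau"))
    # an der schoenen blauen Donau
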
Now I would like to discuss my current methodology. Lately I have been
working on language-comprehension. On several days I have written long
sequences of rambling thought. Such verbiage may not seem valuable in its
own right, but it has value within my method, which is to go back at many
points in the future and look for the germination of ideas that are ready to
bear fruit. The backdrop of so much verbiage is meant as a fertile panorama
to react against creatively.
If my general ideas about an abstract conceptual channel are on the
right track, then I hope gradually to become able to work from a position of
initially rough completeness. That is, if I am developing a correct
structural framework for the inner workings of the mind, I should start
finding certain felicitous results. For one thing, it should become
possible successfully to turn my attention to minor but bothersome details.
Successful accommodation of various details will bode well for the validity
of the general theoretical structure. New work should come more easily and
more quickly if a correct theoretical framework is yielding me a
comprehensive overview.
I would like eventually to join the worldwide circle of minds and to
author two sorts of publications on my project: a factual, technical,
highly descriptive and specific exposition of my results, and a more
searching, questing generalization of all the philosophy involved in the
project. You have to produce the hard results for the first kind of book to
earn the right to expatiate in the other kind of book. Under the idea of
the generalization-style of book I have in mind "The Phenomenon of Man" by
Teilhard de Chardin. Such a book is rich in general ideas and does not have
to present the specific solution to all our problems.
My project has been broadening out lately because I have begun
collecting certain categories on index cards: AI Notions, AI Questions, and
Brain Information. Gradually I want to acquire an overview in those three
areas. If I tabulate my guiding notions, I may be able to apply them
better. At present they include such notions as dimensionality,
disinertiativity, prodigality, and transabstractivity. I might just
arbitrarily list the accumulated categories at certain times in this theory
journal. By tabulating the questions, I should enhance my control of the
project in many ways, such as in furthering the internal cohesiveness of the
theoretical structure. By accumulating cards with units of brain
information, I hope to conceptualize a mapping-out of the brain so that I
can look for the physical manifestations of the things about which I am
theorizing.