IGS Discussion Forums: Learning GS Topics: Consciousness of abstracting - lower level abstraction instances
Author: Ralph E. Kenyon, Jr. (diogenes) Saturday, October 6, 2007 - 07:28 am

Ralph wrote this post in the In the News thread, and Thomas responded with:

I think this is somewhat related to my comments in the SD thread. It is science that gives us the event level - without science, i.e., for animals, there "is" no event. In S&S Korzybski describes the term 'consciousness' as being incomplete by itself; it only has meaning in the context of "consciousness of what?" So unless you can explain, say, the theory of light waves to a chimp, and how they are detected and integrated into our visual cortex, etc., then he cannot have COA in the GS sense. This will be true no matter what they find in the brain stem.

Author: Ralph E. Kenyon, Jr. (diogenes) Saturday, October 6, 2007 - 07:30 am

The theory of consciousness of abstracting at the level of scientific theories is itself a very high level of abstraction. We need to break it down into multiple levels of complexity, and I think that the simplest examples of consciousness of abstracting start with a vague realization that one has questions about what one sees, hears, etc. This, it seems to me, is the simplest form of awareness that our map is not what it is a map of, in that we are aware that we do not know. As soon as we "identify" what we see, we lose that awareness that what we are seeing is not what the seeing is of. Such moments are brief, but they represent specific examples of awareness that the map is not the territory, not in the more abstract general sense, but in the more extensional immediate case. Developed, continuous awareness of the principle is a higher level of abstraction. But the lower level of abstraction consists of incidents which can be abstracted to the more general level. Many examples take the form of awareness that "I (self) do not recognize (map element) (consciousness of awareness) this thing (territory)" or "This (territory) is not what I (self-awareness) think (consciousness of abstraction) it is (map)".

The general theoretical notion of consciousness of abstracting is a complex level of awareness that does not arrive in one fell swoop like satori; it requires continuous training and forcibly applying a complex theory. This theory, however, has "simplest case" examples, and I see such examples in the simple process of confusion while learning. This, it seems to me, is the most rudimentary form of consciousness of abstracting, one component of which is awareness that what we see is not what "it is". If we are "seeing" and we do not yet "know" what we are seeing, and we are aware that we don't know what we are seeing, then we are aware that our map is not our territory, even though we do not have the more abstract generalization of this seeing as a specific instance of seeing and of this (unidentified) "territory" as a specific instance of the generalized territory.

Nowhere does the word "fruit" appear in "apple", "orange", "banana", etc., but each is a low level abstraction that gets abstracted to the high level abstraction "fruit".

Similarly, an immediate instance of awareness that one does not know what something is is an example in which the individual has consciousness of his processing of information, and of the fact that there is a difference between what one experiences (the map) and what that experience is of (the territory). This individual has a short term, immediate experience that includes awareness of the processing failure - and this is the basic principle "the map is not the territory".

Author: Ralph E. Kenyon, Jr. (diogenes) Saturday, October 6, 2007 - 10:41 am

I do not see consciousness of abstracting as a clear and distinct binary distinction that falls into a has-it-or-does-not classification. I see it as a range of levels, and I am looking toward the lower levels. What are the minimal examples that can grow into the more fully developed abilities? What can we describe that presents a brief glimpse that can be grabbed, held, and built into more awareness?

In The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes raises the question of the evolution of consciousness. I'm raising the question of the evolution of consciousness of abstracting. How can we describe its immediate precursors and first beginning processes, processes that we later abstract more fully into "full blown consciousness of abstracting" with us at nearly every moment, as Milton would have us do? If we can model the development, and "identify" some of the simplest case examples, then we just may have a model for teaching.

In this vein, I see "perplexity", "uncertainty", etc., together with some rudimentary self-awareness as the beginnings of consciousness of abstracting.

But as in many other cases of abstracting, the higher level classification covers many different examples or instances, many of which do not get described with the language of the higher level model. The connection between the lower level example or instance and the abstract model must be made explicit. That's what I am getting at.

Author: Ralph E. Kenyon, Jr. (diogenes) Saturday, October 13, 2007 - 08:03 am

I would not write off computers with the phrase "will always"; I would use "as yet".

Way back in 1984 a computer program was characterized as exhibiting rudimentary "self-consciousness".

A lot has happened in 50 years.

We will get computers/machines/devices/etc. that will pass the Turing test.

Author: Ralph E. Kenyon, Jr. (diogenes) Saturday, October 13, 2007 - 09:01 am

Speaking of "associational paths", the majority of parallel processing "machines" were actually "simulated" using sequential machines.

The "structure" was "implemented" in software rather than expensive and long time manufacturing process of actual physical manufacture.

"Associational paths" are routinely used in artificial intelligence in the form of asyncronously activated rule-objects that perform actions.

"Physical structure" is not critical with a general purpose machine (for internal processing), because "associational paths" can be "structurally implemented" in "software".

Consider an interrupt routine that responds to a switch being turned on. The routine can be directed to any piece of software to process the event, so an arbitrary "associational path" can be internally connected to such an input device. Similarly, "software interrupts" can also be connected to any kind of "associational path". The computers we currently use have both kinds of interrupts, as well as other associational paths, in regular use. They just don't have "enough" of them, and they don't have enough computing power.
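A toy sketch of that point (Python, everything here is illustrative rather than any real interrupt API): the "association" between an input event and the routine that handles it is just an entry in a table, so it can be re-pointed at any piece of software at all.

    # Hypothetical sketch: an interrupt "vector table" held as ordinary data.
    handlers = {}

    def attach(event, routine):
        handlers[event] = routine      # wire the "associational path" in software

    def raise_interrupt(event, *args):
        # Hardware or software can raise the event; whatever routine is
        # currently attached gets control.
        routine = handlers.get(event)
        if routine is not None:
            routine(*args)

    def switch_on_handler(switch_id):
        print(f"switch {switch_id} turned on, starting pump")

    attach("SWITCH_ON", switch_on_handler)        # stands in for a hardware interrupt
    attach("TIMER_TICK", lambda: print("tick"))   # stands in for a software interrupt

    raise_interrupt("SWITCH_ON", 7)
    raise_interrupt("TIMER_TICK")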

It's interesting to note that the program Eliza fooled some people for a while.