IGS Discussion Forums: Learning GS Topics: Induction, Deduction, etc.
Author: Ralph E. Kenyon, Jr. (diogenes) Sunday, October 14, 2007 - 12:25 pm

See Popper's Philosophy of Science.

Deduction is strictly truth preserving.
Induction (not the mathematical kind) generalizes from examples; it is not truth preserving because it "predicts" about "events or examples not yet seen". Once any such not-yet-seen event comes to pass and contradicts the prediction, its falsity transfers back to the universally quantified statement strictly deductively, using Modus Tollens. In this case "falseness" is "preserved" (backwards) from the conclusion back to the hypothesis.
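A minimal sketch in Python of that asymmetry (the swan predicate and the observations are hypothetical, purely for illustration):

```python
# Induction generalizes from seen examples; deduction (modus tollens) carries
# falsity backwards from a single failed prediction to the universal statement.

def generalization(swan_color):
    """The universally quantified claim: 'all swans are white'."""
    return swan_color == "white"

observed_swans = ["white", "white", "white"]   # examples seen so far (hypothetical)

# Induction: every seen example fits, yet this does not prove the universal claim.
print(all(generalization(c) for c in observed_swans))   # True, but only for seen cases

# A not-yet-seen event comes to pass:
new_swan = "black"

# Modus tollens: (all swans are white) -> (this swan is white); this swan is not
# white; therefore the universally quantified statement is false.
if not generalization(new_swan):
    print("Generalization falsified by a single counterexample.")
```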

Author: Ralph E. Kenyon, Jr. (diogenes) Sunday, October 14, 2007 - 08:34 pm

The project to reduce mathematics to logic is called "Logicism". The project was essentially derailed by Gödel's incompleteness theorems (first proved in 1931). (There are a few hold-outs who still believe it's possible, however.)

A search of Korzybski's collected works CD produced no hits for any of "Godel", "Gödel", or even "Goedel", indicating Korzybski was possibly not familiar enough to comment. It's reasonable to conclude that the news had not even reached Korzybski by the time of the publication of Science and Sanity. If he heard the name of the theorem, Korzybski just might have taken it to be corroboration of his use of "etcetera"; he might not even have bothered to look up the details, as by then he was already busy proselytizing his system of general semantics.

Author: Ralph E. Kenyon, Jr. (diogenes) Tuesday, October 16, 2007 - 09:38 am

Vilmart wrote "I don't see why Goedel's results mean that we cannot identify math and logic as Russells says. Could you tell me more please ??

No, for two reasons. (1) This is not a mathematics or philosophy of mathematics forum and (2) you will have to take several graduate level courses in both mathematics and the philosophy of mathematics (as I did).

My reference to Korzybski's "etcetera" indicated his focus on incompleteness due to one of the maxims of general semantics, "you can't say all about anything." Korzybski was continually applying "the map covers not all the territory", so, if he heard about Gödel's "incompleteness" theorems, he just might have surmised, "I already know that (anything one can say is not all of it)," and decided not to investigate the details. (Mind you, this is mere speculation. The fact is that Korzybski makes no reference to Gödel.)

Just to be clear, my comment about the ESGS's website was about the structure of that specific page, not the entire website, and that was in a different thread.

Let's try not to cross-link threads by discussing the content of one thread in another.

Author: Ralph E. Kenyon, Jr. (diogenes) Tuesday, October 16, 2007 - 06:30 pm

The importance of Gödel's theorems and the theory of types to general semantics would be a topic for this forum. But discussing whether or not the logicist project is "correct" belongs not here, but in a mathematics or philosophy of mathematics forum. Vilmart, you seem well on your way to being able to judge for yourself whether you wish to join the ranks of the Logicists or side with their opposition. Do continue.

Regarding the importance of logic and mathematics to general semantics, Korzybski noted that most people without experience or training in logic or mathematics routinely use a number of common reasoning fallacies, for example, affirming the consequent: [P->Q, Q, therefore P]. This and many other fallacies and forms of incorrect reasoning are in common usage. Korzybski noted that even scientists who know better in their laboratory work often revert to using common fallacies in their non-laboratory life. This is the basis of his distinction between "sane" and "unsane" as distinct from "insane". One of the main purposes of general semantics is to promote "sane" reasoning in everyday life - and that means learning and systematically using the proper methods of inference from logic and mathematics in everyday life.
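A small Python sketch of why that form fails while the valid forms succeed, by brute-force checking every truth assignment (the helper names here are my own, purely illustrative):

```python
from itertools import product

def implies(p, q):
    """Material implication: P -> Q."""
    return (not p) or q

def valid(premises, conclusion):
    """An argument form is valid iff, in every truth assignment where all
    premises hold, the conclusion also holds."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens: P->Q, P, therefore Q  (valid)
print(valid([implies, lambda p, q: p], lambda p, q: q))          # True

# Modus tollens: P->Q, not Q, therefore not P  (valid)
print(valid([implies, lambda p, q: not q], lambda p, q: not p))  # True

# Affirming the consequent: P->Q, Q, therefore P  (a fallacy)
print(valid([implies, lambda p, q: q], lambda p, q: p))          # False
```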

I do not say that mathematics involves only logic.

What makes mathematics "extensional" is the fact that it uses almost exclusively concepts by postulation. Mathematics uses strictly intensional definitions, but the primary terms in mathematics are concepts by postulation or are explicitly undefined.

Examples of undefined terms are 'point', 'line', and 'plane'. Although they are "defined" in terms of each other, nothing else about them is allowed save what can be explicitly proven using valid rules of inference. Only what is explicitly written - nothing less, nothing more - may be inferred about them. No "inductive" (not mathematical induction) inference is allowed. This character is what makes mathematics "extensional" - we can always refer back to the original explicitly written words as to what is or is not allowed. But what may be concluded from such definitions is derived strictly intensionally using rules of inference.

If this seems somewhat confusing, it is like an oxymoron, for example "rapid stop" or "terribly good". Mathematics is intensionally extensional. (Nice pun, eh?)

If you apply the rules of inference of logic and the methods of mathematics to your everyday life to ensure that you make no mistake in the process of deductive inference, you will be reasoning sanely by Korzybski's definition. But that is not enough. You must also test any prediction so derived. If any one such prediction fails, then the formulations you started with are either inconsistent or they are a model that failed.

The principal effect of Gödel's theorems for general semantics is how they bear on hierarchies of axioms. Gödel proved that there will be an undecidable statement in any strong enough system. Because the statement is undecidable, it may be assumed to be true or it may be assumed to be false. In either case it can be added to the previous system as a new axiom and thus create a next-level system. Gödel's result applies to that system as well, so it will also have an undecidable statement, and that can be added as an axiom to form the next level. The process can continue indefinitely. This metaphorically parallels Russell's theory of types. Each statement about a statement is at a level above the subject statement.

If the theory of types is rigorously applied then there can be NO self-reference - a point not understood by many so-called general semanticists. (This negates the so-called general semantics principle that a map is self-reflexive.) Similarly no statement can refer to itself in logic without introducing inconsistency.

Logic and mathematics use deductive reasoning (which includes mathematical induction) to prove statements (theorems) consistent with starting axioms, definitions, and previously proved theorems. I refer to this process as "intensional" because it defines concepts by postulation in terms of other, previously defined and validated words.

An "intensional orientation" might start with untested hypotheses, derive conculsions using such valid logic, but then stopping there and (a) not testing the results or (b) testing the results but denying contradictions AND refusing to revise the starting untested hypothesis.

So my view is that we use sane reasoning and the intensional method of deductive reasoning to make predictions from our initial assumptions, test the predictions by usage, and then, by using prediction failures, update our original assumptions, and we continue this process repeatedly. By Gödel, anything new we want to guess at, we can add to the system as either true or false, but we must be prepared for it to produce a prediction failure. And if something is completely untestable, we can add it to our system as either true or false.

Consider belief in ghosts. So far, in spite of some so-called paranormal researchers, we have no definitive testing method to prove the existence of ghosts. That means that, per Gödel, as an "undecidable" statement, we are free to either believe in or not believe in ghosts - until such time as one is conclusively proven to exist. As this applies to most religious systems, we should all realize that Gödel's approach allows us to build many systems of belief, all not disconfirmable. For an analogy consider Euclid's fifth postulate. By stating it in various ways, we get different geometries. While they are inconsistent with each other, they are each internally consistent. The same with religions. But "unsane" reasoning fails to recognize the independence of the various choices for religion, and each seeks the dissolution of all others [except for a few enlightened ones].

Author: Ralph E. Kenyon, Jr. (diogenes) Wednesday, October 17, 2007 - 08:18 am

Vilmart wrote:

"Just one remark: In logic itself, there is no necessary need to break down our initial hypothesis as you advise in this quote. Gödel unveiled that there is no possibility of testing, due to undecidability."

That does not apply, because he is not talking about using the various choices of an axiom as a model of something else; he is talking about testing to see if one or the other is "true".

"There can be multiple systems that are consistent inwardly. And each should have an equal value as regards formal logic. For example, as you said, in Geometry, we can have the one of Euclid, the one of Riemann, or the one of Bolyai and Lobachevsky, depending on the 5th axiom that we take. All of them revealed themselves crucial for the development of both math and applied math."

"Concerning reality, and natural sciences, there is only one model that can fit it."

This assertion is not true; moreover it cannot be tested, so it is not even scientific. We, in fact, have multiple models in physics that are consistent with general relativity, but, so far, we have not yet found a way to test for the differences.

"Therefore, there is a criterion to attribute some value to our models. Indeed, for any sane person, either there are ghosts, or there are none. However, both alternatives don't have the same value."

"Value" is generally not a "property" of an object; it expresses a relation between a person and an object; we "value" something. "She is beautiful." expresses our choice, which we value, projected and attributed to the subject - what Johnson calls to-me-ness.

"Only the most valid model"

"Valid" does not admit of degrees. Models of our world are either disconfirmed or not-yet-disconfirmed. You and I may have criteria for judging models other than that primary criterion, but all models that are not yet disconfirmed are equal under that criterion.

"merits our consideration, within the bounds of the territory that the models may represent."

For me, little useful can be said about religions. I support everyone's right to believe as they choose, but I personally have no use for any religion which presumes to "know the Truth", that is, that says any other religion is "wrong". (Freedom of (and from) Religion)

With regard to the press, we must be "extremely" conscious of abstracting, not just our own, but the entire process from the event level through the abstractions of witnesses, through the abstractions of, in sequence, the reporter, the copy editor, the supervisor, the editor, the publisher, the sponsor, the director, the producer, and all of their presumed ideas, interests, and connections, as well as the electronics or the mechanics of the process, and finally our own prejudices and abstraction process.

Author: Ralph E. Kenyon, Jr. (diogenes) Wednesday, October 17, 2007 - 10:41 am

Vilmart wrote, "Let us take the example of Newtownian mechanics, and of General Relativity. Newtonian mechanics is a particular case of GR and is like included in GR. Thereafter, GR is more valid than the Newtonian theory becasues the former has predicts less well than the latter. But Newtonian mechanics is not for that disconfirmed, it is just said of less value of prediction power, in terms of experiments. We don't disconfirm the theory because it is still very useful and simple at our daily life level.

The expansion of the relativistic formula for kinetic energy into an infinite series produces the Newtonian equation for kinetic energy as the first term in the series. This shows that Newtonian kinetic energy is a "first approximation". As such it is quite useful for low-velocity, short-distance calculations. But as a theory, it makes FALSE predictions. It is therefore DISCONFIRMED. There is no gradation of "validity", as the rules of inference, per Popper, are not a matter of degree. We know that the Newtonian model is not a "correct" model of the physical world for this reason. General relativity has competing models among which we still have not empirically been able to choose. (more)
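For reference, a sketch of that expansion, using the standard relativistic kinetic-energy formula:

\[
E_k \;=\; (\gamma - 1)\,m c^2
\;=\; m c^2\left[\left(1 - \frac{v^2}{c^2}\right)^{-1/2} - 1\right]
\;=\; \frac{1}{2} m v^2 \;+\; \frac{3}{8}\,\frac{m v^4}{c^2} \;+\; \cdots
\]

The first term is the Newtonian kinetic energy; the higher-order terms become negligible when v is small compared to c.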

There is not ONE single model of "reality"; there are many that are scientifically undisconfirmed. Moreover, in principle, it is always possible to have multiple maps of any territory. The model, like the map, is not the territory.

Author: Ralph E. Kenyon, Jr. (diogenes) Thursday, October 18, 2007 - 09:07 am

Vilmart wrote "As GS practitioners, we should see everything in levels of grays, in nuances. No, not everything. In logic and math the "valididy" of deduction depends on binary logic at its core. Even theorems in mult-valued logic and probability math (which Koryzbki calls "infinity-valued") depend on binary truth values. Internal consistency at the level of theory is a true or false value choice. Whether a theory is disconfirmed (Popper's "falsified") or not yet disconfirmed is a binary distinction.
That does not prevent a disconfirmed theory from having varying degrees of usefulness. Even though Newton's theories have been strictly disconfirmed, they are still a map, and a map that has quite a bit of similarity of structure to the relativistic map, and what's more, a useful engineering map of the differences between these maps is well described. Newton's theory, as a theory of reality, is "disconfirmed" - falsified exactly so - just plain FALSE. BUT as a map of our physical environment, it is one that can be described as having low resolving power. It works just fine to get you from California to Williamstown, but it does not work to get you to my house. It works fine to get you from Paris to Rome, but it does not work well to get you from the Earth to the Moon and back.

Just because a theory is "disconfirmed", "falsified", etc. does not mean that it has NO structure similar to that of a not-yet-disconfirmed map.

Consequently we need BOTH two-valued reasoning capabilities that use only valid rules of inference and avoid fallacies - "sane" reasoning - together with "infinity-valued" "shades of gray" in the application of theories to the continually changing what is going on. Our maps remain stable, relatively invariant, for the period that they are not-disconfirmed, but once disconfirmed, they may still be useful except in the area where the disconfirmation was found.

We cannot use Newtonian mechanics to accurately predict the location of Mercury, but we can use it to predict the local train's timetable (not the actual train's arrival).

So logic is valid or it is not.
Maps are useful to various degrees.
Theories are disconfirmed or not-yet-disconfirmed.
Theories that are disconfirmed may still be as useful as they were before they were disconfirmed - limited to the areas where they were used successfully prior to disconfirmation.

Due, perhaps, to my mathematics, logic, and philosophy training, in the context of reasoning, including "non-Aristotelian" reasoning, "valid" is for me a strictly two-valued term, because it describes whether or not a rule of inference preserves truth. Arguments are valid (or not). The term does not apply to theories. Theories are disconfirmed or not-yet-disconfirmed. Theories are also maps, and as maps, they may have varying degrees of usefulness (to the user)(for a purpose)(in a situation).

General semanticists, speaking loosely, would say that maps are used for navigation, and their degree of usefulness depends on the similarity of their structure to "the structure of the territory" (my scare quotes).

Speaking more precisely, we have to say that their degree of usefulness depends inversely on the frequency with which the map fails to predict correctly - that is, usefulness depends on how often we do experience what the map predicts. (Remember, we do not have any direct access to any putative structure in the territory.)

Author: Ralph E. Kenyon, Jr. (diogenes) Thursday, October 18, 2007 - 05:29 pm

"Our physical environment" refers to the local putative what is going on, and indicates local "territory". Theory of "reality" is the verbal level structure.

A "theory" is more formal than a "map".
If we list the axioms and conditional theory statements, we have Newton's "theory".

If we describe it loosely, in less formal terms, using metaphors, etc., then we have a more general "map".

Also, a "theory of reality" applies to the entire universe, but I specified "our physical environment", a small subset of the above. I would not presume Newton's "theory" to be a "map" applicable to the universe when the theory has been disconfirmed. We currently hold relativity to be a conditional "map" of the functioning of the entire universe, a status that Newton's theory used to hold, but no longer does.

But loosely speaking, the difference between "theory of reality" and "map of our physical environment" can be seen as small enough for this question.

Author: Ralph E. Kenyon, Jr. (diogenes) Monday, October 22, 2007 - 11:26 pm

Thomas, the problem is that Newton's theory has a lot of correlation with relativity with respect to prediction, especially at "human scale" mass, time, and distances, so we cannot say that a "disconfirmed" theory has "no statistical correlation whatsoever." Where statistics are concerned we get to choose a confidence level, five percent being a pretty common one. But that is a lot of variance for some things.
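As a minimal illustration of that agreement at human scale (the mass and speeds below are made up for the example), compare the Newtonian and relativistic kinetic-energy predictions:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def newtonian_ke(m, v):
    """Newtonian kinetic energy: (1/2) m v^2."""
    return 0.5 * m * v ** 2

def relativistic_ke(m, v):
    """Relativistic kinetic energy: (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

# A 1000 kg mass at an everyday speed versus a tenth of light speed:
for v in (30.0, 0.1 * C):
    n, r = newtonian_ke(1000.0, v), relativistic_ke(1000.0, v)
    print(f"v = {v:9.3e} m/s  Newton: {n:.6e} J  relativistic: {r:.6e} J  "
          f"relative difference: {abs(r - n) / r:.2e}")

# At everyday speeds the two predictions agree to many significant figures;
# at a substantial fraction of c they diverge - which is where Newton fails.
```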

I would not connect "truth" with statistics in that way.

The continuum has "indistinguishable from chance" (the null hypothesis) at one end and "distinguishable from chance" at the other end.

Using various tests, we ask what the chances are that we could get the observed result given that the hypothesis to be tested is false, that is, that we could get the observed result due to chance alone. Normally we want that probability to be less than 5 percent. That still leaves a (less than) 5 percent chance that the results could be due to chance. We "accept" the hypothesis with "95% confidence".
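A minimal sketch of that logic in Python, with a made-up coin-bias example (the data and the one-sided binomial test are purely illustrative):

```python
from math import comb

def binomial_p_value(successes, trials, p_null=0.5):
    """One-sided p-value: the probability of seeing at least this many successes
    if only chance (the null hypothesis) were operating."""
    return sum(comb(trials, k) * p_null ** k * (1 - p_null) ** (trials - k)
               for k in range(successes, trials + 1))

# Hypothetical data: 60 heads in 80 flips of a coin we suspect is biased toward heads.
p = binomial_p_value(60, 80)
print(f"p-value = {p:.6f}")
if p < 0.05:
    print("Less than a 5% chance of this result by chance alone; "
          "'accept' the bias hypothesis with 95% confidence.")
else:
    print("Result is consistent with chance alone; do not reject the null hypothesis.")
```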

It occurred to me that statistical testing is a process that mixes the metaphysical and the epistemological.

We choose a hypothesis as a model. From a metaphysical perspective this model would be "semantically true" or "semantically false". As Popper would say, we can find out about "falseness"; all it takes is one failed prediction.

However, when we are dealing with measurements there is uncertainty. And, when we are dealing with populations, we are measuring events that vary; further, we are abstracting from that variable data. Our hypothesis deals with the abstractions, not the individual events.

So we have to compare our chosen test statistic with various kinds of variable distributions.

We cannot "know" if the hypothesis is "true", but we can pretend that it is false and compute what to expect if it were false. Using our tests, we can calculate the proability of getting such a result. If the result is less than 5 percent chance of getting the result due to chance, then we get to "accept" the hypothesis (conditionally).

"Accepting" or "rejecting" is binary, like true or false, but it is not the same thing. We have an infinity of possible probability values. But I would not call this "determinism", at least not at the level of the individual events that are recorded for statistical analysis.
