Is quantum mechanics self-interpreting?

T.R.:

I would like to start with a fundamental guiding principle of AI. This is followed by a relativization of the meaning of consciousness, which must not be taken as scientifically well-founded; it is educated speculation! There is no short explanation or formula for intelligence, and therefore none for artificial intelligence either. Nor do I want to give the impression that the following problems are discussed by some sectarian religious community. All of these problems lie at the limits of possible human progress in experience.

"There is something strange about the description of consciousness: whatever a person tries to express, he just doesn't seem to be able to say it clearly. It is not as if we are confused or ignorant. Rather, it seems to us that we know exactly what is happening but cannot describe it properly. How can something seem so close and yet always remain beyond our reach?" (Marvin Minsky, Mentopolis)

The Principle of "Organizational Invariance"

This principle says that every physical system that has the same abstract organization, regardless of the material it is made of, also produces the same conscious experiences.
The notable implication is that sufficiently complex machines can, in principle, be conscious.
We think this working principle is very interesting, but we also have some critical comments.
We pursue the thesis that life must not be confused with the products of life!

This also means that technology cannot claim biological equivalence! It is absolutely correct that the products of the mind, such as books, music, theories and "language", can all act back on the mind and change it. However, these things stand in an interrelation, not in a relation of identity. Language is not intelligence but one of its possible traits.
Under no circumstances will a machine generate biological behaviors and sensations. Biological behavior is shaped by survival strategies and by dealing with the diversity of natural competitors (including social competitors).
Machines, as created, are so far passive and will-less, and at any degree of complexity they can accordingly only produce, for example, machine consciousness. From the beginning they have lacked what distinguishes biological systems: the stored history of their own active engagement with the environment. This means that they differ in their imprinted behavior. Much more important, however, is the objection that history does not repeat itself. This applies, as a hypothetical universal proposition, without restriction to the development of physical and biological systems. Every starting condition of a developmental line leads, with slightly different starting values, to very different developmental paths. This empirical conjecture is supported by quantum mechanics; the indeterminacy of developments is essential. The development of technology therefore cannot reach human consciousness. It can pass it by, whatever that means. But it will arrive at intermediate results that differ significantly from human behavior.
The mathematician, philosopher and cognitive scientist David J. Chalmers, who worked at the philosophy department of the University of California, Santa Cruz, some time ago presented a thought experiment, not a new one, which in his opinion provides clear evidence that consciousness can also be generated in machines.

"Dancing Qualia in a Synthetic Brain"
The advantage of multi-valued logic

Chalmers asks whether we would begin to see something different if we successively replaced areas of the visual center at the occiput (sulcus calcarinus) with computer chips. The chips are to be structured in the same way and function just like their natural analogues. In the next step, the states would be switched back and forth between chips and real neurons via an interface. If we answer the question in the affirmative, then we would have to see different things depending on whether the chips or the neurons of our brain are "switched on". (There would be a visual quality structure.)
Chalmers wants to answer the question in the negative and therefore derives a contradiction from the affirmative answer. If the contradiction is conclusive, the negation holds. (Reductio ad absurdum; proof in Spektrum der Wissenschaft 2/96.)
The proof is not really conclusive. Moreover, the method of two-valued logic used here can only be applied to mathematical facts that do not involve intermediate values. In this example in particular, however, something is said not only about intermediate values but also about possible future values.
The thought experiment allows only two real interpretations. Either we see something else or we do not; that is the first interpretation. But we could also no longer see anything at all, which is not tautologous to "seeing something else" and which gives us the second interpretation. This second interpretation becomes possible because the preconditions are incorrect. Chalmers actually wants to show from the outset that we could technically copy human intelligence. We definitely do not think so.

Unnoticed, we have just introduced a small deviation from two-valued logic.
The originator of polyvalent logic, Łukasiewicz, wrote about it in 1920: "Among all multi-valued systems, only two can claim philosophical significance: the three-valued and the infinitely-valued system. For if the values different from '0' and '1' are interpreted as 'the possible', then, for good reasons, only two cases can be distinguished:
either one assumes that the possible has no differences of degree, and then one obtains the three-valued system; or one assumes the opposite, and then it is most natural, just as in probability theory, to assume that there are infinitely many differences of degree of the possible, which leads to the infinitely-valued propositional calculus. I believe that this latter system deserves preference over all others."
Multi-valued logic is a first important building block for AI.
It is very complex and has opened up many interesting paths in logic. A special case is the well-known fuzzy logic. We doubt, however, that its level of public awareness correlates with factual knowledge about it. The advantage of multi-valued logic is that one is not forced to express complicated facts and relational structures in a simplified manner. One can adapt the logic to the situation!
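To make the three-valued system concrete, here is a minimal sketch of Łukasiewicz's connectives (negation 1 − x, conjunction as minimum, disjunction as maximum, implication min(1, 1 − x + y)) over the values 0, ½ and 1; the function names are our own.

```python
# Łukasiewicz three-valued logic: 0 (false), 0.5 (the possible), 1 (true).
F, P, T = 0.0, 0.5, 1.0

def neg(x):      # negation: 1 - x
    return 1 - x

def conj(x, y):  # conjunction: minimum of the truth values
    return min(x, y)

def disj(x, y):  # disjunction: maximum of the truth values
    return max(x, y)

def impl(x, y):  # Łukasiewicz implication: min(1, 1 - x + y)
    return min(1.0, 1 - x + y)

# Unlike in two-valued logic, the law of the excluded middle fails
# for "the possible":
print(disj(P, neg(P)))  # 0.5, not 1.0
# An implication from the possible to the possible is still fully true:
print(impl(P, P))       # 1.0
```

The same definitions generalize directly to the infinitely-valued system: one simply admits every real number between 0 and 1 as a truth value.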

In fuzzy logic, an element under consideration can belong to a set entirely or only to a certain degree. In contrast to Cantor's set theory, sets are introduced as not sharply delimited, which corresponds far more closely to reality. The degree of membership can be understood as a quantitative, but also a qualitative, measure of the extent to which a considered element fulfils the properties of a fuzzy set.
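A membership function of this kind can be written down in a few lines. The fuzzy set "tall", graded over body height, is our own illustrative example; the boundary values are arbitrary assumptions, not norms.

```python
def membership_tall(height_cm):
    """Degree of membership in the fuzzy set "tall" (0.0 to 1.0).
    Below 160 cm: fully outside; above 190 cm: fully inside;
    linear transition in between (boundaries are illustrative)."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

# Unlike a crisp Cantor set, an element can belong "to a degree":
print(membership_tall(175))  # 0.5
```

The sharp 0-or-1 boundary of classical set theory appears here as the special case in which the transition zone shrinks to nothing.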
Chalmers clearly recognizes, however, that human consciousness cannot be adequately described with conventional physical or biological laws. He is probably right about that.
That is why the kind of thinking he supports calls for extending these laws to the realm of the psyche and the human world of experience, which could lead to a more comprehensive theory of the world: ultimately to a theory that must also include man with his entire self.
This requirement arises from the claim of, and belief in, the existence of a single complete theory, which already fascinated Einstein and which would always provide consistent explanations for all future and past phenomena, known and unknown to us, and which thus determines the history of being.
This theory would therefore also explain the way in which we came to it or discovered it. The physicist S. Hawking, who ultimately believed in such a theory but came to doubt its feasibility more and more, points this out. Ultimately, such a theory should enable predictions that lead us to truths which must remain inexplicable to us, since they concern facts that lie outside our everyday experience.
The search for a complete theory seems more like a motivating slogan under which science is driven not to tire itself in its search for truths.
We see no fundamentally insoluble problems in machines having conscious experiences, but only in the general use of the term consciousness, because in this context it essentially means human consciousness, and we deny that!
We do not deny this because we are so keen to preserve this area of human life that has not yet been conquered by successful theories, but above all in order to point out a fatal misinterpretation of the goals and possibilities of modern computer science. Of course, a theory that explains our consciousness would change nothing for us, because such a theory would only have explanatory competence. It could not permit the delegation of conscious experiences.

Theory is not a substitute for reality

But in terms of Popper's World 3, such a theory can indirectly have strong effects on our consciousness.
These effects, however, are then a novelty in consciousness. Every theory that we want to understand becomes a novelty in consciousness and therefore has no explanatory force for consciousness, because it has to pass through consciousness.
We believe that the complexity of machines can certainly be increased to a very great extent. We are thinking, among other things, of the possibilities of nanotechnology, which will probably achieve this leap. But it is not the desirable goal of AI to bring about a precisely human consciousness; only the birth of a new child and its further development could create that. Machines, on the other hand, only produce machine consciousness. If we may understand the principle of organizational invariance in this way, we agree with it.
An essential question that repeatedly gives rise to heated debates is the structure of consciousness, its discontinuity not only in sleep but also in the "waking state", and the deceptive impression of continuity. Consciousness is understood by many to be the real goal of AI. This is not the case, but it does not contribute to the transparency of this topic if one dodges the issue. After all, human intelligence and consciousness are repeatedly described as synonymous. Since AI learns from and abstracts from human information processing, confusion is inevitable. However, AI develops its own mechanisms that have no functional correlate in the human neuron complex. A neural network, e.g. the Hopfield network, occurs nowhere in nature! It is a model borrowed from nature. That is also the real reason why we believe that there will be no technical copy of human intelligence. The development of such networks is therefore also governed by very practical considerations: how to improve the ability to learn, increase flexibility, and so on. Nowhere is it about consciousness! Perhaps something should therefore be said about it with all the necessary brevity.

What consciousness is not

Anyone who thinks that consciousness must be located at the level of protoplasm naturally raises the question of which criteria allow us to speak of consciousness at all. We make some claims that are only partially supported here.

  • The origin of the ability to associate or learn in evolution has nothing in common with the origin of consciousness. The Darwinian continuity hypothesis for the evolution of the mind is more than questionable and belongs to the realm of social myth-making.
  • We reject Huxley's automaton theory and Spencer's helpless-spectator theory. Why is consciousness more intense when we do not act, and why does it disappear during monotonous actions, if it has nothing to do with the acting "automaton"?
  • The theory of consciousness as an emergent construct may be true. But it has no real practical value other than simply to assert it. Which neural network was necessary for it to arise?
  • We continue to reject behaviorism. Imagine someone trying to "become conscious" that there is no consciousness.
  • Ultimately, the pure search in the anatomical foundations (e.g. Formatio reticularis or the like) must be rejected as an isolated route. Knowledge about the nervous system can only be achieved if we first find what we are looking for in behavior. The mere description of the enormous number of nerve cells and their connections to one another only leads to masterful empirical quick-wittedness.

From all sides AI is reproached with the claim that it will never produce consciousness.
We assume the opposite, namely that after a certain level of abstract organization it will only be an insignificant task to generate consciousness in machines.
In order to explain this important attitude, we are forced to go into a little more detail about what most people consider to be their greatest achievement: Consciousness!
Many achievements of the neural system are assumed to be typical features of consciousness. It must therefore first be shown what consciousness is not!
When we ask about consciousness, we become conscious of consciousness. The opinion that precisely this, being conscious of consciousness, is the actual consciousness, is a fallacy!
It is just such a mistake to be convinced of the continuity of our consciousness. There are still countless philosophers today who tacitly accept this assumed continuity as the starting point of philosophy, "the home of all immovable certainties": "Cogito ergo sum."
The use of language itself invites many misunderstandings. When we say that someone "lost consciousness" after a blow to the head, we assume the following picture:
the patient shows no signs of consciousness and no longer responds. But these are clearly two different facts. There are clinical somnambulistic states in which patients may have lost consciousness but can still react well. We are constantly engaged in modes of reaction for which there is no representation in consciousness: balance regulation, relieving postures, accommodation of the eyes, vegetative reactions, the avoidance of qualities by subsuming information (which creates a relatively fluctuating worldview), and much more.

  • Consciousness is discontinuous and appears only as a continuum.

It makes up a much smaller part of our mental life than we are aware of, since we are not conscious of what we are not conscious of!
Just as the holes in the perception of space (caused by the blind spot on the retina = entry point of the optic nerve) are "cemented" by the brain without leaving a gap, the consciousness closes its time holes and gives itself the deceptive appearance of a continuum.

  • Consciousness is not a copy of our experience

Does the door of your room open to the right or to the left? How many teeth do you see when you brush your teeth?
What, in detail, is on the wall behind you, without turning around?
Which is your second-longest finger? What is on top at the traffic light: red or green? If you smoke: which brands are in the machine, from left to right?

With such questions we always experience how little there is actually in conscious memory if one of these questions has not been consciously contemplated beforehand.
But if you suddenly had an extra tooth overnight, or if one of your fingers developed an abnormal length, or if a new brand appeared in the cigarette machine, you would notice it immediately! It is the familiar psychological difference between recognition and recall! In contrast to the sea of factual knowledge, what can be recalled is only a bathtub full! Conscious recapitulation consists mostly in finding facts, not in finding perceptual images!
Of course, we don't want to deny that if you concentrate enough, you can "see" the surroundings. In doing so, however, there is always a moment of creative imagination that places our ideas in a causal relationship with certain exaggerated partial aspects.
You never see the same thing! This fact is called the narrative. It is based on mistakes! One can also call it creativity, preservation and addition.
When you remember, a strange person appears in a strange play. Do you remember the last time you really embarrassed yourself in public? In your memory it is something completely different! Certain aspects are overemphasized, while others are left out or changed.
You only notice this when you recall what you have experienced together with other people.

  • Consciousness is not essential for concept formation

All living beings, we claim, have a concept of the facts of interest to them.
On the other hand, it is the great achievement of human language to put a word for a concept.
Concepts, however, do not appear in the consciousness at all, otherwise we would not have to talk and write about conceptual relationships.

  • Consciousness is not necessary for learning

Both associative learning and skill learning are most effective without the influence of consciousness. In these forms it is sometimes even extremely annoying!
Consciousness only leads us to the task. In Zen, archery is taught in such a way that the archer should not see himself as an acting person who draws the bow, but learns that the bow draws itself and the arrow seeks its target by itself. He is sent on a journey.
Solution learning (instrumental learning or operant conditioning) does not always get by without consciousness. Many forms of this complex clearly do, however, as when certain test subjects had no knowledge of the aim of the experiment. In a psychology lecture, students were given the task of inconspicuously complimenting all women on campus who wore red clothes. Only a week later the cafeteria was a sea of red, and none of the women was aware that she had been manipulated.

  • Consciousness is not necessary for thinking

Unless we dispute that "judging" is part of thinking, one must simply acknowledge that the act of judging is never conscious. Only the result of the process enters consciousness as a judgment. The result, however, the judgment itself, is not thinking but a perception of facts like those above.
Many experiments (Marbe, H. J. Watt) have proven this beyond any doubt. The so-called directed association experiments were developed in order to rescue for thinking its wrongly assumed place in consciousness.
If we are asked to develop a solution (we can also commission ourselves), this accordingly begins with an instruction about the desired problem or field of association. In simple cases we come straight to a construction.
Let's look at a number of geometric figures!

Which figure comes next? As soon as you have the instruction, the construction or the solution comes immediately! But how did you arrive at the solution? When we use self-observation to present our solution as a process in consciousness, we are actually doing something completely different. We give ourselves a new instruction (instruction + construction) which serves as a reflection matrix for an invented story of how the above problem was solved. In this way we generate astronomically complex structures that produce hundreds of thousands of association relationships (not image relationships) in our brain every second. They all arise unconsciously, and some interesting solutions are represented in consciousness. It is not the process of thinking that becomes conscious, but only (possibly) its result. What is ultimately represented in consciousness, as a twisted and invented coloring of the real, is decided by an inner evaluation guideline.
It is not at all clear what structure such a guideline has. We only know that it is constantly changing. It is therefore one approach of AI to manipulate such structures technically on a trial basis. There are some good candidates for this project.

  • Consciousness is not necessary for reasoning

Reason or reasoning and logic relate to one another like health and medicine or like behavior and morality.
One comprises natural thought processes; the other states rules for how we must think if we take truth, or approximation to it, as our goal.
The most important reason why we need logic at all is that reasoning is largely unconscious. The scientist who is confronted with a problem and applies conscious induction and deduction to it belongs in the realm of legend. We could mention many examples in which brilliant ideas suddenly broke through completely unprepared. Of course a great deal of thinking precedes such conscious ideas. And before the thinking comes the integration of knowledge that makes such unconscious activity as judgment-making possible.

The errors about consciousness are often misleading attempts at metaphor formation.

Everything that has been said so far only serves to limit consciousness and not to deny it. The actual properties of consciousness can now also be worked out just as clearly.
  • Spatialization: the creation of an inner action-carrying space of imagination
  • The excerpt: the exaggeration of individual aspects that stand for the whole
  • The I (qua analogue): the actionless representation of our possible actions
  • The I (qua metaphor): the further non-actionable distancing from the I qua analogue
  • The narrative: the integration of our quasi-actions into causal relationships of the spatialized time structure
  • Compatibilization: the analog of assimilation for the structure of consciousness. Adaptation of objects of perception to acquired schemes. We adapt excerpts or narratives to one another.

Overall, one can say that consciousness is an operator. Nothing is in consciousness that is not an analog of something that was previously in behavior, to vary Locke's well-known formula.

Consciousness is an operator of analogy!

If consciousness is nothing more than an analog world on a linguistic basis, if we do not metaphorically transfigure consciousness and mystically exaggerate it, then we can also grasp it. But this includes a highly complex basis that gives scientists more headaches than the fact that machines are supposed to have conscious sensations!
Consciousness will be less of a problem once a highly organized abstract technical structure exists.
We suspect that it will come about in passing. While accepting the emergence of possible developments, we reject emergence (as above) as an explanation of consciousness, but not as an explanation of the origin of consciousness. The origin of consciousness is a quite different area, about which a great deal can be said.

Approaches to generating generative theories in machines

No theories about intelligence are to be discussed here. We just want to note that, no matter how much one admires their performance, the intelligence of machines is questioned as soon as one discovers which rules or algorithms guide their thinking.
ELIZA is a program that expresses itself like a psychotherapist. The program asks questions and the patient enters answers. All test subjects were amazed at how aptly ELIZA seemed to understand their situation. However, when they were shown the principles behind it, they were no longer convinced of the program's intelligence.
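The disenchanting principle behind such a program is easy to demonstrate. This miniature in the style of Weizenbaum's ELIZA uses keyword patterns and reflecting templates; the rules are our own invented examples, not the original script.

```python
import re

# A few illustrative rules: match a keyword pattern in the patient's
# sentence, answer with a "reflecting" template.
RULES = [
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bmy (\w+)",  "Tell me more about your {0}."),
    (r"\bbecause\b", "Is that the real reason?"),
]

def respond(sentence):
    for pattern, template in RULES:
        m = re.search(pattern, sentence, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default answer when no rule matches

print(respond("I am unhappy at work"))
# How long have you been unhappy at work?
```

A handful of such pattern-to-template pairs is enough to sustain the illusion of understanding, and its exposure is exactly what destroys the impression of intelligence.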
So for the time being we ourselves are the measure of intelligence, and that is enough.
Everything should be made as simple as possible, but not simpler! (Albert Einstein)
As long as you have no idea of the whole, you cannot make sense of the individual parts. (Marvin Minsky)
The problem of intelligence is not easy and yet we must first learn to understand partial aspects that give us an idea of ​​the possible whole.

Logic in medicine

On the occasion of a diploma thesis, we began to think more intensively about how human decision-making behavior can be supported, simulated and critically monitored by technical systems. The theoretical basis of an expert system was to be designed that would offer physicians a way to quickly survey the individually unmistakable interactions between the various prescribed drugs, as a construct of warnings, contraindications and biochemical interactions, and to suggest suitably adapted alternative preparations where necessary.
It turned out that this topic fell far short. Of course, all of the patient's previous illnesses had to be included in this technical decision about the medication recommendation, because a patient with kidney failure or diabetes presents different starting constellations for evaluating the possible medication for another medical problem. Furthermore, Prof. Fröhlich (University Clinic Hanover, Chair of Pharmacology) pointed out that we (of course!) had to include a dose adjustment relative to age, gender and body weight. But there are drugs that become ineffective below a certain dose. The physical constellation also has a selective influence.
After we had already spent a year before registering the thesis trying to fit all the factors into a theoretical concept, with the ulterior motive of eventually having to program the whole thing, a decisive thought occurred to us. It was triggered by a conversation with the head of pharmacology at the University of Greifswald, Prof. Sigmund.
He described to us a similar project that had been running unsuccessfully for years at the university hospitals in Oslo and Stockholm. They already had a huge database. But its medication recommendations would say "stop" or "don't give" even where a general practitioner would not have expressed the slightest concern. The database was too sensitive for the individual situation: it issued warnings where there were no practical human concerns. Its decision rules were too tight. We introduced the idea of using a special logic here, adapted to the problem. Prof. Moraga (AI chair at the University of Dortmund) found the idea plausible.
The logic of choice was fuzzy logic. Why? It can handle fuzzy-set problems precisely. It does not lead, as many believe, to fuzzy results, but to exact output values that can be passed on for further processing.
For when we talk about kidney failure or heart failure, the terms are vague. The widespread disease diabetes is also a term of degree with respect to its intensity.
Its causal appearance also differs, and it can therefore only be added vaguely as an influencing factor in a therapeutic consideration, precisely in its importance for the therapeutic effect. At this point, at the latest, another logical problem opens up.
If we want to define the influencing factors vaguely, we must allow an element of arbitrariness, because the weighting of the individual factors has to be estimated. The real indeterminate element lies in this estimate. Efforts are made to use expert knowledge to mitigate this act of arbitrariness.
This insight should not be seen as a weakness of the method, but as its strength.
Nothing is more unnatural than the application of two-valued logic to a problem of human experience.
Ultimately, the idea sufficed to apply fuzzy logic, which has only existed since 1965, when it was introduced by Prof. Lotfi Zadeh ("making computers think like people"), to a fuzzy problem on a trial basis in order to solve the task better.
Fuzzy logic is based on fuzzy mathematics. Here, too, fuzzy mathematics produces sharp results, which must not be confused with probabilities. Fuzzy logic states in what degree an element belongs.
The membership function describes the logical, current participation of the element in its theory in relation to reality. In the field of neuroinformatics, some interesting things have been developed at the University of Bochum that also draw on "fuzzy logic".
An optical system developed by Robotronik is held over a suspicious area of skin, and the computer outputs a diagnosis, whose result, however, still has to be verified. For example, life-threatening skin diseases such as malignant melanoma are to be detected.
The fascinating thing is that the machine's decision could be verified in 98% of cases. It was not the expert's neural processes that were copied, but aspects of his behavior. In the case of malignant melanoma, the system takes only three rules into account: the edge shape, the color and the surface texture of the suspicious spot.
The more jagged, the more malignant; the more inhomogeneous, the more malignant; and the more verrucous, the more malignant.
A doctor would also use this to make a decision. The "the more … the more" relations are programmed into the fuzzy sets. The so-called inference machine calculates the weighted relationships of the incoming values and, in the defuzzification stage (e.g. by min-max operators), gives a sharp output, in this case a diagnosis whose error rate is far lower than that of a dermatologist. The bottom line is that doctors use rules of which they are often unaware. Of course, none of this makes a machine intelligent!
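The three rules can be sketched as a miniature fuzzy system. All membership functions, input scales and the simple defuzzification below are our own illustrative assumptions, not the actual Bochum implementation.

```python
def grade(value, lo, hi):
    """Linear membership: 0 at or below lo, 1 at or above hi."""
    if value <= lo:
        return 0.0
    if value >= hi:
        return 1.0
    return (value - lo) / (hi - lo)

def melanoma_suspicion(jaggedness, inhomogeneity, verrucosity):
    """Inputs on a hypothetical 0..10 scale from image analysis.
    Each rule encodes 'the more X, the more malignant' as a grade."""
    r1 = grade(jaggedness, 2, 8)     # the more jagged, the more malignant
    r2 = grade(inhomogeneity, 2, 8)  # the more inhomogeneous, the more malignant
    r3 = grade(verrucosity, 2, 8)    # the more verrucous, the more malignant
    # Aggregation with max/min operators; the "defuzzification" here is
    # simply the mean of the rule activations, yielding a sharp score.
    return {"max": max(r1, r2, r3),
            "min": min(r1, r2, r3),
            "score": (r1 + r2 + r3) / 3}

print(melanoma_suspicion(8, 5, 2))
# {'max': 1.0, 'min': 0.0, 'score': 0.5}
```

The point is exactly the one made above: the vague clinical "the more … the more" rules go in, and an exact, further processable output value comes out.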
It would have to be able to design the rules itself if it wanted to be a specialist.

Short-term goals of the AI

Despite the uniquely complex structure of his brain, which contains more neuronal connections and contacts than there are positively charged particles in the universe visible to us, our miner buddy Anton from the Hugo colliery would probably not recognize one of these three rules in his entire life. One must also be able to see intelligence from this side. We all, like machines, must be able to learn! A mining robot could very likely not only mine coal but also chemically examine the mined rock and determine the direction of the advance, if it had an "is it worth it" strategy containing a few rules. Of course, it could not drink coffee or make casual conversation.
But what if one wants it?

The crucial question seems to be what do we expect from intelligent machines.
If we combine all the special machine talents already realized today with the expected future miniaturization of their services, then, in the opinion of many, we would have a highly intelligent machine. We do not think so! We would have a complete idiot with, admittedly, many special talents!
It is already the case today that machines can hold more factual knowledge. But can they also use it?

Advantages of machines:

  • Machines are sequentially ten thousand times faster than any human brain.
  • They can recursively search any problem space in real time.
  • Machines can efficiently share and merge their knowledge bases.
  • They do not have the disadvantage of having to initiate extensive learning processes in order to pass on knowledge.
  • Their parallelization in knowledge processing can be greatly increased.
  • Machines do not need anything like the human order of magnitude of linkage complexity, because they are faster!

Disadvantages of machines:

  • They cannot store complex structures and recall them from multiple implicational directions.
  • They have insufficient knowledge of the problem area to be able to fall back on a similar structure that has already been analyzed.
  • Machines learn quickly, but in a very limited way! They cannot forget!
  • Machines are not very complex. They are still too dependent.
  • Their ability to assimilate is static and their ability to associate is rudimentary.
  • They don't ask questions, except those that were predictable (and therefore programmed)!
  • Machines do not have hierarchically structured concepts of action and decision-making.

Neural Networks:

In 1943 the era began with the introduction of a theoretical neuron that was supposed to simulate the conditions in the human brain: the McCulloch-Pitts neuron. The assumption of this research direction that it would suffice to "wire together" enough neurons and the artificial brain would be finished proved to be wrong.
Today there are quite good software-simulated networks on the market with an impressive ability to learn. Neural networks always have to be trained, just as a newborn must first acquire its maximum number of connections! Its "basic genetic wiring" is limited to survival functions and general unspecific activity.
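The McCulloch-Pitts neuron mentioned above can be written down in a few lines: binary inputs, fixed weights, a threshold; it fires exactly when the weighted sum reaches the threshold. The choice of logic gates below is our own illustration.

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fires (1) iff the weighted sum of the
    binary inputs reaches the threshold, otherwise stays silent (0)."""
    s = sum(i * w for i, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# Single neurons already realize elementary logic gates,
# given suitably chosen weights and thresholds:
AND = lambda x, y: mcp_neuron([x, y], [1, 1], 2)
OR  = lambda x, y: mcp_neuron([x, y], [1, 1], 1)
NOT = lambda x:    mcp_neuron([x],    [-1],   0)

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```

It is precisely the rigidity of this model that explains the failure noted above: weights and thresholds are fixed, so nothing is learned by merely wiring such neurons together.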
Neuro-fuzzy systems: Linking the learning basis, in the form of parallel, multi-layered neural networks with depth and breadth representation, to the fuzzy systems developed and further improved in Dortmund will improve learning and the application to specific situations. The dogmatism of certain learning content is relativized.
Fuzzy error propagation: If the environment changes, the incoming measurement data in a fuzzy controller will produce only nonsense that has nothing in common with reality.
A method was therefore developed which detects a deviation from the optimal fuzzy control quantity and counter-regulates it by influencing the membership function of the fuzzy controller. The evaluation strategy changes fluidly (parallel to the internal evaluation strategy for conscious data).
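The counter-regulation of a membership function can be sketched as follows. This is a free, hypothetical illustration of the idea, not the Dortmund method itself; the triangular set shape, the update rule, and the learning rate are assumptions made for the example.

```python
# Sketch of "fuzzy error propagation": when measurements drift from an
# optimal reference, the center of a fuzzy set is shifted to counter-
# regulate. Set shape, update rule, and rate are illustrative choices.

def triangular(x, left, center, right):
    """Degree of membership of x in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

def adapt_center(center, measured, optimal, rate=0.1):
    """Shift the set's center in proportion to the deviation
    of the measured operating point from the optimum."""
    return center + rate * (measured - optimal)

center = 5.0
for measured in [6.0, 6.2, 6.1]:   # the environment has drifted upward
    center = adapt_center(center, measured, optimal=5.0)
print(round(center, 2))            # the center migrates toward the new regime
```

The point is only that the evaluation strategy changes gradually rather than being re-programmed: the membership function follows the drifting environment.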
Genetic algorithms: They always search for the best option according to the directive issued. Like their natural analogues, they are passed on and develop deviations only when it is inevitable. One or more sub-directives can be defined or influenced by neuro-fuzzy controllers. One can also manipulate the threshold value of genetic algorithms in order to achieve a directive correction.
Adaptive control: It was developed for cases in which it is unclear when and under what conditions an assessment guideline must be abandoned or modified. In principle, neuronal structures are already capable of this (Barto et al. 1983). Only their approach remains in the dark: it is impossible to understand which neuronal adaptation led to a certain observable change. That is exactly the crux of the matter with AI. We are slipping into an adventure that we cannot control!
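A toy genetic algorithm makes the "directive" idea concrete. The directive (fitness function), population size, and mutation range below are invented for illustration; the point is only that deviations appear and survive exactly when they serve the directive better.

```python
import random

# Toy genetic algorithm following a "directive" (fitness function).
# All parameters here are illustrative assumptions.

random.seed(0)

def directive(x):
    """The issued directive: prefer values near 10."""
    return -abs(x - 10)

def evolve(population, generations=100, mutation=0.5):
    for _ in range(generations):
        # Selection: the fitter half is passed on.
        population.sort(key=directive, reverse=True)
        parents = population[: len(population) // 2]
        # Offspring deviate only slightly from their parents.
        children = [p + random.uniform(-mutation, mutation) for p in parents]
        population = parents + children
    return max(population, key=directive)

best = evolve([random.uniform(0, 5) for _ in range(20)])
print(round(best, 1))   # should land near the directive's optimum of 10
```

Manipulating the mutation range or the selection threshold here would be the simplest form of the "directive correction" mentioned above.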
Our consciousness, to which we had previously denied some competence, proves to be right in its place at this point, because the will forms and consciousness follows.

Knowledge development and knowledge representation

If we define "knowledge" as the central point of an artificial intelligence, it is not to deny certain emotional human factors or to weaken their importance, but for the sole reason that all forms of human well-being and behavior are ultimately related to knowledge. Fear that can escalate into panic, affection that can lead to excessive behavior: all of this certainly has to do with knowledge, whose regulatory guideline is constantly changing. Ultimately, it is a linguistic problem how we represent such complex and archaic patterns of knowledge. Neurologists and brain researchers are equally busy with this. The limits of linguistic analysis and of linguistic knowledge representation illuminate some problems that we will have to solve in the future in order to advance AI. People have certain difficulties analyzing nested sentences that are linguistically correct: "This is the malt that the rat that the cat that the dog chased killed ate."
Machines, on the other hand, do not per se understand associative knowledge and the metaphorical implications of human language. This is where the actual skepticism of various critics of AI arises, who do not want to see that this is "only" a problem of the representation of knowledge and not a mythically founded, indissoluble advantage of human knowledge representation.
Human knowledge can only be conveyed to machines as a string of singular sentences, since they do not store situation programs, but facts and rules for how knowledge should be applied.
Machines can, however, produce their own situation programs, in which knowledge chains are portioned into so-called clusters, which can also be processed using fuzzy logic. Of course, there is still a long way to go before machines have syntactic capabilities similar to those of humans. Human abilities are not perfect either, as the following funny example shows.

A: Tonight, I want to go back to bed with Sindy Crafford.
B: Again?
A: Yes, I have had the urge before.

Emptiness as the beginning of abundance

It has been shown that it can be completely irrelevant to implement certain facts in machines that are later not used. Furthermore, there is no point in implementing a priori conclusions or associations that are too specific. Here arises the problem of the infinite sets of associations anchored in the structure of language, which one can never program.
For example: "Birds can fly, unless they are penguins or ostriches, or they are dead, or have broken wings, or are locked in cages, or have their feet stuck in cement, or have had such terrible experiences that they are practically mentally incapable of flying." (Marvin Minsky, Mentopolis) In this way, what we actually want to represent cannot be programmed. That is why empty rule bases are tried, which the computer should fill out on its own.
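Minsky's point can be made painfully visible in code. The following naive encoding of a default rule with a hand-written exception list is a deliberately simplistic sketch; the property names are invented for the example.

```python
# A default rule with an open-ended exception list. Every new
# exception must be programmed by hand, which is exactly why this
# representation can never be completed a priori.

FLIGHT_EXCEPTIONS = {"penguin", "ostrich", "dead", "broken_wing",
                     "caged", "feet_in_cement", "traumatized"}

def can_fly(animal, properties):
    """Default rule: birds fly, unless a known exception applies."""
    if animal != "bird":
        return False
    return not (properties & FLIGHT_EXCEPTIONS)

assert can_fly("bird", set()) is True
assert can_fly("bird", {"penguin"}) is False
# The next exception nobody thought of simply falls through the net:
assert can_fly("bird", {"wings_clipped"}) is True   # wrong, but unprogrammed
```

The last line is the whole argument: the rule base is only as complete as the programmer's imagination, which motivates the "empty rule bases" the machine is supposed to fill on its own.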
The most important prerequisite for an intelligent machine and its development is the ability to communicate with already intelligent structures (programmers). Furthermore, the use of pre-formed, machine-processable data sets must be guaranteed, as well as a problem-solving and problem-recognition strategy.
A knowledge store with a "genetic" basic knowledge and adequate algorithms serves as a starting point. Recorded facts pass through parallel neural networks whose threshold values are pre-weighted for switching through. After the run, the output is fed back (feed-forward and back-propagation) or fed to any other neuron layer. The threshold values can be varied depending on the output values. This makes a neural network more flexible. The output values of the neural network are correlated again with unknown facts or values by means of fuzzy sets.
The fuzzification mechanisms of the inference engine also work with weights. Their weightings can be modified using fuzzy error propagation.
The system of threshold values and weights of linguistic (mathematical) variables or terms, the back-propagation strategies and the adaptive control mechanisms can be interpreted as a unit of meaning. In this system, indistinct terms can acquire a clear meaning within the machine processing system. Even at this stage, coincidences and unpredictable developments arise. The unit of meaning can be expanded and supplemented as required.
Association units arise through knowledge portioning. Certain knowledge patterns are summarized in a cluster and can be "addressed" by units of meaning. This "addressing" of suitable clusters can be controlled by superordinate theories. If for some theory A no suitable representation is found that would be compatible with the new facts, a modification of A is slowly achieved via the unit of meaning of A by regulating its threshold values and weightings. Surprising developments can arise in this way.
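The "unit of meaning" can be given a free, minimal sketch: a neural layer with adjustable thresholds whose crisp output is then fuzzified, so that vague linguistic terms acquire graded, machine-usable meanings. Layer sizes, weights, and set boundaries are all invented for the illustration.

```python
# Free sketch of a "unit of meaning": thresholded neural layer plus
# fuzzification of its output into linguistic terms. All numbers
# are illustrative assumptions.

def layer(inputs, weights, thresholds):
    """One neural layer: weighted sums passed through per-unit thresholds."""
    return [1.0 if sum(i * w for i, w in zip(inputs, ws)) >= t else 0.0
            for ws, t in zip(weights, thresholds)]

def fuzzify(value, sets):
    """Correlate a crisp value with linguistic terms via fuzzy degrees."""
    return {name: max(0.0, 1.0 - abs(value - center) / width)
            for name, (center, width) in sets.items()}

out = layer([0.9, 0.4], weights=[[1.0, 1.0], [0.5, -1.0]],
            thresholds=[1.0, 0.2])
activity = sum(out)                       # crisp summary of the layer
terms = {"low": (0.0, 2.0), "high": (2.0, 2.0)}
print(fuzzify(activity, terms))           # graded membership in both terms
```

Varying the thresholds (as the text describes) changes which facts "switch through", and the fuzzy degrees let an indistinct term carry a partial, graded meaning instead of a hard classification.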

Dear Mr. Walther! I did not really want to write so much on questions of consciousness, because there are quite a few other views that I can understand. But I could not avoid it, because I could not leave AI and the question of the conscious mind of man unconnected, as you will surely understand. My presentation of the two problems that actually interested you turned out a bit meager. But on one thing we both agree, I think. You speak of an "evaluating center", I speak of an evaluation guideline. But this center is nowhere anatomical. It is a functional correlate of the brain functions themselves.
You say: "A computer can never provide such dynamic performance, in which function and evaluation are 'reciprocal', that is, interactively and mutually drive each other vertically higher." I ask: why not? What is it that keeps you from thinking this through? The anatomical structure of the brain is one particular way of generating intelligence.

H.W.:

First of all, I stumble over the concept of "consciousness", which you describe partly through negation and partly through positive phenomena, but which nevertheless remains in a peculiar limbo; on the other hand, in my opinion, quite different nuances are often in play, which in my own "nomenclature" are not subsumed under "consciousness". This is by no means meant as a criticism of your presentation, in view of the problematic nature of this term!

But at least for my own "needs", and also with regard to the possible success of a communication about it, it seems to me inevitable to try, from my side too, to delimit the term within the framework of my own ideas: what is (for me) consciousness, and what not, in order then to consider which phenomena this term may still stand for. As an aside, I would like to note that at this moment I am myself not completely sure what result this experiment will lead to; so a further thanks to you, insofar as your presentation forces me to think this through again from the ground up. Of course there have been years of preliminary considerations and a certain "direction", but neither you yourself nor the development of knowledge in the natural sciences stand still, and so this opportunity for fermentation and clarification is very welcome to me.

You yourself use the term consciousness in (in my opinion) quite varying meanings; on the one hand it appears as a purely functional term, for example when you refer to human and machine consciousness. It is then used as a synonym for intelligence, or as that of which man as a human being is so proud (hence the faculty that I would personally call "spirit", in contrast to the animal). In another place, consciousness is again identified with memory; on the other hand, it is not necessary for thinking, for the formation of concepts and for rational activity. Personally, I do not arrive at any "conceptual unity" in this way; I would rather speak of contradictions.

So what might we mean when we speak of "consciousness"? An important point in this respect seems to me above all the comparison with animals, but also the relationship between life and inorganic matter; another is what we want to say when we speak of "unconsciousness" (starting points that also play a role for you), and that both points are directly related to one another will immediately become apparent. You yourself bring the picture of the "blow on the head" and of processes for which there is no consciousness representation. An equally good example seems to me above all to be sleep, and it is immediately noticeable that animals naturally also sleep! Now we surely agree that man is not "conscious" in sleep, and so by analogy the same can be assumed for the animal world (and even for the plant world, which also shows a clear difference in reactions between day and night rhythm). From what has been said it follows, as casually as inevitably, that of course different forms of consciousness exist, because the "consciousness" of plants, animals and humans differs qualitatively, but that for all living beings the same functional fact "consciousness" holds, as the active state of the perception, representation and evaluation systems. Consciousness means above all the "waking state", which is why "waking up" every morning can provide many a clue. Introspection shows that "consciousness returns", that is, in a serial process the various centers in the brain that determine the respective form of consciousness are activated again in a networked manner, the way the lights come on in a large building. At the end of this process we find ourselves as the ones we fell asleep as; fortunately there is definitely a "continuum", otherwise we would have to create ourselves from scratch every day!

This "continuum", however, is something different from functional consciousness itself; it is already "consciousness of something" in the respective corresponding form of consciousness. It follows from this: the term consciousness does not initially mean a material substrate (1), but only a specific state of a "hardware organization": the computer can only calculate "something" when it is switched on using externally supplied energy; its systems and their functionality are one thing, the data that are stored in them and processed are another. Unfortunately, most of the time we cover consciousness and "consciousness of something" with the same word, and that leads to confusion; and this confusion is greatly increased by the fact that there are very different forms of "consciousness of something".

With this I come to the conclusion that in principle we have to ascribe consciousness, in the sense of the "switched-on state of a reaction system", to all living organisms, but that of course the forms of consciousness change qualitatively through the series of species: from the rational consciousness of man, through the emotional consciousness of the animals, down to the "lower forms" of instinctive or even vegetative "consciousness". Most people understand by consciousness only the rational-human form, but we can certainly not refuse to grant animals sensory awareness as their own form of consciousness (and one underlying ours), insofar as they are self-interpreting individuals that respond to external and internal stimuli. The "lower" forms are less relevant for our topic, insofar as there it is not individuals who guide the stimulus evaluation; rather, this is genetically fixed. From the emotional system onwards, however, living beings have at their disposal a certain range of self-evaluation of sensory and internal signals, and this, I mean, is actually what we understand by consciousness.

This derivation is initially meant as a purely "phenomenal" one, in that it tries to derive its conceptual understanding from existing forms of consciousness. Another question can then be whether other forms of consciousness than those of living systems are also conceivable, e.g. machine ones.

If we have hitherto regarded the active ability to interpret data as consciousness, we must at first conclude that a computer that is switched on has something like "consciousness". On the other hand, "something" in me immediately resists this; something seems to be missing in the definition of "consciousness". I think what is missing is that "fictional self-perception" that it is my own "I" that is becoming aware of something here: that this ability is "assembled" with myself as self. Exactly this also applies to animals, whose sensory consciousness is just as obscure to themselves as sensation still is in humans. Every living being is something that dynamically presses forward and thus develops in certain contexts of reception and reflection (2); as you yourself say, it has a "will", and it is also aware of this "will". So far Schopenhauer is certainly right: it is this willing and the consciousness of being able to will that make it up at the core (3); but precisely this cannot be transferred to machines. According to my definition, consciousness is not only the switched-on state of information-processing hardware; above all, it is self-consciousness of one's own activity. Only in this way does an independent consciousness arise within the respective range of perception, representation and interpretation. Artificial forms of consciousness without these last two conditions remain basically "artificial" and depend on the programmer; they are only consciousness-like.

According to my theory, the human being has three different forms of consciousness at his disposal at the same time: the emotional one, which he shares with animals, the intellectual (understanding) and the rational (reason), which are developed and work together in individually different ways. These different forms of consciousness are thus generated by their own centers of interpretation ("faculties": ratio as understanding and reason, emotio), which in turn build on one another in layers and in a serial-parallel network. Rational consciousness is inconceivable without an emotional one, and this again depends on the instinctive as well as the vegetative network. If the highest form of consciousness fails in an individual, then he is "unconscious" in relation to his average type. The baby, like the animal, appears to us as "unconscious" in relation to the normal adult human, although both are thoroughly "conscious" in their own way.

The difficulty is mainly of a linguistic nature: because we do not distinguish between "consciousness" and "form of consciousness", but mostly speak only of "consciousness", we cannot differentiate between the various forms and levels, and they blur into one another without contours.

The term also poses a further linguistic problem, because in German (as in English and French) it is derived from the root "knowledge" ("consciousness", "conscience" = Latin con-scientia, "knowing-with" [!]), and so initially covers only the rational sphere. (4) Knowledge in this sense, however, we grant only to humans, and that is why we have excluded the animal kingdom until today and granted man a special position with regard to consciousness.

In this respect a further argument can be used, which Peter Singer (5) has again made clear: consciousness should be spoken of where the living individual experiences, through his own reflection, that he himself is exposed to suffering. But this applies only to reflected emotion as the self-perception of sensations in the individual. Neither the lower animal species (probably from the reptiles downwards) nor insects and plants have it, and computers even less.

For me the following concept of consciousness crystallizes out:

1. the switched-on state of a reaction system, and

2. individually perceived self-evaluation of active and passive relatedness.

Above all, therefore, consciousness includes awareness of oneself, in particular the much-discussed question of the "I" and its constitution. In this respect, the sought-after "core of consciousness" seems to me to be identical with the "I-consciousness", however light (rational) or dark (emotional) it may be. On my conception of the I, I can quote from my article "What is Metaphysics":

"Every faculty is a synthetic mode of reaction and action of the inside towards the outside, learned by the inside from the outside; because the outside can be learned in its same or similar repetition, synthetic communication from the outside to the inside increases on the basis of these two factors, and thereby a living connection between outside and inside, between the outside and the communication center, must be preserved. This requires a synthetic center based on neural function: the unity of a being, given in its demarcation from other beings, necessarily requires that the leading synthetic ability to communicate be set in one with the synthetic center. The picture for this being-in-one is the sphere. What we claim as "scientifically proven" for galaxies, stars, planets and gravitation applies equally to ourselves: it is the essence of the sphere, with its outer shell, to protrude into the surroundings in a delimited manner; its effects on the surroundings, and the effects of the surroundings on it, turn out "as if" they originated from an inner center or acted on the center of the sphere. Center and surface are two components of the sphere, and yet the sphere is one, a oneness closed in itself. It is the same with every faculty of being and the élan vital. (6) The layered faculties from instinct to reason, which connect every living being with the outside and communicate with it, mediate the center of this outside, in that the results of the sense organs are interpreted through these faculties. The evaluation center is nothing else than that center of action of the sphere which in humans we call the "I" of the understanding, the "I" of reason, or in double reflection the "I-I", to which the human organism, by means of its leading faculty, relates its experiences and actions. From the concentration of the I-center it gains its "strength", and from it it works its actions.
This center is just as much an "as if", just as much a fiction, as the center of mass of a star; and yet both are quite real in their own way: as centers of action. Everything that is in this world is figuratively of such spherical shape, and thus of such duality in unity. This varying between the two poles of unity is already exemplified by light, in that it behaves partly as a wave (energy - élan vital), partly as a corpuscle (mass - surface); this has not yet been brought into a unified conception, but can be understood only and precisely in this composition.

Every possible type of person can be understood from the interplay between his faculties and the seat of the sphere of inwardness. The epigenetic development of the human spirit, depending on talent, environment and tradition, brings about the most varied networkings of the stratification of faculties between emotion, understanding and reason. This necessarily leads to different centerings of the I; and so people follow either their instincts, their feelings, their customs, the ideal, the "sacred", or, in chameleon-like alternation between the categories, everything at the same time. That the majority of people are still today shaped by understanding and not by reason can already be seen from the fact that superstition, that is, the mythical conceptions of the understanding, is far more widespread than the metaphysical conceptions of reason."

The human I is thus composed of the emotional and rational shares of the two forms of consciousness existing in it, emotio and ratio. In contrast to the animal, this consciousness becomes "bright" through language as that of the understanding. Another quote from my text "The feeling for the beautiful":

"Things in our human sense only crystallize out as a vertically integrating self-performance of the mind: in the connection of the different properties of different sensory results into one functional unit. This summary is assigned its own term, represented in its own brain area and evaluated by the mind itself (initially under the guidance of the emotio). Grammar is the juxtaposition of terms and thus the empowerment of the world through language as mind. Let me put it in a picture: words are the torches in whose light things first appear to us.

At this interface, what a person calls his "I" also emerges: the ability of the mind to identify things as functional units leads per se ipsum to recognizing oneself, one's own person, as a functional unit and center of action, and to summarizing it in a separate term: the "I" as the carrier and "owner" of self-perception, including feeling and the data storage of the mind."

The "carriers" of consciousness are therefore the faculties emotio and ratio, whose bases are formed by instinct and the vegetative system, which can be "reduced" to their neural components. The multiple stratification and serial networking seem to me to make clear at the same time that the phenomena of consciousness can no more be found at the level of what happens in the individual neuron than in pure "network structures"; rather, individual interpretation is conditioned by central formations for evaluating information. Without the tapping of the chemical concentrations of neurotransmitters by the limbic system and their reflection (thalamus?), no self-sensation and no consciousness of sensation; without concept formation as independent reception, reflection and interpretation (rational "mirror of consciousness", short-term memory), no I and no "bright" consciousness.

I. Fuzzy logic

This principle of fuzzy logic convinces me very much, as it is certainly already the basis of the perception of living systems (and is contained in my conception of individual perception). The principle works with the fact that the evaluation of various parameters does not have to be 100% fulfilled; let us say, for example, that a 70% agreement across five different parameters is sufficient. A 70% correspondence can be determined with certainty much faster than an exact 100% correspondence, which would have to deal with the details of what is to be perceived; on the other hand, a single parameter is not sufficient for reliable identification, so otherwise at least two or three 100% determinations would have to be made. A fuzzy logic thus has a great advantage in speed, and hardly any disadvantage in accuracy.
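The 70%-agreement idea can be sketched in a few lines. The parameters, prototype values, and tolerances below are invented for the illustration; the point is only that identification succeeds when the average match over several parameters clears a threshold, without any single parameter needing to be exact.

```python
# Fuzzy identification by average agreement over several parameters.
# All parameter names and values are invented for the example.

def match_degree(observed, prototype, tolerance):
    """Per-parameter agreement in [0, 1], falling off linearly."""
    return max(0.0, 1.0 - abs(observed - prototype) / tolerance)

def fuzzy_identify(observation, prototype, tolerance=10.0, threshold=0.7):
    degrees = [match_degree(observation[k], prototype[k], tolerance)
               for k in prototype]
    return sum(degrees) / len(degrees) >= threshold

cat = {"size": 30, "weight": 4, "legs": 4, "tail": 1, "fur": 1}
seen = {"size": 33, "weight": 5, "legs": 4, "tail": 1, "fur": 1}
print(fuzzy_identify(seen, cat))   # True: approximate agreement suffices
```

No single parameter here is checked exactly, yet the identification is both fast and robust, which is precisely the advantage claimed above.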

It is no coincidence, it seems to me, that it is the same with our mind; or in other words, that is in my opinion exactly what defines the mind: the evaluation of different sensory results on one level. The sensory centers seem to me to work with a kind of "fuzzy logic" from the outset during "first recognition", because the senses, especially the visual sense, evaluate the perception in the first "access" according to the "protruding sides" of what is to be perceived (see my text "Consciousness"). But what else is that than an "approximate", i.e. roughly 70%, determination of agreement? And this combined across all the sensory centers for which the perceivable has something to offer?

One could even go so far as to claim that our ability to compare and to form analogies is based precisely on the fact that our neuronal faculties also work with such a "fuzzy logic". Analogy and association can therefore be explained very nicely with fuzzy logic, in that neural patterns matching "as well as possible" (!) are "turned towards", because we (rightly) assume that in many cases "like behaves like" (!).

II. "Third factor"

Referring to Marvin Minsky's sentence on the "description of consciousness" that you quoted, namely that we are not able to do this properly, I am not entirely satisfied with my own definition in my last answer: "switched-on state of an individually evaluable reaction system" says something about the "system requirements", but does not describe what consciousness is.

Latency, background knowledge, expectation: this seems to be a third requirement for consciousness. On the basis of previous knowledge we apply a more or less wide "pre-filter" to each perception, depending on the connection in which we stand to a given perceptual situation. When we walk along the street and virtually perceive "everything at the same time", we actually see "nothing"; instead we register, by "reaching over it", everything that we can expect "on the street". If an unusual sensory event reaches us from outside, or if we set ourselves a concrete "perception goal", we narrow the perception filters and the background knowledge again along the previously known expectation. Consciousness in the human sense thus seems to need, as a third factor, a constant provision and filtering of prior information: without the accommodation of memory (see the graphic in "Consciousness"), what we perceive would precisely not be conscious; rather, we would register something "foreign" here and try to approach it by means of comparison and analogy (fuzzy logic). How should this automatic provision of "context" and its filtering ("headlights") be transferred to machines?

But we already share this third aspect with self-perceptive animals: there, too, one can already find an individual anticipation which additively-horizontally links certain events with certain subsequent events and evokes the corresponding emotional reaction (as in the Pavlovian effect). Here, too, we already have something like "environmental awareness" on an emotional level, made up of a "stream of sensory signals" and their familiarity or unfamiliarity, which in turn requires a known "context".

"Consciousness in itself" is therefore just as much an absurdity as "free will in itself". Will is always only will in that it wills "something", and there is consciousness only of "something", not "in itself". The "switched-on state" therefore consists primarily of a constant "flow of data" of which the perceiving individual himself remains unconscious, which stays latent; the "latency threshold", that is, what penetrates into conscious perception, will differ greatly between individuals, especially with regard to categorical equipment: the emotional and the intellectual type are more affected by external perceptions, but also by the body's own signals from the vegetative system and instinct, than the rational type. So there must be a connection between the location of the evaluation center and this latency threshold.

In any case: this merely "latently conscious" part of perception belongs to consciousness itself. It is something different from the famous (e.g. Freudian) "unconscious", insofar as what is latent in perception always remains "under the control" of perception and, through unusual changes or "attention" ("headlights"), can switch from latency to presence.

Obviously we are dealing here with completely unexplained processes in our brain, such as how this provision of background knowledge and perception filters works - similar to the phenomenon "one floor below", where it is still completely unknown how the brain manages to put the world, turned upside down by the eyes, "back on its feet" ...

Consciousness itself could now be defined as a signal stream between an individual evaluation entity (emotional and/or rational self) and sensory perception systems that use fuzzy logic to automatically "pre-recognize" incoming sensory events in parallel and serial passage through various representation systems, and this in constant comparison (reciprocally fed back) with the existing and pre-rated background knowledge, whose automatisms on the one hand supply the perceived "context" and at the same time regulate the latency or presence of what is perceived. (Certainly not a "simple" definition ...)

III. Organizational invariance

Here you yourself assemble most of the arguments as to what distinguishes human and machine consciousness; your main argument is definitely mine, in which all the individual arguments can be summarized: here a self-active, living system, including its phylogenesis and ontogenesis (genetic and cultural tradition); there a passive and will-less system that is dependent on external energy input and on preprogramming, both in terms of "background knowledge" and of "learning algorithms" and "target corridors".

I would not share your thesis that "history does not repeat itself". Only because the same repeats itself in the inorganic could living information storage arise at all (information storage would make no sense in something always random and non-repetitive). And as Goethe already knew, everything in the cultural existence of man repeats itself, at least in its basic forms; only the "quantities" change, not the qualities (the latter are, to be sure, expanded by cultural evolution, but that changes nothing about the repetition of the existing). In the opinion of almost all scientists, quantum physics and its uncertainty relation can in no way be transferred to the meso- or macrocosm.

And insofar as the phenomena of consciousness - as already pointed out - can by no means be elucidated on the neuronal level (i.e. in the area of electrical and chemical phenomena), because they are rather the result of qualitatively far higher ("mesocosmic") processes, the findings of quantum physics cannot play a role for consciousness. Perhaps one could even say, in exact analogy to the inorganic: consciousness is only possible because "history" repeats itself?!

How closely "fuzzy" or "multi-valued" logic comes to my own conceptions has already been described above. The belief, often found in science, in a "single complete theory" "that thus determines the history of being" I consider just as questionable as Heidegger's belief in the "clearing of being" ... (in essence it is the same: a faith, in which above all that metaphysics is contained which the normal person covers with "religion").

There are only two options:

a) Everything possible in the cosmos is already there, we just haven't fully recognized it yet. With this we arbitrarily declare the here and now to be the climax and endpoint of a development ... this kind of anthropocentrism is ancient in various guises.

b) We believe that by means of a "theory" (!) we can "prescribe" to nature all possible further developmental steps. How can this be reconciled with the emergent phenomena from the inorganic to life to spirit, which in reality we cannot explain to this day? So far we can only register the emergence of these phenomena compared to their pre-existing conditions - and yet we want to say something about the future, indeed "prescribe" something to it?

We do not know ourselves, for example as far as our consciousness and our intelligence are concerned, yet we are part of nature (and, in another sense, have always been so). With these properties that are unknown to ourselves, we want to prescribe to nature what it may and may not do in the future? (7)

At the same time we know: the more comprehensive a theory is meant to be, the more general it must be, which also means the further it must move away from real beings! In an eleven-dimensional "world formula" of the "strings", all phenomena can be described "mathematically" in a coherent way - but what does that say about the cultural mesocosm created by humans, which determines existence to an ever greater extent, at least on earth? What can it say about a further conceivable qualitative "quantum leap" in information processing by living systems (and their aids) if we project the leap from animal to human, and with it the change from intellect to reason, further forward?

Shouldn't every "ToE" (of reason) be thrown onto the rubbish heap of history, however beautiful, just like the Ptolemaic world view of the mind?

Finally, you seem to me to be of a very similar opinion here: if you describe the search for a complete theory as a "motivating solution", I see myself entirely on your side. The knowledge that such a theory cannot be completed should of course not discourage us from trying to explain rationally everything that lay "behind us" in the past, up to ourselves - because we ourselves are a part of this unfinished history and stratification, and so even a reductionist explanation of the world, with regard to forcing a common "origin" of all beings, tends per se ipsum towards the attempt at a unifying "ToE".

If I interpret you correctly, you are now introducing a distinction between machine and human consciousness in such a way that the goal of AI is not to transfer concrete human consciousness to machines, but that the principle of "organizational invariance" "only" aims at generating a consciousness similar to the human one - for example in the sense that physically comparable "assemblies" with comparable organization ("wiring" and possible operations) also generate comparable consciousness.

You then differentiate between intelligence and consciousness in the sense that AI is primarily concerned with human-analogous information processing, which, however, should not be confused with consciousness (I agree: no one will deny that "machine computers" are far superior to the human brain in the execution of various "operations"). But are "the machines themselves" really intelligent if the algorithms they use are consistently implanted by humans? Wouldn't a machine be called intelligent "itself" only if it, for example, "decided" of its own accord to use a multi-valued fuzzy logic instead of a two-valued one?

Your subsequent "negative attempts at defining" consciousness seem, in my opinion, to refer less to consciousness "as such" than to various of its "constituent elements", in particular to memory, which is in no way "consciousness" itself. All in all, I would like to refer to my own ideas on the "stream of consciousness" that I have already given you (see graphic):

"Discontinuity" of consciousness: the "continuum" does not consist in the contents of memory itself, but in the negative and positive review of the same; the storage of perceptions is not an end in itself, but serves the anticipatory (hence all prejudices!) estimation of subsequent courses of events. The "narrative" is the interpretation and classification of fluctuating events in an individual "context", which in this respect can always only be done a posteriori - and which, from the nature of the thing, presents itself differently for the interpreter than for every other observer. To call this a "mistake" is wrong, because the evolutionary success of the individual does not depend on the objective correctness of his ego- and reality-interpretation, but on its subjective carrying capacity.

The fact that an intersubjective communication and logic develops from this is due to the abstraction of reason beyond the understanding, in that the "essential" is worked out and separated from the accidental of the sensual moment. We can only "roughly understand" the feelings of others, but their abstract thinking can be checked for objective correctness. The concepts of reason are "meta-physical", those of the understanding material - and this brings us to another problematic point: "Consciousness is not essential for concept formation," you say; this thesis stands and falls with the definition of what you want to understand by "concept". Your definition seems to be "very broad", because you ascribe a "concept of interesting states of affairs" to all living beings. This conceals several difficulties:

1. In this respect one has to agree with Peter Singer: only living beings that have sensation can have "interests" - only they base their "interests" on an individual scale (emotio).

2. The concept of a "state of affairs" already presupposes the "concept of things", otherwise the "behavior" of "things" towards one another would not be understandable. To animals (the great apes aside), things in the sense of objects are completely unknown, precisely because they cannot form the "idea" of them. Animals have "only" a "concept" of causality, each with a different "brightness": that a certain event is often "coupled" to another, and this connection is conditioned. And even this animal "conceptual understanding" that you assert presupposes "consciousness" insofar as it is only present in the "waking state" (cf. the "switched-on state") of such animals.

3. What man understands by "concept" is therefore necessarily linked to the existence of consciousness, because things appear as things only in the human mind. Concept formation as the identification of effects and their carriers is the essence of understanding - and there is no such thing without consciousness. The essence of language is not "to put a word in place of a concept", as if the word merely floated on top of it; rather, with the independent conditioning of linguistic concepts as words, the human world of things is first posited.

In all these cases, the term "concept" means something completely different - what is meant by it differs just as categorically as the various faculties with which "concepts" are formed.

We encounter the same problem with your statement that consciousness is not necessary for learning; here you are again referring to the human form of (at least) intellectual consciousness, and thereby tacitly exclude emotional consciousness - which, of course, must be present in the "forms of learning" you have chosen, and indeed in all animal conditioning, from the perception of a key stimulus to sensation-controlled learning processes in higher animals.

Your statement that consciousness is not necessary for thinking belongs to the same categorical problem - and it is probably no accident that it sounds paradoxical at first ...

Without a prior definition of the term "thinking", which in humans is present in a twofold form, its connection to consciousness will probably not be made visible. You identify "judging" and "thinking" without further ado - yet all self-perceptive animals make judgments without ever thinking for themselves.

In any case, "thinking" and "judging" are, in my opinion, precisely not identical:

1. Thinking is tied to the human being as the material (understanding) and essential (reason) interpretation and, above all, the cultural re-creation of the real world.

2. How this person then makes his judgments is a completely different question, namely the question of reflection and of the individual position of the governing level. You are certainly right that most people make their judgments on an emotional level, and thus (in the old parlance) "unconsciously" - but not without consciousness! Rather, it merely shows insufficient reflection if one leaves the decision about a situation that has been thought through by means of ratio, as understanding or even reason, to the much older center of emotion.

Of course, even a rational judgment can never be free from emotional influences (nor should it be, because the assessment of situations must always rely strongly on emotional evaluation in order not to go wrong or become impossible, as observations show on people in whom the emotional system is destroyed). But first, according to my definition, sensory consciousness is also consciousness (which is rationally conscious to different degrees among individuals, or simply "unconscious"), and secondly, it ultimately depends on which center is decisive in judging - whether emotional, intellectual or rational values stand in the foreground.

From my point of view, your statement that consciousness is not necessary for rational activity must appear even more paradoxical (whereby you neither distinguish between understanding and reason nor indicate at all what this rational activity is supposed to mean - calling it "natural thought processes" will probably not do it justice, otherwise most "natural people" of this world would not "think" so unreasonably ...). On the contrary, rational activity is a late product of cultural evolution, which presupposes the ability to abstract independently (reception) and allows abstractions to be related to one another on their own level (reflection), and thus actual thinking. All of this is supposed to be possible without consciousness?

Logic, on the other hand, in my opinion does not consist in "rules according to which we must think if we assume truth or approximation to it as our goal", but in making a judgment on the rational level with the same criteria as already apply on the sensual and intellectual levels: the determination of agreement, similarity or inequality (cf. Aristotle's famous example of Socrates in the conclusio). Logic is the working method of rational activity, which deals with the determination, comparison and compilation of "essential abstractions"; its way of working and its "rules" are "system-immanent" to it, as they are to every other faculty of living beings. Here too (as with perception itself) we can speak of a "two-valued" and a "multi-valued" logic, insofar as the former is nothing other than equality and the latter similarity. Insofar as this activity in dealing with abstractions plays out on a "meta-physical" level and is carried out on the mirror of rational consciousness (short-term memory), the assertion of a rational activity without consciousness seems to me a self-contradiction.
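The contrast drawn here between two-valued logic as "equality" and multi-valued ("fuzzy") logic as "similarity" can be made concrete in a small sketch. This is a hypothetical illustration, not part of the correspondence; the predicate "tall", the membership ramp and its bounds are freely chosen, and the connectives follow Zadeh's standard min/max operators:

```python
# Two-valued logic: a predicate either holds or it does not (strict equality
# with a category - the "hard cut-off" of the understanding).
def crisp_tall(height_cm: float) -> bool:
    return height_cm >= 180.0

# Multi-valued (fuzzy) logic: a predicate holds to a degree in [0, 1],
# i.e. similarity to an ideal case rather than strict equality.
def fuzzy_tall(height_cm: float) -> float:
    # Linear ramp: 160 cm -> 0.0, 180 cm -> 1.0 (arbitrary, illustrative bounds)
    return max(0.0, min(1.0, (height_cm - 160.0) / 20.0))

# Zadeh's fuzzy connectives generalize the two-valued ones:
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_not(a: float) -> float:
    return 1.0 - a

print(crisp_tall(175.0))   # False: the two-valued predicate fails outright
print(fuzzy_tall(175.0))   # 0.75: the fuzzy predicate holds to a degree
```

Restricted to the values 0 and 1, `fuzzy_and` and `fuzzy_not` reproduce classical conjunction and negation exactly, which is the sense in which the multi-valued logic contains the two-valued one as a limiting case.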

Your example, with which I totally agree, that most brilliant ideas are not based on a rationally conscious deduction but always have something of "inspiration" about them (and that is how many scientists have "experienced" them - this is certainly a main reason for the various forms of "belief in God" even in the natural sciences) - this example certainly does not mean that scientists have such ideas in an "unconscious state". How else could they capture these ideas? What you (and I) mean by this is only that no rationally conscious act of thinking by reason is present here, through which a certain thought constellation was generated. However, all other layers of human consciousness, especially understanding and emotion, are very much and necessarily involved in the "inspiration" of a brilliant idea - this is certainly one of the main reasons why the physicist Steven Weinberg, for example, particularly invokes an idea's "beauty" as a criterion of its truth (whatever he may mean by that ...)

In my opinion, your statement here should have been exactly the other way around: although reasonable results of thought are not possible without consciousness, they do not necessarily have to be achieved through the faculty of reason as a rational act of thinking, but are mostly owed to searching association and intuition on a rational basis.

If you then describe consciousness as an "operator of analogy", I can certainly support that to a certain extent, as far as we are talking about the human-rational form of consciousness; in my opinion, however, what is overlooked is that our rational consciousness is in any case also constituted by sensory consciousness, which cannot be fitted so smoothly into the rational analog and anticipatory operations. Our individual hard-wiring, our individual vegetative and instinctive "situation" and our pre-rational individual emotional conditioning - all of this always and inevitably delivers its parameters to consciousness, co-determines it, and is part of this consciousness as long as we are "conscious"; and not just with a view to that of which we have a consciousness, but as co-constituents of the fact that we have one.

I am also concerned that your definition of consciousness is now identified with what I would designate as one of the constituent elements of this consciousness: being able to carry out analog operations. A merely functional part is thus taken for the whole. As already said, however, according to my definition neither intelligence nor analogy operations make up what we call consciousness; indeed, for mere sensory consciousness, as we find it in animals, neither is even necessary.

Although I think that I should neither metaphorically transfigure consciousness nor mystically exaggerate it, for me it is more than, or better: other than, "an analog world on a linguistic basis". Incidentally, it is not only a linguistic problem if you refer to consciousness as the "producing operator of analogy" and then use the results of this operator, its analog world, to define "consciousness" as well. At one moment you are talking about the function, at another about its content - but both cannot be the same?

I try to avoid this difficulty of fixing consciousness functionally on the one hand, and by its inner content in all its forms on the other, by understanding consciousness as a state which results from the interlinking of certain constituents and which is just as difficult to grasp "as such" as, for example, the "will" - both are themselves "metaphorical" terms "a priori", to which a "definite and dissectable place" can no more be assigned than to the "soul".

To recreate this state in its complexity by machine appears problematic to me, while you consider machine consciousness possible - and are therefore at the same time compelled to functionalize your very definition of consciousness, in order ultimately to reduce it to the "function of a tangible part": the analogy operator, which one would then only have to recreate as a "highly organized technical abstract structure" in order to generate consciousness by means of "organizational invariance"?

I read your statements on logic in medicine, which deal in particular with the practical application of fuzzy logic, with great interest, as they correspond to my own ideas (the details I must leave to you as the specialist!). In any case, with regard to the intelligence of machines, you come to the same conclusion as I have already stated above: the machine should be able to design the rules itself.

Your deliberate presentation of the immediate goals of AI, the development of knowledge and the "emptiness" from your point of view, was very interesting for me and, as far as my understanding sufficed, I took note of it with approval for the most part. It showed me, among other things, that, especially in terms of learning ability and partial self-regulation ("assessment guideline"), one is apparently already further along than I had assumed.