I have raised this topic in several science and artificial intelligence discussion forums, and I haven't received a serious challenge or endorsement yet. It has occurred to me that the nature of the topic may suit computer-oriented minds better than the rest of the internet community. I hope you can help me develop either supporting or dismissing arguments about my question.
Here is the idea:
When we type “appl” in sophisticated software, the program underlines the word and offers alternatives such as “apple” or “apply”. We know that “appl” is not in the dictionary and that the program is written to suggest similar alternatives. We also know that when we type any word, it is not the word itself that travels through the transistors; it is converted into binary numbers by software.
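To make both facts concrete, here is a minimal sketch in Python. The tiny dictionary, the edit-distance matcher from the standard library, and the UTF-8 dump are illustrative choices, not a claim about how any real word processor is built:

```python
# Minimal sketch of the two facts above: a spell checker suggests
# near matches by similarity, and text reaches the hardware only as bits.
# The tiny dictionary here is invented for illustration.
from difflib import get_close_matches

DICTIONARY = ["apple", "apply", "appeal", "ample"]

def suggest(word: str, n: int = 3) -> list[str]:
    """Return up to n dictionary words similar to the misspelling."""
    return get_close_matches(word, DICTIONARY, n=n)

def to_binary(text: str) -> str:
    """Show the UTF-8 bit pattern that actually moves through the machine."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

print(suggest("appl"))     # ['apple', 'apply', 'appeal']
print(to_binary("appl"))   # 01100001 01110000 01110000 01101100
```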
Could a similar process be going on when we think? When we think of “apple”, how does that thought travel through the brain so that neurons can read it? There are two possible ways:
1. “We have apple neurons.” As soon as we think of an “apple”, the relevant apple neuron(s) fire. I find this possibility utterly useless and stupid, as it would require a separate neuron for every single thing, concept, word, etc. Not only that, we would need separate neurons for every single possible form of an apple (a red one, a green one, a bitten one, or one that inspires us about gravity; practically endless).
2. “We have inner translators that decode the representation of a thought.” So thinking of an apple (or anything else, for that matter) will evoke different pieces of more elemental information (roundness, edibility, fresh/old, colour, taste, etc.) as well as contextual (an apple to sell or an apple to eat?) and conceptual (Apple as a computer brand or apple as a fruit?) steps. A toy sketch contrasting the two options follows this list.
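Here is that contrast as code, purely hypothetical: a one-cell-per-concept lookup against a distributed feature code. The feature names and values are invented for illustration:

```python
# Scheme 1: one dedicated unit per concept -- one entry per thing,
# so every variant (red apple, green apple, bitten apple...) needs its own slot.
grandmother_cells = {
    "red apple": 0, "green apple": 1, "bitten apple": 2,  # ...and on forever
}

# Scheme 2: a concept is a bundle of more elemental features.
# A few shared feature dimensions cover endless combinations.
FEATURES = ["round", "edible", "fresh", "red", "green", "sweet"]

def encode(**traits: float) -> list[float]:
    """Distributed code: one vector over shared features, not one cell per thing."""
    return [traits.get(f, 0.0) for f in FEATURES]

red_apple   = encode(round=1.0, edible=1.0, fresh=1.0, red=1.0, sweet=0.8)
green_apple = encode(round=1.0, edible=1.0, fresh=1.0, green=1.0, sweet=0.4)
# Six dimensions already distinguish variants that scheme 1
# would need separate dedicated cells for.
print(red_apple)    # [1.0, 1.0, 1.0, 1.0, 0.0, 0.8]
print(green_apple)  # [1.0, 1.0, 1.0, 0.0, 1.0, 0.4]
```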
If we favour the second way, we must also ask this question: would it be possible that thoughts are also translated into some other type of codification (analogous to the binary codes of a computer system) before they are processed by neural activity? In other words, does “apple” reach the neurons in a totally unrecognizable form of representation? Transistor gates wouldn't understand anything about “apple”; they require binary codes. And maybe neurons (which work with chemistry) wouldn't work with concepts either, so they would require a different type of symbolic processing language, binary or not.
And maybe this middle language is more than just a translator between neurons and thoughts: as we know from computers, all transistor architecture is designed around logic gates that interpret 1s and 0s, not around what we type on the screen. Maybe neural architecture is likewise built to make sense of its inner language rather than of the concepts of the mind.
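A small illustration of the point about gates: at the hardware layer only bitwise rules apply, and the “meaning” of the bits lives entirely in the layers above. This is a generic textbook construction, not a claim about any particular chip:

```python
# Logic gates see only bits; "apple" exists nowhere at this level.
def AND(a: int, b: int) -> int: return a & b
def XOR(a: int, b: int) -> int: return a ^ b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two bits: the gate has no idea what the bits stand for."""
    return XOR(a, b), AND(a, b)  # (sum, carry)

# The letter "a" of "apple" arrives here only as the bit pattern 01100001;
# interpreting it as a letter happens in software layers above the gates.
bits = [int(x) for x in f"{ord('a'):08b}"]
print(bits)              # [0, 1, 1, 0, 0, 0, 0, 1]
print(half_adder(1, 1))  # (0, 1)
```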
What I am asking is this: using the computer analogy, and given the difference between neural activity and thoughts, can we suspect that some software-like system operates between thoughts and neurons? We don't have to start with the human brain; we can take the example of a rat. When a rat sees an apple, we can guess that no word “apple” passes through its perceptual machinery. Yet some mechanism translates this outside object (the apple) into the rat's neural system, and the rat approaches the fruit. A rat does not think as we do, but we can still suspect that a simpler version of a similar mechanism is at work inside its brain. It is possible to generate more examples.
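The rat example claims only that perception → internal code → action can work without any word-like token in the middle. A deliberately crude pipeline sketch, with all features and thresholds invented:

```python
# Hypothetical perception-to-action pipeline with no word "apple" anywhere.
# Feature extraction, the internal code, and the policy are all invented
# stand-ins for whatever the rat's nervous system actually does.
def sense(stimulus: dict) -> tuple[float, float, float]:
    """Reduce the outside object to a crude internal code (not a word)."""
    return (stimulus["size"], stimulus["smell_sweet"], stimulus["movement"])

def act(code: tuple[float, float, float]) -> str:
    """Map the internal code to behaviour by simple thresholds."""
    size, sweet, movement = code
    if movement > 0.5:               # moving things are treated as threats
        return "flee"
    if sweet > 0.5 and size < 0.5:   # small and sweet-smelling -> probably food
        return "approach"
    return "ignore"

apple_in_the_world = {"size": 0.3, "smell_sweet": 0.9, "movement": 0.0}
print(act(sense(apple_in_the_world)))   # approach
```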
One step further: let's imagine that there is a command software which reflects the functions of the brain. This might not be “a” software; it could be several different software regimes. The coded language between neurons might be as simple as the 1s and 0s of transistors, and it could be so basic and robust that it is shared by all the other brainy creatures of nature. We know that cells communicate with each other, we know the map of proteins used for this communication, we know DNA (compact microprocessor units, if you like), and we will soon replicate the entire map of brain cells with all their specialized compartments; yet we don't know how they communicate.
BCI (Brain-Computer Interface) devices work on a simple principle: a human-made computer reads neural activity, and the signals are then translated into hearing, sight, or the movement of a robot arm; mostly “motor functions”, given the current level of the technology.
We know that there is special software behind these computers that can read the neural signals and translate them into commands. We also know that what the computer's sensors read is not the concept itself (not what we consciously think) but the electromagnetic signature of the brain activity. Here is the possibly confusing bit: when I think of moving my arm, I am aware of moving my arm, and this command/request comes to my consciousness as a concept (“I want to move my arm right/up/left/down”); the computer, however, does not understand that. It reads the mirroring neural activity and directs the robot arm accordingly, thanks to human-made software. If I didn't know about this process, I might have thought, “Oh, I thought of moving my arm and the computer understood it, magic!” No, the computer didn't understand me at all; it's not magic. Its software translated a signal into an action, that's it.
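A caricature of that translation step, with made-up signal features and a made-up mapping; real BCIs use decoders trained on recorded neural data, but the shape of the process is the same: signal in, command out, no concepts anywhere:

```python
# Caricature of a BCI decoder: it maps a signal pattern to a command
# without ever touching the concept "I want to move my arm".
# The weight vectors below are invented for illustration; real systems
# learn such a mapping from recorded neural data.
COMMANDS = ["left", "right", "up", "down"]

# One invented weight vector per command (a stand-in for a trained decoder).
WEIGHTS = {
    "left":  [0.9, 0.1, 0.0],
    "right": [0.1, 0.9, 0.0],
    "up":    [0.0, 0.2, 0.9],
    "down":  [0.3, 0.3, 0.1],
}

def decode(signal: list[float]) -> str:
    """Pick the command whose weight vector best matches the signal."""
    score = lambda cmd: sum(w * s for w, s in zip(WEIGHTS[cmd], signal))
    return max(COMMANDS, key=score)

# An electromagnetic signature, not a thought:
measured = [0.2, 0.8, 0.1]
print(decode(measured))   # right
```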
Any ideas?