A Natural software between neurons and thoughts

I think this thread is above my level of understanding anyhow...

You could say our software is our DNA. We are all running on a slightly different homegrown OS, made by our parents. They are essentially mummy and daddy Gates; and when they have twins, they are essentially mummy and daddy Jobs, giving us almost exact duplicates of what we had before.

We can only do what is physically possible anyhow. But I believe there is more to the brain than that. In some way, we can communicate through other means. At least, that is my theory.
 
You could say our software is our DNA.

I cannot say that, because I don't know. And not DNA, but maybe "junk DNA" (the DNA code that doesn't make any protein) could be software code to be executed when it is required. But this is supposition. And still, this is not the execution of the software: execution happens while we are thinking (computation). Software, if there is any, makes sense during the process, just as in computers...
 
I'm going with the first explanation...

for the following reasons...

When you say apple, I think apple: I think of a pale green fruit with a sweet flavour, I think of a fruity smell, a crunch whilst I'm eating it.
(a French Golden Delicious)

If you asked me to think of an apple, that's the first thing that I'd think of...
I have other entries in my head for apple: I realise there are red and pink ones, and I realise it's a software brand. But those other varieties of apple come up as a kind of "see also" entry. I guess you could think of it like a Wikipedia page: the first entry in my "what I know" is my first experience with apples, the apples I ate during childhood.
It appears first on my mental "listing for apple", followed by other apples.


The second thing that I'd use to back up my support for the first theory is amnesia.

It is possible to entirely forget what an apple is, so we must store things in some kind of list-based arrangement in our neurons.

We couldn't forget, due to alcohol or injury, how to interpret a thought or how to interpret a word, because if that were the case surely we'd forget all kinds of words.

But we could wipe out a page that contained mental data for an object, or a series of objects, and just forget about those objects.
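A toy way to picture that argument (just my own illustration in Python; the memory dictionary and the recall function are made-up names, not a model of real memory): each object's data sits on its own "page", while the routine that reads pages is shared, so wiping one page forgets one object without breaking interpretation in general.

```python
# Each concept gets a "page" of mental data; the interpreter is separate.
memory = {
    "apple": {"colour": "pale green", "taste": "sweet", "texture": "crunchy"},
    "pear":  {"colour": "green", "taste": "sweet", "texture": "grainy"},
}

def recall(concept):
    """Generic 'interpreter': works for any concept that still has a page."""
    page = memory.get(concept)
    if page is None:
        return f"I have no idea what a {concept} is."   # amnesia for one object only
    return f"A {concept} is {page['colour']}, {page['taste']} and {page['texture']}."

print(recall("apple"))    # full description
del memory["apple"]       # 'amnesia' wipes just this page
print(recall("apple"))    # that object is gone...
print(recall("pear"))     # ...but the interpreter still works for everything else
```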
 
This topic is discussed in several science and artificial intelligence discussion forums, and I haven't received a challenging opposition or support yet. It has occurred to me that maybe the nature of the topic suits computer-oriented minds more than the rest of the internet community. I hope you will help me develop either supportive or dismissive ideas about my quest.

Here is the idea:

When we type “appl” using sophisticated software, the programme underlines the word and gives us alternatives such as “apple” or “apply”. We know that “appl” is not in the dictionary, and the programme is designed to provide similar alternatives. We also know that when we type any word, it is not the word itself that travels through the transistors; it is translated into binary numbers by software.
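For what it's worth, that suggestion step is easy to sketch (my own toy example in Python, using the standard-library difflib module; the tiny word list is made up): the unknown string is compared against dictionary words and the closest matches are offered back.

```python
import difflib

# Tiny made-up word list; a real spell checker would use a far larger dictionary.
dictionary = ["apple", "apply", "appal", "ample", "angle", "maple"]

def suggest(word, n=3):
    """Return up to n dictionary words closest to the typed string."""
    if word in dictionary:
        return [word]  # already a known word, nothing to correct
    return difflib.get_close_matches(word, dictionary, n=n, cutoff=0.6)

print(suggest("appl"))   # e.g. ['apple', 'apply', 'appal']
```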

Could a similar process be going on when we think? When we think of “apple”, how does it travel through the brain so that neurons can read it? There are two possible ways:

1. “We have apple neurons”. As soon as we think of an “apple”, the relevant apple neuron(s) are alerted. I find this possibility utterly useless and stupid, as we would need a separate neuron for every single thing, concept, word, etc. Not only that, we would need separate neurons for every single possible form of an apple (a red one, a green one, a bitten one, or the one that inspired the idea of gravity; practically endless).

2. “We have inner translators that decode the representation of a thought.” So thinking of an apple (or anything else, for that matter) will evoke different pieces of more elemental information (roundness, being edible, fresh/old, colour, taste, etc.) as well as contextual (an apple to sell or an apple to eat?) and conceptual (Apple the computer brand or apple the fruit?) steps. (A toy sketch contrasting these two schemes follows below.)
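To make the contrast concrete, here is a toy sketch in Python (purely my own illustration; the feature names and vectors are invented, not a claim about how real neurons code anything). Scheme 1 dedicates one unit per concept, so every variant needs a brand-new unit; scheme 2 re-uses a small set of elemental feature units.

```python
concepts = ["apple", "red apple", "bitten apple", "pear"]

# Scheme 1: "apple neurons" -- one dedicated unit per concept (one-hot coding).
# Every new variant (red, green, bitten, Newton's...) needs a brand-new unit.
def one_hot(concept):
    return [1 if c == concept else 0 for c in concepts]

# Scheme 2: an inner translation into more elemental features shared across
# concepts; new variants are just new combinations of the same few units.
features = ["round", "edible", "red", "green", "bitten"]
feature_code = {
    "apple":        [1, 1, 0, 1, 0],
    "red apple":    [1, 1, 1, 0, 0],
    "bitten apple": [1, 1, 0, 1, 1],
    "pear":         [0, 1, 0, 1, 0],
}

print(one_hot("bitten apple"))        # [0, 0, 1, 0] -- its very own unit
print(feature_code["bitten apple"])   # [1, 1, 0, 1, 1] -- shared feature units
```

Even at this size the combinatorial point shows: four concepts already need four dedicated units under scheme 1, while scheme 2 covers them (and many more variants) with five shared feature units.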

If we suspect the second way, we must also ask this question: would it be possible that thoughts are also translated into some other type of codification (such as the binary codes of a computer system) before they are processed by neural activity? In other words, “apple” reaches the neurons in a totally unrecognizable representation. Transistor gates wouldn't understand anything from “apple”, but they do work with binary codes; and maybe neurons (which work with chemistry) wouldn't work with concepts either, so they require a different type of symbolic process language, binary or not.

And maybe this middle language makes more sense than being just a translator between neurons and thoughts: as we know from computers, all transistor architecture is designed around logic gates that interpret 1s and 0s, not around what we type on the screen. Maybe neural architecture is also built up to make sense of its own inner language rather than of the concepts of the mind.
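To show the computer half of that analogy concretely, here is a minimal sketch in Python (my own illustration): the word a user types is turned into the bit patterns that the logic gates actually handle, and nothing recognisably "apple-like" survives at that level.

```python
# The word we type is what *we* see; the hardware only ever sees bit patterns.
word = "apple"

# Encode each character as the 8-bit pattern that actually reaches the transistors.
bits = " ".join(format(byte, "08b") for byte in word.encode("ascii"))
print(bits)
# 01100001 01110000 01110000 01101100 01100101
# Nothing "apple-like" is visible here, yet the whole machine is built around
# manipulating exactly these patterns.
```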

What I am asking is this: using the computer analogy, and given the difference between neural activity and thoughts, can we suspect that some software-like system operates between thoughts and neurons? We don't have to start with the human brain; we can take the example of a rat. When a rat sees an apple, we can guess that there is no word “apple” going through its perception mechanism. But some mechanism translates this outside object (the apple) for the rat's neural system, and the rat approaches the fruit. A rat does not think as we do, yet we can still suspect that a simpler version of a similar mechanism is going on inside its brain. It is possible to generate more examples.

One step further: let's imagine that there is a command software which reflects the functions of the brain. This might not be “a” single software; it could be several different software regimes. The coded language between neurons might be as simple as the 1s and 0s of transistors, and it could be so basic and robust that it is shared by all the other brainy creatures of nature. We know that cells communicate with each other, we know the map of the proteins used for this communication, we know DNA (compact microprocessor units, if you like), and we will soon replicate the entire map of brain cells with all their specialized compartments; yet we don't know how they communicate.

BCI (Brain-Computer Interface) devices work on a simple principle: a human-made computer reads neural activity, and the signals are then translated into hearing, sight, or the movement of a robot arm; mostly “motor functions”, given the current level of technology.

We know that there is some special software behind these computers which is able to read the neural signals and translate them into commands. We also know that what the computer's sensors read is not the concept itself (not what we consciously think), but the electromagnetic signature of the brain activity. Here is the might-be-confusing bit: when I think of moving my arm, I am aware of moving my arm, and this command/request comes to my consciousness as a concept (“I want to move my arm towards the right/up/left/down”); however, the computer does not understand that. It reads the mirroring neural activity and directs the robot arm accordingly, because of the human-made software. If I didn't know this process, I might have thought, “Oh, I thought of moving my arm and the computer understood that, magic!” No, the computer didn't understand me at all, it's not magic: its software translated a signal into an action, that's it.
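A stripped-down sketch of that last point (my own, in Python; the channel names, numbers and thresholds are entirely invented, not a real BCI pipeline): the decoder never sees the concept "I want to move my arm", only numbers, and it maps those numbers to a command by plain rules.

```python
def decode(sample):
    """Map channel amplitudes to a robot-arm command with simple threshold rules."""
    if sample["motor_left"] > 0.8:
        return "MOVE_ARM_LEFT"
    if sample["motor_right"] > 0.8:
        return "MOVE_ARM_RIGHT"
    return "HOLD"

# Pretend recording: to the software these are just numbers, no meaning attached.
recorded = [
    {"motor_left": 0.2, "motor_right": 0.9},   # the user intended "right"
    {"motor_left": 0.9, "motor_right": 0.1},   # the user intended "left"
    {"motor_left": 0.1, "motor_right": 0.2},   # rest
]

for sample in recorded:
    print(decode(sample))   # MOVE_ARM_RIGHT, MOVE_ARM_LEFT, HOLD
```

The "understanding" lives entirely in the human-made mapping; swap the thresholds and the very same thought would move the arm the other way.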

Any ideas?
I have been attached to a BCI/BMI for a few years now, and I'm so in tune with it that I can tell when new handlers take over or when it is just passed on to an all-out AI program that feels like it's running on a sort of autopilot system.
I am a survivor of a series of experiments that initially started in 2021. I was one of many others who fell victim to becoming test dummies for all sorts of scenarios and programs, most of which, still to this day, stem around social and mental data. I myself am attached to a psyops agenda with BCI and BMI technology coupled with private AI programs that can pose an immediate threat not only to targeted individuals but to governments as a whole. This tech has the ability to read one's thoughts; mind interpretation is what I call it. It also has the ability to interpret sight, sound and touch, combined with a surveillance system that will make anyone sick to their stomach if they ever found out that this was actually being used on selected individuals. Believe me, this experience is frightening when you don't know what is happening. I will not discuss further detail until I know I'm in contact with a legitimate source.
On a different note, I am looking for someone who would be interested in hearing the details from a test subject's point of view. I have documented and can recount the entire process. I am highly sensitive to the slightest changes that have been made throughout the three, almost four, year experience. As you probably already know, I am connected to a non-invasive BCI/BMI that most certainly has the ability to interpret sight, sound, touch, emotion, etc.; all of this happens even before my thought is finished. I understand the open-source process but would love to converse in detail about the whole experience. Please note that I was 100% unknowingly selected to have this technology applied. I have always been a hyper-aware individual, which made my experience that much more effective.
 