It was long thought that the human brain puts a sort of spotlight on important information and experiences for us to keep track of and learn from.
But researchers now think it's more likely a filter – omitting everything that isn't useful.
That seemingly minor difference could determine the fate of human civilisation, according to science journalist and US Navy veteran Tristan Greene.
This is because artificial intelligence (AI) is designed with the brains of children and young people in mind.
Able to learn quickly yet already equipped with the key formula for independent development, AI should start off on the right foot.
But scientists' more recent understanding that the brain writes its own 'code' means AI has nothing to copy.
The brain's much-misunderstood basal ganglia actually teach themselves over time, scientists at MIT found a few years ago.
But what MIT's landmark paper showed was that the code is written while the brain develops.
Greene wrote: "AI has no way to experience anything.
"No matter how advanced an AI is, or how good at making decisions it can learn to become, it still can’t taste, see, smell, feel, or hear."
Even so, if humans are able to equip AI with the functions needed to develop its own consciousness – as our brains do by themselves – artificial intelligence could rapidly simulate human life.
The writer added: "If we were to develop quantum AI systems capable of more accurately mimicking the human brain's machinations, it might go a long way towards creating the science necessary to grow sentience from organic matter.
"A sufficiently robust brain-computer-interface (like the ones Facebook and Neuralink are working on) should, one day, be capable of speaking the brain’s language.
"Under a paradigm where computers can skip our senses and feed direct input into our brains, it should be trivial for a human-level AI to rewrite the stuff inside our grey matter that makes us human and replace it with a more logical model based on itself."
In other words, we'd be screwed.
Thankfully, that's still a long way off.
Or is it?