Video: Oliva’s eye-opening brain visualizations show us what’s happening in our own heads, through the power of average 3D models.
Every once in a while, we humans can use a reminder that while our mental supremacy seemed firmly enshrined in the 20th century, things in the 21st century are looking a little different…
“We are not anymore at the top of the thought chain,” Aude Oliva, a Senior Research Scientist at CSAIL, reminds us, pointing to the trillions of parameters in modern neural networks and suggesting that we will be rivaled by artificial peers in the years to come.
Noting that humans and robots will have to collaborate, she talks about solving the black box problems that arise when we can’t figure out how algorithms work and how neural networks arrive at their decisions:
“As a neuroscientist and a computer scientist, I see that both of these fields are facing the same questions, and the same challenges: What’s going on in the black box? The black box of the natural human brain, a deep neural network with hundreds of trillions of connections, and the black box of those GPT and generative models with trillions of connections.”
Now, part of what’s interesting in this talk is the visual models Oliva provides to us – first, of the human brain responding to images shown for only half a second, and then of a human brain responding to a sound of similar duration. She also shows the aftermath, when the sound has gone silent or the image has gone dark, and the brain is still working away. You see the activity ‘fade,’ and you think about what’s going on in these heads of ours, every waking second of every day. (Actually, during the night, too!)
Referring to these models as ‘spatio-temporal maps,’ Oliva suggests you can use all kinds of stimuli – a cell phone, a bird call, etc. – and come up with an analysis of average responses that helps move us toward progress in what used to be “science fiction” applications. Imagine a cap or hat, she says, that could calculate and read out these signals; together with labeled training data that can generate these average 3D maps, it could be enormously valuable, but this is still out of reach.
“All the magic is at the level of the data analysis, where we have to relate some signals that are very different in space and time,” she says. “And then, we can come up with those three-dimensional brain response maps of, on average, what’s going on for different processes.”
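To get a feel for the “on average” part of that quote, here is a minimal sketch of trial averaging – the basic idea behind combining many noisy single-trial recordings into one spatio-temporal map. The array shapes, names, and synthetic data are illustrative assumptions, not Oliva’s actual pipeline.

```python
import numpy as np

# Hypothetical example: n_trials single-trial recordings, each covering
# n_sources brain locations over n_timepoints samples.
rng = np.random.default_rng(0)
n_trials, n_sources, n_timepoints = 50, 8, 100

# Stand-in for recorded signals: a shared response buried in noise.
shared_response = np.sin(np.linspace(0, 2 * np.pi, n_timepoints))
trials = shared_response + rng.normal(size=(n_trials, n_sources, n_timepoints))

# Averaging across trials suppresses trial-to-trial noise and leaves
# the shared response: a (sources x time) spatio-temporal map.
avg_map = trials.mean(axis=0)

print(avg_map.shape)  # one map: sources x timepoints
```

The averaged map is far less noisy than any single trial, which is why group- or condition-level maps like these can reveal structure that individual recordings hide.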
As for what we can do with the technology, she mentions diagnosing brain disorders, mental health issues, and motor control issues.
“A lot of the diagnoses, based on if we are losing memory, or if we have motor control issues, involve the entire brain dynamics, and the method that we have … here in the lab is looking at this whole brain dynamic – so there’s (a) huge opportunity to see to what extent we could compare a control group to a patient group with some particular brain disorder, and come up with diagnoses much earlier on – that’s what’s possible.”
All of this group-level diagnostic work meshes with the earlier approach, where Oliva showed us that the human brain works, well, in the context of time. The sky’s the limit for what some of these new attention tools are going to reveal to us, about our lives, and about ourselves.
She leaves us with this:
“(With those) generative models, there is now a lot of opportunity for human/AI communication, with an AI assistant that may one day be able to ‘see in(to) your mind’ … But not yet.”
Read the full article here