Translating brain experience to film: One step closer.
Scientists at the University of California, Berkeley, have managed to decode and reconstruct dynamic visual experiences processed by the human brain. Currently, researchers are only able to reconstruct movie clips people have already viewed. However, the breakthrough is expected to pave the way for reproducing the movies inside our heads that no one else sees - such as dreams and memories.
http://www.tgdaily.com/general-sciences-features/58630-brain-imaging-reveals-the-movies-in-our-minds
Holy shit. Not sure how much I like this. Pretty cool, but high potential for abuse at the same time.
Enlightened Atheist, Gaming God.
I'm going to love it. Finally, a way to make visual art without learning Photoshop! Put some holographic projection on top of it and there will be guys walking around like Riviera from the novel Neuromancer.
Furthermore, sometimes before I go to sleep, I hear music. No shit, real music, quite original, that I've never heard before. Stuff like rock music or even a baroque symphony. I'd love to record that some time.
Beings who deserve worship don't demand it. Beings who demand worship don't deserve it.
We know what is coming: go onto Facebook and 'share your thoughts'.
Wow. It's one thing to imagine it, or even to read about it, but to actually see it working. Even if it's blurry like that. That's fucking awesome.
I have a sneaking feeling that they will never get sharp reconstructions. At least, not until the reconstruction software itself becomes almost brain-like, making its own associations.
This reminds me of two very cool things.
First, the images remind me of the movie Waking Life, which was quite cool.
Second, when they show the clip with the elephant, there are words/letters overlaid on top of the elephant. This reminded me of the extremely mind-blowing book Reading in the Brain, by Stanislas Dehaene. From reading that book, I think I understand why the words appear over the elephant but not over more familiar things, like a human face. Because elephants are fairly unfamiliar, a lot of our understanding of them comes from what we've read about them; actual visual experiences of elephants are relatively rare. I suspect a zookeeper who deals with elephants regularly would produce a more accurate, less blurry image of the elephant, without any words floating there, because their conceptual/perceptual understanding of elephants would be intuitive enough not to need supplementing with words/letters. If this sounds crazy, read the book; it's amazing.
Wonderist on Facebook — Support the idea of wonderism by 'liking' the Wonderism page — or join the open Wonderism group to take part in the discussion!
Gnu Atheism Facebook group — All gnu-friendly RRS members welcome (including Luminon!) — Try something gnu!
LOL. I can see it now. On the latest newsfeed, there will be a link that says: "See who is thinking about you."
In addition to statuses, it will have "John Doe thought: ....."
On a serious note, that was a pretty far-out article to read. Slightly scary and slightly fascinating at the same time.
“It is proof of a base and low mind for one to wish to think with the masses or majority, merely because the majority is the majority. Truth does not change because it is, or is not, believed by a majority of the people.”
― Giordano Bruno
Lol.
Seriously, though. I've thought for a long time (several years, anyway) that once we get the ability to control technology with our minds (it already exists, in weak but functional form, and there's even a company marketing a commercially available video-game controller using this tech), it will be a short skip and a jump to communicating by technologically assisted telepathy. Imagine if, instead of typing this message, I just thought the words and the interface recorded them and posted them for me, all at a thought.
Forget keyboards. Forget those crappy, tiny little buttons on your phone or whatever. The ultimate human-machine interface is just around the corner. Brain to computer. Computer to brain (or at least visual/audio feedback). Everything you do now on computers. Imagine doing the same thing at a thought. Way faster, way more convenient. No intrusive technology, just very sensitive brain scanning.
Which makes me wonder. I'm betting that they had to insert neural probes to get this kind of resolution. If it was done with something like MRI, that would be a ginormous leap.
Edit: Holy shit, it was fMRI!
When I first heard about this it totally blew my mind. But I had jumped to a conclusion that I think you may have too: that they were reconstructing the images basically from scratch. That's not what's happening. My understanding is that they first built a library by mapping the brain patterns associated with a large database of YouTube clips, using fMRI. Then, in the experiment, they scanned the brains of subjects while they watched the video clips shown in the example, and the software compared each subject's brain patterns against the exemplars to construct a composite of the YouTube clips whose patterns matched most closely. That composite is what we see on the right side of the screen: a blend of all the library videos that evoke brain patterns most similar to those of the subject being tested. So the images and letters and words are not being generated by the minds of the people; they just existed in the videos used to create the composite.
I'm not trying to downplay the mind-blowing nature of this breakthrough; it's still incredibly amazing. But those letters on the elephant aren't generated by the minds of the subjects; they come from YouTube. We're still a few steps away from mind-reading.
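If I'm reading that right, the matching step is basically nearest-neighbour search plus averaging. Here's a toy sketch of that idea in Python; the function name, the data shapes, and the cosine-similarity scoring are all my assumptions for illustration, not anything from the actual paper:

```python
# Toy sketch of composite reconstruction: average the library clips whose
# (predicted) brain patterns best match the subject's measured pattern.
# Everything here is an illustrative assumption, not the Gallant lab's code.
import numpy as np

def reconstruct_composite(subject_pattern, library_patterns, library_clips, k=100):
    """Blend the k library clips whose patterns are most similar
    (by cosine similarity) to the subject's measured pattern."""
    s = subject_pattern / np.linalg.norm(subject_pattern)
    L = library_patterns / np.linalg.norm(library_patterns, axis=1, keepdims=True)
    scores = L @ s                        # cosine similarity per library clip
    top = np.argsort(scores)[-k:]         # indices of the k best matches
    weights = np.maximum(scores[top], 0)  # ignore anti-correlated matches
    weights /= weights.sum()
    # Similarity-weighted average of the matching clips -> a blurry composite.
    return np.tensordot(weights, library_clips[top], axes=1)

# Toy usage: 500 library clips, 64-voxel patterns, 16x16 grayscale frames.
rng = np.random.default_rng(0)
patterns = rng.standard_normal((500, 64))
clips = rng.random((500, 16, 16))
measured = patterns[42] + 0.1 * rng.standard_normal(64)  # noisy view of clip 42
print(reconstruct_composite(measured, patterns, clips, k=10).shape)  # (16, 16)
```

The blur in the real reconstructions makes sense under this picture: averaging a hundred loosely matching clips smears out everything they don't share.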
Nik K
Yes, I realized that later too, when I was reconsidering the details mentioned in the article (the same ones you're talking about), but I hadn't fully thought it through. It is a very clever technique they used to improve the ability to visualize what a person is seeing in their mind's eye.
I suppose it's a little bit reminiscent of those composite images where they have all these tiny images on a poster, forming a mosaic that, viewed from a moderate distance, looks like a completely different picture. The first one of those I saw was made up of tiny shots from the Star Wars movies, and the overall mosaic was of Darth Vader. Not exactly the same thing going on in the brain scan, but somewhat similar.
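For anyone who hasn't seen one of those mosaics, the trick is simple: for each patch of the target picture, pick the library thumbnail that matches it best and drop it in. A minimal Python sketch, matching on mean brightness only (the function name and shapes are made up for illustration):

```python
# Toy photomosaic: rebuild a target image from small thumbnails,
# matching each tile-sized patch by mean brightness. Illustrative only.
import numpy as np

def photomosaic(target, thumbs, tile=8):
    """Replace each tile x tile patch of `target` with the thumbnail
    whose mean intensity is closest to the patch's mean intensity."""
    thumb_means = thumbs.mean(axis=(1, 2))            # one scalar per thumbnail
    h = (target.shape[0] // tile) * tile
    w = (target.shape[1] // tile) * tile
    out = np.empty((h, w))
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            patch_mean = target[i:i+tile, j:j+tile].mean()
            best = np.argmin(np.abs(thumb_means - patch_mean))
            out[i:i+tile, j:j+tile] = thumbs[best]    # drop in the closest tile
    return out

# Toy usage: a 64x64 gradient rebuilt from 100 random 8x8 thumbnails.
rng = np.random.default_rng(1)
target = np.linspace(0, 1, 64 * 64).reshape(64, 64)
thumbs = rng.random((100, 8, 8))
print(photomosaic(target, thumbs).shape)  # (64, 64)
```

The brain-scan version is the same move in a much higher-dimensional space: match patterns instead of brightness, and blend the winners instead of tiling them.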
I wonder if this technique can be improved, and what it would take to do so.