VR Prototypes Reveal Facebook's Surprisingly Critical Research Directions – Hackaday
A short while ago, Tested posted a video about hands-on time with virtual reality (VR) headset prototypes from Meta (which is to say, Facebook), and there are some genuinely interesting bits in there. The video itself is over an hour long, but if you’re primarily interested in the technical angles and why they matter for VR, read on, because we’ll highlight each of the main points of research.
As absurd as it may seem to many of us to have a social network spearheading meaningful VR development, one can’t say they aren’t taking it seriously. It’s also refreshing to see each of the prototypes get showcased by a researcher who is clearly thrilled to talk about their work. The big dream is to figure out what it takes to pass the “visual Turing test”, which means delivering visuals that are on par with physical reality. Some of these critical elements may come as a bit of a surprise, because they go in directions beyond resolution and field-of-view.
At 9:35 in the video, [Douglas Lanman] shows [Norman Chan] how important variable focus is to delivering a good visual experience, followed by a walk-through of all the different prototypes they have used to get that done. Currently, VR headsets display visuals at only one focal plane, but that means that — among other things — bringing a virtual object close to one’s eyes gets blurry. (Incidentally, older people don’t find that part very strange because it is a common side effect of aging.)
The solution is to change focus based on where the user is looking, and [Douglas] shows off all the different ways this has been explored: from motors and actuators that mechanically change the focal length of the display, to a solid-state solution composed of stacked elements that can selectively converge or diverge light based on its polarization. [Doug]’s pride and excitement is palpable, and he really goes into detail on everything.
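To make that stacked solid-state approach concrete, here is a minimal sketch — entirely hypothetical, not Meta’s implementation, with made-up lens powers — of how a stack of polarization-switched elements yields a discrete set of focal states, and how a controller might pick the state closest to where the eye is focusing:

```python
from itertools import product

# Hypothetical diopter contributions of each switchable element in the stack.
# N binary (on/off) elements give 2^N combined focal states.
LENS_POWERS = [0.25, 0.5, 1.0, 2.0]

def nearest_focal_state(target_distance_m):
    """Return the (switch pattern, achieved power) closest to 1/distance."""
    target_power = 1.0 / target_distance_m  # desired optical power, diopters
    best = min(
        product([0, 1], repeat=len(LENS_POWERS)),
        key=lambda state: abs(
            sum(s * p for s, p in zip(state, LENS_POWERS)) - target_power
        ),
    )
    achieved = sum(s * p for s, p in zip(best, LENS_POWERS))
    return best, achieved
```

With four elements the sketch can hit any quarter-diopter step from 0 to 3.75, which is why adding elements beats adding motors: focal states multiply rather than add.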
At the 30:21 mark, [Yang Zhao] explains the importance of higher resolution displays, and talks about lenses and optics as well. Interestingly, the ultra-clear text rendering made possible by a high-resolution display isn’t what ended up capturing [Norman]’s attention the most. When high resolution was combined with variable focus, it was the textures on cushions, the vividness of wall art, and the patterns on walls that [Norman] found he just couldn’t stop exploring.
Next up at 39:40 is something really interesting, shown off by [Phillip Guan]. A VR headset must apply software corrections for distortions, and it turns out that these corrections can be complex. Not only does an image get some amount of distortion when passing through a lens, but that distortion changes in nature depending on where one’s eye is looking. All of this must be corrected for in software for a high-fidelity experience, but a real bottleneck is having to wait for a physical prototype to be constructed, and complicating this is that different people will have slightly different subjective experiences of distortion. To address this, [Phillip] shows off a device whose purpose is to accurately simulate different physical headset designs (including different lenses and users) in software, allowing exploration of different designs without having to actually build anything.
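As a toy illustration of the software side — our own assumption about how such a correction could be structured, not the pipeline shown in the video, and with made-up coefficient values — gaze-dependent correction can be thought of as interpolating distortion coefficients measured at a few calibrated pupil positions, then pre-warping the rendered image with the interpolated model:

```python
# Hypothetical k1 radial-distortion coefficients calibrated at pupil
# offsets of -5 mm, 0 mm, and +5 mm from the lens axis (values invented).
CALIBRATED = {-5.0: 0.30, 0.0: 0.22, 5.0: 0.30}

def k1_for_pupil(offset_mm):
    """Linearly interpolate k1 between the nearest calibrated offsets."""
    xs = sorted(CALIBRATED)
    offset_mm = max(xs[0], min(xs[-1], offset_mm))  # clamp to the range
    for lo, hi in zip(xs, xs[1:]):
        if lo <= offset_mm <= hi:
            t = (offset_mm - lo) / (hi - lo)
            return CALIBRATED[lo] * (1 - t) + CALIBRATED[hi] * t

def predistort(x, y, k1):
    """Pre-warp a normalized image point so the lens distortion cancels."""
    r2 = x * x + y * y
    scale = 1.0 / (1.0 + k1 * r2)
    return x * scale, y * scale
```

The point of a simulator like [Phillip]’s is precisely that tables like `CALIBRATED` can be generated for a lens design that only exists in software, before anyone builds it.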
The final prototype — named Starburst for reasons that will soon become clear — is showcased at 44:30 and demonstrates the power of true high dynamic range. It’s the most unwieldy-looking, but that’s mainly due to essentially having car headlamps as backlights. The purpose isn’t to blind users, but to deliver something important that current headsets lack. Why is high brightness so important? The answer is simple: light levels in the real world are far beyond anything a modern monitor (or VR headset) can deliver. This means that, in VR, a spotlight only ever really looks like a picture of a spotlight. It will never truly look bright, not in the way that your eyes and brain actually experience the world. When headsets can deliver a true HDR experience, that will change, and that’s what this prototype delivers.
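To put rough numbers on that gap, here’s a quick back-of-the-envelope comparison using commonly quoted ballpark luminance figures — orders of magnitude only, and the Starburst figure is as reported rather than independently measured:

```python
import math

# Ballpark luminance levels in nits (cd/m^2); commonly quoted ranges,
# not values from any one datasheet.
LUMINANCE_NITS = {
    "typical VR headset": 100,
    "good HDR monitor": 1_000,
    "sunlit outdoor scene": 10_000,
    "Starburst prototype (reported)": 20_000,
    "direct sunlight": 1.6e9,
}

def stops_between(low, high):
    """Photographic stops (doublings of luminance) separating two levels."""
    return math.log2(high / low)

# A real sunlit highlight sits dozens of stops above what a 100-nit panel
# can reproduce, which is why it only ever looks like a *picture* of one.
gap = stops_between(LUMINANCE_NITS["typical VR headset"],
                    LUMINANCE_NITS["direct sunlight"])
```

Even without chasing direct sunlight, going from ~100 nits to ~20,000 buys well over seven stops of highlight headroom, which is the difference the demo is trading its headlamp-sized backlight for.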
It’s clear this direction is being taken very seriously, and it may come as a surprise to learn that delivering a convincing visual experience goes considerably beyond higher resolution and wider field-of-view. All of the truly good VR ideas may have been dreamt up back in the 1960s, but this video is a great showcase of what goes into the nitty-gritty scientific work of figuring out how to get a problem solved.
I too look forward to modelling every inconvenience of the real world, like brightness that hurts. Also I guess we’ll be needing a cuirass made like an iron maiden so you can really feel the stab.
Just the body suit covered in Taser shock pads, painful but less messy…
A firearms training facility somewhat already has that. For their 300 degree training simulator, the participants can opt to wear a shock belt to simulate being shot when they screw up in the training.
Just wait till someone invents smell-o-vision, or the feelies for that romp through the solar system.
Brightness matters (chronobiologically). Would be fun though if budget versions just came with belladonna eye drops 😀
Facebook wants to be able to charge others to be in total control of your brain. Especially to really know how long you react to adverts or other paid-for material. It will allow incredible analysis and manipulation of the participant’s mind and ultimately their body. The aim of providing a virtual experience is just a cover to run mind games on people. Participate at your own (and our) peril. That tin hat moment is bearing down on us quickly. You can bet they won’t be modelling climate change into this brave new, disconnected-from-the-real-world experience in your head, not when you are trying to sell the latest opiate for the masses.
I concur, they should not be encouraged in any way, and people need to be warned about the traps they are setting for the unwary and the naive.
Hardly. They won’t be recording and analysing your every eye-linger. Calculations for focus on that level will be local, and the data channel will separate out only more general info about what gets rendered on the headset. There will be some self-reporting that apps and OSes do, but nobody wants to create an endless data bank of useless info. You’ll know what they can and cannot report, within reason, and it won’t be a big leap compared to what they already know about you now.
In VR, if you are making searches for inappropriate things you’ll be just as easily tracked as when not in VR. It will make very little difference how long you focus on that avatar’s groin. But I’m absolutely sure that when it registers you focused attention on a brand name, the company that paid $0.25 per load instance will get a “viewed” analytic that your headset self-reports.
Then of course we will have adblocker software from third-party hackers that disables Meta/Facebook’s ability to render Coca-Cola machines… But they will also side-load crypto-hashing onto your device’s graphics processor. So it’s probably less detrimental to let Facebook tell Coca-Cola you lingered longer on their machine vs Pepsi, because they clearly have enough real-world sales data that anything more they learn is literally to become more persuasive without alienating their customers.
If nothing else… Product ads will become more subtle, feel more like a choice, and therefore become less divisive when we aren’t interested.
No troubles Amazon has you “covered”
12 Pieces Aluminum Foil Deep Conditioning Caps Reusable Hair Processing Caps Hair Coloring Shower Caps for Home Salon Use (12 Inch, Silvery) https://a.co/d/3iverWw
The general mood for every advance in technology is that it can be used for evil. Nuclear power and nuclear weapons are the modern paradigm examples of this, but basically it happens with any advanced technology that’s long in development. AI is on the horizon, and look how many people wonder “if” we should do that.
New technologies can be used for evil, but on the whole they are not.
The reason for this is that 1) people are, in general, not evil, and 2) some of the non-evil people come up with strategies to further reduce the efforts of evil.
So email is used for spam and phishing attacks, but we now have multiple solutions that keep spam largely at bay.
The internet is rife with advertisements, so we now have ad blockers and various browser add-ons.
Hackers are trying to get into your computer, but we have an army of white-hat hackers that oppose them.
Nuclear power can be used to destroy the world, but an army of pacifists want to turn governments away from this.
(Reference note: Regarding the war in Ukraine, it has been stated that *if* Putin decides to send nukes to Europe or the US, someone in the Russian chain of command would bail out.)
Even if Facebook is SPECTRE in disguise, their evil impact on the world will be minor. Lots and lots of people are basically thoughtful and moral, and any internal evil would lead to whistleblowing, lobbying, regulation, and (in an extreme situation) social cancellation.
We can expect *some* evil out of these developments, but on the whole any advancements in technology will be simply that: advancements in technology, distributed throughout our lives in places where VR will have a positive impact.
It’s not the new technology that invokes the wariness. It’s the developer.
And contrary to your assertion, people are, in general, inherently evil. No one has to teach a child how to misbehave.
Claiming that children misbehaving is a sign of them being inherently evil is absurd.
VR does not concern me in the least.
Facebook’s motives for it…moreso.
if you think tech is not used for evil, then look at ai and insurance. it is literally a racist, sexist, scam. oh wait, that is all of capitalism and the data driven economy.
but sure, they won’t use this to push us further into the panopticon/mass slavery.
Ready Player One explored this. I only saw the movie. IOI = Meta. One quote from IOI/Meta: “Our studies show that we can fill up to 80% of someone’s visual field (with ads) before we induce a seizure.”
It’s coming – believe it. Meta is in it for control and money, period. VR allows more of both.
Facebook et al are attention slavers, that is their business model. Their aim is to take control of human consciousness for as large a percentage of a person’s waking hours as possible.
One thing can be said when this is all over. You think 3D printing is patented, you haven’t seen anything yet.
Why should it be absurd for a social media company to be involved in VR? Online social spaces are one of the possible uses of VR, as shown by early virtual worlds like Second Life.
Rather than dive down the rabbit hole of who is doing this and why, can’t we celebrate some of the real-world condition testing, interesting engineering, and cool tech combinations being demoed?
That multistage selective focal length lens thing is really awesome, and could have so many applications.
Not sold on making VR brighter though. I like that my VR set is bright enough to get across details and contrast well but not dazzling; it’s the very definition of a light-controlled environment! (It probably doesn’t help that I’m prone to migraine-like headaches and photo-sensitivity, but even when I’m fine I never turn the brightness on my monitor up above the bare minimum needed to see the content, not the world behind me in reflection, and usually I wish it would go dimmer…)
Though the way you said that just made me wonder if their aims are completely opposite to each other. Trying to compress depth of field so you have to focus your eyes at the same time as they make it a lot brighter so your pupils pinhole and you get great depth of field… ummm yah, good luck with that.
Anyone that gambles on digital schizophrenia is a fool.
The vari-focal elements aren’t to simulate shallow depth of field, they are to address the convergence-accommodation link, where converging your eyes to look at something closer, automatically causes your eyes to focus closer. At present the displays are focussed at a middle distance, so they feel natural for middle to far away objects, but when you try and look close, suddenly your eyes are trying to focus close, and the screen is far away, and everything goes out of focus. To fix this, the vari-focal element can shift the display focus so that it is close, allowing it to come into focus. Now you can read things up close in VR! (A common workaround as a user in VR just now is to close one eye, then your brain stops worrying about stereo vision and convergence and can pretend that the object is actually far away, and just large).
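For the curious, the geometry behind that convergence cue is simple enough to sketch. This is a simplified symmetric-gaze model with an assumed IPD value, not any real headset’s eye-tracking code:

```python
import math

# A typical inter-pupillary distance in metres (assumed for illustration).
IPD_M = 0.063

def vergence_distance_m(vergence_angle_rad):
    """Distance to the fixation point for symmetric convergence.

    Each eye rotates inward by half the vergence angle, so the fixation
    point closes a triangle whose base is the IPD.
    """
    return (IPD_M / 2) / math.tan(vergence_angle_rad / 2)

def varifocal_power_diopters(distance_m):
    """Optical power the varifocal element should present, in diopters."""
    return 1.0 / distance_m
```

An eye tracker would supply the vergence angle; the headset then drives the display’s focal plane toward `varifocal_power_diopters(vergence_distance_m(angle))` instead of leaving it parked at a fixed middle distance, which is exactly the mismatch the comment describes.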
Multi-focus, what wonderful tech. What this could mean for glasses-wearing people! Oh wait, there have been ideas and examples; in the third world, optometrists have been fighting this for years. Can I wear that VR gear with my glasses? Otherwise it will have to accommodate my prescription, which is a technical violation of law.
What? How is accommodating to your prescription in any way a violation of law?
My camera had a dioptre adjustment on the viewfinder – is THAT violating any law?
Making lights in VR bright enough to feel bright… VR needing as much energy to fake an experience as the real thing needs… just another way for Facebook to become an even greater drain on energy supplies. Now they’ll want two car headlamps running for every person plugged in to the metaverse. Seems like living in the real world is always going to be better for the planet, however conspicuous one’s real-world consumption may be.
..square of the distance bro..
we don’t really need a star in the goggles
Companies have made attempts to introduce various VR concepts and products several times now over the past 25 years. The problem was that the tech wasn’t capable of delivering a truly realistic experience.
These days tech has matured to the point where the hardware can deliver a much more passable simulated reality. But it’s still far from dazzling, so it will fail once again. The long-term success Meta is betting on will depend solely upon participation. Why on earth would anyone feel compelled to stop what they’re doing in reality and take the time to change into whatever suit/VR headset/gear so that they can plug in to Meta and do what? Go hang out with friends at a Meta cafe, play out some sort of role-adventure scenario, meet another person to engage in virtual sex, or engage in some other game-like world? I think many visualize these concepts in their minds as if, yeah, that would be so awesome! But people will begin to realize that to do anything in the Metaverse, they’d end up having to exert more energy than if they simply performed these activities in real life.
In other words, most people are too lazy or lack the motivation to do all that, and there is also the stupid factor. I personally would feel stupid donning all this hardware just so I can go chat with friends in the Metaverse. Whatever conversations you have with those friends will still require you to speak out loud, and to walk around in random circles or perform any other body movements in order to digitally mimic the same in the Metaverse. That’s just stupid!
Also, people need to start scrutinizing all these new and weird directions and products that big tech is leading us towards. This isn’t all simply happening randomly and without designed purpose. Primarily greed, power, and control are the main motivating factors.
For real, something is on the horizon, but it’s not here to deliver what we think and see it to be…
I mean. People already sit there on their phones scrolling away on Facebook or whatever social network site they are using instead of engaging in the actual real world. For hours at a time. Haven’t you ever seen a group of people sitting at a table in a restaurant, every single one is on their phone?
I’m with you- it is completely stupid but they are already doing each and every one of the things you mentioned.
Realistic isn’t required, just enjoyable or useful. In fact, realistic is often not worth it, as the real world exists and often isn’t fun…
Enjoyable, however, has been possible for quite a while, though not very affordably; even now it’s not that cheap, just no longer bank-breaking for all but the top earners…
My gen-1 HTC Vive has awesome tracking and a very usable screen. It’s not all that sharp on text, which can be a little tough in some games like Elite Dangerous, but as an experience in something like Elite it’s superb. A tiny bit of head-bobbing to put the text you have to read in the readable sweet spot (and a HUD colour palette swap to improve the sharpness) is a small price to pay. The only remotely practical thing that would really make it better is for the HOTAS to be bolted to my chair arms in the same space the VR world shows the flight controls, so that when you look down a little there is no disconnect between hand position and what you see, though a full-on force-feedback sim would be great…
And that is what makes VR special: it lets you do stuff you can’t do for real, like fly a spaceship, play something resembling Calvin & Hobbes’ take on squash, or train for something real in a safe environment. The ability to really meet with friends is also far from a given, so if VR lets you use something like Tabletop Simulator to play that board game and chat — the general hanging out and relaxing with the folks you want to keep in touch with, but who are now a stupidly inconvenient distance away, in something like a natural way — why not! It doesn’t have to replace real life to be a worthy addition to it (and for folks with mobility or other serious health issues it might be the only way to really keep in touch and do anything fun).
I do wish my Vive had a slightly wider IPD max, as I think I could use it, and better accommodation for my rather larger-than-average head. It’s not particularly uncomfortable, but it could do with a little more space, which I’m not sure any of the current-generation headsets offer either. But they certainly have far, far sharper displays, so if I can ever afford one I’d think about bumping up to a more current generation.
I think I’ve seen this before…
Long live the new flesh!
Ah, but most of us saw Videodrome as a warning.
Zuckerberg saw it as a game plan.
Norman Chan (Tested) presents a great video report.
Doug Lanman (vari-focus), Yang Zhao (resolution), Phillip Guan (optical distortion), and Nathan Matsuda (light sources). All present solid material and informative insights (some original, some derivative).
The video could be much improved, and about half as long, if the Meta marketing babble was deleted, Mark Zuckerberg (bs).