VR Crew

I was sceptical of VR right up until I tried VRChat. Sure, it's a ****show, but the potential is there. Anyone else tried VRChat?
 
I've had a Rift and a Vive, but the resolution is just a bit too annoying for me. I struggle to get immersed in the game or experience because I am constantly aware of the poor resolution. I love VR and am a huge supporter, and I'm 100% confident it will be awesome one day soon, when the next generation of headsets arrives alongside some bigger and better games.

The resolution increase on the Vive Pro is nice, but there's no way I'm paying that money for a mid-life upgrade. I'll be buying the proper Rift 2 or Vive 2 when they're out; from what I hear, they should release at some point in 2019.
 
PP what was that temporal-something-or-other technology that only renders high-res detail where your eye is focusing?
 

I'm not PP, but hey ho :p

It's foveated rendering. It relies on high-quality, low-latency eye tracking (likely < 5 ms / 240 Hz+). The headset detects where you are looking, feeds that to the game engine and rendering pipeline, and then renders at full resolution in the area of high visual acuity, with reduced render resolution as you move away from the centre of focus. If done well, it should be imperceptible.

It's a little expensive in terms of overhead, so it's really only a net benefit at very high resolutions, e.g. 4K+. VR headsets are rapidly approaching, and will quickly exceed, that kind of resolution, at which point foveated rendering becomes a necessity.

It's a pretty complicated technical challenge though, as you need something that is self-calibrating, insensitive to poor HMD placement, doesn't stop working when the HMD slips around on your head, and can cope with oddly shaped eyes, long eyelashes, saccadic eye movements, lazy eyes, etc. There are lots of things to solve. It has to work for 99% of people, not just 90%; otherwise you'd sell 1M units and have 100K people saying it doesn't work, which would be unacceptable.
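To sketch the basic idea in code (nothing here comes from any shipped headset; the fovea size, falloff rate and resolution floor are made-up illustration values): map each screen tile's angular distance from the tracked gaze point to a render-resolution scale.

```python
import math

def render_scale(ecc_deg, fovea_deg=5.0, min_scale=0.25, falloff=0.15):
    """Map eccentricity (degrees away from the gaze point) to a
    render-resolution scale: full res inside the fovea, smooth
    exponential falloff outside, floored at min_scale."""
    if ecc_deg <= fovea_deg:
        return 1.0
    return max(min_scale, math.exp(-falloff * (ecc_deg - fovea_deg)))

def tile_scales(gaze, tile_centres):
    """One shading scale per screen tile; gaze and tile centres are
    (x, y) in degrees of visual angle."""
    return [render_scale(math.hypot(cx - gaze[0], cy - gaze[1]))
            for cx, cy in tile_centres]

# Looking at the centre: a central tile stays full res, a tile 10 deg
# out drops to roughly half, and the far periphery hits the floor.
scales = tile_scales((0.0, 0.0), [(0, 0), (10, 0), (40, 30)])
```

In a real pipeline the per-tile scale would drive something like variable-rate shading rather than literally rendering separate resolutions, but the mapping from gaze to detail is the core of it.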
 
I've talked about it quite a few times outside of the forums. They really needed this to be a thing before VR came to market. Would have gone a lot further.
 

According to John Carmack and Michael Abrash, it is actually a performance hit at the resolution of the current Rift and Vive: the cost of the eye-tracking calculation and the extra step in the render pipeline outweighs the performance gain when you only have a couple of million pixels to push. GPUs are currently not built to render a frame at varying resolutions at all, so it is basically a hack. Qualcomm have built optimisations into the Snapdragon 845 to better support foveated rendering natively at the pipeline level, and it seems likely Nvidia will have done the same for Volta.

The tech is also only just becoming ready, so if we had waited for it, VR would have been delayed another 2 to 3 years. But that's moot, given the point above.
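The trade-off Carmack and Abrash describe can be sketched with a toy cost model (every number below is invented for illustration, not measured): the shading saving grows with resolution, while the tracking and pipeline overhead is roughly fixed per frame.

```python
def frame_cost_ms(megapixels, foveated,
                  cost_per_mpix=1.0,   # assumed shading cost, ms/Mpix
                  saved_fraction=0.5,  # assumed shading work avoided
                  overhead_ms=1.5):    # assumed tracking + pipeline cost
    """Toy model: foveation cuts shading work proportionally to
    resolution but adds a roughly fixed per-frame overhead."""
    shading = megapixels * cost_per_mpix
    if foveated:
        return shading * (1 - saved_fraction) + overhead_ms
    return shading

# ~2.6 Mpix combined (Rift/Vive era) vs ~8.3 Mpix (4K-per-eye era)
low_off, low_on = frame_cost_ms(2.6, False), frame_cost_ms(2.6, True)
high_off, high_on = frame_cost_ms(8.3, False), frame_cost_ms(8.3, True)
# At low resolution the fixed overhead eats the saving; at high
# resolution the saving dominates.
```

With these assumed numbers, foveation is a net loss at Rift/Vive resolution and a clear win at 4K per eye, which is the shape of the argument regardless of the exact figures.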
 
With it we wouldn't need the resolutions we're currently at. Rendering a frame at varying resolutions shouldn't be a problem and should be doable entirely in software. DoF basically already does this by adding a blur to unfocused areas (the scene is still rendered at full resolution with the blur applied on top, but still). We have distance-based AA, so I'm not sure why we couldn't render other areas of a frame at a different resolution. It could just be a matter of scaling rather than a hard resolution. We also have LoD based on uGrids relative to the character in the scene. Scaling resolution in different places shouldn't be an issue.

Actually, the more I think about it, a hard resolution wouldn't be the answer beyond the cap. If we have a target of 12k as the maximum, scaling the rendered scene's unfocused areas would definitely be the answer, rather than Circle A being 12k, Circle B 8k, Circle C 4k, etc. Scale in percentages of distance to focus. This should be achievable with current tech, much like how dynamic resolution works.
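The percentage-of-distance idea can be sketched like this (a rough illustration; the band edges and the resolution floor are arbitrary choices, not anyone's spec): compare hard concentric circles against a continuous scale.

```python
def banded_scale(dist_pct):
    """Hard concentric circles, e.g. 12k / 8k / 4k bands."""
    if dist_pct < 0.2:
        return 1.0        # Circle A: full 12k
    if dist_pct < 0.5:
        return 8 / 12     # Circle B: 8k worth of detail
    return 4 / 12         # Circle C: 4k worth of detail

def smooth_scale(dist_pct, floor=4 / 12):
    """Continuous percentage-of-distance scale, dynamic-resolution style."""
    return max(floor, 1.0 - dist_pct * (1.0 - floor))

# Crossing a circle edge jumps by a third of full resolution; the
# smooth version barely changes between neighbouring distances.
band_jump = banded_scale(0.19) - banded_scale(0.21)
smooth_jump = smooth_scale(0.19) - smooth_scale(0.21)
```

The jump at a band edge is exactly the kind of discontinuity the eye picks up, which is the case for scaling continuously instead.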
 

Well, they have shown it working to some extent on PCs for several years now, so it is possible; from what I read, it's just that you can't do it the way you ideally would, hence the kind of chip-level changes Qualcomm have made to the 845. I guess it's similar to the Nvidia RTX thing: GPUs have been able to do ray tracing forever, but it will only be hardware-accelerated on Volta chips due to some custom chip-level work they did to make it faster.

Though apparently it is not as simple as linearly scaling the resolution down as you get towards the periphery of your vision. Google did a good paper on it: https://research.googleblog.com/201...blogspot/gJZg+(Official+Google+Research+Blog)

Basically, if you do it the dumb way and simply reduce the resolution as you spread outwards from the area of high acuity, you can still see visual artifacts due to hard edges and aliasing. Nvidia did a similar paper on more advanced methods that combine multiple different rendering changes to the frame to make it much more subtle and effective while still giving a 2 to 3x perf increase.
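The aliasing point is easy to demonstrate in one dimension (a toy example, nothing like a real render pipeline): naively dropping samples makes fine detail alias into something false, while filtering first preserves the true average.

```python
def decimate_naive(samples, factor):
    """Throw samples away with no filtering: anything finer than the
    new sample rate aliases instead of disappearing."""
    return samples[::factor]

def decimate_filtered(samples, factor):
    """Box-average each block first (a crude low-pass), then drop."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples), factor)]

fine_pattern = [i % 2 for i in range(16)]      # 0, 1, 0, 1, ... detail
naive = decimate_naive(fine_pattern, 2)        # collapses to solid 0s
filtered = decimate_filtered(fine_pattern, 2)  # correct 0.5 average
```

The naive version turns a flickering pattern into a solid wrong value, which is exactly the kind of artifact you'd notice in the periphery; the filtered version at least lands on the right average brightness.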

It's sure as **** exciting though. Can't wait till the next VR headsets arrive with this kind of tech in. I don't trust Facebook as far as I can throw them, but apparently they have their "biggest ever news" for AR/VR at their F8 conference this year, so it will be interesting to see what they show.
 
Can't really compare the 845 here, as that's ARM. Sure, they made changes for it to work, but they probably made those changes because it didn't work before. To build the tech you'd need hardware that worked to begin with, and I'm pretty sure our current GPUs have been doing it.

RTX is using the Tensor cores; if it isn't, then it's locked down as proprietary for nothing, as Volta offers nothing exceptionally different from Pascal architecturally.

That's showing it works, the paper shows why they shouldn't use it yet.

I could put money on Rift Pro.
 