PP what was that temporal-something-or-other technology that only renders high-res detail where your eye is focusing?
I've talked about it quite a few times outside of the forums. They really needed this to be a thing before VR came to market; VR would have gone a lot further.
With it we wouldn't need the resolutions we're currently pushing. Rendering a frame at varying resolutions shouldn't be a problem and should be doable entirely in software. DoF already does something similar by adding a blur to unfocused areas (the scene is still rendered at full resolution with the blur layered on top, but still). We have distance-based AA, so I'm not sure why we couldn't be rendering other areas of a frame at a different resolution. It could just be a matter of scaling rather than a hard resolution per region. We also have LoD based on uGrids relative to the character in the scene. Scaling resolution in different parts of the frame shouldn't be an issue.
Actually, the more I think about it, a hard resolution wouldn't be the answer anyway, except as a cap. If we have a target of 12k for the maximum rendered scene, scaling the unfocused areas continuously would and should definitely be the answer rather than Circle A being 12k, Circle B 8k, Circle C 4k, etc. Scale as a percentage of distance to the focus point. This should be achievable with current tech, much like how dynamic resolution works.
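A minimal sketch of that "percentage of distance to focus" idea, assuming normalised screen coordinates in [0, 1]. The function name, the falloff shape, and the min/max scale values are all my own illustration, not from any shipping foveated renderer:

```python
import math

def foveation_scale(px, py, gaze_x, gaze_y, max_scale=1.0, min_scale=0.25):
    """Hypothetical per-region resolution scale: full resolution at the
    gaze point, falling off continuously with distance instead of
    discrete 12k/8k/4k rings."""
    # Distance from the gaze point, normalised so a screen corner
    # (when gazing at the centre) maps to 1.0.
    d = math.hypot(px - gaze_x, py - gaze_y)
    d = min(d / math.hypot(0.5, 0.5), 1.0)
    # Linear interpolation: scale as a percentage of distance to focus.
    return max_scale - (max_scale - min_scale) * d

print(foveation_scale(0.5, 0.5, 0.5, 0.5))  # at the focus point -> 1.0
print(foveation_scale(0.0, 0.0, 0.5, 0.5))  # at a corner -> 0.25
```

An engine would evaluate something like this per tile (or feed it to variable-rate shading) rather than per pixel, but the point is the scale is continuous, with the cap only at the focus point.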
Can't really compare the 845 here, as that's ARM. Sure, they made changes for it to work, but they probably made those changes because it didn't work before. To make the tech you'd need hardware that worked to begin with, and I'm pretty sure our current GPUs have been doing it.

Well, they have shown it working to some extent on PCs for several years now, so it is possible. From what I read, it's just that you can't do it how you would ideally do it, hence the kind of chip-level changes Qualcomm made to the 845. I guess it's similar to the Nvidia RTX thing: GPUs have been able to do ray tracing since forever, but it will only be hardware accelerated on Volta chips due to some custom chip-level work they did to make it faster.
That's showing it works; the paper shows why they shouldn't use it yet.

Though apparently it is not as simple as linearly scaling the resolution down as you get towards the periphery of your vision. Google did a good paper on it: https://research.googleblog.com/2017...search+Blog)
Basically, if you just do it the dumb way and simply reduce resolution spreading outwards from the area of high acuity, you can still see visual artifacts due to hard edges and aliasing. Nvidia did a similar paper on more advanced methods that combine multiple different rendering changes to the frame to make it much more subtle and effective while still giving a 2-3x perf increase.
I could put money on a Rift Pro.

It's sure as **** exciting though. Can't wait till the next VR headsets ship with this kind of tech in them. I don't trust Facebook as far as I can throw them, but apparently they have their "biggest ever news" for AR/VR at their F8 conference this year, so it will be interesting to see what they show.