bit_user said:
Regarding throughput, one aspect I didn't really delve into is that most internet connections are asymmetric. With cable internet, I think it's not uncommon to have a 10:1 download:upload ratio. That might be alright if you're just uploading events from an input device, like mouse, joystick, etc. (i.e. the Google Stadia model), but not if you're uploading textures and geometry that need to get rendered, compressed, and turned around within tens of milliseconds.

Most textures and other assets are static. The local virtual GPU driver can hash whatever the game is pushing through the API and send the hash to the server; the server queries its local asset cache to see whether the asset already exists and, if so, loads it from local storage at 10+ GB/s or from a SAN at 10-25 Gbps. Only the first player to encounter a new asset would take the full upload hit. The local driver can also cache resource hashes, so it knows which assets it doesn't need to wait for server confirmation on.

Though I bet this sort of advanced caching would trigger some copyright litigation, even though the assets are only being used in conjunction with the software they originally came from, for their originally intended purpose.

The description of what this service was intended to be is rather vague, though I question whether a game would really be running on the local system and sending graphics calls to a remote server. It wouldn't make much sense to rely on a low-end device's limited CPU and RAM while using the cloud-based server strictly for its graphics hardware, when it would undoubtedly produce better results to simply run everything on the server and send back a compressed video feed, much like other game streaming services.
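The hash-and-check scheme described above can be sketched in a few lines. This is a minimal illustration, not any real product's protocol; the `ServerAssetCache` class, the use of SHA-256, and the in-memory store are all assumptions for the sake of the example:

```python
import hashlib

class ServerAssetCache:
    """Hypothetical server-side asset cache, keyed by content hash."""

    def __init__(self):
        self._store = {}  # hash -> asset bytes

    def has(self, digest: str) -> bool:
        return digest in self._store

    def put(self, digest: str, data: bytes) -> None:
        self._store[digest] = data


def upload_asset(cache: ServerAssetCache, data: bytes, known_hashes: set) -> bool:
    """Client-side virtual-GPU-driver logic: hash the asset the game pushes
    through the graphics API and upload the bytes only if the server has
    never seen that hash. Returns True if a full upload was needed."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in known_hashes:      # local hash cache: skip the round trip
        return False
    uploaded = False
    if not cache.has(digest):       # first player to hit this asset pays
        cache.put(digest, data)     # the full upload cost
        uploaded = True
    known_hashes.add(digest)
    return uploaded


cache = ServerAssetCache()
known = set()
texture = b"\x00" * 4096  # stand-in for a static texture

print(upload_asset(cache, texture, known))  # first encounter: True (full upload)
print(upload_asset(cache, texture, known))  # locally cached hash: False
```

Note that a second player with an empty local hash cache would still send only the hash: the server already holds the bytes, so `upload_asset` returns False for them too, which is the "only the first player pays" property.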
The idea that it splits work between the client machine and the cloud just seems so weird and wrong. We know you need at least PCIe 3.0 x16 for good performance on a high-end dGPU, so putting the GPU in the cloud would mean cutting back from 16 GB/s to something like 0.13 GB/s over a gigabit internet connection (if you even get that much throughput, which you usually don't). Maybe you could use some tricks to cut that down by one order of magnitude, but certainly not two. So, I remain skeptical that's really how it's meant to work.

Instead, I wonder if it's more like having your game running in a container and transparently migrating that container to a cloud instance. The cloud would need to have a cached copy of the game's disk image, too. Somehow, that doesn't seem terribly workable, either.

In UNIX (X11, to be precise), you can actually run OpenGL programs on a remote machine and have the graphics rendered by your local GPU, which ties in with the idea of forwarding the rendering commands. Except, I think that stopped working after like OpenGL 1.x.

bit_user said:
Regarding throughput, one aspect I didn't really delve into is that most internet connections are asymmetric.

As I was alluding to above, complex geometry and big textures would tank performance rather quickly. Finally, with this just getting cancelled now, I have to wonder just how many resources they wasted on it that they could've put towards improving their beleaguered drivers.
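The two-orders-of-magnitude gap claimed above is easy to check with back-of-the-envelope numbers. The figures below are nominal link rates (real-world throughput is lower, especially on the internet link):

```python
# Nominal link rates in bytes per second.
PCIE3_X16 = 16e9     # PCIe 3.0 x16: ~0.985 GB/s per lane x 16 lanes ~= 16 GB/s
GIGABIT = 1e9 / 8    # 1 Gbit/s internet connection = 0.125 GB/s

ratio = PCIE3_X16 / GIGABIT
print(f"PCIe 3.0 x16 is about {ratio:.0f}x a gigabit link")  # about 128x
```

128x is just over two orders of magnitude, which is why shaving off one order of magnitude with compression or caching tricks still leaves the remote GPU badly starved for bus bandwidth.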