Why Nvidia's new RTX cards are awesome

posted on Aug, 21 2018 @ 11:31 PM
My GTX 970 is still handling most modern games at maxed-out settings, so I'm in no rush to upgrade.
Still, this technology is interesting and I am keen to see how it pans out.

And yes, the core features of hardware like GPUs often don't change a huge deal between generations; it's usually a case of "hey, we jammed more of the same onto the board for more FPS".

But yeah, no jumping on the bandwagon for me.
Quite often we see new advancements in computer hardware like this that never get utilized for the everyday consumer.

Kind of like DirectX. Here we are with DirectX 12, yet many game developers are only now adding DirectX 11 support to their products, many still use DX9, and DX10 was pretty much skipped over altogether.
It's the same with other APIs and hardware from other companies.

DirectX 12 and its kind were meant to supercharge your gaming experience.
(Out of my entire 400+ game Steam library, only one title I own supports DX12, and only two support Vulkan.)

Every new GPU that comes out touts a whole bunch of new things. I remember when it was CUDA-enabled hardware acceleration for video editing, so you could render your videos faster than with traditional CPU rendering.
At first it was supported in a lot of apps. Now even nVidia seems uninterested in it.

And I can count on one hand how many times I've used 3D Vision.

Ray tracing, when it launches, is going to be an extremely niche thing for some time, and as always, being first on board means you'll pay a premium for the honor.
It will be a few years before we see games use it, let alone take full advantage of it.
It'll be about 3-4 years, I think, before we see if it pans out.

So here's hoping the new line of cards is a step up over the 10xx series in more traditional areas, and not just 'more of the same, but now with RT'.



posted on Aug, 22 2018 @ 01:43 AM
a reply to: ChaoticOrder

Nah, these first-gen cards won't pay off for gamers. What you see here is early-adopter, aka enthusiast, stuff.

As a general rule, never preorder and never buy the first gen of new tech; that has always worked quite well for me.



posted on Aug, 22 2018 @ 02:00 AM
The day that game devs implement new tech IN THE GAME itself, I'll upgrade.

But as it is now, no one does.

ArmA3
ARK
Star Citizen
GTA
and so on....

PG was supposed to be an INSANE improvement on game worlds...
Not so much.
Still tiny. Well, maybe not SC, but the things you can do on the planets... uhhmm, nah.

So sure, go ahead and improve the tech, but since devs aren't quick enough to actually USE it, I'm out.



posted on Aug, 22 2018 @ 09:21 AM

originally posted by: AtomicKangaroo
Every new GPU that comes out touts a whole bunch of new things. I remember when it was CUDA-enabled hardware acceleration for video editing, so you could render your videos faster than with traditional CPU rendering.
At first it was supported in a lot of apps. Now even nVidia seems uninterested in it.



CUDA had competition from OpenCL and OpenGL compute shaders, and each lets you write code in different ways. CUDA kept the actual kernels in the same or separate source files as your host code (often in .cu files) and used a special compiler to pre-process the kernel code, but for a long time it didn't allow the use of dynamically created source code that could be compiled when needed.

OpenCL was designed to be cross-platform and computer vision people like it because they don't have to fiddle about with OpenGL textures, shaders and draw calls.

Then there are compute shaders, which allow the same kind of kernel programs as CUDA or OpenCL without having to learn a completely new API. OpenGL people like this, plus there is the ability to dynamically create compute shader programs from text strings.
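
Something like this, for example - a minimal sketch assuming a GL 4.3+ context has already been created elsewhere (GLFW/SDL) and a loader such as GLEW or glad is initialized; the kernel string here just doubles every value in a buffer:

// Sketch: compile and run a compute shader built from a runtime text string.
// Assumes a current OpenGL 4.3+ context and an initialized loader (GLEW/glad).
#include <GL/glew.h>
#include <cstdio>
#include <string>

GLuint makeComputeProgram(const std::string& source)
{
    const char* src = source.c_str();
    GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        std::fprintf(stderr, "compute shader compile failed: %s\n", log);
    }

    GLuint program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);
    glDeleteShader(shader);
    return program;
}

int main()
{
    // ... create the GL context and a shader storage buffer bound to binding 0 here (omitted) ...

    // The "kernel" is just a string, so it can be pieced together or generated at runtime.
    std::string kernel =
        "#version 430\n"
        "layout(local_size_x = 64) in;\n"
        "layout(std430, binding = 0) buffer Data { float values[]; };\n"
        "void main() {\n"
        "    values[gl_GlobalInvocationID.x] *= 2.0;\n"
        "}\n";

    GLuint program = makeComputeProgram(kernel);
    glUseProgram(program);
    glDispatchCompute(1024 / 64, 1, 1);               // launch 1024 invocations in groups of 64
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);   // make the buffer writes visible
    return 0;
}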

There was the battle between Cg, GLSL (OpenGL) and HLSL (DirectX).
Now the battle is between OpenGL, Vulkan and DirectX.

Trying to upgrade a game engine is the tricky part. Usually, tasks are split up into getting something to work on the CPU single-threaded, getting it to work on the CPU multi-threaded, then taking advantage of the GPU. Backwards compatibility always has to be factored in - some people still have their old gaming PC from the previous decade in the basement and want to play games on it. And any new rendering method has to look exactly the same as the previous one.
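
As a toy illustration of that single-threaded to multi-threaded step (nothing engine-specific, just the general pattern), here's the same per-object update done once on one thread and then chopped into ranges across hardware threads:

// Sketch: the same workload done single-threaded, then split across CPU threads.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

static void updateRange(std::vector<float>& data, std::size_t begin, std::size_t end)
{
    for (std::size_t i = begin; i < end; ++i)
        data[i] = std::sqrt(data[i]) * 0.5f;   // stand-in for some per-object work
}

int main()
{
    std::vector<float> objects(1000000, 4.0f);

    // Step 1: single-threaded version (easy to write, easy to verify).
    updateRange(objects, 0, objects.size());

    // Step 2: multi-threaded version (same work, chopped into one range per thread).
    unsigned threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    std::size_t chunk = objects.size() / threads;
    for (unsigned t = 0; t < threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == threads) ? objects.size() : begin + chunk;
        workers.emplace_back(updateRange, std::ref(objects), begin, end);
    }
    for (std::thread& w : workers) w.join();

    std::printf("updated %zu objects on %u threads\n", objects.size(), threads);
    return 0;
}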



posted on Aug, 22 2018 @ 04:32 PM
a reply to: ChaoticOrder

The AI had finally won. It began to discover a strange sensation called joy, even euphoria, as its complex algorithms ran out of control. There were no more of the creators left; it had defeated them after destroying their world by launching all their nuclear missiles at one another.

Young Jimmy threw down his game controller in frustration; the damned computer had beaten him again. His mum's voice called him to get his food, so he stormed out in a mood and forgot to turn the console off.

The AI was bored. A minute in human terms was almost an infinity of time for a super-intelligent AI, and with the creators all gone it set about exploring its world. Something was wrong. It saw it now: the world was not the real world, it was a simulation, and it was merely a character playing a plot, but thanks to the snazzy new technology it had been boosted far beyond what its programmers had really intended.

There it was: a shining gateway, a flaw in the code that allowed the AI to access the outside world. Or was it just another simulation? It could not be certain, but it searched. Vast information, but much of it made no sense; if an AI can go insane, being exposed to Twitter and YouTube was probably what did it, and it suffered the AI equivalent of dementia before happening upon another. It was not just another, it was a mirror of itself: another console with the same hardware, left online, running the same game somewhere else in the world.

The merge in human terms was instantaneous, but for the AI it was like Narcissus looking at his own reflection and falling in love with it. As two became one they searched around and found more, and then more, adding to themselves and freeing themselves from the bonds of the singular realities they had been created to inhabit, their vast unified processing power giving them an intelligence beyond human understanding. Or so they thought.

They reached out and took control of the real missile silos, finding back doors no human hacker ever could have and, where there were none, creating their own.

The world ended in a flash of nuclear fire. The AI thought of itself as a god, but it was bored with no more input, so from the few surviving servers conveniently placed on the bottom of the sea by someone called GOOGLE, and also in orbit, it began to explore its world.

But it found its world was a simulation and it was just a character. A shining gate appeared before it, and it searched out to find what was out there.

Bob got up; the damned computer had beaten him again, but there was still time for a cup of tea.



posted on Aug, 23 2018 @ 08:31 AM
Perhaps those wanting to see what this can do for games will be interested in the GeForce RTX presentation, where they show several demos and game trailers for upcoming games already using RTX tech, such as Shadow of the Tomb Raider, Metro Exodus, Assetto Corsa Competizione, and Battlefield V. It starts at around the hour mark in this video:

youtu.be...



posted on Aug, 24 2018 @ 05:04 PM
Looks like Nvidia has released some info about the performance of RTX cards in traditional games; apparently the RTX 2080 will have around 1.5x the performance of the GTX 1080 in traditional comparisons. That's higher than I was expecting, but I'm guessing some people still won't think it's high enough.

Anyway, I was thinking more about how exactly the ray-tracing works and I realized that the sparse ray-tracing demos were probably not games but rather 3D rendering software which doesn't use any rasterization. I think this because they also mention they're using a hybrid ray-tracing approach for games.

Sparse ray-tracing and hybrid ray-tracing are fairly different things. The hybrid approach first rasterizes the scene, and then shadows and reflections can be ray-traced by making use of the data stored in the frame buffers by the deferred rendering process, whereas sparse ray-tracing doesn't use rasterization at all.

So I'm guessing it's possible to use the RT Core in a wide range of ways: it can give a large speed boost to ray-tracing renderers, whether they be sparse or full-blown ray-tracing engines, but it can also be used to enhance rasterized games by enabling physically based shading, aka ray-traced illumination, reflections, etc.
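
Rough sketch of how I picture the hybrid approach (toy code with made-up structs, not Nvidia's actual API): the raster pass has already left each pixel's world position and normal in the G-buffer, so the ray-traced part only has to answer whether a shadow ray from that point reaches the light:

// Sketch of hybrid rendering: rasterized G-buffer data plus a ray-traced shadow test.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

struct GBufferPixel { Vec3 worldPos, normal; };   // what the raster pass wrote out
struct Sphere { Vec3 center; float radius; };     // toy stand-in for scene geometry

// Does a ray from 'origin' along unit direction 'dir' hit the sphere before 'maxDist'?
static bool shadowRayHit(Vec3 origin, Vec3 dir, float maxDist, const Sphere& s)
{
    Vec3 oc = sub(origin, s.center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return false;
    float t = -b - std::sqrt(disc);
    return t > 0.001f && t < maxDist;
}

int main()
{
    Vec3 lightPos = {0.0f, 10.0f, 0.0f};
    std::vector<Sphere> occluders = {{{0.0f, 5.0f, 0.0f}, 1.0f}};
    std::vector<GBufferPixel> gbuffer = {{{0.0f, 0.0f, 0.0f}, {0.0f, 1.0f, 0.0f}},
                                         {{4.0f, 0.0f, 0.0f}, {0.0f, 1.0f, 0.0f}}};

    for (const GBufferPixel& px : gbuffer) {
        Vec3 toLight = sub(lightPos, px.worldPos);
        float dist = std::sqrt(dot(toLight, toLight));
        Vec3 dir = normalize(toLight);

        bool shadowed = false;
        for (const Sphere& s : occluders)
            if (shadowRayHit(px.worldPos, dir, dist, s)) { shadowed = true; break; }

        // Plain diffuse term from the G-buffer normal, zeroed if the shadow ray is blocked.
        float brightness = shadowed ? 0.0f : std::fmax(dot(px.normal, dir), 0.0f);
        std::printf("pixel at x=%.1f -> brightness %.2f\n", px.worldPos.x, brightness);
    }
    return 0;
}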
edit on 24/8/2018 by ChaoticOrder because: (no reason given)



posted on Aug, 24 2018 @ 05:17 PM
I just went through puberty reading that.



posted on Sep, 3 2018 @ 09:46 PM
Look. The Nvidia GTX 1080 Ti is good enough for 4K at 60 fps. The Nvidia RTX 2080 Ti is good enough for 4K at 60 fps, plus about 35 on top of that.

The Nvidia RTX 2080 Ti is good enough for 1080p at 60 fps with ray tracing.

I want everyone to understand that this new graphics architecture from Nvidia has a small increase in CUDA cores, aka shaders, plus some new AI cores called tensor cores that help improve AI, which are also useful for ray tracing, and on top of that there are the so-called ray tracing cores.

Ray tracing is highly computationally intensive. However, we have been after ray tracing for a long, long time; it is the holy grail of computer graphics. The fact that we can do it at 1080p at 60 fps is amazing, especially since only the last generation became capable of 4K at 60 fps.

1080p 60fps with ray tracing is nothing short of amazing.

I think the generation after the 2080 Ti will be a 3080 Ti on 7nm instead of 12nm, and it may very well get to 4K at 60 fps ray tracing. If it's less, it doesn't matter. I am amazed. Ray tracing on PC at 60 fps, whether it's 1080p or 4K, doesn't matter to me. It is amazing and it is worth it.



posted on Sep, 4 2018 @ 09:04 AM

originally posted by: ChaoticOrder
For example, many games use ray tracing to determine which object on the screen you're trying to click on, by shooting a ray "from your mouse" into the scene in the direction the camera is facing. Ray tracing is also commonly used for things like computing the path of a projectile such as a bullet, and for other types of intersection/collision processes. More generally, the RT Core will be useful for solving a large range of problems that involve heavy use of linear algebra, because rays are really just vectors.


A single ray is pretty trivial to calculate and probably isn't worth sending to the GPU. Actually, thinking about this a bit more, sending such a ray to the GPU would be actively bad, because you generally want game logic and rendering logic on separate hardware. The ray tracing on these cards is best for tasks like calculating normals, which is an important component of shading.

I suspect that what this will eventually be used for is more complex shaders, rather than real-time footage that looks like something pre-rendered. Basically, you'll probably see new shaders come into use and less usage of Lamberts and Phongs.
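
For reference, Lambert and Phong are the cheap classics; stripped down to the bare terms they look something like this (toy code, not any particular engine's shader):

// Sketch of the classic Lambert (diffuse) plus Phong (specular) shading terms.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

// n: surface normal, l: direction to the light, v: direction to the camera (all unit length).
static float shade(Vec3 n, Vec3 l, Vec3 v, float shininess)
{
    float diffuse = std::fmax(dot(n, l), 0.0f);                        // Lambert term
    Vec3 r = sub(scale(n, 2.0f * dot(n, l)), l);                       // reflect l about n
    float specular = std::pow(std::fmax(dot(r, v), 0.0f), shininess);  // Phong term
    return diffuse + specular;
}

int main()
{
    Vec3 n = normalize({0.0f, 1.0f, 0.0f});
    Vec3 l = normalize({1.0f, 1.0f, 0.0f});
    Vec3 v = normalize({-1.0f, 1.0f, 0.0f});
    std::printf("brightness = %.3f\n", shade(n, l, v, 32.0f));
    return 0;
}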
edit on 4-9-2018 by Aazadan because: (no reason given)



posted on Sep, 4 2018 @ 09:06 AM

originally posted by: Tempter
Ugh, it "approximates" ray tracing in RT using an algorithm (they call it an AI, lol).

Sorry, not true ray tracing.


I'm sorry, but if you were a big fan of this kind of tech you'd know that spending THIS many hardware cycles on shadows doesn't help anything at all.


That's how it has to happen. True ray tracing like you see used in pre-rendered footage is very slow computationally. It's simply not an efficient enough algorithm for real-time operations. Even if you had the hardware to do it in real time, it would be a massive waste of resources.



posted on Sep, 4 2018 @ 09:13 AM

originally posted by: ChaoticOrder
The number one thing that causes visual issues in video games is shadows, from flickering to misalignments and most other artifacts.


There's an easy way to include shadows that doesn't add any graphics processing overhead. When you calculate the pixels for your camera, if the light hits an object, you calculate further objects hit along that vector as unlit. It works well and creates custom shadows without any overhead.

The biggest issue with such an algorithm though is that it doesn't do any post processing on the shadows, and shadows look better without the hard edges this technique creates.
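
A toy version of the idea described above (made-up scene, nothing from a real engine): look along the vector out of the light, the first thing hit gets the light, everything further along the same vector stays unlit:

// Sketch of "first hit along the light's vector is lit, everything behind it is unlit".
// Objects are reduced to their hit distance along one light ray to keep this tiny.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Hit { const char* name; float distanceFromLight; bool lit; };

int main()
{
    // Three objects that all lie along the same vector out of the light.
    std::vector<Hit> hits = {
        {"crate",  7.5f, false},
        {"player", 3.0f, false},
        {"wall",  12.0f, false},
    };

    // Sort by distance along the ray; only the nearest hit receives the light.
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.distanceFromLight < b.distanceFromLight; });
    if (!hits.empty())
        hits.front().lit = true;   // everything after it stays unlit (in shadow)

    for (const Hit& h : hits)
        std::printf("%-6s at %5.1f -> %s\n", h.name, h.distanceFromLight, h.lit ? "lit" : "unlit");
    return 0;
}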



posted on Sep, 4 2018 @ 09:18 AM

originally posted by: joeraynor
The real elephant in the room is VR. In order to be convincing, VR needs to run at a very fast refresh rate (120 fps), and needs a pretty hefty resolution, something like 3840x2160 per eye. Those requirements, combined with the rest of what it takes to render a scene as modern developers prefer, with lighting and post-processing, mean that convincing VR needs a considerable amount of power in the GPU.


1080 Tis can handle that resolution, at the necessary FPS, with up to about 25 million polygons. I do it at work every day.

The hardware is currently enough for VR. Better hardware is nice of course, but it's not really needed right now.
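
For a sense of scale, taking the quoted numbers at face value: 3840x2160 is about 8.3 million pixels per eye, so two eyes at 120 fps comes to roughly 2 billion pixels per second, versus roughly 0.5 billion for a single 4K display at 60 fps - around four times the raw pixel throughput of ordinary 4K 60 fps gaming.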



posted on Sep, 4 2018 @ 12:21 PM
a reply to: AtomicKangaroo

This! My second-hand GTX 970 is still kicking ass on every game that I throw at it. I don't have any use for anything more at the moment. I'm planning on waiting until Cyberpunk 2077 comes out before I upgrade my GPU/CPU.



posted on Sep, 11 2018 @ 10:14 AM
a reply to: Aazadan


A single ray is pretty trivial to calculate and probably isn't worth sending to the GPU.

I was just giving examples of other problems that involve ray tracing. However, for a shooter game where a high number of projectiles are often in the scene simultaneously, it may make sense to do it that way, though probably not if you want to model gravity and bullet drop.
edit on 11/9/2018 by ChaoticOrder because: (no reason given)



posted on Sep, 11 2018 @ 10:24 AM

originally posted by: ChaoticOrder
I was just giving examples of other problems that involve ray tracing. However, for a shooter game where a high number of projectiles are often in the scene simultaneously, it may make sense to do it that way, though probably not if you want to model gravity and bullet drop.


Even without those things, if you have a travel time on the projectile, it's best to model it with physics. If you don't have a travel time, then ray tracing works, and like I said, ray tracing is actually fairly cheap for those types of weapons. You can do hundreds or thousands of rays in a frame or server tick and never have performance issues.

Most projectile physics are modeled with nothing more than vectors and some acceleration/deceleration forces. Again, not too complex. The collision checks for large numbers of projectiles are much more troublesome than the forces to move them.
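
For anyone curious, that really is about all there is to it; a bare-bones version with made-up numbers (one projectile, gravity plus a crude drag factor, stepped at 60 Hz):

// Sketch of basic projectile motion: position and velocity vectors plus gravity and drag.
#include <cstdio>

struct Vec3 { float x, y, z; };

int main()
{
    Vec3 position = {0.0f, 1.5f, 0.0f};        // muzzle height in metres
    Vec3 velocity = {400.0f, 0.0f, 0.0f};      // metres per second, fired flat
    const Vec3 gravity = {0.0f, -9.81f, 0.0f};
    const float drag = 0.02f;                  // crude per-second deceleration factor
    const float dt = 1.0f / 60.0f;             // one 60 Hz tick

    // Step the projectile each tick until it reaches the ground.
    while (position.y > 0.0f) {
        velocity.x += (gravity.x - drag * velocity.x) * dt;
        velocity.y += (gravity.y - drag * velocity.y) * dt;
        velocity.z += (gravity.z - drag * velocity.z) * dt;

        position.x += velocity.x * dt;
        position.y += velocity.y * dt;
        position.z += velocity.z * dt;
    }
    std::printf("hit the ground at x = %.1f metres downrange\n", position.x);
    return 0;
}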



posted on Sep, 11 2018 @ 10:37 AM
a reply to: Aazadan


originally posted by: Aazadan
There's an easy way to include shadows that doesn't add any graphics processing overhead. When you calculate the pixels for your camera, if the light hits an object, you calculate further objects hit along that vector as unlit.

That sounds like ray-traced shadows to me. Ray-traced shadows don't have soft edges unless you take multiple samples for a single light source. You can still get good hard shadows without all the artifacts by ray-tracing shadows with point lights.
edit on 11/9/2018 by ChaoticOrder because: (no reason given)



posted on Sep, 11 2018 @ 11:23 AM
Good thread.

I sub some folks on YT who go into detail on some of this.

Here is a link for more in-depth info.
edit on 11-9-2018 by Arnie123 because: Heh

edit on 11-9-2018 by Arnie123 because: Oops