
Why Nvidia's new RTX cards are awesome

posted on Aug, 21 2018 @ 04:51 PM
I'm seeing quite a lot of reviews popping up, mostly arguing that the RTX cards probably won't be worth the money because all they'll do is give you a few extra graphical enhancements which you won't even notice much anyway. This may be true right now with the very limited number of games that support the RTX series, but I'll try to explain why I think these cards are a very good step in the right direction and how they will really pay off for gamers.

Unlike pretty much every gaming video card we've seen before, the RTX series has a so-called "RT Core", which is specialized for carrying out ray-tracing operations. It also has something called a "Tensor Core", which is specialized for the kinds of operations used by neural networks built with frameworks such as TensorFlow. And it still has the regular compute cores, the only type of processing core a GPU normally has, which are optimized for tasks related to shading.



In simple terms, what this means is the card has three different kinds of core, each dedicated to a certain type of computation. For example, the Tensor Core can make complex AI algorithms possible in games, since they won't have to be done by the CPU (which is very bad at parallel computation), and as demonstrated by Nvidia during their SIGGRAPH presentation, we can also use neural nets for intelligent post-processing effects such as de-noising and super-resolution enhancements.
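
To make the de-noising idea concrete, here's a toy sketch in plain Python (nothing to do with Nvidia's actual networks): a sparsely sampled render where un-sampled pixels are filled in from their sampled neighbours. The RTX approach replaces the fixed averaging filter below with a trained neural network running on the Tensor Cores.

```python
# Toy illustration only: fill in pixels that were never ray-traced by
# averaging whatever sampled neighbours exist around them.
import random

W, H = 8, 8
SAMPLE_RATE = 0.25  # only 25% of pixels get a "ray-traced" value

# Sparse render: most pixels are None (never sampled).
image = [[random.random() if random.random() < SAMPLE_RATE else None
          for _ in range(W)] for _ in range(H)]

def reconstruct(img):
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            if img[y][x] is not None:
                out[y][x] = img[y][x]       # keep real samples as-is
                continue
            # Average the sampled neighbours in a 3x3 window.
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(H, y + 2))
                    for nx in range(max(0, x - 1), min(W, x + 2))
                    if img[ny][nx] is not None]
            out[y][x] = sum(vals) / len(vals) if vals else 0.0
    return out

denoised = reconstruct(image)
```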

The same thing applies to the ray-tracing core: 10 billion rays per second is enough for some serious ray-tracing. It can allow us to finally render things like shadows and reflections in a realistic way, without all the visual artifacts which have plagued 3D video games since the dawn of time, and filling a scene with rays also makes realistic global illumination possible. There are also countless other ways that ray-tracing can be used within a video game which don't improve the quality of the graphics at all.

For example, many games use ray-tracing to determine which object on the screen you're trying to click on, by shooting a ray "from your mouse" into the scene in the direction the camera is facing. Ray-tracing is also commonly used for things like computing the path of a projectile such as a bullet, and for other kinds of intersection/collision tests. More generally, the RT Core will be useful for solving a large range of problems that involve heavy use of linear algebra, because rays are really just vectors.
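
As a small illustration of the "rays are just vectors" point, here is a hypothetical mouse-picking test in plain Python. The names (camera_pos, ray_hits_sphere and so on) are made up for the example; the point is that it's all dot products and square roots, exactly the kind of math an RT Core is built to do in bulk.

```python
# Hypothetical picking test: shoot a ray from the camera through the mouse
# position and check whether it hits a sphere. The math is the standard
# ray/sphere quadratic.
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def normalize(v):
    length = math.sqrt(dot(v, v))
    return (v[0] / length, v[1] / length, v[2] / length)

def ray_hits_sphere(origin, direction, center, radius):
    """Solve |origin + t*direction - center| = radius for t (direction is unit length)."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                          # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None             # distance along the ray, or None if behind us

camera_pos = (0.0, 0.0, 0.0)
ray_dir = normalize((0.1, 0.0, 1.0))         # "through the mouse cursor"
print(ray_hits_sphere(camera_pos, ray_dir, (0.0, 0.0, 5.0), 1.0))  # hit ~4.1 units away
```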

So overall this technological innovation has great potential to make games better in ways we can't even imagine yet. The ways that AI can be applied to games are virtually endless, from on-the-fly animation systems to advanced procedural content generation algorithms. The new APIs for this tech will also allow interoperability between the ray-tracing, AI, compute, and rasterization systems, allowing for unprecedented levels of control over how a game is rendered and how the GPU solves tasks.

Some videos demonstrating a small fraction of what powerful neural nets can do for games:

edit on 21/8/2018 by ChaoticOrder because: (no reason given)



posted on Aug, 21 2018 @ 04:57 PM
a reply to: ChaoticOrder

I was watching a youboob video about this exact topic earlier on lunch, and I must say that I wish I had waited to upgrade until a card with a core dedicated to ray tracing came out. In my defense, at the time of my purchase that was only a rumor. But man, even having that available is freaking awesome and really makes games look great!

Good thread BTW


edit on 2/19/2013 by Allaroundyou because: (no reason given)



posted on Aug, 21 2018 @ 05:05 PM
a reply to: ChaoticOrder

$1000 for a video card?

edit on 21/8/18 by LightSpeedDriver because: Typo



posted on Aug, 21 2018 @ 05:08 PM
$839 for the RTX 2080

$1000 and up for the RTX 2080 ti.

I'd love to have one, and a new puter to use it in



posted on Aug, 21 2018 @ 05:25 PM
a reply to: ChaoticOrder

My GTX is doing an amazing job right now so I will wait for the prices to come down but I'm looking forward to getting one of these in the future.

Thanks for the info.



posted on Aug, 21 2018 @ 05:28 PM
2080ti lol. That could probably run a PC by itself.



posted on Aug, 21 2018 @ 05:32 PM
Ugh, it "approximates" ray tracing in RT using an algorithm (they call it an AI, lol).

Sorry, not true ray tracing.


I'm sorry, but if you were a big fan of this kind of tech you'd know that spending this many hardware cycles on shadows doesn't help anything at all.



posted on Aug, 21 2018 @ 05:41 PM

originally posted by: LightSpeedDriver
a reply to: ChaoticOrder

$1000 for a video card?

Yeah, it is pretty steep, plus it's the first generation of gaming cards made for ray-tracing, so I'll probably wait until the architecture has matured a bit more and the price has evened out before buying one. Having said that, I think their pricing scheme is actually pretty fair considering it will cost more to manufacture such a new architecture: the cost of a GTX 1080 is still around $500 USD and they're asking around $700 USD for the RTX 2080 (non-Founders Edition). That seems pretty reasonable considering you get so much extra power for ray-tracing and tensor calculations. Obviously the issue is that those extra cores aren't really of much use yet, so the price won't really be justifiable for most people at this point in time.

I also think that these types of gaming cards could be the beginning of the end for crypto-miners buying up gaming cards and using them for mining, because they won't really be able to use those extra cores for anything, so they'd be paying for something they don't need, and it will be far more cost-effective to simply buy some sort of specialized ASIC machine for mining. That will probably bring gaming card prices back down to Earth, closer to what they were in the past. It will take a while though.
edit on 21/8/2018 by ChaoticOrder because: (no reason given)



posted on Aug, 21 2018 @ 05:47 PM
1) - Nvidia hopes this will end SLI for good (me too)
2) - 3 separate chips would have been better
3) - Looks worthless for now, as it will take an extra 2 years or so for the app/game producers to take advantage of the extra goodies and cores.
By the time the above happens, Nvidia will have improved on the technology a good deal.
I will wait.



posted on Aug, 21 2018 @ 05:50 PM

originally posted by: Tempter
Ugh, it "approximates" ray tracing in RT using an algorithm (they call it an AI, lol).

Sorry, not true ray tracing.

No, it does sparse ray-tracing and then uses an advanced de-noising neural network to remove the noise which results from under-sampling the scene. It's still real ray-tracing, it just uses a number of rays the GPU can manage, and then the AI fills in the rest of the detail that wasn't computed by ray-tracing. However, I imagine most of the RT Core's time will be spent on reflection and shadow rays rather than the primary rays. With 10 gigarays per second I'm a bit surprised they still have to do sparse ray-tracing, though; it seems like 3 or 4 rays per pixel would be enough for full-blown ray-tracing without de-noising, and if I recall correctly he did show a real-time scene which he said used 3 or 4 rays per pixel.
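
For what it's worth, the raw numbers back up that surprise. A quick back-of-the-envelope budget, assuming the quoted 10 gigarays per second were fully usable (which it won't be once shadow, reflection, and bounce rays are added on top of the primary rays):

```python
# Back-of-the-envelope ray budget (hypothetical best case).
GIGARAYS_PER_SEC = 10e9

def rays_per_pixel(width, height, fps):
    return GIGARAYS_PER_SEC / (width * height * fps)

print(round(rays_per_pixel(1920, 1080, 60)))   # ~80 rays/pixel at 1080p, 60 fps
print(round(rays_per_pixel(3840, 2160, 60)))   # ~20 rays/pixel at 4K, 60 fps
```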


I'm sorry, but if you were a big fan of this kind of tech you'd know that spending this many hardware cycles on shadows doesn't help anything at all.

That's why there's a dedicated core for doing the ray-tracing, so it doesn't impact the other rendering processes. By modeling actual light rays we can remove all the visual artifacts that arise when using methods such as shadow mapping, and we can create much more realistic shadows which have soft edges or color tints caused by semi-transparent surfaces, as well as caustics and other effects caused by the refraction of light rays through semi-transparent objects. The number one cause of visual issues in video games is shadows, from flickering to misalignments and most other artifacts.
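
A rough sketch of why ray-traced shadows come out soft instead of hard-edged: rather than one shadow ray toward a point light, several rays are fired toward random points on an area light, and the fraction that get through gives a partial shadow value. This is just the general Monte Carlo idea, not the actual RTX pipeline, and the occlusion test here is a made-up stand-in for a real scene intersection.

```python
import random

def soft_shadow(surface_point, light_center, light_radius, occluded, samples=16):
    """Fraction of the area light visible from surface_point
    (0.0 = fully in shadow, 1.0 = fully lit).
    `occluded(point, light_sample)` stands in for a real scene ray test."""
    visible = 0
    for _ in range(samples):
        # Jitter a sample point across a square area light (toy approximation).
        lx = light_center[0] + random.uniform(-light_radius, light_radius)
        lz = light_center[2] + random.uniform(-light_radius, light_radius)
        sample = (lx, light_center[1], lz)
        if not occluded(surface_point, sample):
            visible += 1
    return visible / samples

# Toy occluder: anything aimed at the left half of the light is blocked,
# so this point comes out roughly half-lit (a soft penumbra value).
blocked = lambda point, sample: sample[0] < 0.0
print(soft_shadow((0.0, 0.0, 0.0), (0.0, 5.0, 0.0), 1.0, blocked))
```
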
edit on 21/8/2018 by ChaoticOrder because: (no reason given)



posted on Aug, 21 2018 @ 06:04 PM
I am waiting to see the benchmarks on performance for 2018 programs and games on the 2080 vs the 1080 before I make up my mind.

For instance, if we see a 1080 rendering a game scene at 1080p at 120 fps, and we see the 2080 do it at 138, I will be very disappointed.

Ray tracing is nice, but it is sort of a marginal utility return vs the cost of these cards. The kind of thing it will allow is 4th-degree reflections (reflection of a reflection of a reflection of a reflection), and that is cool, but not something you would miss if it wasn't there. It will also allow all the shadows to be rendered with computed edge gradients rather than the common hard shadows. Global illumination is nice, but tbh, game devs have been faking GI by adding a discrete lighting value to triangles in certain volumes for ages. Most people would not be able to tell the difference between true GI and faked GI.

I am not particularly impressed by Nvidia saying this card is 12x better at ray tracing than the last gen when the last gen wasn't aimed at ray tracing, and this gen may have made big sacrifices to get RT to that level. If the AI core and the RT core end up being employable for tasks which scale well with vanilla rendering, it will at least be worth it.

The real elephant in the room is VR. In order to be convincing, VR needs to run at a very fast refresh rate (120 fps), and needs a pretty hefty resolution, something like 3840x2160 per eye. Those requirements, combined with the rest of what it takes to render a scene as modern developers prefer with lighting and post processing, mean that convincing VR needs a considerable amount of power in the GPU.
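
Plugging those VR numbers in shows how big that gap is (a rough estimate that ignores everything except raw pixel count):

```python
# Rough pixel-throughput estimate for the VR figures above.
width, height, eyes, fps = 3840, 2160, 2, 120
vr_pixels_per_second = width * height * eyes * fps
print(vr_pixels_per_second)            # ~2.0 billion shaded pixels per second

# For comparison, a flat 1080p monitor at 120 fps:
print(1920 * 1080 * 120)               # ~0.25 billion, roughly 8x less work
```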

The most logical thing for Nvidia to focus on would be providing enough raw power to cross the performance gap for VR (and regardless of whether VR is your thing or not, the power would benefit everyone).

If the 2080 ends up only being a marginal increase, it may be worth it to pick up one of the 250-dollar 1080s we are about to see in September, and wait a gen or two to upgrade past it (2-4 years). I am excited to see how they actually perform, but also a hair worried (and I am someone who would willingly pay the 800 USD if the performance justified it, to have a future-proof GPU for the next 6 years like my current 670). My concern is less about these fancy but unnecessary visual improvements and more about simply having a card which will stay current for 5+ years.

AMD, on the other hand, is really shining this year. I can't wait to see what their Zen 2 line of CPUs looks like in the spring. They are giving Intel a run for their money (to the point where Intel took a worryingly huge hit to their wallet).

Something Intel has up their sleeve, though, is their "Optane" system, which would combine RAM and disk space and allow for absolutely insane RAM sizes like 512 GB on the cheap. That will be truly disruptive when they release it.



posted on Aug, 21 2018 @ 06:25 PM
a reply to: joeraynor


For instance, if we see a 1080 rendering a game scene at 1080p at 120 fps, and we see the 2080 do it at 138, I will be very disappointed.

Well, that's still a speed increase of about 15%, which is not negligible; I wouldn't really expect to see an increase any higher than that.



posted on Aug, 21 2018 @ 06:30 PM
a reply to: ChaoticOrder

tensor and RT cores are just butchered shader/compute cores, specialized for those specific operations, so they can stick more of them on a chip, at the cost of regular compute cores of course. there's nothing preventing regular compute cores (so the GPUs that are available right now, from both nvidia and AMD) from doing raytracing, other than software and available processing power.

now, AMD GPUs tend to have higher raw performance (vega beats top-end consumer pascals in scientific benchmarks) that hasn't really been utilized properly in games so far. all we have to do is wait for AMD to release their version of a raytracing library for games, because you can bet nvidia will make sure their raytracing software runs only on their RTX cards, despite the fact there's nothing preventing it - technically - from working on everything else. one may even suspect AMD was just waiting for this opportunity - "hey, guess what - raytracing works on our cards as well - and it works better than on yours!"

unless you forgot that every time there's new technology made available for games, it works worse on nvidia cards, even if they're the ones releasing it. just look at shadow of the tomb raider raytracing demo - the one with framerate visible - and explanations from the devs. "it's early version, and in the end it'll be available as a patch after game's release - but it'll be faster, promise!"

i mean, come on. nvidia is a company that butchered FP16 performance in their consumer cards to sell more teslas, while AMD offered double FP16 performance in vega (and in the PS4 PRO) to be used in games, and now they're fixing it by adding tensor cores (and charging extra for them), under the excuse of "AI-powered antialiasing and denoising"?

those cards are interesting, no doubt about that. still, they're butchered in more than one way, and don't be surprised when AMD cards end up being faster at the very things these cards are supposed to be superior at. there's plenty that can be optimized when it comes to games, and i'm speaking as a programmer familiar with some deeply optimized raytracing-on-the-cpu algorithms invented by demoscene coders a long time ago.

here's some reminder for you:
www.tomshardware.com... - yay, up to 12x performance!
blog.gpueater.com... - OH WAIT

butchering compute cores, then adding specialized cores is the current way of nvidia. it makes sense to a degree - the raw power for those specific tasks is higher - but in the end, software uses everything, and the workload is never divided perfectly across all the components, because every software, every game engine, has different needs.

and no, RT cores alone won't do a thing when "everyone switches to raytracing" - they're just helper cores for compute cores.
www.reddit.com...



posted on Aug, 21 2018 @ 06:31 PM
a reply to: ChaoticOrder

The 1080 benchmarked at about 35-60% faster framerates than the 980 in apples-to-apples comparisons, though, and didn't represent a major cost increase. I think this first-gen stuff has traded the normal performance leap for the RT. If the AI core can be put to good use for lots of tasks though, great.



posted on Aug, 21 2018 @ 07:05 PM
a reply to: jedi_hamster


there's nothing preventing regular compute cores (so the GPUs that are available right now, from both nvidia and AMD) from doing raytracing, other than software and available processing power.

I'm well aware, I've written a ray-tracing engine using OpenCL. Having specialized cores for the task of ray-tracing makes it an order of magnitude faster.


don't be surprised when AMD cards end up being faster at the very things these cards are supposed to be superior at.

I've always been an AMD fan and tended to hate on Nvidia for their prices and proprietary approach to everything. But the fact is they make better graphics cards; they are more power efficient and generate less heat. Yes, AMD is getting better, but they still aren't as good when it comes to graphics cards. The only reason I own a GTX 1080 now is because I got sick of waiting for AMD to release Vega. I thought AMD would save the day with a cheaper option than the 1080, but the Vega 64 8GB card was priced well over $1000 AUD when I first looked. I'm quite glad I bought a 1080 now, because it's actually worth more than when I first bought it and it doesn't have the overheating issues that most of the AMD cards I've owned in the past had.
edit on 21/8/2018 by ChaoticOrder because: (no reason given)



posted on Aug, 21 2018 @ 07:12 PM
a reply to: joeraynor

You cannot expect the same massive jump in power each time, especially as we reach the limits of how small we can make a transistor and of the tactics we use to pack them together. An increase of 15% on top of an already massive amount of power is probably somewhat equivalent to the power difference between a 980 and a 1080, and that growth can compound very quickly after a few generations.
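
To put the compounding point in numbers (purely illustrative, assuming a flat 15% per generation, which real generations won't follow exactly):

```python
# Illustrative only: a flat 15% per-generation gain still compounds.
gain_per_gen = 1.15
for gens in range(1, 6):
    print(gens, round(gain_per_gen ** gens, 2))
# After 5 such generations the cumulative speedup is about 2x.
```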



posted on Aug, 21 2018 @ 07:25 PM
a reply to: ChaoticOrder

as for the specialized cores, the question is how big the difference really is in real-life scenarios. we'll have to wait to find out. they were saying the same thing about the tensor cores in the tesla v100 - see the benchmark comparing vega64 to it.

as for the power draw, i don't think RTX cards will be that cool, not with raytracing enabled - and without raytracing, those cards just aren't worth the money.

let's just hope devs will use directx and vulkan to implement raytracing (yeah, right... the first is in an experimental stage, the second is in development), instead of nvidia's own api, so that AMD cards can shine with some rays as well.



posted on Aug, 21 2018 @ 07:27 PM
a reply to: ChaoticOrder

Moore's law is slowing, but we are still making feverish improvements every year.

I am guessing we will find a way to get sub-2nm transistors from our current 12-14nm before we have to convert to some mysterious new medium or process. In a diamond structure, the carbon atoms are only 0.154nm apart. I have heard it said that they already have the roadmap to transistors that are only a few atoms wide. I think the only reason we are not in 5nm land today is that Intel got complacent in their once-sizable lead over AMD. Moore's law is seemingly slowing down, especially for CPUs (where they are trying to parallelize by adding more cores, and yet those cores aren't being utilized).

Nvidia is currently performing well above AMD in the GPU market, which I think may be part of why they chose this time to emphasize their RT project instead of continuing the relentless pace up the performance mountain while they are in the lead. I am glad AMD got their chance to shine in the CPU field, even if they sort of borrowed some R&D from others to get to 7nm, while "evil empire" Intel did it the old-fashioned way in house.

The back and forth jockeying between Nvidia / AMD and Intel / AMD this year will lead us to faster improvements for a few years probably. The competition here is the primary mover, more so than consumer demand I would guess.



posted on Aug, 21 2018 @ 07:37 PM
They've got beyond classic Whitted ray-tracing, where you just fire rays from the eye-point and then reflect them off or refract them through objects. They're going for true global illumination methods, where they trace photons of light from every light source, find out which object they hit, then apply Monte Carlo randomization to decide what happens to that photon - is it absorbed, reflected, or refracted, and in which direction. That provides true and accurate soft shadows and caustic effects, producing photorealistic rendered images.
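
A minimal sketch of that Monte Carlo step, with made-up probabilities; a real renderer would derive them from the surface's material properties rather than hard-coding them:

```python
import random
from collections import Counter

def photon_event(p_absorb=0.2, p_reflect=0.5, p_refract=0.3):
    """Randomly decide what happens to a photon at a surface hit
    (probabilities here are invented for the example)."""
    r = random.random()
    if r < p_absorb:
        return "absorbed"
    if r < p_absorb + p_reflect:
        return "reflected"
    return "refracted"

# Over many photons the outcomes converge on the chosen probabilities.
print(Counter(photon_event() for _ in range(100_000)))
```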

RT cores probably just do the linear algebra for ray-tracing triangle meshes and bounding volume hierarchies. Having custom functions to do triangle intersection and bounding sphere/box tests would speed things up. They were already caching and batching rays that go through the same space.
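
For reference, the bounding-box part of that is usually the standard "slab" test shown below (plain Python; whether the RT Core implements exactly this is speculation, but it's the textbook check used when walking a bounding volume hierarchy):

```python
def ray_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray origin + t*dir hit the axis-aligned box for some t >= 0?
    inv_dir holds the precomputed reciprocals of the ray direction components."""
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

origin = (0.0, 0.0, 0.0)
inv_dir = (1.0 / 0.577, 1.0 / 0.577, 1.0 / 0.577)   # ray pointing along the diagonal
print(ray_aabb(origin, inv_dir, (1.0, 1.0, 1.0), (2.0, 2.0, 2.0)))  # True (box is hit)
```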



posted on Aug, 21 2018 @ 08:06 PM
My goodness, I could run 50 of my browsing computers on the wattage one RTX 2080 consumes (250W TDP). But it does look good nevertheless; I particularly liked the Real-Time Character Control, it's awesome. Thanks for alerting us.


