a reply to:
ChaoticOrder
tensor and RT cores are just butchered shader/compute cores, specialized for those specific operations so more of them can be packed onto a chip - at the cost of regular compute cores, of course. there's nothing preventing regular compute cores (i.e. the GPUs available right now, from both nvidia and AMD) from doing raytracing, other than software and available processing power.
now, AMD GPUs tend to have higher raw performance (vega beats top-end consumer pascals in scientific benchmarks), which hasn't really been utilized properly in games so far. all we have to do is wait for AMD to release their own raytracing library for games, because you can bet nvidia will make sure their raytracing software runs only on their RTX cards, despite the fact there's nothing - technically - preventing it from working on everything else. one may even suspect AMD was just waiting for this opportunity: "hey, guess what - raytracing works on our cards as well - and it works better than on yours!"
and don't forget that every time new technology is made available for games, it runs worse on nvidia cards at first, even when they're the ones releasing it. just look at the shadow of the tomb raider raytracing demo - the one with the framerate visible - and the explanations from the devs: "it's an early version, and in the end it'll be available as a patch after the game's release - but it'll be faster, promise!"
i mean, come on. nvidia is the company that butchered FP16 performance in their consumer cards to sell more teslas, while AMD offered double-rate FP16 in vega (and in the PS4 Pro) for use in games - and now nvidia is "fixing" it by adding tensor cores (and charging extra for them), under the excuse of "AI-powered antialiasing and denoising"?
those cards are interesting, no doubt about that. still, they're butchered in more than one way, so don't be surprised when AMD cards end up being faster at the very things these cards are supposed to be superior at. there's plenty that can be optimized in games, and i'm speaking as a programmer familiar with some deeply optimized raytracing-on-the-cpu algorithms invented by demoscene coders a long time ago.
here's some reminder for you:
www.tomshardware.com... - yay, up to 12x performance!
blog.gpueater.com... - OH WAIT
butchering compute cores and then adding specialized ones is nvidia's current approach. it makes sense to a degree - the raw power for those specific tasks is higher - but in the end, software uses everything, and the workload is never divided perfectly across all the components, because every piece of software, every game engine, has different needs.
and no, RT cores alone won't do a thing when "everyone switches to raytracing" - they're just helper cores for compute cores.
www.reddit.com...