Beyond Turing – Ray Tracing and the Future of Computer Graphics

Alright guys, how's it going? In my recent Nvidia Turing analysis video I noted that the presentation started off with a history lesson. That got me thinking about how best to start this video off, and in the end I opted for my own little history lesson: a history of graphics from my perspective.

My family got our first game console when I was around three or four years old. It was the Atari VCS, which launched way back in 1977. The Atari was the first console to make use of sprites, which we all know about today. They weren't called sprites back then, mind you; that name came with the Commodore 64 computer. But it's clear from these games (more colorful than I recall, actually) that we're looking at sprites by another name.

In 1980, though, we got something very different on the Atari: one of the first real 3D games, Battlezone. By today's standards it was primitive indeed, but back then, I assure you, it was quite impressive. Note, however, the see-through graphics: there was no hidden line removal. Everything appeared to be made out of wire, and indeed the name given to the technology was wireframe graphics.

The Atari was very late 70s and early 80s, but in 1982 a home computer launched that would change my world: the ZX Spectrum, an 8-bit computer developed by UK company Sinclair Research. Here we can see the original model with rubber keys (yes, actual rubber keys). The model I owned, and the first computer I ever owned, was this one, the ZX Spectrum+, which launched at the end of 1984. The Spectrum was an incredibly popular computer, in the UK especially, and it went on to sell 5 million units worldwide. Anyone around at the time will know of the strong rivalry between the Spectrum and the Commodore 64, a computer which was much more popular in the United States. It was the cheap prices of computers like the Spectrum which really kicked off the PC gaming industry.

The first Spectrum games barely looked better than the Atari VCS, but it wasn't long before some very smart programmers started getting the best out of these cheap computers. One of them was David Braben, who alongside Ian Bell co-developed the smash hit space combat and trading sim Elite. Even at around one frame per second, and still using wireframe graphics (but now with hidden line removal technology), Elite was a game-changing, and for me life-changing, experience, for it allowed me to lose myself in a fictional game universe. Elite went on to be one of the greatest, most famous, and most influential computer games ever created.

The 16-bit era was soon upon us, with computers like the Atari ST and Commodore Amiga arriving in the mid-to-late 80s. While I would go on to own both of those computers, I still had my Spectrum until late '87. It was a very old and underpowered computer by that stage, but some great programming techniques made for some almost unbelievable advances. A British company named Incentive Software launched the game Driller, created with their in-house 3D game engine, Freescape. Up to then, even 16-bit computers were thought capable only of wireframe graphics, but Freescape proved that even the lowly ZX Spectrum could manage solid 3D. Forget that it was running at under a frame per second; that didn't matter. What mattered was that our virtual worlds now looked even more realistic.

A few months later an obscure game by the name of Zarch launched, a strange flight combat game created by David Braben, with some incredible solid 3D graphics running on the extremely powerful Acorn Archimedes home computer. Acorn Computers went defunct in 2000, but you may have heard of one of its subsidiaries: ARM Holdings.
Zarch was clearly something different from Driller: solid 3D at a high frame rate, and game worlds became even more immersive. By 1993, Elite had become Frontier: Elite II, a solid 3D universe based on Newtonian physics, and it just blew my mind.

Also in 1993, a company named Nvidia was born. Today it's quite amusing to realize I had been gaming for almost 15 years before Nvidia even existed. With Nvidia came a new gaming era. IBM-compatible PCs with Windows were becoming more popular, and home computers like the ST and Amiga were rapidly falling out of favor. The Windows PC had a major advantage in upgradability, and graphics companies like Nvidia, ATI, and 3dfx vied for our cash with major graphics card releases as often as every six months. I kept my Amiga for university, though, and gamed on a Sony PlayStation for much of the later 90s, so it was a long time before I bought my first real graphics card; as late as 2002, in fact, when I finally had to buy a GeForce4 MX 440 in order to play Morrowind, a game which took me mere seconds to fall in love with. By this point, hardware and software advances had made Frontier's virtual universe look flat and uninteresting.

I guess now is a good time to halt the history lesson and discuss what was going on under the hood all this time. We saw that the early games' wireframe graphics were made of polygons. Polygons are themselves a collection of triangles, and a triangle can be represented by three points, called vertices, in 3D space. Each vertex contains information like its position in 3D space, as well as color, texture, and which way it's facing. A process called rasterization takes a stream of vertices and transforms them into the corresponding image on your screen. The rasterization algorithm projects triangles onto the screen; in other words, we go from a 3D representation of a triangle to a 2D representation, using perspective projection. The next step in the algorithm is to fill in all the pixels of the image that are covered by that 2D triangle. This rasterization algorithm is object-centric, because we start from the geometry and project back to the camera, creating the initial image on screen. This is then further processed, or shaded, depending on elements like light sources interacting with the pixel and whatever textures are applied. What we get in the end is a final color for each pixel on our screen.
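To make that projection step concrete, here's a minimal sketch in Python. It's my own illustration rather than anything from the video: it perspective-projects the three vertices of a triangle onto a virtual screen, which is the step just described. The resolution and focal length are assumed values, and a real rasterizer would then fill and shade the pixels the 2D triangle covers.

    # Minimal perspective projection: the first step of rasterization.
    # Illustrative sketch only; real pipelines use 4x4 matrices,
    # clipping, and depth buffering.

    WIDTH, HEIGHT = 640, 480   # assumed screen resolution
    FOCAL = 500.0              # assumed focal length, in pixels

    def project(vertex):
        """Project a 3D point (camera space, z > 0) to 2D pixel coordinates."""
        x, y, z = vertex
        # Perspective divide: farther points land nearer the screen centre.
        sx = WIDTH / 2 + FOCAL * x / z
        sy = HEIGHT / 2 - FOCAL * y / z
        return (sx, sy)

    # A triangle defined by three vertices in 3D space...
    triangle = [(-1.0, 0.0, 5.0), (1.0, 0.0, 5.0), (0.0, 1.5, 4.0)]

    # ...becomes a 2D triangle on screen; rasterization then fills every
    # pixel that 2D triangle covers, and shading computes each pixel's colour.
    print([project(v) for v in triangle])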
Programming techniques and hardware advanced, but the underlying technique of creating objects out of points in 3D space and then rasterizing them to the screen remains to this day. So why use this particular method? Well, first of all, it's fast; very fast, allowing for over 60 frames per second. One of the major reasons why it's fast is that this is how it's always been done, and if you want to get faster at doing something in particular, it's no real surprise when the hardware evolves to get better at that task. Graphics cards are built to rasterize.

So what's the problem, then? We've always done it this way, and as you saw in the first five minutes, graphics have improved massively over the years. Look at Elite to Driller, Driller to Frontier, Frontier to Morrowind, and we can continue on, let's say, from Morrowind to Crysis, released five years later in 2007: yet another huge increase in graphics quality. But what about since Crysis? Everybody knows the meme "but can it run Crysis?", and the reason we still say that is that Crysis was the last truly jaw-dropping moment in PC gaming. It's over ten years old, and we now aren't making anywhere near the same level of advancement in graphics as we previously did. There are a couple of reasons for this, but it all essentially comes down to the problems of rasterization. It's not photorealistic; it's a hack, an approximation of real life that we can make more realistic with shadow mapping, ambient occlusion, depth of field, and so on. But each new technology we develop to make games look more realistic adds to the artists' workload, which means an increase in production costs and an increase in production time. This is because rasterization, while admittedly looking good, has grown extremely complex. The artists are spending too much time on the technicalities of a project rather than the art itself.

So what, if any, are the alternatives to rasterization? Remember, rasterization is object-centric, in that we start from the geometry and project back to the camera. An older method known as ray casting reverses this process, casting rays from the camera to the objects. This is done with one ray per pixel on the screen. When the ray hits an object, or, using the correct terminology, when an object in the scene intersects the ray, the color information of the point closest to the camera, which is determined by its texture, is recorded for that pixel. But let's say we have 1000 by 1000 pixels: that's one million rays to be cast, with each ray being many lines of code. Clearly this is computationally expensive, and the result ends up looking very flat, as we can see here in the case of Wolfenstein 3D, which actually used this rendering method. The reason it looks flat is that the color information comes straight from the texture; there's no lighting or shadow affecting anything.
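Here's a tiny ray caster in that same spirit; again my own sketch, not Wolfenstein's actual (column-based) renderer. One ray per pixel, the nearest hit wins, and the pixel simply takes the flat texture colour of that object. The sphere scene and colours are invented for illustration.

    import math

    # Ray casting in miniature: one ray per pixel, nearest hit wins,
    # and the pixel colour comes straight from the object's texture.

    def hit_sphere(origin, direction, center, radius):
        """Distance t along the ray to the sphere, or None on a miss."""
        oc = [o - c for o, c in zip(origin, center)]
        b = 2 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * c        # direction assumed normalised, so a == 1
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 0 else None

    scene = [                        # (center, radius, flat texture colour)
        ((0.0, 0.0, 5.0), 1.0, "red"),
        ((2.5, 0.0, 9.0), 1.0, "brown"),
    ]

    def cast_ray(origin, direction):
        nearest_t, colour = float("inf"), "background"
        for center, radius, col in scene:
            t = hit_sphere(origin, direction, center, radius)
            if t is not None and t < nearest_t:
                nearest_t, colour = t, col
        return colour                # no lighting, no shadows: the flat look

    print(cast_ray((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))   # -> red

Wolfenstein 3D's flat look follows directly from that last return: the colour recorded for the pixel never accounts for lights or other objects.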
So, going back to the beginning of this video, where I mentioned Nvidia's recent Turing presentation and its short graphics-history opening movie: part of that showed a demo from 1978, "The Compleat Angler", by a gentleman named John Turner Whitted, working at Bell Laboratories. This was the first true ray tracing demonstration, where we can see shading, where some light is blocked by objects; reflection, where we see the image of one object reflected in another object; and refraction, where light passes through transparent or semi-transparent objects. This demo took almost two weeks to render, and if you think it doesn't look all that impressive, recall what gaming graphics were like in 1978.

Remember that with ray casting, a pixel's color on screen simply came from the texture of the first object that intersected the ray. In this demonstration, showing two apples on a table and using the ray casting algorithm, the ray would send back the color information of the table: in this case, a rather flat-looking brown. But watch what the ray tracing algorithm that Turner Whitted introduced does. Now, when a ray hits the table surface, instead of the color information being sent to the display immediately, the ray generates up to three additional rays: shadow, reflection, and refraction rays. Shadow rays are traced from the surface towards each light source, and if any opaque object (the apples, in this example) intersects between the surface and the light source, then the surface must be in shadow. In this case we can see that the red apple is blocking a light source, causing the surface to be in shadow, and to the side we can see the green apple's much darker shadow. So rather than the final pixel color being updated to a flat brown like we'd get with ray casting, we instead have a shaded brown, thanks to ray tracing. If a ray hits a reflective surface, a reflection ray is traced at the mirror reflection angle, and the first object it intersects (in this case the table) will be seen in the reflection. Obviously this object can be in shadow too, and in this case, again, we can see that it is. What's important to grasp here is that with this ray tracing algorithm, the final pixel you see on screen is a much more complete combination of object color, materials, and lighting interaction, resulting in a far more realistic overall image. Imagine trying to get the true pixel color in a case like this using rasterization. Think about it simply: in the case of rasterization, projecting the image to the screen from the object seems flawed at a fundamental level, while tracing rays from the camera to the object seems a far more sensible method, at least if you desire realistic-looking images.
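A minimal sketch of those secondary rays, building on the ray casting snippet above (again my own illustration; refraction and recursion depth limits are left out): on a hit, a shadow ray asks whether anything opaque sits between the point and the light, and a reflection ray is spawned at the mirror angle.

    import math
    # Reuses hit_sphere() and scene from the ray casting sketch above.

    def normalise(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v)

    def in_shadow(point, light_pos):
        """Shadow ray: does an opaque object block the path to the light?"""
        to_light = normalise([l - p for l, p in zip(light_pos, point)])
        dist_to_light = math.dist(point, light_pos)
        for center, radius, _ in scene:
            t = hit_sphere(point, to_light, center, radius)
            if t is not None and t < dist_to_light:
                return True          # surface is in shadow: darken its colour
        return False

    def reflect(direction, normal):
        """Reflection ray direction at the mirror angle: d - 2(d.n)n."""
        d_dot_n = sum(d * n for d, n in zip(direction, normal))
        return tuple(d - 2 * d_dot_n * n for d, n in zip(direction, normal))

    # e.g. shading a point on the table: flat brown if lit, a darker brown
    # if in_shadow(...), plus whatever colour the reflected ray brings back.

In a full Whitted tracer the reflected (and refracted) rays are traced recursively through the same routine, which is exactly why each pixel can cost so much: every hit can spawn several more rays.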
The major drawback of ray tracing, however, is likely obvious to you by now: it is, of course, incredibly computationally expensive, with each individual ray being multiple lines of code and bouncing off objects to create even more rays, all for one pixel. This is why, traditionally, ray tracing has been done offline on render farms, with single frames taking hours to render. That's a far cry from the minimum 60 frames per second requirement of high-end gaming. So perhaps it wasn't surprising when, back in March, the Unreal Engine Star Wars demo showing real-time ray tracing was met with a high degree of cynicism, even with the use of extremely high-end Nvidia supercomputers. In the subsequent months all became clear: Nvidia was about to shift the gaming industry in a new direction, and with the recent release of their Turing architecture we learned that this would be done through a combination of rasterization and ray tracing, what's been called hybrid rendering, branded as RTX. At the same time, Microsoft revealed the new DXR update, DirectX Raytracing, for the DirectX 12 API. Essentially, the ray tracing elements of DXR and RTX are limited to reflections, shadows, area lighting, and ambient occlusion, and this is what people were actually seeing in the Unreal Engine Star Wars demo. In other words, this is not full scene ray tracing yet; this is ray tracing only specific scene elements while the rest is still rasterized. Hybrid rendering.

Turing's RTX relies heavily on new denoising hardware: its Tensor cores. Denoising is a huge part of ray tracing. With each ray being so computationally expensive, casting a large number of rays, especially in real time, is currently out of the question with today's hardware, but unless you leave a ray tracer running long enough to fill in the scene, you end up with a lot of unpleasant-looking noise. Even with optimizations to help choose which rays are cast, you still end up with a lot of noise. Turing, I believe, casts only one or two rays per pixel in real time, and such a small number of rays will leave a very noisy image, guaranteed. Clearly, for gaming purposes, neither leaving the ray tracer to run long enough to fill in the scene nor letting the noise remain is acceptable: the first would dramatically lower frame rates, and the second would dramatically lower image quality. To show what I mean, here is an example RTX scene using ray traced shadows with hundreds of rays per pixel, a number far higher than any real-time ray tracing graphics card is capable of today. Dropping down to only one spp (that's one sample per pixel, in line with Turing's real-time ray tracing), we see how noisy the shadows look. The magic occurs when Turing's denoising hardware gets to work: the image is cleaned up massively, even at only one ray per pixel, and the final look is very close to the ground truth with over 100 rays per pixel. Nvidia claims that their denoising hardware can do this in under one millisecond. Impressive stuff.

We can see from this recent Electronic Arts SEED presentation on hybrid rendering, though, that we're really still looking at a series of hacks. That's what rasterization is today, a series of hacks, and one of the main reasons for implementing ray tracing in the first place is that artists are trying to scale back on these hacks, in order to make their lives easier and to bring down production costs. Hybrid rendering, in its current implementation at least, simply appears to be replacing one set of hacks with another, and it's still only for shadows and reflections. It's also important to realize that even full scene ray tracing still doesn't give physically perfect results. It comes closer to the real world than rasterization, but it's no simulation of reality. Just as rasterization engines have to cheat to achieve reflections and refractions, a ray tracer has to cheat to get soft shadows, caustics, and global illumination. RTX and DXR will also require a massive industry-wide investment from studios to adopt. So is this really worth all the time and effort, or is there a better way?

Enter path tracing. The camera again sends rays into the scene, but unlike ray tracing, which traces new rays to points of light, the ray now bounces around the objects in the scene, collecting information like color and material with each bounce. When a ray is finished bouncing, the final result is taken as a sample, as in the samples per pixel, or spp, previously discussed. Each sample is added to the average for that pixel, and the final pixel color you see on screen is the average of all the sample values for that pixel. Tens, hundreds, or even thousands of samples can be taken for each pixel, depending on how capable your hardware is and also how long you're happy to wait. Path tracing also makes use of lights of different sizes, rather than just simple light points, and this allows for free soft shadows without the need for hacks, as larger light surfaces mean softer shadows. Tracing the path of rays also gives color bleeding for free, where the color of an intensely colored object bleeds onto nearby surfaces, and of course caustics: light focused through materials like glass and water. Path tracing behaves very similarly to how light behaves in the real world. It's still not a 100% physically accurate model, of course, but this global illumination gets us pretty close to photorealism.
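The core loop is surprisingly small. Here's a toy sketch of the sample averaging just described; to keep it self-contained, the path tracer itself is faked as a noisy estimate around an assumed "true" pixel brightness of 0.5 rather than a real light simulation, but the averaging is the real mechanism.

    import random

    # Path tracing in miniature: each sample is one random ray walk, and
    # the pixel colour is the average of all samples taken for that pixel.

    TRUE_BRIGHTNESS = 0.5    # assumed ground-truth value for this pixel

    def trace_path():
        # A real tracer bounces the ray around the scene, folding in colour
        # and material at every bounce; here we just fake its noisy estimate.
        return random.uniform(0.0, 2 * TRUE_BRIGHTNESS)

    def render_pixel(spp):
        """Average spp samples: more samples per pixel means less noise."""
        return sum(trace_path() for _ in range(spp)) / spp

    for spp in (2, 4, 8, 50, 1000, 12000):
        print(f"{spp:>6} spp -> {render_pixel(spp):.3f}")

Run it a few times and the low-spp estimates jump around wildly (that's the noise), while the high-spp ones settle near 0.5; exactly the behaviour of the scene we're about to walk through.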
Let's take a look at a fully path traced scene with only two samples per pixel. Remember, Turing is one sample per pixel, for shadows and lighting effects only. The scene is fairly obviously a room in an apartment, and we can make out most of the objects, but we'd never accept this during gameplay. When we let the denoiser get to work, the image is cleaned of the noise, and we have something that looks like it's perhaps from 2004. Doubling to four samples per pixel, we still have a noisy image; however, all the scene objects are obvious, and in fact it just looks vastly better than two samples. With denoising on top of the four samples, though, we now get into something I would almost be happy to play. Doubling again to eight samples, first with noise and then denoised, and we again have an increase in the final image quality; it's especially noticeable at the walls in this example. But you can probably already tell that with samples it's very much going to be a case of diminishing returns. At fifty noisy samples we can see the image taking on a more natural color; remove the noise, and we have a very nice-looking final render. At 1000 samples we see another improvement in the overall ambience of the scene, but again very much diminished, considering it requires twenty times the horsepower; by this point the denoiser is basically just removing the grain from the brick wall. And finally, 12,000 samples per pixel, call it the final, perfect image, with almost zero difference after denoising.

Checking the difference between fifty samples per pixel and twelve thousand, well, where you'd see it does depend on other factors. Take these wine glasses: I wouldn't drink from the eight samples per pixel versions, and denoised they are better, but still nothing to write home about. At 50 samples we see a hint of red color and more reflection, even more after denoising. Looking at the final 12,000 sample image, though, this time it does appear to be a cut above the 50 sample denoised version. So even with path tracing, denoising isn't a magic bullet. There was obviously a pretty large difference between 50 spp and 12,000 spp here, but then again, how would this scene look rasterized?
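Those diminishing returns aren't an accident; they're the mathematics of Monte Carlo sampling, where noise falls with the square root of the sample count. A quick back-of-the-envelope sketch (my own arithmetic, not from the video):

    import math

    # Monte Carlo noise scales as 1/sqrt(spp): every halving of the noise
    # costs four times as many rays, hence the diminishing returns.

    def relative_noise(spp):
        """Noise level relative to a single sample per pixel."""
        return 1.0 / math.sqrt(spp)

    for spp in (1, 2, 4, 8, 50, 1000, 12000):
        print(f"{spp:>6} spp -> noise ~ {relative_noise(spp):.4f}x")

    # 12,000 spp does 240x the work of 50 spp yet cuts the remaining
    # noise only ~15.5x (sqrt(240)), which is why a good denoiser is
    # worth so much more than simply throwing rays at the problem.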
But at this point you may be thinking: why even bother looking at this stuff today? With Turing, Nvidia is at one sample per pixel, and that's basically only for lighting and shadows. You don't need a math degree to figure that 50 samples per pixel in a fully path traced scene must be decades away. Well, it's a lot closer than you or I probably thought. OTOY, creators of the Octane rendering engine and the source of a huge amount of the research that went into this video, can now path trace a full scene, let's call it interactively rather than in real time, as we can see from this demo from March of this year. This is an entire scene being path traced, not just RTX reflections and shadows on top of DirectX or OpenGL, and it's running at around one frame per second, as we can see in the corner. We can also see that this bedroom scene is 50 path traced samples per pixel, and it's running on two Volta V100s using an internal build of Octane 4, which is meant for offline rendering; so this is actually optimized for quality, not real time. How is this possible? One major difference appears to be that the Octane AI denoiser operates deep within the 3D engine, whereas with Turing we know that the denoising is basically done in post-processing, right at the end.

Let me remind you of Elite running at one frame per second back in 1985. Don't think about this in terms of "I wouldn't play that today". Think about it in terms of where we were even last year, when denoising took minutes rather than fractions of a second, and think about where we'll be next year. Seven nanometer GPUs are coming, but the march to global illumination will be a combined hardware and software effort. Some engines are already looking at up to an 80% speed-up simply going from Pascal to Turing, and that is with only a few weeks of tweaking the software, while some developers are suggesting between 3 and 5 times speed-up, or even 5 to 8 times, for this generation. Path tracing also scales extremely well with multi-GPU, as every sample is independent of the others, so it's a fairly easy, if expensive, doubling of performance, though we still don't know how Turing and RTX behave with multi-GPU. And as I mentioned, OTOY's Octane 4 build is actually meant for offline rendering, so even more optimizations are still to come.

Over the years we got used to accepting 30% performance increases with every generation, but there's no reason why we can't go back to the days of 3x speed-ups or more every year with a new paradigm, making fully path traced 60 frames per second a lot closer than you might believe today. You can argue about whether or not a full 60 frames per second is needed, but the point stands: all we need to do after that is continue to increase the amount of computation, and from that we can increase the resolution or the framerate. Full 360 degree path traced holodecks may not be that far away.

Perhaps, though, a better way to look at it is this: we will never get photorealism with rasterization. We won't even get close, and rasterization long, long ago passed its diminishing returns point, as I noted when talking about Crysis. Yes, graphics do still improve every year, but the cost in production and man-hours means we will never get to actual photorealism with these hacks. What's more amusing is that the more of these hacks we add in order to try to simulate reality, the more computationally expensive it all becomes. You know the saying about throwing good money after bad? That's what we're doing with rasterization, both in hardware and software.

So I applaud Nvidia for attempting to change this, in what is basically the only way they could. But hybrid rendering is a stopgap. Nvidia needs to take the hybrid approach due to a legacy of thousands of rasterized games; we see why that is, with Turing already being poorly received for not being fast enough at rasterization. Can you imagine what would have happened had they doubled or even quadrupled their RTX and Gigarays while actually lowering rasterization performance? This was the only way Nvidia could do it. AMD, on the other hand, are the type of company that might just throw it all out and start from scratch, and I believe we will see them go down the ray tracing or path tracing route with the game consoles one day, with hardware specifically created for that task; as we know, it's much easier to force the issue on a closed system like a game console. The animation studios walked down this same path two decades ago: they started with rasterization hacks, moved on to ray tracing, still using a bunch of hacks, before arriving at path tracing, which freed them from almost all the remaining hacks in the process. With the advent of special-purpose ray tracing hardware, as RTX is with its dedicated hardware for tracing rays, the gaming industry is now set to walk that same path, adopting this technology in near-real-time rendering engines. Perhaps the time is also right for a new, grown-up path tracing architecture to come in and sweep everything else away. You know, just like it used to be in the good old days.

Alright, that's all guys.

38 thoughts on “Beyond Turing – Ray Tracing and the Future of Computer Graphics”

  1. Programmer here. I wrote my first ray tracer in the 80's, and my second one just a few weeks ago. You've captured the topic exceptionally well here. All computer graphics is tricks and magic to create the illusion of a virtual fantasy. What you see is pretty much never what actually is. I can very well appreciate the amount of research that must have gone into this video, and the result is excellent. Catch you later.

  2. Really good video, thanks. I kind of bumped into this accidentally – I'm a programmer, have never really done much graphics stuff…but I found this really interesting and informative.

  3. Oh boy, I never thought I'd see those games again in my older age… Some blasts from the past, wow! I grew up in the 80s and played these games only when I visited my uncle, who had those computers. I was treated to a go of Space Invaders on his "Green Screen". That's all I knew it was called, haha. But I remember wireframe graphics very well. I used to have dreams about them, and occasional nightmares in my very early childhood were of wireframe shapes, haha! Anyway, good stuff man.

  4. Yay! Someone remembers. I had the 64KB Spectrum with the rubber keys and the tape drive. No monochrome XT makes me sad though. The mighty Hercules graphics adapter gets no love.

  5. When nVidia released their videos, I read a lot of comments about how ray tracing was some sort of nVidia gimmick and how it wasn't going to last. This video does a great job at explaining how it's not some gimmick, and how it's really here to stay. It really does a lot to simplify the jobs of the artists, and frees them to focus on making their models rather than fine tuning hundreds of rasterization hacks. Once Turing based GPUs go down in price and AMD releases their own ray tracing capable cards (and/or drivers for existing cards), there's no way AAA studios are going to stick to costly rasterization methods. I think we'll start seeing mass adoption of ray tracing in less than two years, and games may start requiring ray tracing capable cards in 2-5 years. Maybe I'm being optimistic, but artwork is the #1 development cost for AAA games and such a gigantic way to save costs for artists is too much to ignore in my opinion.

  6. I think in all of this the main thing to take away from it is the love for gaming and how it has pushed tech as quickly as it's going. Even though IPC gains have hit a wall in recent years, it's evident that we're still not slowing down in how ambitious we are about getting full 1:1 graphical fidelity that is truly photorealistic. Once the hardware evolves to meet the demand of 12,000 SPP in real time we'll be at that dream, and then proceed to attempting to hit high frame rates at high resolutions like 16K 240Hz, because we want to live that dream.

    If it weren't for people buying PC hardware for the sake of gaming, the gaming industry would be VERY different today. When we see that we are where we are out of love for the medium of entertainment that is video games, it just goes to show you how and why this industry has grown so quickly, and how it shows absolutely no signs of slowing down anytime soon. High fidelity 3D movies and visual effects owe their progression to gamers for pushing tech at such a rapid rate and delivering us to the golden age of being able to produce such high fidelity results at a faster rate.

    Looking to the future, with 3D stacking we'll get another boost in computing power, to where CPUs and GPUs could possibly have enough horsepower to do path tracing in real time at a high SPP, along with the upcoming DDR5 memory and HBM2 memory becoming less expensive and more commonplace on hardware. Optimization in software will also get us to our desire for real-time path tracing, and I'd wager that by 2030 we'll be at the point where the tech will be affordable for devs and consumers.

    As time goes on, leveraging AMD, Intel, and Nvidia hardware, consumers will be able to develop their own games and high fidelity movies at a faster pace. With Turing's RT and Tensor cores becoming optimized over time, devs will be able to work faster, as long as Nvidia improves on the Turing architecture in the next line of cards after it and continues to improve driver optimization for it. I know the hardware is expensive for gaming purposes, but for developing on it, it could be worth it, as long as drivers improve performance on the attributes these cards provide. I'm taking my time learning 3D modeling with Modo, will move on to Octane using my RTX 2080, and am keeping my ear to the ground on what Navi and Intel's GPUs will have to offer in terms of developing and rendering 3D games and ray or path tracing. The CryEngine ray tracing demo is proof that the ray tracing software isn't as good on Unreal Engine 4 as it is on CryEngine, which goes to show that CryEngine still has what it takes to blow our minds, which in my opinion is a good thing, as I'll be working with it when I'm skilled enough to do so.

    Once we hit the point of having hardware and software that can drastically shorten the amount of time devs of movies and games have to spend on graphical and visual fidelity, we'll get better movies that can be made less expensively, since the time to render those visual effects will be less than it is now, and game devs will have more time to make more complex worlds, mechanics, and stories instead of having to put so much time into visual fidelity. We can only be excited for what will be made once we reach that point.

    But anyhow, thanks for the video; your content has earned you a new subscriber, and I will be donating to your Patreon in the future 😀

  7. Ray tracing is NOT the raycasting that was used in Wolf3D. Not even close. Just because they both use rays does not mean anything.

  8. As to compatibility, IIL is a good candidate for caches in Emitter Coupled Logic CPU owing to its very high density and quite decent speed.

    Certainly, the operating system must be as «ascetic» as possible.

  9. 1-micron BJT translates into a 335 MHz clock rate, so a 10 (or considerably smaller, using DXRL) nm ECL RISC CPU with 100W TDP should still deliver five times the clock rate of an ultrafast CMOS chip from either Intel or AMD. The only way to render a physically-accurate picture is to model all light rays in the scene, which is and will always be technically impossible, alas. Highly unlikely that someone will reissue in pure machine code (since even Mercury is useless at improving efficiency) everything (save extraneous applications) from the operating system (monolithic; TUI) as such, to the graphics driver, right to the graphics engine itself, therewith resorting to direct access to graphics resources.

  10. Imagine what real time graphics will look like when Quantum processing is perfected and commercially viable.

  11. A holodeck has been made; you can try it somewhere on the Gold Coast. And it does not use pixels, it uses lots of small dots that make up the picture. But as I understood it, modern graphics cards would not be able to use the technology. A new type of graphics card was built. It was interesting, though, and I saw it here on YouTube.

  12. You know it's compute intensive when ORNL fills an entire supercomputer with NVIDIA RTX cards, which makes me question: is the government making video games?

  13. I wish I had seen this earlier because I thought the point of ray tracing is realistic lighting and reflections. Well, that's part of it, but the implementation is far deeper than just making things shiny.
    And now I know what was meant by "path tracing is even more computationally expensive but even more realistic".

  14. I am the only one of my friends gushing about this. All of my friends call it a gimmick and laugh at this, calling it useless and just a way to charge more. I tried to get them to understand the pace of innovation. Like, it is hard to think of a world without internet, right? THAT IS NEW. In a few decades we found out how to send data to make a 4K HDR video compressed into a 10 Mb/s stream from TV to router, to server farm, back, and play on the TV with nearly no noticeable lag at all. Hell, that was impossible 3 years ago, and this is a gimmick? No, this is the beginning of a major overhaul in computer graphics that we can't comprehend. Do you want insanely immersive VR? This is currently the only path to it. Seriously, imagine a scene created with 12,000 spp and way more advanced technologies we don't know of, in 15 years. Then put the player in a VR headset with a way to feel the environment. We have people getting real fear in VR NOW. I can't comprehend being lost in reality from VR today, but at that point in time, idk.

  15. That's funny, I always noticed that 'different colour information' that Wolfenstein had, and now I know what it was. Thanks.

  16. It will all be running on servers in 5 years… that makes it simpler technically… but in two generations kids won't have a clue what a PC does… it'll just be part of "nature" for them.

  17. Let's just switch to full physics-correct photon simulation. My laptop could probably handle that right? 😛
