3dvisualization | iRender Cloud Rendering Service | https://irendering.net | "Simple and fast cloud rendering solution" | Mon, 31 Mar 2025 04:03:18 +0000

Exploring new updates in Chaos Corona 12 | https://irendering.net/exploring-new-updates-in-chaos-corona-12/ | Tue, 26 Nov 2024 06:00:41 +0000

The post Exploring new updates in Chaos Corona 12 appeared first on iRender Cloud Rendering Service.


Exploring new updates in Chaos Corona 12

Chaos has released Corona 12 Update 1, the latest version of its production renderer for 3ds Max and Cinema 4D. Let’s explore some important new updates with iRender!

New Updates in Chaos Corona 12

Corona to Vantage Live Link

Corona 12 Update 1 introduces an initial implementation of a Live Link to Chaos Vantage. This is just a starting point, though, with more functionality to come in future releases.

With this early version, Vantage can be utilized for a form of “Interactive Rendering” that refreshes in real-time as you make adjustments in 3ds Max or Cinema 4D. This detailed ray-traced GPU rendering can be beneficial in certain scenarios while developing your scene. It offers an alternative way to discover ideal camera angles, modify lighting, organize your scene, configure various material aspects, and more.

Chaos Cloud Collaboration

With this update, you can now send an image directly from the VFB to Chaos Cloud Collaboration, allowing you to share it with your client and engage in discussions using markup and comments. This eliminates the dependence on email, Dropbox, Google Drive, and similar services, plus it means your client isn’t left having to use Paint or Photoshop to draw attention to certain parts of the image.

Cloud Collaboration enables you to create multiple projects and manage their sharing options. It also takes care of image versioning throughout the project and serves as a convenient method for delivering final renders.

To sum up, Cloud Collaboration makes it easy to share images with your clients, sending them right from the VFB. Then, you can discuss changes via markup, send deliverables, set up Virtual Tours, and open the door to much more in the future.

Faster Caustics

The first improvement to Caustics gives a significantly cleaner result in the same amount of render time. An exact figure for the speed increase cannot be given, since it largely depends on how prevalent caustics are in your scene. If caustics are the primary focus of the scene, you'll experience a more significant speed improvement compared to a scene where caustics play a minor role.

As an approximation, Caustics themselves should be visually “about twice as good” in the same amount of render time.

Caustics Multiplier

The second Caustics improvement allows you to adjust the intensity of the Caustics in the Beauty pass, removing the need to use render elements and post-processing to get the results you are looking for.

Please note that you must set the Surface Multiplier at render time – it cannot be adjusted in post. Interactive Rendering (IR) can be used to find the right value before committing to a final render.
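
To make the idea concrete, here is a minimal sketch (plain Python with hypothetical names, not Corona code) of the compositing intuition behind a per-pass multiplier: the caustics contribution is scaled as it is summed into the Beauty image, which is why the value has to be chosen before the final render.

```python
def beauty_with_caustics(base, caustics, multiplier=1.0):
    """Scale the caustics contribution by `multiplier` as it is summed
    into the final (Beauty) image. Hypothetical helper for illustration;
    Corona does this internally at render time."""
    return [b + multiplier * c for b, c in zip(base, caustics)]

pixels_base     = [0.25, 0.50]   # Beauty without caustics
pixels_caustics = [0.25, 0.25]   # caustics-only contribution

full  = beauty_with_caustics(pixels_base, pixels_caustics, multiplier=1.0)
boost = beauty_with_caustics(pixels_base, pixels_caustics, multiplier=2.0)
print(full)   # [0.5, 0.75]
print(boost)  # [0.75, 1.0]
```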

Volumetric Caustics

The final Caustics improvement is that you now have the option to enable the visibility of reflected or refracted rays created from Caustics as they pass through a Volumetric material. A common illustration of this is sunlight underwater, where the refraction of light through the shifting water surface creates a captivating display of light beams within the water volume.

Other examples include underwater swimming pool lights passing through the steam of a heated pool or jacuzzi, light beams reflecting off the mirrored facets of a disco ball through dry-ice fog in a night club, sunlight reflected off windows through an early morning mist, and more.

As with surface Caustics, you can control the intensity of the volumetric Caustics in the Beauty pass.

Auto Exposure, Auto White Balance & improved White Balance

First comes Auto Exposure, a checkbox that lets Corona 12 choose the exposure values for you, just like when using a modern camera. This can serve as a simple starting point, with the option to make additional adjustments through the regular Exposure control or to bake them into the tone mapping.

If “Update during animation” is checked, then the automatic exposure and/or white balance will be recalculated for each frame of an animation, helping with scenes that move from dimly to brightly lit areas.

Additionally, a “Strength” parameter ranges from 0, which means no exposure adjustment, to a maximum of 1, where the calculated auto exposure value is fully applied. This allows for automatic adjustments to exposure or white balance for an effective starting point or to respond to changing lighting during animations. Furthermore, a new algorithm enhances white balance for more pleasing outcomes.
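
As a rough mental model (not Corona's actual algorithm), the Strength parameter can be thought of as a linear blend between the user's current exposure and the computed auto value:

```python
def apply_auto_exposure(current_ev, auto_ev, strength):
    """Blend the user's exposure toward the computed auto-exposure value.

    strength = 0 leaves exposure untouched; strength = 1 applies the
    calculated value fully, matching the 0..1 range described in the text.
    Illustrative sketch only, not Corona's implementation.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return current_ev + strength * (auto_ev - current_ev)

print(apply_auto_exposure(0.0, 2.0, 0.0))  # 0.0 -- no adjustment
print(apply_auto_exposure(0.0, 2.0, 1.0))  # 2.0 -- fully applied
print(apply_auto_exposure(0.0, 2.0, 0.5))  # 1.0 -- halfway
```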

Auto White Balance functions have the same possibilities as mentioned above. There’s one extra improvement to White Balance though, as we have changed the algorithm it uses to one that gives a more natural look. For compatibility, any existing scenes will load with a “Legacy White Balance” to ensure they render the same as before. In new scenes, the new White Balance will be the default. While the difference is subtle, the results are more in line with the effects of White Balance in other popular software like Photoshop.

Image Upscaling

NVIDIA’s new Image Upscaling offers two key benefits.

First, it enables quick test animations by rendering images at a quarter of the pixel count (e.g., 960×540 instead of 1920×1080) and then applying the AI Upscaler, for a rendering speed roughly four times faster. However, these animations lack temporal consistency, leading to noticeable flickering.
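
The arithmetic behind that speedup is simple: halving each dimension gives a quarter of the pixel count, and render time scales roughly with pixel count. A quick sketch:

```python
def quarter_resolution(width, height):
    """Halve each dimension, giving a quarter of the pixel count."""
    return width // 2, height // 2

full_w, full_h = 1920, 1080
low_w, low_h = quarter_resolution(full_w, full_h)

# Assuming render time is roughly proportional to pixel count:
speedup = (full_w * full_h) / (low_w * low_h)
print(low_w, low_h, speedup)  # 960 540 4.0
```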

Second, when enabled for Interactive Rendering (IR), it can improve responsiveness during editing, although parsing times may limit this benefit. It's particularly useful for scene setup, lighting, and camera angles, but less effective for detailed material editing due to the reduced detail.

Please note that NVIDIA Denoising is always applied to the image when Upscaling is used, both in IR and rendering modes. Also, just like NVIDIA Denoising, this requires a compatible NVIDIA card.

Are you looking for a cloud render farm for your project with Corona?

iRender provides high-performance and configurable server systems to customers who need the power of CPU & GPU for tasks such as 3D rendering, AI training, VR & AR, and simulation. With iRender IaaS and PaaS services, you can access our servers through the Remote Desktop Application, do whatever you want, and install any software you need. It is like using your own computer, but with a powerful configuration and much higher performance. In particular, you only need to install your software once; the next time you connect, everything is ready to use.

Corona Renderer is purely CPU-based. It uses Intel Embree Ray Tracing Kernels, the fastest CPU ray tracing solution on the market. Since they mesh well with the Corona architecture, they are an important factor in its performance.

By rendering only on the CPU, we can avoid all bottlenecks, problems, and limitations of GPU rendering, which include the unsuitability of GPU architectures for full GI, limited memory, limited support for third-party plugins and maps, unpredictability, the need for specialist knowledge or hardware to add nodes, high cost, high heat and noise.

Corona works with both Cinema 4D and 3ds Max. We support all 3D software, including Cinema 4D and 3ds Max with their plug-ins. Moreover, we have preinstalled Cinema 4D, and you can use a free Cinema 4D license from us. For Corona, consider our 3S server.

Now we have a huge promotion for the biggest sale event of the year: Black Friday. Don’t miss this best deal and speed up your rendering performance with iRender.

For more detailed information, please contact us via Live chat 24/7 or WhatsApp: +(84)915875500 or Email: vyvtk@irender.vn

iRender – Happy rendering!

References: chaos.com

Cycles in Blender 4.0: New Features & Accelerated Rendering with iRender | https://irendering.net/cycles-in-blender-4-0-new-features-accelerated-rendering-with-irender/ | Tue, 14 Nov 2023 06:00:49 +0000

The post Cycles in Blender 4.0: New Features & Accelerated Rendering with iRender appeared first on iRender Cloud Rendering Service.


Cycles in Blender 4.0: New Features & Accelerated Rendering with iRender

Although Blender 4.0 is not a major overhaul, it is a significant update. It introduces major new features to its renowned Cycles renderer. These upgrades allow for more realistic materials, lighting and scenes. In this article, we will explore new features of Cycles in Blender 4.0 and how iRender’s powerful GPU farm enables hyper-accelerated rendering for Blender. 

New features of Cycles in Blender 4.0

Light and Shadow Linking

Cycles in Blender 4.0 now allows for light and shadow linking. This gives artists more control over how lights interact with different objects in a scene. 

Light linking in Cycles allows artists to target individual lights at specific objects only. Shadow linking further expands this control by designating which objects can act as shadow blockers for a light.

This breaks the laws of real-world physics to provide more artistic control for lighting. For instance, a scene’s environment and characters could have unique lighting setups. A character can receive a dedicated rim light, linked solely to it, to make it stand out. Shadow linking could prevent environmental objects from obscuring this lighting effect.
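The mechanism is easy to picture as data: each light carries a set of objects it illuminates (light linking) and a set of objects allowed to block it (shadow linking). The toy model below is purely illustrative (hypothetical field names, not Blender's API):

```python
# Each light lists its receivers (light linking) and its shadow
# blockers (shadow linking). Hypothetical data model for illustration.
lights = [
    {"name": "key", "power": 3.0,
     "receivers": {"env", "monkey"}, "blockers": {"env", "monkey"}},
    {"name": "rim", "power": 1.5,
     "receivers": {"monkey"}, "blockers": set()},  # rim hits only the monkey
]

def illumination(obj, lights):
    """Sum the power of every light linked to `obj`."""
    return sum(l["power"] for l in lights if obj in l["receivers"])

def casts_shadow(blocker, light):
    """Shadow linking: does `blocker` occlude rays from this light?"""
    return blocker in light["blockers"]

print(illumination("monkey", lights))          # 4.5 -- key + rim
print(illumination("env", lights))             # 3.0 -- key only
print(casts_shadow("env", lights[1]))          # False -- nothing blocks the rim
```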

Let's look at the example images below. In the left image, a light is linked to give just the monkey object rim shading. The centre and right images show the scene without light linking, with the rim light off and on.

Other examples of light linking and shadow linking of Cycles in Blender 4.0

Ex 1: a different light per car part to finely control the shape of reflections. The left image uses all lights, while the right uses light linking.

A scene by Fernando Alcala using an asset by LRosario

Ex 2: Light linking is used to artificially eliminate reflections on the eyes and tree branches.

A scene by Alaska Young and Daniel Bystedt

Path Guiding

Path guiding in Cycles has expanded its capabilities in Blender 4.0. It now works on glossy surfaces in addition to diffuse surfaces. This significantly reduces noise and artifacts on glossy surfaces and finds otherwise-missing glossy light paths.

Separately, Blender 4.0 updates Cycles' default color transform from the current Filmic option to a new option called AgX. This new color transform behaves more naturally with very bright lights and saturated colors. It is also designed to integrate smoothly with the spectral rendering planned for Cycles in the future.

Look at the scene example below by Fernando Alcala and LRosario, made using assets from Blendswap: diffuse-only path guiding versus the new glossy path guiding finding additional light paths (both scenes are rendered with the same render time).

Multiple Scattering GGX

Cycles has updated its approach to Multiple Scattering GGX. It now uses an approximation method described in Emmanuel Turquin’s paper “Practical multiple scattering compensation for microfacet models”. 

      • This replacement removes previous performance and noise disadvantages associated with the old Multiscattering GGX. As a result, it can now be reliably used as the default setting.
      • Because the directional distribution is no longer a strictly accurate theoretical model, images may exhibit some minor differences. However, the overall effects of multiscattering compensation (such as preserving energy, avoiding darker results at high roughness values, and increasing saturation on very rough surfaces) are still achieved.
      • For highly anisotropic materials, the compensation provided by this approximation may have limitations.
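
The core of Turquin-style compensation can be sketched in a few lines: the single-scattering lobe is divided by a precomputed directional albedo E(roughness), restoring the energy that single scattering loses on rough surfaces. The falloff function below is a made-up stand-in for the real tabulated E, purely for illustration:

```python
def directional_albedo(roughness):
    """Stand-in for a precomputed GGX directional-albedo table E(roughness).
    Real renderers tabulate this; the linear falloff here is invented for
    illustration: rougher surfaces lose more single-scattering energy."""
    return 1.0 - 0.25 * roughness  # E in [0.75, 1.0] for roughness in [0, 1]

def compensated_brdf(single_scatter_value, roughness):
    """Turquin-style multiple-scattering compensation: scale the
    single-scattering lobe by 1/E so the lost energy is restored."""
    return single_scatter_value / directional_albedo(roughness)

smooth = compensated_brdf(1.0, 0.0)  # no energy lost, no boost needed
rough  = compensated_brdf(1.0, 1.0)  # rough surface gets boosted
print(smooth, rough)
```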

Principled Hair BSDF

Cycles has added a new hair scattering model called “Huang”. The previous “Principled Hair” BSDF is now designated as the “Chiang” variant.

The Huang variant adds more realism by supporting elliptical hair cross-sections, as natural human hairs usually have an elliptical shape. The cross-section orientation aligns with the curve normal and can be adjusted using geometry nodes with a default of minimal twist.

This represents a far-field model, in contrast to the near-field Chiang model. Hair rendered with Huang may have less noise but requires longer render times at lower roughness values due to numerical integration along the hair width. It will also appear flatter when viewed up close. However, Huang produces nicer focused reflections when hair is seen against a light source.

A scene by Simon Thommes and Fernando Alcala

Blender 4.0 is not a major overhaul, but it is a significant update. While Blender’s Cycles continues advancing locally, iRender’s cloud rendering services can help accelerate rendering for many projects. 

iRender - The Fastest Render Farm for Blender Rendering

How does our service work?

We provide remote access to our powerful cloud workstations (remote machines) for rendering any 3D project. As an iRender user, you connect to our machines hosted on iRender's servers via a remote desktop connection (RDP) or via our desktop app. After that, you have full control of the remote machine(s) as if they were your local systems, and can render freely there.

iRender has the most powerful machines for rendering

We power up your creativity with unlimited GPU rendering resources. Our GPU render farm houses the most powerful 3D rendering machines. Configurable from 1 to 8 GPUs with top-tier RTX 4090/RTX 3090 cards, Threadripper PRO CPUs, 256GB RAM, and 2TB SSD storage, our machines can handle the demands of any Blender project.

All of our servers’ configurations are publicly posted on our website. Visit here for more details about the configuration and the price.

iRender has pre-installed Blender machines

At the time of writing, we have machines with Blender 3.6.5 pre-installed to streamline your pipeline further! You can still use another version on our machines if you prefer.

iRender supports all Blender versions, providing you with total workflow freedom

Once renting our machines, you will own them as your personal workstations. Therefore, you can install and use Blender, plugins, and/or any other software of any version (even the newly released ones). You can even run multiple machines simultaneously to speed up the renders, thus no more worrying about any deadline.

Let’s see how our GPU machines accelerate Blender rendering!

New user incentives

This November, we are offering an attractive 100% Bonus Program for our new users making the first deposit within 24 hours of registration. 

Let’s get started!

Being a Diamond Sponsor of the Blender Foundation, we are thrilled to support the Foundation to partly contribute to the development of this awesome software. Let’s work together and support the future of Blender. Enjoy our FREE TRIAL to try our RTX 4090 machines and boost your Blender rendering now!

For further information, please do not hesitate to contact us via hangtt@irender.vn or mobile: +84915875500

 

iRender – Thank you & Happy Rendering!

Reference source: blender.org

Control noise and artifacts in Octane for C4D | https://irendering.net/control-noise-and-artifacts-in-octane-for-c4d/ | Fri, 21 Jul 2023 06:00:49 +0000

The post Control noise and artifacts in Octane for C4D appeared first on iRender Cloud Rendering Service.


Control noise and artifacts in Octane for C4D

In the previous article, we learned how Octane's Path Tracing kernel mimics reality through a deep dive into how light works in Octane. If we set up lighting following real-world rules, we can achieve beautiful and realistic results without tweaking many settings or cheating too much. However, incorrect lighting causes noise, artifacts, and heartache. So, how do we control noise and artifacts in Octane for C4D for realistic renders? This guide will help you find the ways.

1. The problem of noise and artifacts in Octane

The PT (Path Tracing) kernel in Octane, an unbiased path-tracing engine, works by essentially shooting a bunch of rays from the camera into the scene and calculating their paths to various light sources. The more difficult it is for rays to trace those paths, the more samples/calculations are needed for a clean result. This difficulty shows up visually as noise and artifacts.

These calculations aim to mimic reality, so unrealistic values for lights and materials are harder to resolve, leading to more noise. With extreme values, calculations begin to break down, and render errors will appear like fireflies, color clipping, and hotspots.

This isn’t to say a perfectly realistic scene will always result in clean, quick renders. If the ray paths to light sources are obscured, blocked, or refracted (through glass), the calculations become much harder and noisier, taking longer to resolve cleanly.

2. What can we do to control noise and artifacts in Octane for C4D?

First, we need to set up our scene properly to minimize noise and artifacts in Octane. Once we optimize our scene, we can apply tricks and tweaks to clean up any remaining issues.

2.1 Set up our scene properly

Real-world Scale

First and foremost, make sure the scene uses real-world scale. The easiest way is to add a C4D Figure object into the scene as a reference. This makes it easy to spot tiny plants or oversized chairs. All objects should be close to their actual real-life size so that light interacts correctly. If objects are abstract, like sculptures, we still want reasonable sizes. Plausibility is key.

If using real-world light fixture geometry downloaded from a manufacturer, we must ensure it is properly scaled. Often these models are in millimeter scale rather than centimeter scale, so we have to adjust the scale accordingly before bringing them into C4D to match real-life dimensions.
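
The fix itself is a uniform rescale. A minimal sketch (hypothetical helper, not a C4D API) for converting millimeter-scale geometry to centimeters:

```python
MM_TO_CM = 0.1  # 10 mm = 1 cm

def rescale_vertices(vertices, factor=MM_TO_CM):
    """Uniformly rescale fixture geometry, e.g. from millimeter to
    centimeter units, before importing into C4D. Illustrative helper."""
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

# A 600 mm wide fixture becomes 60 cm wide:
fixture = [(0.0, 0.0, 0.0), (600.0, 0.0, 0.0)]
print(rescale_vertices(fixture))
```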

Check the Normals

This step is important because lots of calculations rely on normals, especially for emitting surfaces. Imported CAD models can have incorrect normals, so we should ensure as many normals as possible face outwards (orange in C4D, not blue when selected).

Realistic Materials

Our materials should also be physically plausible. If using RGB Spectrum for our colors in the Albedo/Diffuse channel, we should avoid extreme values (e.g. 100% red, 0% green/blue, or 100% S or V using an HSV model). Gaussian Spectrum avoids this issue; however, it’s a bigger pain to find an exact color this way.

After RGB spectrum colors, the next biggest causes of noise are usually unrealistic IOR (Index of Refraction) and Dispersion values. The more unrealistic these are, the worse the render errors will be. Displacement and Mediums (for things like glass and liquids) are especially difficult for the renderer to calculate, so if they can be replaced with approximations like Bump and Diffuse Transmission, that will speed up the render. If not, they just need to be optimized and kept within the realm of reality.
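
Those rules of thumb can be captured in a small "lint" pass over material values. The thresholds below are illustrative guesses, not Octane limits (most real dielectrics sit around IOR 1.0 to 2.5, and real surfaces never have pure 0% or 100% albedo channels):

```python
def material_warnings(albedo_rgb, ior, dispersion):
    """Flag physically implausible material values that tend to cause
    noise and fireflies. All thresholds here are illustrative rules of
    thumb chosen for this sketch, not values from Octane."""
    warnings = []
    if max(albedo_rgb) >= 1.0 or min(albedo_rgb) <= 0.0:
        warnings.append("albedo channel at an extreme (0% or 100%)")
    if not 1.0 <= ior <= 2.5:
        warnings.append("IOR outside the typical real-world range")
    if dispersion > 0.1:
        warnings.append("unusually strong dispersion")
    return warnings

print(material_warnings((1.0, 0.0, 0.0), ior=1.5, dispersion=0.0))  # flagged
print(material_warnings((0.8, 0.1, 0.1), ior=1.5, dispersion=0.0))  # plausible red
```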

Lights

For lights, we’ll want to use a texture to manually control efficiency. The Gaussian Spectrum node is often the best option unless we need an exact RGB color. If using an IES distribution, it will handle efficiency for us. We can use a 1/1/1 Gaussian Spectrum and lower the Power to intentionally dim the light.

Placement & Blocking

A hard and fast rule is that we should avoid blocking or obscuring light rays as much as possible, especially the most powerful ones. Instead, we want to mask and control the distribution and move our fixtures around so they emit outward and as unobstructed as possible.

In the scene above, the lights and environment are the same. We see two omnidirectional area lights with a Power of 10 and a Gaussian Spectrum Texture of 1/1/0.2. “Use primitives” is enabled, and there is no distribution texture. Render settings are Path Tracing at 64 samples with AI Light on. All other settings are at default; the only difference is how far the lights are from the walls and ceiling.

There are just two lights in the scene, thus all the rays which are used to illuminate the scene must trace back to them. The more obscured the emitting area is, the harder it is to calculate and the more noise appears.

This is the same scene except the sphere lights have been dimmed to 1 power, while the overhead rectangle area light was added as the main illumination source. The overhead light has 20 Power, a 1/1/1 Gaussian Texture, and is placed 1cm from the ceiling. The ceiling light emits rays in all directions, but only from the bottom side, so none are blocked. As the most powerful light source in the room, most rays will trace back to it. Even with the 2 weaker lights against the walls, the scene is not very noisy at 64 samples. The same strategy can be used with distribution textures for recessed lights to ensure rays are not blocked.

2.2 AI, Tweaks and Cheats

If we have already optimized our geometry, materials, and lights, some render settings can still help reduce noise and artifacts in our scene. Depending on the certain scene, they will have different levels of success. 

AI Light System & Spectron

The AI Light system optimizes light sampling specifically for each individual scene using algorithms that “learn” the scene. With AI Light enabled, the number of samples needed to achieve a clean render goes down significantly. A great thing about this AI Light system is that it remains unbiased – it doesn’t actually limit rays, it just prioritizes which ones to focus on to resolve them more cleanly. Since AI Light affects all emissions in Octane, it’s generally best to leave it enabled unless it causes issues for a specific scene.

All we have to do is check the “AI Light” box in the Kernel settings, and the AI Light System will go to work optimizing the render for our scene. 
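
The reason a system like AI Light can stay unbiased is standard importance sampling: lights are picked with probability proportional to an estimated contribution, and each sample is divided by that probability. The toy estimator below shows the principle (it is not Octane's algorithm): both samplings converge to the same answer, but the guided one has far lower variance.

```python
import random

random.seed(7)  # deterministic for the demo

def sample_light(weights):
    """Pick a light index with probability proportional to `weights`;
    return (index, pdf)."""
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i, w / total
    return len(weights) - 1, weights[-1] / total

def estimate(contributions, weights, n=20000):
    """Unbiased Monte Carlo estimate of the total light contribution:
    dividing by the pdf cancels the sampling preference."""
    s = 0.0
    for _ in range(n):
        i, pdf = sample_light(weights)
        s += contributions[i] / pdf
    return s / n

contributions = [5.0, 0.1]                        # strong key light, weak fill
uniform = estimate(contributions, [1.0, 1.0])      # naive: pick lights equally
guided  = estimate(contributions, contributions)   # prioritize bright lights
# Both estimates converge to 5.1; the guided one just gets there with
# far less noise (here its per-sample variance is zero).
```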

Spectron is a procedural lighting system native to Octane Render that simulates even more realistic lighting effects. Here, we will just focus on Spectron Primitives, which make Area Lights render much faster and cleaner, especially when combined with the AI Light system. As of now, Spectron is only available for Area Lights.

In the Light Settings Tab > Octane Light Tag, the “Use primitives” option replaces the area light with a Spectron Primitive. In the Light Object’s settings (not the tag) > Details tab, the Area Shape dropdown lets us choose a shape for the area light.

Importantly, only the Rectangle and Sphere shapes are currently supported by Spectron. Selecting these will swap the light for the corresponding Spectron version. The other area shape options (Disc, Cylinder, etc) will still change the light’s shape but will not be replaced by Spectron primitives. They will still benefit from the AI Light optimizations.

The Outer Radius, Size X, and Size Y values change the size of the light. All other settings in this section are for C4D’s native render engines only and are ignored by Octane.

The above illustration compares a spherical emissive geometry, a non-Spectron area light sphere, and a Spectron area light sphere. It also shows how these options interact with the AI Light system.

All three were given the same properties and render settings. They all have a diameter of 20cm. A Gaussian Spectrum node set to 1/1/0.2 controls the texture. The temperature is set to 4000K and the Emission Power is set at 7.5. The room is a fully enclosed box with no other lighting. Different materials are applied to the ceiling, walls, and floor to illustrate noise in both darker and lighter areas. For the render settings, GI Clamp is set to the default of 1,000,000 and no denoising or hot pixel removal is applied.

The results show:

      • There is not much difference between using a non-Spectron Area Light and an emissive material. The AI Light system handles both well.
      • When Spectron Primitives are enabled, the scene becomes cleaner even without AI Light on, and much cleaner with it.

This means we have a choice when using real-world fixture geometry from a light manufacturer’s site:

Option 1: Use an emissive material with the proper distribution texture, turn on AI Light, and deal with any remaining noise in other ways.

Option 2: Create a sphere or rectangle Spectron area light, place it inside the fixture geometry, and use Light Linking to hide the fixture from the area light. Apply a low-powered emissive material to the fixture geometry. We would likely only go with this option for tricky lighting situations where we cannot control noise through other means.

Adaptive Sampling

Adaptive sampling is a technique that intelligently allocates more rendering samples to complicated areas of an image and fewer samples to simpler areas. It still helps maintain the unbiased, realistic look of the render.

It works by creating a noise map (which is visible in the Live Viewer at the bottom). When a particular area of the render is deemed sufficiently smooth/clean, Octane stops adding more samples to that area and instead focuses its sampling on noisier/grainier areas.

The results are variable, depending on the scene’s lighting complexity and difficulty to render. In some cases, it is quite effective at quickly resolving problem areas. But in other cases, especially simpler scenes that don’t require as many samples, it can introduce issues like fireflies. There are settings to adjust the threshold and minimum number of samples to fine-tune the method.

Adaptive sampling also produces unbiased results and works well with AI lighting techniques.
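
A single pass of that allocation logic looks like this in miniature (hypothetical names; Octane works on its internal noise map, not a Python list):

```python
def allocate_samples(noise_map, threshold, batch=16):
    """One adaptive-sampling pass: tiles whose noise estimate is already
    below the threshold get no further samples; noisy tiles get another
    batch. Illustrative of the mechanism, not Octane's implementation."""
    return [batch if noise > threshold else 0 for noise in noise_map]

noise_map = [0.01, 0.20, 0.005, 0.50]   # per-tile noise estimates
print(allocate_samples(noise_map, threshold=0.05))  # [0, 16, 0, 16]
```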

Path Termination (Path Term) Power

In scenes where noise in dark areas remains high despite all methods we have tried, lowering the Path Term can help. This causes Octane to spend more time rendering darker areas, improving quality at the cost of speed. Conversely, for scenes without many dark areas, increasing the path termination value may yield faster render times with little impact on quality. Nonetheless, in most cases, other optimization techniques work well enough without adjusting path termination.
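
Path termination is usually implemented as Russian roulette: a path's survival probability is tied to how much light it still carries, so dim paths die early. In the toy model below, lowering the `path_term_power` knob raises survival in dark areas, mirroring the trade-off described above (illustrative only; Octane's control differs in detail):

```python
import random

def survives(throughput, path_term_power):
    """Russian-roulette step: survival probability grows with the path's
    remaining energy (`throughput`), and grows as the termination
    power is lowered."""
    p_survive = min(1.0, throughput / path_term_power)
    return random.random() < p_survive

random.seed(1)
trials = 10000
dark_default = sum(survives(0.05, path_term_power=0.3) for _ in range(trials))
dark_lowered = sum(survives(0.05, path_term_power=0.1) for _ in range(trials))
# With the lower termination power, far more dark-area paths survive,
# i.e. the renderer spends more time resolving dark regions.
print(dark_default, dark_lowered)
```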

GI Clamp

GI Clamp is a traditional way to reduce noise, but it introduces bias into the render. This setting limits the energy of secondary light bounces in the scene, reducing noise. However, lowering its value comes at the cost of realism – especially for interiors, where secondary bounces are important.
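
Mechanically, the clamp is just a cap on how much energy an indirect sample may contribute, which is why it kills fireflies and also why it is biased: the clamped energy is simply lost. A sketch:

```python
def clamp_secondary_bounce(radiance, gi_clamp=1_000_000.0):
    """Clamp the energy a secondary (indirect) bounce may contribute.
    Extremely bright indirect samples -- the source of fireflies -- are
    cut to `gi_clamp`, trading realism for less noise. At the default
    of 1,000,000 the clamp is effectively off. Illustrative only."""
    return min(radiance, gi_clamp)

print(clamp_secondary_bounce(50.0, gi_clamp=10.0))  # 10.0 -- firefly clamped
print(clamp_secondary_bounce(50.0))                 # 50.0 -- default: untouched
```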

Fortunately, we may not need to lower the GI Clamp aggressively – or at all – if we have properly prepared the scene by keeping everything realistic and unobstructed. This allows many problematic light rays to naturally become extinguished before causing issues. 

We now have better options like AI Light and Adaptive Sampling, which we should utilize instead. We should only use GI Clamp as a last resort or for non-photorealistic renders where realism does not matter.

Important: Combining GI Clamp with AI Light can damage the render, so you should choose one or the other. If you opt for AI Light, you should leave GI Clamp at its default high value of 1 million.

Hotpixel Removal

Pulling the hotpixel removal slider downward targets and removes fireflies. The default is 1, and lower values reduce the threshold for extremely bright pixels, eliminating them from the render. It works well when fireflies are obvious, but setting it too low can start removing highlight detail. Around 0.5 is generally good.
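
One way to picture the slider (a toy 1-D filter, not Octane's actual algorithm): a pixel far brighter than its neighbours is treated as a firefly and replaced by the neighbour average, and lowering the slider lowers the brightness ratio needed to trigger removal. The base ratio below is an arbitrary choice for this sketch.

```python
BASE_RATIO = 20.0  # arbitrary base ratio chosen for this toy filter

def remove_hotpixels(pixels, slider=1.0):
    """Replace pixels brighter than BASE_RATIO * slider times their
    neighbour average. Lower slider => lower trigger threshold =>
    more aggressive firefly removal."""
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        neighbours = (pixels[i - 1] + pixels[i + 1]) / 2.0
        if neighbours > 0 and pixels[i] > neighbours * BASE_RATIO * slider:
            out[i] = neighbours
    return out

row = [0.2, 3.0, 0.2, 0.3]                # a mild firefly at index 1
print(remove_hotpixels(row, slider=1.0))  # threshold 4.0: firefly kept
print(remove_hotpixels(row, slider=0.5))  # threshold 2.0: firefly removed
```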

Denoiser

The denoiser uses AI to reduce noise in a render. This can drastically shorten render times, requiring only a fraction of the samples for a clean result. However, depending on the scene, it can impart an over-processed look, wreck tight patterns, or introduce artifacts in animations. It works best as a polishing pass rather than a primary noise reduction method for most scenes.

Light Linking

Light linking allows us to prevent specific light sources from illuminating problematic objects. While unrealistic, this can reduce noise and resolve rendering issues. However, we may need to adjust other lights to compensate.

In recent Octane versions, we can use C4D’s Light Linking system. Just go to the Project tab of a light object and select the objects we want the light to ignore.

We have to enable C4D Light Linking in the Octane kernel settings to use this feature.

Hopefully, this guide has improved your understanding of how these techniques work to control noise and artifacts in Octane and will help you to produce faster, cleaner, and more realistic renders. 

3. Render faster in Octane with iRender

iRender provides high-performance GPU rendering services to unleash creativity for 3D artists. We offer flexible configurations of 1, 2, 4, 6, and 8 GPU servers using the top-tier RTX 4090 and RTX 3090 for accelerated Octane rendering. Powered by powerful AMD Ryzen Threadripper PRO CPUs with up to 64 cores, 256GB RAM, and 2TB NVMe SSD storage, our servers can handle even the most demanding Octane scenes. 

Best of all, we grant you complete access to your rented servers. You can install any software, plugins, or Octane versions you need. We treat your rented servers like your personal workstations – no restrictions apply. This freedom enables you, 3D artists, to realize creative visions without limitations. Leverage our accelerated GPU rendering and scalable power to easily tackle CPU-intensive renders that formerly bottlenecked creativity. 

Use iRender’s servers more efficiently with just one desktop app:

Check out Octane rendering performance on our 6x RTX 4090 server powered by AMD Ryzen Threadripper PRO 5975WX:

Make iRender your partner in creativity and unleash your potential with our high-performance GPU servers and start rendering today!

 

iRender – Happy Rendering!

Reference sources: Scott Benson on otoy.com


		

How light works in Octane for Cinema 4D | https://irendering.net/how-light-works-in-octane-for-cinema-4d/ | Fri, 30 Jun 2023 06:00:50 +0000

The post How light works in Octane for Cinema 4D appeared first on iRender Cloud Rendering Service.

]]>

How light works in Octane for Cinema 4D

Lighting plays a crucial role in all 3D applications. No lighting means no render. Lighting in an unbiased renderer like Octane’s Path Tracing kernel attempts to mimic real-world physics. This means that if we set up lighting following real-world rules, we can achieve beautiful and realistic results without extensive tweaking of settings or cheating too much. However, if we don’t set up lighting properly, it can result in noise, artifacts, and other issues that cause frustration. This guide will explain how light works in Octane so you can produce beautiful and realistic renders.

An in-depth guide to how light works in Octane

All emission is the same

All emission is the same in Octane. This may seem weird because we have long been taught that there are significant differences between an HDRI, area light, blackbody, and emissive texture; however, the truth is that all emission (light generation) in Octane works in the same way, just with different defaults and settings.

In Octane, light occurs when an emissive material is applied to a polygonal object. 

      • Some random piece of geometry (with a material applied) that contains a Blackbody or Texture Emission node in the Emission channel emits light.
      • An Octane area light is a procedurally created geometry object with an Emission node built-in. It works in the same way as the random polygon object with an emissive material but with a few more controls for simpler placement and various default settings. It, like a material, can be configured to use a Blackbody or Texture Emission node. By default, the area light is a plane (rectangle), but it can be changed to a disc, sphere, cube, cylinder, or a few other shapes.
      • An Octane targeted area light is a light that has a C4D Target tag and can aim at an external object. This tag can be applied to any light (or object) once it has been created; this is just a shortcut to get this tag on a new light.
      • An Octane IES light is a type of area light that has certain (but not all) default settings to make working with IES textures easier. 
      • An Octane spot light is a type of area light with an adjustable spotlight distribution. It makes use of Octane’s Spectron system to add additional settings such as barn doors and a volumetric cone to better simulate a real spotlight. 
      • An HDRI, or texture environment, is a massive sphere with an emissive texture on the inside. The power settings for this and the sun/sky rig are on an entirely different scale than a smaller fixture, since they’re supposed to simulate an entire environment that puts out orders of magnitude more light than an LED bulb.
      • An Octane Daylight rig is a collection of procedural textures and geometry that simulates a massive, powerful light source that is extremely far away (the Sun). While it works the same way as any other light, this rig hides many of the standard controls to make it more intuitive.

Emission

In the real world, everything that emits light (such as the semiconductor in an LED, a filament in an incandescent bulb, or a ball of burning hydrogen in space) emits in all potential directions that the emitting surface allows for. As a result, the rays are never parallel or directable.

The sun is the closest natural thing to a parallel light; it still emits in all directions, but because it’s so large and so far away, we only receive the tiny portion of its rays that are essentially parallel with one another. Lasers also generate light that emits in all directions, but they redirect those rays through a complex mechanism that Octane cannot yet imitate.

Because Octane’s Path Tracing kernel is an unbiased render engine that simulates this behavior, we are unable to control or align rays coming from an emitting surface.

Why is this important?

This omnidirectional emission behavior, together with realistic falloff (how far a ray travels before running out of energy) and secondary bounces (Global Illumination), creates very realistic lighting without requiring us to adjust too many settings, as we would in a biased engine. This is fantastic.

What’s not so fantastic is that it’s difficult for an unbiased renderer (such as Octane’s Path Tracing kernel) to calculate the paths of all of these rays, especially when some or many of them are blocked or hidden behind refractive materials. Because of this difficulty, calculation errors accumulate and appear as noise and artifacts such as fireflies. We can typically get a decent render if we throw enough samples at the problem. However, it would be preferable to just set it up properly the first time to make everything as fast and efficient as possible, right?
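The cost of brute-forcing noise away can be sketched with the standard Monte Carlo error relationship (a general property of path tracing, not an Octane-specific formula): noise falls off as one over the square root of the sample count, so halving the noise costs four times the samples.

```python
import math

def relative_noise(samples: int, k: float = 1.0) -> float:
    """Monte Carlo standard error scales as k / sqrt(N).
    k bundles up everything scene-dependent (occlusion, refraction, etc.)."""
    return k / math.sqrt(samples)

# Quadrupling the samples only halves the noise:
print(relative_noise(1024) / relative_noise(256))  # -> 0.5
```

This is why a well-set-up scene (a smaller k) beats simply cranking Max Samples.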

So, what can we do?

Octane is developed to imitate real-world physics with light and materials. Therefore, the best we can do is to use real-world values. We’ll need to discover how those values are translated into light and material settings. Following that, we can avoid doing things that are very demanding on the engine if we don’t have to. Finally, we could use AI and other techniques to help optimize the sampling method to ensure the image resolves faster without sacrificing the appealing look of a neutral render. If everything else fails, we can either brute force it by putting in a lot of time, or we may simply cheat. Let’s learn more about those values.

Wattage/Efficiency (Power/Texture)

In Octane, Emission operates on a wattage/efficiency model.

The Power field is equal to the Wattage of the light source. A 100 Watt incandescent bulb is set to 100 Power, and a 12 Watt LED is set to 12 Power. All is well and good, but as any bulb shopper knows, a 12 Watt LED produces nearly the same amount of light as a 100 Watt incandescent. The difference is in efficiency.

In Octane, the Texture field determines Efficiency. Incandescent bulbs draw 100 Watts; however, they are only approximately 10% efficient (the other 90% of the energy becomes heat). To imitate this type of bulb, we’d use 100 in the Power field but link a texture node (such as Float Texture) to the Texture field and set it to 0.1. LEDs are approximately 90% efficient, so if we took the same light and set the Float Texture in the Texture field to 0.9, we could lower the Power to 11 to mimic the LED and receive almost the same amount of light as our incandescent bulb.
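As a sanity check, the wattage/efficiency arithmetic above can be expressed in a few lines of Python (an illustration of the model described here, not Octane code):

```python
def effective_output(power_watts: float, efficiency: float) -> float:
    """Light output under the wattage/efficiency model:
    Power field (watts) times the efficiency set in the Texture field."""
    return power_watts * efficiency

incandescent = effective_output(100, 0.1)  # 100 W bulb, ~10% efficient
led = effective_output(11, 0.9)            # 11 W LED, ~90% efficient
print(round(incandescent, 2), round(led, 2))  # -> 10.0 9.9
```

The two outputs land within a couple of percent of each other, matching the real-world 100 W incandescent ≈ 11 W LED equivalence.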

Important: We MUST use a texture in the Texture field to simulate real-world lights. There are some built-in settings in different light objects that aren’t visible or controllable in their current state, thus we’ll override them by attaching a node to set our own values.

A Blackbody or Texture Emission node in a material has a Float of 0.025 (2.5% efficiency). The default value for an area light is 0.1718441 (17.2% efficiency). This means that if we create a Plane object and apply an emissive material to it, it will emit much less light than an Octane area light of the same size. We can get these two objects to operate similarly if we manually override and set the values.

Gaussian Spectrum Texture

Octane is a spectral renderer, which means it defines light (and color) using wavelengths, widths, and power, just like real physics. The Gaussian Spectrum node is what works with these values and produces the best and most realistic results.

Although RGB/HSV models are supported (through an RGB Spectrum node), Octane converts them to Gaussian before rendering. Because RGB/HSV can produce unrealistic colors that cause problems during this conversion, excessive RGB values may lead to blown-out or noisy harsh areas. The more we can use a Gaussian Spectrum texture instead of an RGB Spectrum or Float texture (particularly with emission), the better off we’ll be. 

Surface Brightness

The Wattage/Efficiency model works well for lower scale real-world light fixtures, but it is dependent on real-world scaled objects and emission sources.

For instance, assume we created a polygon model of an incandescent lightbulb. We built it 2 3/8″ (5.9 cm) in diameter because that’s how big it is in real life. We applied an emissive material with a power of 100 (100 Watt) and a Gaussian Spectrum in the Texture field set to 1/1/0.1 to limit the efficiency to 10%, just like a real incandescent bulb.

When Surface Brightness is OFF, this texture emits the appropriate amount of light when attached to our model because the bulb is approximately its real-world size. This is the way to make a realistic interior, since we can trust the many light sources in our scene to put out the correct quantity of light relative to one another.

Let’s imagine we place our bulb in a scene that wasn’t designed to scale – say, ten times larger. We’ll simply scale our bulb up 10 times. With Surface Brightness still turned off, we suddenly have a little sun in our scene, filling everything with light because the emissive surface is far too huge.

Surface Brightness comes into play here. We no longer have to match an exact real-world light source; we are just trying to make the material look correct by estimating values. Enabling Surface Brightness means the emission texture now considers the size of the mesh and keeps the mesh surface at the same brightness, regardless of the object’s size. Our 10x larger bulb now looks roughly like we would expect a large novelty lightbulb-shaped lamp to look. If we scale it to 100x, it will still look approximately correct.

After that, if we reduce our bulb back to actual size but keep Surface Brightness on, we will see that it is too dim. The surface polygons remain the same brightness, but there isn’t as much surface area now, so it’s not outputting as much light overall – we’d have to compensate with extra power (or turn off Surface Brightness).

In short, if we want to match an exact light that we have the specs for, we turn off Surface Brightness and enter the correct power (wattage) and efficiency (Texture). If we just want some arbitrary glowy thing in a scene, especially one that animates in size, or if we’re using neon tubes or something that can be a variety of sizes but needs to maintain the same apparent brightness, we turn on Surface Brightness and eyeball the power until it looks right.

It is turned on by default in an area light and turned off in an emissive texture.

Distribution

If we want more realism, visual intrigue, or faster render times with our interior lights, the Distribution field is where the magic will happen.

A Blackbody or Texture Emission node’s projection is always Spherical. That is, light emits from all possible directions from the polygons’ normal side (front), just as it does in real life.

In practice, this emissive element would be covered in a glass or plastic bulb, which would then be put in some kind of housing. The bulb itself may diffuse or refract part of the light, and the housing may block a portion of it while reflecting or refracting some of it depending on the type of fixture.

If we were to simulate this exactly in an unbiased renderer like Octane by constructing a fixture that blocks some of the light while focusing or diffusing others, the difficulty of calculating all those rays would skyrocket (especially with a lot of refraction), and we would end up with a horrid mess of fireflies and noise that would take a ton of samples and time to resolve (if at all) due to the nature of path tracing.

Even if we could control the noise, we’d have to continually modify the lens, glass form, housing shape, and so on to obtain a close approximation of how the light is distributed. 

Fortunately, we can simulate what real-world lights do in their housings by blocking or dimming areas of the emission texture itself using the Distribution field. We can apply a texture to the light-emitting surface (rather than burying it in a glass tube), and the material will simply not emit rays from these problematic areas, dimming the portions that would normally be semi-blocked, and we’ll end up with an approximation of how the light would really function in its enclosure in a much shorter period of time.

This is obviously not ideal if our render is a macro view of a lightbulb. However, it works well in interiors and other scenes where we aren’t hyper-focused on the emitting element itself.

Distribution Textures

The Distribution field, as mentioned before, takes a texture. We might add some random texture, like noise or checkerboard, and obtain fascinating results. However, it’s hard to control, since all light in Octane is spherically mapped. So let’s take a look at two methods to make good distribution textures.

Spotlight distribution

A Spotlight Distribution node in Octane is a quick and easy method to observe what Distribution actually does.

The spot distro effectively takes the entire spherically mapped emission area of the material and blocks out the majority of it, leaving a small circle on one side that generates a directional cone of light. The greater the circle, the less space it blocks and the wider the cone. The fuzzier the circle’s edges, the softer the spotlight.

Important: If we use this in an emissive material on random geometry, we must set the Orientation to “Direction – object space” so that the spot moves with the object when we rotate it. There are additional options here as well, such as “Direction – world space” if we want the light to always point down. This is only an issue with emissive materials; area lights with the default spot distribution will aim correctly.

Because we’re blocking a portion of the emitting texture, the smaller the cone, the less light is emitted; if we make it too small, we may not see anything until we crank up the power. Just something to keep in mind.
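That power loss can be estimated with the spherical-cap solid angle formula (a geometric approximation that ignores the distribution’s edge softness): a cone of half-angle θ passes only (1 − cos θ)/2 of a full spherical emission.

```python
import math

def emitted_fraction(half_angle_deg: float) -> float:
    """Fraction of a full spherical emission left unblocked by a spot cone.
    Spherical-cap solid angle 2*pi*(1 - cos(theta)) over the sphere's 4*pi."""
    theta = math.radians(half_angle_deg)
    return (1 - math.cos(theta)) / 2

print(round(emitted_fraction(90), 3))  # -> 0.5 (a full hemisphere)
print(round(emitted_fraction(15), 3))  # -> 0.017 (a tight spot blocks ~98% of the power)
```

This is why a narrow cone may need a big Power boost before anything shows up in the render.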

IES Lights

The Illuminating Engineering Society (IES) created an open standard for these textures that is realistic enough to be used in engineering light-simulation tools like Dialux while still being great for aesthetic 3D software like Octane. In fact, the .ies file can be utilized straight in Octane without being converted.

Most light manufacturers’ websites provide IES files for free because they want you to simulate their lights before purchasing the real thing for your house or business. Even better, 3D models of the actual fixtures are frequently free from the same sources, allowing us to map our correct distributions onto real-looking fixtures! Also, the IES data manages efficiency for us, so there’s no need to guess or waste time researching it.

Proper IES Light Setup

Because .ies files contain engineering data rather than creative image textures, there are a few things we’ll need to do to correctly set up an IES light if we want to match the real-world data contained in the file. IES files can be used in an Octane Light object, such as the Area Light, or in an emissive material. Both are functionally identical but with distinct defaults and considerations.

If we use an Area Light, we’ll need to scale it to roughly the size of the fixture for which the IES data was designed. This is under the Details tab of the actual light object (not the tag).

If we use a material, all channels (except Emission and Opacity) should be set to pure black and/or a 0 float (no contribution) – this is especially important for the Albedo/Diffuse channel. The Opacity should be set to 1. It’s usually easier to use a Diffuse material as the emissive base because there are fewer things to turn off. Then connect a Blackbody Emission (not a Texture Emission) to the Emission channel and turn OFF Surface Brightness.

The remaining procedures are similar for both a light and an emissive material.

The Power and Temperature of the Emission should be matched to the real-world power and temperature of the light on which the IES data is based. If we can’t find the specs on the manufacturer’s website, we can pretty safely assume LED at this point, which is usually in the 7.5 – 15W range, and the temperature is probably in a range of 2800 – 7000K.

In the Texture field:

Attach a Gaussian Spectrum node and set all three values to 1 (wavelength, width, and power). The IES texture will manage the efficiency for us, but we may achieve purposeful dimming by lowering the Gaussian Spectrum’s Power.

In the Distribution field:

We must remember to always use an Image Texture node. If we simply click the little down arrow next to the Distribution field and select the IES file instead of choosing c4doctane > ImageTexture, the IES file will import as a C4D Bitmap shader, causing everything to break. If an Image Texture node is not used, there will be wailing, gnashing of teeth, and fire and brimstone.

      • We must ensure that the .ies file is loaded in the File field rather than the .png preview that is sometimes included.
      • The ImageTexture’s Power needs to be set to 1 – we will change the dimming in the Gaussian Spectrum node.
      • The ImageTexture’s Color space needs to be set to non-color data. Gamma is then irrelevant.
      • The ImageTexture should be attached with a Projection node set to Spherical; this will allow us to target the light, whereas UVW Projection will not (even if the emission will still be spherical).

Coloring lights

We have two approaches for coloring lights in Octane. The Blackbody Emission node utilizes the scientific black body radiation scale, which is often used in real-world light fixtures and to depict celestial objects such as stars. This is perfect for simulating an interior light fixture (such as an IES light) or simply creating a warmer or cooler light. For neon or other lights that utilize colored glass or gels, we can use Texture Emission to select a color that is not on the black body scale.

The primary difference between Blackbody and Texture emission is how color is handled.

Blackbody Emission

The Blackbody Emission node colors light with a single Temperature value in Kelvin, mirroring real-world bulb ratings: lower temperatures (around 2800 K) give warm orange light, while higher temperatures (6500 K and up) give cool, blue-white light.

Texture Emission

Another approach is coloring light through the Texture Emission node. Rather than having a dedicated temperature setting like Blackbody, this relies on the Texture input to specify the color. Since the Texture field also manages efficiency, there is now a relationship between color and efficiency (as there is in the real world): different wavelengths have different apparent brightnesses.

We can put any texture in here, but if we want a specific color, we can use an RGB Spectrum node to define the color via RGB or HSV, a Float Texture to get a 0-1 grayscale, or a Gaussian Spectrum, which is what we really want to control noise and fireflies.

If we use RGB or Float, we need to avoid extremes like 255,0,0 (or 1,0,0 if we’re using Octane’s color model) or float values above roughly 0.95. Because all Gaussian values are real-world values, we won’t have any problems there.

To manage the color with a Gaussian Spectrum node, we must work in small increments and approach it similarly to an HSV color model. It becomes complicated because of how light works and how the eye sees it. The three settings in the Gaussian Spectrum node (wavelength, width, and power) are all interrelated, and getting the proper color isn’t as simple as shifting one slider.

Wavelength is similar to Hue. The wavelength range of the visible light spectrum in the real world is 380–720 nm. However, in Octane, it’s been remapped to a 0–1 scale, with 0 as violet and 1 as red.

Width is similar to Saturation, but what it really does is spread out the range using the Wavelength value as a center point, so the color starts to mix with surrounding hues and becomes less saturated as it grows larger. A width of 1 generates full-spectrum (white) light, as shown by the bar on the far right in the image above. A width of 0.05 creates an extremely thin band that, like the other bars above, isolates single colors much better.

Power is similar to Value, but it depends heavily on the color. As in the image above, with the same Emission Power and width (apart from white), we can see that we need to reduce the Gaussian’s Power all the way down to 0.05 for the 0.5-wavelength tube (greenish-yellow) to have almost the same relative intensity as the 1-wavelength tube (red). It works on a bell curve and can be evaluated by turning up the Bloom post effect and checking whether a certain color is significantly hotter than another.
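The wavelength remap is simple to write down; the bell curve below is only an assumed Gaussian form to illustrate how Wavelength, Width, and Power interact (Octane’s exact internal curve isn’t specified in this guide, and the Width-to-sigma mapping is a guess):

```python
import math

def slider_to_nm(w: float) -> float:
    """Map Octane's 0-1 Wavelength slider onto the visible 380-720 nm range."""
    return 380 + w * (720 - 380)

def gaussian_intensity(x, wavelength, width, power):
    """Assumed bell curve: peak `power` at `wavelength`, spread by `width`
    (here width is treated as twice the standard deviation)."""
    sigma = max(width, 1e-6) / 2
    return power * math.exp(-((x - wavelength) ** 2) / (2 * sigma ** 2))

print(slider_to_nm(0), slider_to_nm(1))         # -> 380 720
print(gaussian_intensity(0.5, 0.5, 0.05, 1.0))  # -> 1.0 (peak at the center wavelength)
```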

Wrap-up

That covers the majority of the main aspects of how light works in Octane. Hopefully, this article has given you a better grasp of how lighting works and can help you create faster, cleaner, more accurate renders.

Render faster in Octane with iRender


Make iRender your partner in creativity and unleash your potential with our high-performance GPU servers.
Start rendering today!

 

iRender – Happy Rendering!

Reference sources: Scott Benson on otoy.com


		

Should we choose NVIDIA or AMD GPU for Redshift? https://irendering.net/should-we-choose-nvidia-or-amd-gpu-for-redshift/ Fri, 23 Jun 2023 06:00:55 +0000 https://irendering.net/?p=26579

Should we choose NVIDIA or AMD GPU for Redshift?

When it comes to choosing a GPU for Redshift rendering, there are two main players in the market: NVIDIA and AMD. While NVIDIA has been the dominant player for a long time, AMD has recently gained attention for its support for Redshift through its HIP platform. In this post, we’ll take a closer look at which one we should choose: an NVIDIA or AMD GPU for Redshift.

Introducing AMD’s HIP

An interesting addition in this recent Redshift update is the ability to use AMD GPUs. Previously, NVIDIA GPUs had been the only option for 3D rendering software due to their CUDA API, which made it easier for developers to program and utilize the hardware. In contrast, AMD offered their own rendering solution called ProRender as a plugin for software like 3ds Max and Maya. However, that has changed recently with AMD’s HIP libraries.

HIP is AMD’s equivalent to NVIDIA’s CUDA, allowing developers to better integrate AMD hardware into their rendering engines. Blender was one of the first to incorporate HIP. Now, Maxon has integrated HIP into Redshift.

AMD HIP Ray Tracing (HIP-RT) (Source: GPUOpen)

There are two versions of HIP – standard HIP and the newer HIP-RT. Both NVIDIA and AMD’s latest GPUs have hardware-accelerated ray tracing cores. However, standard HIP does not utilize these ray tracing cores just as CUDA does not. HIP-RT will enable these ray tracing cores to help with rendering, just as NVIDIA’s RTX GPUs currently do.

This is important because when testing AMD GPUs in Redshift, NVIDIA GPUs will naturally have an edge since more of their hardware is being utilized. We need to compare performance with and without ray tracing enabled to get a better sense of how AMD GPUs actually perform in Redshift rendering. The performance comparison is in the third section. First, let’s look at NVIDIA’s CUDA and AMD’s HIP technologies and examine each brand’s most powerful GPU for Redshift: NVIDIA’s flagship RTX 4090 vs AMD’s RX 7900 XTX.

NVIDIA’s CUDA GPU vs AMD’s HIP GPU

NVIDIA’s CUDA (Compute Unified Device Architecture) is a parallel computing platform and application programming interface (API) that provides a way to use the GPU for general-purpose computing. This platform is optimized for NVIDIA GPUs and provides excellent performance in Redshift. On the other hand, AMD’s HIP (Heterogeneous-Compute Interface for Portability) is a platform that allows developers to write code for both AMD and NVIDIA GPUs using a single source code. This platform provides a way for AMD GPUs to be used in Redshift, but the performance may not be as optimized as NVIDIA’s CUDA.

AMD’s new Radeon RX 7000 series GPUs are Team Red’s answer to NVIDIA’s RTX 40 series. NVIDIA’s current top GPU is the expensive but extremely powerful RTX 4090. AMD has two top GPUs, the RX 7900 XT and RX 7900 XTX. How does AMD’s flagship, the XTX, compare to NVIDIA’s most powerful flagship? Let’s examine:

NVIDIA GeForce RTX 4090 vs AMD Radeon RX 7900 XTX

NVIDIA RTX 4090:

      • 24 GB GDDR6X memory (faster than GDDR6)
      • 76 billion transistors
      • 3rd-Generation RT Cores for Ray tracing
      • 4th Generation Tensor Cores for AI tasks
      • 450W Power

AMD Radeon RX 7900 XTX:

      • 24GB GDDR6 memory
      • 58 billion transistors (24% fewer)
      • Ray Accelerators for ray tracing (less capable than NVIDIA’s dedicated RT Cores)
      • No Tensor Core equivalent for AI tasks
      • 355W power limit (lower than RTX 4090)
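The “24% fewer” transistor figure above is easy to verify from the two spec sheets:

```python
# Spec-sheet values from the comparison above (billions of transistors, watts).
specs = {
    "RTX 4090": {"transistors_billion": 76, "power_w": 450},
    "RX 7900 XTX": {"transistors_billion": 58, "power_w": 355},
}

# Relative transistor deficit of the RX 7900 XTX vs the RTX 4090:
deficit = 1 - specs["RX 7900 XTX"]["transistors_billion"] / specs["RTX 4090"]["transistors_billion"]
print(f"transistor deficit: {deficit:.0%}")  # -> transistor deficit: 24%
```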

Which is better?

Why RTX 4090 is better than RX 7900 XTX (Source: Versus)

On paper, NVIDIA’s RTX 4090 has advantages in GPU clock speed, memory clock speed, FLOPS, texture rate, memory bandwidth, transistor count, shading units, TMUs, dedicated ray tracing cores, Tensor cores, and higher power limits. 

But how does this translate to real-world Redshift performance? Let’s explore!

Performance Comparison: NVIDIA or AMD GPU for Redshift

RT Accelerated Performance (RTX Enabled)

We will first compare render times using Redshift’s default settings. This allows Redshift to utilize NVIDIA’s dedicated RT cores, while AMD’s ray tracing hardware goes unused. This shows the full potential of each card at the time of this post.

Redshift 3.5.15 Performance on RTX enabled mode (Source: Puget Systems)

Based on these render times, the RTX 4090 rendered the image 124% faster – more than twice as fast as the RX 7900 XTX. However, even at this disadvantage, AMD’s top RX 7900 XTX still comes close to matching NVIDIA’s RTX 3080. If that were the whole story, it would still be a good result for AMD and Maxon in their first attempt at GPU rendering on Windows. NVIDIA has had years to perfect their CUDA and RTX APIs, so AMD keeping pace shows promise.
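For clarity, here is how an “X% faster” figure is computed from render times. The times below are hypothetical placeholders chosen only to reproduce the 124% ratio; the actual benchmark times are in the Puget Systems chart.

```python
def percent_faster(time_slow: float, time_fast: float) -> float:
    """'X% faster' from render times: (t_slow / t_fast - 1) * 100.
    124% faster therefore means the slower card takes 2.24x as long."""
    return (time_slow / time_fast - 1) * 100

# Hypothetical render times (seconds) chosen to reproduce the 124% gap:
print(round(percent_faster(112.0, 50.0), 1))  # -> 124.0
```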

RT Disabled Performance (RTX Disabled)

Redshift 3.5.15 Performance on RTX disabled mode (Source: Puget Systems)

When testing the GPUs without using ray tracing to make it a fair comparison, the results show the 7900 XTX performing better than the RTX 3090 and coming close to the newest RTX 4070. This demonstrates that while NVIDIA still holds an edge in GPU rendering performance, AMD isn’t too far behind when ray tracing acceleration is removed from the equation.

Conclusion

The initial version of AMD hardware support in the Redshift render engine is quite promising. Although AMD mainly focused on gaming performance when launching the RX 6000 and 7000 series graphics cards, without much emphasis on professional content creation applications, we are now seeing AMD hardware provide good performance in the Redshift renderer. Having a large rendering program like Redshift adopt AMD support will put pressure on NVIDIA to continue innovating and improving their products.

To answer the question “Should we choose an NVIDIA or AMD GPU for Redshift?”, we need to balance needs, budget, and performance. 

If maximum Redshift performance – especially with complex ray-traced scenes – is your top priority, NVIDIA GPUs with their dedicated RT and Tensor cores are unquestionably a better choice than what AMD Radeon cards have to offer. 

However, if your budget is tight or ray tracing isn’t essential, AMD Radeon cards offer a good value at around half the cost.

In summary, your specific workflow, scene complexity, and budget will determine which brand is better. NVIDIA CUDA and AMD HIP both have advantages and trade-offs. But for the outright rendering performance, NVIDIA GPUs provide a remarkably compelling option for Redshift rendering. For professionals and studios, that significant performance advantage far outweighs the lower cost of AMD cards. When your time and income depend on rendering performance, NVIDIA’s dominance in Redshift makes their GPUs worth the premium.

Render faster in Redshift with iRender

iRender provides high-performance GPU rendering services to unleash creativity for 3D artists. We offer flexible configurations of 1, 2, 4, 6, and 8 GPU servers using the top-tier RTX 4090 and RTX 3090 for accelerated Redshift rendering. Powered by powerful AMD Ryzen Threadripper PRO CPUs with up to 64 cores, 256GB RAM, and 2TB NVMe SSD storage, our servers can handle even the most demanding Redshift scenes. 

Best of all, we grant you complete access to your rented servers. For as little as $8.20 per hour, you can install any software, plugins, or Redshift versions you need. We treat your rented servers like your personal workstations – no restrictions apply.

This freedom enables you, 3D artists, to realize creative visions without limitations. Leverage our accelerated GPU rendering and scalable power to easily tackle CPU-intensive renders that formerly bottlenecked creativity. Say goodbye to renders taking ages and hello to rendering freedom.

Check out Redshift rendering performance on our 6x RTX 4090 server powered by AMD Ryzen Threadripper PRO 5975WX.

Make iRender your partner in creativity and unleash your potential with our high-performance GPU servers.
Start rendering today!

 

iRender – Happy Rendering!

Reference sources: NVIDIA, AMD, Puget Systems, Versus, GPUOpen


		

A deep dive into Caustics in Octane for Cinema 4D https://irendering.net/a-deep-dive-into-caustics-in-octane-for-cinema-4d/ Fri, 16 Jun 2023 06:00:21 +0000 https://irendering.net/?p=26492

A deep dive into Caustics in Octane for Cinema 4D

At some point in our lives, we have all noticed or created beautiful light patterns by moving a metal or glass object around a sunny room. These patterns are known as “caustics;” they are formed when light bounces off a reflective surface (or passes through a refractive one) and concentrates on a matte surface. Caustics require lots of calculations. It is one of the more difficult tasks we can assign to a GPU. In this post, let’s take a deep dive into Caustics in Octane for Cinema 4D. We will look at how Octane handles this phenomenon and the best method to set up our scene to obtain the most impact without spending the rest of our lives rendering.

I. Basics to get you up and running

What we need to get interesting visible caustic patterns:

      • A piece of geometry with reflective or refractive material on it such as metal, glass, shiny plastic, etc., to cast caustics.
      • A piece of geometry on which the caustic pattern will occur. The material used to cover this surface should be matte or rough. It can have some reflection, but not too much since it can degrade the effect.
      • A light bouncing off our reflective (or through refractive) geometry onto the matte surface.
      • Set the Octane kernel to Pathtracing, PMC, or Photon Tracing. If you are using Photon Tracing, you need to turn on the “allow caustics” checkbox on each material that casts caustics.

Next, set it up really quickly:

Step 1: Geometry

In a new scene, create a tube primitive; this is the caustics-casting object. Make the outer radius 60, give it 100 rotation segments, position it +50 cm on Y, and turn on Slice in the Slice tab. This defaults to a slice with a perfect range of 0-180 degrees. Next, make a Plane object and set it to 1000x1000 cm for our caustic pattern to appear on.

Step 2: Materials

Next, create two materials. The first is for the plane, so it should be a matte dark gray: bring the Specular to 0, change the Albedo to H:0, S:0, V:25%, and load it onto the plane. The second material is the metal for the tube: set Metallic to 1 and the Albedo to pure white (H:0, S:0, V:100%), and apply it to the tube. To allow the Photon Tracing kernel to cast caustics, we must also check the "Allow caustics" box on the IOR tab.

Step 3: Lighting

This time, keep things simple using an Octane Area Light. In order for it to bounce off the inside of the tube and then hit the floor plane, set the Position coordinates to X:0, Y:300, Z:-300, and the Rotation.P to -45 degrees.

Step 4: Rendering

Set the kernel to Path Tracing, Max Samples to 2048, and GI Clamp to 10 for our render parameters. The earliest samples arrive as white speckles, but they eventually resolve into a kidney-shaped nephroid caustic pattern.

Though it is a little weak and grainy, we have caustics finally!

      • When we use the PMC kernel, we need fewer samples to achieve a beautiful look, but rendering even half as many samples takes roughly twice as long. The caustic pattern also needs more samples to resolve, so unless we have a true beast of a workstation, lookdev tuning would take an eternity.
      • Now, switch to the new Photon Tracing kernel and ensure that our metal material's IOR tab has "Allow caustics" checked. Within 4 samples, the caustics start to visibly resolve! Furthermore, it takes far fewer maximum samples to make the result look beautiful. If you want caustics in Octane, this is the way to go most of the time.

II. A deep dive into Caustics in Octane for Cinema 4D

The concentration part is the most interesting aspect of replicating caustics in Octane: concentrating the rays is essential to getting the most visible caustic patterns. Four factors affect this: the geometry of the casting object, the materials, the lighting, and the render settings. Let's dive into each.

1. Geometry

Knowing that we want light concentration, it’s clear that the shape of the casting object will be significant.

We can see above how, when using a reflective material (metal in this case), the more concave its shape becomes, the more light is concentrated, and the more complicated and visible the caustic pattern becomes as it builds up in a tighter area on the ground plane. Convex forms (far left shape) splay the light rays out, leaving only a weak glow at the object’s base. A perfectly flat mirror simply lights the ground plane and isn’t very interesting on its own. The 180-degree concave form (the far right shape) bounces light and creates an intricate nephroid pattern.

More complicated shapes create more complex patterns. The geometry doesn’t even have to curve; it only has to redirect the light, so we could use a flat plane with a bump or normal map (far right in the above image) or a lot of tiny angular pieces (the second shape from the left) to obtain an attractive pattern. The more curvature there is in the geometry, the smoother and more organic the caustic pattern will be.

Refractive caustics are far more difficult. There are significantly more calculations involved, making them harder to predict and also more demanding on the GPU (resulting in longer render times).

If we use the same set of curved uniform cubes as in the reflective material example, we find that the light does not concentrate all that much regardless of how much it is curved. This is due to how optics works: with a uniform wall, the entrance and exit angles cancel each other out, so most rays continue to go in the same direction, and the resulting pattern is rather uniform and uninteresting.

Because the entrance and exit angles in a convex lens compound rather than cancel each other out, the rays all angle in and focus the light, which is just what we want for great caustics.
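This cancellation is easy to verify numerically with Snell's law. Below is a minimal Python sketch (the 30-degree incidence angle and glass IOR of 1.5 are illustrative values, not from the article): the exit angle from a parallel-walled slab equals the entry angle, so the ray's overall direction is unchanged.

```python
import math

def refract(theta_deg, n_from, n_to):
    """Snell's law: angle (in degrees) of the ray after crossing an interface."""
    s = (n_from / n_to) * math.sin(math.radians(theta_deg))
    return math.degrees(math.asin(s))

# A ray hits a flat glass slab (parallel walls, IOR 1.5) at 30 degrees.
inside = refract(30.0, 1.0, 1.5)     # bends toward the normal on entry (~19.5 deg)
outside = refract(inside, 1.5, 1.0)  # bends back on exit

print(round(inside, 2), round(outside, 2))  # exit angle equals the entry angle
```

The entry and exit refractions undo each other, which is why uniform walls produce flat, uninteresting caustics, while a convex lens (non-parallel surfaces) makes the two refractions compound instead of cancel.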

If we go a step further and make more complicated shapes, light will refract and concentrate in more attractive patterns. However, the IOR and light placement play an equal part in redirecting the rays. Let’s see the following Material and Light factors. 

2. Materials

To achieve the brightest, sharpest caustics, our casting material should either reflect or transmit as much light as possible while our receiving material should reflect, transmit, and/or absorb as little light as possible. To control this, we need to look at a few properties.

2.1 Reflective Casting Material

Roughness is the simplest property to understand: the rougher the reflective casting material, the more it disperses light rays, and the blurrier and weaker the caustic pattern becomes. Fine bump or normal maps have the same impact as roughness, so we need to apply them carefully.

The color of the casting material is also crucial. In the HSV color model, colors with a high Value (V) reflect more light and generate better results. Lower Saturation (S) values result in whiter overlapping areas that stand out more against darker backgrounds.

Certain hues (yellow/60°, cyan/180°, and magenta/300°) naturally reflect more light than others (red/0°, blue/240°, or violet/280°), so with darker hues we need to compensate with lower saturation levels and higher reflectivity (IOR). We can add variation by using Thin Film or a Coating layer driven by a gradient.
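The difference between those hue groups comes down to how many RGB channels each hue lights up, which we can check with Python's standard colorsys module (hue values approximate, with magenta taken as 300°; luminance uses the Rec. 709 weights):

```python
import colorsys

def luma(hue_deg):
    """Rec. 709 luminance of a fully saturated, full-value HSV color."""
    r, g, b = colorsys.hsv_to_rgb(hue_deg / 360.0, 1.0, 1.0)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

for name, hue in [("yellow", 60), ("cyan", 180), ("magenta", 300),
                  ("red", 0), ("blue", 240)]:
    print(f"{name:8s}{luma(hue):.3f}")
```

Secondary hues (yellow, cyan, magenta) combine two full RGB channels, so they carry more light energy than the single-channel primaries, which matches the compensation advice above.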

2.2 Refractive Casting Material

For good refractive caustics, we want as much unencumbered light passing through our geometry as possible. Roughness and base color behave as they do for reflective materials: the lower the roughness and the lighter the color, the more light gets through and the brighter the patterns. Refraction differs from reflection in that we also have IOR and dispersion to contend with.

Important: In more recent versions of Octane, the Transmission channel’s default mode is Diffuse. For the greatest results, change this to Specular. Diffuse transmission will, well, diffuse the light rather than concentrating the rays into caustic patterns as we want.

IOR is a balancing act involving geometry and light placement. In general, the lower the IOR, the less the light bends and the farther away the focal point. The path of the light is also influenced by the thickness of the geometry and the angle at which the light hits the walls. When the IOR is too high, the focal point is so close that the rays exiting the opposite side scatter, making the caustic pattern soft or difficult to see.

The IOR of air is 1.0, water is 1.3, normal glass is around 1.5, and the highest refractive IOR found easily in nature is 2.4, which is seen in gemstones such as diamonds. If sharp caustics are required at higher IORs, the wall thickness must usually be fairly thin, or the object must be faceted like a cut gem.
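To put rough numbers on how much each of these IORs bends light, here is a small Python check using Snell's law (the 45-degree incidence angle is an arbitrary illustrative choice):

```python
import math

def deviation(theta_deg, ior):
    """Degrees a ray entering from air (IOR 1.0) is bent toward the normal."""
    theta_out = math.degrees(math.asin(math.sin(math.radians(theta_deg)) / ior))
    return theta_deg - theta_out

# Higher IOR bends the ray further at the same incidence angle.
for name, ior in [("water", 1.33), ("glass", 1.5), ("diamond", 2.4)]:
    print(f"{name:8s}{deviation(45.0, ior):.1f} deg")
```

Higher IOR bends the ray more and pulls the focal point closer, which is why high-IOR materials like diamond need thin walls or facets to keep caustics sharp.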

Dispersion is a natural phenomenon that occurs when the various wavelengths of visible light interact with a transmissive object. This splitting produces distinct hues, which carry over into the caustics and can create some quite stunning effects. Sharp geometry, such as prisms or cut jewels, will produce more distinct color splitting than soft, wavy geometry, such as a noise displacer.

Dispersion + caustics is one of the most difficult things we can ask a GPU to do, so we'll have to be patient while it calculates refraction, dispersion, and caustics together. Higher dispersion values slow down render time, and 0.1 (the maximum) often produces a lot of noise and artifacts that require a ton of samples to clean up.

2.3 Receiving Material

The base color of the receiving material should be reasonably dark and non-reflective (an Albedo V (value) of 20-30%, with the Specular and Metallic channels set to 0, produces decent results). This will also shift as the light's intensity varies. In general, the lighter the receiving material, the more the caustic patterns are washed out. However, it cannot be too dark either, or it will absorb all of the light, as seen on the far left in the above image.

In this case, a value of 30 is right about the sweet spot: dark enough to contrast nicely, but not so dark that the caustics are absorbed, nor so bright that we lose details in the bright parts.

Reflection also reduces the effect of caustics by bouncing light away and reflecting the original geometry, which may or may not be desirable. Roughening a metallic or highly specular dielectric surface will help, but the roughness must be fairly high before the caustics visibly sharpen (start with 0.5 and work your way up).

3. Light

In a word, we want relatively small, powerful lights that emit as many directable, parallel-ish rays as possible, which we can bounce off (or pass through) our objects without washing out the resulting pattern.

A parallel light would produce the sharpest possible shadows and caustics because all of its rays are emitted in the same direction. Because there is no real-world equivalent for this kind of light, it does not exist in Octane. Light sources in the real world (and in Octane) emit rays in all directions. Our goal is to have as many parallel-ish rays as possible reflecting off (or passing through) our casting object. The curvature of a reflective object, or the IOR and curvature of a refractive one, can then focus these rays even further to give us sharp, clear caustics.

Rays that do not hit our object but instead hit the surrounding surface will look like normal illumination, and depending on the size and angle of the light, this can be quite bright.

So, how can we control the light rays if we can’t get a parallel light going?

We can reduce the surface area of the light relative to our casting object. Fewer rays will hit the object, but those that do will be more parallel than with a larger light, because the more sharply angled rays simply miss it. If we use Surface Brightness, we must make the small light stronger than a larger one. A closer, smaller light will enlarge and distort the caustics/shadow area, which may or may not be acceptable. If caustics are a key goal of the render, one of the simplest ways to alter their sharpness is to change the size of the lights that affect the casting objects.
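We can quantify that "more parallel" intuition with a little trigonometry. The sketch below (hypothetical sizes and distance, not from the article) computes the widest angle a useful ray can make with the light-to-object axis; shrinking the light tightens that spread:

```python
import math

def max_ray_angle(light_radius, target_radius, distance):
    """Widest angle (degrees) a ray can make with the axis and still travel
    from a point on the light to a point on the target object."""
    return math.degrees(math.atan((light_radius + target_radius) / distance))

# Target of radius 25 cm, light 200 cm away.
print(round(max_ray_angle(100.0, 25.0, 200.0), 1))  # large light: wide spread
print(round(max_ray_angle(10.0, 25.0, 200.0), 1))   # small light: much tighter
```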

We can also move the light farther from the casting object. Because of the geometry and angles, fewer rays will hit it, and we must now account for light falloff (the inverse-square loss of intensity over distance) and increase the power accordingly. The advantage is that the rays will be much more parallel, producing sharper, less distorted caustics and shadows. The disadvantage is that it takes a lot of tweaking to focus properly. Unless the caustic pattern's size is a problem, it is preferable to "focus" the caustics using the light size and material IOR.
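That falloff compensation follows the inverse-square law: doubling the distance quarters the illuminance, so the power must scale with the square of the distance ratio. A minimal sketch with made-up numbers:

```python
def compensated_power(power, old_distance, new_distance):
    """Power needed at the new distance to keep the same illuminance on the
    subject, per the inverse-square law (point-light approximation)."""
    return power * (new_distance / old_distance) ** 2

# A light moved from 3 m to 6 m away needs 4x the power.
print(compensated_power(100.0, 3.0, 6.0))  # -> 400.0
```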

3.1 Light Cancellation

One of the major problems with shining a bright light on a surface is that the light can muddy or wipe out the caustic pattern. There are a number of options for dealing with this, such as gobos, IES lights, and physically blocking rays with geometry, but a Spotlight Distribution node is a quick and straightforward approach we may consider.

At the time of writing, neither Octane nor C4D’s light linking works with caustics.

To get a spotlight distribution on an area light, simply click the Octane Light tag and navigate to the Light Settings. Click the small down-arrow icon next to Distribution, then select c4doctane, and finally Spotlight Distribution.

The two most essential values here are the Cone Angle and Hardness. They will differ based on the size of the light, the distance between the light and the casting object, and the shape of the casting object, so it will simply be a matter of experimenting till it looks nice.

4. Render Settings

The render settings are the final step in the caustics equation.

Caustics are generated by the Pathtracing, PMC, and Photon Tracing kernels. Pathtracing was once the quick way to generate okay-ish but denoise-able caustics, whereas PMC was the go-to for better-looking but GPU- and patience-taxing ones. Octane's most recent versions add the Photon Tracing kernel, effectively a supercharged version of Pathtracing that outperforms both PT and PMC. In fact, it's so much better that we're skipping over the other two kernels in this guide.

Because many of the parameters are the same as in Pathtracing, we won’t go through them as thoroughly as the new Photon tracing-related settings. 

The most important thing to understand about caustics in Photon Tracing is that it is material-driven. If Octane took every material and object in a scene into account when producing caustics, our computers would catch fire, so a line had to be drawn somewhere so we could actually finish our renders.

Important: Only casting materials with the IOR channel option “Allow Caustics” checked will cast caustics in the Photon Tracing kernel. Fake Shadows must also be unchecked in the Common tab for it to work.

Now that we’ve gotten that out of the way, we can dig into the tweaky options.

4.1 Basic Photon Tracing Settings

The top section of the setting box looks a lot like Pathtracing.

Max Samples still determines how many passes the kernel makes to resolve the image; in general, more passes mean less noise. The beautiful thing about Photon Tracing is that we need FAR fewer samples to resolve caustics than with PT or PMC, so we can really get away with 256 or 512 while doing lookdev, which is incredible.

Although Diffuse, Specular, and Scatter depth have no effect on caustics, they must be considered for the appearance of the objects themselves.

Ray Epsilon does affect caustics; however, it should be left alone most of the time. Keeping it as low as possible (the default is 0.0001) is ideal for realism, but we can increase it by orders of magnitude for different creative effects.

Although GI Clamp has no direct effect on caustics, it does affect other GI-related light in the scene. A reasonable default is 10; if the scene is mostly caustics, try setting it to 1 and see whether the render cleans up faster.

Caustic-specific settings

Photon depth: Similar to Diffuse/Specular/Scatter depth, this is how many times photons bounce through different materials before they are extinguished. For the most part we can leave it at the default (6), but with many panes of glass or multiple mirrors in a scene, we'll want to increase it. It has little effect on render speed, so we'd only reduce it to suppress wild caustics we don't want for visual reasons.

The Photon gathering radius is the most important caustic-related setting; it is basically the photon resolution. The higher the value, the smoother and softer the caustics resolve; too high, and they become fuzzy. The smaller the value, the sharper the caustics; too low, and graininess and artifacts may develop. It's a balancing act.

The Photon Count Multiplier acts as a bias slider for how much emphasis Octane places on caustics over other path tracing calculations. Given infinite time and samples, it wouldn’t matter whether this was set to 1 or 8 – everything would eventually resolve roughly the same, but we don’t have time for infinite samples right now, so we need to guide Octane’s efforts.

As a general rule, if the scene has simple caustics (one light source, reflective material, or mild caustics that aren’t a major component of the scene), the default of 2 will serve, and more emphasis is placed on making the rest of the scene seem better quickly. If the scene is all about the caustics, with crazy, jagged patterns, refraction, and dispersion, and the GPU is huddled in the corner, crying out, set this to 8 if you have the VRAM for it.

Setting it to 8 during lookdev is also a useful technique to resolve caustics faster, allowing for faster light/material/etc. tweaking. Once everything is in place, you can return to work on the rest of the scene.

Photon Gather Samples – depending on the scene’s structure and noise tolerance, this can significantly speed up caustics renders. For the example above, 2 (default) delivers decent quality at a reasonable render time given the hardware, use case, size, and so on. When the gather samples are increased to 4, the render time is virtually cut in half, which is fantastic for lookdev, but it introduces some noise, which may or may not be acceptable. As the number of samples increases, the noise increases dramatically while the render times just slightly improve. The trick is to locate the sweet spot for lookdev and then the final render. (8 and 2 for this scene, respectively).

Photon Exploration Strength – this should be left alone most of the time, but if there’s a small amount of noise that won’t go away, increasing this to 1 may help at the expense of some render time. Setting it to 0 can also result in some fascinating aesthetic effects.

4.2 Denoising

Denoising caustics follows the same rules as anything else in a render. The denoiser works best when the forms are larger, smoother, and more organic. Sharp, detailed renders with tight patterns can be ruined if not enough samples are thrown at the initial render before denoising.

In a simple reflective arrangement like the one above, we can see that it performs admirably, even at 8 samples. For this scene, 64 or 256 is certainly better, but we have a lot of leeway. Everything is neatly smoothed out by the denoiser.

In a complex setup like this, we have noticed that the denoiser eats away the fine caustic patterns (particularly in the center right at the base of the cylinder and also on the back wall of it). It is not until we reach the 8192 range that it becomes suitable for final output.

Path Term Power, Coherent Ratio, and Adaptive Sampling are the same as they are in Pathtracing.

      • Path Term Power is useful if there is a lot of noise in the render’s darker parts. We can reduce this to help clear up some of that. We can also increase it to speed up the render but add more noise in darker parts- if the render has no dark parts, this may not be necessary.
      • Coherent Ratio can also significantly speed up the render, but there will be a lot of noise and artifacts unless the maximum sample count is set extremely high. This is particularly difficult with animation.
      • Adaptive Sampling has its own procedure and is outside the scope of this guidance, but it does work with Photon Tracing.

Hopefully, you now have a much better understanding of caustics in Octane and Octane's new Photon Tracing kernel. This is one of the most challenging tasks for a render engine, so mastering it will take time and patience. But it is well worth it!

III. Speed up Octane rendering with iRender

iRender provides high-configuration single- and multi-GPU servers, specifically 1/2/4/6/8x RTX 4090 & RTX 3090. Powered by two of the best single-core-performance CPUs for Octane, AMD Ryzen™ Threadripper™ PRO 3955WX @ 3.9-4.2GHz & AMD Ryzen™ Threadripper™ PRO 5975WX @ 3.6-4.5GHz, with 256GB RAM and 2TB NVMe SSD storage, all our servers can handle almost any level of complexity in Octane projects.

iRender gives you, all creative individuals and studios, an affordable answer to unleashing your creativity with the beast RTX 4090 from just 8.2 USD/hour. We are proud to be the only render farm where you can install any software and plugins of any version that serves your project. You will be given complete control over the server(s) and use them as your local computers.

Let’s check out our test videos about Octane rendering performance on our 6x RTX 4090 beast.

Get a FREE TRIAL to try our RTX 4090 machines. And don’t forget to enjoy our 4th birthday’s gift.

For further information, please do not hesitate to reach us at hangtt@irender.vn or mobile: +84915875500

 

iRender – Thank you & Happy Rendering!

Reference sources: Scott Benson on otoy.com

Compare RTX 4090 scaling in Octane, V-Ray, and Redshift https://irendering.net/compare-rtx-4090-scaling-in-octane-v-ray-and-redshift/ Sat, 10 Jun 2023 06:00:33 +0000 https://irendering.net/?p=26396


Compare RTX 4090 scaling in Octane, V-Ray, and Redshift

“Does multi-GPU increase rendering performance?” is a common question among 3D artists. Theoretically, having multiple GPUs splits up the task of processing frames, allowing your system to render frames more quickly. However, how much faster depends on each renderer's ability to take advantage of multiple GPUs. Let's get to the point and dig deeper into comparing RTX 4090 scaling in Octane, V-Ray, and Redshift.

1. Octane Render

OctaneRender, unlike other GPU-based renderers, scales almost perfectly with multiple GPUs. In other words, if you use two cards, your renders will be completed twice as fast as if you only use one.  Renders will be completed four times faster if you use four cards! 

OctaneBench 2020.1.5 (Source: Puget Systems)

Typically, OctaneBench shows the best GPU scaling of any benchmark. We weren't sure how it would hold up with this many GPUs, but it works almost flawlessly: we get within a few percent of perfect scaling for each GPU added to the system.

In fact, 2x RTX 4090s end up 1.99x faster than a single RTX 4090. At the very peak, with seven RTX 4090s, we get 6.9x the performance of a single card. That is about as good as you can ask for, and it really shows how well OctaneRender takes advantage of multi-GPU configurations.

This same 7x RTX 4090 configuration is now ranked 17th in the OctaneBench database, behind configurations with 14+ GPUs. Someone has already tested a 13x RTX 4090 setup, which is currently ranked 1st on OctaneBench!

OctaneBench 2020.1.5 (Source: Otoy)

Compared to the single RTX 4090's score of 1269, the 13x RTX 4090 configuration is about 13.4x faster. What amazing scaling! Octane really shows off its excellent ability to utilize multiple GPUs. As for the question "Is there a limit to how many GPUs it can scale across this well?", so far no limit has been found, since near-perfect (even better than 100%) scaling has already been demonstrated at 13 GPUs.
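The scaling figures quoted above can be reduced to a single "efficiency" number, the achieved speedup divided by the GPU count, as in this small Python helper (the 1269 single-card score and the 1.99x / 13.4x speedups are the numbers from the article):

```python
def efficiency(single_score, multi_score, gpu_count):
    """Fraction of perfect linear scaling achieved: 1.0 means N GPUs are
    exactly N times faster than one."""
    return (multi_score / single_score) / gpu_count

single = 1269  # single RTX 4090 OctaneBench score
print(round(efficiency(single, single * 1.99, 2), 3))   # 2 GPUs at 1.99x
print(round(efficiency(single, single * 13.4, 13), 3))  # 13 GPUs at 13.4x
```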

2. V-Ray

The GPU-accelerated version of V-Ray can and does make use of multiple GPUs. It will not be a flawless “four GPUs are four times faster,” but you can expect significant performance increases with each GPU added to the system. 

V-Ray Benchmark 5.02.00 (Source: Puget Systems)

As in the above chart, we see the RTX 4090 scaling in the V-Ray in both CUDA and RTX modes. 

Once you get past four RTX 4090s, CUDA mode starts to show diminishing returns. Nonetheless, RTX mode scales beautifully all the way up to six GPUs: going from 1 -> 2 -> 3 -> 4 GPUs brings a 78% to 98% improvement per step, and from 4 -> 5 -> 6 GPUs a 50% to 78% gain. With a seventh GPU added, the per-card benefit drops slightly, but we still get a maximum of around 6x the performance of a single RTX 4090 with seven cards.

3. Redshift

Redshift supports a maximum of 8 GPUs per session. It is clear that if you are using this GPU-accelerated renderer and want to make the most of its power, your hardware should have at least two GPUs. Redshift excels at using multiple GPUs at the same time, so adding extra GPUs to your system is a terrific way to boost render speed even further.

Redshift 3.5.09 rendering performance scales across multi-RTX 4090 configurations (Source: Puget Systems)

Redshift differs from many other rendering benchmarks in that it times how long it takes to render a single scene rather than reporting the number of samples per second the system can process. One problem with this approach is that as hardware gets faster, the test takes less and less time to finish. This directly affects the measurement (especially since Redshift reports results in whole seconds rather than fractions), and it also lets factors such as scene load time account for a larger portion of the result instead of isolating the rendering itself.
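The whole-second reporting matters more as renders get faster, because a fixed plus-or-minus 0.5 s rounding error is a larger fraction of a shorter time. A quick illustration with hypothetical render times:

```python
def max_rounding_error(reported_seconds):
    """Worst-case relative error when a render time is reported in whole
    seconds (the true time may differ by up to ~0.5 s either way)."""
    return 0.5 / reported_seconds

print(f"{max_rounding_error(200):.2%}")  # long single-GPU render: small error
print(f"{max_rounding_error(20):.2%}")   # fast multi-GPU render: 10x larger error
```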

As a result, for really powerful systems such as this 7x RTX 4090 setup, the current Redshift benchmark is rather misleading. Performance scaling is excellent up to 4x RTX 4090 GPUs, but beyond that the benefit per GPU begins to drop off. At the very peak, 7x RTX 4090s deliver a 4.56x improvement over a single RTX 4090. That is not a terrible gain by any means, but it shows that while Redshift does scale as additional GPUs are added, the scaling is not linear: there are diminishing returns with each card added.

However, we believe that actual Redshift users rendering huge scenes would gain more than what this benchmark is currently capable of demonstrating.

Sum-Up from the result of comparing RTX 4090 scaling

Though Octane, V-Ray, and Redshift are all GPU-based render engines that can use multiple GPUs to render faster, their ability to take advantage of many GPUs varies. Octane stands out in that it scales almost perfectly with multiple GPUs; it is difficult to find any other GPU renderer that scales at (or even above) 100% the way OctaneRender does. V-Ray comes in second with impressively good scaling: two GPUs provide nearly twice the performance of one, with variations within the margin of error. Redshift comes last; its two-RTX 4090 results fall short of a perfect 100% improvement, but it is still a very respectable gain per additional card. And rendering huge scenes in Redshift will likely gain much more than this 7x RTX 4090 benchmark is capable of showing.

Speed up your Multi-GPU Rendertimes

When looking at multi-GPU rendering options for GPU-based renderers, many artists expect a perfect linear increase. Unfortunately, GPUs do not always scale perfectly: two GPUs render an image approximately 1.9 times faster, and four GPUs can sometimes make rendering about 3.6 times faster.

Source: CG Director

Multiple GPUs communicating with each other to render the same task (one image) costs so much performance that, in a 4-GPU rig, a large part of one GPU is effectively spent just managing coordination. One way around this is to use as few GPUs as possible per task during the final rendering of image sequences.

Consider the following example:

In a multi-GPU rig, we usually have all GPUs working on the same task. In this situation, a single task is one image in our image sequence: four GPUs work together to render one image, then move on to the next image until the full sequence is rendered.

When we have each GPU render on its own task, we can reduce preparation time per GPU (when the GPUs sit idle, waiting for the CPU to finish preparing the scene) and avoid some of the multi-GPU slowdowns. This is possible by rendering one task per GPU. So, instead of 4 GPUs working on the same image, a computer with 4 GPUs would now render 4 tasks (4 images) at once, each on its own GPU.

Some 3D-Software may provide this feature. If not, it is recommended to use a Render Manager, such as Thinkbox Deadline (free for up to two nodes/computers).

Option to set the number of GPUs rendering on one task in Thinkbox Deadline (Source: CG Director)

Note: Keep in mind that you may need to increase your System RAM and have a powerful CPU because each GPU-Task requires its own amount of RAM and CPU performance.
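The one-task-per-GPU idea boils down to a simple round-robin assignment, sketched below (the frame numbers and GPU count are hypothetical; in practice a render manager like Deadline handles the dispatching):

```python
def assign_frames(frames, gpu_count):
    """Round-robin scheduling: each GPU renders its own frames independently
    instead of all GPUs cooperating on one frame at a time."""
    buckets = {gpu: [] for gpu in range(gpu_count)}
    for i, frame in enumerate(frames):
        buckets[i % gpu_count].append(frame)
    return buckets

# Frames 1-8 on a 4-GPU rig: four frames render in parallel at any moment.
for gpu, batch in assign_frames(list(range(1, 9)), 4).items():
    print(f"GPU {gpu}: frames {batch}")
```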

Render faster in Octane, V-Ray, and Redshift with iRender Render Farm

iRender provides high-configuration single- and multi-GPU servers, specifically 1/2/4/6/8x RTX 4090 & RTX 3090. Powered by two of the best single-core-performance CPUs for GPU-based renderers (Octane, V-Ray, Redshift, and so on), AMD Ryzen™ Threadripper™ PRO 3955WX @ 3.9-4.2GHz and AMD Ryzen™ Threadripper™ PRO 5975WX @ 3.6-4.5GHz, with 256GB RAM and 2TB NVMe SSD storage, all our servers can handle almost any level of complexity in your projects.

iRender gives you, all creative individuals and studios, an affordable answer to unleashing your creativity with the beast RTX 4090 from just 8.2 USD/hour. We are proud to be the only render farm where you can install any software and plugins of any version that serves your project. You will be given complete control over the server(s) and use them as your local computers.

Let’s check out V-Ray, Octane, and Redshift rendering performance on our 6x RTX 4090 super server. 

Get our FREE TRIAL offer now to try our RTX 4090 machines for your Octane, V-Ray, and Redshift projects.

For further information, please do not hesitate to reach us at hangtt@irender.vn or mobile: +84915875500

 

iRender – Thank you & Happy Rendering!

Reference sources: Puget Systems, Otoy, CG Director


		

Octane rendering performance across multi-RTX 4090s https://irendering.net/octane-rendering-performance-across-multi-rtx-4090s/ Sat, 03 Jun 2023 06:00:30 +0000 https://irendering.net/?p=26359


Octane rendering performance across multi-RTX 4090s

Known as the world’s fastest unbiased, spectrally correct GPU render engine, Octane relies on the GPU at its core to speed up the rendering process. It should come as no surprise that building a multi-GPU system makes Octane render more quickly. But how much more quickly? Let's answer that by digging deeper into Octane rendering performance across multiple RTX 4090s.

High-end RTX 4090 - Incredible performance for Octane

As far as Octane is concerned, the GPU is by far the most important component of a computer. As of now, Octane Render on Windows supports a wide range of current NVIDIA GPUs (2012–present) via CUDA. Octane on macOS supports newer AMD GPUs through Metal, and even some Intel GPUs. Octane supports CUDA on the Mac as well, but Apple discontinued official NVIDIA support after High Sierra, so only older NVIDIA cards work on Macs running High Sierra or earlier. Since Octane performs best on GPUs with CUDA cores, we will only discuss NVIDIA GPUs in today’s article. Among NVIDIA GPUs, the GeForce RTX 4090 is currently the most powerful for Octane rendering. Why the RTX 4090? Let’s explore how Octane utilizes the GPU to find out.

The important parts of a GPU for Octane are its actual speed and the VRAM that comes with it. These two specs define how fast Octane runs and how much geometry and texture data can be loaded into the GPU’s memory at a time.

First, the VRAM

VRAM is a special type of high-speed RAM integrated into the graphics card; it determines how much texture and scene geometry can be loaded into memory at once for Octane to operate on. A scene that fits neatly into VRAM renders significantly faster and more stably than one that does not. With a generous 24GB of VRAM, the RTX 4090 can handle almost any scene, from small to heavy.

Second, the actual speed

GeForce GPUs have good raw performance, and the RTX 4090 has the best raw performance among NVIDIA GeForce GPUs thanks to its newer technology and higher core counts.

Powered by the ultra-efficient NVIDIA Ada Lovelace architecture, the RTX 4090, with 3rd-generation RT Cores, 4th-generation Tensor Cores, an 8th-generation NVIDIA Dual AV1 Encoder, and 24GB of Micron G6X memory capable of reaching 1TB/s of bandwidth, brings a massive boost in rendering performance for Octane. It beats the other GeForce 40 Series and GeForce 30 Series GPUs in CUDA cores, RT cores, and Tensor cores, and thus in rendering performance.

Let’s take a look at how the RTX 4090 stacks up.

OctaneBench 2020.1.5 (Source: Puget Systems)

The RTX 40 Series performs fantastically in the Octane renderer. Among them, the RTX 4090 dominates, rendering ~37% faster than the RTX 4080 and ~76% faster than the RTX 4070 Ti. Not to mention that the RTX 4090’s VRAM is the highest of the three cards in the 40 Series.

Compared to its predecessors (the RTX 30 Series), RTX 4090 offers a greater improvement in rendering performance. 

OctaneBench 2020.1.5 (Source: Puget Systems)

Taking into account the amount of VRAM available, the RTX 4090 24GB is the standout performer. It is 90% faster than the RTX 3090 for only a 7% price increase. It even outpaces the much more expensive RTX 3090 Ti by 83%. Without a doubt, for those looking for the best-performing GPU for Octane rendering, the RTX 4090 24GB is hands down the ideal option.
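The value math here can be sketched in a few lines. The 1.90x speedup and 1.07x price ratio come straight from the figures quoted above (both normalized to the RTX 3090); the helper function itself is just illustrative.

```python
def perf_per_dollar_gain(speedup: float, price_ratio: float) -> float:
    """Relative improvement in performance per dollar versus the baseline GPU."""
    return speedup / price_ratio

# Figures quoted above: RTX 4090 is ~90% faster than an RTX 3090
# at ~7% higher price (both normalized to the RTX 3090).
gain = perf_per_dollar_gain(speedup=1.90, price_ratio=1.07)
print(f"RTX 4090: ~{(gain - 1) * 100:.0f}% more performance per dollar")
# prints "RTX 4090: ~78% more performance per dollar"
```

In other words, even at its higher price, the RTX 4090 delivers roughly three-quarters more rendering work per dollar than the RTX 3090 in this benchmark.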

OctaneRender is well known for its multi-GPU support. While a single RTX 4090 is already really fast, how fast are multi-RTX 4090 systems in Octane? Let’s get to the heart of Octane rendering performance across multiple RTX 4090s.

How does Octane scale across multi-RTX 4090s?

Let’s look at this OctaneBench result from Otoy to get a good sense of how multiple RTX 4090s perform in Octane.

The graph below shows the OctaneBench score with 1, 2, 3, 4, 6, 7, 8, and 12 GeForce RTX 4090 24GB cards:

OctaneBench 2020.1 on Multi-RTX 4090s (Source: Otoy)

Or look at it another way, via the relative performance chart. It shows how adding RTX 4090s increases Octane rendering performance, expressed as a percentage (%) of the speed of a single RTX 4090:

Octane rendering performance across multi-RTX 4090s
      • Going from one card to two: an impressively perfect gain in rendering performance, a 101% increase. 
      • From two cards to three: a small drop-off, with a 73% increase in performance, but still a great improvement. 
      • From three cards to four: a 110% increase. A 4x RTX 4090 system is 4.1x faster than a single RTX 4090.
      • At 6x RTX 4090s: the gain is perfect again. Compared to a single RTX 4090, six RTX 4090s render 6.09x faster. What awesome scaling! 
      • From six cards to seven, and seven cards to eight: each step adds an 88% increase in performance. 
      • At the very peak, with 12x RTX 4090s, we see a full 12x increase in performance over a single GPU: perfect, or linear, scaling.
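As a rough sketch of how such a relative-performance chart is derived, here is the speedup/efficiency calculation applied to placeholder OctaneBench scores. The score values are hypothetical, chosen only so the ratios match the multipliers listed above (2.01x, 4.1x, 6.09x, 12x); they are not Otoy’s actual numbers.

```python
# Hypothetical OctaneBench scores, scaled so the ratios match the
# multipliers listed above (2.01x, 4.1x, 6.09x, 12x). Not Otoy's raw data.
scores = {1: 1000.0, 2: 2010.0, 4: 4100.0, 6: 6090.0, 12: 12000.0}

baseline = scores[1]
for gpus, score in scores.items():
    speedup = score / baseline       # relative to a single RTX 4090
    efficiency = speedup / gpus      # 100% means perfectly linear scaling
    print(f"{gpus:>2}x RTX 4090: {speedup:.2f}x speedup, "
          f"{efficiency:.0%} scaling efficiency")
```

Dividing each score by the single-card score gives the speedup, and dividing that by the card count gives the per-GPU efficiency that makes Octane’s near-linear scaling easy to see at a glance.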

To Sum Up

What makes Octane different from other renderers is that it scales almost perfectly with multiple GPUs. There are still small diminishing returns as more cards are added to a single system, but they barely dent its rendering performance. It’s hard to find another GPU renderer in the CG industry that scales as close to 100% (sometimes even above 100%) as OctaneRender does.

Octane keeps growing with more and more robust features. Artists want to create more ambitious projects with complex scenes and therefore need super-powerful hardware to meet their rendering demands. Sadly, even if you own a strong GPU such as an RTX 3090 or RTX 4090, some frames still take tens of minutes, even hours, to render. Not to mention, that is just a single frame in the sequence of frames that makes up a project. At that point, the power of a single GPU is not enough. A multi-GPU system, however, is sure to shine in these circumstances because Octane scales almost linearly across multiple GPUs.

Multi-RTX 3090 or Multi-RTX 4090?

iRender recently tested rendering with OctaneStudio for C4D 2022.1 on 2x RTX 3090 and 2x RTX 4090 systems. Let’s see how well these two rigs perform and whether it is worth upgrading to multiple RTX 4090s.

    • Software version: OctaneStudio for C4D 2022.1
    • Size: 1920x1080p
    • Frame rate: 30fps
    • Duration: 60 frames

Render time:

    • 2x RTX 3090: 1h54m29s
    • 2x RTX 4090: 53m06s

The 2x RTX 4090 renders this scene more than twice as fast as the 2x RTX 3090. Thanks to its newer architecture and core generation (even though its core count is not double that of the RTX 3090), the RTX 4090 is twice as powerful as the RTX 3090, or even more. As a result, it is worth upgrading to a multi-RTX 4090 system.
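A quick sanity check of the speedup implied by these render times. The two times are the ones reported above; the conversion helper is just illustrative.

```python
def to_seconds(h: int = 0, m: int = 0, s: int = 0) -> int:
    """Convert an h/m/s render time to total seconds."""
    return h * 3600 + m * 60 + s

time_2x3090 = to_seconds(h=1, m=54, s=29)  # 1h54m29s = 6869 s
time_2x4090 = to_seconds(m=53, s=6)        # 53m06s   = 3186 s

speedup = time_2x3090 / time_2x4090
print(f"2x RTX 4090 is {speedup:.2f}x faster on this scene")  # ~2.16x
```

A 2.16x ratio confirms the “more than twice as fast” claim for this particular 60-frame scene.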

The fact, however, is that a one- or two-GPU system is relatively simple to build, and there are plenty of options available across a wide price range. You can easily get yourself a system with one or two RTX 3090s or RTX 4090s. To improve rendering speed further, though, you need to upgrade the configuration: either double the number of GPUs or move to a GPU that is twice as fast. To double the speed of a single RTX 3090 system, you need 2x RTX 3090s or 1x RTX 4090; accordingly, doubling the speed of a 2x RTX 3090 system requires 2x RTX 4090s or 4x RTX 3090s. That is where it gets challenging. A three- or four-GPU system demands much more consideration for power, cooling, and physically fitting everything into a case, and systems with more than four GPUs require highly specialized and expensive kits. Luckily, iRender exists to blow away all these challenges by bringing the power of high-configuration machines to your desk. You do not need to worry about the motherboard, chassis size, cooling, or power supply. We take care of the hardware; the creativity is yours!

Speed up Octane rendering with iRender Cloud Rendering

iRender provides high-configuration single and multi-GPU servers, specifically 1/2/4/6/8x RTX 4090 & RTX 3090. Powered by two CPUs with some of the best single-core performance for Octane, the AMD Ryzen™ Threadripper™ PRO 3955WX @ 3.9 – 4.2GHz and AMD Ryzen™ Threadripper™ PRO 5975WX @ 3.6 – 4.5GHz, plus 256GB RAM and 2TB of NVMe SSD storage, all our servers can handle Octane projects of almost any level of complexity.

iRender gives all creative individuals and studios an affordable way to unleash their creativity with the beastly RTX 4090 from just 8.2 USD/hour. We are proud to be the only render farm where you can install any software and plugins of any version your project needs. You are given complete control over the server(s) and can use them like your local computers.

Let’s check out our test videos about Octane rendering performance on our multi-RTX 4090 servers.

We are offering a FREE TRIAL program now. Get it and try our RTX 4090 machines for your Octane projects.

For further information, please do not hesitate to reach us at hangtt@irender.vn or mobile: +84915875500

 

iRender – Thank you & Happy Rendering!

Reference sources: Otoy, Puget Systems

How Redshift takes advantage of multi-RTX 4090 Configurations

Published Wed, 24 May 2023 (https://irendering.net/how-redshift-takes-advantage-of-multi-rtx-4090-configurations/)

We have all heard of Redshift as a powerful, fully GPU-accelerated biased renderer. This means the graphics card (GPU) is the driving factor for its rendering performance. There is no such thing as an overpowered GPU for Redshift, and by setting up multiple GPUs in a single system, Redshift does render faster. But have you wondered how well rendering speed actually scales across multiple GPUs? Let us get to the point and dive deep into how Redshift takes advantage of multi-RTX 4090 configurations.

Why the NVIDIA GeForce RTX 4090 for Redshift?

Maxon has expanded Redshift’s high-performance rendering support from Nvidia (CUDA) to Apple (Metal) and, most recently, to AMD (HIP) (Redshift 3.5.15 brings the public beta of AMD GPU support). Among those, Nvidia (CUDA) still delivers the best rendering performance in Redshift. And among NVIDIA’s graphics cards, the GeForce RTX 4090 is the most powerful and suitable GPU for Redshift rendering. Why the RTX 4090? Let’s dig deeper into how Redshift utilizes the GPU to find out.

There are two aspects of a GPU that influence rendering capabilities in Redshift. They are video memory (VRAM) and the raw speed of the GPU itself.

The VRAM

Redshift stores scene data, such as geometry and textures, in VRAM when rendering. Accordingly, if the amount of VRAM is insufficient, you will not be able to render complicated scenes efficiently and quickly. With a generous 24GB of VRAM, the RTX 4090 can handle almost all Redshift projects, from simple to intermediate to complex and even very complicated scenes.

The raw speed of the GPU

GeForce GPUs tend to have good raw performance, and the RTX 4090 has the best raw performance among them. The RTX 4090 is powered by the ultra-efficient NVIDIA Ada Lovelace architecture. Thanks to its third-generation RT Cores, fourth-generation Tensor Cores, an eighth-generation NVIDIA Dual AV1 Encoder, and 24GB of Micron G6X memory capable of reaching 1TB/s of bandwidth, the RTX 4090 brings a massive boost in rendering performance for Redshift.

RTX 4090 beats other GeForce 40 Series GPUs (RTX 4080 & RTX 4070Ti) and GeForce 30 Series GPUs (RTX 3090Ti, RTX 3090, RTX 3080Ti, RTX 3080, etc) to be the top-performing GPU for Redshift.

Redshift 3.5.12 Benchmark Results (Source: Puget Systems)

Compared to the other two GPUs in the GeForce RTX 40 Series, the RTX 4080 and RTX 4070 Ti, the RTX 4090 wins in both the amount of VRAM and rendering performance: it is 22% faster than the RTX 4080 and 55% faster than the RTX 4070 Ti. The results are even more impressive against its predecessors, the GeForce RTX 30 Series. With the same 24GB of VRAM, the RTX 4090 renders 64% faster than the RTX 3090 Ti and 71% faster than the RTX 3090.

Note: We will not compare the GeForce RTX 4090 with Quadro GPUs. While Quadro cards come with larger amounts of VRAM, they cost more for the same level of raw performance as GeForce GPUs. 

All right, that’s enough about a single RTX 4090 in Redshift. Now, let’s get to the real point of this article: how well Redshift takes advantage of multi-RTX 4090 configurations.

How Redshift takes advantage of multi-RTX 4090 Configurations

Putting Redshift 3.5.09 (RTX enabled) to the test, we are looking at scaling from one to seven RTX 4090s in a single workstation. Let’s see how well the rendering speed scales across multi-RTX 4090s.

The graph below shows the raw Redshift benchmark render times (in seconds) with 1, 2, 3, 4, 5, 6, and 7 of the GeForce RTX 4090 24GB VRAM:

Redshift 3.5.09 Benchmark Results (Source: Puget Systems)

Or another way to look at it. The following graph shows how adding GPUs increases rendering performance – shown as a percentage (%) compared to the speed of a single card:

Redshift 3.5.09 rendering performance scales across multi-RTX 4090 configurations (Source: Puget Systems)

Going from one card to two, we see an impressive gain in rendering performance: an 86% increase. From two cards to three, and three cards to four, each step adds more than a 75% increase. A 4x RTX 4090 configuration performs about 3.4x faster than a single RTX 4090, still an impressive improvement. After that, however, the gains drop off. We see only about a 48% performance gain from four cards to five and a 42% gain from five to six. The gains fall sharply from six cards to seven: only about a 24% gain for that seventh RTX 4090.

At the very peak, with 7x RTX 4090 GPUs, we see a 4.56x increase in performance over a single GPU. That is not terrible by any means, but it shows that RTX 4090 performance in Redshift scales as additional cards are added, just not very well. It is not perfect, or linear, scaling: there are diminishing returns as more cards are added to a single system. 
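Since the Redshift benchmark reports render times rather than scores, speedup is computed as time(1 GPU) divided by time(N GPUs). The sketch below uses hypothetical times chosen only to reproduce the multipliers quoted above (~1.86x, ~3.4x, ~4.56x); they are not Puget Systems’ actual measurements.

```python
# Hypothetical render times (seconds), chosen to reproduce the quoted
# multipliers; the real Puget Systems times are not listed in the article.
render_times = {1: 100.0, 2: 53.8, 4: 29.4, 7: 21.9}

baseline = render_times[1]
for gpus, t in render_times.items():
    speedup = baseline / t           # Redshift reports a time, so invert it
    efficiency = speedup / gpus      # falls well below 100% past 4 GPUs
    print(f"{gpus}x RTX 4090: {speedup:.2f}x speedup, "
          f"{efficiency:.0%} per-GPU efficiency")
```

Framed this way, the diminishing returns are obvious: per-GPU efficiency sits above 85% at two cards but drops to roughly 65% by seven.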

Why doesn’t Redshift scale linearly across a multi-RTX 4090 configuration?

Redshift differs from many other rendering benchmarks in that it times how long it takes to render a single scene rather than returning how many samples per second the system can handle. The problem with this method is that as hardware becomes faster and faster, the test takes less and less time to complete. This can have a direct impact on the measurement (especially because Redshift only reports results in whole seconds rather than fractions), but it can also cause things like scene load time to account for a bigger portion of the result rather than isolating the rendering portion.

As a result, it appears that the current Redshift benchmark is rather misleading when it comes to really powerful configurations like this. The performance scaling was suitable for up to four RTX 4090 GPUs, but the benefit per GPU started diminishing after that. We still ended up with 4.5x the maximum performance of a single RTX 4090, but we feel that real Redshift users rendering huge scenes would notice a larger advantage in most circumstances than what this benchmark is currently capable of proving.

It is a fact that Redshift keeps evolving with more and more powerful features. Creative individuals and studios all want to make more ambitious projects with complex scenes and therefore need super-powerful hardware to meet their rendering demands. Sadly, even if you own a strong GPU such as an RTX 3090 or RTX 4090, some frames inevitably take tens of minutes to hours to render. Not to mention, that is just a single frame in the sequence of frames that makes up a project. At that point, the power of one GPU is not enough. A multi-GPU system, however, is sure to shine in these cases since Redshift scales quite impressively with multiple cards.

iRender will help you speed up Redshift rendering with machines packing the most RTX 4090s on the market. 

Boost Redshift rendering speed with iRender Cloud Rendering

iRender provides high-configuration single and multi-GPU servers, specifically 1/2/4/6/8x RTX 4090 & RTX 3090. Powered by two CPUs with some of the best single-core performance for Redshift, the AMD Ryzen™ Threadripper™ PRO 3955WX @ 3.9 – 4.2GHz and AMD Ryzen™ Threadripper™ PRO 5975WX @ 3.6 – 4.5GHz, plus 256GB RAM and 2TB of NVMe SSD storage, all our servers can handle Redshift projects of almost any level of complexity.

iRender gives all 3D artists an affordable way to unleash their creativity with the beastly RTX 4090 from just 8.2 USD/hour. We are proud to be the only render farm where you can install any software and plugins of any version your project needs. You are given complete control over the servers and can use them like your local computers.

Let’s check out our test videos about Redshift rendering performance on our multi-RTX 4090 servers.

We are offering a FREE TRIAL program now. Let’s get it and try our render beast for your Redshift.

For further information, please do not hesitate to reach us at hangtt@irender.vn or mobile: +84915875500

 

iRender – Thank you & Happy Rendering!

Reference source: Puget Systems, Maxon

Speed up Redshift rendering with 6x RTX 4090s

Published Thu, 18 May 2023 (https://irendering.net/speed-up-redshift-rendering-with-6x-rtx4090s/)

It has been 5 months since NVIDIA released the RTX 4090. Many of you have probably heard of, used, or even owned this powerful GPU. Undoubtedly, the RTX 4090 is the graphics champion for surprisingly fast rendering speeds, especially in GPU-based renderers like Redshift. A single GPU is already powerful, but have you wondered how robust a 6x RTX 4090 machine would be? And is it worth using this beast to speed up Redshift rendering? 

Let iRender answer you in today’s article.

1. RTX 4090 - a powerful NVIDIA GPU for Redshift rendering

Redshift has expanded its high-performance rendering support from Nvidia (CUDA) to Apple (Metal) and, most recently, to AMD (HIP). Nevertheless, AMD GPU support is still rudimentary, and a lot of work remains (and is being done) to better support a wide range of AMD GPUs. Among them all, Nvidia (CUDA) still offers the highest rendering performance in Redshift.

Powered by the new ultra-efficient NVIDIA Ada Lovelace architecture and 3rd-generation RTX technology, the RTX 4090 is beyond fast, giving 3D artists a quantum leap in rendering performance. It beats the other GeForce 40 Series GPUs (RTX 4080 & RTX 4070 Ti) and the GeForce 30 Series GPUs (RTX 3090 Ti, RTX 3090, RTX 3080, etc.) in CUDA cores, RT cores, and Tensor cores, and thus in rendering performance. As a result, the RTX 4090 is the rendering champion of the NVIDIA GeForce RTX lineup.

Let’s take a look at how much faster this GPU is than other GPUs in Redshift rendering.

Redshift Benchmark Results (Source: Puget Systems)

As we can see from the above chart, the NVIDIA GeForce RTX 40 Series GPUs are now the fastest GPUs for rendering in Redshift. RTX 4080 and RTX 4070Ti are quite fast, but not as fast as RTX 4090. RTX 4090 is 22% faster than RTX 4080, and 55% faster than RTX 4070Ti. 

When comparing RTX 4090 with its predecessors, the results are impressive. RTX 4090 is 64% faster than RTX 3090Ti, and 71% faster than RTX 3090, as shown in the chart below. 

Redshift Benchmark Results (Source: Puget Systems)

These results are not as big of an improvement as we have seen in other GPU rendering engines. However, this is most likely due to the benchmark’s limitations rather than the GPU’s potential. The built-in Redshift Benchmark echoes, to a lesser extent, what we have seen with other GPU rendering benchmarks: it renders a single frame and reports how long it takes to complete, and that time is now becoming so short that we may be hitting an artificial barrier. Nonetheless, a 70% improvement over the previous generation is remarkable.

The RTX 4090 also outperforms the far more expensive RTX 3090 Ti, a GPU that is less than a year old. Without a doubt, the RTX 4090 with its 24GB of VRAM is the ideal choice for those looking for the top-performing GPU for Redshift rendering.

2. iRender’s 6x RTX 4090 supercomputer to speed up Redshift rendering

Understanding that Redshift keeps evolving, that artists want to create projects with ever more complex scenes, and that they therefore need super-powerful hardware to meet their rendering needs, iRender has built many supercomputers for the fastest Redshift rendering speeds. One of the most powerful we provide for Redshift rendering is the 6x RTX 4090 machine. Its detailed configuration is as follows:

Let’s take a closer look at the configuration of this supercomputer for Redshift rendering.

2.1 The 6x RTX 4090s to further speed up Redshift rendering

Redshift is a fully GPU-based renderer. This means the graphics card (GPU) is the driving factor for its performance. The GPU handles all of the rendering, so there is no such thing as an overpowered GPU for Redshift. Since Redshift supports up to 8 GPUs per rendering session, and it scales very well with multiple GPUs, a 6x RTX 4090 system can significantly improve render times. 

Let’s dig deeper into how Redshift utilizes GPU. There are two things about a GPU that influence rendering capabilities: the raw speed of the GPU itself and the amount of its video memory (VRAM).

First, the VRAM: 

Redshift attempts to store scene data, such as geometry and textures, in VRAM when rendering the scene. Consequently, if you don’t have enough VRAM, you won’t be able to render complicated scenes as quickly and efficiently. You might be saved by an option called “out-of-core rendering”, which allows system memory (RAM) to be used when there is insufficient dedicated GPU memory (VRAM). However, this reduces speed, so it is best to have enough VRAM onboard if at all possible. Using iRender’s computer, you won’t have to worry about this aspect thanks to the huge 24GB of VRAM on each RTX 4090.

Next, the raw speed of the GPU: 

GeForce GPUs tend to have good raw performance with decent amounts of VRAM. As analyzed in the previous section, the RTX 4090 has the best raw performance among NVIDIA GeForce GPUs. We will not compare it to Quadro GPUs because, while Quadro comes with larger amounts of VRAM, it costs more for the same level of raw performance. 

Let’s see how fast a multi-RTX 4090 system can be for Redshift rendering.

Redshift 3.5.14 Benchmark Results (Source: CG Director)

A 2x RTX 4090 system is ~2.5 times faster than a single RTX 4090 in this benchmark, and a 7x RTX 4090 system is ~6.4 times faster. Based on these results, render times scale very well across multiple GPUs. The scaling is not perfect, because coordinating multiple GPUs rendering the same image costs some performance in management overhead. Still, the render time drops impressively on a multi-GPU system. 

Based on these results, it can be estimated that a 6x RTX 4090 system might be ~5.3x to 5.5x faster than a single RTX 4090. 

Check out the rendering performance of iRender’s 6x RTX 4090 beast in Redshift.

2.2 A strong AMD Ryzen Threadripper PRO 5975WX

In Redshift, the CPU normally takes a back seat to the GPU. Its primary purpose is to calculate the necessary data for the UI and previews and then send rendering data to the GPU(s). 

Redshift doesn’t make extensive use of CPU parallelization. That means the calculations made by the CPU will be processed on one or two cores rather than being split evenly across many cores. But that doesn’t mean you should just use any old bargain-bin CPU: Redshift benefits heavily from high single-core performance. Therefore, the foremost thing to consider when choosing a CPU for Redshift is its single-core performance.

Any CPU with a score of 900 or higher will be enough for most intermediate Redshift projects with no major bottlenecks. For more complex projects, you should aim for a score of at least 1300.

Checking the benchmarks of this AMD Threadripper PRO 5975WX, we notice that it has great single-core performance.

Cinebench R23 single-core performance scores (Source: CG Director)

Overall, our AMD Ryzen Threadripper PRO 5975WX has a high core count (32 cores, 64 threads) but still achieves relatively high single-core performance. It also offers plenty of PCIe lanes (128 PCIe® 4.0 lanes), making it one of the most powerful CPUs for a multi-GPU Redshift rig. 

2.3 RAM (System Memory): 256GB

The RAM needs for Redshift are somewhat more complicated than usual due to a feature called “out-of-core rendering.” As we all know, when it comes to rendering, VRAM is crucial for Redshift. Nevertheless, if you don’t have enough VRAM for Redshift to store all of the data a specific scene requires, Redshift can use its out-of-core rendering technology to shift that data to your system RAM instead. This comes with a performance cost, but it’s a good tradeoff compared to not being able to render the scene at all. While the precise amount of RAM required varies from project to project, a minimum of 32GB is generally recommended for Redshift (and GPU rendering in general).
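The VRAM-versus-RAM tradeoff described above can be sketched as a simple budget check. The scene sizes and the fixed overhead figure below are made-up illustrative values, not measurements from Redshift itself.

```python
VRAM_GB = 24.0   # per RTX 4090
RAM_GB = 256.0   # system memory on the server described above

def fits_in_vram(geometry_gb: float, textures_gb: float,
                 overhead_gb: float = 2.0) -> bool:
    """True if the scene's working set fits in a single GPU's memory.

    The 2GB overhead stands in for framebuffers and renderer state;
    it is a hypothetical figure for illustration.
    """
    return geometry_gb + textures_gb + overhead_gb <= VRAM_GB

# Two hypothetical scenes: one fits in VRAM, one spills out of core.
for geometry, textures in [(9.5, 11.0), (15.0, 10.0)]:
    if fits_in_vram(geometry, textures):
        print(f"{geometry + textures:.1f}GB scene fits in VRAM: full speed")
    else:
        print(f"{geometry + textures:.1f}GB scene spills out of core "
              f"into the {RAM_GB:.0f}GB system RAM (slower)")
```

The second scene is the out-of-core case: the render still completes by borrowing system RAM, just at reduced speed, which is why a large RAM pool matters even for a GPU renderer.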

But you don’t have to worry about this aspect when using our supercomputer. Our 6x RTX 4090 rig has a huge 256GB of RAM. You can have several different applications open and running alongside Redshift at the same time, like Cinema 4D, Maya, After Effects, etc.

2.4 Storage (Hard Drive): 2TB

The drive we built into the 6x RTX 4090 supercomputer is a solid-state drive (SSD). It is a good drive to host the OS and the Redshift installation itself, along with any other software you use. The high speed of an SSD lets the system boot, launch applications, and load files several times faster than any traditional hard drive. In particular, the NVMe type uses the latest connections like M.2 and offers the fastest transfer rates. The 2TB capacity suits almost all project scales, from small to big. 

3. Is it worth it to use this 6x RTX 4090 system for your Redshift?

The fact is that we all want to render faster. There are many methods, tricks, and tips out there to optimize your scene or upgrade your hardware to render faster. If you have tried all of them, or even bought the most powerful hardware you can, and the speed still hasn’t improved much, then try our supercomputer. We give you complete control over the machine via a remote desktop file, and you can speed up your rendering by making full use of the 6x RTX 4090 rig for your Redshift. 

We are offering a FREE TRIAL program now. Let’s get it and try our render beast for your Redshift. We will let you answer the question in the section title after the trial.

Besides the 6x RTX 4090s, there are still other powerful machines available such as the 4x RTX 4090s or 2x RTX 4090s to speed up Redshift rendering. Check out our test videos for more references. 

For further information, please do not hesitate to reach us at hangtt@irender.vn or mobile: +84915875500

 

iRender – Happy Rendering!

Reference source: Puget Systems, CG Director 
