We also had some users report a freeze in certain scenes with Iray Interactive, which has been fixed in this release. If you are using the very latest Intel CPUs with AVX-512 support, a crash affecting those has also been fixed. Be sure to check the neurayrelnotes.pdf file that ships with RealityServer for full details.

Compositing System
For a long time customers have been asking us for a way to keep the full photorealistic quality produced by RealityServer while reducing the resources needed to deploy applications. Many customers already use compositing solutions to achieve this, rendering pieces of their image, re-colouring them and combining them back together. Typically this relies on alpha channels and so-called 'clown masks' to separate the elements. We believe we have a better way.
The capabilities needed have actually been in RealityServer for some time, through a feature of the Iray renderer called Light Path Expressions, or LPEs for short. In contrast to alpha-channel and mask-based methods, LPEs work by separating the contribution various light paths make to the final image, for example the contribution of a specular reflection from a specific object. This system is extremely powerful, but also quite complicated to use, so we decided to wrap it in a much simpler API that handles both the generation of all of the components needed for compositing and the runtime compositing itself.
To the right you can see an example of building up several LPE components into a final image and then tinting those components. In this example the direct and indirect contributions are separated and can be tinted individually. You might also notice that the compositing in the out-of-focus regions is perfect, something that is impossible with masking-based approaches.
LPE Components Being Added Together
The whole system is just three new commands: compositor_render_components, compositor_prepare_composite and compositor_composite_components. You simply specify which objects and types of light transport to separate (e.g., diffuse reflection) and the commands create and render all of the LPEs for you. The system also supports separating the contribution of individual lights or groups of lights. This allows you to selectively recolour parts of your scene, including the indirect contributions, in a consistent way.

When compositing the results together, each component is simply multiplied by its tinting colour and added to the next. There are no masks and no alpha channels, so there are none of the fringing or aliasing problems you see with traditional solutions, and unlike those solutions you can also recolour the indirect illumination. It's a real game changer, and while the underlying functionality has been available for a long time, no one was using it, so we decided to make it more accessible with a new API.
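The multiply-and-add arithmetic described above can be sketched in a few lines of JavaScript. This is an illustrative model only, not the actual compositor_composite_components implementation; the small buffers here stand in for full HDR render components.

```javascript
// Illustrative sketch of additive LPE compositing: each rendered
// component is an HDR RGB buffer, and the final image is the sum of
// each component multiplied by its tint colour. No masks or alpha
// channels are involved, so there is nothing to fringe or alias.
function compositeComponents(components) {
    const length = components[0].pixels.length;
    const result = new Float32Array(length); // full HDR output
    for (const { pixels, tint } of components) {
        for (let i = 0; i < length; i += 3) {
            result[i]     += pixels[i]     * tint[0];
            result[i + 1] += pixels[i + 1] * tint[1];
            result[i + 2] += pixels[i + 2] * tint[2];
        }
    }
    return result;
}

// One pixel, two components: the direct and indirect contributions
// from one object, each recoloured independently.
const image = compositeComponents([
    { pixels: new Float32Array([0.8, 0.8, 0.8]), tint: [1.0, 0.2, 0.2] },
    { pixels: new Float32Array([0.3, 0.3, 0.3]), tint: [0.5, 0.5, 1.0] },
]);
```

Note that because the indirect contribution is its own component, recolouring it stays consistent with the direct illumination, which is exactly what mask-based approaches cannot do.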
We wanted to take it a step further though. Many customers also asked about not just tinting with a colour, but modulating that tint with an image, so they could change textures as well. For example, say you have a personalisable product, such as a phone case, where the user provides their own image and you want to visualise it. Previously there was no way around live rendering that image. With our new compositing solution you can now re-texture the components as well.
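In the same sketched model, re-texturing replaces the flat tint with a per-pixel value taken from a user-supplied image. Again this only illustrates the idea, assuming the texture has already been resampled to the component's resolution; it is not the shipped implementation.

```javascript
// Per-pixel tint: modulate an HDR component by a texture buffer of
// the same resolution instead of a single flat colour, so a user's
// own artwork (e.g. a custom phone-case print) appears in the
// composite without re-rendering the scene.
function compositeTextured(component, texture) {
    const out = new Float32Array(component.length);
    for (let i = 0; i < component.length; i++) {
        out[i] = component[i] * texture[i]; // element-wise modulation
    }
    return out;
}

// Two pixels: the first keeps only its red channel, the second only
// its green, as dictated by the texture values.
const component = new Float32Array([0.5, 0.5, 0.5, 1.0, 1.0, 1.0]);
const texture   = new Float32Array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]);
const textured  = compositeTextured(component, texture);
```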
At runtime the compositing is all done on RealityServer and is GPU accelerated, yet unlike live rendering it needs nowhere near as many resources to service your users. Everything is stored in full HDR as well, so you can even change tone-mapping settings during compositing. This functionality is so significant that we are currently writing another blog post dedicated just to this feature. Watch for it in the coming weeks.
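Because the components stay in full HDR, tone mapping can be applied after compositing rather than being baked into the render. As a simple stand-in for RealityServer's actual tone-mapping operators, a basic Reinhard curve with gamma shows why this matters:

```javascript
// HDR values survive compositing untouched, so different tone-mapping
// settings can be applied to the same composited buffer without
// re-rendering. Reinhard + gamma is just an illustrative operator
// here, not the operator RealityServer uses.
function toneMap(hdr, exposure = 1.0, gamma = 2.2) {
    const ldr = new Float32Array(hdr.length);
    for (let i = 0; i < hdr.length; i++) {
        const v = hdr[i] * exposure;
        ldr[i] = Math.pow(v / (1.0 + v), 1.0 / gamma); // Reinhard, then gamma
    }
    return ldr;
}

const composited = new Float32Array([4.0, 1.0, 0.25]); // HDR, values > 1 allowed
const display  = toneMap(composited);      // mapped into [0, 1)
const brighter = toneMap(composited, 2.0); // new exposure, no re-render
```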
Best of all, if you need to customise our solution, the full source code is provided, since the entire system is implemented as V8 server-side JavaScript commands. We have put together a system which we feel will cover about 85% of the use cases we see; for the more specialised cases, you can modify the commands to suit your needs.
New Commands
Aside from the commands added for compositing, we found during development that there were quite a few other convenience commands we would like to have, so we added those too. Here is what's there:
- image_save_to_disk allows you to write the contents of an image out to disk. It includes an option to split the layers of a multi-layer image into separate files if needed. This is a V8 command, and to facilitate it a new save_to_disk method was added to the Image wrapper in V8 as well.
- image_reset_from_files takes an array of image filenames and resets the specified image to be a multi-layer image containing the data from all of the files. It can optionally resize the images and convert their pixel types to conform to one another.
- image_encode_canvas and texture_encode_canvas return a binary of the specified image or texture in the chosen format, which is useful when you need to retrieve images from the database on the client side.
- generate_complex_scene is a useful diagnostic command which creates a full, renderable scene containing a specified number of boxes. The boxes can all instance the same mesh or each have their own, with a single material shared by all of them or a random material for each. This is handy when you want to quickly create a non-trivial scene or test the effects of large numbers of instances and/or materials.
- element_get_graph can be used to retrieve the scene graph below any given element, or for an entire scene.
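RealityServer commands are invoked as JSON-RPC requests, so a call to one of the new commands, such as image_save_to_disk, might be shaped like the sketch below. The parameter names (image_name, filename, split_layers) are illustrative guesses only; check the command documentation shipped with the release for the real signatures.

```javascript
// Hypothetical JSON-RPC 2.0 request body for the new
// image_save_to_disk command. The params shown are assumptions for
// illustration, not the documented signature.
const request = {
    jsonrpc: "2.0",
    id: 1,
    method: "image_save_to_disk",
    params: {
        image_name: "my_render_result", // DB name of the image element
        filename: "output/result.png",  // destination on the server
        split_layers: false             // one file per layer if true
    }
};
const body = JSON.stringify(request); // POST this to your RealityServer endpoint
```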
All of these commands are built using the server-side V8 JavaScript API, so you have all of the source code for them with the release.
Let Us Know
We'd love to hear how you go with this new functionality, particularly the compositing features. Contact us if you have any issues getting up and running, or if you have feedback on the new features.