Mental Ray Debugging Question to Experts (Memory HDRI FG Black Buckets)
I am debugging a Mental Ray error that seems to be experienced by several users in this forum, and also on Autodesk's and CGSociety's forums.
I need support from some experts here, since I believe I have read and tested comments from at least 30 posts... with no success.
Basically, errors while doing large renderings: either black/lost buckets spread across the image, or a completely empty image (with only crosses marking the bucket zones). This happens on large renderings: 5000x5000 and up to 10000x10000.
No More Memory errors
Like many others, I have had several memory crashes calculating large images in the past, so I used Backburner's split-scanlines function until max9/Backburner 2007, which ended up not reliable enough for production. So I gave up on Backburner and switched to Deadline, which worked really well at cropping the render into tiles (2000x2000), each sent to a different render node: no more memory errors, a first good result :-)
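To make the tiling idea concrete, here is a small Python sketch (just the arithmetic, not Deadline's actual logic; `tile_regions` is a hypothetical helper) showing how a 6000x6000 frame splits into the 2000x2000 crop regions that each render node would receive:

```python
# Hypothetical sketch of splitting a large frame into crop regions,
# each of which a separate render node would process as a tile job.
def tile_regions(width, height, tile_w, tile_h):
    """Return (x0, y0, x1, y1) crop rectangles covering a width x height frame."""
    regions = []
    for y in range(0, height, tile_h):
        for x in range(0, width, tile_w):
            # clamp the last row/column so partial tiles stay inside the frame
            regions.append((x, y, min(x + tile_w, width), min(y + tile_h, height)))
    return regions

regions = tile_regions(6000, 6000, 2000, 2000)
print(len(regions))  # 9 tiles of 2000x2000
```

Each node renders only its rectangle, so no single process ever has to hold the full-resolution framebuffer.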
However, I still get black buckets as soon as I exceed 6000x6000 for the final image size (even though the renders are cropped into smaller tiles!)
Here is the set-up, for a 'sanity check':
Dual Xeon with 4 GB, rendering distributed to a farm of several Dual Xeons with 2 GB each. Plenty of disk space (RAID 0) and gigabit network (no latency).
Of course the /3GB switch and a clean machine install.
Scene in max9 32-bit, HDRI + Final Gather + log exposure (activated for background), mostly NURBS geometry, so around half a million polygons in the end. Not a big deal here.
Lighting with a couple of MR spots + a skylight set with the HDRI environment and FG. No photons.
Most materials use various Arch&Design and Car Paint shaders
All the usual MR settings checked: tried Large BSP/BSP with different sizes (even Grid), placeholders, memory limit 1024/512/2048, MR map manager, bitmap pager on/off.
Different FG presets tried: Low, Mid, High; interpolation radius standard or set to pixels.
Tried with FG save to disk on and off
Now here is the little debugging:
The problem seems to come from the FG calculation based on the HDRI image (I use a 6000x6000 HDRI studio lighting map): I can actually see it happen during the FG calculation, and it works without FG (though obviously the scene is totally unacceptable then).
Testing the difference with max9 64-bit under WinXP Pro x64:
Rendering works up to 8000x8000 with no more lost buckets, and the maximum memory used was 1.6 GB despite 4 GB installed on the machine. This suggests the problem is linked to calculation rather than memory (one of the main differences between 32- and 64-bit); alternatively, it could be linked to MR's internal/FG buffer memory management rather than the actual RAM installed on the system.
So here are a few questions I am still trying to answer:
- Bitmap pager: is the bitmap pager option (on/off) passed to render nodes in a network rendering set-up?
- Bitmap pager: which setting should be used on such a configuration?
- FG file save when doing network rendering of a single frame: since each node would use the same file for different images (each slice), how could that work??
- How to convert a .hdr file into an .exr? (The MR message window warns that .hdr is not recognized and will thus be used as raw.)
- Is there a way to force an FG-only calculation on a single 64-bit machine, as with MR standalone, and have the result reused and shared by the 32-bit render nodes?
Will keep you posted as soon as I find a decent solution; please post if you have any good findings.
Freelance 3D Designer & Producer
Autodesk Approved 3DSmax & Mental Ray Instructor www.cefaidesign.ch
The "black buckets on large renderings" is simply max's framebuffer management running out of RAM and not being able to accept buckets from mr. The only short-term solutions are "more RAM" and "64 bits". Other than that I can only say "stay tuned".
Apparently max - unlike mr - just silently fails its memory allocation with no error messages.
NOTE: This has nothing to do with FG, or any other such problems that may in the past have generated "black splotches". mr is actually rendering the image successfully, and is completely oblivious to max's inability to receive and display the buckets.
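The framebuffer explanation is easy to sanity-check with back-of-the-envelope arithmetic. Assuming a 32-bit float RGBA buffer (the exact channel layout is an assumption; max may keep additional G-buffer channels), a Python sketch:

```python
def framebuffer_mb(width, height, channels=4, bytes_per_channel=4):
    """Rough framebuffer size in MB, assuming 32-bit float RGBA.
    Extra channels (Z, object ID, ...) would only make this larger."""
    return width * height * channels * bytes_per_channel / 2**20

print(framebuffer_mb(5000, 5000))    # ~381 MB
print(framebuffer_mb(10000, 10000))  # ~1526 MB
```

At 10000x10000 a single such buffer already eats most of the 2-3 GB address space a 32-bit process gets, which fits the observation that the failures start only above a certain final image size.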
I suggest you render your FG map first at a lower resolution, hit the "read only" FG map mode, and then use some split-render MaxScript such as this one that I slapped together:
rollout SplitRender "Split Render Tool" width:250 height:100
(
	-- controls renamed to avoid shadowing built-in names (width, height, show, filename)
	radiobuttons splitCount "Pieces to split render in:" labels:#("1", "4", "9", "16") default:2
	spinner totalW "Total width:" type:#integer range:[0,32000,3000]
	spinner totalH "Total height:" type:#integer range:[0,32000,2400]
	spinner overlap "Pixel overlap:" type:#integer range:[0,100,0]
	checkbutton showVFB "Show image while rendering" checked:on
	edittext outName "File name:" text:"my_filename"
	edittext outExt "File type:" text:".jpg"
	button doRender "Do the render"

	on doRender pressed do
	(
		a = splitCount.state
		b = a * a
		-- actual render width and height of each tile
		w = totalW.value / a
		h = totalH.value / a
		bm = bitmap w h
		p = overlap.value - 1
		for i = 0 to b - 1 do
		(
			row = i / a        -- integer division
			col = i - row * a  -- i mod a
			render renderType:#blowup region:#((w/a)*col, (h/a)*row, (w/a)*(col+1)+p, (h/a)*(row+1)+p) outputwidth:w outputheight:h outputfile:(outName.text + row as string + col as string + outExt.text) vfb:showVFB.checked progressbar:(not showVFB.checked) to:bm
		)
	)
)
-- create the floater window and add the rollout
if SplitRenderFloater != undefined do closeRolloutFloater SplitRenderFloater
SplitRenderFloater = newRolloutFloater "Split Render Tool" 250 225
addRollout SplitRender SplitRenderFloater
I've seen a couple of scripts that do a similar job, but never one that deals with final gather. Rendering a lower-resolution FG map is a great idea !!
I hope someday Mental Images figures out a way to keep the FG sampling the same regardless of which crop or region it's rendering.
Yes, that's the correct way to go about rendering large images:
render the low-res FG map (e.g. 1000x1000), load the frozen map, then adjust the resolution settings (3000x3000 or whatever res you want) and render it out with a render-crop script.
What I mean is that Mental Ray changes its sampling based on what portion of the image it's rendering. For example, if I render a 2500x1500 image with final gather without loading an FG map,
and then render just the top-right corner without freezing the FG map, you will notice the FG solution does not match the image that was rendered all at once.
FG sends out different samples for each crop; it doesn't see the whole picture. It should reference it somehow.
This would be handy for rendering changes to an interior, where I might change a table and just need to re-render that region. I find sometimes it matches up okay; other times I have to feather the edge of the region to blend it into the image.
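For the feathering mentioned above, the usual trick is a linear ramp across the overlap band so the old and new pixels cross-fade instead of meeting at a hard seam. A minimal Python/NumPy sketch of blending one overlapping edge (the tile arrays and `blend_overlap` helper are hypothetical, not part of any max tool):

```python
import numpy as np

def blend_overlap(left_tile, right_tile, overlap):
    """Blend two horizontally adjacent tiles that share `overlap` columns.
    A linear ramp weights the left tile down and the right tile up
    across the shared band, hiding the seam."""
    ramp = np.linspace(1.0, 0.0, overlap)  # per-column weight for the left tile
    band = left_tile[:, -overlap:] * ramp + right_tile[:, :overlap] * (1.0 - ramp)
    return np.hstack([left_tile[:, :-overlap], band, right_tile[:, overlap:]])

# two 4x6 grayscale tiles with a 2-pixel overlap -> one 4x10 image
a = np.ones((4, 6))
b = np.zeros((4, 6))
out = blend_overlap(a, b, 2)
print(out.shape)  # (4, 10)
```

The same ramp idea extends to the vertical edges, which is why tile scripts render with a few pixels of overlap in the first place.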
Copy and paste the script MasterZap has created into a Notepad file. Save it with a .ms instead of a .txt file extension.
Drag it into a max viewport and it should work.
There are other ways to do it, but that works for me.