Any modeling process breakdowns for the Sauria - Blood for Blood film?

That’s what the…

…was for… in principle it could… but the standards may not allow this… but then this
white should be a 16x16 white square… even in JPG… :wink: (…it allows 100% quality with 4:4:4 subsampling and floating point DCT)…

But actually it’s another standard, JPEG-LS (see the Wikipedia article on PNG), and:

Lossy PNG compression:

Although PNG is a lossless format, PNG encoders can preprocess image data in a lossy fashion to improve PNG compression.

I’d argue that doesn’t change what PNG is: it is a lossless format, and hence it saves the data losslessly based on whatever it is being fed. It’s not the PNG format that is at fault if it’s being fed half-mangled image data to start with.
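To make that distinction concrete, here’s a minimal Pillow sketch (filenames hypothetical): the quantize step is where information is thrown away, while the PNG write itself stays bit-exact.

```python
from PIL import Image
import os

img = Image.open("texture.png").convert("RGB")  # hypothetical input file

# Lossy preprocessing: collapse the image to a 64-color palette.
# This is where data is lost, before PNG ever sees it.
quantized = img.quantize(colors=64, dither=Image.Dither.NONE)

# Lossless step: PNG stores the quantized pixels exactly as they are.
quantized.save("texture_quantized.png", optimize=True)

print(os.path.getsize("texture.png"), "->",
      os.path.getsize("texture_quantized.png"))
```

The second file usually comes out much smaller, but PNG itself never dropped a bit; the quantize call did.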

I see what you did there… :stuck_out_tongue_winking_eye:


So returning to the actual topic… :wink:

It may just be a simple, very pixely, low-color texture upscaled and then quantized to a few simple values… from the original simple image… almost like using it as a palette… like so… (of course not so clean now because of the JPG artifacts :grin:)

…but maybe also using different resolutions, because around the mouth these are denser… ?? (A sketch of the idea follows below.)
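Something like that pipeline could be sketched with Pillow (sizes, filenames, and color count are all made up here): upscale smoothly, then snap every pixel back to the original image’s palette to restore the hard borders.

```python
from PIL import Image

# Hypothetical tiny, low-color source image
src = Image.open("tiny_16x16.png").convert("RGB")

# Upscale with smooth interpolation (this blurs the hard color borders)
big = src.resize((1024, 1024), Image.Resampling.BICUBIC)

# Build a palette from the original's colors and snap every pixel
# back to the nearest original color -> sharp borders again
palette_img = src.quantize(colors=16, dither=Image.Dither.NONE)
clean = big.quantize(palette=palette_img, dither=Image.Dither.NONE)

clean.convert("RGB").save("upscaled_repalettized.png")
```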


It’s a bit above my knowledge: rendering in Eevee operates in “draw calls”. I’m not 100% sure what they are, but internally the render is divided between materials and objects? It’s a bad idea to exceed VRAM, but there is a point where Eevee can start to swap between draw calls, which makes the render extremely slow, but it will manage to work anyway. If you go beyond a limit, though, the render becomes uncontrollable and you’ll get artifacts or a crash.
At some point when I rendered some scenes, the objects were out of place on some frames; it turned out to be because I had filled the VRAM…

That’s true when saving the file to disk; however, to render that image all its pixel data needs to be stored in memory. Then it doesn’t matter if your texture is JPG, PNG, or TGA, or whether it’s a highly detailed photo or a plain flat color: the image is stored as an array of pixels in memory, and the size on disk has no influence. File size only matters for storage!
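A back-of-the-envelope sketch of that point; the byte counts assume a plain RGBA pixel array (whether the renderer actually keeps 8-bit data or converts to float is exactly the open question in the next post):

```python
def texture_memory_bytes(width, height, channels=4, bytes_per_channel=4):
    """Rough in-memory footprint of a decoded texture: once decoded,
    a pixel array costs width * height * channels * bytes_per_channel,
    no matter how small the file was on disk."""
    return width * height * channels * bytes_per_channel

# A 4K RGBA texture, decoded: same cost for a photo or a flat color.
print(texture_memory_bytes(4096, 4096, 4, 4) / 2**20, "MiB as 32-bit float")  # 256.0
print(texture_memory_bytes(4096, 4096, 4, 1) / 2**20, "MiB as 8-bit")         # 64.0
```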

Hmm, IIRC Cycles differentiates between 16-bit and 32-bit images and between 1-channel and RGBA, so you can save some memory; maybe it’s optimized for 8-bit as well. At least it’s easy to test by comparing the peak memory with different images.
Last time I checked, Eevee isn’t optimized for that and will always store the image internally as an RGBA array. TBH I don’t know if images are stored as 8-bit when the source is 8-bit, or if they are automatically converted to float even at that stage…

Hope that helps, feel free to ask if something isn’t clear!


I see. At least I’ll know if the limit is exceeded, thanks.

By quantizing, you mean something like `quantized = ceil(channelValue / step) * step`, where `step < 1.0`?

Yepp… some kind of that… to get this “clean” look again… sharp borders between the original colors… more like re-palettizing (← is that a word?) to get back to the original colors used…
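A minimal numpy sketch of that per-channel snapping, assuming 0..1 float channels (this is the `ceil` variant from the question; `round()` would center the bins instead):

```python
import numpy as np

def quantize_channels(img, step):
    """Snap every channel value up to the next multiple of `step`,
    i.e. ceil(value / step) * step, clamped to 1.0."""
    return np.clip(np.ceil(img / step) * step, 0.0, 1.0)

noisy = np.random.rand(64, 64, 3)   # stand-in for a JPG-artifacted texture
clean = quantize_channels(noisy, 0.25)
print(np.unique(clean))             # only multiples of 0.25 remain
```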

I see, interesting. I actually wondered how to use a single grayscale channel to mix multiple colors (like a paletted texture), and came to the conclusion that it’ll result in “rainbow” artifacts in places where two distant step values meet. Maybe quantization can take care of it.
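A tiny numpy sketch of that artifact (palette values made up): with a hard grayscale edge the lookup only ever hits the two intended colors, but once texture filtering interpolates the grayscale, the lookup walks through every palette entry in between.

```python
import numpy as np

# Hypothetical 4-color palette, indexed by a grayscale "ID" texture
palette = np.array([
    [0.8, 0.1, 0.1],   # id 0: red
    [0.1, 0.1, 0.8],   # id 1: blue
    [0.9, 0.9, 0.2],   # id 2: yellow
    [0.1, 0.7, 0.1],   # id 3: green
])
n = len(palette)

def ramp_constant(gray):
    """ColorRamp-style constant lookup: snap gray to one palette entry."""
    idx = np.minimum((np.asarray(gray) * n).astype(int), n - 1)
    return palette[idx]

# A border between id 0 (gray = 0.0) and id 3 (gray = 1.0)
hard_edge     = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
filtered_edge = np.linspace(0.0, 1.0, 6)   # what smooth filtering produces

print(ramp_constant(hard_edge))      # only red and green: fine
print(ramp_constant(filtered_edge))  # passes through blue and yellow: "rainbow"
```

If this reasoning holds, sampling the grayscale without interpolation (e.g. Closest on the image texture), or quantizing it before the lookup as suggested, matters as much as the ramp itself.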

Just thinking of it… a “simple” ColorRamp set to Constant might be enough… but someone has to position the colors manually… …but if you have multiple colors which map to the same grayscale value… no… this might be a bad idea… hmm… except if you start with a grayscale in the first place??? A median filter also does something like this…
:sweat_smile: … just some chaffering / maundering… (and learning some new vocabulary…)