<div dir="ltr">Hi,<div class="gmail_extra"><br><br><div class="gmail_quote">On 1 April 2014 05:54, Keith Packard <span dir="ltr"><<a href="mailto:keithp@keithp.com" target="_blank">keithp@keithp.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">Aaron Plattner <<a href="mailto:aplattner@nvidia.com">aplattner@nvidia.com</a>> writes:<br>
> Can't the max image size be multiple gigabytes? That seems a little<br>
> large to describe as "close" to running out of heap space.<br>
<br>
</div>The problem is that we're going to buffer that in memory anyway, down in<br>
the OS layer. The only alternative is to block down there waiting for<br>
the client to drain the image data.<br>
<br>
So, the chunking in DIX is not helping reduce memory usage, it's just<br>
making the OS layer shuffle data around a lot.<br></blockquote><div><br></div><div>You're correct in bandwidth/transfer terms, but not in terms of peak simultaneous usage.</div><div><br></div><div>OTOH, I don't think it's really worth worrying about too much. People do rather adventurous things like 4K on ARM32, which is fine, but anyone who then does a core GetImage on the whole thing really deserves whatever they get.</div>
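To put a rough number on the peak-usage point, here's a quick throwaway
model; it's not server code, and the chunk size, the stalled-client
assumption, and the "roughly twice the image" figure for the unchunked
path are all just illustrative:

/*
 * Toy model of the memory behaviour under discussion; not X server code.
 * Assumptions (mine, for illustration): a 3840x2160 image at 32bpp, a
 * 64 KiB DIX chunk size, and a completely stalled client, so every chunk
 * handed to the OS layer piles up in its per-client output buffer.
 */
#include <stdio.h>
#include <stddef.h>

#define IMAGE_BYTES ((size_t)3840 * 2160 * 4)  /* ~31.6 MiB: the "4K" case */
#define CHUNK_BYTES ((size_t)64 * 1024)        /* illustrative chunk size */

int main(void)
{
    size_t pending = 0;  /* bytes queued in the OS-layer output buffer */
    size_t sent = 0;     /* bytes the DIX loop has produced so far */
    size_t peak = 0;     /* largest amount live at any one time */

    while (sent < IMAGE_BYTES) {
        size_t remaining = IMAGE_BYTES - sent;
        size_t chunk = remaining < CHUNK_BYTES ? remaining : CHUNK_BYTES;

        /* One chunk lives in a DIX scratch buffer while the output buffer
         * holds everything the stalled client hasn't drained yet. */
        if (chunk + pending > peak)
            peak = chunk + pending;

        pending += chunk;  /* OS layer copies the chunk and returns */
        sent += chunk;
    }

    printf("image size:                     %zu bytes\n", IMAGE_BYTES);
    printf("peak, chunked, stalled client:  %zu bytes\n", peak);
    /* Unchunked, the whole reply sits in one DIX buffer plus (briefly) a
     * second copy in the output buffer, so roughly twice the image. */
    printf("peak, unchunked (roughly):      %zu bytes\n", 2 * IMAGE_BYTES);
    return 0;
}

So with chunking the worst case is about one copy of the image sitting in
the OS-layer output buffer (call it ~32 MiB for 4K at 32bpp), versus
roughly two copies without it. Noticeable on a small ARM box, but as
above, arguably self-inflicted.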
Cheers,
Daniel