On 24.06.2014 at 23:30, Jonathan K. F. Jensen wrote:
So the master isn't doing any J2K encoding?
If that's the case, I guess the best master would be one with multiple bundled
NICs and some fast disk storage, while the CPU speed wouldn't really matter.
Only on the "slaves" would CPU matter.
Did I get that scenario right?
Hmm, Carl would know for sure, but from my network benchmarks it seems that the master
still runs a full image/compression pipeline per configured thread, in addition to
supplying raw images to the remote nodes.
In the case of, e.g., Big Buck Bunny, the 'Master' would actually be bored when only
supplying a stream of unprocessed 853x480 pixel frames to the remote nodes.
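
As a rough sanity check - my numbers, assuming 24 fps and plain 8-bit RGB frames,
which is a simplification of whatever DCP-o-matic actually puts on the wire - the
raw feed the master has to push out really is modest:

# Back-of-envelope: network load on the master when it only ships
# raw frames to encode servers. 853x480, 8-bit RGB and 24 fps are
# my assumptions; the real wire format may differ.
width, height = 853, 480
bytes_per_pixel = 3            # 8-bit RGB assumption
fps = 24

frame_bytes = width * height * bytes_per_pixel
feed_rate = frame_bytes * fps  # bytes/s if the nodes keep up with realtime

print(f"raw frame: {frame_bytes / 1e6:.2f} MByte")
print(f"feed at {fps} fps: {feed_rate / 1e6:.1f} MByte/s")
# -> ~1.23 MByte/frame, ~29.5 MByte/s total - well within a single GbE link.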
As long as we think of a 'classic' DCP-o-matic workflow, that is, conversion of
pre-encoded video to J2K MXFs, I think mass storage performance is not so important,
because we come from a low-bandwidth compressed video format and go to a medium-bandwidth
compressed format.
Again, in the case of Big Buck Bunny, we have a video stream of 11 MByte (which fits into
the filesystem cache) at around 350 KByte/s, and end up with two MXFs at something like
25 MByte/s - but the encoding, and thus the file writing, on said CPUs takes place at
half realtime speed.
So, typically we come from something like 1 MByte/s and go to 15 MByte/s. That's well
within the limits of common single-drive hard disks.
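
Putting the Big Buck Bunny numbers from above together (a sketch; the half-realtime
factor is just my observed ballpark on those CPUs, not a fixed property):

# Classic workflow: compressed video in, J2K MXFs out.
source_rate = 0.35         # MByte/s read (the Big Buck Bunny stream)
dcp_rate = 25.0            # MByte/s the finished MXFs would play back at
encode_speed = 0.5         # encoding runs at ~half realtime here

effective_write = dcp_rate * encode_speed
print(f"read:  ~{source_rate:.2f} MByte/s")
print(f"write: ~{effective_write:.1f} MByte/s")
# -> ~12.5 MByte/s sustained writes - no challenge for a single hard disk.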
This, of course, changes dramatically when working from uncompressed single-image
sequences (as most other DCP encoding tools do), or from very high quality Full HD
sources with low compression ratios.
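
For comparison, a quick estimate for an uncompressed image sequence source
(1920x1080, 16-bit RGB TIFFs at 24 fps is my assumption; actual formats vary):

# Uncompressed single-image-sequence source: read bandwidth needed.
width, height = 1920, 1080
bytes_per_pixel = 6        # 16-bit RGB assumption
fps = 24

read_rate = width * height * bytes_per_pixel * fps
print(f"read at realtime: {read_rate / 1e6:.0f} MByte/s")
# -> ~299 MByte/s at realtime, still ~150 at half realtime -
#    now mass storage performance matters a lot.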
- Carsten