Right now it is not good for much, but still interesting - ffmpeg added J2K MXF support a while ago, and since DCP-o-matic uses FFmpeg for input, it is actually possible to feed a DCP-o-matic-generated J2K MXF (or one from any other unencrypted DCP) into DCP-o-matic as source content. Right now, the only option then is to recompress it into another MXF, which is hardly exciting.
But - wouldn't this be a starting point for 'unwrapping' and transcoding existing DCPs to other formats, again by using FFmpeg, this time for output?
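Just to illustrate what I mean: something along these lines already works with plain FFmpeg on the command line (file names are made up, and the XYZ-encoded picture will probably need extra color handling to look right):

ffmpeg -i j2c_picture.mxf -i pcm_sound.mxf -map 0:v -map 1:a \
       -c:v libx264 -crf 18 -c:a aac -b:a 192k preview.mp4

Having something like that available as an output option inside DCP-o-matic is what I'm thinking of.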
- Carsten
Suggestion: add an option to the Windows installer to choose between installing for the current user only or for all users. That would make it easier to use the encode server component on standard/non-admin user accounts.
- Carsten
I remember something about the video MXF file being hardlinked rather than
copied from the "video" directory to the final DCP directory, or moved
like the sound MXF.
But OS X keeps reporting the size of the full project directory as about
double the size of the DCP. I have also had errors many times when trying
to make the DCP directly on the USB key on which I intended to deliver
it - a key just slightly bigger than the expected size, not double.
If the files are actually copied, an option to specify a different
location for the resulting DCP would be great in many, many cases.
Thanks!
Manuel AC
Am 24.06.2014 um 23:30 schrieb Jonathan K. F. Jensen:
> So the master isn't doing any j2k encoding?
> If that's the case, I guess the best master would be one with multiple bundled NIC's and some fast disk storage, while the CPU speed wouldn't really matter.
> Only on the "slaves" would CPU matter.
> Did I get that scenario right?
>
Hmm, Carl would know for sure, but from my network benchmarks it seems that the master is still doing a full image/compression pipeline per configured thread in addition to supplying raw images to remote nodes.
In the case of e.g. Big Buck Bunny, the 'Master' would actually be bored if it only had to supply a bunch of unprocessed 853×480 pixel frames to the remote nodes.
As long as we think of a 'classic' DCP-o-matic workflow - that is, conversion of pre-encoded video to J2K MXFs - I think mass storage performance is not so important, because we come from a low-bandwidth compressed video format and go to a medium-bandwidth compressed format.
Again, in the case of Big Buck Bunny, we start from a video stream of 11 MByte in total (which fits into the filesystem cache) at around
350 KByte/s and end up with two MXFs at something like 25 MByte/s - but since the encoding, and thus the file writing, on said CPUs takes place at half realtime speed, the sustained rates are lower still.
So, typically we come from something like 1 MByte/s and go to 15 MByte/s. That's well within the limits of common single-drive hard discs.
This, of course, changes dramatically when working from uncompressed single-image sequences (as most other DCP encoding tools do), or from very high quality Full HD sources with low compression ratios.
- Carsten
Am 27.06.2014 um 09:58 schrieb Jonathan K. F. Jensen:
> Hi Carsten.
>
> Thanks for the explanation, I see your point.
> My source material is everything from a single tiff/jpg/png (for slides), to Prores 10bit, 4:4:4.
> I have been wondering if it could speed up the encode on the 'Master' by adding a SSD and be able to point DCP-OMatic to that as a 'Scratch' disk.
> Just a thought :)
SSDs will certainly add something to performance when going from that class of source content to J2K, but as every machine's J2K coding capability quickly maxes out, it won't be a massive boost, I guess. Some of the machines I gave Carl benchmark data for had SSDs, but it wasn't really visible in the results. But then again, they were all only tested on the 'tiny' Big Buck Bunny ;-)
Once a GPU-assisted OpenJPEG is available, mass storage performance might become more important.
Carl - did you ever think about implementing support for e.g. Kakadu?
I'm quite okay with the current coding speed and options for network rendering, but I'm not using it for large projects on a daily basis like others might do.
- Carsten
Hi all
DCP-o-matic 1.70.0 was released. This version includes:
Updates to the de_DE translation from Carsten Kurz.
A fairly big code rearrangement to improve speed of encoding of DCPs from sets of images.
Improvements to the KDM dialog which now takes CPLs rather than DCPs. It also allows you to specify any CPL to generate a KDM for.
Improvements to the timeline dialog when dragging content over other content.
Various fixes to uses of separate audio files with accompanying non-standard frame rate video; video rates are now derived from simultaneous video sources.
New "scale to fit width" and "scale to fit height" options to adjust scaling and cropping of a source to fit the DCP's container.
Use the ISDCF DCP naming convention version 9.
Some fixes to audio analyses when channel mappings are changed.
Fix to linkage of command line tools on OS X.
Fix to crash when the timeline window is opened when there is no content.
Fixes for crashes when using sources with more than 8 audio channels.
Fix for bug where video would not be re-made if subtitles were turned on.
Speculative fix for completely broken DCP XML files in some locales.
Possible fix for missing bits at the end of FFmpeg content with negative start times.
Option to allow any DCP frame rate, not just the ‘approved’ ones.
Audio gain can be specified in fractional dBs.
Work around an out-of-memory crash when using large start trims.
Fix incorrect labels for some audio channels in some locales.
Add slightly better and more configurable logging.
Thanks to Sumit Guha, Pradeep, Carsten Kurz, Matthias Damm, Bill Hamell
and Daniel Chauvet. Download it from http://dcpomatic.com/download
Best regards
Carl
Attached are two pictures with recent benchmarks using 1.69.xx - one for single-machine testing, one for the network. The single-machine picture lists only the peak-performance runs, as most machines were tested with different encode thread counts to find the best setting. It can be seen that 'overthreading' helps a lot on current multicore CPUs. Be aware, though, that this might crash the software if running under a WIN32 OS.
It can easily be seen that the price/performance sweet spot is with the 6-core 3930K/4930K machines. They can even be overclocked to 4.5GHz. This CPU costs only around 500€ or so. This is also reflected in the CPU Passmark list:
http://www.cpubenchmark.net/high_end_cpus.html
I was always skeptical about network encoding with regard to achieving very high conversion rates, because I thought a typical network (even gigabit) would saturate too quickly, already at around 10fps or so. I was wrong. I set up a test at a client's site with a couple of (mostly) older Macs, connected through a simple gigabit switch. Nothing really fancy. Two iMac i5s were the beefiest machines.
The network still outperforms the fastest (and most expensive) dual Xeon machines.
These tests also show that J2K coding on the Mac is at least as efficient as under Windows, although I still have to run the identical test on one Mac under OS X vs. Boot Camp at some point. The network test was done with all machines running OS X, adding machines one by one and running a Big Buck Bunny encoding for each set. And of course this was using Carl's standard Bunny benchmark metadata, so 2K - the network load may look VERY different when using 4K...
It's interesting to see that the 'Master', an outdated Xeon 3530, is able to supply so many frames to the clients and is still encoding many frames itself.
Only when the last iMac i5 was added to the network could I see the occasional drop in its CPU load, and even then only briefly and rarely. I guess I could have gotten near 20fps by adding another machine.
The number is missing from the table, but the aggregated single-machine fps for all machines used is around 19 - so the network coding loses just 2fps. Good work, Carl!
Also, not a single run exhibited problems. No crashes, no lost frames, etc.
I also did a SINTEL run on the same network, giving the same performance (17.3fps) and taking just 20min runtime.
Carl - what is now done on the render clients - only J2k, or also colour conversion, scaling, etc.?
Maybe someone else had a similar issue.
I did a short film, mostly B&W. In a quick projection test I saw some -
very few - images with posterized green colors, and it's not an effect
of the film.
Cinemaplayer renders it perfectly from the MXF, and I'm wondering if
it's a problem with the cinema (Doremi + Christie), but I don't have
easy access to it.
Is it worth trying another encoding, like going the long way with OpenDCP?
Oh yeah, the source is ProRes 4:3 interlaced SD with rectangular
pixels - funny stuff. The deinterlacing filter made the weirdest
flashing colors, so I deinterlaced beforehand, but scaling and
stretching are done in DCP-o-matic.
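For what it's worth, a pre-deinterlacing pass like that can be done
with FFmpeg along these lines (file names are placeholders, and the
exact yadif/ProRes settings would depend on the material):

ffmpeg -i source_prores.mov -vf yadif -c:v prores_ks -profile:v 3 \
       -c:a copy deinterlaced.mov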
Any ideas?
Thanks!
Manuel AC
Hi all,
A friend of mine found this page from a post house that has done some
testing with regard to the color shift of ProRes in QuickTime:
http://www.stopp.se/lab-testing-the-pro-in-apple-prores/
Also, there is the lutyuv filter in ffmpeg that could possibly correct
for this, e.g. (example arguments):
-filter:v lutyuv="y=gammaval(0.96),u=gammaval(1.04),v=gammaval(1)"
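A complete invocation might look like this (untested, input and output
file names are made up):

ffmpeg -i graded_prores.mov -filter:v lutyuv="y=gammaval(0.96),u=gammaval(1.04),v=gammaval(1)" \
       -c:v prores_ks -profile:v 3 -c:a copy corrected.mov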
--
-mattias
'New "scale to fit width" and "scale to fit height" options to adjust scaling and cropping of a source to fit the DCP's container.'
Hmm, why do I find these in the drop-down menu, instead of under 'Video' where the other scaling options are?
- Carsten