
How does the whole process of printing stems through PT from Cubase work?

ADAT vs MADI is mostly about channel count and connector type.

For ADAT, one cable carries 8 channels, over TOSLINK optical cables.

For MADI, one cable carries 64 channels, so much higher density. It uses either a coaxial or an optical connector.

Audio interfaces partly come down to personal preference. For me, I'd be looking at RME MADI for 64 channels. MOTU interfaces with AVB can give you a high channel count as well, over Ethernet. Lots of options. It's worth taking some time to research the different digital audio formats to get familiar with them.
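The channel-count arithmetic above can be sketched in a few lines of Python. This is a rough model only: the per-cable capacities (ADAT 8 ch, MADI 64 ch at 48 kHz, halving as sample rate doubles, per S/MUX for ADAT) are the standard figures, but the `cables_needed` helper itself is hypothetical.

```python
import math

def cables_needed(channels: int, fmt: str, sample_rate: int = 48000) -> int:
    """Rough cable count for a given channel total.

    Assumed per-cable capacities at 48 kHz: ADAT = 8 ch, MADI = 64 ch.
    Capacity halves with each doubling of sample rate (S/MUX for ADAT,
    the MADI spec's channel reduction for MADI).
    """
    base = {"adat": 8, "madi": 64}[fmt.lower()]
    per_cable = base // (sample_rate // 48000)
    return math.ceil(channels / per_cable)

print(cables_needed(64, "adat"))          # 8 TOSLINK cables at 48 kHz
print(cables_needed(64, "madi"))          # 1 coax/optical run at 48 kHz
print(cables_needed(64, "adat", 96000))   # 16 cables: S/MUX halves ADAT to 4 ch
```

This is why MADI is the higher-density choice once you're past a couple of dozen channels.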
 
oh! would this be the RME Madi you're referring to?
 
It is one option, yes, but that one has been unavailable for some time.

Ultimately, if you go this route, you need the MADI connection for digital transfer between computers, and also a way to monitor via speakers / headphones. So you will need something with analog AD/DA: either an interface with analog and MADI in the same box, or a separate AD/DA converter with MADI connectivity.
 
Hello everyone,
I don't want to hijack this thread, but since @Tom_D mentioned MTC synchronisation between Cubase and PT, here's my question: how do you sync those two DAWs without a few seconds of locking time?
I know that PT syncs perfectly to Logic with the basic "send MTC" settings, but with Cubase it always takes some time before Pro Tools locks to Cubase.
Many thanks in advance for any hints, and have a great weekend! Karsten
 
I don't have an answer to this. I also have Logic X, and Logic does lock up very quickly to MTC coming from either Cubase or Pro Tools. Sync between PT and Cubase does take about 1-2 seconds to lock up for me. I use PT as the MTC leader and Cubase as the MTC follower.

However, I'll add that although Logic X syncs up very quickly, it sometimes seems like it is playing back while it is still trying to lock onto the MTC. What I mean is that I sometimes get some variable-speed playback artifacts for the first second or two. So ultimately, having a second or two between Cubase and PT may alleviate that type of issue, as it gives the follower time to lock up before playback starts.
 
Hey Tom, thank you very much for your quick answer!
Ah ok, that's interesting; I always use Cubase as the MTC master (which pairs perfectly with Non Lethal Applications' Video Sync).
But now I've heard of using PT as the MTC master, I will try that.
Thanks again for the idea!

Have a great weekend and best, Karsten
 
Let us know how this goes! It does sound logical to have PT as the MTC master, since most film composers use it as the video follower anyway? Keep us informed :)
 
I do not believe 23.98 fps is supported by the MIDI spec, and it is therefore not supported in MIDI timecode. If someone knows of an update, please correct me.
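For reference, the MIDI Time Code spec only defines four frame-rate codes, carried in two bits of the quarter-frame message, which is why 23.976 can't be signalled natively. A tiny sketch (the `mtc_rate` helper is hypothetical; the four rates are from the MIDI spec):

```python
# The MTC quarter-frame message encodes the frame rate in two bits,
# so only these four rates exist on the wire -- 23.976 is not one of them:
MTC_RATES = {
    0b00: "24 fps (film)",
    0b01: "25 fps (PAL)",
    0b10: "29.97 fps drop-frame",
    0b11: "30 fps non-drop",
}

def mtc_rate(type_bits: int) -> str:
    """Look up the frame rate named by the two type bits."""
    return MTC_RATES[type_bits & 0b11]

print(mtc_rate(0b00))  # what a 23.976 project ends up sending: plain 24 fps
```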

When working at 23.98, I believe the most bit-accurate way to get your stems into Pro Tools is to bounce internally in Cubase or Logic, then import into Pro Tools. The key words here are 'bit accurate' to your DAW.

Concerning Logic, I do not believe the program code that resolves (jam syncs) has been updated to support 23.98 from MTC. I believe Logic 'sees' it as incoming 24 fps and tries to sync to that - the same way it has since early versions of Logic.

There is a 0.1% difference between 23.98 and 24, so Logic must make adjustments, and this results in sample rate changes. They might be minor to you, but if you watch the sync panel, you can see the adjustments happening while in sync. I used to slave Logic to Pro Tools, but eventually stopped because the adjustments became too noticeable (to me).
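That 0.1% figure can be checked directly: "23.98" is really 24000/1001 fps, and its ratio to true 24 fps is 1001/1000. A quick sketch of the positional drift a follower accumulates if it treats 23.976 timecode as 24 fps:

```python
from fractions import Fraction

TRUE_RATE = Fraction(24000, 1001)   # "23.98" is really 24000/1001 fps
NOMINAL   = Fraction(24)

# How far apart the two clocks run, as a ratio:
pulldown = NOMINAL / TRUE_RATE      # = 1001/1000, the famous 0.1%
print(pulldown)                     # 1001/1000

# Positional error grows by ~1 ms per second of program:
for minutes in (1, 10, 60):
    err = minutes * 60 * (float(pulldown) - 1)
    print(f"{minutes:>2} min -> {err * 1000:.0f} ms of drift")
```

About 60 ms per minute, so a ten-minute reel ends up more than half a second off; this is the gap the DAW's continuous adjustments are papering over.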

Logic sending MTC to Pro Tools and having Pro Tools slave seems to work better. It could simply be down to Pro Tools resolving in a more graceful way - I don't know why. This is without an Avid Sync I/O and without an Emagic MIDI interface. Keep in mind both Logic and Pro Tools are set to 23.98, and Logic will be spitting out 24 fps MTC. It's a bit weird, but it works. Logic has one advantage here: it sends positional MTC when you frame forward/backward, even in one-frame increments, and this is something I find useful. I have not been able to get Cubase to do this one thing while it is the slave; I have to frame forward while stopped, in one-step increments, in Pro Tools.

Concerning Cubase, it could be better at resolving to 24 fps MTC in a 23.98 fps project, but again, I would tend to have Cubase send MTC to Pro Tools and have Pro Tools slave, for the same reasons as with Logic above. You can't get around the sample rate adjustments no matter how the DAW's jam sync is implemented, and my money is on Pro Tools doing it better.

All that said, I don't like the resulting sound in Pro Tools when I print stems this way, and I noticed some odd behavior in Pro Tools disk reads/writes when Pro Tools audio files were recorded while slaved. I can't elaborate further, but I decided the bit-accurate internal bounce method was more bulletproof, and I did not want the post mixers calling back with any potential technical issue with playback of the files.

An Avid Sync HD will support 23.98 LTC, but here again, I don't believe it's possible to generate 23.98 LTC or true 23.98 MTC by engaging play in either Cubase or Logic, as the MIDI spec does not support 23.98.

It could all come down to this: Pro Tools via a Sync HD MIGHT resolve in the most graceful manner of all, since it's hardware. But it still needs to resolve to 23.98 from incoming 24 fps MTC sent from your DAW, which is running at 23.98 with respect to the positional info of when/where to play your audio. Bouncing internally and doing a manual import/place in Pro Tools avoids all this jam-sync/resolve business, and the files play back in Pro Tools just like they did in your DAW (Cubase or Logic or whatever).

lol, final point - ignore all this if you are lucky enough to work at 24 fps or any frame rate fully supported by MTC. But even here, the slave DAW still needs to jam sync, and this is still not a 'bit perfect' transfer.
 
Would all this work the same way if I chose to host video in PT on a separate machine?
 

I get near-instant lockup time when PT is leader and Logic is following via MTC. But I am not using actual MIDI to transmit MTC - I have a SyncHD and Avid MADI on my HDn TB rig, and I send actual LTC from the XLR output on the SyncHD to the LTC input on the Unitor8mk2 on the Logic rig.

However, I assume that the Unitor8 reads the incoming LTC and converts it to MTC and then sends it to the Mac via the USB connection, because I still get the "24 FPS frame rate detected, keep it?" dialog when Logic is set to read 23.976. So that tells me that what Logic is seeing is actually MTC and not some raw, pure LTC feed. So probably no different than just sending MTC via a 5-pin MIDI cable, or even Network MIDI. (I have used Network MIDI to send MTC from Logic to VideoSync on a separate computer and this seems to work just as well as 5-pin, but I normally use 5-pin for the VideoSync machine.)

I do not use MMC anywhere in my chain - just simple LTC timecode on an XLR cable from PT to Logic, and then MTC from Logic to VideoSync. When I'm not printing mixes, I leave the Logic + MOTU audio interfaces' digital audio clock set to internal, and when it's time to print mixes I switch to syncing to the word clock coming from the PT rig. So maybe the fact that I'm sending word clock along with timecode from PT to Logic is why I never hear any "spool up" artifacts like you mention, and like we used to get in the analog tape days. I was always taught that if you're slaving any kind of DAW to incoming timecode, it should also be slaved to word clock alongside that timecode, as long as that word clock is properly resolved / aligned with the timecode at the source. With a SyncHD, this is the case.

There are a few ways to achieve this, but the simplest and most direct way is to have PT be the leader, and to have it send LTC and word clock from the SyncHD. Not sure if 5-pin / Network MTC output from PT is resolved to the word clock of whatever interface is in use on the PT rig if a SyncHD is not present, but I have my doubts. And with the possibility of mix-n-match PT setups, and the troubles we had in the analog tape + DAW days, I tend to favor a full hardware implementation of PT with SyncHD to generate timecode and word clock that I KNOW are resolved to each other.

(Ancient history chapter) = It gets a little more complex when a pair of analog 24-track machines are in the loop... since you need a machine synchronizer to lock the two tape machines together anyway, essentially you use that synchronizer to have BOTH machines following the synchronizer (which is being fed "house sync" from a video black-burst generator or other master clock source), instead of letting the master machine run free and using the synchronizer to have the slave machine follow. This eliminates the wow+flutter of a tape machine running free, since both machines are essentially slaved to a crystal-generated black-burst + timecode at that point. Then a PT rig is fed timecode and word clock from the synchronizer and all is well. Of course that PT rig must have a Sync, SyncHD, or in the old days a Video Slave Driver and / or SMPTE Slave Driver peripheral. (This was in the days of NuBus PT24 and PCI PThd TDM rigs). We had plenty of hair-pulling nights trying to slave a TDM rig to tape by simply feeding timecode from track 24 into the Unitor8. Wow and flutter and drift, oh my! As usual, it was the support team from Digidesign that had all the answers, and they were the only ones who could give us a definitive solution, which was what I described above.
 
Hey Charlie! Great to hear from you; this is fantastic wisdom. Is this compulsory prerequisite knowledge and workflow for working at your level, or would one get away with just exporting WAV stems from any DAW for the music editors + dubbing mix etc. to work from? I'd love to know the pros + cons of this workflow when delivering final mixes. :)
 
Having a separate ProTools "print rig" - sometimes referred to as a "layback recorder" or "mixdown deck" - is absolutely NOT a requirement, even up to the highest levels.

But it IS a convenience, although it does make things a little more complex and a lot more expensive. There is an in-between approach, since the PT rig doesn't even have to be a separate computer - check Neil Parfitt's videos about his 2019 Mac Pro, he's got RME cards (I think) for native CoreAudio I/O from Logic, feeding into ProTools HDX PCIe cards on the same machine! (He even runs VEPro on the same machine as well).

I've always had a separate PT rig, partly because back in the day, Logic didn't have "export stems" options, didn't have enough busses to do an internal record to empty audio tracks, and my CPU could barely play back the cues, let alone record the stems at the same time! Obviously none of these hurdles still exist, but I am accustomed to the workflow and there are advantages to having a separate PT computer (see below) so I just keep doing it that way.

But you can absolutely just do exports / bounces of your stems from within your primary DAW, and if they're BWAV format they might even be time-stamped, and those time stamps might even work when those files are imported into the music editor's PT rig on the dub stage. (That's supposed to work anyway, and always has for me, but I still don't trust it completely because I'm old, so get off my lawn.) It's 99% sure that your stems will be imported into ProTools for playback on the dub stage - in Hollywood, ProTools has 99.999% market share - but in Europe or on some indie projects, your stems might be imported into Nuendo, some Adobe app, or even right into the video editing program. But if your stems are cleanly named and organized, that shouldn't matter.
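On those BWAV timestamps: a Broadcast Wave file stores its start point in the bext chunk's TimeReference field, counted in samples since midnight at the file's sample rate. A simplified sketch of how a DAW derives that value from a session start timecode (the `timecode_to_samples` helper is hypothetical, and it assumes timecode seconds equal clock seconds, which holds at 24/25/30 fps; true 23.976 pull-down needs an extra 1000/1001 scaling that is deliberately left out here):

```python
def timecode_to_samples(tc: str, fps: int, sr: int = 48000) -> int:
    """HH:MM:SS:FF (non-drop) -> BWAV bext TimeReference value,
    i.e. samples since midnight at sample rate sr."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return round((hh * 3600 + mm * 60 + ss + ff / fps) * sr)

# A whole-second cue start, at 24 fps / 48 kHz:
print(timecode_to_samples("03:12:22:00", 24))  # 554016000 samples
```

If this value survives the export and the importing rig interprets it at the same rates, the file snaps to the right spot on the timeline.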

I do give the actual ProTools Session to the music editor, but that might be a problem and shouldn't be relied upon 100%. Why? They might be using a newer / older version than you are, meaning the Session file may not load up on their rig. Also, their I/O setup is undoubtedly going to be different from yours, so they'd have to remap / import I/O settings in PT. So I give them the PT Session file, but it's CLEAN CLEAN CLEAN - with NO edits, NO crossfades, NO automation, NO clip gain or fader level adjustments, NO NOTHING except for audio files in the timeline. That way, if they can't (or don't want to) use my PT Session file, it's no biggie - they can drag-n-drop my audio files into their Session template, with their I/O settings, alongside their temp and whatever source music they've already got in their Session, and hopefully the BWAV timestamps work and the files just snap to the correct start time. But... in case they don't:

Some safety procedures to ensure that your music goes to the right place on the timeline, in case BWAV timestamps break down:

• Include the timecode start point in the file names. Lots of people do it this way. I don't do this because the names would get really long and ugly, but mainly because I do this:

• Place all the files for each cue inside a sub-folder inside the main Audio Files folder in the PT Session folder. I name those sub-folders like this: "SAWX-3m22v2=03.12.22.00", and the individual files within would be named "SAWX-3m22v2-Hello Gabriella-Astem.L.wav". That suffix "Astem" indicates which stem the file belongs to - A through N at the moment - and I don't use descriptive names like "StringsHI" or "WindsLO" because the content on my stems varies wildly and is never a "normal" orchestral layout. But name your stems however you see fit. I use A through N so that the files will alphabetize nicely in folder views, and my composite mix file is named " MIX" with a *space* character as the first letter of the suffix so that the MIX files appear at the top of the list. The rest of the suffix is generated by ProTools to denote the channels within a stem - L / R / C / Ls / Rs / LFE etc. I also always start my prints / bounces at the nearest whole-second point before the actual first bit of audio (aka "first mod" in olden-days terminology). This ensures that there will be a little bit (less than one second) of "dead air" at the start of all files, which prevents any clicks or cut-off attacks that might happen if the bounce was started hard on Bar 1 or whatever. (I always leave eight bars of silence at the start of each cue in my primary DAW sessions so that I have that pre-roll buffer.) This also means that the music editor doesn't need to fiddle with frames - the cue starts at 11 seconds even instead of 11 seconds and 3 frames or whatever. Also, if they nudge a cue, they can easily tell how much it's been nudged - no frames to add or subtract that way.
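A hypothetical helper mirroring the naming scheme described above (the `cue_folder` / `stem_filename` functions and their arguments are illustrative only, built from the examples in the post):

```python
STEM_IDS = [chr(c) for c in range(ord("A"), ord("N") + 1)]  # stems A..N

def cue_folder(show: str, cue: str, start_tc: str) -> str:
    # e.g. "SAWX-3m22v2=03.12.22.00" -- start timecode with dots
    return f"{show}-{cue}={start_tc.replace(':', '.')}"

def stem_filename(show: str, cue: str, title: str,
                  stem: str, channel: str) -> str:
    # " MIX" gets a leading space so the composite mix sorts to the top
    suffix = " MIX" if stem == "MIX" else f"{stem}stem"
    return f"{show}-{cue}-{title}-{suffix}.{channel}.wav"

print(cue_folder("SAWX", "3m22v2", "03:12:22:00"))
print(stem_filename("SAWX", "3m22v2", "Hello Gabriella", "A", "L"))

# Start each print at the whole second before the first audio:
first_audio = 11.125        # e.g. first note at 00:00:11:03 (24 fps)
print(int(first_audio))     # print/bounce starts at 11 s even
```

Because " MIX" sorts before "Astem" through "Nstem", a plain alphabetical folder view shows the composite mix first, then the stems in order.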

Anyway, if you don't have PT hardware or even the software, it's not a deal breaker. Your nicely named and time-stamped files will be fine.

But there are some pros and cons of having a separate print rig:

Cons:

Cost. A second computer, drives, display, PT hardware of whatever flavor, a PT license... it adds up fast. And it might only be used as a glorified mixdown deck, so it might sit idle 90% of the time until it's time to actually print mixes (mine does, anyway).

Complexity. It's another computer to fiddle with, cabling, display, updates, iLok..... but my PT rig is still on PT v10 and MacOS El Capitan, nothing has been changed or updated in years. It sits there, powered off, until I need to print mixes and then it boots up in seconds and I'm on my way. But I don't use ANY plugins on the PT rig, except for what comes with ProTools. So that makes things a little simpler.

Pros:

• "Whole Project Overview" - When you print mixes to a separate PT rig, you basically have a whole TV episode (or reel of a film) in a single long session, so you can see all of the cues before and after the one that's currently loaded up in your primary DAW. This is great if you need to preview music for directors / producers sitting in your studio, since you can roll a whole reel in one shot instead of stopping to load up each cue in your primary DAW. Similar advantages if you need to make QuickTime movies for them to preview at their leisure. (However, you can just create a "whole reel preview" project in your primary DAW and import the mixes or stems into that to do previews from.) And these advantages exist whether ProTools is on the same computer or on a separate one.

• "Overlap Previews" - I often (always?) have cues that overlap, and I need to hear the end of the previous cue playing back from ProTools while the next cue is playing live from Logic, so I can finesse that transition by editing stuff in the incoming cue. This is really difficult to do without having a separate ProTools timeline, but it is possible on a single computer setup - but it's a bit easier when you have two displays, two keyboard+mice, etc. Kind of a hassle switching apps on a single computer.

• "Roughs and Demos Stash Point" - My ProTools rig serves as a place to print rough mixes, ideas, demos, etc. and then they can live there, on muted tracks, for quick playback and comparison for directors, etc.

• "Outside Recording Medium" - If I need to record something anywhere other than in my room, I always use ProTools. (The only exception would be if I physically bring a laptop to just record a solo cello or whatever in someone's apartment, but that's super rare for me.) The reason I use ProTools is that it's virtually impossible to bring a Logic Project to another studio and have everything come up sounding right when you're in a hurry. Third-party plugins, custom sample libraries, my custom key commands... there's always something missing. But with ProTools, I can walk into just about any studio in the world with my Session on a thumb drive or T7 drive and be up and running in minutes. So I'll print a click track and some reference stems to overdub against into ProTools and bring that with me to the outside studio. If I'm doing a remote record session I can just DropBox that Record Session to them. So I'll record and make a mess in that Session, then when I get home I can make some clean-up edits and do some rudimentary bussing + mixing in PT before doing a real-time record BACK from PT over to Logic. That way what I have in Logic is nice and simple and compiled into a "pre-mix", but if I need to go back to alternate takes or whatever I can load up the PT Session from the recording date. Really helps me to not create chaos in my main Logic Projects.

• "Real Time Quality Control" - I actually prefer to print my stems / mixes in real time, so I can listen verrrryy carefully for any clicks, pops, bad fades, stuck notes, or anything else that can go wrong. (It never does though!). This is my last chance to QC the music before it goes out the door, so I prefer not to do offline bounces or whatever. I close my eyes and focus on the mix and I often catch all sorts of stuff I want to tweak, so I cancel the recording, fix the stuff, and do the print run again.

TL;DR = No, it's absolutely not necessary to have a whole, separate ProTools rig to print your stems+mixes into - but it has some definite advantages.
 
I do give the actual ProTools Session to the music editor,
This might be a dumb question, but when folks are submitting Pro Tools sessions to the dub stage (or whoever it goes to directly - the music editor?), do you also load the video into the Pro Tools session with all audio lined up? Or is it typical to just submit a PT session of stems + full mix with everything at the right timecode position and leave the video out of it, to save time uploading files?

On one hand, leaving the video out means way less upload time and less possibility for intercepted downloads of the video (safer overall). On the other hand, is it expected to have the video in the PT session to quickly reference the music against picture? Or is the PT session simply there for importing session data into a larger session where the video already exists on the timeline?

Edit: Additional question - do you bother with importing / creating tempo maps in your delivered PT sessions, or do you just leave the PT session at a static tempo (e.g. 120 bpm) and include an audio click alongside the stems for music edits?
 
I never include video in my PT Session uploads for a few reasons:

• As you mentioned, the file sizes would get pretty huge.

• The music editor already has the video... or they're supposed to anyway.

• Most importantly, the video is a VERY protected and closely guarded asset. It's a minor disaster if the score leaks due to some hole in the wall of data security, like the wrong permissions on a shared DropBox folder or whatever, but if the video should leak then that's potentially a career-ending disaster of lawsuit-level proportions. Imagine if you were the reason that Dune4_4k_FULL.mov showed up on the high seas torrents? *shudder*. This is why most of the productions I work on use dedicated, secure video streaming solutions like PIX or DAX to control access to all video files. These services have complete access and permissions control and provide a detailed log of who accessed which files and when, and can permit or prevent downloading of the files. They also provide capabilities for commenting, version control, viewing video on tablet or phone, etc. We composers don't have to buy or operate these software solutions, we are just clients who are given permission to view and / or download by the post supervisor and their crew.

Anyway, the video files I receive are always watermarked with my name in big letters on the screen, so there will be no denying where a leak came from if it should happen. Maybe this isn't a huge issue on some projects, but still... best to keep a tight grip on the video files and never upload them anywhere, including your cloud backups of your work-in-progress files. You never know if some bored and disgruntled IT tech at DropBox HQ can see inside your backup folders....

So that's why I do a few things to ensure that a leak ain't gonna come from me:

• I play my video from a separate Mac Mini running VideoSync software (formerly known as VideoSlave). The video files are ONLY on that computer, which has its WiFi turned off, and is only connected to the internet via a Cat5 cable for that ten minutes I'm actually downloading the video file, and then I unplug the cable. The only connections to the outside world that the VideoSync machine has are: HDMI out to the tv on the wall, analog headphone out to aux audio inputs on my Logic rig, USB from keyboard+mouse, USB 5-pin MIDI interface. Good luck penetrating my VideoSync machine via MIDI cables! So... AIR GAP that sucker.

• I don't include the video files in any backups - cloud or otherwise - until the project has been released, and even then I'm a little sketched out about it, so I never include the video files in cloud backups, only on the spinning hard drives I use for long-term archiving once a project is finished.

• While I'm working on the project, I keep any safety copies of the video on dedicated Samsung T5 / T7 / T9 USB SSD drives that only get plugged into the VideoSync machine for a few minutes to make the copies, then they get unplugged and stashed. If the VideoSync machine should get bricked in some disaster, it's certainly possible to get the production to send me a new copy.

I discussed my video-security paranoia in a post a few years back, and some people might have thought I was a little nuts but HZ himself chimed in to confirm that total lockdown of video is best practice. Imagine the pressure on him and his crew to keep tight control of the Dune4_4k_FULL.mov file! Even letting an unauthorized person SEE the video, let alone download it, might be a fireable (and sue-able) offense on some projects! (Spoiler leaks and all that...). Part of the 20-page Composer Agreement that you quickly scroll to the end of and sign details your liability and responsibilities, and when you sign that sucker you're essentially saying, "Yeeeessss, I KNOW I'm on the hook if the video leaks, yada yada yada where do I sign where's my money...."

Now, as to tempo maps:

• No, I don't use tempo maps in my PT stem Sessions - they're always at 120bpm which makes the tempo grid the same as the min/sec grid.
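(For the curious, the arithmetic behind that 120 bpm trick: a quarter note at 120 bpm lasts exactly 0.5 s, so a 4/4 bar is 2.0 s and bar lines land on round real-time values. A tiny Python sketch — the helper name is hypothetical, just to show the math:)

```python
def bar_beat_to_seconds(bar, beat, bpm=120.0, beats_per_bar=4):
    """Convert a 1-based bar/beat position to elapsed seconds."""
    beat_len = 60.0 / bpm                              # 0.5 s per beat at 120 bpm
    total_beats = (bar - 1) * beats_per_bar + (beat - 1)
    return total_beats * beat_len

print(bar_beat_to_seconds(1, 1))   # 0.0  -> bar 1 starts at 0:00
print(bar_beat_to_seconds(31, 1))  # 60.0 -> bar 31 starts exactly at 1:00
```

At any other tempo (say 117 bpm) the bar lengths come out fractional and the bar grid drifts away from the min/sec grid, which is exactly what you don't want in a stem Session.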

• A whole-reel (or whole-episode) PT Session would have an absolute nightmare of a tempo map, and I'd have to import and somehow join multiple tempo maps from each individual cue. (Not even sure if this is possible in PT). When moving or nudging regions or making audio edits you'd have to always move the tempo map with the audio, and one wrong click or un-time-locked region to the right of the edit point would cause an entire disaster. So.... no. Never.

The only time I export a tempo map from my primary DAW to the PT rig is when preparing a Session for an out-of-house recording session. Then I do export a MIDI file of the tempo map from Logic and import that into PT, then I check the internal metronome click from PT against the one from my DAW, then I bounce an audio file of the DAW click plus all of the reference mixes / stems and import them into PT. After a final check to hear that magical comb-filtering sound when listening to both PT and the DAW playing in sync, I know that the PT Session is good to go out the door to whatever studio is doing the live recordings. (But these Sessions are not "whole-reel" or "whole-episode" Sessions - they're just "single-cue" Sessions, so it's easy to export+import the MIDI File of the tempo map without trying to join multiple cues together somehow.)
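(Side note on what actually travels inside that MIDI file: the tempo map is carried as Set Tempo meta events, each storing microseconds per quarter note. A minimal Python sketch of the encoding, per the Standard MIDI File spec — the function is purely illustrative, not anything Logic or PT exposes:)

```python
def tempo_meta_event(bpm):
    """Encode an SMF Set Tempo meta event (FF 51 03) for one tempo."""
    usec_per_quarter = round(60_000_000 / bpm)  # e.g. 120 bpm -> 500000
    return bytes([0xFF, 0x51, 0x03]) + usec_per_quarter.to_bytes(3, "big")

# 120 bpm -> 500000 usec/quarter -> payload 0x07A120
assert tempo_meta_event(120).hex() == "ff510307a120"
```

A real tempo map is just a sequence of these events at the appropriate tick positions, which is why a single-cue map exports and imports cleanly while stitching several of them together is a headache.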

But - the video is NEVER included in those Sessions. Best practice is for the video files that have your name in the watermark to ONLY be used by you, in your studio. If you have junior / assistant composers or other collaborators, or need to work at outside studios, the production should prepare video files that are watermarked with THEIR names. Now, at many levels this may not be possible or practical, but this is the theoretical ideal.

In those situations I'd tell the post supervisor, "Hey I need copies of the video for my co-composer and music editor, and we are recording the orchestra at Studio X on this date, and they need to have picture playback on the scoring stage. Can you output watermarked copies and send the video to them and get them to sign an NDA in blood or whatever..." That way, everybody who has access to the video ONLY has access to a copy that's watermarked with their name, so if a leak occurs it's on them, not you.

Again, this is the theoretical ideal best practice, so depending on the scale of the production it may not be possible or practical to output a dozen individually watermarked copies and distribute them accordingly - but if the project is big enough to need that many music collaborators then it's probably big enough to warrant handcuffing an assistant editor to the edit bay for a night to output and distribute all those copies.

One anecdote that illustrates the "danger" of leaks: For one movie that I scored I was asked by the production to do a video showing one of the cues in-progress, to demonstrate the crazy musical sound design and build the hype, which would be released to the public before the movie came out in theaters. Everybody was on board, the production hired a film crew to come to my studio, and the movie itself was not even playing during the three-camera shoot. BUT. Fans saw the video and noticed that one camera angle showed my DAW screen, so they paused it and zoomed in to read the names of the freakin' Markers in my Logic timeline!!! They saw a Marker named "JOHN DIES" or something, and started speculating at 100mph on the online fan forums. It took less than 12 hours from the video being released until I got a panicked call from the producers about "spoiler leaks" and we had to have the studio delete the video and make a new one with the Marker timeline blurred out, but it was kind of already too late as the screen grabs were out there. Not a huge career-ending issue, but a lesson learned. Now I always sanitize my Marker names to eliminate potential spoilers!
Man... if you ever wrote a book entitled Listen Up Goofballs, Here's What you Really Need to Know About Film Scoring and the content was similar to your last few replies, it'd be a NY Times Best Seller.
 
Man... if you ever wrote a book entitled Listen Up Goofballs, Here's What you Really Need to Know About Film Scoring and the content was similar to your last few replies, it'd be a NY Times Best Seller.
Hahah thanks, but that book already exists... it's my post history on this forum! Plus, there's probably less than two dozen potential buyers of such a book, and they're already on here!
 
This is the holy grail! Ty! And yes, I do agree with the lockdown of the videos! Great housekeeping rules for not ending your career. Great to know that you use VideoSync, I've always thought you hosted your video on your PT rig. I'm going to try VideoSync now that you've sold me on it and this type of workflow!
 
Oh, I just thought of another question . . . what's your process for presenting your music remotely when clients need to view it with the video? Do they often want your music synced to picture? Do clients at your level just want a wav file so their editors can pop it in with the video internally? Or even a PT session with all this? Since we focused so much on the final deliveries, it'd be nice to know what you do in the writing stages!

We really appreciate your time and wisdom Charlie, a privilege to pick your brain!
 
This is the holy grail! Ty! And yes, I do agree with the lockdown of the videos! Great housekeeping rules for not ending your career. Great to know that you use VideoSync, I've always thought you hosted your video on your PT rig. I'm going to try VideoSync now that you've sold me on it and this type of workflow!
VideoSync is my jam. And I mean that literally - I helped Florian develop it. Well, I didn't do any actual coding per se, but I did advise him along the way and bought him a Mac Mini to use as a testing platform right at the beginning.

It all started with a post on GearSlutz (I think, although it might have been on here...) where he made a thread saying he was a Comp Sci major with an interest in music and he was looking for ideas for a coding project that was NOT a plugin or a DAW. I spun a tale of an ancient program called VirtualVTR which was a timecode-slaved video player that I used to use, but which had fallen behind the latest MacOS updates and video codecs. He responded that he knew all about VirtualVTR, since his side job was working at a big post facility in Germany - specifically he was tasked with keeping a row of aging G5 Macs equipped with Kona video capture / output cards in working order since they used VirtualVTR to drive all the projectors on the dub stages he worked at! So he already knew exactly what VirtualVTR was and why it was so cool.

Of course the development process wasn't as simple as just downloading Apple's QT dev kit and building a UI.... right about that time Apple deprecated the aging QT format and switched to something called Media Toolkit (I think). Due to the wide variety of CPU strengths, and the appearance of the h.264 codec, it was no longer practical to build an app that worked as VirtualVTR had. VirtualVTR used DVC format video, which was essentially a shedload of JPEG files, one per video frame. This resulted in massive file sizes (around 10 gig for 42 minutes) but allowed an app to directly call up a specific frame in response to incoming timecode.
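(A sketch of why that frame-per-JPEG layout allowed direct frame access — every frame is a fixed-size, self-contained block, so a frame index maps straight to a byte offset with no read-ahead. The 120,000-byte figure is the standard NTSC DV frame size; the container framing is simplified away here:)

```python
FRAME_BYTES = 120_000   # one NTSC DV frame (fixed size, self-contained)
FPS = 30                # nominal; real NTSC runs at 29.97

def byte_offset(frame_index):
    """Direct byte offset of a frame: one seek, no decoding of neighbors."""
    return frame_index * FRAME_BYTES

# File size for 42 minutes -- roughly the "10 gig" figure mentioned above.
minutes = 42
size_gb = minutes * 60 * FPS * FRAME_BYTES / 1e9
print(round(size_gb, 1))  # ~9.1 GB
```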

But due to the nature of h.264 and other codecs, which use a predictive algorithm that only stores the changes between one frame and the next (over-simplification but it's basically that), it was no longer possible to just call up a given frame from disc - the frames before and after it need to be called and fed into the codec as well (sort of). Plus, differences in CPU power might mean that weaker machines would need more time to process video through the codec, so they'd need to read ahead by a greater amount than stronger, faster machines would.
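(To illustrate that read-ahead problem: in an inter-frame codec only the keyframes decode independently, so displaying an arbitrary frame means decoding everything since the previous keyframe. A toy Python sketch with a hypothetical GOP size — real encoders vary this, and B-frames complicate it further:)

```python
GOP_SIZE = 48  # hypothetical keyframe interval (2 s at 24 fps)

def frames_to_decode(target_frame, gop_size=GOP_SIZE):
    """How many frames must be decoded before target_frame can be shown."""
    last_keyframe = (target_frame // gop_size) * gop_size
    return target_frame - last_keyframe + 1

print(frames_to_decode(0))   # 1  -> a keyframe decodes on its own
print(frames_to_decode(47))  # 48 -> worst case: decode the whole GOP
```

A slower CPU takes longer to chew through that worst case, which is why the read-ahead has to scale with machine speed rather than being a fixed number of frames.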

This gave Florian a big headache. Apple's Media Toolkit framework didn't explicitly provide tools for this.... so he had to code the whole thing from scratch basically. In the end, he figured out a way to do it, and VideoSlave (as it used to be called) was born. Composers loved it, but post-production folks loved it even more - especially ADR people - and although it wasn't cheap, post folks seem to care less about price because the rest of their gear is so freaking expensive, compared to composers who are often a little more price conscious. So Florian started to add features tailored to ADR scenarios, and VideoSync (the new, non-problematic name) grew to become a must-have tool in that world. Although the perpetual license costs $349 for the Standard version and $549 for the Pro version, he offers monthly subscriptions that can be stopped and started at will to reduce the sting, and most of us composers might be fine with the Standard version anyway since we might not need a lot of the ADR-focused features.

Anyway, VideoSync has a ton of features that make life easier for us lowly composers:

• Playlist for video files so that all reels of a film can live in a single list, and each video will automatically play as the appropriate timecode is received. No more unloading Reel5.mov and loading Reel6.mov as you jump around in your project.

• Video output with optional full-screen mode on any connected Mac display, including the HDMI out on Mac Mini etc. as well as video cards and TB or USB video output devices.

• Audio output to any CoreAudio device, from the analog headphone jack to USB or TB interfaces, or even Dante if so equipped. Works with Loopback, Soundflower, and other inter-app audio pipelines for use on the same computer as your DAW, and can simply share your primary audio interface alongside your DAW via CoreAudio.

• Individual access to the audio tracks within a movie, with volume / mute / pan for each.

• Dual video tracks to A/B between two movies side-by-side or picture-in-picture, for comparison between versions etc.

• MIDI Time Code and MIDI Machine Control support for frame-by-frame scrubbing controlled by your DAW. It will even play in reverse!

• Import new audio files into the timeline of an existing movie, so you can bring in your score mixes, mute the temp score, and mix your imported score against dialog + sfx right inside the app... and then:

• Export a video file containing that mixed audio for delivering previews to directors etc.

• Options for perpetual license, or month-by-month subscriptions so you can pay as needed. Only need it for two months for one project? It'll cost you $38 per month, less than your Starbucks tab.

• Apple Silicon native, and it will run nicely on fairly old and weak Intel Macs as well. I use it on a 2012 Mac Mini with a quad-core i7, 16 GB of RAM, and a 512 GB SSD, and that machine is overkill.

• It can even be run on the same computer as your primary DAW - that way you get the workflow advantages of the playlists, audio features, and import / export even if you don't dedicate a computer to VideoSync.

• It will also run nicely on your PT Stem rig, or even on your VEPro machine!

• You can send MTC and MMC to it over CoreMIDI's "Network MIDI Session" features, so you don't need to have a hardware 5-pin MIDI interface on the VideoSync machine if you don't want to. This even works over WiFi somehow!
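
For the curious, running MTC is carried as eight "quarter-frame" MIDI messages (status byte 0xF1), each holding one nibble of the hh:mm:ss:ff position, with the frame-rate code packed into the last one. Here's a minimal Python sketch of that encoding per the MIDI Time Code spec - just the message format, not VideoSync's or CoreMIDI's actual code:

```python
RATE_CODES = {24: 0, 25: 1, 29.97: 2, 30: 3}  # MTC frame-rate codes

def mtc_quarter_frames(hh, mm, ss, ff, fps=25):
    """Encode hh:mm:ss:ff as the eight MTC quarter-frame messages.

    Each message is two bytes: status 0xF1, then a data byte whose
    high nibble is the piece number (0-7) and low nibble is one
    nibble of the timecode. Piece 7 also carries the rate code.
    """
    rate = RATE_CODES[fps]
    nibbles = [
        ff & 0x0F, (ff >> 4) & 0x01,                   # frames, low/high
        ss & 0x0F, (ss >> 4) & 0x03,                   # seconds
        mm & 0x0F, (mm >> 4) & 0x03,                   # minutes
        hh & 0x0F, ((hh >> 4) & 0x01) | (rate << 1),   # hours + rate
    ]
    return [bytes([0xF1, (piece << 4) | n]) for piece, n in enumerate(nibbles)]

# 01:00:00:00 at 25 fps - the last message carries the hours MSB + rate:
msgs = mtc_quarter_frames(1, 0, 0, 0)
print(msgs[-1].hex())  # 'f172'
```

A full two-frame cycle of these eight messages is what lets the receiver chase the sender's position - which is exactly what flows over a CoreMIDI Network Session instead of a 5-pin cable.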

Florian really has covered all the bases, and I'm so pleased that he's been able to turn an idea from a simple forum thread from ten years ago into a successful software company. He's a smart and dedicated man for sure!

(Note that I do not receive any money from sales or subscriptions of VideoSync, I'm just an enthusiastic user.)
 
Oh, I just thought of another question . . . what are your processes for presenting your music remotely when clients need to view it with the video? Or do they often want your music synced to video? Do your clients at your level just want a WAV file so their editors can pop it in with the video internally? Or even a PT session with all this? Since we focused so much on the final deliveries, it would be nice to know what you do in the writing stages!

We really appreciate your time and wisdom Charlie, a privilege to pick your brain!
Twenty-five years ago I used to print my mixes alongside dialog+sfx by doing a real-time record to VHS tapes (!!!) with the video playing from VirtualVTR, audio coming in to the Mac, and using faders on the Mackie Control to real-time duck the score behind dialog. One pass, 42 minutes per TV episode, and don't screw it up or you'll have to start over!

Twenty years ago I did the same but recorded to a standalone consumer DVD recorder. One pass with no erase. Screw it up and the disc goes in the trash!

But those situations were when I was scoring weekly network TV series, where turnarounds were brutally short and showrunners needed a preview to approve, which maaaaayyyybeeee gave me 24 hours to fix any cues.

So for the last ten years or so, and on most features, I’ve just been sending the WAV files to the assistant picture editor, and they dump them into their timeline. Since they already have DAX or PIX distribution lists and secure servers set up, this is really the only way I can avoid transferring video files in the clear. Plus those versions then just flow onto the producers’ iPads alongside the VFX approval clips, etc. Makes my life easier too!
 