How does the whole process of printing stems through PT from Cubase work?

Twenty-five years ago I used to print my mixes alongside dialog+sfx by doing a real-time record to VHS tapes (!!!) with the video playing from VirtualVTR, audio coming into the Mac, and using faders on the Mackie Control to real-time duck the score behind dialog. One pass, 42 minutes per TV episode, and don't screw it up or you'll have to start over!

Twenty years ago I did the same but recorded to a standalone consumer DVD recorder. One pass with no erase. Screw it up and the disc goes in the trash!

But those situations were when I was scoring weekly network TV series, where turnarounds were brutally short and show runners needed a preview to approve, which maaaaayyyybeeee gave me 24 hours to fix any cues.

So for the last ten years or so, and on most features, I’ve just been sending the WAV files to the assistant picture editor, and they dump them into their timeline. Since they already have DAX or PIX distribution lists and secure servers set up, this is really the only way that I can avoid transferring video files in the clear. Plus then those versions just flow onto the producers’ iPads alongside the VFX approval clips, etc. Makes my life easier too!
That's crazy! But great practice for attention to detail!

I guess using frame.io or CueDB etc. is out of the question if we're going to lock down our videos? lol.

Would you have any advice for composers such as myself in the mid tier on how we can present our music with video if requested? I've used Frame.io, G Drive, and Digital Pigeon, but if you know of other safer and better ways, I'd love to hear it! :) ty
 
Well, it’s really up to the production to determine how locked down they want to keep the video. I’ve had some productions use WeTransfer to send me rough cuts. So I’m not suggesting that any composer should impose maximum strictness if nobody else in the food chain is… I’m just trying to describe the best-case / worst-case scenarios that you might run into.

On a fairly normal drama film maybe nobody will care that much, but on a long-running horror franchise known for twist endings and surprise reveals with a rabid fan base and very active online fan presence…. it might be a different story.

In any case, if you need to present work-in-progress music against picture, I’d just have a quick conversation with a producer or post supervisor. Something like, “So when it comes time to present cues for approval, how do you want me to do it? Should I just use DropBox or WeTransfer to send you QuickTime movies, do you guys use CueDB, or should I send music to the picture editor, or what?” Give them the opportunity to dictate the balance between security and convenience, and be prepared to accommodate their needs.

They may not care at all. On one series I did, my assistant would leave a DVD in the show runner’s mailbox at curbside, but on another (that had about 1/10th the viewers) they were in fully locked-down mode and using PIX exclusively. So it’s up to them really…. If a person of authority says, “Just make a shared DropBox folder and stick the video files in there” then that’s good enough…. Although I’d want that reply in an email or text, as opposed to a purely verbal, off-the-record conversation, just so if anything blows up you can say, “But… but… Jordan TOLD me to do it this way, and I have receipts!”

I guess it’s just a matter of being receptive and accommodating to whatever their comfort level is on these matters, and to follow their lead instead of suggesting / dictating how video files are controlled. You never want to have them say, “This freakin composer we hired wanted to use some janky video upload software that we’d never heard of, just so we could leave comments and then check a box when a cue was approved or whatever, and that shit was leaky as hell and that’s why the video leaked. Check the watermark, it was him for sure…”

Years ago I saw a demo of an early version of CueDB and I gotta admit it looked slick, but when I asked about security I got blank stares, like they’d never even considered the possibility that it would be an issue. Maybe they’ve added features for access control and history by now, I have no idea.

But even for the scale of projects I work on it seemed like massive overkill. Like, how many people need to leave comments anyway? And how many revisions do they think we’ll be going through? It seemed tailor-made for massive, complex, and important projects - the type of projects that had a dozen producers to please and a dozen people on the music team… which would also seem to be the type of projects that might need access control the most. Like, CueDB seems to be a solution that HZ’s massive army might use, but their clients are exactly the type who might require hardcore security. Maybe not so much on a BBC nature documentary being scored in building four at Bleeding Fingers, but on Dune 4 or Pirates 7… definitely.

So… I dunno.
 
Hi Ed,

Just curious if the whole Short / Long thing extended to pretty much every orchestral instrument or if certain sections were split out that way more than others.

For example, are we looking at...
  • Piccolo Short
  • Piccolo Long
  • Flute Short
  • Flute Long…
  • Horns Short
  • Horns Long
  • Trumpet Short
  • Trumpet Long…
  • Vlns Short
  • Vlns Long
  • Violas Short
  • Violas Long…
etc.

I imagine synths, percussion, and sound design can get a bit trickier to compartmentalize, since there's great variety in those types of sounds that you may need to split out for mixing purposes.

Best,

-T
And we print in quad or 5.1. Some sessions get up to a thousand tracks for Alan to mix. He then gives the dub stage a bunch of Dolby Atmos stems…
but I can’t really see anyone delivering to a professional stage in any format other than PT.
oh, and make sure all your computers and interfaces are word-clocked to the same source!
 
I play my video from a separate Mac Mini running VideoSync software (formerly known as VideoSlave). The video files are ONLY on that computer, which has its WiFi turned off, and is only connected to the internet via a Cat5 cable for that ten minutes I'm actually downloading the video file, and then I unplug the cable.
Same here, VideoSync is awesome. Well worth the subscription. :thumbsup:

Charlie, I asked my nan and she's willing to buy at least one copy of your Film Scoring for Dummies book and so will I, of course, so clearly that should be enough to change your mind about not having a big enough audience for publication.

Ehrr... I did, sort of, maybe tell my nan you'd come and visit her, play cards and order a bunch of orthopedic pillows over the phone with her. It's free biscuits and tea, though. 🤷‍♂️:whistling:
 
@charlieclouser Ok, you got my curiosity piqued on this item AGAIN. So I had to revisit it again in 2024.

Logic slaving to Pro Tools. The sync is better than what I remember. It could come down to the fact that my system is all Dante, and Dante is the clock, and since PT is running through a RedNet box, the word clock is the same across everything. But I am still testing with no Sync HD. I put the same audio, a short drum loop, on both systems and did a rather non-scientific evaluation. The sync is close, but does wander a bit, from a subtle delay to a subtle flange. Still, the sample rate slider in the sync panel stays at 48k, even when the deviation slides around a bit. This is all better than what I remember, but then again, I had a different setup when I last checked this. And I was even using an Avid Sync I/O to boot. All in all, not bad, but not my preferred way to go about it.

Pro Tools slaved to Logic. You’d be right to anticipate this would render the same result, right? Well, there is a difference. What I notice is that the resulting sync is a bit of a flange, but what I really noticed was that it seemed to be the same delay every time I hit play. When I hit play in the first example above, the delay would be different every time, like the initial start point was a bit random. So I tried recording into PT, and found the sound was pretty much solid, with the same consistent delay. Again, not scientific, just listening to the resulting files played together. There was no question that these would not truly null, and I would not expect that anyway. Still, better than what I remember from years ago.
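
If I wanted to put a number on that instead of judging it by ear, something like this quick Python sketch would do it: capture the same short drum loop out of both rigs, then cross-correlate the two WAV files to estimate the offset. It assumes numpy and soundfile are installed, and the file names are just placeholders.

```python
# Sketch: estimate the offset between two captures of the same loop.
# Keep the captures short (a few seconds); this brute-force correlation is O(n^2).
import numpy as np
import soundfile as sf

def estimate_offset(path_a, path_b):
    """Return (lag_samples, lag_ms); positive means the second file is late."""
    a, sr_a = sf.read(path_a)
    b, sr_b = sf.read(path_b)
    assert sr_a == sr_b, "both captures should be at the same sample rate (e.g. 48k)"
    # Use the first channel if the captures are stereo/multichannel.
    if a.ndim > 1:
        a = a[:, 0]
    if b.ndim > 1:
        b = b[:, 0]
    n = min(len(a), len(b))
    corr = np.correlate(b[:n], a[:n], mode="full")
    lag = int(np.argmax(corr)) - (n - 1)
    return lag, lag / sr_a * 1000.0

if __name__ == "__main__":
    lag, ms = estimate_offset("logic_capture.wav", "protools_capture.wav")
    print(f"offset: {lag} samples ({ms:.2f} ms)")
```

Run it over a few separate captures and you can see whether the offset is a consistent fixed delay or wanders around (the flange).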

So my takeaway is that, for me, driving PT from MTC generated by LPX seems to be the most convenient and stable way to go. I feel that bouncing internally and placing the files into PT is still the absolute best way to retain fidelity, though it’s a PITA to do that extra step instead of just hitting record in PT and laying it across in one go. Since I have Dante, I can and do throw the audio stems up onto their own tracks in PT and input-monitor through PT, since I have PFX and SFX and video already there. It’s maybe not the quality of a hardware sync, but for me the sync quality really only matters for video playback. If I need to get specific with markers, LPX sends MMC and PT lands on the exact frame every time I nudge forward/backward in frame increments.
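
For reference, the MMC “Locate” command that this kind of chasing is built on is just a short universal real-time SysEx. Here’s a rough sketch of building and sending one with the mido library; the port name is a placeholder for whatever virtual MIDI bus feeds PT, and this just follows the generic MMC spec, so treat it as an illustration rather than something captured from LPX.

```python
# Sketch: build an MMC "Locate" SysEx (F0 7F <dev> 06 44 06 01 hr mn sc fr sf F7)
# and send it to a MIDI output port. The port name below is a placeholder.
import mido

def mmc_locate(hours, minutes, seconds, frames, frame_rate_code=3, device_id=0x7F):
    """frame_rate_code per the MMC spec: 0=24, 1=25, 2=30 drop, 3=30 non-drop."""
    hr = ((frame_rate_code & 0x03) << 5) | (hours & 0x1F)  # rate packed into the hours byte
    data = [0x7F, device_id, 0x06, 0x44, 0x06, 0x01,
            hr, minutes & 0x7F, seconds & 0x7F, frames & 0x7F, 0x00]  # last byte = subframes
    return mido.Message("sysex", data=data)

with mido.open_output("IAC Driver Bus 1") as port:  # placeholder port name
    port.send(mmc_locate(1, 0, 30, 12))  # ask the chasing machine to park at 01:00:30:12
```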

I have never questioned whether the two are at the same place when frame-forwarding. But it’s still possible there is a delay to the actual video playback in PT; I’ll be honest, I never did a video playback sync test. I know some people have gone through this process and adjusted their video playback offset in PT to make sure the playback is as close as possible to the timeline. I never did this, since I sometimes get differing codecs, and I don’t know if PT would play back with the same video/audio delay for differing codecs. Maybe I need to think about testing that.
 
All good info, thanks for testing it all out...

It's good to hear that PT doesn't put up a fuss when acting as a slave to Logic's MTC, but of course I'd kind of expect PT to behave, since we've all slaved it to a zillion LTC and MTC masters since the dawn of time. I just continue to do it PT > Logic > VideoSync because I do have three machines, and since I'm not running video on PT, that rig can stay powered down until it's time to print stems.

Just to derail the thread slightly, a couple of questions about your dual-machine, all-Dante setup - since I am considering moving to Dante:

• Do you use Dante Virtual Soundcard on either (or both) machines, or do you have actual Thunderbolt/USB > Dante interfaces (RME DigiFace Dante, Focusrite PCIeNX, DAD, etc.) on either or both?

• If you've used both DVS and hardware Dante interfaces, have you found any differences in CPU load or latency between the two methods?

• In general with Dante, do you find that you can use small buffers (32 or 64) and achieve latencies that are acceptable for playing software-synth percussion, tracking guitars, etc.?

• Do you use a router/switch for all the Dante Cat5 cabling, and if so... which one? Managed or un-managed? (I just got a headache typing that phrase out...)

I like the idea of the flexibility that a Dante system could provide, but I've never sat in front of one and am concerned that some aspect of the system would present a hard lower limit on the total system delay between "hit a finger-drumming pad" and "kick sample hits my ears". This becomes even more of a concern as I consider adding a Trinnov Nova to the signal path.... and it's still hard for an old guy like me to wrap my brain around the idea that AoIP can shove packets around the room as quickly as "linear / continuous / direct" MADI or Thunderbolt data streams.
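
Just to sanity-check my own worry here, a rough back-of-the-envelope budget (not measured data): the buffer arithmetic below is exact at 48k, but the converter and network numbers are placeholder assumptions I'm making up for illustration, so swap in real specs for any actual interface.

```python
# Rough round-trip monitoring latency: one buffer in + one buffer out,
# plus AD/DA conversion and AoIP network/device latency (both assumed values here).
SAMPLE_RATE = 48_000  # Hz

def roundtrip_ms(buffer_samples, converter_ms=1.0, network_ms=0.5):
    buffer_ms = buffer_samples / SAMPLE_RATE * 1000.0
    return 2 * buffer_ms + converter_ms + network_ms

for buf in (32, 64, 128):
    print(f"buffer {buf:>3}: ~{roundtrip_ms(buf):.2f} ms round trip")
```

With those assumed numbers that's roughly 2.8 ms round trip at a 32-sample buffer and about 4.2 ms at 64, before whatever extra safety buffering the driver or DVS adds on top, which is exactly the part I can't predict without sitting in front of a system.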

One idea might be to adopt a hybrid approach, where the primary interface for Logic is connected via Thunderbolt / USB - like a DAD ThunderCore 256, AX Center, or AX64, which all have Dante ports - and Dante is then used to bridge audio from that interface into the Dante network, allowing stuff like more analog I/O and a Trinnov Nova to dangle from the end of Cat5 cabling.... but I'm still unsure if such routing is possible or practical, or if I'm just complicating things.

The minimal hardware method would be to use Dante Virtual Soundcard on all three of my machines and just plug a FerroFish Dante <> analog box and the Trinnov Nova into the network, but the DVS ceiling of 64 channels of I/O might be a constraint, so... interested to hear your experiences with Dante.
 