Samplicity announces Berlin Studio plugin

and although the result is somewhat different than using OT’s tree, AB, surround mics
Which is kinda the point I’m making, as an IR isn’t a direct substitute for an instrument recorded in the same space as that IR.

Is the preferred end result debatable? Sure. We all have our preferences.
 
You're mixing up acoustics and instruments here, lol.
 
Not really, because all of the mic positions in OT libraries (or just about any library for that matter) aren’t just capturing IR data (like Berlin Studio is), they’re also capturing the instruments themselves, which can’t be replaced with a reverb plugin.
Really confused... what are "IR data"?

I have the impression you don't know what convolution is.
OT have recorded "instruments"? No, they have recorded sounds emitted by instruments.
I have done the same in the Teldex Studio. Recorded sounds.

Capturing instruments: please explain
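Since the thread hinges on what convolution actually does, here is a minimal sketch in Python/NumPy with a fully synthetic IR and source (all numbers are illustrative, nothing here comes from Berlin Studio): every sample of the dry signal triggers a scaled, delayed copy of the IR, which is how a convolution reverb adds a room's reflections to a dry recording.

```python
import numpy as np

# Toy example of convolution reverb: the "recorded sound" is the dry
# source convolved with the room's impulse response (IR).
# All signals here are made up for illustration.

sr = 8000  # sample rate (Hz)

# A synthetic IR: direct sound plus an exponentially decaying noise tail.
rng = np.random.default_rng(0)
ir = np.zeros(sr // 2)
ir[0] = 1.0                                  # direct path
t = np.arange(1, len(ir))
ir[1:] = 0.2 * rng.standard_normal(len(t)) * np.exp(-t / (sr * 0.05))

# A dry source: a short burst.
dry = np.zeros(sr // 4)
dry[:32] = np.hanning(64)[:32]

# Convolution: every dry sample triggers a scaled copy of the IR.
wet = np.convolve(dry, ir)

# The wet signal is longer than the dry one by the IR's tail.
assert len(wet) == len(dry) + len(ir) - 1
```

Note that the IR contains only information about the room and the measurement positions; whatever the dry signal sounds like is supplied entirely by the source.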
 
...but it’s definitely not the same thing as an actual instrument being recorded in a certain area of the stage with a certain mic position...
I fully disagree with you. This is a field of science that you are apparently not familiar with.
 
Really confused... what are "IR data"?

I have the impression you don't know what convolution is.
OT have recorded "instruments"? No, they have recorded sounds emitted by instruments.
I have done the same in the Teldex Studio. Recorded sounds.

Capturing instruments: please explain
I think he's referring to the differences in what gets picked up by mics of varying levels. Timbral changes and subtle details.

For instance, a close mic on a brass instrument could potentially pick up the buzzing of the player's lips (aka "hearing the spit"), whereas you would hear little if any of that from the tree or AB mics, and definitely not from the surround mics. A close mic on a woodwind instrument can distinctly pick up the key clicks, but you wouldn't expect to hear those in a surround mic.

Those kinds of things.
 
I expect worse confusions... "real instruments" being recorded instead of "just recording IR data"
 
@Peter Emanuel Roos

I’m simply saying that there are audible differences the OT mics picked up when they recorded something like Berlin Strings that’s absent from Berlin Studio.

There is quite a bit of human expression in those strings that affects the reflections of Teldex, and that is the information that is naturally absent from Berlin Studio.

Different sources can affect the acoustics of any given space, especially when you consider the amount of human expression found in something like strings.

Even those subtle details a human makes when playing an instrument can be heard in hall reflections, and again, those details that were captured during the recordings of Berlin Strings are not found in Berlin Studio.

And that’s why I don’t believe that just using the close mics in Berlin, then switching on the other mics in Berlin Studio is the exact same thing as using all the mics in Berlin Strings.

@Trash Panda probably explained it better than I did, but what he said is basically what I’m getting at.
 
You are really mixing terms from different domains, like apples and oranges.
Now it is suddenly human expression, in a previous comment you used the term "capturing instruments".

Do you think that a microphone or a Pro Tools rig for one moment "minds" if it is recording an instrument or any other signal?

Do you really believe that human expressions change acoustics?

You are mixing terms from human perception and physics as you like and as if they come from a single domain. That makes no sense.

Please understand my viewpoint:

I really don't care if you do not like the sound of my plugin,
but it annoys me when you try to provide invalid arguments and strange reasoning for that.
 
No, it’s not “suddenly” human expression, because that’s a part of “capturing instruments”. You really can’t have one without the other. I thought that was a given.

And yes, I do believe human expression can impact the acoustics of a room, because dynamics can be heard in reflections.

To really dumb it down, go stand in the middle of a large hall and clap twice with the same attempted force. Each of those claps is going to reverberate slightly differently, because you aren’t a machine, and there were minor differences in the force and position of your hands. Dynamics and positioning, my friend, affect those reflections.

And no, a microphone doesn’t care what it’s recording, but a microphone will pick up the differences in reflections that pertain to the signal.

In other words, if your sound source is a gunshot, the reverb will sound a hell of a lot different than it does in the same room with a violin as the source.
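As a side note, both sides of this exchange are compatible with basic linear-systems theory: the room's IR is fixed, yet the reverberated result still depends entirely on the source, because the wet signal is the source convolved with that fixed IR. A toy sketch with a synthetic IR and made-up sources:

```python
import numpy as np

# Sketch: the room (IR) is fixed, but the reverberated result still
# depends on the source, because wet = source (*) IR.
sr = 8000
rng = np.random.default_rng(1)
ir = np.exp(-np.arange(sr // 4) / (sr * 0.03)) * rng.standard_normal(sr // 4)

click = np.zeros(sr // 8)
click[0] = 1.0                                             # gunshot-like
tone = np.sin(2 * np.pi * 220 * np.arange(sr // 8) / sr)   # violin-like

wet_click = np.convolve(click, ir)
wet_tone = np.convolve(tone, ir)

# Same room, very different results: the tails carry the source's
# spectrum and envelope, even though the IR never changed.
print(np.sum(wet_click ** 2), np.sum(wet_tone ** 2))
```

So the acoustics (the IR) do not change with the source, but what the microphones pick up absolutely does.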
 
By the way, are there any film scores or other symphonic recordings from Teldex that I could listen to? I think having some references might help with getting the mix right.
 
If it didn't matter at all, why would some devs go out of their way to capture impulse responses using the actual instrument that the reverb was specifically designed for? Why would one decide on coloring their dry signal with a blanket of simulated room reflections that don't make sense for the instrument?

Room and wall modes can be exaggerated along the lines of basic Helmholtz resonator theory; if you drive a dump truck beside a room or hall and use the dump truck as your impulse source to capture room tails and wall reflections for RT60 measurements, it's surely going to sound different than a reverb designed specifically for a flugelhorn sitting in the same room. I guess that's why we have Quantum Spaces?

This calls for a shootout! Let's post dry signals of an instrument or two, along with the recorded mic positions in combination, and see if we can simulate the sense of depth, wall reflections and space that was captured within the original recording sessions :dancedance: Have no fear, I come in peace :) I am genuinely interested in the subject matter being discussed :)
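Since RT60 comes up here: a common way to estimate it from a measured IR is Schroeder backward integration. The sketch below uses a synthetic IR built to decay 60 dB in 1.2 s; with a real measured IR you would load audio samples instead. The -5 dB / -35 dB fit points follow the usual T30 convention.

```python
import numpy as np

# Rough RT60 estimate from an impulse response via Schroeder backward
# integration. The IR here is synthetic, built to decay 60 dB in
# rt60_true seconds; a real measurement would be loaded from audio.
sr = 8000
rng = np.random.default_rng(2)
rt60_true = 1.2  # seconds
n = int(sr * 2)
t = np.arange(n) / sr
ir = rng.standard_normal(n) * 10 ** (-3 * t / rt60_true)  # -60 dB at t = rt60_true

# Schroeder integration: energy decay curve (EDC) in dB.
edc = np.cumsum(ir[::-1] ** 2)[::-1]
edc_db = 10 * np.log10(edc / edc[0])

# T30 method: fit between -5 dB and -35 dB, extrapolate to -60 dB.
i5 = np.argmax(edc_db <= -5)
i35 = np.argmax(edc_db <= -35)
slope = (edc_db[i35] - edc_db[i5]) / ((i35 - i5) / sr)  # dB per second
rt60_est = -60 / slope
print(round(rt60_est, 2))
```

A proper measurement would also band-filter the IR per octave, but the decay-curve idea is the same.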
 
By the way, are there any film scores or other symphonic recordings from Teldex that I could listen to? I think having some references might help with getting the mix right.
I'm sorry I can't help with this, I'm not associated with the Teldex company, I have no connections for such information.
 
Peter can answer for himself of course, but note that Berlin Studio comes with different IRs tailor-made for specific instruments of the orchestra.

So it's not using one set of general purpose IRs for everything, as in the dump truck example :)

Edit:
For a correct explanation, see Peter's reply below.
 
Hi, I'm busy building a new template and have been trying to blend the following:

BBCSO Core
HOOPUS
Samplemodeling Strings
Infinite Brass & Wind
+ a few other libs

I don't want to take this off-topic, however Berlin Studio might solve a few issues with the above.

1. I'm blending everything with BBCSO (no reverb). I assume that this is the best option since it's already quite wet.
2. In HOOPUS I was just using a small reverb (Burbank Small Scoring Stage), however I am still a little confused as to how the Pan works in this library since all instruments have a Pan preset.
3. IW & IB I used the Mic 1 blend and the Bersa Hall.
4. Samplemodeling Strings and other very dry libraries I used an insert of Panagement to place.
5. Only a couple of OT instruments at the moment.
6. All sections have a -6 to -4 dB send to a bus with Valhalla Room set to a pure late-tail chamber reverb.

This works... ish. I'm not happy with the blending and am struggling with all the moving parts.

Berlin Studio looks like it could simplify things and potentially replace Panagement, HOOPUS panning and Valhalla. Am I right?

Secondly, I've watched a couple of videos reviewing BS but am still not quite sure how the routing works. Do I need to have BS as an insert on every instrument, or do I just route stereo audio to the plugin and have it handle each input separately? Would I still need a global reverb like Valhalla?
Apologies for the bump, just not sure if anyone saw my post above since it seemed to get buried at the bottom of page 38 during a heated debate!!
 
The preset names refer to locations/positions in the studio (hence also the "floor layout"). The IRs represent the signals from such positions to the microphones, including all the reflections and comb-filtering (reflection = filter) picked up and the "tail" of the reverberation.

I have never claimed that the IRs have anything to do with the typical sounds of instruments, apart from their typical directivity (French Horns backward, Trombones forward, etc.).
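Peter's "reflection = filter" remark can be made concrete: a direct path plus one delayed reflection is a feedforward comb filter, with notches at regular frequency intervals. A small sketch with made-up numbers (a single 1 ms reflection at 0.7 gain):

```python
import numpy as np

# "Reflection = filter": direct sound plus one delayed reflection
# forms a feedforward comb filter. Numbers are illustrative only.
sr = 48000
delay = 48            # 1 ms reflection delay at 48 kHz
ir = np.zeros(1024)
ir[0] = 1.0           # direct sound
ir[delay] = 0.7       # single reflection

mag = np.abs(np.fft.rfft(ir, 8192))
freqs = np.fft.rfftfreq(8192, 1 / sr)

# Notches fall at odd multiples of 1/(2 * 0.001 s) = 500 Hz, 1500 Hz, ...
# and peaks at the even multiples in between.
notch = np.argmin(np.abs(freqs - 500))
peak = np.argmin(np.abs(freqs - 1000))
print(mag[notch], mag[peak])  # notch ~ 0.3 (= 1 - 0.7), peak ~ 1.7 (= 1 + 0.7)
```

Real IRs contain thousands of such reflections, so the combing smears into the dense coloration we hear as the room.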
 
Apologies for the bump, just not sure if anyone saw my post above since it seemed to get buried at the bottom of page 38 during a heated debate!!
When I used the IRs years ago in my Cubase setup, I used them in send/aux channels for instrument groups.

I also experimented with smaller groups (say Flutes) with only early reflections, but that was too much work, too detailed, with general purpose convolvers like REVerence.

If you use them in aux channels, make sure to pull the Source channel to zero.

But there's nothing wrong with using it in a few channels as an insert effect.
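The "pull the Source channel to zero" tip can be illustrated with a toy mix model (a sketch only, not how any specific DAW or plugin routes audio): if the convolver on the aux also passes the dry source, the dry signal reaches the mix bus twice.

```python
import numpy as np

# Toy aux-send model: the instrument channel carries the dry signal to
# the mix bus; the aux return carries the convolved tail, plus a copy
# of the dry source if the convolver's Source level is above zero.
sr = 8000
rng = np.random.default_rng(3)
dry = rng.standard_normal(sr // 4)
tail = rng.standard_normal(sr // 4) * np.exp(-np.arange(sr // 4) / (sr * 0.05))

def aux_mix(dry, tail, source_level):
    wet = np.convolve(dry, tail)[: len(dry)]
    aux = source_level * dry + wet   # what the aux return carries
    return dry + aux                 # summed at the mix bus

good = aux_mix(dry, tail, 0.0)   # Source at zero: dry appears once
bad = aux_mix(dry, tail, 1.0)    # Source up: dry summed twice

print(np.max(np.abs(bad - good)))  # the difference is exactly the doubled dry
```

With the Source at zero, the aux return stays 100% wet and the dry balance is controlled only by the instrument channel, which is the point of the tip.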
 