
Sonokinetic Orchestral Strings Released!

Sorry for a somewhat displaced post, but did I miss something about a celli issue? I don’t have a celli mix in my instruments. I downloaded the instrument file twice, but no mix patch.
 
Yes, it's a file missing from the downloads. It's supposed to be added in the next update, but if you PM @Sonokinetic he can send you the missing Celli Mix .nki file; he did that for me.
 
Or download it from Native Instruments. That download came with some other issues, if I remember correctly, but after downloading from Native Instruments you could re-install just the instruments from the Sonokinetic installer. Then you would have the Celli Mix as well.
 
quick community poll:

- the draggable midi file creation in runs and phrases will be a menu option in the next update.
- my inclination is to set the default to 'no', so there's no chance of people getting caught out by the async midi creation interrupting their flow when they are trying out the instrument. (remember that in a DAW these files are only created on DAW stop anyway, so this only pertains to standalone playing or trying things while the DAW is stopped).
- my hesitation is that I like it appearing automatically and I feel people might not notice it if it is a menu option they have to manually turn on.

thoughts?
 
This feature absolutely has to be off by default, no doubt about it.

The CPU usage of the Sonokinetic strings is already in the heavier half of the libraries I measured. Ideally you'd provide additional patches with fewer features for someone like me who cares about CPU usage.

Not everyone hosts the instrument in the DAW either. Many people here use Vienna Ensemble Pro; I use an even worse setup: a notation editor capable of detailed MIDI performance, connected to Reaper via virtual MIDI cables. So I am always working with the DAW stopped, the way you described -- the DAW is only a plugin host.
 
I've never seen a setup like that. I'm trying to imagine how it works -- so you are sending MIDI from the notation program to Reaper?
1. I use PreSonus Notion, a notation program that has a bit of MIDI editing. I also use extensions that I made for it to make the MIDI editing more powerful, but I won't get into those now. Notion is weak at hosting many plugins and at routing audio, so I don't host plugins in it. It has 4 MIDI outputs, and I use those.

2. I use free virtual MIDI cables to get the MIDI data and forward it to Reaper.

3. Reaper distributes the MIDI to tracks containing virtual instruments and effects. But there is never a single note in Reaper; it just reacts in real time to incoming MIDI. One of the problems with this? Plugins never know the current BPM, since there's never a sequence playing. The latency compensation feature in MSS, for instance, doesn't work. Also, there's no way to apply negative track delay, so all my delays are positive. Finally, an offline bounce is impossible -- unless I get all the MIDI onto Reaper tracks, which is what I am avoiding.

The advantage of my setup is that I never need to descend into the repugnant under-world of music represented as a piano roll. I am always looking at notation. If I want to make arrangement changes after hearing the mockup, I am not going back and forth between Dorico and Cubase, no no no, the mockup and the score are in a single program. My score is full of hidden marks that create the performance, but I can print it out at any time.

The above is a simplification. I lied to you: there's more. 4 MIDI outputs from Notion is too few (this limitation is probably going away in the next major version of Notion). Therefore the MIDI cables talk to TransMIDIfier first -- this lets me map 64 MIDI channels to 8192 channels -- and then to Reaper. Basically my score knows which patch is playing this section of the music for this staff, but the patch selection is made via a MIDI CC, not via a channel change.
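
If it helps to picture that, here is a rough Python sketch of the channel-expansion idea, using the mido library, made-up port names and an arbitrary CC number -- it only shows the concept, not how TransMIDIfier actually works:

# Sketch only: expand one 16-channel MIDI input into many "virtual" channels,
# using a CC value as a per-channel bank selector. Everything named here
# (port names, CC 32) is hypothetical.
import mido

IN_PORT = "loopMIDI Port 1"                 # hypothetical virtual MIDI cable
OUT_PORTS = ["To Reaper A", "To Reaper B"]  # hypothetical destination ports
BANK_CC = 32                                # assumed CC carrying the patch/bank choice

outs = [mido.open_output(name) for name in OUT_PORTS]
bank = [0] * 16                             # last bank value seen on each incoming channel

with mido.open_input(IN_PORT) as inp:
    for msg in inp:                         # blocks, yielding messages as they arrive
        if msg.type == 'control_change' and msg.control == BANK_CC:
            bank[msg.channel] = msg.value   # remember the selection; don't forward the CC
            continue
        if hasattr(msg, 'channel'):
            # Combine (incoming channel, bank) into one of 16 * 128 = 2048 virtual
            # channels, then fold that onto the available output ports/channels
            # (a real setup would need enough ports to keep destinations distinct).
            virtual = msg.channel * 128 + bank[msg.channel]
            port = outs[(virtual // 16) % len(outs)]
            port.send(msg.copy(channel=virtual % 16))
        else:
            outs[0].send(msg)               # clock, sysex etc. pass through untouched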

Notion has a way for me to create custom techniques, such that when composing, I write "CSS" on a certain staff, and playback switches to the proper channel. I pay for this convenience when composing... with more work to set things up.
 
Wow, this is the first time I've heard of anyone doing this. I know many would like tight integration between Studio One and Notion, or Cubase and Dorico, so they could compose only in notation but have the plugin support of a DAW. Kudos for finding a way to do it.
 
I suggest "OFF" as the default. It is a very unique feature, but I think midi-generation is unlikely to become an "everyday tool" for me. Runs I will just play into the DAW. There is probably a use with phrases that I have yet to realize. I very much appreciate you thinking through this.

You could always add a pop-up help notification over the MIDI-drag icon suggesting a visit to the preferences, to keep new users from being confused by the feature.
 
This year, it's a Sonokinetic Christmas for me ...
I finally hit the Orchestral Strings "Buy Now" button today!
I also pressed this hypnotic button for Ostinato Noir and Indie ...
And I was also lucky enough to be able to press the "Claim ..." button for Sordino Strings!!
Many thanks to the Sonokinetic team!
As usual, you have put a lot of beautiful gifts for us under the tree again this year!
 
When loading each section, I think it would be better if only some articulations were loaded, because I don't use all of the articulations. If needed, an articulation could be loaded later. That would also help with RAM usage.
 
So, for anyone who like me is hesitant to pick up this package because of how much of the room you can hear even in the shorts on the close mics: I went digging a bit under the hood in Kontakt, inside the free Sordino version of the library they were so graciously giving out for a bit, and was rather surprised. It turns out that if you tame the _rel groups in the back end, you can get a much, much drier sound; it's actually not baked into the samples.

After some tweaking I managed to get it working on Violins I for the three articulations (Straight, Expressive and Staccato), for the Close and Decca mics, on both Divisi within Sordino Strings, as a proof of concept to myself. It does make the sound more exposed (and, if I had to describe it, a bit more brittle?) when you yank the room out like this. But if you just tame the _rel groups to the point where they're only slightly audible, instead of deleting or disabling them fully, you get a more acceptable release behaviour for layering with drier libraries, which leaves you with a lot more headroom for your own reverbs.

So if the main library is built the same way, it's possible to get a much drier sound, which makes me a lot more interested in it again -- as lovely as the default tone can be, this just makes it far more flexible.

I'll definitely pick up the library if I can get confirmation that this is possible in the larger, main product as well -- then, if I need to blend it with another rather dry library and use an external reverb on the multi, I'd know that I can. But it would be nice if this were officially supported, rather than a manual workaround in the Kontakt back end that has to be done per section, per divisi, per articulation, per mic position.

That way, you would be able to fully use the sound of the room when needed (it's lovely in the right context), but also duck it out when it's not, without having to manually create tweaked NKIs.

@Sonokinetic BV, is there any chance you could expose this behaviour as an option -- a volume knob or some other control to let users decide how much of the room / the release groups is audible -- if Orchestral Strings is indeed built in a similar manner to Sordino Strings? Or, if that's outside the scope of the product, can you at least confirm it's built in a similar way, so that I know I can do it manually and can safely pick it up?

Thank you!
 