Just out of curiosity, a question for folks working in the VST libraries industry.
Is anyone already working on training generative AI models on VST sample music, then showing them human performances, so the model can enhance the realism of a VST library performance?
I've been wondering about this because in photography and video this approach is already widely used. The intention wouldn't be to compose new music, but to take a decently crafted VST mock-up and improve upon it.
If people are already working on it, what are the current results, and why hasn't it reached the market yet?
Is the problem creating a sufficiently large training dataset?
Or is the problem temporal consistency (similar to what happens in video)?
Thanks in advance!!!