
Sample Creators: Introducing Note Sequences

Hello Sample Creators!

I have a couple of news items I wanted to share. First, I’m thrilled to announce a new update to Decent Sampler – a tiny but mighty sequence playback engine hidden right within the plugin. This new feature lets you embed simple note sequences directly into your sample libraries. This opens up a world of possibilities for sample library makers. For example:

  • Guitar libraries: Include strumming patterns, arpeggios, and riffs.
  • Drum libraries: Provide essential preset beats and fills to jumpstart users’ creative flow.
  • Melodic libraries: Craft short phrases or even full melodic sequences.
  • World percussion libraries: Embed traditional rhythms and grooves to enhance authenticity.

Learn more about how to use these here. By the way, for those of you who are Patrons, the latest version of the Omnichord sample library makes use of the new sequencer functionality. Each of the Omnichord strums is a little 13-note sequence that gets played when the user hits a key.

Which brings us to our second news item: the DecentSamples File Format Developer Guide has been moved here. As I was typing up documentation for the note sequencer stuff, I felt we’d finally reached the moment where we needed to break all of the information in the guide into several discrete pages. I’m now using the ReadTheDocs service to help me host and organize the documentation.

OK. I think that’s it. As always, let me know if you find any bugs.

– Dave


For Sample Creators: How to Add Tempo-Synced Delay to your Instruments

Up until now, the Delay effect has allowed users to specify the amount of delay in seconds. A brand new feature in version 1.10.0 of Decent Sampler adds the ability to sync the delay time to the host clock, allowing users to specify their delay time in musical time units (e.g. quarter notes, eighth notes, etc.). In this article, we’ll talk about how to make use of this new functionality.

1. Getting Started

The patch we are going to be working with is just a basic triangle wave sample library. This is what the code looks like:

<?xml version="1.0" encoding="UTF-8"?>

<DecentSampler>
  <ui>
    <tab></tab>
  </ui>
  <effects></effects>
  <groups attack="0.0" decay="1.0" sustain="0.0" release="1.75" ampVelTrack="0.3">
    <!-- sample definitions are here -->
  </groups>
</DecentSampler>


(For the sake of brevity, I’ve removed the portions of the code listings that contain the sample definitions, as they are not important for this tutorial.) As you can see, the UI is blank and there are no effects added yet. By the way, if you wish to follow along, this starting code can be found here in the file labeled Step 1.

2. Adding the Basic Delay Effect

Our first order of business is to add in the delay effect as well as some knobs to control it. Here’s how that looks:

<?xml version="1.0" encoding="UTF-8"?>

<DecentSampler>
  <ui>
    <tab>
      <labeled-knob x="180" y="40" label="Delay Time" valueType="float" minValue="0" maxValue="5" value="0.5">
        <binding type="effect" level="instrument" position="0" parameter="FX_DELAY_TIME" translation="linear"/>
      </labeled-knob>
      <labeled-knob x="280" y="40" label="Feedback" valueType="float" minValue="0" maxValue="1" value="0.5">
        <binding type="effect" level="instrument" position="0" parameter="FX_FEEDBACK" translation="linear"/>
      </labeled-knob>
      <labeled-knob x="380" y="40" label="Stereo Offset" valueType="float" minValue="0" maxValue="1" value="0.01">
        <binding type="effect" level="instrument" position="1" parameter="FX_STEREO_OFFSET" translation="linear"/>
      </labeled-knob>
      <labeled-knob x="480" y="40" label="Wet Level" valueType="float" minValue="0" maxValue="1" value="1">
        <binding type="effect" level="instrument" position="0" parameter="FX_WET_LEVEL" translation="linear"/>
      </labeled-knob>
    </tab>
  </ui>
  <effects>
    <effect type="delay" delayTime="0.5" stereoOffset="0.01" feedback="0.2" wetLevel="0.5" />
  </effects>
  <groups attack="0.0" decay="1.0" sustain="0.0" release="1.75" ampVelTrack="0.3">
    <!-- sample definitions are here -->
  </groups>
</DecentSampler>

As you can see, we’ve added our delay on line 21. On lines 6 through 17, we’ve added some controls that let us fine-tune various aspects of our delay effect. In this article, we’re going to focus almost exclusively on that first control: Delay Time. Right now, the delay effect is receiving its delay time, in seconds, from that first control. The control has a valueType of float (short for “floating-point number”), which is the default valueType. This means that the number the control outputs can be anything between the minimum and the maximum: whole number or fractional, pretty much anything goes.

(If you wish to see this iteration of the code, it can be found here in the file labeled Step 2.)

3. Adding in tempo syncing

Up until now, we’ve been specifying our time in seconds, and, even if the plugin is being run within some host software, the delay time is not being synced to the tempo of that software at all. Let’s change that. Here’s what the new code looks like:

<?xml version="1.0" encoding="UTF-8"?>

<DecentSampler>
  <ui>
    <tab>
      <labeled-knob x="180" y="40" label="Delay Time" valueType="musical_time" value="10">
        <binding type="effect" level="instrument" position="0" parameter="FX_DELAY_TIME"/>
      </labeled-knob>
      <!-- more knob definitions we don't care about right now -->
    </tab>
  </ui>
  <effects>
    <effect type="delay" delayTimeFormat="musical_time" delayTime="0.5" stereoOffset="0.01" feedback="0.2" wetLevel="0.5" />
  </effects>
  <groups attack="0.0" decay="1.0" sustain="0.0" release="1.75" ampVelTrack="0.3">
    <!-- sample definitions are here -->
  </groups>
</DecentSampler>

There are two important things to note here:

First, on line 6, we’ve changed the valueType of our Delay Time knob to musical_time. This is a special, magic value that will cause the control to display a series of standard musical time increments. For those curious, here are the actual time increments that will be used: 1/64 triplet, 1/64, 1/32 triplet, 1/64 dotted, 1/32, 1/16 triplet, 1/32 dotted, 1/16, 1/8 triplet, 1/16 dotted, 1/8, 1/4 triplet, 1/8 dotted, 1/4, 1/2 triplet, 1/4 dotted, 1/2, 1 triplet, 1/2 dotted, 1, 1/1 dotted. On the back end, this will be transmitted out to the delay effect as a whole number from 0 to 20.

Of course, our delay effect was set to expect its time in seconds, right? So if we were to leave the Delay effect alone, it would misinterpret that special magic number coming from the knob as a value in seconds. To fix this, we’ve added a brand new delayTimeFormat="musical_time" attribute to the <effect> element on line 13. Now, the Delay effect knows that its time is being set using the special magic array of musical time options. For example, if it receives a value of 4 from a binding, it now knows that this does not mean 4 seconds; it means a 1/32 note relative to whatever the host tempo is.

By the way, if you wish to see this iteration of the code, it can be found here in the file labeled Step 3. It already works really nicely.

4. Adding a knob that allows the user to switch between Seconds and Musical Time

So you may be wondering what happens if the plug-in is being run in standalone mode. Well, in such a situation, DecentSampler isn’t able to receive a tempo from the host software, so it defaults to 120 beats per minute. Because of this scenario, it is often wise to include a button that lets users switch between musical time and clock time. Here’s how we do that:

<?xml version="1.0" encoding="UTF-8"?>

<DecentSampler>
  <ui>
    <tab>
      <label x="101" y="57" width="80" height="30" text="Tempo Sync" textSize="15" />
      <button x="101" y="90" width="80" height="30" value="0">
        <state name="On">
          <binding type="effect" level="instrument" position="0" parameter="FX_DELAY_TIME_FORMAT" translation="fixed_value" translationValue="musical_time" />
          <binding type="control" level="ui" position="2" parameter="VALUE_TYPE" translation="fixed_value" translationValue="musical_time" />
          <binding type="control" level="ui" position="2" parameter="VALUE" translation="fixed_value" translationValue="10" />
        </state>
        <state name="Off">
          <binding type="effect" level="instrument" position="0" parameter="FX_DELAY_TIME_FORMAT" translation="fixed_value" translationValue="seconds" />
          <binding type="control" level="ui" position="2" parameter="VALUE_TYPE" translation="fixed_value" translationValue="float" />
          <binding type="control" level="ui" position="2" parameter="VALUE" translation="fixed_value" translationValue="0.25" />
          <binding type="control" level="ui" position="2" parameter="MIN_VALUE" translation="fixed_value" translationValue="0" />
          <binding type="control" level="ui" position="2" parameter="MAX_VALUE" translation="fixed_value" translationValue="5" />
        </state>
      </button>
      <labeled-knob x="180" y="40" label="Delay Time" valueType="musical_time" value="10">
        <binding type="effect" level="instrument" position="0" parameter="FX_DELAY_TIME"/>
      </labeled-knob>
      <!-- a bunch of controls we don't care about right now -->
    </tab>
  </ui>
  <effects>
    <effect type="delay" delayTimeFormat="musical_time" delayTime="0.5" stereoOffset="0.01" feedback="0.2" wetLevel="0.5" />
  </effects>
  <groups attack="0.0" decay="1.0" sustain="0.0" release="1.750873208045959" ampVelTrack="0.3">
    <groups attack="0.0" decay="1.0" sustain="0.0" release="1.75" ampVelTrack="0.3">
    <! -- sample definitions are here -->
  </groups>
</DecentSampler>


Woah, there’s a lot here! Let’s walk through it line by line: On line 6, we’ve added a label. This is purely for descriptive purposes. Next, we’ve got a button with two states. Each state has a number of different bindings because, each time the state gets changed, we are not only changing the effect’s settings, but also the settings for the Delay Time knob. Here is what is happening in each of the two states:

In the “On” state, on line 9, we change the delayTimeFormat setting to musical_time for the Delay effect. After that, on line 10, we set the valueType to musical_time for our Delay Time knob (which has an index of 2). Finally, on line 11, we set a value so that when the user switches states, the control doesn’t get set to some random value. In this case, I’ve decided to set it to 10, which corresponds to eighth notes in the magical musical_time numbering system.

Moving on, let’s look at the “Off” state. The first thing we do when tempo sync is switched off is, on line 14, change the delayTimeFormat setting to seconds for the Delay effect. After that, on line 15, we set the valueType to float for our Delay Time knob. Next, on line 16, we set a value so that when the user switches states, the control doesn’t get set to some random value. In this case, I’ve decided to set it to 0.25 seconds, which corresponds to eighth notes at a tempo of 120 BPM. This was an arbitrary decision on my part, but it seems to sound nice. Last but not least, on lines 17 and 18, we set minimum and maximum values for our control. We do this because, when our control was in musical_time mode, its minimum and maximum values were automatically set to 0 and 20, respectively, in order to accommodate the musical time system. We now need to reset them to plausible limits for a delay time in seconds.

This final version of the code can be found here in the file labeled Step 4.

Conclusion

I know this last example seems like a lot of code, and it may, at first, be a bit confusing. The good news is that, for the most part, you can just copy and paste the code above into your projects. Just make sure you change those pesky position values within the bindings, so that the bindings actually point to the controls and effects you want to change. 😉

Enjoy!

– Dave


For Sample Creators: Changing Sample Start, End, and Loop Points using GUI controls

I have added a few things to the sampler in version 1.9.18:

  1. There’s experimental support for FLAC files.
  2. It’s now possible to assign knobs to the start and end point of samples, as well as their loop points.
  3. It’s now possible to dictate the playback engine that is used by a sample library.

Now, the first item above is probably pretty self-explanatory, so this blog post is going to concern itself with items 2 and 3.

How To Manipulate Start, End, and Loop Points

To change a sample’s start, end, loop start, or loop end, simply use the SAMPLE_START, SAMPLE_END, LOOP_START, and LOOP_END parameter names, respectively. Here is some sample code:

<labeled-knob x="445" y="75" width="90" textSize="16" textColor="AA000000" 
                    trackForegroundColor="CC000000" trackBackgroundColor="66999999" 
                    label="Start" type="integer" minValue="0" maxValue="24000" value="0" >
        <binding type="general" level="group" position="0" parameter="SAMPLE_START" />
      </labeled-knob>
      <labeled-knob x="515" y="75" width="90" textSize="16" textColor="AA000000" 
                    trackForegroundColor="CC000000" trackBackgroundColor="66999999" 
                    label="End" type="float" minValue="0.0" maxValue="24000" value="24000" >
        <binding type="general" level="group" position="0" parameter="SAMPLE_END" />
      </labeled-knob>
      <labeled-knob x="585" y="75" width="90" textSize="16" textColor="AA000000" 
                    trackForegroundColor="CC000000" trackBackgroundColor="66999999" 
                    label="Loop Start" type="float" minValue="0.0" maxValue="24000" value="0" >
        <binding type="general" level="group" position="0" parameter="LOOP_START" />
      </labeled-knob>
      <labeled-knob x="655" y="75" width="90" textSize="16" textColor="FF000000"
                    trackForegroundColor="CC000000" trackBackgroundColor="66999999"
                    label="Loop End" type="float" minValue="0" maxValue="24000" value="24000">
        <binding type="general" level="group" position="0" parameter="LOOP_END" />
      </labeled-knob>

In order for this to work properly, the sample playback engine must be in RAM/Memory mode (not disk streaming); otherwise you will get very unpredictable results. In order to enforce this, sample creators should use the new playbackMode attribute, which is explained in the next section…

Playback Engines

As you may know, there are two playback modes: the memory mode stores samples entirely in memory, whereas the disk streaming mode caches only the beginning of each sample and then uses a series of threads to grab data as needed. Because the memory mode already has all of the data it could possibly need in memory, it is far more flexible in terms of what can be accomplished with it, but, since it loads entire samples into memory, it can also use up a lot of RAM for large sample libraries. Currently, users can change their playback mode by going into the preferences screen and choosing a new Sample Engine Mode.

97% of the time, this is exactly what you want: the user choosing their own playback system. The problem is that some sample libraries work much better with one mode than the other. If a sample library has knobs bound to start, end, loopStart, or loopEnd, then being in RAM mode is actually required. Of course, no sample creator wants to have to tell their users, “Oh, by the way, make sure you switch playback modes in the preferences before you use my new sample library.” The solution is the playbackMode attribute. It has three possible values: memory, disk_streaming, and auto (the default). When a value of memory is used, the samples will be played back using the memory mode as though the user had selected that engine in the preferences.
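
For instance, here is a rough sketch of what that might look like. I’m assuming here that the attribute is placed on the <groups> element; check the format documentation for the full list of places it can appear:

<groups playbackMode="memory"> <!-- assumed placement; forces RAM playback for these samples -->
  <group>
    <!-- sample definitions go here -->
  </group>
</groups>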

OK. I think that’s it. Enjoy!

– Dave


Q: Is it possible to extract the samples contained within Decent Sampler instruments?

Occasionally, users will reach out and ask me whether it’s possible to extract the samples contained in a Decent Sampler library. One common use-case for this is users who want to use the samples with a hardware sampler such as an MPC or an SP-404. Anyway, the answer to the question is that in some cases it’s possible and in others it is not.

Me: “Are you sure you really want to do this?”

Before we go into the details of when it’s possible, it’s worth mentioning that a Decent Sampler instrument is usually much more than just a collection of samples: a creator of a library will often have made use of creative layering techniques as well as onboard effects, both of which play a huge part in producing the distinctive sound of a library. This means that the only reliable way of getting the intended sound of a library is to actually use the output from DecentSampler as your sound. This doesn’t mean that you are out of luck if you want to use these sounds with a hardware sampler. You can always load DecentSampler up in your DAW of choice, play a bunch of long notes, and then export the audio out from that session. This will often yield better sound than if you were to just go directly into the sample library and grab the underlying samples.

You: “OK, but I still want to do it!”

So, assuming that you really do want to grab the underlying audio files, here’s what you need to know. First, if the sample library is a commercial release and copy-protected, then you are out of luck: the underlying wave files will all be encrypted and the only thing that can decrypt them is the DecentSampler plugin itself. In such cases, the only thing that you can do is go into your favorite DAW, play any notes you might want to have as audio files, and export the audio out from that DAW session. This is always my preferred way of operating anyway.

If the library is not copy protected, then you should definitely be able to get at the underlying samples. In fact, it’s pretty easy. There are two formats that DecentSampler libraries come in:

  1. If you are presented with a .dsbundle file, this is really just a directory. If you are on Windows or Linux, you should be able to just look inside that directory and find the sample files. If you are on Mac, this file will show up as a “package.” To get access to it, find the file in the Finder, hold down the Control key, and then click on the file: a context menu will pop up, from which you should select Show Package Contents. From this point on, the .dsbundle file will be presented to you just like any other folder…and somewhere in that folder you will find the raw audio files. The exact location will be different for every library.
  2. If you are presented with a .dslibrary file, then you will need to decompress it. In reality, .dslibrary files are simply .zip files, and if you change their extension from .dslibrary to .zip you should be able to decompress the library just as you would any other .zip file. Within the directory structure that gets created, you should be able to find a folder that contains your samples.

That’s it! Hope this was helpful.

– Dave


For Sample Creators: General Advice on Making Sample Libraries Based on Real-World Instruments

Hello Samplists,

Here’s a list of general advice for people making sample libraries based on real-world instruments. All of this advice falls squarely into the “give advice that you yourself need to hear” category. I should also mention that this is an intermediate-level post that assumes that you already know the basics of how to make a sample library.

1. Don’t get hung up on capturing the “authentic” sound of an instrument

Think carefully about your actual goal when making a sampled instrument. When I first started, I approached the work of making sample libraries as though I were an archivist trying to preserve the “authentic” sound of a real-world instrument for posterity. This is a noble goal, but it doesn’t necessarily yield the best sampled instruments. I quickly realized that a much better goal was to make virtual instruments that are fun to play, sound good, and are useful to composers and producers. And yes, sometimes these two goals are at odds with each other.

For example, when you record a piano, you may discover that some notes sound great and others sound lousy. If you are trying to be accurate, you may think you should include those lousy notes in the sample library, because, after all, they also represent how the instrument truly sounds. Of course, this is a valid perspective. I would argue that you should leave them out, or at the very least, provide a version with a curated selection of “good” notes. The vast majority of composers and producers would rather have something that just sounds good immediately than something they have to wrestle a good sound out of.

Another important thing to realize is that samplists are never getting the “true” sound of an instrument anyway. When it comes to acoustic instruments, even something as basic as where you put your microphone can drastically change the way an instrument sounds. In other words, even someone trying to capture the authentic nature of an instrument is still making creative and aesthetic decisions. So given that you’re already inserting yourself into the process at the recording stage, why not continue making aesthetic decisions at every point in the process? If you record a violin sample and it sounds harsh and grating, by all means EQ it until it sounds good! If it sounds too dry, add some reverb.

I’ve made the mistake several times of faithfully re-creating an instrument sound, only to find that, for whatever reason, that sound didn’t actually work as a sample library. It’s been helpful for me to think of the source sound as the departure point on which I’m building a brand new instrument – a virtual instrument – that will most likely be triggered using a piano keyboard.

2. You don’t need nearly the number of samples you think you do. 

When most people are getting started, they see sample library releases from big companies boasting about how many samples are included, and they naturally think that they also need to record every note of an instrument at five different velocities, sometimes even with round robins. I’m here to tell you that 9 times out of 10 this is not necessary. Most users would rather save the hard disk space. For most melodic instruments, you can have a zone every 3 to 6 notes – sometimes even just one zone per octave – and it will sound every bit as good.

As you get more experienced, you will discover which instruments require more samples and which can do without. For example, one exception is when there is any sort of modulation, like tremolo, vibrato, or an LFO filter on a synth. If you try to pitch-bend samples that have these sorts of fast, time-based modulation, you will end up with different rates of modulation depending on which note you play, which may sound terrible. In other words, use your aesthetic judgement.

3. Always pitch-bend down

If you have not recorded a sample for every note, you may need to set up your mapping so that pitch-bending takes place. Often, when you map samples in a sample-mapping product (such as Logic Sampler or Kontakt), the mapper will want to put the root note in the middle of a zone, meaning that if you play a note above the root note, it will pitch-bend the note up. For example:

A screenshot of the Logic sampler's default mapping.

Here’s the thing: samples that are pitch-bent down almost always sound better than samples that are pitch bent up. This is because when you pitch-bend samples down, you remove some of the higher frequencies, which at worst just sounds a bit like a nice low-pass filter. On the other hand, when you pitch-bend a sample up, you are actually adding high frequencies, which can sound weird and “chipmunky.”

The solution is to always map your samples with the root note at the top of the range. To continue our example from above, you would get something that looks like this:

A more desirable sample mapping
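
In Decent Sampler terms, that means giving each zone a rootNote that sits at the top of its loNote–hiNote range, so every key in the zone is either the root itself or gets bent downward. Here is a quick sketch (the file names and note numbers are made up purely for illustration):

<group>
  <!-- Hypothetical file names; each zone's root note is at the top of its range, so playback only ever bends down -->
  <sample path="Samples/Cello_C3.wav" rootNote="48" loNote="44" hiNote="48"/>
  <sample path="Samples/Cello_F3.wav" rootNote="53" loNote="49" hiNote="53"/>
  <sample path="Samples/Cello_Bb3.wav" rootNote="58" loNote="54" hiNote="58"/>
</group>

As a bonus, this also illustrates the sparse mapping from point 2 above: each sample covers a handful of neighboring notes.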

4. How to know when your instrument is done and ready to release

This is easy: your instrument is done when you can’t stop playing with it; when every time you open up the library, you get distracted and start writing music. You are your own first customer, so be honest with yourself. If your own reaction to playing with a library is a tepid “I guess this is good?” then there’s something wrong. Figure out what’s bothering you about the sound and fix it.

Hope this little list has been helpful. As more things occur to me, I will come back and update this document.

All the best,
Dave


For Sample Creators: How to use the Wavefolder and Waveshaper effects

Oscilloscope view of a sawtooth wave form that has been folded back on itself.

Decent Sampler v1.7.3 introduces the new wave folder and wave shaper effects. These can be used to add extra harmonic content to your signals (aka distortion). What both of these effects have in common is that they usually sound much better when applied to a single voice rather than to an entire signal. In Decent Sampler, it is possible to apply effects at the voice level by attaching them to groups: group-level effects are not shared between voices. In other words, each time you hit a key, a new copy of those effects is created just for that voice.

Wave folder

The wave_folder effect allows you to fold a waveform back on itself. This is very useful for generating additional harmonic content. Here is what that looks like in practice:

Oscilloscope view of a sawtooth wave form before wavefolding
Sawtooth waveform before wavefolding
Oscilloscope view of a sawtooth wave form after wavefolding
The same sawtooth wave form after wavefolding

These are the parameters that can be controlled:

  • type (required): Must be wave_folder.
  • drive (optional): The volume of the input signal. Valid range: 1 – 100, where 100 means the signal is amplified by a factor of 100 and 1 means no amplification is applied. Default: 1.
  • threshold (optional): The amplitude above which wave folding should take place. Valid range: 0 – 10.0. Default: 0.25.

Because wave folding tends to sound better when applied on a per-voice basis, it usually makes sense to set up the wave folder at the group level (separate group effects get created for each keypress). Example:

<?xml version="1.0" encoding="UTF-8"?>
<DecentSampler pluginVersion="1">
  <ui>
    <tab>
      <labeled-knob x="180" y="40" label="Drive" type="float" minValue="1" maxValue="100" textColor="FF000000" value="1">
        <binding type="effect" level="group" groupIndex="0" effectIndex="1" parameter="FX_DRIVE" translation="linear" />
      </labeled-knob>
      <labeled-knob x="280" y="40" label="Threshold" type="float" minValue="0" maxValue="1" value="1" textColor="FF000000">
        <binding type="effect" level="group" groupIndex="0" effectIndex="1" parameter="FX_THRESHOLD" translation="linear" />
      </labeled-knob>
    </tab>
  </ui>
  <groups>
    <group>
      <!-- samples go here -->
      <effects>
        <effect type="lowpass_4pl" resonance="1" frequency="500" />
        <effect type="wave_folder" drive="1" threshold="1" />
      </effects>
    </group>
  </groups>
  
</DecentSampler>

Waveshaper

The wave_shaper effect allows you to apply standard tanh waveshaping to your input signal. Here are some examples of what that looks like in practice:

An oscilloscope display of an example of sine wave before wave shaping is applied.
A sine wave before wave shaping is applied
An oscilloscope display of a sine wave after wave shaping is applied.
A sine wave after wave shaping is applied

There are a few parameters which can be controlled:

  • type (required): Must be wave_shaper.
  • drive (optional): The amount of distortion. This really just controls the volume of the input signal. Valid range: 1 to 1000, where 1 means no change to the input signal and 1000 means the amplitude is multiplied by a factor of 1000. Default: 1.
  • driveBoost (optional): Changes the character of the distortion that gets produced. Valid range: 0 – 1.0. Default: 0.
  • outputLevel (optional): The linear output level of the signal. Valid range: 0 – 1.0. Default: 0.1.
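
Roughly speaking, you can think of the wave shaper’s transfer curve like this (a simplified sketch on my part that ignores driveBoost, whose exact behavior isn’t spelled out here):

output = outputLevel * tanh(drive * input)

Because tanh flattens out as its input gets large, raising drive pushes more of the signal into that flattened region, and that soft clipping is what generates the extra harmonics.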

Because wave shaping tends to sound better when applied on a per-voice basis, it usually makes sense to set up the wave shaper at the group level (separate group effects get created for each keypress). Example:

<DecentSampler pluginVersion="1">
  <ui>
    <tab>
      <labeled-knob x="180" y="40" label="Drive" type="float" minValue="0" maxValue="1000" textColor="FF000000" value="0.5473124980926514">
        <binding type="effect" level="group" groupIndex="0" effectIndex="0" parameter="FX_DRIVE" translation="linear"/>
      </labeled-knob>
      <labeled-knob x="280" y="40" label="Boost" type="float" minValue="0" maxValue="1" value="0.328312486410141" textColor="FF000000">
        <binding type="effect" level="group" groupIndex="0" effectIndex="0" parameter="FX_DRIVE_BOOST" translation="linear"/>
      </labeled-knob>
      <labeled-knob x="380" y="40" label="Output Lvl" type="float" minValue="0" maxValue="1" value="0.1" textColor="FF000000">
        <binding type="effect" level="group" groupIndex="0" effectIndex="0" parameter="FX_OUTPUT_LEVEL" translation="linear"/>
      </labeled-knob>
    </tab>
  </ui>
  <groups>
    <group>
      <!-- Samples go here. -->
      <effects>
        <effect type="wave_shaper" drive="0.5473124980926514" driveBoost="0.328312486410141" outputLevel="0.1"/>
      </effects>
    </group>
  </groups>
</DecentSampler>

Examples

The examples from this blog post can be downloaded here.


For Sample Creators: How to create buttons in your sample libraries

A screenshot of Decent Sampler showing a button.

Well, it finally happened: version 1.7.0 introduces the concept of buttons to the world of Decent Sampler. To make a button in your user interface, simply use a <button> element:

<button x="10" y="40"  width="120" height="30" style="image" value="0" mainImage="samples/ButtonMainImage.png" hoverImage="samples/ButtonHoverImage.png" clickImage="samples/ButtonSelectedImage.png">
    <!-- Your button states go here. These are defined using the <state> element. -->
</button>

There are two types of buttons: text and image. The value of the style attribute determines which kind of button gets created.

Text Buttons

Text buttons are pretty basic. They look like this:

An example of a text button

Here is the code for this button:

<button x="350" y="70"  width="120" height="40" style="text">
  <state name="English">
    <!-- Bindings go here -->
  </state>
  <state name="French">
    <!-- Bindings go here -->
  </state>
</button>

As you can see, the actual text that gets displayed is defined in the name attribute of each <state> element.

Image Buttons

Now, let’s look at the image button. Here, you can use any image you want (even these ugly flag buttons I made in about 20 seconds in Photoshop):

An example of an image button

The code for this button is as follows:

<button x="350" y="70"  width="70" height="50" style="image" value="0" >
    <state name="English" mainImage="samples/EFlag_MainImage.png" hoverImage="samples/EFlag_HoverImage.png" clickImage="samples/EFlag_SelectedImage.png">  
    </state>
    <state name="French" mainImage="Samples/FFlag_MainImage.png" hoverImage="Samples/FFlag_HoverImage.png" clickImage="Samples/FFlag_SelectedImage.png">
    </state>
</button>

As you can see, each <state> has three image parameters. Only the first one, mainImage, is required:

  • mainImage: The path of the main image to display for this button. This can also be set at the state level so that it only applies to a specific state. (required)
  • hoverImage: The path of the image to display when the user hovers their mouse over this button. This can also be set at the state level so that it only applies to a specific state. (optional)
  • clickImage: The path of the image to display when the user clicks down on this button. This can also be set at the state level so that it only applies to a specific state. (optional)

Bindings

Of course, if you want your buttons to actually do something, you’ll need to put <binding> elements underneath the <state> elements:

<button x="350" y="70"  width="120" height="40" style="text">
  <state name="English">
    <binding type="general" level="group" position="0" parameter="ENABLED" translation="fixed_value" translationValue="true" />
    <binding type="general" level="group" position="1" parameter="ENABLED" translation="fixed_value" translationValue="false" />
  </state>
  <state name="French">
    <binding type="general" level="group" position="1" parameter="ENABLED" translation="fixed_value" translationValue="true" />
    <binding type="general" level="group" position="0" parameter="ENABLED" translation="fixed_value" translationValue="false" />
  </state>
</button>

As you can see, the example above uses a button to switch between two groups. You’ll note the liberal use of the fixed_value translation mode above. This means that when any of these options is selected, a fixed, predetermined value is used as the value of that binding.

Conclusion & Examples

OK. I think that’s it. You can download the examples used in this blog post here. If you find any bugs having to do with buttons, make sure you report them here.


For Sample Creators: How to use Convolution in your Decent Sampler presets

A spectrogram of a convolution reverb impulse response.

Version 1.6.12 of Decent Sampler brings a Convolution effect to the Decent Sampler platform. If you don’t know what Convolution is, you can see a great explanation here. The most common use case for convolution is in creating reverb, and that is the use case that will be demonstrated here.

How to add the Convolution effect to a preset

The convolution effect is invoked in much the same way that any other effect is defined:

<effects>
  <effect type="convolution" mix="0.5" irFile="Samples/Hall_IR.wav" />
</effects>

As you can see, other than the required type attribute, there are two other attributes:

  • The mix attribute controls how much of the convolved signal is present in the output. A value of 0 is completely dry, whereas a value of 1 is completely wet, containing only the convolved signal.
  • The irFile attribute specifies the file that should be used as an impulse response or IR.

How to control the convolution effect using UI controls

Two of the convolution effect’s attributes can be controlled using UI controls. The mix level can be controlled by a knob as follows:

<labeled-knob x="680" y="40" label="Conv Mix" type="float" minValue="0" maxValue="1" value="0.5" textColor="FF000000" >
  <binding type="effect" level="instrument" position="0" parameter="FX_MIX" translation="linear"  />
</labeled-knob>

The IR file can be changed dynamically using a menu control:

<label text="IR File" x="480" y="40" width="120" height="30"></label>
<menu x="580" y="40"  width="120" height="30" requireSelection="true" placeholderText="Choose..." value="1">
  <option name="long hall.wav">
    <binding type="effect" level="instrument" position="1" parameter="FX_IR_FILE" translation="fixed_value" translationValue="Samples/long hall.wav" />
  </option>
  <option name="ABLCR Chord Vocal.aif">
   <binding type="effect" level="instrument" position="1" parameter="FX_IR_FILE" translation="fixed_value" translationValue="Samples/ABLCR Chord Vocal.aif" />
  </option>
  <option name="Amp Spring High.aif">
    <binding type="effect" level="instrument" position="1" parameter="FX_IR_FILE" translation="fixed_value" translationValue="Samples/Amp Spring High.aif" />
  </option>
  <option name="Swede Plate 3.5s.aif">
    <binding type="effect" level="instrument" position="1" parameter="FX_IR_FILE" translation="fixed_value" translationValue="Samples/Swede Plate 3.5s.aif" />
  </option>
</menu>

Examples

An example Decent Sampler preset that uses IR reverb can be downloaded here. (You’ll want to check out the example-003-how-to-use-convolution-reverb folder.)

Performance considerations

While convolution is a powerful tool that can go a long way towards shaping a sample library’s sound, it can also be quite costly in terms of CPU usage. Sample creators would do well to create versions both with and without the convolution effect and compare the relative CPU usage of the two before opting to use convolution.


How to add LFOs and extra envelopes to your Decent Sampler instruments

As of version 1.5.24 of Decent Sampler, it is now possible to make use of LFOs and ADSR envelopes in your Decent Sampler sample libraries. In this blog post, we’ll go through how to set these up.

The <modulators> section

This is a new section that lives below the top-level <DecentSampler> element and it is where all modulators for the entire sample library live:

<DecentSampler>
    <modulators>
        <!-- Your modulators go here. -->
    </modulators>
</DecentSampler>

The <lfo> element

Underneath the <modulators> section, you can have any number of different LFOs, which are defined using an <lfo> element, for example:

<modulators>
  <lfo shape="sine" frequency="2" modAmount="1.0"></lfo>
</modulators>

This element has the following attributes:

  • shape: controls the oscillator shape. Possible values are sine, square, saw.
  • frequency: The speed of the LFO in cycles per second. For example, a value of 10 would mean that the waveform repeats ten times per second.
  • modAmount: This value between 0 and 1 controls how much the modulation affects the things it is targeting. In conventional terms, this is like the modulation depth. Default value: 1.0.
  • scope: Whether this LFO exists once for all notes or whether each keypress gets its own LFO. Possible values are global (the default for LFOs) and voice. If voice is chosen, a new LFO is started each time a new note is pressed (see the sketch below).
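
For instance, a per-voice LFO might look something like this (just a sketch; the shape, rate, and whatever bindings you hang off of it are up to you):

<lfo shape="sine" frequency="5" modAmount="0.5" scope="voice">
    <!-- Bindings go here; see the sections below for what to target -->
</lfo>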

The <envelope> element

In addition to LFOs, you can also have additional ADSR envelopes. These can be useful for controlling group-level effects, such as low-pass filters. If this is what you wish to achieve, make sure you check out the section on group-level effects below.

To create an envelope, use an <envelope> element:
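
<modulators>
    <envelope attack="2" decay="0" sustain="1" release="0.5" modAmount="1.0">
        <!-- Bindings go here -->
    </envelope>
</modulators>

(The attack, decay, sustain, and release values above are just placeholders; this is essentially the same envelope that appears in the filter example later in this post.)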

This element has the following attributes:

  • attack: The length in seconds of the attack portion of the ADSR envelope
  • decay: The length in seconds of the decay portion of the ADSR envelope
  • sustain: The height of the sustain portion of the ADSR envelope. This is expressed as a value between 0 and 1.
  • release: The length in seconds of the release portion of the ADSR envelope
  • modAmount: This value between 0 and 1 controls how much the modulation affects the things it is targeting. In conventional terms, this is like the modulation depth. Default value: 1.0.
  • scope: Whether this envelope exists once for all notes or whether each keypress gets its own envelope. Possible values are global and voice (the default for envelopes). If voice is chosen, a new envelope is started each time a new note is pressed.

How to use bindings in conjunction with modulators

In order to actually have your LFOs and envelopes do anything, you need to have bindings under them. If you are not familiar with the concept of bindings, you may want to read this section of the File Format Reference Guide and then return here. Bindings tell the engine which parameters the LFO should be affecting and how. Here is an example:

<modulators>
    <lfo shape="sine" frequency="2" modAmount="1.0">
        <!-- This binding modifies the frequency of a low-pass filter  -->
        <binding type="effect" level="instrument" effectIndex="0" parameter="FX_FILTER_FREQUENCY" modBehavior="add" translation="linear" translationOutputMin="0" translationOutputMax="2000.0"  />
    </lfo>
</modulators>

This is Example 1 from the example pack you can download here.

How modulator bindings differ from knob or MIDI bindings

If you’re already familiar with the concept of bindings, then you’ll want to read this section especially carefully, as you may notice a few differences between bindings as they are used by knobs and the ones used by modulators. Specifically, when you move a UI control that has a binding attached, the engine actually goes out and changes the value of the parameter that is targeted by that binding. For example, if you have a knob that controls a lowpass filter’s cutoff frequency, moving that knob will cause the actual frequency of that filter to change. In other words, the changes that the knob is making to the underlying sample library are permanent. The same is also true for bindings associated with MIDI continuous controllers.

Modulators, on the other hand, do not work this way. If a modulator (such as an LFO) changes its value, the engine looks at the bindings associated with that modulator and then makes a list of temporary changes to the underlying data. When it comes time to render out the effect, it consults both the permanent value and the temporary modulation values. As a result of this difference in the way bindings are handled, only some parameters are “modulatable.” At the time of writing, the following parameters are modulatable:

  • All gain effect parameters
  • All delay effect parameters
  • All phaser effect parameters
  • All filter effect parameters
  • All reverb effect parameters
  • All chorus effect parameters
  • Group Volume
  • Global Volume
  • Group Pan
  • Global Pan
  • Group Tuning
  • Global Tuning

The new modBehavior parameter for bindings

Another new feature of bindings is the addition of the modBehavior attribute. This controls exactly what effect a binding actually has on the parameter it is targeting. There are three possible values for this:

  • set: This means that the value that is generated by the binding becomes the new value for the parameter being targeted. NOTE: set is the default value and this is the way that knobs and MIDI CC bindings work by default. That being said, it’s usually not the correct choice for modulations such as LFOs and secondary ADSR envelopes.
  • add: The value generated by the binding gets added to the current value of the parameter being targeted.
  • multiply: The value generated by the binding gets multiplied with the current value of the parameter being targeted.

In order to understand what any of this means, let’s look at the following example:

<effects>
    <effect type="lowpass" frequency="60.0"/>
</effects>
<modulators>
    <lfo shape="sine" frequency="2" modAmount="1.0">
        <binding type="effect" level="instrument" position="0" parameter="FX_FILTER_FREQUENCY" translation="linear" translationOutputMin="0" translationOutputMax="2000.0" modBehavior="add" />
    </lfo>
</modulators>

The <lfo> tag above sets up an LFO with a frequency of 2 Hz. It has just one binding, which targets the first global effect, which happens to be a low-pass filter with a cutoff frequency of 60Hz. Every binding can be seen as a pipe that takes an input value, translates that value in some fashion, and then sets a parameter somewhere else in the engine. Here are the steps for this setup:

  1. By default, an LFO generates values between -1.0 and 1.0.
  2. These values then get passed to the binding, which is setup to do a linear translation. This linear translation has a minimum of 0 and maximum of 2000, which means that when the LFO is at its lowest point (-1.0) the binding will generate the number 0 (the minimum) and when the LFO is at its highest point (1.0), the binding will generate the number 2000 (the maximum).
  3. Because the modBehavior value is add, this new value that is generated by the binding will be added to the original cutoff value of 60Hz. This means that when the LFO is at its lowest point, the filter cutoff will be 60Hz (i.e. 60 + 0) and when it’s at its highest point, the filter cutoff will be 2060Hz (i.e. 60 + 2000).

LFO Scope: Global or Voice-level

By default, all modulators will be created at the global level. This means that there will be exactly one modulator that is shared by all voices. In many situations, such as an LFO modulating a single low-pass filter which is shared by all of the voices, this is often what we want.

But there are other situations where we don’t want our modulator to be global. For example, what if we want to have an envelope that targets a low-pass filter? Let’s say that when we press down on a key, we want that low-pass filter to open up slowly until, 2 seconds later, it reaches its peak. In theory, we could set up something like this:

<effects>
    <effect type="lowpass" frequency="60.0"/>
</effects>
<modulators>
    <envelope attack="2" decay="0" sustain="1" release="0.5" modAmount="1.0">
        <binding type="effect" level="instrument" position="0" parameter="FX_FILTER_FREQUENCY" translation="linear" translationOutputMin="0" translationOutputMax="4000.0" modBehavior="add" />
    </envelope>
</modulators>

But there’s a problem with this: let’s imagine that we hit a note, and then one second into that first note, we hit another note. If we have just a single envelope, that envelope will be halfway through its attack phase when the second key is pressed. Depending on how the envelope is configured, it will either retrigger because of the new keypress (which would be the wrong behavior for the first note, which is still being held) or keep going, in which case the second note will start halfway through its attack phase.

To solve this problem, in such cases, we need to tell the engine to create a separate modulator for every keypress. To do this we add a scope="voice" attribute to the modulator as follows:

<envelope attack="2" decay="0" sustain="1" release="0.5" modAmount="1.0" scope="voice">

But wait, there’s another problem! In the scenario above, even if we have a separate modulator for every keypress, those voices are still all sharing a single global low-pass filter. If you’ve got several modulators pinned to the same global effect, they are going to be setting and resetting that effect’s parameters to competing values; the engine is going to be at war with itself over which envelope’s values are the correct setting for the filter’s cutoff. In other words, we need separate effects for each keypress. These can be added by specifying…

…effects at the group level

Adding effects that only apply to a specific group is easy. All you need to do is create an <effects> element that lives underneath the <group> element for the group you want to affect. For example:

<groups>
    <group>
        <!-- A sample -->
        <sample path="Samples/Volca Keys Poly-V127-60-C3.wav" loNote="10" hiNote="83" rootNote="48"/>
        <effects>
            <!-- These effects will only apply to this group -->
            <effect type="lowpass" frequency="22000.0"/>
        </effects>
    </group>
</groups>

Group-level effects are initialized every time a note is started and destroyed every time a note is stopped. If you play two notes simultaneously, two instances of this effect will be created, and these will be independent of each other. As a result, they use more CPU than global effects.

NOTE: Only certain effects will work as group-level effects: lowpass filter, hipass filter, bandpass filter, gain, and chorus. Delay and reverb cannot work properly as they will be deleted before their tail peters out.

How to have knobs control group-level effects

Just as it is possible to have knobs that control instrument-level effects, it is also possible to have them control group-level effects. In order to specify a group-level effect, set the binding’s level to group, and use groupIndex and effectIndex to specify which specific effect needs to be controlled. Here is an example of a knob that controls a group-level low-pass filter:

<labeled-knob x="655" y="75" label="Tone" type="float" minValue="0" maxValue="1" value="1">
    <binding type="effect" level="group" groupIndex="0" effectIndex="0" parameter="FX_FILTER_FREQUENCY" translation="table" translationTable="0,33;0.3,150;0.4,450;0.5,1100;0.7,4100;0.9,11000;1.0001,22000"/>
</labeled-knob>

Putting it all together: an envelope that controls a low-pass filter

So, if we put all of this into a real-world example, we can imagine an ADSR envelope that controls a low-pass filter, as follows:

<?xml version="1.0" encoding="UTF-8"?>
<DecentSampler minVersion="1.6.0">
  <groups>
    <group ampVelTrack="0.0">
      <sample path="Samples/Volca Keys Poly-V127-60-C3.wav" loNote="10" hiNote="83" rootNote="48"/>
      <effects>
        <effect type="lowpass" frequency="22000.0"/>
      </effects>
    </group>
  </groups>
  <modulators>
    <envelope attack="1" decay="0.5" sustain="0" release="0.5" modAmount="1" scope="voice">
      <binding type="effect" level="group" groupIndex="0" effectIndex="0" parameter="FX_FILTER_FREQUENCY" modBehavior="set" translation="table" translationTable="0,33;0.3,150;0.4,450;0.5,1100;0.7,4100;0.9,11000;1.0001,22000"  />
    </envelope>
  </modulators>
</DecentSampler>

This is Example 2 from the example pack.

How to make knobs control modulator parameters

Both types of modulators – <lfo> and <envelope> – have a modAmount parameter. This is a value between 0 and 1 that dictates how much the modulation affects the things it is targeting. In conventional terms, this is like the modulation depth. To create a knob that controls LFO depth, you make a knob whose binding targets the modulator’s modAmount. Here’s how you would do this:

<labeled-knob x="585" y="75" label="LFO Depth" type="float" minValue="0.0" maxValue="1" value="1" >
    <binding level="instrument" type="modulator" position="0" parameter="MOD_AMOUNT" />
</labeled-knob>

Note the binding type value of modulator and a position of 0. In other words, we are targeting the first modulator in the modulator block. The parameter we are targeting is MOD_AMOUNT.

It is similarly possible to target LFO rate using a parameter value of FREQUENCY:

<labeled-knob x="655" y="75" label="LFO Rate" type="float" minValue="0.0" maxValue="10" value="1">
   <binding level="instrument" type="modulator" position="0" parameter="FREQUENCY" />
</labeled-knob>

You can see a full example of a patch that controls rate and depth in Example 3 of the example pack.

Conclusion

That’s pretty much it. We look forward to seeing what you do with LFOs and envelopes. Make sure you download the example pack from here.

The example pack contains the following examples:

  • Example 1 shows how to make an LFO that controls a global lowpass filter
  • Example 2 shows how to have a voice-level envelope which controls a group-level filter
  • Example 3 shows how to have knobs control LFO parameters
  • Example 4 shows how to have knobs control envelope parameters
  • Example 5 shows how to modulate group volumes
  • Example 6 shows how to modulate group panning
  • Example 7 shows how to modulate group tuning

Decent Sampler now has support for legato samples and voice muting

The latest version of Decent Sampler (1.0.11) has experimental support for legato. There is now a full video tutorial explaining how to set up true legato sample libraries. Check it out:

There is also a text version of this true legato tutorial, which can be found as part of the Decent Sampler format documentation here.

Have fun!

– Dave