DMDX Help.
Sound.
The VM extensions of DMTG have been dropped in favor of an easier to use system. Well, the <wav> keyword was easier, but then people wanted this and then that and before we knew it it was just as complex as VM ever was. Build a tool that a fool can use and only a fool will want to use it. And then people also wanted to play enormous wave files for instructions, so the <StreamingAudio> keyword was added (see the Digital Video section for details), which once again is a very simple system. And then they either wanted to record a subject's vocal response or didn't want to have to set up an electronic VOX for naming time measurement, so the input devices RecordVocal and DigitalVOX were added, see Audio Input for details (if you use RecordVocal you're probably going to want to set up some overrun protection for it).
And then they wanted to have sound continually playing in the background, so the <BackgroundSound> keyword was added. If the playback needs to be abortable <AbortItemKeyName> can be used; if the playback needs to be contingent upon a subject's response (or some other condition I can't currently imagine) <AbortItemExpression> can be used (see the jobstatus example).
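A sketch of the background sound idea (the parameter form here is my assumption -- check the <BackgroundSound> keyword's own section for the authoritative syntax -- and "ambience" is a made-up wave file name):
f30 <id keyboard> <BackgroundSound "ambience">
0 "first trial" / * "target" %200;
Here "ambience" would play throughout while the items themselves run as usual.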
As a general note that I can't think of anywhere else to stick, once sound is used the bells that DMDX would normally sound are suppressed, the theory being that you don't want subjects blasted with the Windows chime at the end of an item file. If you need a bell that is audible to the experimenter and you are presenting acoustic stimuli, use the remote monitor; as of version 1.2.00 of DMDX all bells are sent to the remote monitor if it is being used.
As another general note, with the advent of the freesync modifier to the video mode, if the timing of the audio is paramount then using freesync will allow the scheduling of audio to the millisecond instead of the usual tick, without necessarily purchasing an AMD FreeSync or NVidia G-Sync display.
A sound spec has become another (almost) ordinary frame, akin to the <bmp> frame (the G switch replacement, or <graphic> if you prefer). Originally the value in the <wav> keyword specified what channel (left = 0, right = 1 or both = 2) to play the file in; as of version 0.26 it specifies the actual pan value used in DirectSound if it is not one of the original values 0, 1 or 2 (see below). A wave file begins playing at its start and ends at its end unless overridden with the <SetStartCue> and <SetEndCue> keywords.
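For instance, using the original channel values ("wavefile" here being whatever wave file you want to play):
0 "before" / <wav 1> "wavefile" / "after";
Here "wavefile" plays in the right channel only; <wav 0> would play it in the left channel and <wav 2> in both.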
NOTE: If very precise synchronization of wave files between channels is required then the use of stereo wave files is recommended (don't forget to use <wav 2> instead of just <wav>). Playing one file in the left channel and another file simultaneously in the right channel does not guarantee that sample 0 of both files is played simultaneously; they are still separate requests to DirectSound and it may introduce some delay (maybe milliseconds) between the onset of the two files (I haven't tried to check it).
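To make the point concrete, rather than attempting simultaneous onsets with two separate mono files ("leftfile" and "rightfile" are made-up names):
0 "cue" / <wav 0> "leftfile" <svp start> %0 / <wav 1> "rightfile" / "done";
mix the two into a single stereo file beforehand and play it in both channels, where sample alignment is guaranteed:
0 "cue" / <wav 2> "stereofile" / "done";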
The duration of a <wav> frame is special -- there are two basic conditions that determine when the wave file is played, determined by whether there is an explicit frame duration specified or not. In the first condition, where there is no explicit specification of the frame's duration, its duration becomes the duration of the .WAV file (or if the <SetVisualProbe> switch is used the duration is set as the time between the start and the <SetVisualProbe>'s cursor position). The wave file commences playing at the scheduled time of that frame. In the second condition, where there is an explicit frame duration (usually %0), the beginning of the wave file is offset backwards in time by the position of the <SetVisualProbe> (if no <SetVisualProbe> has been specified the probe is the end of the wave file) such that the sample at the <SetVisualProbe> position is played at the scheduled time of the frame -- this allows the following frame to be presented simultaneously with a user specified part of the wave file. To illustrate:
0 "before" / <wav> "wavefile" / "after";
Here "before" will be displayed for the default frame duration, then "wavefile" will commence ("before" will stay on the screen) and then finish playing, and then "after" will be displayed.
0 "before" / <wav> "wavefile" %0 / "after";
Assuming "wavefile"'s duration is less than the default frame duration, "before" will be displayed for the default frame duration and sometime during its presentation "wavefile" will begin playing. As "before" is replaced by "after", "wavefile" will reach its end. If "wavefile"'s duration is longer than the default frame duration "wavefile" would commence playing first (all other frames having been delayed by the scheduler to allow for this).
0 "before" / <wav> "wavefile" <svp middle> / "after";
Assuming there is a cursor in "wavefile" with the name "middle" and that it is in the middle of that file, "before" will be displayed for the default frame duration, then "wavefile" will commence ("before" will stay on the screen) and when the middle is reached "after" will be displayed and "wavefile" will continue playing till its end.
0 "before" / <wav> "wavefile" <svp middle> %0 / "after";
Assuming there is a cursor in "wavefile" with the name "middle", that it is in the middle of that file and that half of the duration of "wavefile" is less than the default frame duration, "before" will be displayed for the default frame duration and sometime during its presentation "wavefile" will begin playing. As "before" is replaced by "after", "wavefile" will reach its middle. If half of "wavefile"'s duration is longer than the default frame duration "wavefile" would commence playing first (all other frames having been delayed by the scheduler to allow for this).
0 "before" / <wav> "wavefile" <svp start> %0 / "after";
Here "before" will be displayed for the default frame duration. As "before" is replaced by "after", "wavefile" will commence playing. "start" is a special cursor name -- it is the start of the file, and a cursor with the name "start" need not exist. Note that when using the start for a visual probe the following is almost functionally equivalent:
0 "before" / <wav> "wavefile" <svp start> / "after";
It's not quite equivalent because the duration of the sound frame will be rounded up to 1 tick.
It's also possible to get an audio file to present simultaneously with a display with the following syntax:
0 "before" / "target" <msfd 500> / <wav> "wavefile" <svp start 500> %0 / "after";
Here "before" will be displayed for the default frame duration and then "target" will be displayed and simultaneously "wavefile" will begin playing -- assuming your wave file is longer than 500 milliseconds of course. As "wavefile" continues playing, 500 ms after "target" was presented "after" will be displayed.
0 "before" / <wav> "wavefile" <svp start> %15 / "after";
Here "before" will be displayed for the default frame duration. After the default frame duration "wavefile" will commence playing. "before" will stay on the screen for another 15 ticks and then be replaced by "after". Actually, I came up with a use for this in the following:
0 "before" / <wav> "wavefile1" <svp start> %15 /
<wav> "wavefile2" <svp start> %15 / <wav> "wavefile3" <svp start> %15 / "after";
Here "before" will be displayed for the default frame duration. After the default frame duration "wavefile1" will commence playing. "before" will stay on the screen for another 15 ticks and then "wavefile2" will commence playing. "before" will continue to stay on the screen for another 15 ticks and then "wavefile3" will commence playing. "before" will stay on the screen for 15 more ticks and then be replaced by "after".
Portions of a wavefile can be played with the <SetStartCue> and <SetEndCue> keywords, for example:
0 "before" / <wav> "wavefile" <ssc first> <sec second> / "after";
Here the portion of "wavefile" between the cues first and second will be played. By default a frame is assumed to have a start cue of start and an end cue of end; they can be explicitly declared however, so the following are all functionally equivalent:
0 "before" / <wav> "wavefile" / "after";
0 "before" / <wav> "wavefile" <ssc start> / "after";
0 "before" / <wav> "wavefile" <sec end> / "after";
0 "before" / <wav> "wavefile" <ssc start> <sec end> / "after";
When start and end cues are combined with a visual probe the cues start and end have different meanings. When used in a start or end cue they refer to the start or end of the wavefile; when used with a visual probe they refer to the first sample that will be played or the last sample that will be played (which is affected by <SetStartCue> and <SetEndCue>).
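For example, assuming "wavefile" contains a cue named first:
0 "before" / <wav> "wavefile" <ssc first> <svp start> %0 / "after";
Here playback begins at the first cue, and because <svp start> now refers to the first sample actually played (the position of first, not sample 0 of the file) "after" appears simultaneously with the onset of the truncated playback.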
As a general note, versions of DMDX prior to 0.25 had trouble playing the exact same waveform (ie, the same start and end cues, the same visual probe and the same channel) while it is already playing that waveform; this includes immediately after it has been played, as the request to begin playing the sound occurs while it is already playing -- and DirectSound can't do this (it can, but I have to set up special copies of the sound buffer and there is no code to do this prior to 0.25). If there was a sufficiently large gap between them then it was OK, usually a few ticks or whatever the latency for playing a buffer is. If it's played in a different channel or is different in any other way (different cue names even though they may all be in exactly the same position in the file) then even though the samples may be the same the sound becomes a separate DirectSound buffer and any number of these can be played concurrently.
The comma frame delimiter should be avoided in combination with <wav> frames as a <wav> frame does not generate any display and as a result unexpected things happen (as opposed to the initial implementation in 0.08b that used the comma).
Instead, if you need to have a display simultaneously presented with a wave file one might use %0 for the audio frame's duration; however, seeing as you need <svp start> to synchronize the audio with the following frame and that effectively sets the frame's duration to zero anyway, %0 is therefore superfluous. Then you can have the visual frame follow:
0 "before" / <wav> "wavefile" <svp start> / "after";
Here "before" will be displayed for the default frame duration and then as "wavefile" is played "after" will be displayed. The really tricky thing would be if you required a visual frame's duration to be controlled by an audio file's duration, and here I suspect you run into a wall as I can't think of a way to do it (beyond hard coding each wave file's duration as a visual frame's duration anyway), as use of <svp start 16>, say, to move the start of an audio frame to coincide with a previous one tick visual frame will override the audio frame's duration. The closest I can come is having the visual frame up for a tick longer than the audio component:
0 "before" %1 / <wav> "wavefile" / "after";
Here "before" will be displayed for a tick and then after "wavefile" is finished playing "after" will be displayed.
In order to benchmark the latency value set up beforehand in TimeDX I used an oscilloscope and a PIO12 with the following item file to repeatedly play a pulse simultaneously with the falling edge of a signal on the PIO12:
f70 <id keyboard> <id pio12> <VideoMode 640 480 480 8 0>
0 "Experiment ready" ;
1 <aw 50,20> <wav 0> "50ms" <svp start> /
<aw 50,20> <op 0> <px 250> %10 "50ms on" /
<aw 50,20> <op 255> %10 "50ms off" /;
2 <bu -1> "continuous" <cr>;
50ms.wav has 50 ms of high intensity sound in it. I set the 'scope to trigger on the positive edge of the PIO-12 output and with the timebase set to 20 ms per division I can then watch for the falling edge and note where the sound commences. The on and off are reversed logic because I have LEDs hung on those outputs that turn on when the output is low.
As of version 0.26 of DMDX precise control over the panning and volume of a buffer is offered. Two new keywords are available to exercise this control, <pan> and <volume>. From the DirectSound documentation (edited by me):
The volume is specified in hundredths of decibels (dB). Allowable values are between 0 (no attenuation) and -10000 (silence). The value 0 represents the original, unadjusted volume of the stream. The value -10000 indicates an audio volume attenuated by 100 dB, which, for all practical purposes, is silence. Currently DirectSound does not support amplification.
The pan value is measured in hundredths of a decibel (dB), in the range of -10000 to 10000. The value -10000 means the right channel is attenuated by 100 dB. The value 10000 means the left channel is attenuated by 100 dB. The neutral value is zero. This value of 0 for pan means that both channels are at full volume (they are attenuated by 0 decibels). At any setting other than 0, one of the channels is at full volume and the other is attenuated.
A pan of -2173 means that the left channel is at full volume and the right channel is attenuated by 21.73 dB. Similarly, a pan of 870 means that the left channel is attenuated by 8.7 dB and the right channel is at full volume. A pan of -10000 means that the right channel is silent and the sound is all the way to the left, while a pan of 10000 means that the left channel is silent and the sound is all the way to the right.
The pan control acts cumulatively with the volume control.
To maintain compatibility with old item files the value in the <wav> keyword is still interpreted as it used to be; however it can also contain the new raw pan values as long as they aren't 0, 1 or 2 (in which case they will be interpreted as -10000, 10000 and 0 respectively). The volume of a wave is 0 by default and its pan is -10000.
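As an illustrative sketch (the attenuation values are arbitrary, and I'm assuming <pan> and <volume> are attached to the sound frame itself), the following plays "wavefile" attenuated by 6 dB overall with the right channel attenuated a further 21.73 dB:
0 "before" / <wav> "wavefile" <volume -600> <pan -2173> / "after";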
For systems where sound latency is of paramount concern (versus retrace detection) the priority of the sound thread can be raised (see TimeDX / Advanced / Task Priorities / Help), leading to a much more consistent latency -- it can fluctuate wildly on some machines with older sound cards.
DMDX Index.