You need some way of making a self extracting zip file execute a batch file; appropriating a self installer works well enough -- I used WinZip with its Self Extractor package, and WinRAR with its SFX option (which is in fact freeware) works as well. Originally you couldn't expect miracles in timing accuracy either, as we were using DMDX's EZ mode where there's no synchronization with the raster; nowadays the later -auto option actually allows tracking the raster when remote testing. For this method you'll also need an SMTP server (later methods like the HTTP POST don't) that doesn't require all sorts of authentication, unless you're happy sticking passwords into batch files (which I really don't recommend). I recently found SMTP2GO offering these kinds of services, so you might investigate them if using the later methods isn't possible.
So first off, the script. WinZip will have created a temporary directory and extracted all the files there; it'll be the current directory, so as long as there's no path information on anything DMDX should be able to find images and so on. The only thing that's different here is using the desktop's video mode with <vm desktop>, but that's normal for EZ mode. The emailer I'm using is my own custom code and it's not super (it certainly won't deal with SSL email server connections), but there are a number of other programs out there if you need them. If you want to use mine it's in DMDXUTILS.ZIP. Then the batch file to run them (you've got to change the bits in red):
start /wait "DMDX" dmdx.exe -ez -buffer 2 -run eztest.rtfTo build your remote testing setup you'll want to drag your item file along with other image files (if you use them) to WinZip to make the initial .ZIP file as well as the files I've mentioned (the batch file along with DMDX.EXE and sendmail.exe), they all go in. I guess if you really cared you could use a subdirectory for images and sound files but I'm not. Next you'll tell WinZIp to make a self extracting archive out of it and once you've bought the Self Extractor extension you can tell it to make an archive for software installation. I included an optional message to the users when the extractor is first extracting telling them that DMDX is sensitive to other applications popping up windows and for them to log out of IM sessions and to otherwise disable anything that might pop up a window as DMDX is running. When it asks for the name of the command to run you tell it the name of your batch file, here it's eztest.bat, but you'll want to put a .\ in front of it as they recommend (so .\eztest.bat). And then a few more prompts and you'll have your .EXE that you can stick on a web page and tell users to point their browsers at. They'll have to actually run the thing and answer all the security nags but it doesn't have to run as administrator (for Vista should anyone be using it) and should be fairly straight forward. Hopefully you get an email with the subject "ez testing" with the .AZK for it's body.
An extension I recently made to the batch file is to have it send the diagnostics if the run failed, using a couple of IF EXIST statements. That makes it much easier to figure out what went wrong when things fail (you've got to change the bits in red):
start /wait "DMDX" dmdx.exe -ez -buffers 2 -run eztest.rtf

And then there's the ultimate emailer script that actually tries different ports if one is blocked (now that sendemail has been expanded with a -p switch for the port number) (you've got to change the bits in red):
start /wait "DMDX" dmdx.exe -ez -buffers 2 -run eztest.rtf
Wider Internet Testing using an HTTP POST.
So with another study here (2009) needing to run on the wider Internet, as opposed to just across campus as the earlier study had, I set out to test how widely blocked the alternative mail ports are across the globe. It turns out they're widely blocked, which pretty much rules out using SMTP (email) across anything other than a relatively controlled network. While I could have tried lots of different ports, and maybe I would have found one that hadn't ever been used for SMTP before, I suspect they all would have met with less than 100% success -- not to mention a lot of tester fatigue. Instead I wrote a program to POST the results over HTTP on port 80 as if it were a browser filling in a form, the form going to a script on one of my servers that then sent email on to the researchers. Kind of roundabout, I admit, however it worked back in 2009 and the only problem with it then revolved around personal firewalls needing to be told to allow the program posting the results to do so. Most users savvy enough to have a personal firewall are fairly used to this, and those that aren't savvy are used to just clicking OK anyway, so it's a moot problem -- although it would appear that some modern (2020) internet security programs are just blocking outgoing communications outright these days, so having a peek at the reworked reliability section might be a good idea.
The larger issue for anyone else trying to do a study like this is the script on a server that sends the form results as an email. While scripts that email things are fairly common, it is something that's going to require someone with significant technical chops to set up, plus a server to run it on, and I don't recommend using our server: it can only semi-reliably send email to accounts on campus, and even then, since about 2015 I've seen the campus email spam appliance snarf email from psy1 to on campus accounts from time to time (because we don't use fancy mail server authentication) and I've had to yank the sysops' chain to get it to stop snarfing our stuff -- not to mention the fact that while it does indeed send mail off campus, this could end at any time. Of course, you don't actually have to email the results; you could just write them to a file, which is what the third method listed below does.
The poster program I wrote is in the second communications test https://psy1.psych.arizona.edu/~jforster/dmdx/commstest2.exe and is called poster.exe (it's also available in DMDXUTILS.ZIP). If you're setting up your own emailer, poster takes a -h option (so -hyour.server.edu) for the host server to post data to (it also has a -p port option like the sendemail.exe program). The first argument is a script name on the server to post the data to (it uses HTTP 1.1 so multi-homed servers are fine) and the rest are form control names and their values, where if a value is the name of a file it will send the contents of the file instead of the value. The batch file for using our server follows (you've got to change the bits in red):
start /wait "DMDX" dmdx.exe -ez -buffers 2 -run commstest2.rtf
if not exist commstest2.azk goto diags
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+testing -iemailaddr results=commstest2.azk
if errorlevel 1 pause
goto end
:diags
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=DMDX+failure -iemailaddr diagnostics=diagnostics.txt
if errorlevel 1 pause
:end
So, like the earlier email solution, to build it you'll want to drag your item file to WinZip to make the initial .ZIP file (along with any image files if you use them, though that's questionable with remote testing: you don't know the sizes of the screens your item file will be running on, and unless you take steps to scale your images they'll appear larger or smaller depending on the subject's display density), as well as the new batch file and DMDX.EXE, and, instead of sendmail.exe, you'll put poster.exe in there as well. Then you'll tell WinZip to make a self extracting archive out of it and answer the following prompts in the same fashion to make your self extracting .EXE file.
The Luxury Yacht solution.
With all the data coming in (2010) one email at a time, experimenters rapidly discovered that keeping all the data straight and concatenated into the correct .AZK file was actually quite a bit of work, and prone to error to boot, so the call went out for something superior. So I made a new CGI-BIN called UnloadAZK4web that takes the heart of UnloadAZK and buries it in a shell of my bsdemailer; it stores the data on our server, psy1 (http://psy1.psych.arizona.edu/DMDX/ or http://psy1.psych.arizona.edu/cgi-bin/unloadazk4web), where experimenters can then download it with their web browsers. As a backup it can email the data to an experimenter, just in case the server tanks (or someone accidentally tells the server to nuke the data, see below). This also spawned a request for a more rigorous timing method than DMDX's EZ mode, so a new auto mode was created that trusts the refresh rate the operating system says the display is running at and, if the OS doesn't say, goes with 60 Hz.
I would also recommend that people using psy1 for testing
sign up to the DMDX email list so that they receive psy1 down time notices as
detailed at the end of the root page of the help.
The problem here is that we have no control over the names of the experiments, and a name collision would have two experiments combining their data. Probably not catastrophic, as item numbers would in all probability be different, however very messy to recover from. So the new CGI generates an MD5 hash from the item numbers used in an experiment and appends that to the name of the item file. Which is fine if your experiment always executes the same items every time it runs; however things like maze tasks (or my communications test) don't, so an additional control should be used to override the data used to generate the hash (called hash, of course). We've been using the .RTF file for the hash so that any subtle changes in the item file not reflected in the name of the item file will generate separate data, but you could also use any arbitrary string (so hash=kjahfkjahfkjasdhf for example). Indeed there's some argument for using arbitrary strings, as people are finding the multiple new files spawned from trivial edits irritating.
In the past you would also have to use the hash control if your experiment produced a .ZIL file (say you're using a <zil> <zor> <zol> rating task, or, as a number of people are doing lately, using <ztr> to gather typed subject IDs and then switching to <zor> mode for binary RT gathering) as UnloadAZK4web used not to pull item numbers out of a .ZIL file; as of version 3.2.6 of UnloadAZK4web it now does, so the hash control is no longer essential there. Using an item file for a hash coincidentally exposed a bug in poster 1.1.0 where if you used two file controls the second one wouldn't get sent, so one has to be careful to use poster 1.1.1 (or later, the current version is in DMDXUTILS.ZIP) if one is using a file for the hash control.
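For instance, if you went the arbitrary string route, the poster line in your batch file would look something like this sketch (the experiment name, hash string and email address are of course placeholders for your own):

poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+myexperiment+testing -iemailaddr hash=myexperimenthash results=myexperiment.azk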
And then of course there's the determination of the item file's name. It's not actually transmitted (results=commstest2.azk means send the contents of the file commstest2.azk, not the text "commstest2.azk"), so DMDX 4.0.4.2, when invoked with -EZ or -auto, spews the item file name in a comment in the .AZK (or .ZIL) and UnloadAZK4web looks for it. If you don't use DMDX 4.0.4.2 (or later) then UnloadAZK4web will use the subject as the first part of the file name (before the MD5 hash), and if you don't include a subject it will just use the hash for the name (meaning you'll have to guess which file on the server has your data).
I would also note that the original version of this remote testing package used a version of DMDX that was pretty ancient and didn't account for the Direct3D renderer needed for Windows 8 and 10 and so on; I've since updated it from time to time with versions that do. But if you wanted to use a feature only found in a later version of DMDX and wanted to use that package as a basis for your experiment, you'd want to grab the latest version of DMDX and find its DMDX.EXE (probably in Program Files (x86)\DMDX; later versions of DMDXUTILS.ZIP also have a fairly current version of DMDX.EXE in them) and replace the one in this package. So the new script used for UnloadAZK4web testing is as follows (you've got to change the bits in red):
start /wait "DMDX" dmdx.exe -auto -run commstest2.rtf
if not exist commstest2.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+commstest2+testing -iemailaddr hash=commstest2.rtf results=commstest2.azk
if errorlevel 1 pause
goto end
:diags
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=unloadazk4web+commstest2+failure -iemailaddr diagnostics=diagnostics.txt
if errorlevel 1 pause
:end
Note that it passes the name /cgi-bin/unloadazk4web when posting the data versus /cgi-bin/bsdemailer when emailing a diagnostic of a failure (there's not much point sending diagnostics to UnloadAZK4web). Also note we're passing the item file for the hash; I don't think that's such a good idea as item files can get kind of long for real experiments, and any garbage string will do fine (I tend to use things like hash=commstest2hash these days).
So, like the earlier posted email solution, to build your remote testing setup you'll want to drag your item file (along with any image files if you use them) to WinZip or WinRAR (with its SFX package) to make the initial archive file, as well as the new batch file along with DMDX.EXE and poster.exe. Then you'll tell whatever archive program you're using to make a self extracting archive out of it and answer the following prompts in the same fashion as the email based solutions, telling it to execute the batch file and so on, to make your self extracting .EXE file.
There are a few considerations that might not immediately occur to people that have cropped up over the years, so I'll call attention to them here. First off, if you're testing in an international environment you probably want to use the #keyboard input device, as the standard keyboard only works well in English-speaking countries, not to mention using response keys other than the shift keys. You might also consider using <safemode 3> if you have a long experiment, as subjects are prone to switching away from DMDX with ALT-TAB. And last but not least you might consider the Notepad option to give subjects proof of completion.
There are several errors that UnloadAZK4web can throw; it will prepend FAILURE: to the subject when it does throw one and will append a failure control at the end of the email with more detail. Typically, unless the failure also has WARNING: after it, the data won't have been stored on the server. For now UnloadAZK4web will pretty much append any text file regardless of whether it's an .AZK file or not (meaning you could in fact toss diagnostics at it, but then your subject count would be off and you'd have to cut the contents out before ANALYZE ever processed it, which kinda defeats the purpose of making a script to lower the amount of cutting and pasting an experimenter has to do). If we see abuse of such glasnost then UnloadAZK4web will start rigorously parsing for .AZK (or .ZIL) components and rejecting the post if they're not found. As it is, UnloadAZK4web will purge data files older than 6 months, and the directory listing will warn that a file is about to be deleted once it's more than 5 months old.
Recent misconfigured tests almost mandate that I limit the amount of data, so the limit is 20 MB per file, at which point a warning will appear in UnloadAZK4web's directory listing, and at 22 MB the file will be deleted. As of version 3.3.0 of UnloadAZK4web it also sends a separate email notification about the size warnings to the experimenter's email address when new data is posted to one of those files. Others without a UofA email address would use a script more along the lines of this one, which won't attempt to send any email backup data -- although as noted earlier psy1 is indeed capable of sending email to the wider world right now (01/07/12), exactly how long that will last is anyone's guess, so I strongly recommend you use the above script that at least gives you some idea of what went wrong when something does, because believe me, it's a rare experiment where you don't encounter something you didn't think of beforehand:
start /wait "DMDX" dmdx.exe -auto -run commstest2.rtf
if not exist commstest2.azk goto end
poster.exe /cgi-bin/unloadazk4web hash=commstest2.rtf results=commstest2.azk
:end
if errorlevel 1 pause
And then there's the issue of testing. Say you're testing the package and it works and it's sent data to psy1, and then you want to start collecting real data, but there's this file on psy1 now that's got test data in it. We've included the ability for you to nuke data in files you've caused to be on psy1 by allowing you to send a poster command to UnloadAZK4web that has the control delete instead of the control results. You'll need to send it a sample .AZK file because that's probably the easiest way to get the item file name to UnloadAZK4web (you could send it in the subject if you weren't using the items in the .AZK for the hash and if you didn't have the item file in the directory you execute the poster command from). So from a command prompt in a directory that has poster.exe, the item file and at least one .AZK in it, this command line could be given to nuke the old data (you've got to change the bits in red):
poster.exe /cgi-bin/unloadazk4web hash=commstest2.rtf delete=commstest2.azk
I've just realized you can probably use a web browser rather than having to use poster if you've got a simple hash (rather than using an item file as the hash), where you can provide the name of your item file with the subject control (you've got to change the bits in red):
https://psy1.psych.arizona.edu/cgi-bin/unloadazk4web?hash=somesimplehash&delete=youritemfile.azk&subject=youritemfile.rtf
Reliability / Proof of Completion.
If you happen to care about every single subject (say you're paying them or their grade depends on it) you might want to consider making your batch file open a copy of the data in Notepad and advise the user to save the file as proof of completion; should you find their data missing at a later date you can ask them to email you the missing data (internet security programs appear to be taking the silent route these days, so if a user isn't savvy enough to disable one during a remote testing task data can get lost).
Here you stuff some <emit> keywords first thing after the parameters in your item file, with something like <emit> <emit> <emit Here's a copy of your data as proof of completing the study,> <emit please save the file somewhere in case it's needed for proof of completion.> <emit> <emit>, and then before posting the data back to psy1 you open it in Notepad like this (the demo is here):
start /wait "DMDX" dmdx.exe -auto -run poccommstest2.rtf
if not exist poccommstest2.azk goto diags
start "proof of completion" notepad.exe poccommstest2.azk
poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+poccommstest2+testing -iemailaddr hash=poccommstest2hash results=poccommstest2.azk
goto end
:diags
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+poccommstest2+failure -iemailaddr diagnostics=diagnostics.txt
:end
Of course, if you don't like the <emit> method of telling the subject what's up, you could prepare a text file with your instructions and append the subject's data to it with the copy command before opening the result in Notepad, thus hiding the messy DMDX headers from the subject's view unless they scroll down to them. Another possible use of the proof of completion additions would be if psy1 ever goes away permanently, or an experimenter can't use it for some reason; they could use this Notepad route to get subjects to email them the data.
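A minimal sketch of that copy approach (instructions.txt here is a hypothetical file you'd write yourself, the other names follow the demo above):

rem prepend your canned instructions to the run data so the subject sees them first
copy /b instructions.txt + poccommstest2.azk proofofcompletion.txt
start "proof of completion" notepad.exe proofofcompletion.txt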
These days the concerns in the rest of this section are moot: the internet is much more stable than it used to be and the afflicted server that caused these problems has long since been replaced, so feel free to skip ahead to the next section.
After having the remote testing capabilities up for a while it was noticed that DNS was flaky for psy1.psych.arizona.edu, so if a subject's script was trying to post data to it and DNS happened to be down at that moment the data would be lost. The quick fix was to use -h128.196.98.40 in the poster command lines so it no longer had to use DNS to resolve psy1.psych.arizona.edu to 128.196.98.40; the longer term fix was to update poster.exe to use this automatically and to also retry a number of other internet related functions. The versions of poster.exe in some of the previous examples haven't been updated, so if you are going to build your own remote testing setup I recommend using the latest version (1.2.1 as of writing) in the DMDXUTILS.ZIP package, or using the one in the reliability test itself (https://psy1.psych.arizona.edu/~jforster/dmdx/reliabilitytest.exe). Which, by the way, has a substantially nicer script from the user's perspective that fully breaks out failures and could even be expanded to attempt to educate the user on sending their data in manually if someone cared to (by either echoing the file to the screen and using the clipboard instructions already in the script, or by telling the user the location of the file and so on). However I'm guessing such efforts aren't needed at the moment; so far we have 100% reliability from all corners of the web using 128.196.98.40 (as far as communications are concerned, people can still have machines that can't run DMDX). The script in question follows:
start /wait "DMDX" dmdx.exe -auto -run reliabilitytest.rtf
if not exist reliabilitytest.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web+reliabilitytest+testing -iemailaddr hash=reliabilitytest results=reliabilitytest.azk
if errorlevel 1 goto error
goto success
:diags
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=unloadazk4web+reliabilitytest+failure -iemailaddr diagnostics=diagnostics.txt
if errorlevel 1 goto error
:success
echo off
echo .
echo .
echo .
echo The test was a success, thank you for helping improve DMDX.
goto end
:error
echo off
echo .
echo .
echo .
echo Alas the communications failed. Please copy the error messages above and
echo email them to some@email.address (Clicking the C:\ icon in top
echo left corner and selecting Edit / Mark will allow you to highlight text
echo in this window with the cursor and Enter will copy it to the clipboard).
:end
echo .
pause
And then, while this DNS stuff was all going down, Thomas Schubert offered us the use of his server in Germany (scriptingrt.net) as a backup UnloadAZK4web server, and after a few server tweaks, a number of tweaks to UnloadAZK4web and a bit of new functionality it now runs on his server as well as psy1 (many thanks to Thomas). This means that remote testing setups can either post their data to both servers or post to one if the other is failing (however the UnloadAZK4web on scriptingrt.net hasn't been up for years so this is all rather academic). The trouble with posting data to both servers is determining just what data went where and what's duplicated if one of the servers was down or unreachable for any number of subjects. Given the recent improvements to posting data to psy1 where DNS failures no longer cause data loss, I'm recommending people post first to psy1 and only if that fails go on to post data to Thomas' server. For people that host their experiments on the arizona.edu servers the likelihood of psy1 being down while those servers are up is even lower than just plain old internet failures, but it can still happen. Among the differences between the servers is that scriptingrt requires the extension .cgi on its CGI files and doesn't require them to live in a cgi-bin folder, so the URL for Thomas' server's UnloadAZK4web data file listing is http://scriptingrt.net/unloadazk4web.cgi. Then there's the auxiliary decision of which server to post to first (assuming you're not going to post to both). scriptingrt is in Germany, so if you're testing on that continent perhaps communications are less likely to fail to it; then again, I haven't noticed any routing flaps in the US for the last few years, so continental differences may be moot. Still, you may decide to post first to scriptingrt after all, as it is not subject to the whims of campus sysadmins who may at a moment's notice decide they're fed up with allowing on campus SMTP connections to go through without authentication -- which is pretty much going to kill off off campus use of psy1 if people need the email acknowledgement that UnloadAZK4web sends out each time data is stored. Then again, scriptingrt is subject to Thomas continuing to lease the server and pay for its domain name. You also need to post to its DNS name instead of its IP (which is what the default psy1 post uses these days) because it's likely to be a multi-homed server (many sites, one IP address).
Of course, having put all the extensive retries into poster.exe, any failure to post to psy1 is going to take a good fraction of an hour to expire (one I tested today was over half an hour), so I have altered poster.exe again (now version 2.1.0) to allow a specification of the number of retries (-r) to attempt. Here we can whip off a couple of quick attempts, first to psy1 and then to scriptingrt, and if either of them is up and running they'll succeed. Then if they both failed, fall back on extensive retries to both servers and hopefully one of them comes up during the time that takes (you've got to change the bits in red):
start /wait "DMDX" dmdx.exe -auto -run redunancytest.rtf
if not exist redunancytest.azk goto diags
poster.exe -r1 /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web+redunancytest+testing -iemailaddr hash=redunancytest results=redunancytest.azk
if errorlevel 1 goto fallback
goto success
:fallback
poster.exe -r1 -hscriptingrt.net /unloadazk4web.cgi email=some@email.address subject=unloadazk4web+redunancytest+testing -iemailaddr hash=redunancytest results=redunancytest.azk
if errorlevel 1 goto moreretries
goto success
:moreretries
poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web+redunancytest+testing -iemailaddr hash=redunancytest results=redunancytest.azk
if errorlevel 1 goto moreretriesfallback
goto success
:moreretriesfallback
poster.exe -hscriptingrt.net /unloadazk4web.cgi email=some@email.address subject=unloadazk4web+redunancytest+testing -iemailaddr hash=redunancytest results=redunancytest.azk
if errorlevel 1 goto diags
goto success
:diags
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=unloadazk4web+redunancytest+failure -iemailaddr diagnostics=diagnostics.txt
if errorlevel 1 goto error
:success
echo off
echo .
echo .
echo .
echo The test was a success, thank you for helping improve DMDX.
goto end
:error
echo off
echo .
echo .
echo .
echo Alas the communications failed. Please copy the error messages above and
echo email them to some@email.address. (Clicking the C:\ icon in top
echo left corner and selecting Edit / Mark will allow you to highlight text
echo in this window with the cursor and Enter will copy it to the clipboard).
:end
echo .
pause
And then Microsoft go and release Windows 8, which doesn't actually contain DirectDraw any more but instead emulates it, and of course DMDX uses DirectDraw to manipulate the screen, and that emulated version of DirectDraw in Windows 8 (and 10) isn't so crash hot timing wise (you'll see lots of 25 millisecond display errors and, if you looked really closely, you'd probably see some frames not being displayed at all), so I had to go and craft version 5 of DMDX that has an optional Direct3D renderer in it. People have two choices here: one, use the new version 5 binaries (by pulling the DMDX.EXE file out of Program Files / DMDX after a recent installation of DMDX, say, or looking in the later versions of the utilities package that contain a DMDX executable -- it might not be the latest but you don't have to install DMDX either) and let DMDX choose which renderer it wants to use based on the OS it finds itself running on, or two, just force DMDX to use the Direct3D renderer with -d3d on the command line. At this stage I'm fairly sure the second option is viable unless you're looking at testing on some very ancient hardware; I've set up an example using it that will spew diagnostics at me if it fails, but there's already been fairly widespread testing of this and no significant issues have arisen lately. It also uses the relatively new <prose> and <instruction> keywords that make typing and displaying text more hospitable to different display dimensions and international keyboard differences. If people go with the automatic route they can tell which renderer was used by looking at the video mode diagnostics, as when Direct3D is being used the code D3D will occur before the Video Mode text in the output file:
**********************************************************************
Subject 1, 06/03/2014 10:23:13 on 666-DEVEL, refresh 16.67ms
Item RT
! DMDX is running in auto mode (automatically determined raster sync)
! D3D Video Mode 1280,1024,24,60
! Item File <commstest4.rtf>
Using <prose> means there's a chance people can type in extended characters, and while this might have worked with versions of DMDX earlier than 5.3.3.0 if the local machine's default ANSI code page had those characters in it, with 5.3.3.0 we now have the option of making DMDX emit typed data using Unicode's UTF-8 format. One will want to put -unicode on the DMDX command line and <prose utf8bom> in the item file's parameters so that DMDX (a) uses UTF-8 coding for extended characters and (b) puts the UTF-8 byte order mark (BOM) at the start of the saved data, so that when you view the data file with a browser it will know that the text is UTF-8 and you won't get mojibake for the Unicode characters -- and that BOM has to be in the first data posted to UnloadAZK4web (ie you can't start a data gathering operation without <prose utf8bom> and expect programs to recognize the resulting data as being UTF-8 without gerfingerpoken, which is what I had to do). The data file that's emailed to you won't be so lucky unless you can tell your mail client to use Unicode for that piece of mail, although if you carefully cut the squiggles out from immediately after the results= string to the end of your data, pasted them into a new text file and reopened it, it would probably be interpreted as Unicode. You will of course need to make sure you're using a later version of DMDX than is found in the other communications tests, and you'll also need to use a version of poster that's 4.0.1 or later, as prior to that it would happily convert all your UTF-8 characters to 0xFF, urg.
Seeing as people are not likely to be interested in the diagnostic spew here's a
version of the
batch file more amenable to running someone's actual experiment
(you've got to change
the bits in red):
start /wait "DMDX" dmdx.exe -auto -unicode -d3d -ignoreunknownrtf -run commstest4.rtf
if not exist commstest4.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web+commstest4+testing -iemailaddr hash=commstest4.rtf results=commstest4.azk
goto end
:diags
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=poster+commstest4+failure -iemailaddr diagnostics=diagnostics.txt
:end
If people are interested in the diagnostic spew here's the
actual batch file that runs that test and if DMDX
fails to run at all it sends us the system information about that machine so I
can try and guess what it is that's stopping DMDX from running (you've got to change
the bits in red -- but I can't imagine anyone
else would be interested in using this):
start /wait "DMDX" dmdx.exe -auto -unicode -d3d -ignoreunknownrtf -run commstest4.rtf
if not exist commstest4.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web+commstest4+testing -iemailaddr hash=commstest4.rtf results=commstest4.azk
goto end
:diags
if not exist diagnostics.txt goto systemdiags
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=poster+commstest4+failure -iemailaddr diagnostics=diagnostics.txt
goto end
:systemdiags
echo off
echo .
echo .
echo .
echo It would appear DMDX has failed to run at all. Please wait while we
echo gather some diagnostic information to help us improve DMDX. If you
echo don't wish to provide us with such information hit CONTROL-C now.
echo Otherwise hit space to continue...
pause
echo on
msinfo32 /report systemdiags.txt
cmd /a /c type systemdiags.txt>systemdiagsansi.txt
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=poster+commstest4+system+failure -iemailaddr diagnostics=systemdiagsansi.txt
:end
if errorlevel 1 pause
del *.azk
I figured this was fairly self evident, but there's at least one person out there that hasn't figured it out and has tens of megabytes of data posted on psy1 (whereas the nearest competitors are merely a few hundred kilobytes) that you can see is clearly riddled with duplicates from not removing old .AZK files when running out of a local directory.
You'll also notice that the command line has -ignoreunknownrtf in it. This is because if you have DMDX installed on a machine that you're also running a remote testing script on, and you've turned off DMDX's ignore unknown RTF control for some inexplicable reason, your remote testing instance will find that control setting in the registry and blow your remote testing session up if you indeed have unknown RTF control words in your item file. Yes, not likely to happen, but it happened to at least one person out there besides me.
Recently an experiment hit an unexpected glitch related to using the
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=poster+commstest4+failure -iemailaddr diagnostics=diagnostics.txt job1=job1.zil
If DMDX failed to run at all and never got around to generating data, the email will just have the line job1=job1.zil in it; otherwise you'd get the RTs gathered up to the point of failure.
While we're talking about keyboards I suppose I should mention the fact that remote machines are not likely to have run DMDX before and are thus very likely to have Microsoft's insane Sticky Keys feature still active, where hitting a shift key five times in a row pulls up a dialog about enabling Sticky Keys and thus blows your data gathering out of the water. While DMDX will recover from this once it gets the focus back, you've trashed at least a couple of items' data (unless you've used
Another thing to be aware of is that machines are not necessarily going to have 60 Hz displays on them, so you should avoid using DMDX's tick based timing commands. While the absolute vast majority of them are in fact still 60 Hz, I've seen machines with rates that are lower (24 Hz is about the lowest I've seen) and much higher (240 Hz, must have been someone's gaming rig). So avoid using
And while we're talking about item files I should mention
that when you run DMDX with
Another feature you might want to include in your item file
is using
Also note that while DMDX can now handle special characters in file names when the Unicode option is in use, I wouldn't recommend using non-ANSI characters in item file names with versions of DMDX prior to 6.2.0.0. While doing so might work in your locale, when you're remote testing the locale of your subjects can be quite different, and when a non-ANSI name gets translated the result depends on the locale, so the name can be translated into who knows what. After I made DMDX's
And then there's the question of user agent strings. Since writing this section my sslposter.exe application has been updated and made usable for remote testing, so even if you don't want to use the private data posting discussed in the SSL section, just using sslposter.exe instead of poster.exe renders all the considerations in this section moot (it's available in the utilities package). If you don't want to use sslposter for some reason, read on: if you're noticing some locations succeed at posting data to psy1 and some other locations fail, or you're just flat out not succeeding anywhere (and my communication test also fails; the results of that test go at the end of this file if you can't tell), you may be running afoul of some internet security appliance that doesn't recognize poster.exe as a legitimate internet application. This can also occur if you live in places like Iran and China that have great firewalls. This is because poster.exe identifies itself as
And then there's the issue of IPv6, the next protocol the internet will be using, which psy1 doesn't use (it's still IPv4). Twenty years old at the time of this writing (2018) and still only used by 25% of the web's servers, it's still largely optional. I figured I'd have to make poster use IPv6 because the developing world would be using IPv6 on account of the fact that there's bugger all IPv4 addresses left for them, but according to recent articles I've read that's not the case; they and the rest of the web are doubling down on IPv4. Then my ISP upgraded my router to support IPv6, so I had no excuse not to at least make poster support IPv6, so version 4.0.0 of it now does, however psy1 still doesn't. If someone finds a population they want to test that only supports IPv6 I'll check and see if the building has IPv6 routed to it and turn IPv6 support on on psy1, but it's just as likely that any corner of the internet that's only got IPv6 routed to it also knows how to thunk to IPv4, so I'm not holding my breath. Of course if you set up your own server that talks IPv6 then the upgraded version of poster should work for you (it is in the utilities .ZIP file).
sslposter doesn't have all the switches that poster has, however it automatically uses the user agent string Mozilla/5.0 sslposter/2.0.0, and not only that, the user agent is transferred securely with HTTPS, so -u is a non-issue as only the server sees it. While I added code to poster to use IPv6, I suspect the OpenSSL routines optionally use IPv6; I haven't figured out a way to definitively test that yet. The -b switch used in the next topic hasn't been implemented, but other than that sslposter.exe may be a better thing to use than poster.exe just because suspicious network appliances can't peek at its data (I certainly tested it and it works). The -p switch is no longer used, as the port number is part of the host name specification (see below).
So in order to hide the data from world view, UnloadAZK4web 3.0.0 introduces a new control (or rather it recognizes a new control), and that is the datadir control described below.
And if you've set up your own server and you get "Unable to open lock" failure messages, it's probably because the permissions of the data directory are blocking the web server from writing to it (see the earlier comment about the www-data owner and group names).
Getting it all going.
While I was getting this all working I wasn't bundling everything up with WinZip every time I wanted to test some feature; I had a local directory on my development system with all the files that would go in the zip file (so DMDX.EXE, poster.exe, the batch file and item file) and then executed the batch file in that directory. Which works nicely the first time, however the next time you run the script poster will post the .AZK file with the first run's data in it along with the second run's data, instead of just the second run's data, and likewise for the third and so on. So what you need to do is put a line at the start of the batch file that deletes any .AZK files before it runs DMDX (you've got to change the bits in red):
del *.azk
start /wait "DMDX" dmdx.exe -auto -ignoreunknownrtf -run commstest4.rtf
etc...
poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web+commstest4+testing "-uMozilla/5.0 poster" -iemailaddr hash=commstest4.rtf results=commstest4.azk
Secure communications using SSL/TLS.
So after about a decade of people using some variant of the Luxury Yacht solution, someone finally came up with a scenario where they couldn't anonymize their data gathering (they were using some typed responses), necessitating a number of new tools, not the least of which was me dusting off an old version of poster I'd made called sslposter that used some ancient version of OpenSSL to post data to secure HTTPS websites (SSL or TLS being the technology behind HTTPS); it used DLLs for the cryptographic stuff and was generally less than convenient to use. Current versions of OpenSSL are much nicer (people that wind up using sslposter.exe should donate to the OpenSSL developer) and offer static linking of the cryptographic code (so no more DLLs), but of course they changed a few things as well. Minor degrees of hair pulling involved. sslposter.exe winds up being quite a bit larger (2.5 MB or so) than poster.exe (350 KB or so), too bad. Next up was turning SSL/TLS on in psy1's Apache web server; initially we just self signed a certificate, seeing as sslposter blindly trusts whatever you point it at, however using Let's Encrypt to get a free certificate is significantly easier these days, so now psy1 has regular HTTPS capability which you can check out using the secure version of this page on psy1. After that we have the interesting tweak of not wanting the data posted publicly, and here we had to make UnloadAZK4web look for a new control that specifically allows experimenters to nominate the directory where the data is stored instead of the world readable public one. And of course UnloadAZK4web is no longer emailing a backup of the data when it sees said directive, as email is not secure, so if the server bites it you gotta live with it (all that happens is the
del *.azk
start /wait "DMDX" dmdx.exe -auto -unicode -ignoreunknownrtf -run sslcommstest.rtf
if not exist sslcommstest.azk goto diags
sslposter.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+sslcommstest+testing -iemailaddr hash=sslcommstesthash results=sslcommstest.azk datadir=/home/jforster/DMDX/
goto end
:diags
sslposter.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+sslcommstest+failure -iemailaddr diagnostics=diagnostics.txt
:end
If you were setting up your own server then the sslposter command line would need to have a -h host switch to specify its address, for instance -hyour.server.edu:443, and you'd also want the datadir control to specify your account instead of my jforster account, datadir=/home/youraccount/DMDX/ for instance. Also note the port specification in that host specification, -hyour.server.edu:443. Here you need to specify port 443 (the HTTPS port) as part of the host name; while using poster.exe one can specify a host name independent from a port number (usually 80 for HTTP posts), it's not so with sslposter.exe (one of those hair pulling things noted above with later versions of OpenSSL, and given how freakily sslposter behaves when the port specification is forgotten, as of 2.1.1 it now adds :443 if there's no port specified). Beyond that there are a couple of unobvious considerations here. One is what to do on a failure, in that you still want notifications, particularly as you're getting it all working, but those diagnostics are going to potentially contain data that you don't want transferred around the place without secure communications, and email is most specifically not secure -- a conundrum you'll have to solve (maybe some PowerShell scriptfu to take only the first few and the last couple of lines of diagnostics.txt). Also, if you were using your own server, while the successful sslposter invocation would be using your.server.edu, the bsdemailer for the failure diagnostics doesn't have to; email not being encrypted anyway, I don't see the point in setting up two CGI binaries for people when only one is needed. While I do use sslposter for the diags, that's simply because I'd rather not have to put both sslposter.exe and poster.exe in the self extracting archive.
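As a sketch of the kind of PowerShell scriptfu meant here, called from the batch file (the file names and line counts are just placeholders), you could trim the diagnostics before mailing them:

rem keep only the first 5 and last 5 lines of the diagnostics before emailing them
powershell -command "Get-Content diagnostics.txt -TotalCount 5 | Set-Content trimmeddiagnostics.txt; Get-Content diagnostics.txt -Tail 5 | Add-Content trimmeddiagnostics.txt"
sslposter.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+sslcommstest+failure -iemailaddr diagnostics=trimmeddiagnostics.txt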
Using the datadir control has several repercussions beyond the data simply not being readable by the world (assuming the permissions on the directory you've asked it to be written to are set correctly, of course). The first of them used to be that UnloadAZK4web would no longer be able to delete the data once it got older than six months, however with UnloadAZK4web 3.4.0 I modified the data deletion routines such that every time data is posted to any given datadir override folder it scans that folder for old files and nukes them (the conventional data directory is scanned for old data any time UnloadAZK4web has to generate a directory listing). Later versions of UnloadAZK4web also expanded the delete control's functionality to work with the datadir control.
For people that can't stand up their own Linux server and would like to use psy1 for their secure data collection, version 3.1.0 of UnloadAZK4web now recognizes a retrieve control, similar to the delete control, that along with the datadir and hash controls allows one to pull securely stored data over HTTPS with sslposter 2.1.0, which now has poster.exe's -o output switch that makes it write all the gathered data (excluding the HTTP headers) to a file. Like delete, the retrieve control needs to be fed a valid .AZK file so UnloadAZK4web can pull the item file name from it, and the hash and datadir controls obviously need to be whatever they were in the batch file that drove the remote testing. In order to keep the data secure an additional control is required (the ?????=????? below), the details of which people will have to email me for, as publishing it here isn't even remotely secure, and while me sending it to you over email isn't a whole lot better, a single email amid the storm of them is secure enough I suspect, given the propensity of any number of web sites willing to send passwords over email -- although I might take a page out of the steganography handbook and send it as an image (and in case I forget, the instructions are in my sslcommstestretrieve.bat file so that's what you should ask for). In order to limit the chances for miscreants to misuse UnloadAZK4web the retrieve control will only retrieve files that DMDX generated, so if you're getting things going and wind up accidentally posting stuff that isn't valid data you won't be able to retrieve those files. While you could get away without the -o switch by copying the text out of the console, you'll find that every couple of thousand or so characters there will be non-DMDX data in there; this is the chunk size information that's stripped out when -o saves the received data (when -o is in use you won't actually see the text on the console and will instead just get to see the HTTP headers and the chunking information). And because UnloadAZK4web is primarily an HTML generating script you'll have to save the output as a .HTM or .HTML file, open it in a browser and copy your data out into a text editor, as it will have UnloadAZK4web's title in there (not to mention various HTML tags like <head>, <body> and <pre>). You'll also want to make sure that the first line is blank before the Subjects Incorporated line if you're going to use it with any other DMDX utilities. Like I said, roundabout, but hey, probably less work than standing up your own Linux server. Here's the command to get the data for the sslposter test (you've got to change the bits in red and get the blue control and its value from me):
sslposter.exe /cgi-bin/unloadazk4web hash=sslcommstesthash retrieve=sslcommstest.azk datadir=/home/jforster/DMDX/ ?????=????? -ooutput.htm
I've just realized you can probably use a web browser to retrieve data rather than having to use sslposter if you've got a simple hash (as opposed to using an item file as the hash), where you can provide the name of your item file with the subject control and paste the whole thing into the address bar of your browser (you've got to change the bits in red and get the blue control and its value from me):
https://psy1.psych.arizona.edu/cgi-bin/unloadazk4web?hash=sslcommstesthash&retrieve=sslcommstest.azk&datadir=%2fhome%2fjforster%2fDMDX%2f&subject=sslcommstest&?????=?????
Remotely using RecordVocal.
People occasionally ask if they can do an auditory naming task remotely, and the answer has always been yes, but getting the data back is problematic, and on top of that there are problems with microphone setup and VOX settings. The original solution posted here (now below) just dealt with the problems of getting the data back, however in 2020, what with the covidity and people Zooming left and right with their laptops, there was renewed interest in recording subjects' responses remotely, so I figured I'd have a go at making a self-titrating VOX setup item file to get around the microphone setup problems, given that there's now a good chance people's computers actually have microphones to begin with. So, with a few new VOX bells and whistles added in DMDX 6.1.4.0, we now have a remote testing task that attempts to calibrate the VOX more or less automatically. While we could have blended both transmission of recorded data and VOX setup into one task we haven't done so at the moment (the VOX task just uses the DigitalVOX without RecordVocal), so if people require both they'll have to merge the two tasks, or perhaps a better idea is to have the VOX setup task chain to the actual testing task (not forgetting to double up your poster command lines as there would now be two data files, though I guess you might not care about the data from the VOX setup item file). Once the VOX is set up the values are stored in the registry and will be available for any following tasks, so if you have a multipart experiment over time you wouldn't necessarily need to have subjects set the VOX up again -- although the counter to that would be that if they're using a laptop (and I'm betting they're almost exclusively going to be using laptops) they may have moved locations and there may now be totally different background noise to deal with; details I leave up to you, I just provide the technical tools...
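However you arrange the chaining, the batch file end of doubling up is just a second poster line for the second data file, something along the lines of this sketch (the item file names, hash strings and email address are placeholders):

poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+voxsetup+testing -iemailaddr hash=voxsetuphash results=voxsetup.azk
poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+maintask+testing -iemailaddr hash=maintaskhash results=maintask.azk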
The juicy bits of this task's item file are covered in the VOX section. The batch file is not much different from the commstest4 batch file, but I guess there's some case to be made for making the batch file detect DMDX puking when it's told to run an item file that requires a sound capture device on a machine that doesn't have one. I'm guessing your subject recruiting should make sure of that, but if it didn't you could chuck in a test where, after the batch file has sent the diagnostics because DMDX failed to produce a data file, it looks for the string "DirectSoundCaptureCreate Failed" in the diagnostics and prompts the user to run the experiment on a laptop:
del dvoxcommstest4.azk
start /wait "DMDX" dmdx.exe -auto -d3d -unicode -ignoreunknownrtf -run dvoxcommstest4.rtf
if not exist dvoxcommstest4.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+dvoxcommstest4+testing -iemailaddr hash=dvoxcommstest4hash results=dvoxcommstest4.azk
goto end
:diags
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+dvoxcommstest4+failure -iemailaddr diagnostics=diagnostics.txt
type diagnostics.txt | find /I "DirectSoundCaptureCreate Failed" > nul
if errorlevel 1 goto end
echo off
echo .
echo .
echo .
echo It would appear DMDX has failed to run because your computer doesn't have
echo a microphone and its associated electronics. Try running this experiment
echo on a laptop or some other computer with one.
pause
echo on
:end
The solution to the problem of getting the subject utterances back is to use email to mail the recorded audio using psy1's bsdemailer CGI-BIN, and to use UnloadAZK4web for storing the RT data. Conceivably UnloadAZK4web could be modified to store the recorded audio as well, but FERPA etc. considerations mean it couldn't be publicly visible like the run data is, and I was having trouble coming up with solutions for that short of giving people accounts on psy1, and I'm not really willing to go there (since setting this test up I have added the ability for UnloadAZK4web to store data privately, however that data has to be DMDX RT data as it stands right now due to security concerns). So email it is. Here of course we're now emailing scads of audio data, so I included the LAME MP3 command line encoder and reduce the files down to 24 kbps versions that are emailable unless you have obscene quantities of them; the trick is that you're going to run out of command line space (260 characters) before you exceed email sizes, I suspect, so you'll probably need to have multiple poster.exe /cgi-bin/bsdemailer lines in your batch file (below). The next trick was that even though poster URL encodes data to send to the bsdemailer for emailing, bsdemailer tries to send you the resulting binary file, urgh. So enter poster 3.0.0, which has a -b switch to indicate that the next file sent should be base64 encoded before the URL encoding, and now you can receive the MP3 files -- of course they are now base64 encoded, but there are countless online and offline base64 decoders out there, and if people hassle me enough I'll probably write a decoder that takes data from the clipboard along with the file names we transmit to automatically decode everything. The actual test is based on the capture test and relies on people selecting the Stereo Mix recording device to properly function, otherwise it's just going to record two seconds or so of microphone activity -- and in later versions of Windows 10 the Stereo Mix device has to be enabled before it can be selected as the input device, so you may have to go hunting for that first.
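On the decoding front, Windows' own certutil will do the job offline once you've saved the base64 text for one of the files out of the email into a file of its own (the .b64 file name here is just a placeholder for wherever you saved that text):

rem decode the base64 text saved from the email back into a playable MP3
certutil -decode rvcommstest41.b64 rvcommstest41.mp3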
Notable things in the script are that I've used a control name identical to the file name to send the individual MP3 audio files, to facilitate decoding the data (and if I write an automatic decoder it will use those names). Of course you're still faced with the problem of matching up one subject's voice data with the run data (notably the subject ID), but hey, it's a start (I'm guessing people are going to be paying very close attention to the time stamps of those files, or maybe we do some batch file ginsu and find the subject identifier in the .AZK file and either put that in the subject of the data email or even concatenate it into the data file control names):
del *.azk
echo off
echo .
echo This test works best with the Stereo Mix device selected as the default
echo recording device and the digital vox has been setup in DMDX. In later
echo versions of Windows 10 the Stereo Mix device has to be enabled before it
echo can be selected as the input device so you may have to go hunting for
echo that first.
echo .
pause
start /wait "DMDX" dmdx.exe -auto -ignoreunknownrtf -run rvcommstest4.rtf
if not exist rvcommstest4.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+rv+commstest4+testing -iemailaddr hash=rvcommstest4.rtf results=rvcommstest4.azk
LAME -b 24 -m j -c -h -q 2 --strictly-enforce-ISO rvcommstest41.WAV rvcommstest41.mp3
LAME -b 24 -m j -c -h -q 2 --strictly-enforce-ISO rvcommstest42.WAV rvcommstest42.mp3
LAME -b 24 -m j -c -h -q 2 --strictly-enforce-ISO rvcommstest43.WAV rvcommstest43.mp3
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+rv+commstest4+data -iemailaddr -b rvcommstest41.mp3=rvcommstest41.mp3 -b rvcommstest42.mp3=rvcommstest42.mp3 -b rvcommstest43.mp3=rvcommstest43.mp3
goto end
:diags
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+rv+commstest4+failure -iemailaddr diagnostics=diagnostics.txt
:end
if errorlevel 1 pause
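If you do want to try the batch file ginsu mentioned above, a sketch of pulling the Subject header out of the .AZK and tacking it onto the subject of the data email might look like this (it assumes one subject per data file, and exactly what ends up in the email subject is something you'd want to check for yourself):

rem grab the number following "Subject" in the .AZK header, eg "Subject 1, 06/03/2014 ..."
for /f "tokens=2 delims=, " %%s in ('findstr /b /c:"Subject" rvcommstest4.azk') do set SUBJID=%%s
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+rv+commstest4+data+%SUBJID% -iemailaddr -b rvcommstest41.mp3=rvcommstest41.mp3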
Online subject recruitment.
While tangential to DMDX itself, it's come up a few times so perhaps it bears mention here: Amazon's Mechanical Turk is a convenient way to recruit online subjects for studies. Linguists have been using it for some time, so I include comments made by one of them who has run a few DMDX remote testing auditory tasks (task specific information has been removed or paraphrased in square brackets):
My MTurk experiences have been varied. I usually aim for
twice as many participants as I want, because I need to exclude non-native
English speakers, people who didn't do the task correctly, people who are trying
to scam me, etc. One thing I was told is the data is cheap, so you should always
pay for more than you need because only a portion of the data you'll get isn't
garbage. But here's a few quick thoughts on what I understand of your situation:
Winzip alternatives.
If buying the WinZip Self Extractor package is out of the question there are a couple of packages out there that appear to be usable. First off there's 7-Zip, which would appear to have self extracting capabilities; I looked it over, but without using one or two other add-ons it doesn't look like it's easily usable (one not only needs it to auto extract files, one needs it to execute a command, ostensibly to start an installation but in our case to run the test).
And then it also appears that WinRAR can create self extracting archives with its SFX add-on these days; I've had at least one person use it and it works, and it's free, so it's hard to argue with that one.