DMDX Help.


Remote Testing Overview.

    On this page:

Remote Testing Using SMTP (email).
Wider Internet Testing using an HTTP POST.
The Luxury Yacht solution.
Reliability / Proof of Completion.
Windows 8 and Unicode.
Getting it all going.
Secure communications using SSL/TLS.
Remotely using RecordVocal.
Online subject recruitment.
Winzip alternatives.


    I used to get asked from time to time if DMDX could be used over the web and the answer was always pfft.  However nowadays remote testing is big stuff and the ways one can use DMDX remotely have proliferated; hell, you could probably even build a Macintosh testing system using a WineSkin, and if I cared to modify some code you could even do a naming task using the remote monitor if we expanded it to transfer voice data.

    Basically there are three methods these days to do remote testing and the only one you're really going to be interested in is the third one.  The first relies on SMTP ports (email) being open on the wider internet (not the case) so remote testing using my emailer is only relevant if you want to test on your own local network where you know the ports are open.  It also requires the experimenter to collate all the data gathered as each subject's data comes in one email at a time and if you're running more than one experiment that can actually get "interesting".  The second method might require an email account on campus at the University of Arizona and I'm guessing that's not happening for anyone other than us and like the first method it still requires manual collation of the data.  The third method, using an HTTP POST to get the data back, is however applicable to wider testing by bodies other than those at the UofA and it automatically gathers all the subjects' data for each experiment into its own data file.  A fourth option exists that's a modification of the third one if you have a tough IRB or are otherwise concerned about having your data viewable by all and sundry, but there's more buy-in required by experimenters to get that going.  That said, the background to what's going on and the considerations you need to be aware of are still present in the original first two methods so perusing them before really concentrating on the third or fourth ones is a good idea.

    If this is too confusing there's also Thomas Schubert's remote testing documentation.  Actually I notice there are several pages out there these days so searching for "DMDX remote testing" is a good idea.

    If you're looking to run in a computer lab and run a lot of subjects simultaneously there's also the Networked and Read Only file systems overview that shows how to set that up and gather the data without having to install DMDX on all the machines or develop a remote testing package.

    Note that this page was started long before HTTPS was deemed to be the desirable thing to use for URLs everywhere instead of the old HTTP protocol, so with the exception of the SSL section it's talking HTTP.  However I noticed Microsoft have a complete fit about downloading one of these tests the other day (2022) and its primary gripe was that it wasn't over a secure connection, and I suppose they have a point as that's vulnerable to a man in the middle attack where ne'er-do-wells have control of a router between this web server and your browser and modify our package on the fly to inject something evil into it.  So I've modified all the URLs to use HTTPS when it matters and psy1 now has a proper Let's Encrypt certificate so secure connections to it work, even though there's a whole section on how to get around that (one day I'll purge it).


Remote Testing Using SMTP (email).

    Originally (2007) I came up with a mechanism for a study here where subjects could not be expected to come in to the lab.  Basically what I did was zip up a DMDX executable, an item file, a program of mine to send email and a batch file to run the whole shebang into a self extracting and executing archive.  Subjects only have to be able to run a program from an URL, where the only choices they can make are basically to do it or not.

     You need to have some way of making a self extracting zip file execute a batch file; appropriating a self installer works well enough -- I used WinZip with its WinZip Self Extractor package, and WinRAR with its SFX option works as well and is in fact freeware.  Originally you couldn't expect miracles in timing accuracy either as we were using DMDX's EZ mode where there's no synchronization with the raster; nowadays the later -auto option actually allows tracking the raster when remote testing.  For this method you'll also need an SMTP server (later methods like the HTTP POST don't) that doesn't require all sorts of authentication unless you're happy sticking passwords into batch files (which I really don't recommend).  I recently found SMTP2GO that offers these kinds of services so you might investigate them if using the later methods isn't possible.

     So first off the script.  WinZip will have created a temporary directory and extracted all the files there, it'll be the current directory so as long as there's no path information on anything DMDX should be able to find images and so on.  The only thing that's different here is using the desktop's video mode with <vm desktop>, but that's normal for EZ mode.  The emailer I'm using is my custom code and it's not super (it certainly won't deal with SSL email server connections) but there are a number of other programs out there if you need them.  If you want to use mine it's in DMDXUTILS.ZIP.  Then the batch file to run them (you've got to change the bits in red):

start /wait "DMDX" dmdx.exe -ez -buffer 2 -run eztest.rtf
start /wait "sending results" sendemail.exe -hsmtpserver.yourdomain.org tester@yourdomain.org "ez testing" eztest.azk


    The first line runs DMDX in EZ mode and waits for it to finish.  It runs it with a limited number of video buffers (because who knows how wretched the destination machine is) and tells it to run our item file, in this case eztest.rtf.  Once DMDX has finished the batch file runs the emailer and tells it to send eztest.azk which DMDX will have left behind after running eztest.rtf.  If you're using my emailer you'll have to tell it the name of your SMTP machine with -h because the UofA's server sure ain't gonna accept connections that aren't SSL from anywhere off campus.  Once the emailer is done WinZip deletes the files and the temporary directory it put them in and about the only thing left will be a few registry keys.  It won't need to execute as an administrator.  For testing purposes you can stick a pause command at the end of the batch file if things aren't working and you need to find out what's up.  While the batch file is paused you can go look in C:\Documents and Settings\Username\Local Settings\Temp\WZXXXX.XX on an older OS like Windows XP (with later OSes you can find it using something like %temp%\WZXXXX.XX) and see the files before they get whacked.
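
    For example, while debugging you could tack something like the following onto the end of the batch file above (a sketch only; %cd% is the command processor's standard current directory variable, which here happens to be the WinZip temporary directory):

rem show where WinZip extracted the files so they can be inspected before they get whacked
echo The extracted files are in %cd%
pause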

     To build your remote testing setup you'll want to drag your item file along with any image files (if you use them) to WinZip to make the initial .ZIP file as well as the files I've mentioned (the batch file along with DMDX.EXE and sendemail.exe), they all go in.  I guess if you really cared you could use a subdirectory for images and sound files but I'm not.  Next you'll tell WinZip to make a self extracting archive out of it and once you've bought the Self Extractor extension you can tell it to make an archive for software installation.  I included an optional message to the users when the extractor is first extracting telling them that DMDX is sensitive to other applications popping up windows and for them to log out of IM sessions and to otherwise disable anything that might pop up a window while DMDX is running.  When it asks for the name of the command to run you tell it the name of your batch file, here it's eztest.bat, but you'll want to put a .\ in front of it as they recommend (so .\eztest.bat).  And then a few more prompts and you'll have your .EXE that you can stick on a web page and tell users to point their browsers at.  They'll have to actually run the thing and answer all the security nags but it doesn't have to run as administrator (for Vista should anyone be using it) and should be fairly straightforward.  Hopefully you get an email with the subject "ez testing" with the .AZK for its body.

     An extension I recently made to the batch file uses a couple of IF EXIST statements to send the diagnostics if the run failed.  It makes it much easier to figure out what went wrong when things fail (you've got to change the bits in red):

start /wait "DMDX" dmdx.exe -ez -buffer 2 -run eztest.rtf
if exist eztest.azk start /wait "sending results" sendemail.exe -hsmtpserver.yourdomain.org tester@yourdomain.org "ez testing" eztest.azk
if not exist eztest.azk start /wait "sending diagnostics" sendemail.exe -hsmtpserver.yourdomain.org tester@yourdomain.org "ez diagnostics" diagnostics.txt

     And then there's the ultimate emailer script that actually tries different ports if one is blocked (now that sendemail has been expanded with a -p switch for the port number) (you've got to change the bits in red):

start /wait "DMDX" dmdx.exe -ez -buffers 2 -run eztest.rtf
if not exist eztest.azk goto diags
sendemail.exe -hsmtpserver.yourdomain.org tester@yourdomain.org "ez testing results" eztest.azk
if errorlevel 1 sendemail.exe -p2525 -hsmtpserver.yourdomain.org tester@yourdomain.org "ez testing 2525 results" eztest.azk
goto end
:diags
sendemail.exe -hsmtpserver.yourdomain.org tester@yourdomain.org "ez testing diagnostics" diagnostics.txt
if errorlevel 1 sendemail.exe -p2525 -hsmtpserver.yourdomain.org tester@yourdomain.org "ez testing 2525 diagnostics" diagnostics.txt
:end
 


Wider Internet Testing using an HTTP POST.

    So with another study here (2009) needing to run on the wider Internet as opposed to just across campus as the earlier study had needed, I set out to test how widely blocked alternative mail ports are across the globe. Turns out they're widely blocked, which pretty much rules out using SMTP (email) across anything other than a relatively controlled network. While I could have tried lots of different ports and maybe I would have found one that hadn't ever been used for SMTP before I suspect they all would have met with less than 100% success -- not to mention a lot of tester fatigue. Instead I wrote a program to POST the results over HTTP on port 80 as if it were a browser filling in a form, one that went to a script on one of my servers which then sent email on to the researchers. Kind of roundabout I admit, however it worked back in 2009 and the only problem with it then revolved around personal firewalls needing to be told that the program posting the results needs to be allowed to do so.  Most users savvy enough to have a personal firewall are fairly used to this and those that aren't savvy are used to just clicking on OK anyway so it's a moot problem -- although it would appear that some modern (2020) internet security programs are just blocking outgoing communications outright these days so having a peek at the reworked reliability section might be a good idea.

    The larger issue for anyone else trying to do a study like this is the script on a server that sends the form results as an email.  While scripts that email things are fairly common it is something that's going to require someone with significant technical chops to set up and a server to run it on, and I don't recommend using our server.  Our server can only semi-reliably send email to accounts on campus, and even then since about 2015 I've seen the campus email spam appliance snarf email from psy1 to on campus accounts from time to time because we don't use fancy mail server authentication stuff and I've had to yank the sysops' chain to get it to stop snarfing our stuff, not to mention the fact that while it does indeed send mail off campus this could end at any time.  Of course, you don't actually have to email the results, you could just write them to a file which is what the third method listed below does.

    The poster program I wrote is in the second communications test https://psy1.psych.arizona.edu/~jforster/dmdx/commstest2.exe and is called poster.exe (it's also available in DMDXUTILS.ZIP) and if you're setting up your own emailer poster takes a -h option (so -hyour.server.edu) for the host server to post data to (it also has a -p port option like the sendemail.exe program).  The first argument is a script name on the server to post the data to (it uses HTTP 1.1 so multi-homed servers are fine) and the rest are form control names and their values where if a value is the name of a file it will send the contents of the file instead of the value.  The batch file for using our server follows (you've got to change the bits in red):

start /wait "DMDX" dmdx.exe -ez -buffers 2 -run commstest2.rtf
if not exist commstest2.azk goto diags
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+testing -iemailaddr results=commstest2.azk
if errorlevel 1 pause
goto end
:diags
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=DMDX+failure -iemailaddr diagnostics=diagnostics.txt
if errorlevel 1 pause
:end
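
    If you were running your own emailer script on your own server rather than ours, the poster invocation would need the -h switch to name that server, something along these lines (a sketch only; your.server.edu and /cgi-bin/youremailer are placeholders for your own host and script):

poster.exe -hyour.server.edu /cgi-bin/youremailer email=your@email.address subject=poster+testing -iemailaddr results=commstest2.azk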

    So like the earlier email solution, to build it you'll want to drag your item file to WinZip to make the initial .ZIP file (along with any image files if you use them, though that's questionable with remote testing as you don't know the sizes of the screens your item file will be running on and unless you take steps to scale your images they'll appear larger or smaller depending on the subject's display density) as well as the new batch file along with DMDX.EXE, and instead of sendemail.exe you'll put poster.exe in there as well.  Then you'll tell WinZip to make a self extracting archive out of it and answer the following prompts in the same fashion to make your self extracting .EXE file.



The Luxury Yacht solution.

    With all the data coming in (2010) one email at a time experimenters rapidly discovered that keeping all the data straight and concatenated into the correct .AZK file was actually quite a bit of work, and prone to error to boot, so the call went out for something superior.  So I made a new CGI-BIN called UnloadAZK4web that takes the heart of UnloadAZK and buries it in a shell of my bsdemailer and stores the data on our server, psy1 (http://psy1.psych.arizona.edu/DMDX/ or http://psy1.psych.arizona.edu/cgi-bin/unloadazk4web), where experimenters can then download it with their web browsers.  As a backup it can email the data to an experimenter just in case the server tanks (or someone accidentally tells the server to nuke the data, see below).  This also spawned a request for a more rigorous timing method than DMDX's EZ mode so a new auto mode was created that trusts the refresh rate the operating system says the display is running at, and if the OS doesn't say, it goes with 60 Hz.

    I would also recommend that people using psy1 for testing sign up to the DMDX email list so that they receive psy1 down time notices as detailed at the end of the root page of the help.

    The problem here is that we have no control over the names of the experiments and a name collision would have two experiments combining their data.  Probably not catastrophic as item numbers would in all probability be different, however very messy to recover from.  So the new CGI generates an MD5 hash from the item numbers used in an experiment and appends that to the name of the item file.  Which is fine if your experiment always executes the same items every time it runs, however things like maze tasks (or my communications test) don't, so an additional control should be used to override the data used to generate the hash (called hash of course).  We've been using the .RTF file for the hash so that any subtle changes in the item file not reflected in the name of the item file will generate separate data, but you could also use any arbitrary string (so hash=kjahfkjahfkjasdhf for example).  Indeed there's some argument for using arbitrary strings as people are finding the multiple new files spawned from trivial edits irritating.

    In the past you would also have to use the hash control if your experiment produced a .ZIL file (say you're using a <zil> <zor> <zol> rating task or, as a number of people are doing lately, using <ztr> to gather typed subject IDs and then switching to <zor> mode for binary RT gathering) as UnloadAZK4web used not to pull item numbers out of a .ZIL file, but as of version 3.2.6 of UnloadAZK4web it now does so the hash control is no longer essential there.  Using an item file for a hash coincidentally exposed a bug in poster 1.1.0 where if you used two file controls the second one wouldn't get sent, so one has to be careful to use poster 1.1.1 (or later, the current version is in DMDXUTILS.ZIP) if one is using a file for the hash control.

    And then of course there's the determination of the item file's name.  It's not actually transmitted (results=commstest2.azk means send the contents of the file commstest2.azk, not the text "commstest2.azk") so DMDX 4.0.4.2 when invoked with -EZ or -auto spews the item file name in a comment in the .AZK (or .ZIL) and UnloadAZK4web looks for it.  If you don't use DMDX 4.0.4.2 (or later) then UnloadAZK4web will use the subject as the first part of the file name (before the MD5 hash) and if you don't include a subject it will just use the hash for the name (meaning you'll have to guess which file on the server has your data).

    I would also note that the original version of this remote testing package used a version of DMDX that was pretty ancient and didn't account for the Direct3D renderer needed for Windows 8 and 10 and so on, however I've since updated it from time to time with versions that do.  But if you wanted to use a feature only found in a later version of DMDX and wanted to use that package as a basis for your experiment you'd want to grab the latest version of DMDX and find its DMDX.EXE (probably in Program Files x86 \ DMDX, also later versions of DMDXUTILS.ZIP have a fairly current version of DMDX.EXE in them) and replace the one in this package.  So the new script used for UnloadAZK4web testing is as follows (you've got to change the bits in red):

start /wait "DMDX" dmdx.exe -auto -run commstest2.rtf
if not exist commstest2.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+commstest2+testing -iemailaddr hash=commstest2.rtf results=commstest2.azk
if errorlevel 1 pause
goto end
:diags
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=unloadazk4web+commstest2+failure -iemailaddr diagnostics=diagnostics.txt
if errorlevel 1 pause
:end

    Note that it passes the name /cgi-bin/unloadazk4web when posting the data versus /cgi-bin/bsdemailer when emailing a diagnostic of a failure (there's not much point sending diagnostics to UnloadAZK4web).  Also note we're passing the item file for the hash; I don't think that's such a good idea as item files can get kind of long for real experiments, and any garbage string will do fine (I tend to use things like hash=commstest2hash these days).
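
    So a version of that poster line using an arbitrary string for the hash rather than the item file would look something like this (a sketch only, substitute your own email address and file names):

poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+commstest2+testing -iemailaddr hash=commstest2hash results=commstest2.azk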

     So like the earlier posted email solution, to build your remote testing setup you'll want to drag your item file (along with any image files if you use them) to WinZip or WinRAR (with its SFX package) to make the initial archive file as well as the new batch file along with DMDX.EXE and poster.exe.  Then you'll tell whatever archive program you're using to make a self extracting archive out of it and answer the following prompts in the same fashion as the email based solutions, telling it to execute the batch file and so on, to make your self extracting .EXE file.
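
    If you'd rather drive WinRAR from the command line than click through its dialogs, something along these lines should do it (a sketch only -- Setup, TempMode and Silent are WinRAR SFX comment directives and the yourtest.* names are placeholders, so check WinRAR's SFX documentation for the exact commands your version supports):

rem write the SFX comment script that runs the batch file from a temporary folder
echo Setup=yourtest.bat>sfxconfig.txt
echo TempMode>>sfxconfig.txt
echo Silent=1>>sfxconfig.txt
rem build the self extracting archive using that comment script
"C:\Program Files\WinRAR\WinRAR.exe" a -sfx -zsfxconfig.txt yourtest.exe dmdx.exe poster.exe yourtest.bat yourtest.rtf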

    There are a few considerations that might not immediately occur to people that have cropped up over the years so I'll call attention to them here.  First off, if you're testing in an international environment you probably want to use the #keyboard input device as the standard keyboard device only works well in English-speaking countries, not to mention using response keys other than the shift keys.  You might also consider using <safemode 3> if you have a long experiment as subjects are prone to switching away from DMDX with ALT-TAB.  And last but not least you might consider the Notepad option to give subjects proof of completion.

    There are several errors that UnloadAZK4web can throw; it will prepend FAILURE: to the subject when it does throw one and will append a failure control at the end of the email with more detail.  Typically unless the failure also has WARNING: after it the data won't have been stored on the server.  For now UnloadAZK4web will pretty much append any text file regardless of whether it's an .AZK file or not (meaning you could in fact toss diagnostics at it, but then your subject count would be off and you'd have to cut the contents out before ANALYZE ever processed it which kinda defeats the purpose of making a script to lower the amount of cutting and pasting an experimenter has to do).  If we see abuse of such glasnost then UnloadAZK4web will start rigorously parsing for .AZK (or .ZIL) components and if they're not found it will reject the post.  As it is UnloadAZK4web will purge data files older than 6 months and the directory listing will warn that a file is about to be deleted once it's more than 5 months old.  Recent misconfigured tests almost mandate I limit the amount of data, so the limit is 20 MB per file, at which point a warning will appear in UnloadAZK4web's directory listing, and at 22 MB the file will be deleted.  As of version 3.3.0 of UnloadAZK4web it also sends a separate email notification about the size warnings to the experimenter's email address when new data is posted to one of those files.  Others without a UofA email address would use a script more along the lines of this one (which won't attempt to send any email backup data -- although as noted earlier psy1 is indeed capable of sending email to the wider world right now (01/07/12), exactly how long that will last is anyone's guess, so I strongly recommend you use the above script that at least gives you some idea of what went wrong when something does fail, because believe me, it's a rare experiment where you don't encounter something you didn't think of beforehand):

start /wait "DMDX" dmdx.exe -auto -run commstest2.rtf
if not exist commstest2.azk goto end
poster.exe /cgi-bin/unloadazk4web hash=commstest2.rtf results=commstest2.azk
:end
if errorlevel 1 pause

    And then there's the issue of testing.  Say you're testing the package and it works and it's sent data to psy1 and then you want to start collecting real data but there's this file on psy1 now that's got test data in it.  We've included the ability for you to nuke data in files you've caused to be on psy1 by allowing you to send a poster command to UnloadAZK4web that has the control delete instead of the control results.  You'll need to send it a sample .AZK file because that's probably the easiest way to get the item file name to UnloadAZK4web (you could send it in the subject if you weren't using the items in the .AZK for the hash and if you didn't have the item file in the directory you execute the poster command from).  So from a command prompt in a directory that has poster.exe and the item file and at least one .AZK in it this command line could be given to nuke the old data (you've got to change the bits in red):

poster.exe /cgi-bin/unloadazk4web hash=commstest2.rtf delete=commstest2.azk

    I've just realized you can probably use a web browser rather than having to use poster if you've got a simple hash (rather than using an item file as the hash), in which case you can provide the name of your item file with the subject control (you've got to change the bits in red):

https://psy1.psych.arizona.edu/cgi-bin/unloadazk4web?hash=somesimplehash&delete=youritemfile.azk&subject=youritemfile.rtf


Reliability / Proof of Completion.

    If you happen to care about every single subject (say you're paying them or their grade depends on it) you might want to consider making your batch file open a copy of the data in Notepad and advise the user to save the file as proof of completion; should you find their data missing at a later date you can then ask them to email it to you (internet security programs appear to be taking the silent route these days so if a user isn't savvy enough to disable one during a remote testing task data can get lost).  Here you stuff some <emit> keywords first thing after the parameters in your item file with something like <emit> <emit> <emit Here's a copy of your data as proof of completing the study,> <emit please save the file somewhere in case it's needed for proof of completion.> <emit> <emit> and then, before posting the data back to psy1, open it in Notepad like this (the demo is here):

start /wait "DMDX" dmdx.exe -auto -run poccommstest2.rtf
if not exist poccommstest2.azk goto diags
start "proof of completion" notepad.exe poccommstest2.azk
poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+poccommstest2+testing -iemailaddr hash=poccommstest2hash results=poccommstest2.azk
goto end
:diags
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+poccommstest2+failure -iemailaddr diagnostics=diagnostics.txt
:end


   Of course if you don't like the <emit> method of telling the subject what's up you could prepare a text file with your instructions and append the subject data to it with the copy command before opening the result in Notepad, thus hiding messy DMDX headers from the subject's view unless they scroll down to them.  Also I guess another possible use of the proof of completion additions would be if psy1 ever goes away permanently or an experimenter can't use it for some reason; here they could use this Notepad route to get subjects to email them the data.
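
    A sketch of that copy approach (instructions.txt and proofofcompletion.txt are just placeholder names here):

rem prepend canned instructions to the data and show the combined file instead of the raw .AZK
copy /b instructions.txt+poccommstest2.azk proofofcompletion.txt
start "proof of completion" notepad.exe proofofcompletion.txt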

    These days the concerns in the rest of this section are moot, the internet is much more stable than it used to be and the afflicted server that caused these problems has long since been replaced so feel free to skip ahead to the next section.

    After having the remote testing capabilities up for a while it was noticed that DNS was flaky for psy1.psych.arizona.edu, so if a subject's script was trying to post data to it and DNS happened to be down at that moment the data would be lost.  The quick fix was to use -h128.196.98.40 in the poster command lines so it no longer had to use DNS to resolve psy1.psych.arizona.edu to 128.196.98.40; the longer term fix was to update poster.exe to use this automatically and to also retry a number of other internet related functions.  The versions of poster.exe in some of the previous examples haven't been updated so if you are going to build your own remote testing setup I recommend using the latest version (1.2.1 as of writing) in the DMDXUTILS.ZIP package or using the one in the reliability test itself (https://psy1.psych.arizona.edu/~jforster/dmdx/reliabilitytest.exe).  Which by the way has a substantially nicer script from the user's perspective that fully breaks out failures and could even be expanded to attempt to educate the user on sending their data in manually if someone cared to (by either echoing the file to the screen and using the clipboard instructions in the script already or by telling the user the location of the file and so on).  However I'm guessing such efforts aren't needed at the moment, so far we have 100% reliability from all corners of the web using 128.196.98.40 (as far as communications are concerned, people can still have machines that can't run DMDX).

start /wait "DMDX" dmdx.exe -auto -run reliabilitytest.rtf
if not exist reliabilitytest.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web+reliabilitytest+testing -iemailaddr hash=reliabilitytest results=reliabilitytest.azk
if errorlevel 1 goto error
goto success
:diags
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=unloadazk4web+reliabilitytest+failure -iemailaddr diagnostics=diagnostics.txt
if errorlevel 1 goto error
:success
echo off
echo .
echo .
echo .
echo The test was a success, thank you for helping improve DMDX.
goto end
:error
echo off
echo .
echo .
echo .
echo Alas the communications failed. Please copy the error messages above and
echo email them to some@email.address (Clicking the C:\ icon in top
echo left corner and selecting Edit / Mark will allow you to highlight text
echo in this window with the cursor and Enter will copy it to the clipboard).
:end
echo .
pause

 

    And then while this DNS stuff was all going down Thomas Schubert offered us the use of his server in Germany (scriptingrt.net) as a backup UnloadAZK4web server and after a few server tweaks, a number of tweaks to UnloadAZK4web and a bit of new functionality it now runs on his server as well as psy1 (many thanks to Thomas).  This means that remote testing setups can either post their data to both servers or post to one if the other is failing (however the UnloadAZK4web on scriptingrt.net hasn't been up for years so this is all rather academic).  The trouble with posting data to both servers is determining just what data went where and what's duplicated if one of the servers was down or unreachable for any number of subjects.  Given the recent improvements to posting data to psy1 where DNS failures no longer cause data loss I'm recommending people post first to psy1 and only if that fails go on to post data to Thomas' server.  For people that host their experiments on the arizona.edu servers the likelihood of psy1 being down and those servers being up is even lower than just plain old internet failures, but it can still happen.

    Among the differences between the servers is that scriptingrt requires the extension .cgi on its CGI files and it doesn't require them to live in a cgi-bin folder, so the URL for the UnloadAZK4web data file listing on Thomas' server is http://scriptingrt.net/unloadazk4web.cgi.  Then there's the auxiliary decision of which server to post to first (assuming you're not going to post to both).  scriptingrt is in Germany so if you're testing on that continent perhaps communications are less likely to fail to it; I haven't noticed any routing flaps in the US for the last few years so continental differences may be moot.  Still, you may decide to post first to scriptingrt after all as it is not subject to the whims of campus sysadmins who may at a moment's notice decide they're fed up with allowing on campus SMTP connections to go through without authentication -- which is pretty much going to kill off off campus use of psy1 if people need the email acknowledgement that UnloadAZK4web sends out each time data is stored.  Then again, scriptingrt is subject to Thomas continuing to lease the server and pay for its domain name.  You also need to post to its DNS name instead of its IP (unlike the default psy1 post, which uses the IP these days) because it's likely to be a multi-homed server (many sites, one IP address).

    Of course, having put all the extensive retries into poster.exe, any failure to post to psy1 is going to take a good fraction of an hour to expire (one I tested today was over a half hour), so I have altered poster.exe again (now version 2.1.0) to allow a specification of the number of retries (-r) to attempt.  Here we can whip off a couple of quick attempts, first to psy1 and then to scriptingrt, and if either of them is up and running they'll succeed.  Then if they both fail we fall back on extensive retries to both servers and hopefully one of them comes up during the time that takes (you've got to change the bits in red):

start /wait "DMDX" dmdx.exe -auto -run redunancytest.rtf
if not exist redunancytest.azk goto diags
poster.exe -r1 /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web+redunancytest+testing -iemailaddr hash=redunancytest results=redunancytest.azk
if errorlevel 1 goto fallback
goto success
:fallback
poster.exe -r1 -hscriptingrt.net /unloadazk4web.cgi email=some@email.address subject=unloadazk4web+redunancytest+testing -iemailaddr hash=redunancytest results=redunancytest.azk
if errorlevel 1 goto moreretries
goto success
:moreretries
poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web+redunancytest+testing -iemailaddr hash=redunancytest results=redunancytest.azk
if errorlevel 1 goto moreretriesfallback
goto success
:moreretriesfallback
poster.exe -hscriptingrt.net /unloadazk4web.cgi email=some@email.address subject=unloadazk4web+redunancytest+testing -iemailaddr hash=redunancytest results=redunancytest.azk
if errorlevel 1 goto diags
goto success
:diags
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=unloadazk4web+redunancytest+failure -iemailaddr diagnostics=diagnostics.txt
if errorlevel 1 goto error
:success
echo off
echo .
echo .
echo .
echo The test was a success, thank you for helping improve DMDX.
goto end
:error
echo off
echo .
echo .
echo .
echo Alas the communications failed. Please copy the error messages above and
echo email them to some@email.address. (Clicking the C:\ icon in top
echo left corner and selecting Edit / Mark will allow you to highlight text
echo in this window with the cursor and Enter will copy it to the clipboard).
:end
echo .
pause

 

Windows 8 and Unicode.

    And then Microsoft went and released Windows 8 which doesn't actually contain DirectDraw any more but instead emulates it, and of course DMDX uses DirectDraw to manipulate the screen and that emulated version of DirectDraw in Windows 8 (and 10) isn't so crash hot timing wise (you'll see lots of 25 millisecond display errors and if you looked really closely you'd probably see some frames not being displayed at all), so I had to go and craft version 5 of DMDX that has an optional Direct3D renderer in it.  People have two choices here: one, use the new version 5 binaries (by pulling the DMDX.EXE file out of Program Files / DMDX after a recent installation of DMDX, say, or looking in the later versions of the utilities package that contain a DMDX executable, it might not be the latest but you don't have to install DMDX either) and let DMDX choose which renderer it wants to use based on the OS it finds itself running on, or two, just force DMDX to use the Direct3D renderer with -d3d on the command line.  At this stage I'm fairly sure the second option is viable unless you're looking at testing on some very ancient hardware; I've set up an example using it that will spew diagnostics at me if it fails but there's already been fairly widespread testing of this and no significant issues have arisen lately.  It also uses the relatively new <prose> and <instruction> keywords that make typing and displaying text more hospitable to different display dimensions and international keyboard differences.  If people go with the automatic route they can tell which renderer was used by looking at the video mode diagnostics, as when Direct3D is being used the code D3D will occur before the Video Mode text in the output file:

**********************************************************************
Subject 1, 06/03/2014 10:23:13 on 666-DEVEL, refresh 16.67ms
Item RT
! DMDX is running in auto mode (automatically determined raster sync)
! D3D Video Mode 1280,1024,24,60
! Item File <commstest4.rtf>

    Using <prose> means there's a chance people can type in extended characters and while this might have worked with versions of DMDX earlier than 5.3.3.0 if the local machine's default ANSI code page had those characters in it, with 5.3.3.0 we now have the option of making DMDX emit typed data using Unicode's UTF-8 format.  One will want to put -unicode on the DMDX command line and <prose utf8bom> in the item file's parameters so that DMDX (a) uses UTF-8 coding for extended characters and (b) puts the UTF-8 byte order mark (BOM) at the start of the saved data so that when you view the data file with a browser it will know that the text is UTF-8 and you won't get mojibake for the Unicode characters -- and that BOM has to be in the first data posted to UnloadAZK4web (ie you can't start a data gathering operation without <prose utf8bom> and expect programs to recognize the resulting data as being UTF-8 without gerfingerpoken, which is what I had to do).  The data file that's emailed to you won't be so lucky unless you can tell your mail client to use Unicode for that piece of mail, although if you carefully cut the squiggles out immediately following the results= string to the end of your data and pasted them into a new text file and reopened it, it would probably be interpreted as Unicode.  You will of course need to make sure you're using a later version of DMDX than is found in the other communications tests and you'll also need to use a version of poster that's 4.0.1 or later as prior to that it would happily convert all your UTF-8 characters to 0xFF, urg.

    Seeing as people are not likely to be interested in the diagnostic spew here's a version of the batch file more amenable to running someone's actual experiment (you've got to change the bits in red):

start /wait "DMDX" dmdx.exe -auto -unicode -d3d -ignoreunknownrtf -run commstest4.rtf
if not exist commstest4.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web+commstest4+testing -iemailaddr hash=commstest4.rtf results=commstest4.azk
goto end
:diags
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=poster+commstest4+failure -iemailaddr diagnostics=diagnostics.txt
:end

    If people are interested in the diagnostic spew here's the actual batch file that runs that test and if DMDX fails to run at all it sends us the system information about that machine so I can try and guess what it is that's stopping DMDX from running (you've got to change the bits in red -- but I can't imagine anyone else would be interested in using this):

start /wait "DMDX" dmdx.exe -auto -unicode -d3d -ignoreunknownrtf -run commstest4.rtf
if not exist commstest4.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web+commstest4+testing -iemailaddr hash=commstest4.rtf results=commstest4.azk
goto end
:diags
if not exist diagnostics.txt goto systemdiags
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=poster+commstest4+failure -iemailaddr diagnostics=diagnostics.txt
goto end
:systemdiags
echo off
echo .
echo .
echo .
echo It would appear DMDX has failed to run at all. Please wait while we
echo gather some diagnostic information to help us improve DMDX. If you
echo don't wish to provide us with such information hit CONTROL-C now.
echo Otherwise hit space to continue...
pause
echo on
msinfo32 /report systemdiags.txt
cmd /a /c type systemdiags.txt>systemdiagsansi.txt
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=poster+commstest4+system+failure -iemailaddr diagnostics=systemdiagsansi.txt
:end
if errorlevel 1 pause

 


Getting it all going.

    While I was getting this all working I wasn't bundling everything up with WinZip every time I wanted to test some feature; instead I would have a local directory on my development system with all the files that would go in the zip file (so DMDX.exe, poster.exe, the batch file and item file) and then execute the batch file in that directory.  Which works nicely the first time, however the next time you run the script poster will post the .AZK file with the first run's data in it along with the second run's data instead of just the second run's data.  And likewise for the third and so on, so what you need to do is put a line at the start of the batch file that deletes any .AZK files before it runs DMDX (you've got to change the bits in red):

del *.azk
start /wait "DMDX" dmdx.exe -auto -ignoreunknownrtf -run commstest4.rtf
etc...

    I figured this was fairly self evident but there's at least one person out there that hasn't figured this out and has tens of megabytes of data posted on psy1 (whereas the nearest competitors are merely a few hundred kilobytes) that you can see is clearly riddled with duplicates from not removing old AZK files when running out of a local directory.

    You'll also notice that command line has -ignoreunknownrtf in it.  This is because if you have DMDX installed on a machine that you're also running a remote testing script on and you've turned off DMDX's ignore unknown RTF control for some inexplicable reason your remote testing instance will find that control setting in the registry and blow your remote testing session up if you indeed have unknown RTF control words in your item file.  Yes, not likely to happen but it happened to at least one person out there besides me.

    Recently an experiment hit an unexpected glitch related to using the keyboard device where most of the way through the task the script failed because DMDX wasn't able to map an input.  Usually in a remote testing task one would be using the #keyboard device to handle international keyboard name variations, however this task wasn't, a point I should perhaps have belabored somewhat before now.  Prior to this task pretty much any remote testing task either ran to completion or failed outright (usually during development) so it never occurred to me to consider trying to retrieve the partial results of a run; in debugging the collateral damage we realized that if they could get hold of the job1.zil file they could have gotten most of that subject's data back, so perhaps other people might want to gather it along with the diagnostics on failure.  poster.exe is not limited in the number of data files it can send back so you can simply add another control to the bsdemailer line in your batch file sending the job1.zil data (you've got to change the bits in red):

poster.exe /cgi-bin/bsdemailer email=some@email.address subject=poster+commstest4+failure -iemailaddr diagnostics=diagnostics.txt job1=job1.zil

    If DMDX failed to run at all and never got around to generating data the email will just have the line job1=job1.zil in it, otherwise you'd get the RTs gathered to the point of failure.

    While we're talking about keyboards I suppose I should mention the fact that remote machines are not likely to have run DMDX before and are thus very likely to have Microsoft's insane Sticky Keys feature still active, where hitting a shift key five times in a row pulls up a dialog about enabling Sticky Keys and thus blowing your data gathering out of the water.  While DMDX will recover from this once it gets the focus back you've trashed at least a couple of items' data (unless you've used <safemode 3> of course).  The solution used by a number of people is to map the Z and M keys for example, and because you're likely to be using the #keyboard device you'd do that with <mpr +#50> and <mnr +#44>.  You can find what key numbers those are with TimeDX's Input Test, some people like to use a slash instead of the M, which is +#53.  Of course if you're testing in various countries throughout Europe while the #44 key is probably still in the bottom left of the key layout it's probably not a Z key anymore, so good luck conveying instructions to your subjects -- but I suppose language barriers are also high enough that no one would ever be trying to design a remote testing task running across such a variety of countries.

    And then there's the issue of Chinese or other East Asian IMEs, input method editors, where a single keystroke is assembled from multiple other keystrokes and this of course involves displaying stuff on the screen and blowing DMDX testing up.  I've no idea what games do to avoid having the IME pop up on the screen, perhaps they ask people to disable their IMEs, but my guess is the best solution is to choose keys that don't invoke the IME and after looking at a layout guide I'm guessing TAB, +#15, and ENTER, +#28, are the best choice.  Perhaps ` +#41 (the back tick and tilde ~ key) and BACKSPACE +#14 work too (although the back tick moves all over the place on other keyboard layouts so it might not be too great).  If you find a solution let us know and I'll update this guide.  Good luck!

    Another thing to be aware of is that machines are not necessarily going to have 60 Hz displays on them so you should avoid using DMDX's tick based timing commands.  While the absolute vast majority of them are in fact still 60 Hz I've seen machines with rates that are lower (24 Hz is about the lowest I've seen) and much higher (240 Hz, and there are 480 Hz monitors now and future ones are likely to be 1 kHz).  So avoid using <fd> or % and instead use <msfd>, and on top of that there are also things like <delay> or D where <msdelay> should be used instead (the 240 Hz machine tossed the item file that used <delay> in it for a complete loop as DMDX couldn't even prepare the display in the time specified).  With the prevalence of hyper refresh rate monitors (10% or more of machines at the end of 2024 in Asia) I've updated the other relevant tick based keyword in version 6.5.2.0 of DMDX so there's also <msfbd>.  With that, a remote testing item file with <msd -800> <msfbdur 533> in its parameters is assured of uniform operation very close to traditional DMDX operation regardless of refresh rate.

    And while we're talking about item files I should mention that when you run DMDX with -run on the command line, making it automatically execute that item file, it automatically saves the data and exits DMDX once the item file has run to completion.  A nice thing to do for the subjects is to make the last item display a message for a second or three before finishing so the subject understands they're done, or you can even put up an instruction telling them that after they've hit the space bar the data will be sent in.

    Another feature you might want to include in your item file is <safemode 3>.  Traditionally with DMDX if you ALT-TAB away from a running job (accidentally hitting the Windows key can do it as well) you get presented with a blank screen and either have to issue a request if an instruction was being displayed or respond if the timeout was really long and DMDX was gathering an RT; in either case you have to guess what's up (on some machines anyway, one of the Windows 10 machines here actually caches the old display, it's completely evil).  No particular big deal to people that are familiar with DMDX (indeed most long term users of DMDX wouldn't try it given DMDX's historical antagonism towards task switching), however if you're remote testing it looks like use of ALT-TAB is not unheard of as I see data on psy1 showing subjects occasionally switching away from DMDX and there was one recent task reported to me where the subject was unable to resume execution of their task.  So I've made a new safe mode, number 3, that makes DMDX deal with ALT-TAB or anything else that steals its focus (including declining to abort a job) by repeating the most recently executed item.  This means that if an RT was being gathered that item will be presented again (the first one should get an ABORTED message in the data file) and this is true even if DMDX gathered the RT and got to the feedback, as that RT gathering item was still the most recent item executed.  Again, the repetition only happens if the subject uses ALT-TAB or hits ESC and then declines to abort.  You'll want to make sure you're using version 6.1.5.0 of DMDX or later as that's when safe mode 3 was added; the commstest4 package has been updated to use it and prompt for ALT-TAB testing.

     Also note that while DMDX can now handle special characters in file names when the Unicode option is in use, I wouldn't recommend using non-ANSI characters in item file names with versions of DMDX prior to 6.2.0.0.  While doing so might work in your locale, when you're remote testing the locale of your subjects can be quite different and when a non-ANSI name gets translated the translation depends on the locale, so the name can be translated into who knows what.  After I made DMDX's -run command use wide Unicode interfaces in 6.2.0.0 I updated poster to version 5.0.0 and sslposter to 3.0.0 to use the wide file creation routines and, other than being blindingly easy because I could cheat and have both an ANSI command line with the original switch parsing code and a wide copy of it that I only use to open the files, it was almost a non-event; I didn't even have to translate back and forth between wide characters and UTF-8 the way DMDX has to do like a banshee.  And that was about that, although I would note that sticking a <prose utf8bom> in the parameters of your item file will make your browser display the data file nicely instead of the UTF-8 bits being mojibake -- even if you aren't using any of the other prose features.

     Then there are all sorts of "interesting" issues surrounding the use of Unicode in a batch file, like having to put a chcp 65001 command into the batch file to set the code page to UTF-8 and of course using UTF-8 in the batch file itself, but they're fairly easy to deal with (like use Notepad++ if your editors don't handle UTF-8).  There's talk about having to use a Unicode capable font such as Lucida Console for the command window but beyond the fact that I don't think you have any way of controlling that on a remote testing setup my experimentation didn't throw any errors using the standard Consolas font -- even though the display was corrupted the actual characters were correctly communicated to DMDX and poster or sslposter from the command line, and the fact that the display is garbage doesn't really matter for our purposes (apparently without a Unicode capable font being used the command processor could choke on the Unicode characters and would either not find the files or display a "system cannot write to the specified device" error, but I'd have to think that was an old post and modern consoles don't suffer from the problem).  Should you want to use non-ANSI characters in names, as long as you pick up the later versions of poster and sslposter from the utilities package you're probably good to go once you use chcp 65001 in your batch file.
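
    For instance, the top of a Unicode friendly batch file might look something like the following (a sketch only; myunicodetest.rtf is a placeholder name and the batch file itself would need to be saved as UTF-8):

rem switch the console to the UTF-8 code page so Unicode file names survive the command line
chcp 65001
del *.azk
start /wait "DMDX" dmdx.exe -auto -unicode -d3d -ignoreunknownrtf -run myunicodetest.rtf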

    And then there's the question of user agent strings.  Since writing this section my sslposter.exe application has been updated and made usable for remote testing, so even if you don't want to use the private data posting discussed in the SSL section, just using sslposter.exe instead of poster.exe renders all the considerations in this section moot (it's available in the utilities package).  If you don't want to use sslposter for some reason read on.

    If you're noticing some locations succeed at posting data to psy1 and some other locations fail, or you're just flat out not succeeding anywhere (and my communications test also fails, if you can't tell the results of that test go at the end of this file), you may be running afoul of some internet security appliance that doesn't recognize poster.exe as a legitimate internet application.  This can also occur if you live in places like Iran and China that have great firewalls.  This is because poster.exe identifies itself as poster/2.1.2 and not say something like Firefox's user agent string Mozilla/5.0 (Windows NT 10.0; WOW64; rv:59.0) Gecko/20100101 Firefox/59.0.  It turns out that the browsers all use Mozilla as the first word in their user agent strings so it's conceivable that changing poster's user agent string to Mozilla/poster (the /2.1.2 gets added by the code, you have no control over that) with -uMozilla/poster on poster's command line will solve the problem of poster getting blocked by firewalls.  If you want to put spaces in the user agent string like Firefox's user agent string above (and technically Mozilla/poster/2.1.2 is an invalid user agent string) you have to escape it with quotes (so "-uMozilla/5.0 poster" will generate the user agent string Mozilla/5.0 poster/2.1.2).

poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web+commstest4+testing "-uMozilla/5.0 poster" -iemailaddr hash=commstest4.rtf results=commstest4.azk

    And then there's the issue of IPv6, the next protocol that the internet will be using, which psy1 doesn't use (it's still IPv4).  Twenty years old at the time of this writing (2018) and still only used by 25% of the web's servers, it's still largely optional.  I figured I'd have to make poster use IPv6 because the developing world would be using IPv6 on account of the fact that there's bugger all IPv4 addresses left for them, but according to recent articles I've read that's not the case, they and the rest of the web are doubling down on IPv4.  Then my ISP upgraded my router to support IPv6 so I had no excuse not to at least make poster support IPv6, so version 4.0.0 of it now does, however psy1 still doesn't.  If someone finds a population they want to test that only supports IPv6 I'll check and see if the building has IPv6 routed to it and turn IPv6 support on on psy1, but it's just as likely that any corner of the internet that's only got IPv6 routed to it also knows how to thunk to IPv4 so I'm not holding my breath.  Of course if you set your own server up that talks IPv6 then the upgraded version of poster should work for you (it is in the utilities .ZIP file).


Secure communications using SSL/TLS.

    So after about a decade of people using some variant of the Luxury Yacht solution someone finally came up with a scenario where they couldn't anonymize their data gathering (they were using some typed responses), necessitating a number of new tools, not the least of which was me dusting off an old version of poster I'd made called sslposter that used some ancient version of OpenSSL to post data to secure HTTPS websites (SSL or TLS being the technology behind HTTPS), used DLLs for the cryptographic stuff and was generally less than convenient to use.  Current versions of OpenSSL are much nicer (people that wind up using sslposter.exe should donate to the OpenSSL developers) and offer static linking of the cryptographic code (so no more DLLs), but of course they changed a few things as well.  Minor degrees of hair pulling involved.  sslposter.exe winds up being quite a bit larger (2.5 MB or so) than poster.exe (350 KB or so), too bad.  Next up was turning SSL/TLS on in psy1's Apache web server; initially we just self signed a certificate seeing as sslposter blindly trusts whatever you point it at, however using Let's Encrypt to get a free certificate is significantly easier these days so now psy1 has regular HTTPS capability which you can check out using the secure version of this page on psy1.  After that we have the interesting tweak of not wanting the data posted publicly, and here we had to make UnloadAZK4web look for a new control that specifically allows experimenters to nominate the directory where the data is stored instead of the world readable public one.  And of course UnloadAZK4web is no longer emailing a backup of the data when it sees said directive as email is not secure, so if the server bites it you gotta live with it (all that happens is the results control that contains the run data is removed from the email confirmation when the alternate data directory control is found, so you still get notifications of data being gathered).

    Because the data is in a private non-world readable directory one would normally have to have an account on the server to access it.  I'm not giving out accounts on psy1 to other people unless they're personally known to me (and even then I'd be reluctant, having to schedule down times when I have to restart the thing for patches would get old real quick) so people wanting secure communications will either have to stand their own Linux server up (I'm always happy to set up UnloadAZK4web for them) or they can use the somewhat roundabout method I came up with to allow people to securely retrieve their stored data from psy1 with a special sslposter invocation (see waaay below).

    sslposter doesn't have all the switches that poster has, however it automatically uses the user agent string Mozilla/5.0 sslposter/2.0.0 and, not only that, the user agent is transferred securely with HTTPS so -u is a non-issue as only the server sees it.  While I added code to poster to use IPv6 I suspect the OpenSSL routines optionally use IPv6; I haven't figured out a way to definitively test that yet.  The -b switch used in the next topic hasn't been implemented but other than that sslposter.exe may be a better thing to use than poster.exe just because suspicious network appliances can't peek at its data (I certainly tested it and it works).  The -p switch is no longer used as the port number is part of the host name specification (see below).

    So in order to hide the data from world view UnloadAZK4web 3.0.0 introduces a new control (or rather it recognizes a new control) and that is the datadir control.  Here I recommend using a directory in the experimenter's /home folder, for instance on psy1 I've used /home/jforster/DMDX (as of 3.0.2 it's actually limited to a directory somewhere under the /home folder).  If you're standing your own server up it's possible the directory permissions will need to be set to let the web server write to it, I certainly had to set the owner and the group of that folder on psy1 to www-data, which is the owner and group of the Apache2 web server that's running on psy1.  Also note that data posted using the datadir control has to meet a higher standard than normal data posted to UnloadAZK4web because when the delete control is used it will only delete files that DMDX has produced (just in case some miscreant somehow uses it to attack psy1).  This means that people getting a remote testing job going have to be more aware of what they're doing, as any number of times I've seen people not understand how poster works and wind up posting a file that just has the word "power" in it, for instance, when what they meant to do was post power.azk.  Here's an approximation of the batch file from the new sslcommstest package showing off the use of the new sslposter.exe and the new datadir control (you've got to change the bits in red):

del *.azk
start /wait "DMDX" dmdx.exe -auto -unicode -ignoreunknownrtf -run sslcommstest.rtf
if not exist sslcommstest.azk goto diags
sslposter.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+sslcommstest+testing -iemailaddr hash=sslcommstesthash results=sslcommstest.azk datadir=/home/jforster/DMDX/
goto end
:diags
sslposter.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+sslcommstest+failure -iemailaddr diagnostics=diagnostics.txt
:end
 
    If you were setting up your own server then the sslposter command line would need to have a -h host switch to specify its address, for instance -hyour.server.edu:443, and you'd also want the datadir control to specify your account instead of my jforster account, datadir=/home/youraccount/DMDX/ for instance.  Also note the port specification in that host specification -hyour.server.edu:443.  Here you need to specify port 443 (the HTTPS port) as part of the host name; while using poster.exe one can specify a host name independent of a port number (usually 80 for HTTP posts) it's not so with sslposter.exe (one of those hair pulling things noted above with later versions of OpenSSL, and given how freakily sslposter behaves when the port specification is forgotten, as of 2.1.1 it now adds :443 if there's no port specified).  Beyond that there are a couple of unobvious considerations here.  One is what to do on a failure in that you still want notifications, particularly as you're getting it all working, but those diagnostics are potentially going to contain data that you don't want transferred around the place without secure communications and email is most specifically not secure, a conundrum you'll have to solve (maybe some PowerShell scriptfu to take only the first few and the last couple of lines of diagnostics.txt, there's a sketch of that below).  Also, if you were using your own server, while the successful sslposter invocation would be using your.server.edu the bsdemailer for the failure diagnostics doesn't have to, email not being encrypted anyway, and I don't see the point in setting up two CGI binaries for people when only one is needed.  While I do use sslposter for the diags that's simply because I'd rather not have to put both sslposter.exe and poster.exe in the self extracting archive.
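
    For what it's worth, here's one way you might do that trimming, a rough sketch only: a PowerShell one-liner you could drop into the :diags section in place of the existing sslposter line, keeping just the first five and last three lines of diagnostics.txt (diagtrim.txt and those line counts are just things I've picked for the example, adjust to taste):

powershell -Command "$d = Get-Content diagnostics.txt; $d | Select-Object -First 5 | Set-Content diagtrim.txt; $d | Select-Object -Last 3 | Add-Content diagtrim.txt"
sslposter.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+sslcommstest+failure -iemailaddr diagnostics=diagtrim.txt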

 
    Using the datadir control has several repercussions beyond the data simply not being readable by the world (assuming permissions on the directory you've asked it to be written to are set correctly, of course).  The first of them used to be that UnloadAZK4web would no longer be able to delete the data once it gets older than six months, however with UnloadAZK4web 3.4.0 I modified the data deletion routines such that every time data is posted to any given datadir override folder it scans that folder for old files and nukes them (the conventional data directory is scanned for old data any time UnloadAZK4web has to generate a directory listing).  Later versions of UnloadAZK4web also expanded the delete control's functionality to work with the datadir control.

    And if you've set up your own server and you get "Unable to open lock" failure messages, it's probably because the permissions of the data directory are blocking the web server from writing to it (see the earlier comment about the www-data owner and group names).

    For people that can't stand their own Linux server up and would like to use psy1 for their secure data collection, version 3.1.0 of UnloadAZK4web now recognizes a retrieve control, similar to the delete control, that along with the datadir and hash controls allows one to pull securely stored data over HTTPS with sslposter 2.1.0, which now has poster.exe's -o output switch that makes it write all the gathered data (excluding the HTTP headers) to a file.  Like delete, the retrieve control needs to be fed a valid .AZK file so UnloadAZK4web can pull the item file name from it, and the hash and datadir controls obviously need to be whatever they were in your batch file that drove the remote testing.  In order to keep the data secure an additional control is required (the ?????=????? below), the details of which people will have to email me for as publishing it here isn't even remotely secure, and while me sending it to you over email isn't a whole lot better, a single email amid the storm of them is secure enough I suspect given the propensity of any number of web sites willing to send passwords over email -- although I might take a page out of the steganography handbook and send it as an image (and in case I forget, the instructions are in my sslcommstestretrieve.bat file so that's what you should ask for).  In order to limit the chances for miscreants to misuse UnloadAZK4web the retrieve control will only retrieve files that DMDX generated, so if you're getting things going and wind up accidentally posting stuff that isn't valid data you won't be able to retrieve those files.  While you could get away without the -o switch by copying the text out of the console you'll find that every couple of thousand or so characters there will be non-DMDX data in there, this is the chunk size information that's stripped out when -o saves the received data (when -o is in use you won't actually see the text on the console and will instead just get to see the HTTP headers and the chunking information).  And because UnloadAZK4web is primarily an HTML generating script you'll have to save the output as a .HTM or .HTML file, open it in a browser and copy your data out into a text editor, as it will have UnloadAZK4web's title in there (not to mention various HTML tags like <head>, <body> and <pre>).  You'll also want to make sure that the first line is blank before the Subjects Incorporated line if you're going to use it with any other DMDX utilities.  Like I said, roundabout, but hey, probably less work than standing your own Linux server up.  Here's the command to get the data for the sslposter test (you've got to change the bits in red and get the blue control and its value from me):

sslposter.exe /cgi-bin/unloadazk4web hash=sslcommstesthash retrieve=sslcommstest.azk datadir=/home/jforster/DMDX/ ?????=????? -ooutput.htm
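
    If opening the retrieved .HTM in a browser and copying the data out by hand gets old, a bit of PowerShell scriptfu can probably do it for you.  A rough sketch only, assuming the run data lands inside the <pre> element of UnloadAZK4web's output (output.txt is just a name I've picked for the example):

powershell -Command "$h = Get-Content output.htm -Raw; if ($h -match '(?s)<pre>(.*?)</pre>') { $Matches[1] | Set-Content output.txt }"

    You'd still want to eyeball the result, particularly that blank first line before the Subjects Incorporated line, before feeding it to any other DMDX utilities.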

    I've just realized you can probably use a web browser to retrieve data rather than having to use sslposter if you've got a simple hash (as opposed to using an item file as the hash), where you can provide the name of your item file with the subject control and paste it all into the address bar of your browser (you've got to change the bits in red and get the blue control and its value from me):

https://psy1.psych.arizona.edu/cgi-bin/unloadazk4web?hash=sslcommstesthash&retrieve=sslcommstest.azk&datadir=%2fhome%2fjforster%2fDMDX%2f&subject=sslcommstest&?????=?????


Remotely using RecordVocal.

    People occasionally ask if they can do an auditory naming task remotely and the answer has always been yes, but getting the data back is problematic and on top of that there are problems with microphone setup and VOX settings.  The original solution posted here (now below) just dealt with the problems of getting the data back, however in 2020, what with the covidity and people Zooming left and right with their laptops, there was renewed interest in recording subjects' responses remotely, so I figured I'd have a go at making a self-titrating VOX setup item file to get around the microphone setup problems given that now there's a good chance people's computers actually have microphones to begin with, and with a few new VOX bells and whistles added in DMDX 6.1.4.0 we now have a remote testing task that attempts to calibrate the VOX more or less automatically.  While we could have blended both transmission of recorded data and VOX setup into one task we haven't done so at the moment (the VOX task just uses the DigitalVOX without RecordVocal), so if people require both they'll have to merge the two tasks, or perhaps a better idea is to have the VOX setup task chain to the actual testing task (not forgetting to double up your poster command lines as there would now be two data files, as sketched below, but I guess you might not care about the data from the VOX setup item file).  Once the VOX is set up the values are stored in the registry and will be available for any following tasks, so if you have a multipart experiment over time you wouldn't necessarily need to have them set the VOX up again -- although the counter to that would be that if they're using a laptop (and I'm betting they're almost exclusively going to be using laptops) they may have moved locations and there may now be totally different background noise to deal with, details I leave up to you, I just provide the technical tools...
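
    If you went the chaining route, the posting end of the batch file might look something like this, a rough sketch only (voxsetup, naming and their hashes are made up names, substitute your own item files, and the usual :diags and :end plumbing from the other batch files still applies):

del *.azk
start /wait "DMDX" dmdx.exe -auto -d3d -unicode -ignoreunknownrtf -run voxsetup.rtf
if not exist naming.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+voxsetup+testing -iemailaddr hash=voxsetuphash results=voxsetup.azk
poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+naming+testing -iemailaddr hash=naminghash results=naming.azk
goto end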
 
    The juicy bits of this task's item file are covered in the VOX section and the batch file is not much different from the commstest4 batch file, but I guess there's some case to be made for making the batch file detect DMDX puking when it's told to run an item file that requires a sound capture device on a machine that doesn't have one.  I'm guessing that your subject recruiting should make sure of it, but if it didn't you could chuck a test in after the batch file sends the diagnostics (DMDX having failed to produce a data file) that looks for the string "DirectSoundCaptureCreate Failed" in the diagnostics and prompts the user to run it on a laptop:

del dvoxcommstest4.azk
start /wait "DMDX" dmdx.exe -auto -d3d -unicode -ignoreunknownrtf -run dvoxcommstest4.rtf
if not exist dvoxcommstest4.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+dvoxcommstest4+testing -iemailaddr hash=dvoxcommstest4hash results=dvoxcommstest4.azk
goto end
:diags
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+dvoxcommstest4+failure -iemailaddr diagnostics=diagnostics.txt
type diagnostics.txt | find /I "DirectSoundCaptureCreate Failed" > nul
if errorlevel 1 goto end
echo off
echo .
echo .
echo .
echo It would appear DMDX has failed to run because your computer doesn't have
echo a microphone and its associated electronics.  Try running this experiment
echo on a laptop or some other computer with one.
pause
echo on
:end

 
    The solution to the problem of getting the subject utterances back is to use email to mail the recorded audio using psy1's bsdemailer CGI-BIN and use UnloadAZK4web for storing the RT data.  Conceivably UnloadAZK4web could be modified to store the recorded audio as well, but FERPA etc. considerations mean it couldn't be publicly visible like the run data is and I was having trouble coming up with solutions for that short of giving people accounts on psy1 and I'm not really willing to go there (since setting this test up I have added the ability for UnloadAZK4web to store data privately, however that data has to be DMDX RT data as it stands right now due to security concerns).  So email it is.  Here of course we're now emailing scads of audio data so I included the LAME MP3 command line encoder and reduce the files down to 24 kbps versions that are emailable unless you have obscene quantities of them; the trick is you're going to run out of command line space (260 characters) before you exceed email sizes I suspect, so you'll probably need to have multiple poster.exe /cgi-bin/bsdemailer lines in your batch file (below).  The next trick was that even though poster URL encodes data to send to the bsdemailer for emailing, bsdemailer tries to send you the resulting binary file, urgh.  So enter poster 3.0.0 that has a -b switch to indicate that the next file sent should be base64 encoded before the URL encoding and now you can receive the MP3 files -- of course they are now base64 encoded, but there are countless online and offline base64 decoders out there (there's also a certutil sketch after the batch file below), and if people hassle me enough I'll probably write a decoder that takes data from the clipboard along with the file names we transmit to automatically decode everything.  The actual test is based on the capture test and relies on people selecting the Stereo Mix recording device to properly function, otherwise it's just going to record two seconds or so of microphone activity -- and in later versions of Windows 10 the Stereo Mix device has to be enabled before it can be selected as the input device so you may have to go hunting for that first.  A notable thing in the script is that I've used a control name that's identical to the file name to send the individual MP3 audio files, to facilitate decoding the data (and if I write an automatic decoder it will use those names).  Of course you're still faced with the problem of matching up one subject's voice data with the run data (notably the subject ID) but hey, it's a start (I'm guessing people are going to be paying very close attention to the time stamps of those files, or maybe we do some batch file ginsu and find the subject identifier in the AZK file and either put that in the subject of the data email or even concatenate it into the data file control names):

del *.azk
echo off
echo .
echo This test works best with the Stereo Mix device selected as the default
echo recording device and with the digital VOX set up in DMDX.  In later
echo versions of Windows 10 the Stereo Mix device has to be enabled before it
echo can be selected as the input device so you may have to go hunting for
echo that first.
echo .
pause
start /wait "DMDX" dmdx.exe -auto -ignoreunknownrtf -run rvcommstest4.rtf
if not exist rvcommstest4.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=your@email.address subject=unloadazk4web+rv+commstest4+testing -iemailaddr hash=rvcommstest4.rtf results=rvcommstest4.azk

LAME -b 24 -m j -c -h -q 2 --strictly-enforce-ISO rvcommstest41.WAV rvcommstest41.mp3
LAME -b 24 -m j -c -h -q 2 --strictly-enforce-ISO rvcommstest42.WAV rvcommstest42.mp3
LAME -b 24 -m j -c -h -q 2 --strictly-enforce-ISO rvcommstest43.WAV rvcommstest43.mp3

poster.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+rv+commstest4+data -iemailaddr -b rvcommstest41.mp3=rvcommstest41.mp3 -b rvcommstest42.mp3=rvcommstest42.mp3 -b rvcommstest43.mp3=rvcommstest43.mp3

goto end
:diags
poster.exe /cgi-bin/bsdemailer email=your@email.address subject=poster+rv+commstest4+failure -iemailaddr diagnostics=diagnostics.txt

:end
if errorlevel 1 pause
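
    On the receiving end, if you don't fancy hunting down a base64 decoder, Windows has one built into certutil.  A minimal sketch, assuming you've saved the base64 text for one of the files out of the data email into rvcommstest41.b64 (a file name I've just made up for the example):

certutil -decode rvcommstest41.b64 rvcommstest41.mp3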

 


Online subject recruitment.

    While tangential to DMDX itself it's come up a few times so perhaps it bears mention here: Amazon's Mechanical Turk is a convenient way to recruit online subjects for studies.  Linguists have been using it for some time so I include comments made by one of them that has run a few DMDX remote testing auditory tasks (task specific information has been removed or paraphrased in square brackets):

My MTurk experiences have been varied. I usually aim for twice as many participants as I want, because I need to exclude non-native English speakers, people who didn't do the task correctly, people who are trying to scam me, etc. One thing I was told is the data is cheap, so you should always pay for more than you need because only a portion of the data you'll get isn't garbage. But here's a few quick thoughts on what I understand of your situation:

Number of Participants: In the lab on campus, I'd expect I couldn't use 10% of subjects (but linguists are a little more picky than psychologists are). Online, I usually ask for twice as many as I need. I'd definitely ask for at least the maximum number of subjects given the power you want in the study.

Duration: My auditory experiments were about 10-15 minutes. I had attended a MTurk workshop and they suggested that was a good task length for people doing MTurk over their lunch breaks and such. I know people who have gotten away with 45 minute experiments though and many tasks are shorter (1-5 minute range).

Payment: I pay $1 for the 10-15 minute experiment. My colleague paid $3 for the 35-45 minute one. Paying too much will attract scammers and make people think there's something fishy about the assignment. Too little and no one will do it. I usually check to see if there are similar tasks online before I set a payment in case I need to do a little higher or could do lower. You can search for "psychology" and "experiment" or "survey" keywords to see your competition.

Quality Control: One important thing to do with MTurk is include a quality control measure. For behavioral experiments, I can look at reaction time or accuracy and discard bad subjects that way. People often add in a "trap" demographic question like "Answer 'yes' to this question" or something to ensure people aren't just zooming through the task. ... I'm also leery of putting a lot of restrictions on subjects, like "must be native English speaker, must be right handed, must be 18-32, etc." People might lie to say they're eligible for the study figuring you'll never know the truth, but if it doesn't affect their payment they'll answer demographic questions honestly (and then you can exclude the participants after you get their data).

Location and other restrictions: You can restrict the subjects' location by IP address, setting it to US only will ensure you're not getting everyone from India that you may or may not want. You can restrict things to a "master" status, which will get more trustworthy data (supposedly) but you might need to pay more to entice the turk experts to do the task. There are other restrictions you can use, like only people who have successfully completed 50 tasks, or something which might also get higher quality respondents.

Comments box: Subjects love the comment box, so make sure you include one. Also you might get some ideas on why something failed, what was confusing, difficult, etc.

Secret password: On the lines of quality control, many MTurk experiments are done on separate software (like DMDX). The easy way to interface between the Mturk and your software is to require subjects to either enter their MTurk worker ID into a box and/or a password that the experiment reveals at the end. So subjects need to [actively participate] all the way through, maybe the secret password could be added in to the end of the [task] somehow ... Some experiments use both.

Timing: I'm not sure if it's true, but I was told to put experiments online around 7am New York time. That way your task is "fresh" for the day. It may be outdated advice, but it would mean as people wake up across the US the task will still be relatively new. For 15-min experiments, I could easily get 80 participants in 3-6 hours this way. My colleague who did the 45-min experiment got 64 participants in less than 24 hours.

Try a few out: My final suggestion would be to spend an hour on MTurk as a worker, search for surveys and experiments, and earn a buck or two. It lets you see how other labs are doing things and you can model your own HITs off theirs.

 


Winzip alternatives.

    If buying the Winzip Self Extractor Package is out of the question there are a couple of packages out there that appear to be usable.  First off there's 7zip, which would appear to have self extracting capabilities; I looked it over but without using one or two other add-ons it doesn't look like it's easily usable (one not only needs it to auto extract files, one needs it to execute a command, ostensibly to start the installation but in our case to run the test).  And then it also appears that WinRAR can create self extracting archives with its SFX add-on these days; I've had at least one person use it and it works and it's free, so hard to argue with that one.
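
    For what it's worth, if you do track down the extra bits for 7zip my understanding is the recipe is roughly the following: get the 7zSD.sfx module from the 7-Zip extras package, write a little config file (sfxconfig.txt here, a name I've made up) telling it what to run after extraction, and glue the pieces together with copy /b.  Treat this as a sketch, I haven't built a DMDX package this way myself and the file list is just for illustration:

rem sfxconfig.txt contains these three lines (minus the leading rem):
rem ;!@Install@!UTF-8!
rem RunProgram="commstest4.bat"
rem ;!@InstallEnd@!
7z a payload.7z dmdx.exe poster.exe commstest4.rtf commstest4.bat
copy /b 7zSD.sfx + sfxconfig.txt + payload.7z commstest4package.exe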

 



DMDX Index.