DMDX Help.

Networked and Read-Only File Systems Overview.

    A number of people over the years have tried to use DMDX in a networked environment, with one machine serving the item file and all its resources to a number of lab machines that then run the experiment from that network share (usually simultaneously).  The problem they all hit is that DMDX creates temporary files, as well as output files, in the network directory the item file is found in, so only the first machine could run the experiment and all the others would throw errors as they tried to create the same files.  The only easy way around this was to copy the files to each user's machine with a batch file. 
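That copy-to-local workaround might be sketched as a batch file like the following (the share name, folder, and item file name are placeholders, not the names of any actual share):

```shell
REM Copy the experiment to a local scratch folder, then run it from there.
REM \\networkshare\run and itemfile.rtf are hypothetical placeholders.
xcopy /i /y "\\networkshare\run\*.*" "%temp%\dmdxrun\"

REM start needs a quoted window title when the program path is quoted;
REM /wait keeps the batch file from continuing until DMDX exits.
start /wait "DMDX" "%temp%\dmdxrun\dmdx.exe" -run "%temp%\dmdxrun\itemfile.rtf"
```

Because everything, output files included, now lives under each machine's %temp%, the data still has to be collected from every machine afterwards.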

    Some time ago a tweak to a remote testing paradigm involved running DMDX from a DVD, where similar issues are encountered because the directory that contains the item file is read only.  The solution was to make DMDX try to use the system's or user's temporary directory for all the files DMDX might try to create.  However, I then discovered that if the item file used scrambling it would fail, as scramble's code didn't try to use the temporary directory; that was fixed in a subsequent version, and later still I made DMDX's diagnostic spew include the temporary path name if it was used, to facilitate getting something like this going -- although some people these days use the remote testing options instead of a local network share. 

    If you do choose to go the network-share route, with a bit of scriptfu the run can be set up to look in the %temp% directory for the resulting output files, as long as the networked directory is set to be read only (in Microsoft parlance, you deny writes to the directory).  If this is not done, one user's files will be on the network but everyone else's data will be left in their local machine's %temp% directory.  UnloadAZK can accept parameters on the command line with the -automatic command, so the batch file can fire UnloadAZK up after the DMDX run, passing it the network path to the output data file on the server and the data file in the temporary directory, with %temp%\datafile.azk for instance.  UnloadAZK as of version 3.0 goes to much greater lengths to automatically serialize its requests: if it can't write to the destination it retries the operation 10 times at varying time intervals (0.5 to 1.5 seconds), and only after those are tried does it toss an error -- and even then it's a retry-or-abort-the-operation error.  If you're not going to go that route, the instructor will have to visit each machine and unload the data manually -- which is what I suspect people have been doing in the past when not using the remote testing features.
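One way of denying writes on the shared directory is with Windows's icacls command, run once by the administrator on the server.  This is only a sketch; the share path is a placeholder, and on non-English versions of Windows the "Everyone" group has a localized name:

```shell
REM Deny write access to Everyone on the run directory; the (OI)(CI)
REM inheritance flags make the deny apply to files and subfolders too.
REM \\networkshare\run is a hypothetical placeholder path.
icacls "\\networkshare\run" /deny "Everyone:(OI)(CI)W"
```

With the deny in place, DMDX's attempts to create files there fail and it falls back to the temporary directory, so every machine's output lands predictably in its own %temp%.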

    If I were going to set up a large networked lab testing environment for running a single study (as opposed to a lab that ran lots of different studies throughout the year), I'd have a network share with a batch file in it and two subfolders, one called run and one called data.  The run subfolder would be read only and would contain DMDX.EXE, the item file, any resources it needed to run, and a version of UnloadAZK.exe that's 3.0 or later.  The data subfolder would have normal file permissions, not read only, and would receive the subjects' data.  The batch file would live in the root of the share (so above run and data); it would execute DMDX the way the remote testing scripts do, with start /wait, and then run UnloadAZK, passing it the paths to the data files.  The batch file would look something like this:

start /wait "DMDX" \\networkshare\run\dmdx.exe -auto -unicode -ignoreunknownrtf -run \\networkshare\run\itemfile.rtf
\\networkshare\run\UnloadAZK.exe -automatic \\networkshare\data\itemfile.azk %temp%\itemfile.azk

    This works as long as (a) you've made the run directory read only, (b) you're using a version of DMDX recent enough to include the temporary directory fixes described above and (c) you're using UnloadAZK 3.0 or later (available in DMDXUTILS.ZIP). 


DMDX Index.