NOTES on DMDX Utilities
-----------------------

Users of DM and DMTG who are now using DMDX and were accustomed to the DOS utilities UNLOAD, UPDATE and CONCAT, which handled binary output data (.DTP) files, can now use the following Windows versions with DMDX's .AZK files. In addition there are several filters useful for exporting .AZK data to other analysis programs, and a couple of utilities for remote testing. The utilities can be found at http://www.u.arizona.edu/~jforster/dmdx/dmdxutils.zip

(1) UNLOADAZK

    Transfers .azk files from multiple machines and combines them into a single .azk output file. It replaces the older utility UNLOAD. The program asks for a source file (an .azk file) and a destination (usually on the network). The .azk file is transferred to the new destination, with the new material appended to any existing file of the same name. The original file is then renamed as an .rdb (raw data backup) file. Works with .zil files too.

(2) ANALYZE

    Computes subject and item mean RTs and error rates from .azk files. It replaces the older utility UPDATE. The assignment of items to conditions is specified in an .spc (specification) file. This is an ordinary plain text file (not a formatted Word .DOC, .DOCX or .RTF one), and includes a number of parameters which specify computational options (e.g., data trimming, error rate criterion for exclusion of subjects, analyzing only incorrect responses, etc.). For each .azk file ANALYZE produces two output files: (a) an .ism (item summary) file, which lists the mean RT and error rate for each item in each condition, and (b) a .das file, which has exactly the same format as the original .das files. See below for an example.

(3) CONCATENATE

    Combines data from several .das files in a form suitable for input to ANOVA programs. This replaces the former utility CONCAT.
The method of concatenation is specified in a .cat file which lists the name of each .das file and the conditions to be selected from it. See below for an example.

===========================================================================================

Each program has its own HELP function. Pressing COPY will copy its contents to the clipboard.

Download these programs from: http://www.u.arizona.edu/~jforster/dmdx/dmdxutils.zip

===========================================================================================
===========================================================================================

UnloadAZK 3.0 Unload a DMDX .AZK output file (i.e. copy gathered DMDX data to a common network file). By Jonathan C. Forster (j.c.f.) 07/25/21

Usage: UnloadAZK
           with no arguments just runs the dialog based interface
   or: UnloadAZK inputfile.azk
   or: UnloadAZK -automatic outputname.azk inputfile.azk

Versions of UnloadAZK prior to 3.0 would take whatever was on the command line and use it as the input filename and path, spaces and all; version 3.0 and later require input names with spaces to be quote delimited. The alternate -automatic usage takes both output and input path and filename specs (quote delimited if they have spaces) and automatically goes ahead and performs the unload.

Both original source and destination files are renamed with .RDB extensions. Source subject numbers will be renumbered as they are added to the destination .AZK file; already unloaded subject numbers will not be touched (which means you can edit the destination .AZK to remove spurious subjects and the subject numbers won't change on you the next time a file gets unloaded).
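The subject renumbering step just described can be sketched in Python (a hypothetical illustration, not UnloadAZK's actual code; it assumes subject header lines of the form "Subject N, ..."):

```python
import re

def renumber_and_append(src_lines, dst_lines):
    # Hypothetical sketch of UnloadAZK's renumbering: subject numbers
    # already in the destination are left untouched, and newly unloaded
    # subjects simply continue the sequence from the highest number used.
    header = re.compile(r"^Subject (\d+),")
    used = [int(m.group(1)) for line in dst_lines if (m := header.match(line))]
    next_n = max(used, default=0) + 1
    out = list(dst_lines)
    for line in src_lines:
        if header.match(line):
            line = header.sub(f"Subject {next_n},", line, count=1)
            next_n += 1
        out.append(line)
    return out
```

Real .AZK subject headers also carry the date, machine name, refresh rate and ID; only the leading number matters for this step.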
As of 3.0 a .lock file is created in the destination to serialize access when many lab machines try to unload simultaneously. If UnloadAZK is unable to secure the .lock it will retry the operation up to ten times at random 0.5 to 1.5 second intervals; if at the end of that it still can't secure access it will prompt the user to retry or give up. Filenames and paths can now contain non-ASCII characters, and UnloadAZK also unloads .ZIL data files along with .AZK (and .ZOL too, if you really wanted).

===========================================================================================
===========================================================================================

Analyze 6.0.0 Analyze a DMDX .AZK output file from .SPC plain text file specifications and produce .DAS, .ISM and optional .CSV output. By Jonathan C. Forster (j.c.f.) 06/13/21

Filenames and paths can now contain non-ASCII characters.

Specification .SPC plain text file parameter lines are limited to 1024 characters and can be any of the following:

# text
    Lines beginning with a # are comments.

double_check
    .ISM file will contain additional information, including raw data values after rules have been applied.

title: text
    Description of experiment. Can be multiple lines with multiple "title:" lines, although only the first line will be used in the .DAS output (the .ISM will use everything).

analyze_incorrect_responses
    Only incorrect responses are analyzed (the default is to analyze only correct responses). The program swaps correct responses for incorrect responses, so percentage errors in fact become percentage correct responses.

discard_display_errors
    Items that have had display errors in them will be discarded (the default is to not discard them).
subject_rejection: N
    Subjects with an error rate of N% or over will be rejected (default is no rejection).

data_threshold: F
    Threshold in standard deviations above or below the mean (both calculated over all conditions) beyond which data will be trimmed, or rejected if data_rejection is used. Default is 2.0 S.D. No trimming or rejection will be performed unless data_threshold or data_rejection is used; cutoffs are applied before the S.D. is calculated and then applied.

data_rejection
    Data above or below data_threshold standard deviations from the mean will be rejected.

normalize_data
    Converts each RT to a standard score. Sets RT_precision to 3.

substitute_mean_for_error_RTs
    The subject's overall mean RT (based on the items specified in the .spc file) is substituted for trials on which an error occurred.

global_rejection
    All items are considered when determining subject rejection; otherwise only items in conditions are used.

synthesized_items_flag_errors
    Synthesized average items by default ignore errors and missing data points. With synthesized_items_flag_errors turned on, synthesized average items will contain -1 whenever a member item contains an error. Missing data points are still ignored.

low_cutoff: N
    Data at or below N milliseconds will be rejected (default is no rejection).

high_cutoff: N
    Data at or above N milliseconds will be rejected (default is no rejection).

condition: N
    Begins the definition of condition number N. Following condition-relevant definitions will all refer to this condition until another condition number is defined.

name: text
    Defines the name of the current condition. This should be a short one line name.

description: text
    Verbose description of the current condition. Can be multiple lines with multiple "description:" lines.

items: N1 [N2...] [N3-N4] [N5=N6-N7] [N8=(N9 [N10-N11]...)]
    Defines items in the current condition.
    Any number of space delimited item numbers can occur (up to 1024 characters per line, of course), including ranges of items with the hyphen. Synthesized average items (so N5 and N8 don't occur in the RTF item file) are specified with =, with complex ranges allowed for with brackets. Multiple "items:" lines can occur, so if you have large numbers of items that must all be specified individually and they won't fit on one 1024 character line, you can use multiple "items:" lines.

RT_width: N
    .DAS file RT output width (default is 5).

RT_precision: N
    .DAS file RT output precision, the number of decimal places (default is 0).

ERR_width: N
    .DAS file Error output width (default is 5).

ERR_precision: N
    .DAS file Error output precision, the number of decimal places (default is 1).

median_split_condition: N1 N2
    Defines a median split condition with the experimental condition being condition number N1 and the control condition being condition number N2.

median_split_n_ways: N
    Modifies the median split to become an N way split. Currently only 2, 3 and 4 way splits are supported.

wildcard_input_files
    Multiple data files are present of the form itemfilename*.AZK; if the wildcard portion contains digits then they are taken as the subject number.

headings
    .DAS file columns have condition name headings and subject or item count numbers on rows.

generate_mixed-effect_CSV_data
    Turns on code that emits mixed-effect data into a CSV file, itemfilename.CSV. The CSV file will contain one line for every item in all conditions, excluding rejected subjects. The format of the CSV files is:

        subj,desc,itemN,trial,rt,error,prevRT,prevERR,prevYES,itemERR,subjERR,ID

    Where
        subj    is the numeric subject id
        desc    is the condition description of the condition containing the current item
        itemN   is the current item number
        trial   is the absolute order in which this item was presented, starting from 1 (so this includes practice items and others not in any condition)
        rt      is the adjusted RT for the current item; unadjusted if incorrect
        error   will be 1 if the response was incorrect, 0 otherwise
        prevRT  is the unadjusted RT for the preceding item; missing if incorrect
        prevERR is yes if the previous item was incorrect, no if not
        prevYES is yes if the previous item had a + CR and was correct, no if - CR and correct, missing if anything else
        itemERR is the % item error rate
        subjERR is the % subject error rate
        ID      is the alphameric ID entered when the subject was run

    Note, ANALYZE only detects CR indicators at the start of the item.

textualize_identifiers
    Mixed-effect CSV data identifiers (subj and itemN) are prepended with text to distinguish them from data values.

subject_offset: N
    Adds N to any subject ID.

get_nearest_previous_correct_RT
    Mixed-effect prevRT becomes the most recent correct response.

no_smart_quotes
    Turns off MS Word smart quote detection for Chinese and other double byte code page fonts in item files.

===========================================================================================
===========================================================================================

Concatenate 3.0.0 Concatenate columns from multiple .DAS files into one new .DAS file. Unlike CONCAT this program generates .DAS files that always have 4 matrices: one SUBJECT RT matrix, one SUBJECT ERR matrix, one ITEM RT, and one ITEM ERR. Also, columns in both SUBJECT matrices are nominated with one set of commands, as are the ITEM matrices. By Jonathan C. Forster (j.c.f.) 06/13/21

Filenames and paths can now contain non-ASCII characters; if you wish to use non-ASCII characters in configuration file specifications you will have to use UTF-8 as the character encoding for your .CAT configuration file.

Configuration file lines are:

# any text
    Lines beginning with a # are comments.

title: text
    Description of experiment. Can be multiple lines with multiple "title:" lines, although only the first line will be used.
dasfile: filename.DAS
    Subsequent subject: and item: commands will fetch columns from this .DAS file. Multiple uses of the same filename interspersed with other filenames are fine.

subject: N1 [N2...] [N3-N4]
    Fetch the given SUBJECT RT and SUBJECT ERR column numbers and add them to the output .DAS file. Multiple subject: lines can occur and each line can define any number of columns.

item: N1 [N2...] [N3-N4]
    Fetch the given ITEM RT and ITEM ERR column numbers and add them to the output .DAS file. Multiple item: lines can occur and each line can define any number of columns.

RT_width: N
    .DAS file RT input and output width (default is 5). If a non standard RT_width: was used to produce the .DAS source files it must also be specified in the .CAT file. This is due to the possibility of having 1234567890 in the .DAS file (in order to maintain backwards compatibility with the FORTRAN programs), which is actually two values, 12345 and 67890.

RT_precision: N
    .DAS file RT output precision, the number of decimal places (default is 0).

ERR_width: N
    .DAS file Error input and output width (default is 5). See RT_width:.

ERR_precision: N
    .DAS file Error output precision, the number of decimal places (default is 1).

=====================================================================================

Example of a .cat file
----------------------

title: low density targets   # this will appear on the .ism and .das file
rt_width: 7
rt_precision: 1              # as in the .spc file
dasfile: exp1a.das           # the first input file is the file "exp1a.das"
subject: 1-3                 # the first three columns of the output file will list
                             # the subject mean RTs and the error rates for the first
                             # three conditions of the file "exp1a.das".
dasfile: exp1b.das           # the second input file is the file "exp1b.das"
subject: 1-3                 # columns 4-6 of the output file will contain the subject
                             # mean RTs and error rates for conditions 1-3 of exp1b.das

# the above will generate two data matrices, each with N rows
# (equal to the number of subjects) and 6 columns,
# corresponding to the conditions specified above. The first
# matrix contains mean RTs, the second error rates.

# if you also wish to have item mean RTs and error rates
# included in the output, you need to specify a new
# combination. This usually will not match the specification
# for the subject means in a counterbalanced design, in which
# different groups of subjects receive counterbalanced items.
# In the following example, a counterbalanced design is assumed.

dasfile: exp1a.das           # this is the input file
item: 1                      # output the item mean RTs and error rates for condition 1
                             # in file exp1a.das
dasfile: exp1b.das
item: 2                      # output the item mean RTs and error rates for condition 2
                             # in file exp1b.das

dasfile: exp1b.das
item: 1
dasfile: exp1a.das
item: 2                      # same as above, except reversed.

=====================================================================================
=====================================================================================

AZK2COLUMNS vers 3.0, convert DMDX .AZK (or .ZIL) files to columnar .TXT files. Note: if reading .ZIL files, No Response items will generate missing data points.

Usage: AZK2COLUMNS [options] file.azk [morefiles.azk]

options are:
    -sort [-s]                 : does not retain item order, and all but the first column of item numbers are dropped
    -discarddisplayerrors [-d] : discards RTs in items with display errors
    -precision [-p]            : next option is the number of decimal places printed, default is 2
    -width [-w]                : next option is the width of each column, default is 7

You can just drag .AZK files to this program. The new .TXT file(s) will be dropped in whatever directory the .AZK file(s) is (are) in. Filenames and paths can now contain non-ASCII characters.
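As a rough illustration of the pivot AZK2COLUMNS performs, here is a hypothetical Python sketch (not the utility itself; it assumes each subject's data has already been parsed into an item -> RT mapping, with missing items left blank):

```python
def azk_to_columns(subjects, width=7, precision=2):
    # Hypothetical sketch: pivot per-subject {item: RT} dicts into rows of
    # item number followed by one fixed-width RT column per subject.
    # Items a subject is missing (e.g. No Response items) become blank cells.
    items = sorted({i for s in subjects for i in s})
    rows = []
    for item in items:
        cells = [f"{item:>{width}}"]
        for s in subjects:
            rt = s.get(item)
            cells.append(f"{rt:>{width}.{precision}f}" if rt is not None
                         else " " * width)
        rows.append("".join(cells))
    return rows
```

The defaults mirror the documented -width 7 and -precision 2.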
=====================================================================================
=====================================================================================

AZK2CSV vers 3.0 06/12/21, convert DMDX .AZK (or .ZIL) files to linear .CSV files. Note: if reading .ZIL files, No Response items will generate missing data points.

Usage: AZK2CSV [options] file.azk [morefiles.azk]

options are:
    -discarddisplayerrors [-d] : discards RTs in items with display errors
    -precision [-p]            : next option is the number of decimal places printed, default is 2

You can just drag .AZK files to this program and the new .CSV file(s) will be dropped in whatever directory the .AZK file(s) is (are) in. Alternatively, running it with no command line arguments will offer up the dialog front end. Filenames and paths can now contain non-ASCII characters.

=====================================================================================
=====================================================================================

zilliononeline vers 2.0, convert DMDX .ZIL files that weren't recorded one item per line to one line (.zol) format.

Usage: zilliononeline [options] file.zil [morefiles.zil]

You can just drag .ZIL files to this program. Filenames and paths can now contain non-ASCII characters.

NOTE: zilliononeline is a command line program, so you either fire up a command prompt to run it or you just drag the file or files you want it to process onto it (or onto a shortcut to it) and it makes a .zol variant of each file in that directory.
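The one-line conversion can be approximated as follows (a hypothetical Python sketch, not zilliononeline's actual code; it assumes any line that doesn't start with an "Item" or "Subject" header is a continuation of the preceding item record):

```python
def zillion_one_line(lines):
    # Hypothetical sketch: join each multi-line .ZIL item record onto a
    # single line. Continuation lines (individual response records) are
    # appended to the current "Item N," header until the next header.
    out, current = [], None
    for line in lines:
        stripped = line.strip()
        if stripped.startswith(("Item", "Subject")):
            if current is not None:
                out.append(current)
            current = stripped if stripped.startswith("Item") else None
            if current is None:
                out.append(stripped)
        elif current is not None:
            current += " " + stripped
        else:
            out.append(stripped)
    if current is not None:
        out.append(current)
    return out
```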
=====================================================================================
=====================================================================================

zil2azk

While zil2azk doesn't exist per se, once you've used zilliononeline on your files (assuming they weren't recorded one item per line to begin with, of course) all you'd have to do is a search and replace to remove all the "Item" strings and the comma that follows the item number, and you'd have something approximating an .AZK file, as Analyze won't care about the additional data on the rest of the line. Extra spaces (or fewer spaces, as long as there's at least one) won't matter, but that comma after the item number will. If you can't figure out a search and replace that only gets commas following the item number, you could probably nuke all the commas in the file; Analyze won't care about the missing comma after the subject number. That, and save the result as an .AZK file...

For example, assuming you have .ZIL data that wasn't recorded one item per line, your data will look like this:

    Item 111,
     1127.61 1127.61,+Numpad 9

After zilliononeline has been run over it (or if the data was recorded one item per line in the first place) you'll have data that looks like this:

    Item 111, 1127.61 1127.61,+Numpad 9

You'll want to transform that into one that looks like this:

    111 1127.61 1127.61,+Numpad 9

These would also work:

    111 1127.61 1127.61+Numpad 9
    111  1127.61 1127.61,+Numpad 9
    111  1127.61 1127.61+Numpad 9
    111 1127.61 1127.61 +Numpad 9

=====================================================================================
=====================================================================================

zillionttt vers 1.0, a console win32 app to filter DMDX .ZIL files for total typing time.

Usage: zillionttt [options] file.zil [morefiles.zil]

options are:
    -precision [-p] : next option is the number of decimal places printed, default is 2

You can just drag .ZIL files to this program.
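The zil2azk search-and-replace described above amounts to one substitution per line; a hedged Python sketch (hypothetical helper, not a DMDX tool):

```python
import re

def zol_to_azk_line(line):
    # Strip the "Item " prefix and the comma that follows the item number;
    # everything after the item number is left alone, since Analyze ignores
    # the additional data on the rest of the line. Lines without an
    # "Item N," prefix (e.g. subject headers) pass through unchanged.
    return re.sub(r"^Item (\d+),", r"\1", line)
```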
=====================================================================================
=====================================================================================

poster 5.0.0, post to a CGI script as if it were a browser filling in a form with a POST command (IPv6 is used if a path is available).

Usage: poster [options] scriptpathname [control=value] [...]

Options are:
    -hxxx  set host (default is psy1.psych.arizona.edu), suspect localhost will work on servers
    -uxxx  emit xxx in the User-Agent string, usually Mozilla for servers that want a recognized agent
    -ixxx  emit xxx=poster_5.0.0@machine.domain
    -pnnn  set host port (default 80)
    -rnnn  set retry limit (default 10)
    -b     next file sent will be base64 encoded before URL encoding (for binary files)
    -oxxx  write received data to file xxx, for GIFs etc.

control=value can be repeated multiple times and must be URL-encoded already; if it is missing the POST becomes a GET. If value is an openable file name then the file contents will be URL-encoded and then sent [text files only unless -b is used immediately before this control]. File names can be UTF-8 if your batch file has used "chcp 65001".

=====================================================================================
=====================================================================================

sslposter 3.0.0, post to an SSL/TLS (HTTPS) CGI script as if it were a browser filling in a form, using OpenSSL 1.1.1g 21 Apr 2020.

Usage: sslposter [options] scriptpathname control=value [...]

Options are:
    -auuu:xxx  use basic authentication with user uuu and password xxx
    -hxxx:ppp  set host and port (default psy1.psych.arizona.edu:443), suspect localhost will work on servers; :443 is added if no port is present
    -ixxx      emit xxx=poster_3.0.0@machine.domain
    -oxxx      write received data to file xxx, for GIFs etc.

control=value can be repeated multiple times with different control names and must be URL-encoded already; if it is missing the POST becomes a GET.
If a control value is an openable file name then the file contents will be URL-encoded and then sent [text files only]. File names can be UTF-8 if your batch file has used "chcp 65001".

=====================================================================================
=====================================================================================

SendEmail 2.2, send an email from the command line so Windows can send reminders using Scheduled Tasks and so on.

Usage: SendEmail [options] emailaddress subject body

Options are:
    -hxxx  set host (default smtpgate.email.arizona.edu), suspect localhost will work on servers
    -pnnn  set host port (default 25)

body can contain \n sequences (literal \ and n). If body is an openable file name then the file will be sent and no substitution of \n sequences for CRLF will occur.
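A rough Python equivalent of what SendEmail does (a sketch under the same assumption the utility makes, namely an open SMTP relay at host:port; the sender address here is a placeholder, not something the source specifies):

```python
import smtplib
from email.message import EmailMessage

def expand_body(body):
    # Mirror SendEmail's handling of command line bodies: literal
    # backslash-n sequences become real newlines.
    return body.replace("\\n", "\n")

def send_email(address, subject, body,
               host="smtpgate.email.arizona.edu", port=25):
    # Sketch only: build the message and hand it to the relay.
    msg = EmailMessage()
    msg["To"] = address
    msg["From"] = "reminder@localhost"  # hypothetical sender address
    msg["Subject"] = subject
    msg.set_content(expand_body(body))
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(msg)
```

As with the utility, a body read from a file would be sent verbatim, skipping expand_body.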