DMDX Help.


Bitmap notes.

    A rather cool thing is the ability to make the background of a bitmap transparent using a feature of DirectX called Color Keying. A graphic frame with the <ColorKey> switch set will examine the color of the top left pixel of the bitmap and, when adding that graphic to the displayed frame, all pixels of that color will not be copied. Especially handy if you have changed the background color or your bitmaps do not have a white background.
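    For instance, something along these lines (a hypothetical sketch, the bitmap names are made up) would overlay one bitmap on another without the second one's background blotting out the first, the second frame's <NoErase> keeping the first display up while <ColorKey> stops the overlay's background pixels from being copied:

0 <bmp> "scene" / <NoErase> <ColorKey> <bmp> "flag";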

    If you are importing large numbers of .IMG files from DMTG using IMG2BMP.EXE you will not need to stretch the images with some other image manipulation program; DMDX can do it with the <DefaultBMPMultipliers> switch, typical values for stretching EGA images being <dbm 1.0,1.371>.
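    If every converted image needs the same stretch, the switch can just as easily go in the parameter block at the top of the item file (a sketch using the <ep>/<eop> parameter style shown later in this page and a made up bitmap name):

<ep> <dbm 1.0,1.371> <eop>

0 <bmp> "oldega" <xy .5,.5>;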


Stretch Blt modes.

    For the longest time, whenever you wanted to scale a bitmap smaller with the bitmap multipliers you could get some fairly ugly artifacts, until I stumbled across the win32 SetStretchBltMode function; nowadays you put something like <dbm 1 HALFTONE> in your item file and down scaled bitmaps are just fine.  Up scaled bitmaps have always been more or less fine (beyond the obvious fuzziness issues that are inherent in magnifying data).  Why is this important?  If you want to build an item file that uses bitmaps and runs on a number of machines, and you want a fairly consistent experience from one machine to the next, you'll want to provide higher resolution bitmaps that you scale down using a fraction of the screen's dimensions; otherwise the sizes of the bitmaps will change with the screen resolution of each machine (assuming you're using <vm desktop>, and these days you really should be using it; the days of being able to run everything at a fixed display resolution are all but past, because when you run a flat screen LCD monitor at a resolution other than its native one the monitor takes extra time to rescale what it's being sent into its native resolution).


Scaling bitmaps across machines.

    Speaking of making an item file run on multiple machines with scaled bitmaps, you'd probably also want to know the aspect ratio of the screen your item file is running on.  Maybe you can get away with assuming a 16:9 widescreen device regardless of its resolution and scale those bitmaps down to a fraction of the screen width nicely, but if not, in the past you'd have been stymied pretty good; these days, however, you can actually detect the aspect ratio using the expression tokens VideoModeX and VideoModeY (see the <SetCounter> help).  You've got two choices here.  One (as long as the aspect ratio of all your images is the same, if not see the second option) is to detect the aspect ratio like the <Instruction> help does and then pass one of two macros (one for 4:3 screens and another for 16:9) to all your <bmp> keywords that determines the upper left and lower right corners for your bitmaps.  The other option involves knowing the size of each bitmap, which is perhaps more labor intensive (unless of course they're all the same) but is less prone to accidentally distorting an image as it uses the bitmap multipliers, which carefully preserve image aspect ratios.  Using DMDX 6.5.0.0 or later you set up a couple of macros that hold the scaling factor for each image and, because DMDX has no floating point facilities available, fudge it with 4 digit fixed point calculations using the fixed point set option FXPSET in the macro keyword, then display the bitmap using the macro in a <bm> keyword.  In the following example the bitmap being displayed is 2048 pixels high and we're aiming to display it as five sevenths of the screen height:

+101 <macro fxpset 4 f = (videomodey * 10000 / 2048) * 5 / 7> <bm `f halftone> <bmp> "image" * ;

    This method is demoed in the graphics portion of the features test demo FEATURES.RTF in this archive:  https://psy1.psych.arizona.edu/~jforster/dmdx/demos.zip

    Note that if hardcoding your image size into the item file isn't a good option, because all your images are different sizes and you don't have an undergrad hanging around to do all that work for you, you can in fact make DMDX determine the size of the images automatically (if somewhat inelegantly) by first displaying the image for zero ticks with <StoreCoords>, storing the resulting image size in a pair of macros (two of the four of them anyway).  Of course those sizes come back as fractional coordinates (so "0.654321" for instance) and not pixel values, so we have to strip the "0." off them with a bit of macro popping.  That leaves us with values between zero and one million, so we have to do a bit of gerfingerpoken mathematically to make sure we don't get an arithmetic overflow: first divide by 100, then normalize to the screen height, and then take out the remaining factor of 10000.  After that we can plug them into something closely resembling the above:

0 <md .l.++> <md .t.++> <md .r.++> <md .b.++> <set c1=0> <bmp> "golftp" <sc .l., .t., .r., .b.> %0 / c;

0 <macro pop .b., c1> <macro pop .t., c1> <macro pop .b., c1> <macro pop .t., c1> <set c.ht. = ((`.b. - `.t.) / 100 * videomodey / 10000)> <macro fxpset 4 .f. = (videomodey * 10000 / c.ht.) * 5 / 7> <bm `.f. halftone> <bmp> "golftp" ;

    Of course that's only going to work if your source images are smaller than the display; if your images are larger then I suppose you can put a <bm 0.5> in the <sc> frame and then multiply c.ht. by two once it's been determined (or <bm 0.25> and four if they're much larger), see the sketch below.  And <md .l.++> <md .t.++> <md .r.++> <md .b.++> <set c1=0> only has to occur once, no need to include it for every item.  Also note you could stick a <delay 2> in the first frame of one of those items as you'll be encountering the default delay of a half second or so for both of them.  And they do have to be two items; the macros with the bitmap dimensions in them won't be usable till that frame is rendered, and rendering doesn't happen till after an item is parsed.  And technically that popping operation really should continue popping leading zeroes off at least the top value, as any constant with a leading 0 is interpreted as octal by DMDX's expression evaluator.  Personally, if your source images are around 4/5ths of your screen size or larger you should probably be using the <bm 0.5> solution above, or just display the %0 bitmap at the top of the screen -- assuming your source bitmaps are more than 1/5th of the screen, as octal zero is the same as decimal zero...
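    Picking up that <bm 0.5> suggestion, the two items might then look something like the following (a sketch with a hypothetical bitmap name; the only changes from the items above are the <bm 0.5> in the measuring frame and doubling c.ht. once the coordinates have been popped):

0 <md .l.++> <md .t.++> <md .r.++> <md .b.++> <set c1=0> <bm 0.5> <bmp> "bigimage" <sc .l., .t., .r., .b.> %0 / c;

0 <macro pop .b., c1> <macro pop .t., c1> <macro pop .b., c1> <macro pop .t., c1> <set c.ht. = ((`.b. - `.t.) / 100 * videomodey / 10000) * 2> <macro fxpset 4 .f. = (videomodey * 10000 / c.ht.) * 5 / 7> <bm `.f. halftone> <bmp> "bigimage" ;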



Click logging in scaled bitmaps.


    With the addition of relativeclicklogging in the <2Did> keyword a good opportunity arose to demo scaled bitmaps and the ability to click on a specific region, or at least to ask a subject to do so and see if they got it right.  Because we're using the normalizedata <2Did> option this item file works across multiple display sizes (essential if you're going to be using it for remote testing).  The bitmaps here are found in the demos archive mentioned above.  The key thing about this demo is using <sc> to store the coordinates that the bitmap was rendered to and then using <sbr> to set the rectangle for the button defined in the <2Did> statement to those stored coordinates, so bitmaps with different aspect ratios will all be handled correctly -- however in order to do that we have to actually display the bitmap before we can change the button rect.  Unlike other uses of <sc> and <sbr> (say <expireif>) we can't just present the bitmap in the same color as the background before testing the subject, so we present it for zero ticks beforehand followed by a blank frame, gathering the coordinates of its display, and then present the item that gathers the response click.  Other interesting points here include the fact that the bitmaps are smaller than the requested display size (a third of the width of the screen) in most cases, so we're scaling them up (most of the time anyway) with the halftone scaling method.  Also of interest is presenting them with an <inst> keyword with the explanatory text in the right half of the screen (we do have to use the framebyframe option because <inst> spacing is going to freak out if it uses centered positioning):

<ep>
<!branchdiagnostics>
<fd 30> <cr> <id #keyboard> <!ntl> <t 100000> <nfb>
<2Did mouse relativeclicklogging,100
    bitmap,.1,.2,.3,.4
    normalizedata>
<xyjustification framebyframe> <inst hardmargin>
<eop>

~1 ml++ mt++ mr++ mb++ <mpr +bitmap>;

! <! GOLFTP is 350 px wide>;
1 <xyjustification center> <macro fxpset 4 f = (videomodex * 10000 / 350) / 3> <dbm `f halftone STAT>  <xy .25,.5> <bmp> "GOLFTP" <sc l, t, r, b> %0 / c;
+101 <delay 2> <inst .5,.3,.9> <xyjustification framebyframe> <sbr bitmap, `l, `t, `r, `b> <xy .25,.5> <bmp> "GOLFTP" <inst esc>, <xyjustification leftbottom>
"In ", "this ", "experiment ", "click ", "on ", "the ", "flag ", "in ", "the ", "image ", "on ", "the ", "left" * ;

    Having just crafted that new example I see that it's almost as convoluted as the old method that presents the image for two ticks prior to gathering the response, so here's the old way (it uses tildes for macro expansion, not much different seeing as the definition is in a different item anyway):


~1 <macro fxpset 4 f = (videomodex * 10000 / 320) / 3> <! WINLOGO is 320 px wide>;
1 <xyjustification center> <xy .25,.5> <dbm ~f halftone STAT> <bmp> "WINLOGO" <sc l, t, r, b>;
+102 <delay 2> <inst .5,.3,.9> <sbr bitmap,~l, ~t, ~r, ~b> <noerase> <xyjustification leftbottom>
"In ", "this ", "experiment ", "click ", "on ", "the ", "flag ", "in ", "the ", "image ", "on ", "the ", "left" * ;

    If we use clicklogging instead of relativeclicklogging we can use the data logging macros and add a line after items 101 and 102 to display a character over where they clicked:

0 ! "X" <xyjustification center> <xy ~.dataloggingx.,~.dataloggingy.>;


Alpha Blending.

    Another cool thing is the ability to alpha blend two (or more) frames together -- although this looks busted these days and is explicitly unavailable when using the Direct3D renderer. An alpha blend adds two frames together as transparencies; a succession of frames with alpha values changing from 1.0 to 0.0 in small increments will fade from one frame to the next. As with all DMDX color things a 16 bpp (bits per pixel, 65,536 colors) mode is nearly essential. It works with all types of frames, with the likely exception of Digital Video frames, regardless of how many colors or what was used to make the frame. It is fairly CPU intensive to prepare (I have not written a very efficient algorithm, it does however work with everything), so check your preparation times if you intend to use REQSHED or the D parameter. To blend from the word "first" to the word "second" in 6 steps the following item would be used:

0 "first" <as> / <ab .8> "second" / <ab .6> "second" / <ab .4> "second" / <ab .2> "second" / "second";

    The first frame's switch <as> (or <AlphaSource> if you like) causes it to be stored as the alpha source for all following blends in that item; after that it is discarded, you can't blend across items. The next four frames will be a combination of the words "first" and "second" and the last frame is only the word "second", <ab 0> being equivalent to no blending. Each frame is generated in the normal fashion and then blended with the surface that was stored with the most recent <as> switch -- they are not blended with the previous frame (and if you put non-erase switches in, very odd things would happen). An interesting thing to note with only four steps in the blend is that this could be displayed by a 256 color display, as the default Windows palette contains four shades of gray -- however four steps is a pretty crude blend; a much better instance that requires a 16 bpp display is:

0 "first" <as> / <ab .95> "second" / <ab .9> "second" / <ab .85> "second" / <ab .8> "second" / <ab .75> "second" / <ab .7> "second" / <ab .65> "second" / <ab .6> "second" / <ab .55> "second" / <ab .5> "second" / <ab .45> "second" / <ab .4> "second" / <ab .35> "second" / <ab .3> "second" / <ab .25> "second" / <ab .2> "second" / <ab .15> "second" / <ab .1> "second" / <ab .05> "second" / "second";

    An interesting thing to note is that you can store a new alpha source in any frame, leading to the ability to blend a bit to one word and then change to a new word (whether anyone would ever want to do this is another thing).
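    Something like the following (a hypothetical sketch, assuming the partially blended display is what the new <as> stores) would blend part of the way from "first" to "second" and then fade that result into "third":

0 "first" <as> / <ab .7> "second" / <ab .4> "second" <as> / <ab .6> "third" / <ab .3> "third" / "third";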
   

Image Loading time issues.

    In the ancient past (no modern hardware is slow enough to require this degree of chicanery) if people required the fastest possible loading of bitmap displays for low ISIs (usually because they were using continuous running with an fMRI image sequence) the advice was to convert their bitmaps to 256 color .BMP files while still using a 16 bit display, or possibly a 24 bit display. Because the image is displayed on an RGB surface the fact that it's encoded as a 256 color palettized surface doesn't matter; no changing of colors to match a system palette occurs (some changing does occur because a 16 bit surface is 5 bits of red, 6 of green and 5 of blue whereas a 256 color palette has 6 bits of everything, but that's trivial, or if it does matter use a 24 bit display mode), and because each image is encoded separately the palette for the image can be super fudged by the conversion program to match the colors in the image, and it's impressive what a bit of dithering will achieve. 800x600 8 bit .BMPs load in 50ms on my test bed (preparation A times) vs. 150ms for 24 bit .BMPs vs. 300ms to 600ms for .JPGs. All have a preparation B time of about 40ms (the time to make a DMDX frame out of them once loaded into memory).
   
   



DMDX Index.