DMDX Help.


Bitmap notes.

    A rather cool thing is the ability to make the background of a bitmap transparent using a feature of DirectX called Color Keying. A graphic frame with the <ColorKey> switch set will examine the color of the top left pixel of the bitmap, and when that graphic is added to the displayed frame all pixels of that color will not be copied. This is especially handy if you have changed the background color or your bitmaps do not have a white background.
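
    As a minimal sketch (the bitmap name ARROW and the 60 tick display duration are made up for illustration; any bitmap with a uniform background color in its top left pixel would do), an item that draws a color keyed bitmap over a line of text in the same frame, letting the text show through the bitmap's background, might look like this:

0 "press the key under the arrow" <ColorKey> <bmp> "ARROW" %60;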

    If you are importing large numbers of .IMG files from DMTG using IMG2BMP.EXE you will not need to stretch the images using some other image manipulation program; DMDX can do it with the
<DefaultBMPMultipliers> switch, typical values for stretching EGA images being <dbm 1.0,1.371>.
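
    For instance, a sketch only (the bitmap name IMGFILE is hypothetical), with the multipliers set once in the parameter block every bitmap the item file displays is stretched by those factors:

<ep> <dbm 1.0,1.371> <eop>

0 <bmp> "IMGFILE";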


Stretch Blt modes.

    For the longest time, whenever you wanted to scale a bitmap smaller with the bitmap multipliers you could get some fairly ugly artifacts, until I stumbled across the win32 SetStretchBltMode function; nowadays you put something like <dbm 1 HALFTONE> in your item file and down scaled bitmaps are just fine.  Up scaled bitmaps have always been more or less fine (beyond the obvious fuzziness issues that are inherent in magnifying data).  Why is this important?  If you want to build an item file that uses bitmaps and runs on a number of machines, and you want a fairly consistent experience from one machine to the next, you'll want to provide higher resolution bitmaps that you scale down using a fraction of the screen's dimensions; otherwise the sizes of the bitmaps will change with the screen resolution of the machine (assuming you're using <vm desktop>, and these days you really should be using it, the days of being able to run everything at a fixed display resolution are all but past).
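
    As a quick sketch of the difference (the bitmap name BIGIMAGE is hypothetical, and I'm assuming a single multiplier scales both dimensions as in the examples below), the first item here shrinks a bitmap to half size with the default scaling mode while the second uses the halftone mode and avoids the artifacts:

0 <dbm .5> <bmp> "BIGIMAGE";
0 <dbm .5 HALFTONE> <bmp> "BIGIMAGE";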

    Speaking of making an item file that runs on multiple machines with scaling bitmaps, you'd probably also want to know the aspect ratio of the screen your item file is running on.  Maybe you can get away with assuming a 16:9 widescreen device regardless of its resolution and scale those bitmaps down to a fraction of the screen width nicely, but if not, in the past you'd have been stymied pretty thoroughly; these days, however, you can actually detect the aspect ratio using the expression tokens VideoModeX and VideoModeY (see the <SetCounter> help).  You've got two choices here.  One (as long as the aspect ratio of all your images is the same, if not see the second option) is to detect the aspect ratio like the <Instruction> help does and then pass one of two macros (one for 4:3 screens and another for 16:9) to all your <bmp> keywords that determines the upper left and lower right corners for your bitmaps.  The other option involves knowing the size of each bitmap, which is perhaps more labor intensive (unless of course they're all the same) but is less prone to accidentally distorting an image, as it involves using the bitmap multipliers that carefully preserve image aspect ratios.  Using DMDX 6.5.0.0 or later you set up a couple of macros that hold the scaling factor for each image and, because DMDX has no floating point arithmetic available, we fudge it with 4 digit fixed point calculations using the fixed point set option FXPSET in the macro keyword, then display the image using the macro in a <bm> keyword.  In the following example the bitmap being displayed is 2048 pixels high and we're aiming to display it at half the screen height:

+101 <macro fxpset 4 f = (videomodey * 10000 / 2048) / 2> <bm `f halftone> <bmp> "image" * ;

    This method is demoed in the graphics portion of the features test demo FEATURES.RTF in this archive:  https://psy1.psych.arizona.edu/~jforster/dmdx/demos.zip.

    With the addition of relativeclicklogging in the <2Did> keyword a good opportunity arose to demo scaled bitmaps and the ability to click on a specific region, or at least to ask a subject to do so and see if they got it right.  Because we're using the normalizedata <2Did> option this item file works across multiple display sizes (essential if you're going to be using it for remote testing).  The bitmaps here are found in the demos archive mentioned above.  The key thing about this demo is using <sc> to store the coordinates that the bitmap was rendered to and then using <sbr> to set the rectangle for the button defined in the <2Did> statement to those stored coordinates, so different aspect ratio bitmaps will all be handled correctly -- however, in order to do that we have to actually display the bitmap before we can change the button rect.  Unlike other uses of <sc> and <sbr> (say <expireif>) we can't just present the bitmap in the same color as the background before testing the subject, so we present it for zero ticks beforehand, followed by a blank frame, gathering the coordinates of its display, and then present the item that gathers the response click.  Other interesting points here include the fact that the bitmaps are smaller than the requested display size (a third of the width of the screen) in most cases, so we're scaling them up (most of the time anyway) with the halftone scaling method.  Also of interest is presenting them with an <inst> keyword with the explanatory text in the right half of the screen (we do have to use the framebyframe option because <inst> spacing is going to freak if it uses centered positioning):

<ep>
<!branchdiagnostics>
<fd 30> <cr> <id #keyboard> <!ntl> <t 100000> <nfb>
<2Did mouse relativeclicklogging,100
    bitmap,.1,.2,.3,.4
    normalizedata>
<xyjustification framebyframe> <inst hardmargin>
<eop>

~1 ml++ mt++ mr++ mb++ <mpr +bitmap>;

! <! GOLFTP is 350 px wide>;
1 <xyjustification center> <macro fxpset 4 f = (videomodex * 10000 / 350) / 3> <dbm `f halftone STAT>  <xy .25,.5> <bmp> "GOLFTP" <sc l, t, r, b> %0 / c;
+101 <delay 2> <inst .5,.3,.9> <xyjustification framebyframe> <sbr bitmap, `l, `t, `r, `b> <xy .25,.5> <bmp> "GOLFTP" <inst esc>, <xyjustification leftbottom>
"In ", "this ", "experiment ", "click ", "on ", "the ", "flag ", "in ", "the ", "image ", "on ", "the ", "left" * ;

    Having just crafted that new example I see that it's almost as convoluted as the old method, which presents the image for two ticks prior to gathering the response, so here's the old way (it uses tildes for macro expansion; not much different, seeing as the definition is in a different item anyway):


~1 <macro fxpset 4 f = (videomodex * 10000 / 320) / 3> <! WINLOGO is 320 px wide>;
1 <xyjustification center> <xy .25,.5> <dbm ~f halftone STAT> <bmp> "WINLOGO" <sc l, t, r, b>;
+102 <delay 2> <inst .5,.3,.9> <sbr bitmap,~l, ~t, ~r, ~b> <noerase> <xyjustification leftbottom>
"In ", "this ", "experiment ", "click ", "on ", "the ", "flag ", "in ", "the ", "image ", "on ", "the ", "left" * ;

    If we use clicklogging instead of relativeclicklogging we can use the data logging macros, adding a line after items 101 and 102 to display a character over where they clicked:

0 ! "X" <xyjustification center> <xy ~.dataloggingx.,~.dataloggingy.>;


Alpha Blending.

    Another cool thing is the ability to alpha blend two (or more) frames together -- although this looks busted these days and is explicitly unavailable when using the Direct3D renderer. An alpha blend adds two frames together as transparencies; a succession of frames with alpha values changing from 1.0 to 0.0 in small increments will fade from one frame to the next. As with all DMDX color things a 16 bpp (bits per pixel, 65536 colors) display mode is nearly essential. It works with all types of frames, with the likely exception of Digital Video frames, regardless of how many colors or what was used to make the frame. It is fairly CPU intensive to prepare (I have not written a very efficient algorithm, it does however work with everything), so check your preparation times if you intend to use REQSHED or the D parameter. To blend from the word "first" to the word "second" in 6 steps the following item would be used:

0 "first" <as> / <ab .8> "second" / <ab .6> "second" / <ab .4> "second" / <ab .2> "second" / "second";

    The first frame's switch <as> (or <AlphaSource> if you like) causes it to be stored as the alpha source for all following blends in that item; after that it is discarded, you can't blend across items. The next four frames will be a combination of the words first and second, and the last frame is only the word second, <ab 0> being equivalent to no blending. Each frame is generated in the normal fashion and then blended with the surface that was stored with the most recent <as> switch -- they are not blended with the previous frame (and if you put non-erase switches in very odd things would happen). An interesting thing to note with only four steps in the blend is that this could be displayed by a 256 color display, as the default Windows palette contains four shades of gray -- however four steps is a pretty crude blend; a much better instance, one that requires a 16 bpp display, is:

0 "first" <as> / <ab .95> "second" / <ab .9> "second" / <ab .85> "second" / <ab .8> "second" / <ab .75> "second" / <ab .7> "second" / <ab .65> "second" / <ab .6> "second" / <ab .55> "second" / <ab .5> "second" / <ab .45> "second" / <ab .4> "second" / <ab .35> "second" / <ab .3> "second" / <ab .25> "second" / <ab .2> "second" / <ab .15> "second" / <ab .1> "second" / <ab .05> "second" / "second";

    An interesting thing to note is that you can store a new alpha source in any frame, leading to the ability to blend a bit to one word and then change to a new word (whether anyone would ever want to do this is another thing).
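
    A rough sketch of that (and of how little call there probably is for it): the following item blends part of the way towards "second", then stores "second" as a new alpha source and blends on towards "third":

0 "first" <as> / <ab .7> "second" / <ab .4> "second" / "second" <as> / <ab .7> "third" / <ab .4> "third" / "third";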
   

Image Loading time issues.

    If people require the fastest possible loading of bitmap displays for low ISIs (usually because they are using continuous running with an fMRI image sequence) they should convert their bitmaps to 256 color .BMP files; however, they should still use a 16 bit display, or possibly a 24 bit display. Because the image is displayed on an RGB surface the fact that it's encoded as a 256 color palettized surface doesn't matter; no changing of colors to match a system palette occurs (some changing does occur as a 16 bit surface is 5 bits of red, 6 of green and 5 of blue whereas a 256 color palette has 6 of everything, but that's trivial, or if it does matter use a 24 bit display mode), and because each image is encoded separately the palette for the image can be super fudged by the conversion program to match the colors in the image, and it's impressive what a bit of dithering will achieve. 800x600 8 bit .BMPs load in 50ms on my test bed (preparation A times) vs. 150ms for 24 bit .BMPs vs. 300ms to 600ms for .JPGs. All have a preparation B time of about 40ms (the time to make a DMDX frame out of them once loaded into memory).
   
   


