Hello!
I found information in the help menus about using MediaLab responses in DirectRT, but I'm wondering about going the other way: taking information from DirectRT and using it in MediaLab.
Specifically, in DRT we play a little conversation "movie" in which pictures are randomly assigned to different clips of narrative (e.g., sometimes female1.bmp is matched with clip 1, other times with clip 2, and so forth). We have almost all of these stimuli on one line in DRT to make the conversation less choppy (found that tip on the threads, thanks!), and if the same person talks more than once, we can easily refer back to the randomly picked picture. So if female1.bmp was paired with clip 1 and should also be paired with clip 4 because clips 1 and 4 are the same person talking, we used ?b1t2s1 to recall that image (again, thanks, this is an awesome feature).
Later, we ask questions about the clips they heard, but we wrote these questions in ML. Is there any way to have ML call up the correct image that was randomly selected, like the ?b1t2s1 command does in DRT? Or could I somehow have DRT write the stimuli used to a file to be called up later? We already employ a pretty complicated response.xls file, so adding a little more won't kill us.
It's not a dire need, but since we ask about a few different people, we thought it would be nice to have each person's picture come up when we ask a question about what they said. I know the easiest solution would be to write all of the questions about the audio in DRT, but I'm curious whether there is another way.
Thanks for your help,
Susan