| Unit | Topic |
|---|---|
| 2 | Data acquisition (scanning on 3T) |
fMRI data will be acquired in ~30 min sessions (in small groups) on one of our 3T scanners. Have a look at the webpage for the 3T scanners on campus to learn a bit more about the machines. Up until last year we ran our experiments on the 3T Achieva, but the SPMIC has now decommissioned that machine to make space for the new, national 11.7T facility, which is currently being planned in detail.
Two important things to consider:

The protocol will be pretty standard for a cognitive neuroscience scanning session. The plan for the time in the scanner is as follows:
See `FFAlocaliser` for lots of details. The code is written in matlab/mgl, using the `task` library that comes with mgl. It was written by Alex Beckett and DS, based on a version of working code from Justin Gardner.
There are a couple of short YouTube videos explaining a version of the FFA localiser and the fixation-dimming task used to control attention in a real experiment. These should give you a sense of what the subject is doing inside the scanner.
The experiment runs as a simple block design in the following order: `[rest, faces]`, `[rest, objects]`, … The length of each `[rest, stimulus]` cycle is determined by the `cycleLength` parameter (in TRs).
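The block structure can be sketched in a few lines. This is illustrative Python, not the actual matlab/mgl code; the function name is made up, and splitting each cycle half-and-half between rest and stimulus is an assumption about how `cycleLength` is used:

```python
# Illustrative sketch (not the actual matlab/mgl code): build the
# block-design schedule as a list of (condition, n_TRs) entries.
def make_schedule(num_blocks=10, cycle_length=12):
    """Each cycle is [rest, stimulus]; stimuli alternate faces/objects.

    Assumption: rest and stimulus each get half of cycle_length TRs.
    """
    half = cycle_length // 2
    schedule = []
    for block in range(num_blocks):
        stimulus = "faces" if block % 2 == 0 else "objects"
        schedule.append(("rest", half))
        schedule.append((stimulus, half))
    return schedule

sched = make_schedule(num_blocks=4, cycle_length=12)
# e.g. [('rest', 6), ('faces', 6), ('rest', 6), ('objects', 6), ...]
```

Alternating faces and objects across blocks reproduces the `[rest, faces]`, `[rest, objects]` ordering described above.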
To run, make sure the `stimulusCode` folder is on the path and then simply run the following command. The `Escape` key can be used to stop the experiment at any point:

```matlab
FFAlocaliser  % quick test to see what's going on
```
To run at the MR centre, we also want to specify the TR, not run in a small window, etc. So it is probably worth setting a few parameters in the call like this:

```matlab
FFAlocaliser('TR=1.5', 'debug=0', 'numBlocks=10', 'cycleLength=12')
```
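The `'param=value'` strings follow mgl's convention of passing options as text arguments. A minimal Python sketch of how such arguments can be parsed (a hypothetical helper for illustration, not mgl's actual implementation):

```python
# Hypothetical sketch of mgl-style 'name=value' argument parsing.
def parse_args(args):
    """Split each 'name=value' string; store numeric values as floats."""
    parsed = {}
    for arg in args:
        name, _, value = arg.partition("=")
        try:
            parsed[name] = float(value)   # e.g. 'TR=1.5' -> 1.5
        except ValueError:
            parsed[name] = value          # keep non-numeric values as strings
    return parsed

opts = parse_args(['TR=1.5', 'debug=0', 'numBlocks=10', 'cycleLength=12'])
# {'TR': 1.5, 'debug': 0.0, 'numBlocks': 10.0, 'cycleLength': 12.0}
```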
2025-02-06, Denis Schluppeck
3 volunteers (`sub-01` .. `sub-03`), scanned on the 3T Philips Ingenia scanner at the SPMIC UP site. Scanner operator: AC. Start times 0930h, 1015h, 1100h.

(Data available via a moodle link to a zip file on OneDrive.)
For each person we obtained several scans. See the `json` sidecar files copied along with the data for some details.
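The sidecars are plain JSON, so the acquisition parameters can be checked programmatically. A small Python example; the field names follow BIDS conventions and the values here are filled in from the parameters quoted in this document, so treat both as assumptions and check against the actual files:

```python
import json

# Hypothetical sidecar content (BIDS-style field names are an assumption;
# the values mirror the TR/TE quoted for this dataset).
sidecar_text = '{"RepetitionTime": 1.5, "EchoTime": 0.03, "MagneticFieldStrength": 3}'
sidecar = json.loads(sidecar_text)

tr = sidecar["RepetitionTime"]   # seconds
te = sidecar["EchoTime"]         # seconds
print(f"TR = {tr*1000:.0f} ms, TE = {te*1000:.0f} ms")
```

In practice you would `json.load()` the sidecar file shipped next to each functional scan rather than a string literal.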
Functional scans (stimulus presentation with `FFAlocaliser.m`): 2.167 mm in-plane resolution, 2.5 mm slice thickness (so not quite isotropic), TR/TE 1500 ms / 30 ms. The block timing was:

- 12 s OFF (gray)
- 12 s ON (faces)
- 12 s OFF (gray)
- 12 s ON (objects)

… then each repeated for a total of 10 stimulus-rest blocks (5 faces, 5 objects).
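Taking the quoted figures at face value (12 s per ON or OFF block, TR of 1.5 s, 10 stimulus-rest blocks), a quick sanity check of the functional run length:

```python
# Sanity-check the run length from the figures quoted above.
tr = 1.5         # s, repetition time
block_s = 12.0   # each ON or OFF block lasts 12 s
n_cycles = 10    # 10 stimulus-rest blocks (5 faces, 5 objects)

cycle_s = 2 * block_s          # one [rest, stimulus] cycle: 24 s
total_s = n_cycles * cycle_s   # 240 s, i.e. 4 min of functional data
total_trs = total_s / tr       # 160 volumes

print(total_s, total_trs)  # 240.0 160.0
```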
We’ll run a “faces versus objects/scenes” localiser, as this works well and is a very robust experiment.
Stimulus images courtesy of Michael J. Tarr, Center for the Neural Basis of Cognition and Department of Psychology, Carnegie Mellon University, https://www.tarrlab.org/. Funding provided by NSF award 0339122.
Object stimuli from: Brady, T. F., Konkle, T., Alvarez, G. A. and Oliva, A. (2008). Visual long-term memory has a massive storage capacity for object details. Proceedings of the National Academy of Sciences, USA, 105 (38), 14325-14329.
Scene stimuli from: Konkle, T., Brady, T. F., Alvarez, G.A. and Oliva, A. (2010). Scene memory is more detailed than you think: the role of categories in visual long-term memory. Psychological Science, 21(11), 1551-1556.