When calculating the difference between the beta weights to work out the \(t\) value, do we always subtract the face beta weight from the object beta weight?
For the timeseries from the face-selective voxel, the beta weight for the “faces” explanatory variable (EV), call it \(F\), will be higher than for the object EV (call that one ‘O’). That means the difference between them will be positive (\(F-O > 0\)).
But for the object-selective voxel that difference will be negative. The distribution of \(t\) values is symmetric, i.e. it has the same shape for positive- and negative-going values. This makes calculating \(p\) values (the area remaining under the curve at a particular \(t\) value) confusing for some… but as long as you can remember the sign of the effect (was it \(F>O\) or \(O>F\)?), then you can just use the unsigned \(t\) value to find the \(p\) values for reporting.
Code
```r
library(ggplot2)

score <- 2.1

p9 <- ggplot(data.frame(x = c(-4, 4)), aes(x = x)) +
  stat_function(fun = dt, args = list(df = 158)) +
  stat_function(fun = dt, args = list(df = 158), xlim = c(score, 4),
                geom = "area", fill = "#84CA72", alpha = .5) +
  geom_vline(xintercept = 0, color = "blue") +
  geom_vline(xintercept = score, color = "#84CA72") +
  labs(title = "t-distribution (df = 158)",
       subtitle = paste("for e.g. a t-statistic of t =", score),
       x = "t", y = "density")

p9
```
1.2 \(t\) for faces, objects, and also their comparison?
Also, do we only use the \(t\) value for the specific data set to find the \(p\) value (meaning the face \(t\) value for the face data set)? And if so, what do we do for the non-selective V1 region?
I suggest you report the stats for \(F\), \(O\), but also for comparing them (so \(F-O\) or vice versa). This will allow you to say something about the significance of the “selective” response. For V1 (early visual cortex), that comparison may well not be significant…
2 What are the units for Beta weights?
Good question. They are dimensionless multipliers. But all we need to know is their size and how variable they are (which is captured in the standard error for each weight).
3 \(\pm 1\) standard error interval includes 0.
I got an SEM that goes into the negative for a beta weight. How can this be true? Could it actually be that there is less than zero explanation in our voxel model from this stimulus?
Beta weights can take on positive, 0, or negative values.
Consider, e.g., a regression line with the equation \(y = mx\). It can have a downward slope… the weight \(m\) for that is negative. It just means that for every positive-going step in our independent variable \(x\), we have a decrease in the value of \(y\).
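To convince yourself, here is a minimal sketch in R (the numbers are made up purely for illustration): we simulate data with a true downward slope and let `lm()` estimate the weight.

```r
set.seed(1)
x <- 1:20
y <- -0.5 * x + rnorm(20, sd = 1)  # true slope (weight) is negative

fit <- lm(y ~ x)
coef(fit)["x"]  # estimated weight, close to -0.5
```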
4 Exact vs. inexact degrees of freedom
Will we be marked down for using a df of 175? Should we state in the notes what we should have used? Is this a valid critique for the discussion?
I assume this value for degrees of freedom is the closest that can be found in a table of critical \(t\) values in a book or somewhere online. Using \(df=175\) instead of 158 makes only a tiny difference in practice (the critical \(t\) would be very slightly smaller), and you can just use the values I provide below.
Here is a table that I have computed with R:
Code
```r
library(dplyr)
library(gt)

data <- as_tibble(c(0.05, 0.01)) %>%
  rename(p_value = value) %>%
  mutate(critical_t = qt(1 - p_value / 2, df = 158))

data %>%
  gt() %>%
  tab_header(
    title = md("Critical *t* values for a given *p*"),
    subtitle = md("df = 158, two-tailed")
  ) %>%
  fmt_number(columns = 1:2, decimals = 3, use_seps = FALSE)
```
Critical \(t\) values for a given \(p\) (df = 158, two-tailed)

| p_value | critical_t |
|--------:|-----------:|
| 0.050 | 1.975 |
| 0.010 | 2.607 |
5 Results in both graphs and text?
If we demonstrate results through a table or graph, do we need to report this in-text as well?
Yes. You should provide context for your figures, tables, and results in the written part of the report. That doesn’t mean you have to replicate exactly the same information (after all, graphs and tables can usually be much more information-dense). Have a look at papers / publications to see how authors generally strike this balance.
6 What were the stimuli?
I know we saw the experiment, but please could you release the stimuli used?
When we collected the data from this participant, we ran some computer code (a program we called FFAlocaliser()). You can have a look at some of the details on this webpage.
I’m having trouble creating a graph for the stimulus timings for the experiment. Please could you explain this?
There were:

1. 10 images of faces (which took a total of 12 s to display), followed by
2. 12 s of rest (whitish / gray background),
3. then 10 images of objects (which took a total of 12 s to display), followed by
4. 12 s of rest (whitish / gray background)
… then repeat this loop (with different exemplars) for a total of 240s.
For ideas on how to display stimulus timings in a figure, you could draw some inspiration from illustrations online - search for e.g. “stimulus timing figures” for a relatively good crop.
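If it helps as a starting point, here is a minimal sketch in R of one way to draw the boxcar timings described above. The variable names and plotting choices are my own, not part of the experiment code, so adapt as you see fit.

```r
library(ggplot2)

# one 48s cycle: faces on (0-12s), rest (12-24s), objects on (24-36s), rest (36-48s)
time    <- 0:240
faces   <- as.numeric(time %% 48 < 12)
objects <- as.numeric(time %% 48 >= 24 & time %% 48 < 36)

timings <- data.frame(
  time = rep(time, 2),
  on   = c(faces, objects),
  EV   = rep(c("faces", "objects"), each = length(time))
)

ggplot(timings, aes(time, on)) +
  geom_step() +
  facet_wrap(~EV, ncol = 1) +
  labs(x = "time (s)", y = "stimulus on / off")
```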
8 The guidance says “houses” not “objects”…
The guidance says that the object stimuli were houses, but in lab class you showed us objects like a ladder and said those were the stimuli presented.
The objects included some houses, if I recall correctly, but this version of the experiment looks at broader categories of objects, not just houses. This snippet of text may have survived from an earlier version, where the experiment was specifically trying to locate the parahippocampal place area (PPA), whereas here we are looking at responses to objects more broadly defined.
9 Significant difference from 0? And should we report \(r^2\)?
For testing to see whether each voxel is significantly different to 0, is this a one-sample t-test?
Yes. You get the \(t\) value from the weight (face or object selective) and its standard error by
\[
t=\frac{\text{weight}}{\text{standard error}}
\] and then checking the \(p\) value for \(df=158\) degrees of freedom. An alternative is figuring out whether the \(t\) you have calculated exceeds a critical \(t\) value for \(df=158\) at a \(p\) value of, say, 0.05 or 0.01.
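In R (Excel is just as fine), with made-up numbers for the weight and its standard error, that calculation could look like this:

```r
weight <- 1.8  # e.g. the beta weight for the faces EV (made-up value)
se     <- 0.4  # its standard error (made-up value)

t_value <- weight / se
p_value <- 2 * pt(-abs(t_value), df = 158)  # two-tailed p value

# the alternative: does t exceed the critical t at p = 0.05, two-tailed?
t_value > qt(1 - 0.05 / 2, df = 158)
```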
Is this test also seeing whether the predictor is significant using linear regression? Are these the same thing?
Yes, that’s what I meant by “being significant”. There is a separate question about whether one predictor (e.g. the EV for faces) is different from another. For that you need to compare the difference between the two weights (and use the pooled standard error to calculate that \(t\)).
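As a rough sketch of that comparison in R (made-up numbers again, and assuming for simplicity that the two weight estimates are independent, so the pooled standard error is just the square root of the sum of the squared standard errors):

```r
F_weight <- 1.8; F_se <- 0.40  # faces EV (made-up values)
O_weight <- 0.9; O_se <- 0.35  # objects EV (made-up values)

se_diff <- sqrt(F_se^2 + O_se^2)  # pooled standard error (independence assumed)
t_diff  <- (F_weight - O_weight) / se_diff
p_diff  <- 2 * pt(-abs(t_diff), df = 158)  # two-tailed p value
```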
Should we report this as a regression with \(R^2\)…?
The assessment of whether linear regression does a good job of explaining the data is a separate question (but you get that information from the LINEST() function results in Excel). In that context, \(r^2\), the coefficient of determination, tells you how well the linear regression with the two explanatory variables we have included in our model explains the data: \(r^2\) is the proportion of variance in the data explained by the model. More simply: how much of the data is captured by the model? E.g. \(r^2=0.45\) means that 45% of the variance in the data is explained.
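If you want to see where that number comes from in R rather than Excel, here is a minimal sketch with simulated data (the regressors and noise below are made up purely for illustration; LINEST() in Excel reports the same \(r^2\)):

```r
set.seed(2)

# toy boxcar-style regressors over 240 timepoints, just for illustration
faces_EV   <- rep(c(1, 0, 0, 0), each = 12, times = 5)
objects_EV <- rep(c(0, 0, 1, 0), each = 12, times = 5)
y <- 2 * faces_EV + 1 * objects_EV + rnorm(length(faces_EV))

fit <- lm(y ~ faces_EV + objects_EV)
summary(fit)$r.squared  # proportion of variance in y explained by the model
```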
10 Do we need to do an \(F\) test?
No. There are some statistics reported by Excel’s LINEST() function that allow you to do an \(F\) test on overall goodness of fit for the model. But we didn’t cover this in class, and I don’t expect any of this in the lab report.
11 How to report stats (APA format)?
[It’s hard] to write up my results section so it fits the APA format and isn’t confusing. Are there any resources to report each stat correctly?
I am not going to mark papers based on the details of the APA style, although some of the principles expressed in that style guide are good.
(For me) treat the APA style guide as a guide / set of suggestions for what to do, and think about the principles behind the suggestions. You won’t get marked down on tiny details. Keep in mind:

- can the reader reconstruct/understand what you did?
- is the presentation unambiguous?
- there is more than one way to be correct / write a good results section.
12 Searching Echo360 transcripts
You can look for text in the transcribed video recordings - not just one-by-one, but for all recordings in one go. Very nifty. Video explainer here.
13 Data: coverage of the brain?
It looks like the whole brain was not scanned, so how do we explain how much of the brain was scanned?
You can state that the coverage was optimised to include the relevant regions in the occipital and ventral temporal cortex.
To give you an idea about what the actual coverage was, here is an image (gray: anatomical scan with skull removed; reddish colours: coverage of the fMRI data). Feel free to download that image and use it in your lab report.
Regions-of-interest (ROIs)
What are we actually expected to write in the ROI section?
You can use that space to describe approximate anatomical locations, a key reference (example papers that cite and explain them), and any other details you find interesting (e.g. is it thought to be one very specific region or location, variability across participants, etc.). You can include this for:

- a face-selective region (FFA)
- an object-selective region (several options, nomenclatures)
- an early visual region (e.g. primary visual cortex, V1)
14 The provided statistical maps… what do the colours mean?
What do the statistical maps you’ve given us highlight? Is one colour response to faces and the other response to objects? Or perhaps faces-objects and objects-faces. An explanation of this colour scale would be much appreciated.
I provided three images that show the extent of the maps for different statistical comparisons I made when I ran that dataset through FSL/FEAT. The colours I chose are arbitrary - I just didn’t want to use colour maps that clash… to avoid potential confusion. For two of the maps you can see that using different colour maps allows you to show multiple statistical maps together, which can help with keeping the number of figures down.
You don’t need to know or include the details in your report, but for completeness and for those who are interested in the nitty-gritty:
(I used the standard settings in FSL/FEAT to control the family-wise error rate: Z-threshold 3.1, cluster \(p\) threshold 0.05.)
- one map shows which voxels responded “significantly more” during the face presentations than during the objects
- another map shows the voxels that responded with the opposite pattern: more to object presentations than to faces
- the third map shows areas that responded more to any of the stimulus presentations than to rest (a less conservative test, and therefore a much more extensive “activated” area)