How can I avoid being "the negative one" when giving feedback on statistics?
I would suggest approaching your colleague in a humble and inquisitive way (especially since you're a junior member of the team). If you start the conversation with "your conclusions are wrong and here's why" you're going to set a combative tone for the rest of the meeting. There may be reasons that they interpreted the data the way they did that you're not aware of.
Instead, try approaching the situation with something like "I looked at the data and came to this interpretation; can you explain yours to me?" You're a researcher in your own right, so junior or not, your opinion should be valued. But with this approach you at least signal that you're open to the idea of being wrong, and hopefully that will start a constructive conversation where you can debate the merits of analysis type A over type B, and so on.
General feedback rules apply.
Here are some of them:
- There is no need to criticize the person; critique the work instead.
- Stick to facts.
- Describe things, and especially describe your own perception of them: e.g. don't write "the assertion is not justified by the data" but "I can't see how this is explained by the data" (and ideally point to a specific claim that you find equally "unexplained" by the data).
- Don't be negative. Instead, make a suggestion for something better.
- If you don't know an improvement, ask a question (e.g. "I couldn't figure out how this conclusion was drawn; could you please clarify or provide further explanation?").
Last tip: Sandwich your criticism, i.e. start with something good, end with something good and put the meat in between.
(You may also want to google "feedback rules" or "how to give feedback", though not every rule or tip you find will apply…)
It would be nice if statistics were always about the truth and there were a right answer or method for every question. That simply isn't the case, though; many elements leave room for debate. I'm an economist, and I've seen this first-hand in three different areas.
First, I did some interdisciplinary empirical research, working with a sociologist for a while. I was struck by how our assumptions ran contrary to each other: a few times I suggested things that were completely standard in economics, only to discover that he couldn't fathom why we would do it that way. And at least three times the reverse happened, with him proposing methods that were standard in sociology that I couldn't fathom.
Second, I moved into the policy research field. Holy cow, the policy field does things with statistics and econometrics that would make an econometrician roll over in their grave while still alive.
And then it just got worse, because I started working alongside some data scientists. I won't even get started, except to say that they accepted as completely normal things that made me roll over in my grave.
My point in sharing these anecdotes is to suggest a more humble approach than "I don't think those conclusions are remotely justified by the data." Offer criticism, certainly, but don't become the person in the department who is a pedant about every statistical detail. My examples were mainly about venturing outside my own field, but I think the lesson still applies within your primary field: be particularly cautious about objecting to practices that, despite your reservations, are widely used in your field.
None of this is to suggest that you should just ignore things you find incorrect. Rather, try statements like:
- "I'm familiar with using method A as you did here. The issues raised in (somepaper, someyear) look possibly relevant, so you might want to address their points here too."
- "What made you decide to use method A over method B? Maybe method B would be a good robustness check?"
- "A line or two about how you verified the data fits with assumption X might be good here."
In short, approach it by assuming they know what they're doing, then ask helpful questions that lead them toward the points you want to make.