<![CDATA[The Art of Being Venn]]>I was going to talk more about the Zika virus this month, but honestly, I'm too tired for serious. I have a fellowship application due next month and I'm all serioused-out.

[Image: tired]

So instead, I'm going to talk about one of my pet peeves: poorly drawn Venn diagrams. You may not know the name, but I guarantee everyone's seen this type of data representation. It's the one that's just overlapping circles with words in them.

[Image: venn description]

That's it. You'd think they'd be easy to get right, wouldn't you? But it's a little more nuanced than that. (Yes, that's a quote from the title song of Crazy Ex-Girlfriend. I already said I'm tired! Plus it's a fantastic TV show.) ANYWAY, back to Venn diagrams. So, slap 2 circles together, throw in some words and voila! Right? Not so much. That will get you nonsense like this:

[Image: venn bad 5]

See, the circles are supposed to be descriptors that encompass a lot of things/people/ideas. The place where they overlap should be the things that have both of these descriptors. Dr. Peter Venkman (the ghostbuster pictured) is not a ghost, although he is "a thing I ain't afraid of". Here's a slightly better example of how this works:

[Image: venn platypus keytar]

First off, each circle should be one descriptor. Plus, the blue circle should be called "has a beaver tail and guitar strings", while the yellow one should be "has a bill and keyboard". That way the pictures are examples of the category and not the whole circle. Although, really, how many other creatures with that kind of tail slay at the guitar? But that's beside the point. The middle section (the platypus with a keytar) would then encompass all 4 descriptors. I'll show an example of that later.

[Image: venn friends]

Here's a good one, and rather applicable to me! Two descriptors with (presumably) lots of people who fit into each separately. One very small area of overlap that encompasses my friends. Ta dah! You have successfully Venned at a very basic level.
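
(If you'd like to draw your own, here's a rough sketch using the third-party matplotlib-venn package. The labels and region sizes below are invented for illustration; they're not taken from the actual image.)

```python
# A basic two-circle Venn: two broad descriptors, one small overlap.
# Requires: pip install matplotlib-venn
import matplotlib.pyplot as plt
from matplotlib_venn import venn2

v = venn2(subsets=(50, 50, 3),  # (left only, right only, overlap)
          set_labels=("people who like science",
                      "people who tolerate my jokes"))
v.get_label_by_id("11").set_text("my friends")  # name the tiny overlap
plt.show()
```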

[Image: venn santa]

Add another circle with a new descriptor, and the number of extra categories goes up. Being careful and/or pedantic can still be funny!

[Image: venn superheroes]

In case it's still not making sense, here's a good example. The writing in the circles is small, so here's a link to one you can zoom in on. 3 circles with 3 descriptors; each area has names of superheroes with 1, 2, or 3 of those descriptors, depending on how many circles overlap there. Technically, though, the size of each area should reflect how many names it contains. That is, the green area has more names than the purple area, so the circles should be shifted slightly to make it bigger. But I'll let that one slide. Onward and upward!

[Image: venn singers]

So. Many. Loops. There's a lot to unpack here, but I find it hilarious! Just keep in mind that the main circles have descriptors in bold. So, for example, Bob Marley is a smoker, toker and picker, but not a grinner, lover, sinner or joker. Denis Leary is only a smoker and a joker. Steve Miller is all 7 and now that song will be stuck in my head for days. Science!
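
(And if you think in code, all those loops are just set membership. Here's a toy Python version; the memberships are transcribed straight from this paragraph.)

```python
# Which of the 7 bold descriptors does each singer have? A singer's Venn
# region is determined by set membership.
ALL7 = {"smoker", "toker", "picker", "grinner", "lover", "sinner", "joker"}

singers = {
    "Bob Marley":   {"smoker", "toker", "picker"},
    "Denis Leary":  {"smoker", "joker"},
    "Steve Miller": set(ALL7),  # all 7, cue the song
}

for name, tags in singers.items():
    missing = ALL7 - tags
    where = ("the very centre" if not missing
             else f"outside the {', '.join(sorted(missing))} loops")
    print(f"{name} sits in {where}")
```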

]]>
http://behindlabdoors.com/the-art-of-being-venn/c180293b-41c5-422a-8ea6-24e379223e85Thu, 09 Jun 2016 02:41:32 GMT
<![CDATA[Zika virus: The Criterion Collection]]>If you're not familiar with the link between the recent outbreak of Zika virus and an increase in microcephaly cases, I highly suggest you read this post first (mainly because I like tooting my own horn). Basically, the World Health Organization and the Centers for Disease Control and Prevention have both concluded that being infected with the Zika virus during pregnancy likely causes birth defects. However, this is based on a scientific consensus, rather than direct proof, due to the difficulty in getting that kind of data and the pressing need to minimize risk for babies. So let's talk about how scientists come to a consensus in situations like this.

In the absence of a “smoking gun” (which in this case means definitive proof of causation, not a murderer), scientists have to lean on each other more. There are several lines of evidence that point to Zika virus causing microcephaly, but none of them are strong enough to stand on their own. They’re suggestive, not absolute proof. But if you’re doing something (such as playing a sport) and enough people suggest that you’re terrible, at some point you have to realize it’s probably true! This is why I was always picked last for teams.

[Image: baseball gif]

And of course researchers aren’t happy with just a vague definition of what constitutes “enough suggestions”. There are lists! The main two being discussed are Shepard’s criteria and the Bradford Hill criteria. Hill’s is from 1965 and quite general, but can still be useful in these situations. Shepard’s was established in 1994 and is specific to teratogenicity (things that cause birth defects), so I’ll focus on that one. Keep in mind, not all 7 points on the checklist have to be met, as it’s more of a framework for discussion. So I’ll paraphrase what others have said on this (and published in the high-end journal The New England Journal of Medicine) to make it easier to understand.

1. Proven exposure to agent at critical time(s) in prenatal development.

Yes. The timing of the Zika virus outbreak coincides with the microcephaly epidemic. Like I’ve said before, not strong evidence but it’s a start. Additionally, there have been confirmed reports of maternal Zika virus infection prior to several microcephaly cases.

2. Consistent findings by two or more epidemiologic studies of high quality: (a) Control of confounding factors; (b) Sufficient numbers; (c) Exclusion of positive and negative bias factors; (d) Prospective studies, if possible; and (e) Relative risk of six or more.

Partially. Two studies of sufficient quality have been published, one on the outbreak in Brazil and one on French Polynesia, as well as several lesser studies. None of them are perfect studies, which is why this gets a “partially satisfied” rating. Notice what is taken into consideration for high quality: controls, numbers, bias. All important issues to think about when evaluating data, and things we’ve discussed before.

3. Careful delineation of the clinical cases. A specific defect or syndrome, if present, is very helpful.

Yes. To be honest, this is one of the things I haven’t looked into much because it would require me to see pictures of the afflicted babies. I’m just not willing to do that. So I’ll have to trust the doctors and researchers when they say that the Zika-associated microcephaly has distinct features and patterns.

4. Rare environmental exposure associated with rare defect.

Yes. There are reports of women who have been exposed to Zika virus on a trip, then returned to a non-epidemic country and still had a child with microcephaly, which is normally a rare disorder.

5. Teratogenicity in experimental animals (important but not essential).

No. There is no animal model for Zika infection during pregnancy yet.

6. The association should make biologic sense.

Yes. Zika virus can cross the placenta and infect neural cells, and it’s been found in the brain tissue of fetuses with microcephaly. Plus, there’s precedent, as other viruses (rubella, etc.) are able to cause neural birth defects.

7. Proof in an experimental system that the agent acts in an unaltered state.

Not applicable. This refers to a medication or chemical agent, not a virus.

So there we go, 4 out of 6 applicable criteria are fully met and 1 is partially met. I’m not going to go into the Hill criteria, but you can read about it here, here and here. Basically, that checklist had 7 out of 9 fully met, 1 not applicable (biological gradient, for those looking at the links) and 1 not met (which was also about the animal model). Overall, pretty much a solid case for Zika being a real jerk of a virus.
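
(If you like keeping score in code, here's a toy tally of the checklist above. The status labels are just my paraphrase of this post, not any official scoring scheme.)

```python
# Toy tally of Shepard's criteria as discussed above ("n/a" = not applicable).
from collections import Counter

shepard = {
    "exposure at critical time":          "yes",
    "high-quality epidemiologic studies": "partial",
    "careful delineation of cases":       "yes",
    "rare exposure, rare defect":         "yes",
    "teratogenicity in animals":          "no",
    "biologic sense":                     "yes",
    "agent acts in unaltered state":      "n/a",
}

counts = Counter(shepard.values())
applicable = sum(1 for s in shepard.values() if s != "n/a")
print(f"{counts['yes']} of {applicable} applicable criteria fully met, "
      f"{counts['partial']} partially met")
# -> 4 of 6 applicable criteria fully met, 1 partially met
```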

And now I’m sad and need a cute animal gif. Here we go! This puppy has as much athletic ability as I do! Possibly more.

[Image: sports puppy]

]]>
http://behindlabdoors.com/zika-virus-the-criterion-collection/fd3622c9-a7a2-4b82-b95a-fb0ceeb151b0Tue, 03 May 2016 01:48:17 GMT
<![CDATA[Zika virus 101]]>Let's talk outbreak. Nope, not the movie about Ebola, but a close cousin. Well, "close" as in, they're both about a virus originally found in Africa and that's about it. So, third cousin twice removed?

[Image: family]

Anyway, I'm talking about the Zika virus outbreak happening in South America right now, specifically in Brazil. Until recently, this virus has been fairly localized to Africa and Asia, so North America hasn't paid much attention. It's not deadly and the illness is pretty mild, with only an estimated 20% of infected people even showing symptoms. Plus, its symptoms are very common and could easily be mistaken for a bunch of different viruses: fever, rash, joint pain, and sometimes headache. The one notable thing to watch for is conjunctivitis, or red eyes, which is more common with Zika than other bugs.

If you're healthy and get bit by a mosquito carrying Zika, a couple of weeks later you have a weird flu for 2-7 days, then you get better and carry on with life. In fact, the first outbreak outside of Africa and Asia occurred on Yap Island in the South Pacific in 2007. No one died or was even hospitalized, despite an estimated 73% of the population being infected. So in general, not a particularly noteworthy infection, just kinda annoying.

[Image: sick woman]

The problem, and the main reason this virus is in the media, is what happens if you're a pregnant woman who catches the Zika virus. And here's where things start to get serious, so don't expect as many jokes as normal. Early in 2016, doctors in Brazil started noticing what looked like an increase in the number of babies born with abnormally small heads and severe brain defects: microcephaly.

So of course everyone went a little crazy. Wild theories took hold, with the most talked-about pointing fingers at the larvicide pyriproxyfen and everyone's favourite "evil" corporation, Monsanto (which doesn't even produce the chemical in question). Just to ease your fears, that theory has been thoroughly debunked. There are several regions of Brazil that don't use the larvicide and are still reporting increased cases, while one of the hardest hit cities (Recife) doesn't even use that pesticide, at least within the municipal water. Plus, while the chemical affects insect development, humans don't even make the proteins that pyriproxyfen targets.

Alternatively, many doctors and scientists in Brazil suspected the Zika virus. This relatively innocuous bug had appeared in Brazil in May 2015, around 9 months before so many babies were born with microcephaly. Yes, it's just a correlation, which (as with the larvicide) can be coincidence, but it was a decent theory worth investigating.

[Image: thinking scientist]

Yet I remained skeptical; that's deeply ingrained in my psyche from years of science. The link between Zika virus and microcephaly just seemed too weak, and there were reports that microcephaly cases were being over-reported. So I have something to admit: I was totally wrong. If I had written this post about the Brazilian microcephaly epidemic a month ago (which I started to, but didn't finish), there would have been a lot of "it's possible, but not necessarily true" statements being used.

[Image: oops sorry]

The thing is, it's really, really hard to get conclusive evidence for serious diseases, especially when it involves babies. Nobody's going to let you experiment on their newborns or fetuses, and it's next to impossible to get that sort of data in a rush. Producing diagnostic tests, finding hard data and developing an animal model can take quite a long time. The best we can do is a scientific consensus. And the more I read about this unfortunate epidemic, the more I'm convinced that mosquitoes are the devil.

[Image: mosquito devil]

No wait, I mean I'm convinced that Zika infection in early pregnancy can lead to microcephaly. A pathogen leading to birth defects certainly isn't unheard of. Rubella (German measles), toxoplasmosis (a parasite from cats, which is why pregnant women shouldn't change litter boxes) and cytomegalovirus (I have nothing to add here but enjoy consistency) can all cause birth issues including microcephaly.

So let's talk scientific consensus. What in the world does that mean? I figured it had guidelines (scientists and government agencies generally like rules), but I only looked into what they were recently. However, this post is long enough already and the consensus criteria are going to be very wordy, so I'll put them up now as a separate post.

]]>
http://behindlabdoors.com/zika-virus-101/e9d69baf-f33d-4787-9f84-fc90ff953cf9Tue, 03 May 2016 01:13:22 GMT
<![CDATA[What's in a graph]]>I have a confession to make. I am a graph addict. A data junkie. I make no apologies for this, it is just the truth.

I realized this a few weeks ago when I came across a website that charts all the causes of death across age and separates it by gender or ethnicity. This results in a glorious, interactive, stacked area graph that I can get lost in for hours. Ok, maybe not hours. But I seriously spent 20 minutes playing with it the first time I clicked the link. I think I'm in love.

Which leads me to this month's post. I want to share my love of graphs by teaching you where to look for the interesting tidbits of information. If you only focus on what the author is pointing at, you'll miss out on lots.

Sometimes you can find good things, like an intriguing difference that might spawn more research questions. But sometimes you can discover the error in the research method or analysis that makes you doubt the study's validity. This is the type of critical thinking that takes years (no, decades) to fine-tune. In fact, there's a great article by the same people who made the death chart, with rules to follow when creating any graph type imaginable. It's worth checking out.

So without further ado, I present to you, my newest research project: Things That Make Me Happy.

As you can see, I have graciously charted out my level of happiness with various activities for you. Clearly, everyone is yearning to know these important details about me, whether I prefer watching TV, yoga or running. But let's not be too hasty, we'll take it step by step.

1. Check the y-axis

A lot of graphs you'll see around are bar graphs like this, and it's absolutely critical that you check the y-axis (the vertical one). The categories on the bottom are important too, but you can't understand what you're looking at until you at least know what data is being measured.

Here, we're measuring my happiness, using how often I smile as a proxy. I know, smiles per 10 min is an odd measurement. But I had already made up the numbers without thinking about units and didn't want to have to change it!

But that right there shows how important looking at the y-axis is. Weird units or (the absolute worst) no y-axis information at all are big red flags that something's wrong with this data. So yeah, I totally did that on purpose.

2. Look at the x-axis

Now that you know what we're measuring, check out what categories the data is separated by on the horizontal axis. Here's where you can start thinking about positive and negative controls. These are generally listed first in the graph, before the test groups.

In this example, I used going to the dentist as the negative control. You'd expect this group to be consistently low, which it is. This demonstrates that you can measure low numbers, as in I'm not just a perpetually happy person who never stops smiling.

Likewise, eating ice cream is the positive control, which is consistently high in happiness. I mean, who doesn't like ice cream?? Crazy people, that's who! This shows that the test is able to measure high levels of smiling and can see differences between low and high.

3. Evaluate the error bars

I purposefully haven't talked about the test groups up to this point. I know you're all anxious to know the results, but it's key to have established the model first. Otherwise you can't trust that you really know me!

So now we evaluate the quality of the data by looking at how consistent it is. That is, each time I go to yoga, do the researchers measure about the same number of smiles? That's shown by the error bars: the tees sticking out of the blocks. Big error bars = big errors.

Don't believe me? Good, you're being skeptical, my favourite trait! Look at that same data, as a dot plot.

In this type of graph, each time the happiness is measured, it appears as a single dot. So you can see that I ate eight ice cream cones and the data was between about 7 and 11 smiles/10 min. That's pretty consistent.

Now look at watching TV. Eight measurements again, but the error bar is much longer due to the wide spread of data points. It almost looks like that group could be split into two groups. Maybe I wasn't watching the same program each time or not all of the episodes were good? This is why error bars are important.
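
(If you want to recreate this yourself, here's a minimal matplotlib sketch. The numbers below are invented for illustration, just like mine were.)

```python
# The same made-up "smiles" data two ways: a bar graph with error bars
# (left) and a dot plot showing every single measurement (right).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = {
    "dentist":   rng.normal(1, 0.5, 8).clip(min=0),  # negative control: low
    "ice cream": rng.normal(9, 1.0, 8),              # positive control: high
    "yoga":      rng.normal(7, 1.0, 8),
    "TV":        rng.normal(5, 3.0, 8),              # wide spread = long error bar
    "running":   rng.normal(2, 2.0, 4),              # note the smaller n
}
labels = list(data)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4), sharey=True)

means = [v.mean() for v in data.values()]
sds = [v.std(ddof=1) for v in data.values()]
ax1.bar(labels, means, yerr=sds, capsize=4)  # bars hide the raw points
ax1.set_ylabel("smiles per 10 min")

for i, v in enumerate(data.values()):        # dots show each measurement
    ax2.scatter([i] * len(v), v)
ax2.set_xticks(range(len(labels)))
ax2.set_xticklabels(labels)

plt.tight_layout()
plt.show()
```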

4. Find out the sample size

In the dot plot, you can easily see how many times each category was measured. Not so in the bar graph. That's why the sample size (called n) for the experiment is usually listed somewhere.

However, it can be misleading. For this study, I'd say n = 4-8, but you can see that only one group actually has just 4 dots: running. I really dislike running. Or maybe I don't! You can't really tell with that few measurements. There's one time that I really liked it, so maybe that's the true value and the other times I was just in a bad mood to start with. You won't know for sure unless you increase those numbers.

Spoiler alert: I actually hate running. And the data is completely fabricated.

5. Mathematicize the statistics

Not a word. I'm just trying to make boring stats sound cool.

The last, very important step is to see if the differences you see are actually significant. They could look different enough to be interesting, but not actually be statistically significant. Or they could be statistically significant, but not pass what my PhD supervisor called "the bloody obvious" test. If it's not bloody obvious that there's a difference, it's probably not physiologically relevant.

Most graphs try to keep it simple by only putting up the important comparison; here, that would be whether each activity is higher than the negative control (dentist). If so, that bar gets a pretty little asterisk, as you can see in the first picture I posted.

The problem is that this method leaves out a lot of information. For example, does yoga make me less happy than ice cream? Another way of showing the data is messy, but complete and thus not biased in what the reader is being shown.

The lines connect the bars that have a statistically significant difference. Hard to interpret, right? That's why I prefer the first way, with details in the text about what other differences were seen.
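
(For the curious, that asterisk usually comes from a significance test. Here's a minimal sketch with invented numbers, assuming a plain two-sample t-test; I'm not claiming that's the exact test behind any particular graph.)

```python
# How a bar earns its asterisk: compare a test group against the
# negative control. Numbers are invented; the t-test is an assumption.
from scipy import stats

yoga    = [6.5, 7.2, 8.0, 7.8, 6.9, 7.4, 7.1, 7.6]  # smiles per 10 min
dentist = [0.5, 1.2, 0.8, 1.5, 0.9, 1.1, 0.7, 1.3]

t, p = stats.ttest_ind(yoga, dentist)
print(f"t = {t:.2f}, p = {p:.3g}")  # p < 0.05 -> gets the asterisk (*)
```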

So there you have it. I love ice cream and yoga, hate running and the dentist, and have mixed feelings about TV. Everything you could have ever wanted to know about me. You are welcome.

]]>
http://behindlabdoors.com/whats-in-a-graph/29d9de0e-b7ab-411a-967d-0ac772f755cdTue, 02 Feb 2016 03:38:47 GMT
<![CDATA[12 days of holiday myths]]>Happy holidays! This blog is a bit later than normal in order to perfectly time it with the beginning of the actual 12 days of Christmas! Yeah, I'm just kidding. I decided that putting up lights and drinking eggnog were more important. I just got lucky with the timing.

I decided this month to basically make a mish-mash post of other people's rigorously researched posts... because that tree is not going to decorate itself! Plus, citing lots of other writers is just good science.

[Image: plagiarism]

1) The tryptophan in turkey makes you sleepy.

Let's start with the most popular one. Mainstream enough that Mythbusters even tested it out. People like to blame the holiday turkey for their tiredness post-meal. Nope, sorry, you just ate and drank WAY too much. Long story short, turkey has lots of tryptophan, which is a precursor of serotonin (and, in turn, melatonin, the hormone that makes you sleepy). As sciencey and logical as that sounds, lots of foods have similar levels of tryptophan and don't give the same effect.

2) The eggs in raw cookie dough make it unsafe to eat.

I have tried and failed to find any sort of scientific evidence that commercial eggs carry enough bacteria (commonly Salmonella) to harm anyone. I get the theory: Salmonella is a happy little bug in the chicken's digestive tract. Chickens have an odd little tube called a cloaca, which can release both poop and eggs. Gross. Which means that poop can get on the outside of the eggs.

But the US and Canada have strict sanitization requirements for commercial eggs, so contamination from that route is unlikely. Plus, eggs are generally sterile inside when they're laid. Granted, if the eggshells are wet for a while, bacteria are able to get through. But I still think it's a small enough risk to not stop me, and I'm not the only one. Especially considering the tiny amount of egg containing tiny amounts of bacteria that would be in each raw cookie I eat. All bets are off with store-bought, mass-produced dough. But if we're talking the homemade stuff, I just don't get the fuss.


3) 90% of your body heat is lost through your head.

It's a toque manufacturer conspiracy! Or just your parents trying to make sure you didn't get frostbite. Either way, not true. Think about how cold you'd be if you went outside in a speedo and a warm woolen hat. Besides becoming an internet sensation, you'd likely just get hypothermia and die.

4) Exposure to cold makes you sick.

It's funny, this one makes so much sense I never really looked into it before now. Sure, you need some sort of bacteria or virus to actually get sick, but I figured the cold would probably lower your immune system or something. Turns out, it's the exact opposite. Cold weather can actually boost your immune system in small doses, plus being inside just means your chances of catching something from the people around you are higher.

5) Alcohol warms you up.

Nope, it just dilates your blood vessels so you FEEL warm. It also makes you feel like you love everyone around you and can rock at karaoke, and we both know that's not true, right? In this case, the dilated vessels move more blood by your skin, which actually allows heat to escape and cools you down.



6) You get drunk faster in an airplane.

The thought was that lower oxygen at high altitudes means your body can't process the alcohol as well and you get drunk faster. In actuality, the low oxygen by itself leads to tiredness and sometimes light-headedness, so you just feel more drunk.

7) Reindeer can fly.

This one was obvious, right? I mean, clearly Santa's reindeer are lightweight, sophisticated robots designed to be resistant to cold and fly at high speeds. And of course, Rudolph's bright red nose is the nuclear reactor powering this set-up. They don't survive on cookies and milk. Let's not be ridiculous.



8) Sugar makes kids hyper.

Ok, this one may actually be controversial, but there have been a number of controlled, randomized trials that don't show a difference in the behaviour of kids given sugar or a placebo. The main difference? The parents tend to rate their kids as more active if they think they had sugar. Which is exactly why blinded trials are essential to science!

9) Closing the apps on your iPhone or iPad makes it run faster.

Yeah, I know, I'm stretching the definition of "holiday myth". But twelve is a lot! And we all know that we're going to be spending a lot of time with our phones (face time, if you will) when the crowds of people get annoying. Plus, this one really irritates my husband. As an iOS developer, he really, really wants people to know that this is nonsense! In fact, it may actually slow your phone down because closing the programs stops them from doing quick updates in the background. So when you go to open that app again, it has to do the whole info dump at once and it takes longer.

10) Cranberry juice can treat urinary tract infections.

Cranberries are essential to Christmas dinner, so this counts right? Anyway, it turns out that the evidence isn't very strong for cranberries helping your UTI. It's possible this horrible fruit (you know it's true) could help prevent the infection though. It's not a definite no and it certainly doesn't hurt to eat/drink them, but I'm listing it as a myth anyway. Because twelve!

11) Santa will read your letters.

These days, it's all about delegating and out-sourcing. The elves do the reading and organizing, of course. Good thing they have modern technology to help them and they never close their background apps!

12) Poinsettias are poisonous.

I never really heard this one, but I've asked a few people and apparently it's a common myth. The sap can upset your kid's or pet's stomach, but it's not life-threatening.

I made it! Hurray! And all of those were totally legit holiday myths. Just let me have that one guys.

Have a great Christmas, Kwanzaa, Hanukkah, Flying Spaghetti Monster Day, whatever holiday you celebrate. I'll catch you again in February, because I plan to be too tired on January 1st from lighting fireworks to post anything.

]]>
http://behindlabdoors.com/12-days-of-holiday-myths/8e5eaf17-ffdb-46e0-abcb-da993d87d1bcMon, 14 Dec 2015 05:07:24 GMT
<![CDATA[Correlation, causation and coincidence]]>Alternate title: Are you sure there’s fire at that smoke?

I’ve written previously about curse words for scientists. One term that I didn’t include, but probably should have, is “correlative”. It rates right up there with “data trend”. It boils down to someone telling you that sure, your data looks pretty, but it doesn’t mean anything. Which stinks to hear, even if it’s an important critique. Scientists strive for causation; definitively showing that A leads to B. Not that A and B are related because they occur together (ie: correlation) or randomly occur at the same time with no connection (coincidence).

Or, to use a fun example from a wonderfully nonsensical website:

[Image credit: Tyler Vigen]

Clearly, this demonstrates that longer words in the Scripps National Spelling Bee angers venomous spiders, causing them to bite. I mean, that’s just common sense, right? Everyone knows that spiders become murderous when they feel their intelligence is threatened.

Seriously though, this example is pretty obvious in its ridiculousness, which is exactly why I’m using it. No one would call that anything except a coincidence, even if the two datasets correlate at 80.57%. That number means the data looks pretty, but doesn’t tell us anything about the underlying details.
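
(You can convince yourself of this with a few lines of code. A toy example with invented numbers, not Tyler Vigen's real data:)

```python
# Two unrelated (invented) yearly series can still show a high Pearson
# correlation: the number looks pretty, but says nothing about causation.
import numpy as np

word_length   = np.array([9, 9, 10, 11, 10, 12, 13, 12, 13, 14, 13])
spider_deaths = np.array([6, 5,  5,  8,  7,  9, 11,  9, 10, 12, 11])

r = np.corrcoef(word_length, spider_deaths)[0, 1]
print(f"Pearson r = {r:.2%}")  # high r, but no experiment behind it
```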

So let’s say you run across a graph like this. Sure, the percent correlation is high, yet something just doesn’t quite sit right. What do you do to find out the truth? I’d start by looking at the timing.

[Image credit: xkcd.com]

If event A triggers event B, then you’d expect A to happen before B. In our Spelling Bee example, there’s an increase in word length in 2001, followed by an increase in killer spiders in 2002. That could theoretically be the initiating event. Maybe spiders were totally fine with long words until something happened in 2001 that caused them to become murderous in later years when they heard long words. Or maybe that’s when spiders got cable and could start watching the program.

To test that hypothesis of causation, you need to run some experiments. I’d look more closely at the highly correlated time points (such as 2002, 2005 and 2009), maybe examining whether spiders killed more people in the days or weeks directly following the Spelling Bee. If you had access to spiders without cable, you could introduce them to a pre-recorded Spelling Bee (one group with a short winning word and another group with a long word) and measure anger levels. But until you test your theory with more in-depth studies, everyone just calm down! It’s only a correlation.

Another possibility is that the two factors (bites and word length) are in fact related, but not directly. There could be a third element at play that influences both separately.

This is a lot harder to figure out in studies, because you ultimately have to make an educated guess at what that factor is in order to test it. Maybe air humidity affects the ability of spelling bee participants to spell long words and it independently irritates spiders causing them to bite. How could you possibly figure that out just looking at the one graph? That’s the tough part of science and the reason a purely correlative study is not widely trusted.

The final option is the one that makes me cringe: coincidence. It’s too easy to write data off as “coincidence” or artefact or even irrelevant. A lot of the time, it probably is. However, it can also mean that you’re ignoring data because it doesn’t fit your hypothesis and that’s where you can get into trouble in research.

[Image credit: xkcd.com]

Let’s look at a real world (and not crazy spider) example. One day you buy new shoes. The next day your feet hurt. Obviously, your new shoes are causing your foot pain, right? Causation. But what if it’s just correlation? Perhaps there’s a third factor, like you bought the new shoes to go on vacation where you were walking a lot more than normal. Sure, the new shoes might be causing the pain, but more likely it’s your increase in activity. Or maybe it’s pure coincidence! Maybe you didn’t realize you hurt your foot yesterday and that’s what’s causing the pain; completely independent of what shoes you wear.

I know this sort of thought experiment might seem unnecessary, but this is the basis of many medical decisions. People get a flu shot, then happen to catch a cold a day later and blame it on the vaccine. Or they use an alternative medical therapy that is supposed to cure a cold, and are happy when they feel better 3 days later (which is the normal time it takes to feel better, even when taking nothing). I’m not saying that alternative medicine has no benefit. I mean, I’ve published papers on probiotic use and I’m currently researching the benefits of black tea! What I want people to do is simply consider all the options. Don’t immediately assume that something is causative when it might be coincidental. And with that, I shall step off my high horse and take my leave. Good day sir!

]]>
http://behindlabdoors.com/correlation-causation-and-coincidence/36e1ad8f-6e87-445a-a023-dac3a24bc70cThu, 05 Nov 2015 22:42:10 GMT
<![CDATA[An ode to conferences]]>

How do I love thee? Let me count the ways.
I love thee to the depth and breadth and height
Of 8 foot poster boards lined up by the hundreds.
To the ends of cutting-edge talks and expert questions.
I love thee to the early morning symposiums,
But only if there's free breakfast.

Ok, well now everyone knows why I went into science and not the arts. But let not my amateur attempt at stealing, er, repurposing classic poetry dissuade you: conferences are one of my absolute favourite parts of science.

And it's not just for the free (usually terrible) coffee or the occasional wine and hors d'oeuvre receptions. Honestly, no matter how bad my experiments are going, attending a conference never fails to excite me about science again. You get to watch passionate people talk about cutting-edge work, surrounded by other people who love this as much as you do. I can't think of a better place to be.

Ok, maybe Hogwarts is better. Harry Potter's pretty cool.

Conferences can be anywhere from 100 attendees at the smaller, workshop-type events, all the way up to around 15,000 physicians and scientists at some of the biggest meetings I've been to. There are pros and cons to both. The small meetings are great for networking, because everyone goes to the same talks and eats together. You really get to know the people there over the short (2-3 day) gathering. However, a smaller crowd means a narrower research area. After all, the organizers tend to invite people they're already acquainted with, which means they are interested in similar things. This isn't necessarily bad either, as you really get to know an area in depth.

On the other hand, big conferences always have multiple seminars going at the same time in a broad range of topics. I've gone a full day barely seeing anyone I know! It's quite hard to schedule your time there, because there's so much going on at once and I don't have a time turner. (Yes, more Harry Potter references. You already know I'm a nerd, why are you surprised?)

Because these tend to be 3-4 days long, seminar burnout is a real possibility. People get too ambitious and go to every talk they can fit in at first, but can barely stay focused on words by the last day. The other downside to these massive gatherings is the difficulty of networking. Yes, there are all kinds of major names in your field doing amazing research... that everyone and their dog (lab?) want to chat with. It's definitely more daunting, but can provide some fantastic connections if done well.

So, you ask, how do I get to such a magical place? Well, as with anything researchy, the first step is to get data. You need something to present at the conference, because there's no way your boss is going to pay for a "tourist" to attend.

[Image credit: Lucille Petruzzelli]

Conferences typically have an abstract deadline months in advance. These are similar to the abstract for a paper, but with a few key differences. Where a paper abstract focuses on conclusions, here you want it to be result-heavy. You need to show the people choosing abstracts that you have lots to talk about. The research you're presenting can't be published yet, but it should be enough for at least a short paper to be written. Although, you can usually get away with less for the small conferences.

From there, the abstract selection committee decides which abstracts are good enough to be accepted. The best of those get a talk (typically 10-12 min followed by 5 min for audience questions), while the rest are selected for posters. The large meetings can have a surprisingly low acceptance rate, where up to 35% of the submitted abstracts don't get in. The small meetings tend to be more inclusive. I'd imagine they wouldn't want to accept some members of a lab and cut out others, since it's a close setting. But that's just my theory.

Poster sessions are a completely different world at large conferences. Rows of poster boards are lined up back to back as far as you can see. Words seriously don't do it justice, so here's a random picture I found online.

That is not an exaggeration. You're one person in a sea of words and graphs and you only have about 10 seconds to catch someone's attention, so a good title is key! Poster presenters typically have to be at their spot for an hour in case anyone has questions or wants to be taken through the data. It can make for some really interesting discussions. Or it can be painfully boring as you watch people walk past without slowing down as their eyes glaze over on the 4th day of the conference when they're just counting down the minutes until the shuttle leaves for the airport. Or maybe that's just me.

I know some people enjoy poster sessions, but honestly I'd rather just go to talks. It's probably because I don't like talking to people.

The best talks, I find, are in the 20-30 min range and are typically given by senior post-docs or head researchers themselves. This is where you get to hear the cutting-edge research with plenty of time for background explanations so you can fully understand what's going on. I love going to these distinguished seminar sessions in an area I'm only kind of familiar with and learning just a ton of interesting new ideas. I once went to a talk simply because the title said something about taste receptors in the intestinal tract and I just wanted to be able to make jokes about tasting your own poop. But it turned out to be one of the most fascinating talks I've gone to, because it was all new information!

And also, poop jokes. Always poop jokes. And with that high class attitude, I'm done. Mic drop.

]]>
http://behindlabdoors.com/an-ode-to-conferences/ec52b3ce-b5df-406a-8651-faae0a1a8e3fTue, 06 Oct 2015 03:10:45 GMT
<![CDATA[A day in the life]]>... of a postdoc/graduate student, at least. I have no idea the crazy things those professors get up to! Probably just boring meetings and endless grant writing, but you never know. Maybe they have secret bubble machines for stress relief!

Anyway, I occasionally get asked the dreaded question from a well-meaning person: “what does a regular day look like for a scientist?” I always want to reply “it depends on your definition of regular,” but somehow I don’t think that would be a particularly good response.

The thing is, I don’t have a short answer for it. My days can sometimes be quiet and methodical, but more often than not, they’re rather chaotic and require me to be flexible. Some researchers have adapted to this life by eschewing daily plans altogether. Only the essentials get put in a calendar, and even then it’s in pencil (or pixels that can easily be moved to another day).

I go the other way. I know I function better with structure and a task list that I can check off, even if it has to be moved around as things crop up. About half the time, I’ll have 1 or 2 experiments going that I know are going to take up a good chunk of time during the day. Or I’ll have a lab meeting, or a seminar to attend. So I’ll schedule those into my daily planner first. But many experiments have incubation times where I have to just sit and wait. If it’s only 10-20 min, maybe I’ll just take care of emails or check Facebook, ahem, PubMed for new papers in my field. It’s not enough time to really read a paper, but at least I can read a few abstracts.


[Photo by Intel Free Press]

If it’s a long break during a protocol, I can sometimes start another experiment. Although there have been some times where too much gets scheduled and hands-on times overlap and oh the stress! That’s where some of the flexibility comes in. By knowing the techniques (or having someone experienced around like a good lab manager), you can decide which experiment can be pushed off for a few more minutes and which one is critical that it gets done on time. I can’t say enough good things about the wonderful people I’ve worked with who have taught me this essential knowledge!

Once the big things are in place, then I’ll start jotting down a few littler things that can be done during any extra down time. Sometimes it’s cleaning equipment I’ve used, or preparing tubes for the next day. Maybe even looking up details for a new test that I want to try or ordering materials I need. Preparation is key for experiments to not get bogged down before they even get started.

Finally, I also need to schedule in time for writing. Be it a manuscript I’m preparing, funding that I’m applying for, or even just keeping up to date in my lab notebook. Which, I’m definitely not several weeks behind. Nope, absolutely not me. I have no idea why you’d even ask me that, I’m not a procrastinator at all...

So I hope you can see the problem with answering what my day looks like. It really can be almost anything. Sometimes I run my feet off preparing samples and running tests all day long. Some days my butt hurts from sitting for hours on end, my fingers are stiff from typing and my eyes are glazed over from reading papers. But most days are some sort of weird hybrid of both, which is exactly how I like it!

]]>
http://behindlabdoors.com/a-day-in-the-life/346a34b9-96f3-4cd4-b2da-b0c1afa8813dMon, 03 Aug 2015 19:21:51 GMT
<![CDATA[Click bait]]>This post is just some random lists about objects I see daily in the lab. It was supposed to be a quick, fun post to commemorate my first year of posting (woo!). I did not realize how much effort these click bait lists take! And thus begins another late post. I better not quit my day job... especially since this doesn't pay anything.

6 pieces of lab equipment I wish I had at home

Parafilm: This is Saran Wrap's awesome cousin. It stretches and seals anything. ANYTHING.

Kimwipes: Like a cross between Kleenex and tissue paper, it doesn't leave papery bits behind and absorbs like crazy. Not very soft on the nose though, trust me.

Graduated cylinders: Way easier to read than a measuring cup and more accurate! Don't worry, you don't have to tell me I'm a nerd. I write this blog, I'm well aware that I am.

Extra long forceps: Every time I drop something into a narrow glass, I wish I had these foot-long suckers.

Biological grade 100% ethanol: Bring on the party! Just kidding, seriously, there are laws. But I've heard stories about back in the day...

-80C freezer: No half-melted ice cream or warm beer in this house!

10 things that you can find in every health science lab

(besides the stereotypical flasks, beakers, etc)

Bottles of clear liquids: Sorry, it's not like TV. Fun, coloured solutions aren't common.

Brightly coloured lab tape: To make up for the clear liquids, maybe? Lab tape is thicker than even painter's tape so that it's easy to peel off, and smooth for ease of writing.

An endless supply of Sharpies: To write on the lab tape, boxes, anything plastic, and even notes if need be. Thin and thick, large and small, we like them all! Except yellow, which never shows up.

Vortex: When things are a mixin', don't come a knoc... no, wait, never mind.

Eppendorf tubes: Little tubes that hold 0.75-2 ml of liquid with a snap lid. Pretty much every sample we work with fits in these.

Pipettes: Even CSI got this one right! Accurate down to less than 1/1000 of a ml, I can't even imagine doing work without them.

Latex gloves: As far as the eye can see. Also sometimes nitrile. NEVER powdered.

Fisher Scientific timers: Other companies make timers, or so I'm told. Those must be what I've spotted in the back of lab drawers, tucked in with the dead batteries and empty tubes.

Pens with random drug names: Grants only stretch so far! And drug companies have piles of these at every vendor show.

Coffee machine(s): Researchers were found to be the number one coffee-consuming profession for a reason.

9 lab items with odd names

Presented without explanation, for increased ridiculousness. Or because I'm lazy. Either way.

*My bad, that's a sonic screwdriver! Here's the real thing.

]]>
http://behindlabdoors.com/click-bait/93a99e91-d891-42eb-8f56-40c972d3a1c3Wed, 08 Jul 2015 03:20:52 GMT
<![CDATA[Return to sender]]>Doesn’t it feel like an eternity since I started blogging about the peer review process? Guess what, it’s actually a fairly accurate of a timeline for the process! It’s long and drawn out, but that’s not necessarily a bad thing. Rush jobs are rarely done well, and in this case, can even be a sign of fraud!

Hearing of a researcher who faked data is unfortunately too common, but a pretty new concept (to me at least) is the idea of a fraudulent review. Rather than stand up to the scrutiny of your peers, some people have taken to writing their own reviews using fake names or false email addresses under a real scientist's name. The major clue that this deceit was happening was a quick response time (less than a day, sometimes even within hours) resulting in a (surprise!) glowing review. Considering half the battle for an editor is finding researchers willing to fit a peer review into their busy schedules, this seems really obvious. So ultimately, the fault of these fake reviews getting through the system lies at the feet of the editors in my opinion. No matter how busy you are, take the time to do your job properly.

The good news is that journals are becoming savvy to this fraud and are cracking down. Last year, a "peer review and citation ring" was discovered by the Journal of Vibration and Control and 60 articles were retracted. Currently, a major publishing company, BioMed Central, is in the process of retracting 43 papers due to fabricated reviews. This is crazy stuff, considering how uncommon it is to hear of even a single paper being retracted.

But back to what peer review looks like for the average, honest, non-horrible scientist. Assuming you have actual reviewers on your paper, you’re almost certainly going to need to make changes to the manuscript. The reviews typically come back as a brief summary of your paper, followed by a list of major issues to be addressed, then any minor edits. Each of these points needs to be dealt with in the manuscript itself, then a Response to Reviewers must be written that addresses each point individually. This is where you can argue with their critiques, if you want. Sometimes it’s worth it! But not usually, and even then it’s a give-and-take. “I’ll give you the citation of what’s obviously your paper, but I’m taking the liberty of not doing the suggested long and boring experiment that won’t add anything to my conclusions.” That sort of thing.

Keep in mind that reviewers can pretty much ask you for anything, including additional experiments to further support your conclusions. In fact, it’s cause for celebration if a review comes back without extra experiments suggested! Editing is way easier (and quicker!) than having to do more bench work. It’s also occasionally acceptable to say, “Nope, I’m not doing all that work for this lower impact journal!” Ok, well, you may want to be more diplomatic. I’d recommend, “While the suggested experiments would likely generate interesting results, they are beyond the scope of this paper.”

It’s a delicate balance between refusing to do any extra work and agreeing to do everything. Refusal will often result in a rejection after the first round of revisions, which most certainly can happen. Yet agreement won’t necessarily get you published either. Sometimes Dr. Grumpy Reviewer will just find more roadblocks to throw in front of you until one of you capitulates. But I like to be optimistic about the nature of people. I truly think that peer review, for the most part, improves a paper and leads to better science. The critiques aren’t personal attacks, even though they can often feel like it.

My own exciting news is that yesterday I got an acceptance letter for a paper I wrote in my previous lab. Hurray! … after almost 6 months and 3 rounds of reviews. No joke. It’s quite good timing actually, as it demonstrates what I’m talking about in this post. No experiments were suggested, which was good since I’m literally a continent away from my samples! I’m sure other people in the lab would have been able to help out in exchange for a middle authorship, as I’ve done for others, but it was still a lucky break. Plus, reviewer #2 was quite happy with the work and only had 3 small comments. However, reviewer #1 was another story. Here’s a generalized, considerably paraphrased account of what happened. There were other critiques, but this was the main one.

First review:

“Why did you use these 2 tests for the basis of your paper? They’re commonly used for another process entirely, and there are other standardized tests you could have used instead. Moreover, the procedures are not explained in the text and the 2 tests were incorrectly administered. Test #1 usually requires [insert all kinds of technical details here].”

My response:

“While less common, test #2 has been characterized by several groups for this function when administered as we did. The manuscript has been updated to specify these differences (pg 5 and 8). *List of 4 supporting paper references* Test #1 is indeed conventionally used to study that other process. However, as the regular way didn’t work in our situation, we modified the data collection to also include this process. This is explained on pg 7-8. In separate, unpublished trials, we attempted to use those other tests. However, again this didn’t work in our situation. Therefore, we could not use any of the data collected and we did not pursue these tests further.”

Second review:

“Regarding the first part of test #1, the authors do not mention any time they did this other part of the procedure. If that was correct, the test was invalid (see this paper reference). I am not convinced at all that this process can be accurately measured with test #1. By contrast, test #2 was correctly administered and I agree with the authors’ interpretation.”

My response:

“Test #1 did not include that part of the procedure. This is what we did instead. Details about this have been added to the methods section.”

Third review:

ACCEPTED

We wore him/her down! But honestly, it improved my work. If our methods weren’t perfectly clear to a reader, I didn’t do my job properly. Especially if that means they doubted my conclusions, because that is the name of the game folks: Convince the audience you know what you’re doing!

]]>
http://behindlabdoors.com/responses/742a55bc-2788-42b6-a3ad-8ec4626c8161Thu, 11 Jun 2015 02:00:13 GMT
<![CDATA[Reviewing the irony]]>Sometimes, just sometimes, the stars line up and the subject I’m discussing on my science blog gets big media attention. Yay, relevancy!

I mentioned last post that most peer reviews in my field of study are single blind, basically letting the reviewers be on the cop side of the one-way mirror. While this does seem to work, there are of course problems with it. Reviewers can be mean if they don’t like one of the authors, or they can let things slide if the authors are well known or friends. Since science really does try to minimize bias whenever possible, the ideal situation would be to make reviews double blind by erasing the authors’ names from the manuscript sent to the reviewers. To be honest, I’m not entirely sure why this isn’t how the system works. Laziness perhaps? It takes quite a bit of effort to change even small things that are rooted in a bureaucratic process.

Well, here’s to hoping that a blatant display of sexism is the push needed to change that. In short, two female authors recently had a manuscript rejected based on one review that flat out told them they needed male help to do better science. They appealed the decision, waited 3 weeks, got frustrated and tweeted about it, leading to #addmaleauthorgate. Clumsy hashtag, but effective.

[Image: screenshot of the reviewer's comments]

You read that right. Men are apparently purely fact-based creatures with better stamina, unlike those weak, emotional women. Surely that’s why they make over 30% more than females on average; they've earned that gender wage gap.

Ok, breathe Christina, stay calm. Ignore the irony of a sexist review of a paper on the gender differences in the transition to a postdoc position. Let’s focus on the scientific process. The first question that comes to mind for many people is “what if the reviewer’s right and it’s just bad science?” To which I respond, so what? If it’s bad science, point out where they’re wrong in the manuscript and tell them to deal with it. Do NOT insult the writers, regardless of gender, by telling them they need better people involved in their work. And especially don’t base the concept of “better people” on their genitalia. The blatant sexism is horrible on its own, but it’s not even backed up by an attempt at constructive criticism.

Almost as bad, where was the editor in all of this? There’s a reason for having a 2 step process for peer review. There’s no way such obviously prejudiced comments should have made it back to the authors. The good news is that the journal involved agrees and has taken steps to rectify the situation by removing the reviewer and editor.

So what does this mean for the peer review process? This is a dramatic example of a broken system. Sure, not all reviewers are terrible people, but if the review process allows this to slip by sometimes, how much bias and discrimination is being caught and therefore hidden? Some people have tweeted to Dr. Head (Dr. Ingleby’s co-author) that we should move to an open review format. That is, everyone involved in scientific review knows who’s doing what.

I disagree. I think double blinded is the way to go. Open review could work if implicit bias didn’t exist, but it’s sneaky. You may not even know you view genders or races slightly differently and so how can you correct for it? I certainly didn’t realize I had these kinds of subtle biases until I took the Implicit Association Test. I highly recommend this quick but interesting test.

So we’re all human, and we likely all have some sort of subconscious bias. Sure, we could try to overcome it and work around it, but I’m an advocate of keep it simple (stupid). Blind both parties of the peer review process and focus on nothing but the science itself. I’ve actually had one manuscript reviewed this way and, while a bit annoying, it’s quite often not too difficult on the author end of things.

However, this becomes a bigger problem when you're doing a follow-up study and want to cite your previous paper. How do you say "based on our previous results, we decided to look at..." without a reviewer in your field knowing who you are? Even specialized equipment or techniques can give away an author's identity. Despite these concerns, I still think it’s worth it. Sure, if a reviewer really wants to know whose work they're reviewing blindly, they'll figure out a way. But I'd prefer to think that they'd be more focused on evaluating my work, which could level the playing field for minorities and women.

This may be an illusion of fairness, but an article in Nature last summer suggests that double blinded reviews may be increasing in popularity. So obviously the world revolves around me and my opinions! Don't worry, I’ll solve all the world's problems, right? Friends? Is that laughter I hear??

]]>
http://behindlabdoors.com/reviewing-the-irony/c7c4685a-9b7f-46b1-8d33-07a6ee83adddTue, 05 May 2015 02:52:11 GMT
<![CDATA[Anonymous complaints: the peer review story]]>Ok, the experiments are done, the journal is picked out and the manuscript is written. It’s time to make a sacrifice to whichever science god you follow, press the submit button on the online submission form and wait… and wait, and wait some more. Most will take close to a month, but my experience has been closer to 2 or 3 months. And that’s just for the first round of reviews!


[Image credit: Nick at http://www.lab-initio.com/]

This may feel like forever, but peer review really does end up improving your paper. Plus, the alternative is a fast rejection letter! The editor is the first obstacle to pass, as he/she quickly screens incoming manuscripts. This is to decide if the article suits the journal’s focus and is of high enough quality (mostly the writing, but somewhat the science) to make it worth taking up the time of busy reviewers. If you send bad papers to scientists too often, they’re going to stop agreeing to do it.

If the article passes these minimum standards, it typically gets sent to 2 scientists in that field of study and they’ll have around 3 weeks to submit their opinions. Between the pre-screening, finding scientists willing to do the review, waiting for those (often late) reviews, and making an editorial decision based on them, it’s easy to see how this step can take some time.

To speed it up a bit, some journals ask for reviewer recommendations from the authors rather than having to search for experts. Otherwise, the editor will select people based on who’s publishing in that field (such as those cited in your article) or people they know from conferences. To be honest, I’ve never been involved in the editor side of things, so I’m getting most of this information from online resources.

What I do have is a bit of first-hand experience with being a reviewer. I’m lucky enough to have had a great PhD supervisor who recommended me to a low impact journal for a review that he wasn’t able to do. From there, I was asked to review a few other articles, bringing me to a grand total of… 3. Officially. I’ve also had a hand in a few others for bigger journals, since one of the training aspects of a postdoc is to help your supervisor with peer review. But, obvious disclaimer, I’m certainly no expert yet.

The peer review itself consists of just a few sections. First, a quick summary of the paper, including the overarching conclusions, the study’s strengths, any generalized weaknesses and whether it contributes to its field. Not too detailed, that’s for the next section: major edits. Here, it helps to have the criticisms numbered, as the authors are going to have to respond to them point by point. These should cover anything that makes it hard for you to believe their conclusions, read the paper or understand how experiments were done. I won’t go into details, as there are plenty of websites already discussing how to write peer reviews.

Next up is the third section, which is for minor edits. This is where you get all your nitpicking out. Yes, it is hard to understand the data when an axis is mislabeled, but is that really a major concern? I've actually had one reviewer tell me, as a major critique, that there wasn't enough space between the graphs in a figure. Never mind that graph spacing can be fixed during the page-layout stage before publication; is that honestly a make-or-break issue?

So I would hope it goes without saying, but don't be a jerk, no matter how tempting it is. While in theory most reviews are single blind (i.e., the authors don't know who the reviewers are, but the reviewers can see the authors listed), it's often all too easy to figure out who is picking apart your work. Maybe they'll insist you add a reference that just happens to be theirs. Or they'll criticize a widely accepted conclusion/assumption/method in the review while being publicly vocal about that same opinion. Scientific fields tend to be quite small and nichey (Yes, that's a word. Or at least I'm declaring it is now!), so hiding a mean review behind anonymity isn't a good idea. Especially considering that the editor still knows who you are!

The last part of the peer review is the recommendation, which is confidential to the editor. This is just a couple of quick sentences summarizing the strengths and weaknesses, followed by what category it falls into:

  1. Accept
  2. Accept with minor revisions
  3. Accept with major revisions
  4. Revise and resubmit (Also called reject with hope, which I think sounds sort of mean. “Sorry Sally, I’m rejecting your offer of a date. But maybe if you become prettier in the next week, I’ll revise my opinion!”)
  5. Reject

#1 is almost unheard of, because it means the paper will get published completely as is. And really, no one's that perfect. The next 2 are more common and require a response to the reviewers. I was going to go over that in this blog piece, but apparently I'm too long-winded! So I'll cover that next time.

#2 tends not to go back to the reviewers and can be decided by the editor alone. But #3 and #4 have to go through another round of peer review with the same reviewers, which means another month or more. You can see how the time adds up quickly! And if you're racing with another lab to publish your data first, every week matters. In fact, there are stories of horrible people using this delay to their advantage and stealing an idea.

Lastly, there's #5: full-on rejection. The journal will not publish this study, no matter how much you fix it up. No need to say more. Just move on to another journal.

After the peer reviews are submitted, the editor steps in again and ultimately decides whether the manuscript will get accepted or rejected. But sometimes the reviews are polar opposites of each other (which happens more than you’d think in a supposedly objective discipline), and the editor can’t decide what to do. So they bring in the big guns: reviewer #3 (warning: foul, hilarious language).

But that’s my time, folks. Or rather, your time… Either way, this sucker is long enough! Next time, on Behind Lab Doors, can Dr. Author respond to his reviewers without pissing them off? How will Dr. Scientist react when she finds out that peer review fraud rings exist? Stay tuned!

]]>
http://behindlabdoors.com/anonymous-complaints-the-peer-review-story/8fa556b2-1103-4511-8a84-2ead959a6f1eTue, 07 Apr 2015 14:32:27 GMT
<![CDATA[Put it all together]]>I’ve been debating what to write in this post. I want to outline what a research paper looks like and what each section is for, but it’s so dry! My husband assures me that he didn’t know a lot of this information before he met me though, so I’m going ahead with it. Fair warning! (And blame him if it’s boring.)

Let's start with the abstract. This is the part I hate writing because it has pretty strict space limitations (usually 250 words). That sounds like less work, but it actually means you have to cram your entire background, hypothesis, results and conclusions into less than a page! This is often the only section of your paper that gets read. That's kinda sad, but true. There are simply too many articles published daily to read them all, so the abstract is a peek through the keyhole to see if it's worth opening that metaphorical door. Which means you have to be very picky about which results to mention (only the absolute highlights), word things very succinctly, and make sure not to overstate results. Not an easy job, but a skill worth developing.
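
(If you're the programming type, here's a toy sketch of the word-count check every abstract writer ends up doing obsessively. This is entirely my own illustration, with placeholder text and a made-up limit variable, not any journal's actual tool.)

    # A toy check (mine, not any journal's) for that ~250-word abstract limit.
    WORD_LIMIT = 250

    abstract = (
        "We studied a thing. The thing did something unexpected. "
        "Here we show, with well-controlled experiments, why it matters."
    )

    word_count = len(abstract.split())
    status = "you're fine" if word_count <= WORD_LIMIT else "start cutting"
    print(f"{word_count}/{WORD_LIMIT} words -- {status}")

Of course, the hard part isn't counting the words; it's choosing which 250 get to survive.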

The second section is the Introduction; pretty self-explanatory. This is where you lay out the background information a reader needs to be able to understand your study. Keep it to the point and well referenced. No one likes a meandering storyteller who you’re pretty sure is making stuff up on the spot. Unless you’re playing D&D I guess.

Next up is Results, although sometimes it comes fourth, after Materials and Methods. Here you get to put up your pretty graphs and tables, and briefly describe them. You might need to discuss them just a bit in order to justify your next experiments, but that's pretty much it.

Now for the Discussion. This can be tricky to write, as you don't want to simply re-state the results. This is the place to summarize what you discovered and put it into a broader picture, both in terms of what else is known about this area of research and how it might impact the world. In my case, this often means how it might help human health and medicine. It's a bit of an odd thought experiment when you're researching very basic cell biology. But keep in mind that antibiotics revolutionized medicine after one dedicated researcher decided to figure out why his mold-contaminated bacterial plates had rings where nothing grew. It had been observed many times before, but Dr. Fleming was the first to consider the impact it might have on human disease and fully develop that idea.

Finally, Materials and Methods. This is typically the section I start with, because it's tedious but makes me feel productive, which helps motivate me for the rest of the writing. One of the foundations of science is reproducibility. If no one else can get the same results as you, that means one of 3 things:

  1. There's something different in the details of how the experiments are performed between labs. This can sometimes lead to really cool science, such as the realization over the past 10 years that results from animal models can vary widely between facilities because the animals' healthy intestinal bacteria are slightly different! But mostly, this just stinks.
  2. The published results are not real. Not necessarily malevolently, but sometimes an interesting phenomenon is simply an artefact.
  3. You're lying, but that's a worst-case scenario (ahem, I'm looking at you, Andrew Wakefield).

The most frequent result is simple frustration from a graduate student who wants to try out a cool new technique, but can’t because the paper is too scarce on details. For the love of glob, just take an hour and read this section over, no matter how mind-numbingly boring it is!

So I lied. The Materials and Methods isn’t really the final section. You also have a references section, but you’d be absolutely crazy to not use some sort of citation software to generate this for you. Seriously, how did people get any research done without computers for manuscript writing and the internet for journal database searches?? #grateful #tryingtoberelevant #i’mhipi’mcool #idon’tevencapitalizefirstpersonpersonalpronouns #nailedit
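
For a sense of the grunt work that software saves you, here's a little sketch of formatting a single reference. (A toy example of my own, not how any real citation manager works internally; the reference is Fleming's real 1929 penicillin paper, but the output style is made up.) Now imagine doing 60 of these by hand, in a different style for every journal you submit to.

    # A toy illustration of the formatting that citation software automates.
    reference = {
        "authors": "Fleming A",
        "year": 1929,
        "title": "On the antibacterial action of cultures of a Penicillium",
        "journal": "Br J Exp Pathol",
        "volume": 10,
        "pages": "226-236",
    }

    citation = "{authors} ({year}). {title}. {journal} {volume}:{pages}.".format(**reference)
    print(citation)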

And that’s it. 5 easy steps to write a paper, or something like that. The hard part is over, right? Sure. Until you get a grumpy peer-reviewer that makes you re-write most of the paper. Not that I’m talking from current experience or anything. (I am.) I love reviewers of all shapes, sizes and dispositions. Please don’t reject my paper if you’re reading this. Please?

]]>
http://behindlabdoors.com/put-it-all-together/921f8cf6-2b38-4d9e-bf81-ec3c53b0ddd4Fri, 06 Mar 2015 14:35:05 GMT
<![CDATA[%&*#]]>One of the first things people (ok, teens) ask of foreigners is how to swear in their native language. Since science, with all its jargon, can almost be considered a language of its own, I figured I should let people in on a few key cuss words. I don't mean justifiable (or not) critiques of a study, such as "poorly controlled" or "unconvincing data". Once I got "oversimplified to the point of factual inaccuracy". Ouch. What I mean is words or phrases that will make a researcher cringe, even if they're not aimed at them. In no specific order:

Scooped
adjective
You know that study you've been working your butt off on and are 2 experiments away from submitting to a nice journal? Yeah, someone else had the same idea and just published it. They're going to get all the citations, and you're going to be left in the cold.
Sample sentence: "Dude, you just got scooped by Jim's lab."

Artefact
noun
Sorry guys, I'm not Indiana Jones. Plus, his kind is spelled "artifact", not with an e. This kind is a result that looks interesting but was actually created by the experiment itself, such as through a technical error. Definitely a downer if you think you've figured something out.
Sample sentence: "I thought I cured cancer, but it was just an artefact."

Incremental gain of knowledge
phrase
Most commonly found in reviews of grant applications, this is widely applicable to any body of work you wish to belittle. Since high impact journals demand novel, exciting data, you're essentially telling this person their work is kindergarten level. Granted, it's often a valid criticism, but it's still cringeworthy.
Sample sentence: "Your study only gives incremental gain of knowledge and your mom stinks too!"

Exploratory
adjective
This sounds good, right? Exploration is a good thing! Yeah, not when we're talking about study design, again often seen in grant applications. In this context, it means unsupported by preliminary data and generally not hypothesis-driven. Also called a "fishing expedition", this is where you cast a wide net and hope you pull in some significant data. An acceptable and often necessary way to start a new idea, but definitely not what reviewers want to finance with half a million dollars or so.
Sample sentence: "We could fund this exploratory grant, or we could just give the money to that cat on the sidewalk and hope for the best."

Data trend
noun or verb, depending on use
Don't say this. Just don't. If your data's not actually significant, you're going to make the senior scientists wince. Or if you must say it, back it up with further data showing that the trend was a valid starting point.
Sample sentence: "While not statistically significant, the data trend shows an increase in people who hate me."
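
To make the wince concrete, here's a minimal sketch of the statistics underlying all of this (in Python, with completely made-up numbers). The usual convention is that a p-value below 0.05 counts as statistically significant; anything above that is "a trend" at best.

    # A two-sample t-test on hypothetical measurements (made-up numbers).
    from scipy import stats

    control = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]
    treated = [5.2, 5.5, 4.9, 5.6, 5.0, 5.4]

    t_stat, p_value = stats.ttest_ind(control, treated)
    print(f"p = {p_value:.3f}")
    # p < 0.05: statistically significant.
    # p > 0.05: not a result yet, no matter how promising the graph looks.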

Physiologically irrelevant
adjective
Sometimes, no matter how statistically significant and well controlled the data is, it's just not going to make an actual difference in the grand scheme of things. In this case, it often means that an interesting pathway discovered in a cell line just isn't going to affect the whole human body enough to matter.
Sample sentence: "You sure did show a decrease in that protein, too bad it's physiologically irrelevant."

So there you have it. Easy, simple ways to annoy the scientist in your life. Oh wait, maybe it was a bad idea to post this and tell my friends how to get under my skin...

]]>
http://behindlabdoors.com/post/6a0e0681-a73a-4f50-a355-c55eac2e76f1Mon, 12 Jan 2015 01:51:13 GMT
<![CDATA[Take your supplements]]>There's one thing I want to address before I go into writing a research article, and that's supplemental figures. I mentioned in my last post that high impact journals such as Science, Nature and Cell require vast amounts of data to publish there. Fair enough. They want well-supported, in-depth studies which require a lot of experiments. The issue is that they also have very strict space limitations, usually around 5 pages. And that's including space for the actual figures! Which means there's very little room for providing background on the area of research or a detailed discussion of how this data impacts science. This can make the articles feel rushed when you're reading them, but it's also kind of refreshing to just get the information without all the blabbity-blah (says the blabbity-blah blogger).

The other consequence is that a lot of the "non-essential" information gets hidden in the supplemental material. This online-only section was originally intended for large data sets that simply aren't feasible to publish in print, such as with a bacterial genome sequencing project. That's WAY too many A, G, C, T's for a printer to handle! The supplemental material can also contain details of the methods used to perform the experiments, which I completely agree with including here. That's extra information that is important for anyone who wants to reproduce the experiments, but isn't necessarily essential to understanding the study (assuming the authors put enough detail in the text).

The problem arises, however, when researchers are forced to put important supporting data into supplemental figures. Only recently have standard guidelines even been described for writing and editing this section, and it's debatable whether they will become common practice. The people who peer review manuscripts submitted for publication are researchers themselves, and their time is already limited by other demands. So the worry is that supplemental figures aren't as thoroughly reviewed and the quality of the experiments may be overlooked. That's not good; no one wants sloppy science.

The other issue is that sometimes the supplemental figures become a data dump. Negative data is very difficult to publish and only gets into low end journals, so it's often considered a waste of time to write up. But it is important information for scientists. Kind of a PSA to save others time and funding: "Hey guys, don't bother testing this idea. It totes doesn't work!" So why not put that in the supplemental material? I don't have a good answer for that, and to be honest, I'm not sure it's a bad idea. Although a better idea has been broached by journals that exclusively publish negative data, such as the Journal of Negative Results in Biomedicine or the Journal of Negative Results: Ecology and Evolutionary Biology. I love this idea, but I don't feel like these have quite found their footing yet. In particular, the Biomedicine journal focuses on "unexpected, controversial, provocative and/or negative results", which misses the point a bit. But it's a start! Let's get those allegedly "failed" experiments out of the supplemental wasteland and into a proper manuscript.

But it all comes back to journal expectations. If 5 figures, plus double that in supplemental figures (I'm not even exaggerating), are what's expected to get a high impact journal on your CV, that's what you have to do. Publish or perish. The good news is that two fairly high impact journals (that I know of, there may be more) have decided to eliminate supplemental data-dumping, either by restricting what's allowed or by getting rid of it altogether. I say good on them, change is good! Encyclopedias have given way to Wikipedia in our technological world; maybe it's time to reconsider how research data is published too.

]]>
http://behindlabdoors.com/take-your-supplements/c809a358-20da-4d83-9d8c-0d2af8345db9Wed, 03 Dec 2014 02:38:35 GMT