Next steps in the identification of ROS-related huntingtin protein-protein interactions

Blog post by Dr. Tamara Maiuri

In my last real-time report on the HDSA-funded project to identify oxidation-related huntingtin protein-protein interactions, I was happy to describe the successful purification of huntingtin and its interacting proteins from mouse cells. I was quite optimistic that the experiment would also work using cells from an HD patient. This turned out not to be the case. Despite growing large amounts of cells, there was simply not enough starting material. Although we want to answer our questions about HD using human sources of information, it is just not technically feasible with patient fibroblasts.

The good news is that I was able to generate two more replicates of the experiment in mouse cells. The total list of proteins identified by mass spectrometry can be found on Zenodo, and further refinement of the data was done by quantifying the intensity of each peptide (bit of protein) to give us a better sense of the most abundant hits. This has also been deposited on Zenodo.
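
For readers curious about what “quantifying the intensity of each peptide” looks like in practice, here is a minimal sketch (not our actual pipeline) of how one might sum peptide intensities per protein to rank the most abundant hits, assuming the mass spec output has been exported to a CSV with hypothetical "protein", "peptide", and "intensity" columns:

```python
# Minimal sketch: rank proteins by summed peptide intensity.
# The file name and column names are hypothetical placeholders.
import pandas as pd

peptides = pd.read_csv("hits_peptide_intensities.csv")

protein_abundance = (
    peptides.groupby("protein")["intensity"]
    .sum()                           # total intensity per protein
    .sort_values(ascending=False)    # most abundant hits first
)

print(protein_abundance.head(20))    # the 20 most abundant hits
```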

Sifting through the data is taking some time: being a scaffold, huntingtin interacts with several hundred proteins. We are also in the final revision stages of a few manuscripts for which experiments have been prioritized (one manuscript describes how we turned HD patient skin cells into a tool for the HD research community; a pre-print can be read on bioRxiv). I will post a more detailed analysis in the coming weeks, but here are some general conclusions from the most reproducible results:

Stable interactions:

The proteins that interact with huntingtin in cells treated with DNA-damaging agents also interact with huntingtin in untreated cells. This could be because:

  • The treatment didn’t work, or the untreated cells are under an unintended form of stress
  • Huntingtin transiently “samples” interactions with many proteins in unstressed conditions and binds those partners more tightly upon stress; in this case, the cross-linking step may cause us to capture the weak interactions
  • Some of the interactions may be non-specific artifacts of the experimental set-up

These possibilities will be tested by following up on interesting hits in our human fibroblast system.

A connection to poly ADP ribose:

Many of the proteins that interact with huntingtin are also found in data sets of “PARylated” and “PAR-binding” proteins (see references below). Poly ADP ribose, or PAR, is a small biomolecule that plays a role in the process of DNA repair (among many other cellular processes). When the DNA repair protein “PARP1” notices some damaged DNA, it starts to attach chains of PAR to nearby proteins. This forms a sort of net to recruit other DNA repair factors. The overlap between our list of huntingtin interacting proteins and PARylated/PAR-binding proteins suggests that huntingtin may also bind PAR, just like many other DNA repair proteins. In fact, I have preliminary results suggesting it does just that. I will post them soon!
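
For anyone who wants to try this kind of comparison themselves, here is a minimal sketch of the overlap analysis, using placeholder protein lists rather than the real data sets: intersect the huntingtin hit list with a published PARylated/PAR-binding set, then use a hypergeometric test to ask whether the overlap is larger than chance, given an assumed background proteome size.

```python
# Minimal sketch with placeholder gene lists (not the real data).
from scipy.stats import hypergeom

htt_hits = {"HMGB1", "PARP1", "FEN1", "PCNA"}        # placeholder huntingtin interactors
par_proteins = {"HMGB1", "PARP1", "FEN1", "XRCC1"}   # placeholder PARylated/PAR-binding set
background = 10000                                   # assumed size of the detectable proteome

overlap = htt_hits & par_proteins

# Probability of seeing at least this much overlap by chance,
# given the two list sizes and the background.
p_value = hypergeom.sf(len(overlap) - 1, background, len(par_proteins), len(htt_hits))

print(sorted(overlap), p_value)
```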

 

Data sets of PARylated and PAR-binding proteins:

Gagné J-P, Isabelle M, Lo KS, Bourassa S, Hendzel MJ, Dawson VL, et al. Proteome-wide identification of poly(ADP-ribose) binding proteins and poly(ADP-ribose)-associated protein complexes. Nucleic Acids Res. 2008;36: 6959–6976.

Jungmichel S, Rosenthal F, Altmeyer M, Lukas J, Hottiger MO, Nielsen ML. Proteome-wide identification of poly(ADP-Ribosyl)ation targets in different genotoxic stress responses. Mol Cell. 2013;52: 272–285.

Zhang Y, Wang J, Ding M, Yu Y. Site-specific characterization of the Asp- and Glu-ADP-ribosylated proteome. Nat Methods. 2013;10: 981–984.

 

Let’s Fix Peer Review

Scientists are the smartest idiots I know. 


If one explains the current system of peer review to a non-scientist, the response is typically, “That’s insane, I thought you guys were supposed to be smart.”

To recap:

When we apply for a grant or want to publish our science, the work is reviewed in secret by our peers, some of whom are competing with us for precious funding or a bizarre version of fame. Under the veil of anonymity, a reviewer can write anything, including false or incorrect statements, to justify a decision. The decision is most often “do not fund” or “reject”, even when the review is based on inaccuracies, lack of expertise, or blatant slander. There are no rules and no repercussions: for the most part, the review process has few integrity guidelines, little oversight, and no code of ethics. It can elevate internet trolling to high art. In funding decisions, these mistakes can slip past inattentive panels, and they were definitely missed in the CIHR reform scheme before panels were re-introduced. We still have a problem of reviewers self-identifying expertise they simply do not have.

Scientists have to follow strict rules when submitting data, covering conflicts of interest, research ethics, and so on. Equivalent rules are often not formally stated for the review process, and they vary widely between journals.

This system is a holdover from an era when biomedical research was a fraction of its current size and journal Editors were typically active scientists. The community was small. But as science rapidly expanded in the 90s, so did scientific publishing, and soon editing became a profession, with some editors never having run a lab or research program. Then came the digital revolution: journals were no longer read on paper, and the pipeline to publish grew exponentially.

What drove the massive expansion of journals? Money. Big money. And like many historic industries, it’s thriving, mostly on the back of free labor.

CELL was sold to Elsevier in 1999. While the sales figure was never formally revealed, it was rumored to exceed US$100M. No one who reviewed for the journal received a thin dime. The analogy would be hiring workers to build a road, paying them nothing, insisting the road be paved in under 14 days, then charging them to use it. Why? For the prestige of being associated with a road (which is fundamentally no different from any other road).

What CELL and Nature started mushroomed wildly over the next 20 years, with journals starting up weekly, now numbering in the thousands, on a simple business model: hire Editors, accept submissions, get three reviews, and charge thousands of dollars per manuscript to publish. It’s a money-making machine built on free labor. Why not? Scientists are idiots: they work for free, and they do hard work purely out of ideology.

What happens under the veil of anonymity? Papers are trivially reviewed, either quickly dismissed or accepted without much scientific input, which has enabled a lot of fraud or just bad science; when it is revealed, the question is often: how did the reviewers not see this? It gets worse. “Knowledge leaders” in some fields can manipulate the process, holding up or blocking manuscripts while their post-docs race to reproduce the data as their own, as outlined in a public letter to the Editors of Nature in 2010. Journals return comments to authors, but many reviewers send secret comments directly to the Editors, without the authors’ knowledge. Why? Horror stories abound of revision after revision, then a final rejection after a year or more because the Editor lost interest. Meanwhile, careers and lives are stalled. This is especially problematic when a field becomes dogmatic and truly innovative theories or approaches are presented: accepting the work means dismantling dogma, which can invalidate the entire publication records of the “knowledge leaders”.

These publications can make careers, and the lack of them can ruin careers and decide funding. PDFs get hired by institutions because they look like they can walk on water based on their CVs, only to drown within a few steps as independent investigators.

Often overheard from senior scientists at symposia: “we had a problem with reviewer #2, so I called the Editor and sorted it out”. Called? How? No journal lists phone numbers for Editors; what magic Rolodex does this involve?

We have a system in place simply because it is historic. It’s not working, it’s not fair, it benefits fraud, and it’s bad for science. This failure needs to be addressed with ethical guidelines and transparency, because the process has been corrupted and failure is now so common that there are entire websites dedicated to it. Suggestions:

  1. Editors need to be active scientists. The Journal of Biological Chemistry is an excellent example.
  2. Reviewers and academic editors need to be paid. The Public Library of Science (PLoS) sounds like an altruistic organization for disseminating scientific knowledge, but executive compensation can reach $330,000-$540,000 a year. Clearly, PLoS feels expertise and talent should be rewarded, which is fair, but not when it comes to the reviewers who put in hours of work on each manuscript. Those same reviewers then have to pay $1500+ to publish, and the journal decided to stop copy editing manuscripts altogether, leading to sloppy publications. The line between legitimate journals and “predatory” journals is blurring, and this is not unique to PLoS. Scientific publishing is a massive for-profit business. The NY Times revealed a “shocking” figure of $500 in page charges at “predatory” journals; yet many established journals charge $500-600 per color figure alone. I cannot think of another profession that demands so many years of expertise under such draconian standards yet places so little value on our time. Try getting three free hours from a lawyer, accountant, or consultant. Good luck with that, or look out for what you get.
  3. Reviewers need to be scored, by both Editors and submitting authors. We were recently reviewed at a leading cell biology journal, and while the paper was not accepted for publication, we received deeply detailed, outstanding reviews from all three reviewers. Their intent was obvious: address these criticisms and this will be better work. We were also reviewed recently at two leading magazines, and what we got back were late reviews of 5-7 lines or less, with terms like “unconvincing” or simply incorrect statements, and no chance to respond (18 years in as a PI, I have still not received the magic Editorial Rolodex). Review without scientific justification. These scores should be tied to ORCID. Editors should be able to flag inappropriate reviewer behaviour as scientific misconduct to home institutions or funding agencies, and low-scoring reviewers should be asked to justify their reviews to their home institutions. Scoring would also justify paying good reviewers and insisting on sincere efforts rather than trivial reviews.
  4. No more gatekeeping. Famous journals do not review most of their submissions, with most rejections coming from the desks of non-expert, non-scientist Editors looking for name and institutional recognition and trendy buzzwords. The issue is that “high impact” journals simply have too many submissions. Yet, regardless of the science, they will not decline submissions from high-profile institutions, for fear that the institution’s senior scientists will stop submitting. Compounding the problem, relatively new or lesser-known journals are doing the same thing to boost “impact” through trendy subjects, which just demonstrates where their priorities lie: not science, but gaming the impact factor metrics. Fixing this would require a new system, which brings us to point 5….
  5. No more direct submissions. Manuscripts should be openly submitted to free-access sites like bioRxiv and go live in hours, and journal Editors could bid to authors to send the work to review in a clearinghouse-type model. This lets Editors judge impact based on comments from the community when they lack direct expertise. As it stands, the process is stochastic, and the decision to review is often based on a single opinion. It can now take months before a paper is even reviewed, as journals can sit on the decision to send it to review for a month or more (remember, they don’t have to follow any rules). It can take half a day at a time just to submit a manuscript. The cycles of submission and editorial desk rejection can suck half a year out of the publication process; this does nothing for science.
  6. One manuscript and reference format. One journal format. Pick one, any one. The current need for software to handle thousands of reference styles for thousands of journals is asinine. It’s like trying to do science with 1000 different standards of measurement. We picked the metric system and moved on.
  7. Manuscript and funding agency reviews should be public, because this work is publicly funded. This lets readers know exactly how well a manuscript or grant was reviewed, whether a journal’s press hype matches actual scientific opinion, and whether any obvious bias occurred in the review process. It would also help media coverage of manuscripts, since journalists rely almost entirely on PR hype.
  8. All reviews should be addressable by authors before a decision. This is a particular problem on grant panels that lack expertise: they can rank and score based on reviewer errors, and this cannot be addressed until the next competition. The same problem applies to rejection after first submission at journals. There should be a brief opportunity to respond to reviews before a decision is made. The current system relies on pure chance that our work is reviewed properly. We might as well have a lottery, especially in Canada, where biomedical research grants can be reviewed, scored, and ranked by non-scientists (seriously).
  9. Reviewers should discuss and unmask to each other prior to the decision, after reading the responses. Some journals already unmask reviewers to each other and allow discussion (EMBO J., Current Biology, eLife…). Nothing is more discouraging than spending hours on a review to improve a manuscript, only to have another reviewer dismiss it with obviously minimal effort and comments like “unconvincing”, plus secret comments to the editor that I cannot see. I don’t see the point of unblinding reviewers to authors; that would just discourage participation out of fear of vindictive authors.
  10. Define misconduct in the scientific review process. There need to be repercussions for unethical activity.
  11. Have a higher bar for authorship. Many clinicians have networks that put their names on hundreds of manuscripts with zero effort on the actual work, and it’s very likely they never read them. This is simply unethical, and unfair to authors who put real effort into manuscripts. It becomes a real problem when funding agencies use reviewers who count papers and conclude that a good scientist publishes a serious paper every two to three weeks of their lives.
  12. Keep individual manuscript metrics; ban journal impact metrics. Journal impact scores can be gamed, are gamed, and make no sense. It’s like saying an individual Honda driver is more intelligent because, on average, Honda drivers have a high IQ, and thus driving a Honda makes you smarter. Using metrics like impact factor or H-index to judge careers is lazy, incompetent administration. You drive a Honda? Hired! We denied tenure? Not my fault, he/she drove a Honda!
  13. Retractions due to figure fraud should reveal who reviewed the manuscript. Maybe those reviewers will pay attention next time. Or maybe, if we paid them, this would happen a lot less. It’s very likely that if we could see the reviews of these manuscripts, we would find they were trivially reviewed.
  14. Canada needs an Office of Research Integrity. For a variety of reasons, fraudsters can flourish in the Canadian system, as funding institutions defer fraud investigations to home institutions, which have a perverse incentive to bury any conclusions. The US has the ORI, independent of any institution, and in some countries scientific fraud is legally regarded as fraud of the public trust and can result in civil action or jail time. If Canadian science wants serious public support, as the Naylor report recommends, it should come with equally serious scientific integrity.

ROS-dependent huntingtin interactions in mouse striatal cells

Blog post by Dr. Tamara Maiuri

Well it’s been a long haul, but I’m happy to say I finally have a list of proteins that interact with the huntingtin protein (expanded versus normal) under conditions of reactive oxygen species (ROS) stress. This is the very first step to achieving the goal of the project: to identify drug targets that are relevant to the process of DNA repair, which, through powerful genetic studies, has been repeatedly implicated in the progression of HD.

This first step was not without its obstacles. The goal at the outset was to identify proteins out of real HD patient cells, a more relevant system than cells from an HD mouse model. Unfortunately, it’s nearly impossible to grow up enough cells to yield the protein needed for mass spectrometry. My solution to this problem was to treat cells in batches, snap freeze them, and store them for processing once I had enough.

After working out the conditions for cross-linking and fractionation, inducing oxidative stress, and pulling huntingtin-associated proteins out of HD patient cells, I started growing up batches of cells. On the day I harvested the largest batch yet, the ROS inducer, 3NP, didn’t show the tell-tale signs of working (floating cells, a larger cell pellet). When I tested a sample for interacting DNA repair proteins, I found almost no interaction. That was a bad day. The batch could not be used; it amounted to a waste of time and resources. I spent a few weeks trying to figure out what went wrong with the 3NP, but no dice.

At this point, it was time to consider options and cut losses: we needed to move on with this project. So I switched to mouse striatal cells. They may not be as accurate a model as human fibroblasts, but we can get a list of huntingtin-interacting proteins from mouse cells and verify them in human cells. I revisited the other ROS sources tested in the past and decided on H2O2 (see the optimization experiment on Zenodo).

It was much faster to grow up enough striatal cells for mass spec analysis. The experiment wasn’t perfect: the untreated HD cells showed undue signs of stress, so it will have to be repeated to be sure of our results. But we now have a list! Here are the preliminary results, in a nutshell:

After eliminating likely false positives (ribosomal proteins, chaperones, cytoskeleton), and comparing the hit lists across conditions (a sketch of the comparison logic follows the list below), there are:

  • 92 proteins that interact with huntingtin under basal conditions and are released upon ROS stress
    • 36 of these are inappropriately maintained by expanded huntingtin
  • 38 new interactions formed upon ROS stress
    • 29 of which do not happen with expanded huntingtin
  • 52 proteins that interact with expanded, but not normal, huntingtin upon ROS stress
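
Here is the sketch of the comparison logic promised above, using placeholder protein names rather than the real hit lists; one reasonable way to bin the hits is with simple set arithmetic across the four conditions (normal Q7 versus expanded Q111 huntingtin, untreated versus ROS-stressed):

```python
# Placeholder protein names stand in for the real hit lists from each condition.
q7_untreated   = {"A", "B", "C"}      # normal huntingtin, untreated
q7_ros         = {"B", "D"}           # normal huntingtin, ROS stress
q111_untreated = {"A", "B"}           # expanded huntingtin, untreated
q111_ros       = {"A", "B", "E"}      # expanded huntingtin, ROS stress

released_on_ros    = q7_untreated - q7_ros       # basal interactions released upon ROS stress
kept_by_expanded   = released_on_ros & q111_ros  # inappropriately maintained by expanded huntingtin
gained_on_ros      = q7_ros - q7_untreated       # new interactions formed upon ROS stress
absent_in_expanded = gained_on_ros - q111_ros    # do not happen with expanded huntingtin
expanded_only_ros  = q111_ros - q7_ros           # expanded, but not normal, huntingtin upon ROS stress

for name, hits in [("released on ROS", released_on_ros),
                   ("kept by expanded", kept_by_expanded),
                   ("gained on ROS", gained_on_ros),
                   ("absent in expanded", absent_in_expanded),
                   ("expanded-only on ROS", expanded_only_ros)]:
    print(f"{name}: {sorted(hits)}")
```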

Of note, HMGB1, which we have previously identified as a huntingtin-interacting protein, was found in the immunoprecipitates from H2O2-treated cells (both Q7 and Q111). Further, the list of proteins from H2O2-treated Q111 cells includes FEN1, PCNA, Formin1 (the actin-binding protein required for DNA repair), CENPJ, PARP8, and many other DNA repair proteins. The full list and experimental conditions are available on Zenodo.

The good news is that we now know these conditions worked very well in the mass spec analysis, and it may be feasible to grow up enough human cells after all. Since our TruHD-Q43Q17 cells (from a patient with 43 CAG repeats) grow the fastest, I started with those. Last week, I sent samples from the TruHD-Q43Q17 cells treated with a DNA damaging agent called MMS for mass spec analysis. It will take a few more weeks to get enough TruHD-Q21Q18 cells (from a spousal control). Stay tuned for the results!

This project is funded by the HDSA Berman/Topper HD Career Advancement Fellowship. 

Update: Measuring DNA repair capacity and visualizing huntingtin in HD patient cells

Blog post by Dr. Tamara Maiuri


Previously I described a method to measure DNA repair capacity in cells: the GFP reactivation assay. This worked nicely in mouse striatal cells, with HD cells consistently showing about half the repair capacity of wild type cells. I have since tried it in cells from HD patients, using a different method to measure the GFP signal (microscopy instead of flow cytometry). The results were similar: a lower repair capacity was seen in HD cells (see experiment on Zenodo). The difference wasn’t as big as with mouse striatal cells, which is to be expected from clinically relevant CAG lengths compared to a model system that exaggerates the effect of expanded huntingtin. But the experiment was done twice with very similar results each time. In the coming weeks I will test whether this small but consistent difference is exacerbated by treating cells with DNA damaging agents. I will also make sure we’re measuring DNA damage pathways, and not some other phenomenon, by knocking down or inhibiting PARP. Stay tuned…

I also previously reported a way to visualize huntingtin protein at sites of DNA damage: stable cell lines expressing an inducible, huntingtin-specific YFP-tagged intrabody. I’m happy to say that the stable cell lines are growing, albeit slowly. If the growth rates recover, we will have available TruHD-Q21Q18, TruHD-Q41Q17, TruHD-Q43Q17, and TruHD-Q50Q40 cell lines in which huntingtin protein can be visualized in real time by addition of doxycycline to the media. The slow growth may be because of the combined toxicity of nucleofection and G418 selection, or due to leaky expression of the intrabody, which interferes with cell division. I’m currently testing the first idea by lowering the G418 concentration. If this doesn’t work, I may have to use alternate methods of detecting endogenous huntingtin. Fingers crossed!

 

Visualizing real huntingtin protein in cells from an HD patient

Blog post by Dr. Tamara Maiuri

I am still busily collecting cells to be sent for mass spec for our goal of obtaining a list of proteins that interact with huntingtin upon oxidative DNA damage. Unfortunately I’ve run into a few road blocks, which I will blog about in the coming weeks (hopefully with a resolution!).

Meanwhile, I’ve been working on methods to assess the hit proteins for their physiological relevance as potential drug targets. Last time I described one such approach: the GFP reactivation assay. Since then, data from 3 experiments have been combined and look promising. While repair efficiency varies from experiment to experiment, mouse HD cells consistently show approximately half the repair efficiency of normal cells (an average of 44.8% over 3 experiments). This is a readout we can use to test the effects of manipulating our hit proteins.

Another approach involves measuring how long huntingtin hangs around at sites of DNA damage, and whether expanded huntingtin lingers too long. We know expanded huntingtin has no trouble reaching damaged DNA, so maybe the problem is that it can’t get off, inappropriately gluing down all the proteins it is scaffolding.

To test this hypothesis, we first need a way to visualize huntingtin protein at sites of DNA damage. While most researchers use overexpression (getting cells to generate protein from externally supplied DNA) to visualize their protein of interest, this is very difficult with huntingtin because of its huge size. To get around this, many HD researchers express small fragments of the huntingtin protein. Overexpression of any protein can have pitfalls because it’s impossible to know if the overexpressed protein is behaving the way it would under normal expression levels in the cell. This is especially true if you’re only using a fragment of the protein—what if the fragment doesn’t fold into the same shape that it would as a whole? What if the missing parts of the protein interact with other important proteins?

Exciting new technologies now allow us to track the behaviour of endogenous huntingtin protein (the huntingtin existing naturally in the cell). We put to use two different intracellular antibodies, or “intrabodies” that recognize and bind the huntingtin protein. We tagged these intrabodies with yellow fluorescent protein (YFP) to generate “chromobodies”. This allowed us to follow their interaction with endogenous huntingtin in live cells. Indeed, we could watch the endogenous huntingtin protein being recruited to sites of DNA damage.

This tool is not without its drawbacks. While it doesn’t seem to interfere with huntingtin’s recruitment to damaged DNA, it must interfere with its role in cell proliferation. We know this because we can’t stably express the chromobody in cells over time. When we watch cells expressing the chromobody try to divide, they just die (unlike the cells expressing only YFP, which happily multiply).

This roadblock can be hurdled using an inducible system: the cells carry the DNA expressing the chromobody, but it isn’t turned on to generate protein until you add a drug called doxycycline. So I first cloned the chromobody into an inducible vector (cloning experiment deposited to Zenodo). When co-transfected with the doxycycline-responsive Tet3G transcriptional activator, it showed beautiful induction by doxycycline in mouse striatal cells (induction experiment deposited to Zenodo).

But we want to work with cells from HD patients. It’s harder to get DNA into these cells, but we can do it with electroporation. To avoid this labour-intensive process every time I want to do an experiment, I’m making HD patient cell lines that stably express the inducible chromobody and doxycycline-responsive Tet3G activator. The Tet3G vector carries a drug resistance gene, so I can select the cells with the drug G418. A simple experiment (deposited to Zenodo) showed that the optimal concentration for G418 selection in fibroblasts is 50 ug/mL.

At this point, my luck ran out. The beautiful induction I saw in mouse striatal cells did not happen in HD patient fibroblasts. From the first few failed attempts, I learned the following:

  • If cells become too sparse during the G418 selection process, they die. Need to transfect a larger number of cells so that they can be downsized during the selection process and still maintain confluency >50% for cell health.
  • Transfection of fibroblasts is an issue. Need to use electroporation, and co-transfect H2B-mCherry to identify transfected cells
  • Transfection of pTRE-nucHCB2 (inducible chromobody), pEF1a-Tet3G (doxycycline-responsive transcriptional activator), and H2B-mCherry is far more toxic than the equivalent microgram quantity of sonicated salmon sperm DNA. Need to use pTRE-nucHCB2, sssDNA, and H2B-mCherry as the untransfected control (that is, -Tet3G) in order to compare rates of selection by G418 in untransfected versus transfected cells
  • In contrast to striatal cells, fibroblasts don’t seem to be inducing expression of nucHCB2 with doxycycline

After ruling out protein turnover, FBS concentration in the media, and different preps of DNA, the only difference between the nice result in mouse striatal cells and the confusing result in human fibroblasts is the method of transfection (the easier, polymeric method for striatal cells versus the more tedious—and expensive—electroporation method for fibroblasts). But this couldn’t possibly be the problem… could it? Only one way to find out: I set up a direct comparison experiment. To my great surprise, the striatal cells induced when transfected by the polymeric method, but not by electroporation! The experiment is posted on Zenodo.

At this point I recalled a suggestion made a few weeks prior, by Claudia Hung, a student in the lab: she asked whether the size of the plasmids could explain the results. I really didn’t think so at the time, but now that idea might make sense! The Tet3G vector is pretty large (7.9 kb), and sure enough, difficulty transfecting large vectors by electroporation is well documented (once you look for it!). This study by Lesueur et al explains that simply giving the cells a chance to recover from the electroporation before plating them can greatly enhance cell viability and transfection efficiency. This was my next move. There was a glimmer of hope in the results: the longer recovery time resulted in induction in a few cells. After taking a closer look at the Lesueur et al study, in which they used much larger amounts of DNA, I tried increasing the amount of DNA.

Eureka! Finally, after months of troubleshooting, I found conditions in which we can induce expression of a huntingtin-specific chromobody in cells from an HD patient (see the results on Zenodo). Next week I will be electroporating cells from HD patients who have different CAG lengths in their huntingtin genes, and selecting them in G418 to get stable cell lines. The result will be a panel of cell lines with different-sized huntingtin expansions, in which we can visualize the natural huntingtin protein by dropping in doxycycline: a great tool for our lab and HD researchers around the world.

If you’ve made it this far through this tedious blog post, thanks for reading. You now have a sense of the tiny incremental steps it takes to move a project forward. This is only one facet of a much larger goal, and each facet has its own set of obstacles. But with careful, calculated perseverance we can get through each road block and move our understanding of HD forward. This work is funded by the HDSA Berman/Topper HD Career Development Fellowship.

 

Measuring the rate of DNA repair in HD cells

Blog post by Dr. Tamara Maiuri

Last time, I wrote about how the system to pull down huntingtin and its associated DNA repair proteins works in cells from an HD patient. The drawback to using these cells is that they grow very slowly and don’t yield much protein. So the last few weeks have been spent stockpiling cells. My stash is growing, but it will be several more weeks before I have enough material to send for mass spectrometry, which will give us a list of all the proteins that interact with huntingtin under conditions of oxidative DNA damage.

In the meantime, I’ve been thinking about what we’re going to do with the information we get. What do we want to know about the huntingtin interacting proteins we identify?

Well, we know that DNA repair is an important aspect of disease progression. The age at which people get sick, and other signs of progression including brain structure, are affected by small changes in people’s DNA repair genes. What’s more, the huntingtin protein acts as a scaffold for DNA repair proteins. Maybe this job is affected by the expansion that causes HD.

Once we have a list of proteins that interact with huntingtin upon DNA damage, we want to know if, and how, they affect the DNA repair process in HD cells. What we need is a way to measure the DNA repair rates in HD patient cells. Then we can ask: if we tweak the proteins that interact with huntingtin upon oxidative DNA damage, what happens to the repair rates? That way, down the road, we could use those proteins as drug targets to improve the DNA repair situation.

But one step at a time. First we need the DNA repair measuring stick. There are a few options for this, but I recently came across a cool one. It works by first damaging DNA in a test tube, then introducing it into cells, then measuring how well the cells repair the damage in order to express a gene on the DNA. The gene encodes green fluorescent protein (GFP), so you can measure expression (as a proxy for DNA repair) by how many cells are glowing green.
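
To make the readout concrete, here is a minimal sketch with made-up cell counts: the repair capacity is simply the fraction of GFP-positive cells from the damaged reporter, normalized to a parallel transfection of the undamaged reporter.

```python
# Minimal sketch of the GFP reactivation readout; the counts are hypothetical.
def percent_gfp_positive(gfp_positive_cells: int, total_cells: int) -> float:
    """Percentage of counted cells that are glowing green."""
    return 100.0 * gfp_positive_cells / total_cells

damaged_pct   = percent_gfp_positive(120, 1000)   # cells given the damaged GFP plasmid
undamaged_pct = percent_gfp_positive(450, 1000)   # cells given the undamaged GFP plasmid (control)

# Relative repair capacity: how much of the control-level expression was restored.
repair_capacity = 100.0 * damaged_pct / undamaged_pct
print(f"Relative repair capacity: {repair_capacity:.1f}%")
```

Comparing this value between cell lines (for example, HD versus wild-type) is what gives the kind of ratio reported in the posts above.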

Question 1: Does the system even work?

The first thing I did was to try this in the easy-to-use HEK293 cells (HD patient fibroblasts don’t take up DNA very easily, and this will be a challenge to overcome down the road!). The system worked quite nicely: the cells with damaged DNA didn’t express as much GFP as those with undamaged DNA, as expected. Also, repair of the DNA was slowed down by a drug called Veliparib, which inhibits the DNA repair protein PARP. See the results on Zenodo.

Question 2: Is there a difference in repair rates between normal and HD cells?

Once again, before tackling this question in HD patient cells, I opted for the easier-to-work-with mouse cells while I set up the system. In the first attempt (deposited to Zenodo), there were not enough HD cells recovered. From the few cells recovered, it looked like there might be a decreased DNA repair rate in the HD cells compared to the normal cells.

In the second attempt, enough cells were recovered to tell what was going on. The HD cells did in fact have a lower DNA repair rate, but inhibiting the DNA repair protein PARP had no effect (results on Zenodo). This could mean one of two things: either the difference we see between normal and HD cells is not because of DNA repair rates (which would be a bummer), or PARP inhibition is not working under these conditions. I’m hoping for the latter, and will try some different strategies to make sure we’re dealing with true DNA repair rates here. If we are, then we can use this method to further investigate the huntingtin interacting proteins we identify, and how they cooperate with huntingtin in the DNA repair process.

There are some other ways we can look at DNA repair rates in cells, as well as comparing the dynamics of the huntingtin protein (getting to and from damaged DNA) in normal versus HD cells. I will tackle some of those approaches and report them in the coming weeks.

This work is funded by the HDSA Berman/Topper HD Career Development Fellowship.

So you want to write a CIHR grant…

My Project applications are complete, so I’ve decided to offer some sage (old guy) advice on the technical aspects of writing a CIHR grant, or any grant proposal.

The Equipment…

A good part of the job of any research scientist is writing. This is why I’m surprised to see people still working as they did in grad school, on the venerable laptop. I rarely use a laptop: the screens are too small, they force poor ergonomics, they have iffy keyboards, they are nearly impossible to generate figures on, and they break too easily or get stolen.

I use a variable-height desk (motorized, from IKEA), a desktop computer with redundant backup, a battery UPS power supply, and a high-quality gaming keyboard with mechanical key action (they cost a lot). The actual brand and OS are irrelevant, for reasons that will be obvious below. The monitor is a 39-inch 4K screen: it’s huge, with many pixels, because we do a lot of image analysis and cell biology. Another alternative is two or three 1080p screens.

The Timing…

Scientists tend to procrastinate. I think it’s inherent to the overworked lifestyle of the scientific mind, but it is the single worst habit in science, next to the removal of the bottom half of error bars in bar graphs (it’s wrong, it’s misleading, just stop doing it).

Step one is to set a timeline of grant-writing activities, with the goal of completing the entire proposal one week before the institutional deadline. This means the proposal sits unread for a week before a final read prior to CIHR submission. Waiting for some awesome preliminary data? Bad practice, and it typically leads to poor-quality preliminary data. Preliminary data does not mean poor data read through rose-colored glasses; it means publication-quality figures not yet published. Many proposals suffer from the idea that poor-quality data is acceptable as “preliminary”.

It’s critical to leave the proposal for a week and re-read it. Can’t be done if you’re in the last hours to deadline.

The Software…

Until recently, I followed the classic paradigm of MS Word/EndNote/Reference Manager/some drawing program. The problem with this software is that it has had a far too comfortable market share for too long; the competition is gone, and we are left with mediocrity that is often unstable. How many times have we been stuck for 30 minutes trying to get EndNote to see a reference? Ever try to embed figures in MS Word? It’s stochastic at best. Does Microsoft care? Nope. There are also anachronisms inherent to this software: poor third-party cross-talk, instability (sometimes the file is corrupted and just cannot be rescued), and cumbersome, poorly implemented file sharing, so you can easily lose hours or days of work.

I’ve settled on the package of MS PowerPoint, Google Docs, and the Google Docs add-on Paperpile, plus a simple screen-capture utility like the Windows Snipping Tool.

We’ve all had those nightmares… a power surge in your lab blows out your desktop, and on the way home you drop your laptop, two days before the final deadline. The nightmare dreamscape has many versions, including a meteor hitting your office and an ominous black raven pecking out your laptop keyboard. Sure, it can all be fixed with time, but time has run out…

Google Docs is cloud-based in real time (MS now has this with Office), so the actual input device is irrelevant, and nothing is lost. Sure, as I write this, someone undoubtedly hacked the server and the world is in a tailspin, but the truly paranoid can backup to two cloud sources. The best parts of Google Docs are the integration of Paperpile and Document sharing.

Paperpile takes the Google Scholar engine and mates it seamlessly with Docs. For years, I struggled with the AWFUL EndNote/RefManager search, bouncing back and forth between PubMed, Google, and the software, often having to build a citation from scratch. Tedious.

Once you install the full Paperpile (just pay for it), wonderful things happen in your browser: a button appears beside any Google or PubMed search result.


Click it and the reference is in your library; references are never missed (especially by PMID).

You can format references in any style (it should be Nature’s, which takes less space), because the insanely stupid publishing industry cannot settle on a single reference format (my theory is they also secretly work for the Canada “common” (LOL) CV).

For figure mockups, I use PowerPoint, which has tools for bitmap corrections (crop, brightness, contrast, etc.). All parts of a figure are dropped into one PPT file, mocked up, and then captured as a bitmap using the Snipping Tool.


You can even adjust levels again within Docs. Full figures in minutes.

The figure bitmap is then pasted into Docs and set to “wrap text” with 0 margins. What you see is what you will get in the final PDF generated by Docs. Very reliable. Magazine-style, scalable figures.

The Writers…

Most PIs sit in their luxurious ivory tower offices and write The Great Canadian Proposal™, like some deranged hermit working on a manifesto linking mayonnaise, immigration, and global climate change.

Man… it sounds all so awesome. Totally clear.

I review a lot of proposals, thousands, between CIHR, NIH, and HSC, and some are as clear as mud, because they come from one writer caught in their own feedback loop of awesomeness, often empowered by a “high impact” publication that somehow validates everything for another $1M.

USE YOUR LAB. It’s a critical training tool to teach your trainees how to write in the bizarre language of science. We blather on like idiots, Jumping From Acronym To Acronym (JFATA) or, even better, Making Up Our Own Acronyms (MUOOA). JFATA and MUOOA enough, and the proposal is FUBAR. The problem is that acronyms sometimes overlap between fields, which can confuse a reader quickly.

Interestingly, we tend to write superfluously, as if we are speaking aloud and trying to impress someone at a business pitch. This is wordiness. Interestingly, it leads to words like “interestingly”. If you have to single out one observation as “interesting”, your proposal is in big trouble.

The proposal at second draft should be shared with the lab. I mean the whole lab, from undergrads to PDFs. In Google Docs, you can see in real time who is reading and commenting, with different-coloured cursors, and comments take one click to resolve and go away.

DUMB IT DOWN. It is very likely that no expert is reading your grant in Canada. We are a tiny country of mostly cancer researchers in biomedical science, and thanks to CIHR reforms, your proposal can still be read and scored by a non-scientist (it sounds stupid when you say it aloud). Thus, it should be understandable to any undergrad working in the lab. The worst thing you can do is have a colleague in the same research field read your drafts: this is still the Feedback Loop of Awesomeness (FLOA). My lab uses some biophysics that maybe three people in Canada have ever heard of; this gets lost fast.

Figures: no more than 8. References: no more than 100. I once saw a record-breaking proposal with >40 figures, >400 references, and statements with >12 reference tags. I forget what it was about, but it should have been about obsessive compulsive disorder (OCD)12,23,34-56, 42, 187, 199-204, 206, 208, 210-14.

One easy killer comment: if they need so many data figures, why not just publish the work? Thankfully, CIHR put an end to this with 10-page totals.

Lessons from the Triage pile….

If you are going to propose new methodology, make sure you know what you are doing. You are NOT going to CRISPR edit 45 genes and validate. Do not suggest FRET experiments unless you understand the caveats.

The Big killers:

The Amazing HEK293 Cell. Derived by Frank Graham at McMaster. There should be a moratorium on HEK and HeLa cells for anything other than over-expression of proteins for purification: they represent neither normal cells nor cancer cells, and definitely not neuronal cells, and they are not the route to translational studies in humans. They have shattered, hyper-variable, polyploid genomes with too many chromosomal anomalies to list, and they are never the same, even within one lab. They are far from human. There are better alternatives for any disease; see ATCC or Coriell. However, Coriell is losing support because of scientific disinterest, no doubt because cell biology papers in major journals still publish studies from one transformed, immortalized cell line and call it normal.

Pharmacology overdose. Take a “specific” drug with an established EC50 and apply it at 100-10,000X. One wonders whether these researchers, when they get a headache, take two aspirin or just quaff the whole bottle and hope for the best. These studies typically compare live cells to very dead cells, yet draw specific conclusions, i.e.:


We measured NF-κB levels, and they were altered; therefore this model died from a defective NF-κB signaling pathway (also works for almost all clinical epidemiology studies).

I’ll just look busy for 5 years…

Descriptive aims, also known as Yadda Yadda syndrome: the vague listing of stuff to do, because everyone else does it. This is Canada; do this and you will get scooped by post-doc #46-12b at some institutional Death Star in the US. More importantly, it is neither innovative nor interesting.

If I ask for less money, it will have a better chance…

This is a new consequence of CIHR reforms: PIs typically funded at the $40K level are now shooting for the moon at $100K. A full CIHR project with a single tech, a PDF, and two students is a $240,000+ proposal. Our dollar is no longer at par, which means expendables now cost 25% more. My last CIHR operating grant period saw >$30,000 spent on publication fees. A bad budget request can indicate that the PI does not know the real costs of the project and will not be able to complete it. Some pencil pusher will cut your budget, but no one will increase it for you.

My Model is Best Model, because.

Model systems have utility for most diseases, but they also have caveats; not all models work for all diseases, and you cannot take a single successful model in one disease and blindly justify it across your entire focus. Many model systems entirely lack pathways that exist in humans.

Some reviewers hold the opinion that the very poor success rate of genetic disease research (we’ve got lots of genes, no therapies) has to do with over-dependence on animal model systems. Mice are not humans. They are shorter.
