The September 2016 CIHR Foundation grant competition released its final reviews 11 months later, on August 4th, 2017. This was just days before the 2018 registration deadline, although Stage 1 and Stage 2 applicants had been informed of their progress months earlier.
There were 600 applications, of which 234 were accepted to Stage 2, but only 229 actually submitted a full Stage 2 application. I was considering not applying further, despite Stage 2 acceptance. These were very limited-space grants. The Foundation scheme was designed to support established investigators with seven years of support, collating multiple grants into one, with no opportunity to apply for more funding as a PI over those seven years. The idea was to reduce application load on the CIHR, and to keep the most productive mid-career scientists from being bogged down with continually writing grants. But something changed along this path…
At some point in the planning, Early Career Investigators had a significant portion of the funds reserved for them with 5-year grants, the same time frame as Project grants. This was never the intent of Foundation, and this cohort was dropped for 2018. The CIHR, at that time led by Dr. Alain Beaudet, wanted to stress aspects of the research community not addressed by the old CIHR operating grants: community engagement, knowledge translation (KT), participation in peer review, and the medical impact of basic science research. Part of this focus was the result of consultations with the NIH in the US. However, when the review scheme of anonymous online participation from multiple reviewers was presented, the NIH warned the CIHR that this was a bad idea, one that would promote superficial, trivial reviews and cronyism. This was also the feedback from senior CIHR-funded investigators during the consultation phase. It was ignored.
The CIHR allowed age to be considered for Early and Mid Career investigators, declaring anyone with more than 16 years as a PI a Senior investigator. So, while there are age brackets for the younger investigators, there would be no age limits for the senior investigators. In the first pilot scheme competition, seven men over 70 years of age were funded with multi-million dollar grants, essentially the equivalent of 28-30 Project grants. Their 30- and 40-year CVs certainly outshone the CVs of 16-year PIs.
With new management of CIHR, the Foundation scheme has been reduced considerably, but it still represents a significant portion of the total CIHR budget at $125M, after $75M was moved to Projects to address the poor funding rates there. The plan in 2018 is to support 40 investigators with $125M, over $3M each, or in other words, about 140 old Operating grants now concentrated into just 40 PIs. The ECI cohort was eliminated. So Foundation is now focused entirely on very senior citizens, most of whom benefited heavily at early and mid career from mandatory retirement at 65, which was eliminated in Canada in the mid-2000s. As a reviewer in the Foundation scheme pilot phase, I saw some very impressive publication and trainee records, but certainly a tailing off of productivity above age 60; that assessment, however, was no longer relevant. Knowledge translation now meant self-promotion press releases or CBC interviews about the next "5 years to a cure" story, which academics excel at. CIHR or NIH panel membership made no significant difference to scores. Thus, we now have a significant number of senior investigators rescued from retirement and parked in the highest salary brackets of academic institutions, with little or no incentive to actually produce anything, as they are unlikely to ever apply for funding again. Meanwhile, the most productive 40-60 year old PIs are being hollowed out with no Foundation, and Project grants are now in the hands of social-media-style peer review.
The net result in Foundation is that we may see the lowest ever productivity-per-dollar in CDN research history, but it will take 7-10 years to actually see the data, and that’s some other government’s problem.
As part of the Truant lab’s Open Science initiative, I will post my Foundation reviews online. I think people can easily understand the decision to return to face-to-face panels when they see a textbook example of superficial peer review, and the failure of the Projects and Foundation schemes leading to cancellation and partial cancellation. There are no SO notes on any live discussion. Despite the Os and O+s, the proposal ranked very poorly. In fact, it is the lowest ranking of any proposal I have ever written to any agency in 18 years.
Quality of the Program/Qualité du programme
Criterion/Critère: Research Concept/Idée de recherche
Strengths/Forces: The applicant aims to reveal novel insights how the polyglutamine expansion in Huntingtin protein leads to Huntington’s Disease (HD) and to develop analog compounds on a preclinical pipeline of his own lead hit, N6FFA/kinetin.
The applicant made a breakthrough discovery that huntingtin phosphorylation at Ser13 and Ser16 can be modulated by small-molecule drugs, which may have therapeutic potential in Huntington’s disease (Atwal et al., 2011, Nat. Chem. Biol.).
This proposal is based on this discovery. I am impressed by the originality of this proposal.
Criterion/Critère: Research Approach/Approche de recherche
Strengths/Forces: The applicant proposes experiments that can lead to the discovery of new drugs, which seems
exciting. The access to patients’ samples is plus.
Weaknesses/Faiblesses: The lack of in vivo model makes it difficult to evaluate the effects of potential drugs.
(The plan clearly states the use of two mouse models with two support letters, both of which are good models of HD)
The description of genetic modification of human cells including CRISPR/Cas9 mediated knockdown is too brief and the feasibility is not well addressed.
(The proposal does not suggest CRISPR knockouts or “knockdowns”; we defined experiments to create isogenic HD human primary cell lines. The reviewer clearly does not understand CRISPR/Cas9, how it works, the results, or likely what isogenic even means.)
Quality of the Expertise, Experience, and Resources/Qualité de l’expertise, de l’expérience et des ressources
Strengths/Forces: The applicant has strength in chemical biology.
Weaknesses/Faiblesses: The applicant stated the conversion of human cell lines into neurons, which is not easy and he did not show feasibility on this.
(the methods were fully referenced)
Criterion/Critère: Mentorship and Training/Mentorat et formation
Strengths/Forces: The applicant has trained one pdf, one clinician scientist and 19 graduate students. These are good
Weaknesses/Faiblesses: The applicant does not provide or mention a tracking record of his trainees.
There is literally no requirement for this at CIHR, but it was listed in the CV module. Somehow, the other reviewer picked over details which apparently do not exist.
This was the most detailed review. This is a proper review.
Quality of the Program/Qualité du programme
Criterion/Critère: Research Concept/Idée de recherche
Strengths/Forces: The applicant plans to use human-derived fibroblasts from Huntington disease patients and controls to screen for drugs that correct deficient phosphorylation on ser13 and ser16 in the N-terminal region (N17) of the mutant (polyglutamine expanded) huntingtin protein. He will also screen for changes in DNA repair pathways associated with earlier than expected or later than expected age of onset, based on the CAG repeat length in the HD gene. In the last 2 sentences of this section, two additional goals are mentioned – determining whether mutant huntingtin itself or ROS load
can trigger somatic expansion of the CAG repeat in the HD gene.
Use of human cells is a major strength.
The focus on DNA repair and ROS is a strength, given the recent results of a large GWA study from Gusella and McDonald (Cell 2015), indicating that genes associated with these functions are modulators of HD age of onset.
The techniques the applicant has developed for screening include automated image processing to detect localization of huntingtin to the nucleus, which is also a major strength towards an unbiased approach.
Weaknesses/Faiblesses: The goals and objectives of the program are not well-defined or well-articulated. There are a couple of sentences buried in the narrative that mention goals, but nothing more and no objectives are listed. Introducing
somatic CAG expansion in the last 2 sentences does not build conceptual coherence.
It is difficult to follow the rationale or conceptual drivers of this program because it is so highly technical with no basic explanations of what is being measured. It is not clear what Fig. 2 is showing – what principle components are being
compared? Is this based on imaging data, and what about the imaging data is being used to derive the PCA? N6FFA is not defined other than as a “product of DNA base excision” – as such, it is unclear why “oral loading” would normalize huntingtin phosphorylation?
In Fig. 3 – what is “N18”? Is this blot showing phosphorylation state of huntingtin?
This is literally the focus of eight years of manuscripts. No one reads the references.
Criterion/Critère: Research Approach/Approche de recherche
Rating/Cote: E
Strengths/Forces: The impression from reading this section (although it is very possible it has been misunderstood) is that the applicant will use an imaging system that is automated to allow acquisition from thousands of cells, which is a
strength; the patterns will be determined using supervised machine learning, another strength, to avoid subjectivity in the analysis of images and facilitate high throughput. Clusters of patterns will be found based on this analysis, and then subjected to unsupervised machine learning and principle component analysis.
The reviewer clearly does not understand blind PCA, or the technology, and is looking for experimental details in a seven-year plan of a few pages. The instructions were to clearly provide a programmatic approach; the reviewer is expecting Operating Grant detail. Impossible in this minimal format.
Weaknesses/Faiblesses: Again, the lack of organization or clear explanation makes it very difficult to understand what
is to be done. How will the applicant define late onset or slow progressers among the HD patient cohort?
(This is outlined)
The applicant aims to create at least 50 lines (from patients or controls), but this is a small number to determine factors associated with later onset or slow progression (those patients will be a very small minority of the total), in particular since each cell line will have differences based on genetic/epigenetic background and varying CAG length.
The reviewer somehow thinks we are doing GWAS on cell lines? The reviewer does not understand outlier studies, or the point of outlier definition.
What is a “photosensitivity side effect”? This is suggested as a key feature of drugs that may normalize phosphorylation of mutant huntingtin, but it is not defined nor is it explained how this might be related to efficacy.
A reviewer of a biomedical research agency in charge of close to $1B in spending does not know what drug photosensitivity is. Let’s hope this reviewer is not a clinician.
Will CHDI be an industry partner on this program? If so, that should be clearly indicated at the front of the proposal.
CHDI is a Foundation, not an industry. Fully explained. Eight years of partnership listed in CV.
How will the Enroll-HD data set be used in this program?
This answers the earlier queries.
There is no mention of the lab team who will be carrying out experiments described in the program, so difficult to judge feasibility.
This is all delineated in the budget module, where it is supposed to be, yet that module was obviously unread.
In most significant contributions section, he claims “In 2012 I described the robust effects of GM1 ganglioside on a HD mouse..” – in fact, this work was led and primarily accomplished in the lab of the senior, corresponding author, Prof. Sipione, at U Alberta. Although the applicant is a co-author, this statement is misleading as to his role.
We actually provided significant expertise, and a critical reagent with months of work, as indicated by authorships, and a second manuscript in which I was CI and Dr. Sipione an important contributor that same year. The point, entirely missed, is that my lab is the only lab in the world, to date, to show small molecule efficacy in HD models by two different pathways, with direct target engagement readout.
Weaknesses/Faiblesses: Although applicant mentions that he expects 3 quality publications from each PhD student, one of his students (redacted here -inappropriate to list this name here, or in the review) graduated without authoring a paper (based on pubs listed in CCV and also Pubmed search).
The applicant has one ongoing PDF who is finding it challenging to obtain a job.
The student mentioned was inherited after the failure to tenure a local PI; the student would otherwise have been tossed out with nothing after three years. The thesis did encompass three publishable chapters, but the student left without any effort to actually submit a manuscript. Nice of the reviewer to ignore that the remaining seven PhDs averaged more than 5 publications each.
As for a PDF finding it challenging to find a job: this reviewer has obviously not been active in science for the last decade, and this PDF just won three years of full scholarship support in an international competition, based on data disregarded by CIHR. This level of nitpicking is outstanding. When you cannot assess science, focus on this, and ignore peer review or true KT efforts.
Weaknesses/Faiblesses: There is no mention of support at the University level, as to whether the applicant’s field of research and/or the Biochem/Chem Biology program is a priority.
I have to agree with this, McMaster has never nominated me for any awards in 18 years, despite continual, uninterrupted funding in the millions.
As for the priority: I guess this reviewer does not know what a Full Professorship at a University means. It means we do it all. Their PDFs come from somewhere.
Reviewers 3 and 4:
I’ll summarize, as both have “none” listed under weaknesses, yet scored the proposal in the lowest quartile possible. In total, 15 minutes of effort is obvious. But one reviewer, obviously not a scientist, was a standout:
1) The applicant should focus on describing what he discovered in his many publications and how the discoveries impact the field rather than hyping the fact that he has been the first to do something, which otherwise seems boastful.
Oh dear, being boastful on a competition being reviewed to distinguish and rank applicants, we can’t have that.
2) The applicant highlights the fact that he is fighting the prevailing dogma in the field but he never breaks down how that dogma is misleading our scientific progress.
Is everything that everyone else discovered wrong in HD research? (hard to believe)
I guess the reviewer missed the lines stating that there has been literally nothing therapeutic in HD research in 25 years, due to a dogmatic approach of protein homeostatic mechanisms being the trigger. And yes, GWAS, and the failure to treat this disease in 25 years, have told us that certainly most HD research is wrong, the majority irreproducible, but that won’t stop older investigators from getting millions more in support.
Instead the applicant should describe in greater detail how his specific discoveries remedy the situation.
Literally the entire research plan section. Incredible.
3) The applicant should break down the proposal into discrete aims. The proposal currently reads like a continuous stream
Even after reading the application multiple times, i have a hard time summarizing what exactly he intends to do.
The story telling aspect of this application is in the bottom half of my assigned grants.
The proposal literally has three delineated aims and sub-aims. I think the reviewer just read the summary and skipped the significance section.
Weaknesses/Faiblesses: Grantsmanship is a problem.
The first half page of the research approach has no leadoff.
Because the previous section is supposed to highlight significance? This small section was “research plan”, in which, surprisingly, I outlined the research plan.
The reviewer is forced to piece things together by themselves.
i.e., read all sections.
There are obvious disconnected factiods listed that a reader has no hint of how they fit into the larger narrative.
For example Our own work, in (Fig. 6) shows that huntingtin N17 and P53 contain a similar CK2 site, which is unique to just these two proteins.
What does this have to do with anythings(sic)?
Literally explained in the significance section, which was clearly unread; this is the central hypothesis of the proposal.