NIH’s New AI Directive in Grant Writing: Well-Intentioned but Misdirected

Abstract

The National Institutes of Health recently announced a policy restricting the use of artificial intelligence in grant preparation, aiming to safeguard originality and prevent excessive submission volumes. While well-intentioned, these measures risk overlooking the deeper structural issues of the funding system. The directive prohibits proposals primarily authored by AI and caps the number of submissions per investigator, reflecting concerns about fairness, reviewer burden, and scientific quality. However, enforcing strict limits on technological tools may be impractical and misaligned with how modern research is conducted, where AI supports tasks from literature synthesis to study design. The core challenge lies not in the use of AI but in a funding model that prioritizes persuasive documents over demonstrated results. Drawing inspiration from venture capital, the authors argue for a staged approach: small, rapid awards for early proof-of-concept work, followed by larger investments contingent on progress. Such a shift would reduce waste, reward feasibility, and transform AI from a perceived threat into a driver of discovery. Ultimately, the directive highlights real risks, but a results-oriented funding structure may better align public investment with impactful science than prohibiting emerging tools.
