There’s an argument I occasionally hear that absolutely infuriates me. It’s usually some variation of the line, “The FDA doesn’t care about the quality of SEND, so why should I?” I know I’m paraphrasing, and probably stating it a little too bluntly, but the argument rests on the fact that the FDA’s Technical Rejection Criteria (TRC) set the quality bar rather low, so there’s an assumption that as long as the TRC is satisfied, that’s ‘good enough’.
As a committed long-term CDISC volunteer developing the SEND standard, I’m highly motivated by the fact that the agency uses SEND datasets to make critical safety decisions about experimental new drugs, and this is the last checkpoint before they are administered to the first human being. Therefore, it is vital that the SEND datasets are a complete and accurate representation of the study data.
The TRC require little more than the inclusion of a demographics file accompanied by the correct study name and study start date in a Trial Summary file with an appropriate define.xml. Providing those data alone will not allow for the review and analysis required to determine the safety implications for the participants of clinical trials.
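To make concrete just how minimal that bar is, here is a deliberately simplified sketch of a TRC-style presence check. This is purely illustrative: the file names, the required Trial Summary parameter, and the function itself are my own assumptions for the sketch, not the FDA’s actual validator or its real rule set.

```python
# Hypothetical, simplified illustration of a TRC-style presence check.
# NOT the FDA's actual validation logic -- file names and parameters
# here are assumptions made for illustration only.

REQUIRED_FILES = {"dm.xpt", "ts.xpt", "define.xml"}  # assumed minimal file set
REQUIRED_TS_PARAMS = {"SSTDTC"}  # study start date parameter (assumption)

def passes_minimal_trc(files, ts_params, study_id):
    """Return True if the bare structural elements are present.

    files     -- collection of filenames included in the submission
    ts_params -- dict of Trial Summary parameter code -> value
    study_id  -- study identifier declared in the datasets
    """
    if not REQUIRED_FILES <= {f.lower() for f in files}:
        return False
    if not study_id:
        return False
    # Every required Trial Summary parameter must be present and non-empty.
    return all(ts_params.get(p) for p in REQUIRED_TS_PARAMS)
```

Note what this sketch never touches: the findings domains, the dosing records, the controlled terminology, or whether the datasets faithfully represent the study at all. A submission could pass a check like this and still be useless for review.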
This question about passing TRC was again raised in the recent FDA SBIA webcast:
Question: Once the data passes the Technical Rejection Criteria, can it be rejected later during review?
FDA Answer: …If the data comes in and the reviewer finds that it’s not clear enough for them to proceed with reviewing or doing analysis, that could still be rejected by the review division.
(The webcast is here and the quote is taken from 50:30 into the video)
The webcast then goes on to implore the listeners to provide “really good quality data” and to do “…whatever you can do to help reviewers to review the data [and conduct] analysis of the data.” Clearly the agency knows the importance of providing high-quality SEND datasets.
I know that some organizations still view the SEND datasets as an unwelcome burden and expense. I know some would like to keep the cost and time of producing SEND to a minimum. I know that this question about quality and the TRC will continue to crop up occasionally, and I’ll continue to get annoyed, and maybe I’ll continue to use this blog as a way of venting my frustration. However, I think that across the industry there needs to be an acknowledgement that the TRC is just the first automated gate: passing TRC is not an assurance of the quality or usability of the SEND datasets. Even so, at the recent joint CDISC and FDA public webinar it was shocking to learn that over 500 studies submitted to the agency between 15th September 2021 and 15th September 2022 failed the TRC. Even such a low bar is a little too high for some.
As you can see, this is a subject I feel very strongly about and so I’d love to hear your feelings on the topic. As usual you can let me know your thoughts at firstname.lastname@example.org
‘til next time