This week I caught up with Debra Oetzman from our SEND Services team. She is one of the authors of the SEND standard and an active CDISC and PHUSE volunteer, so many of you may have already interacted with her. At Instem, Debra performs verifications of datasets from suppliers right across the industry.

We got talking about the blog and some of the issues it raises, and she said to me, “One of the things you’ve mentioned in your blog previously, Marc, is how flexible the SEND standard is and how that is both good and challenging at the same time.  When doing a verification, this is actually one of the things that makes it more difficult…and more interesting…at the same time!

“Our Verification service provides an opportunity to see datasets created for an array of study types by a fair number of providers.  Certainly, when reviewing a v3.1 dataset, if you see ‘NORMAL’ in MISTRESC instead of ‘UNREMARKABLE,’ you can look at the SEND IG and determine ‘that is incorrect.’  However, if you look at some of the Trial Designs…well, that is a gray area.  Does the setup represent the study?  Would it cause an issue with analysis?  Does the presentation cause (or increase) ambiguity…or maybe it is unusual, but it decreases ambiguity?  Does it align with the study report and protocol?  Are there errata, best practices, TCG statements, or other documentation that may influence a recommendation one way or the other?  When reviewing a dataset package, we try to put ourselves in the shoes of the reviewer…if we, who live and breathe SEND every day, are struggling to understand the package, perhaps we can provide some suggestions to make it easier once it is submitted.”
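
A quick aside from me: the MISTRESC example Debra gives sits at the easy, rule-based end of verification, and it is the kind of check that can be scripted. Here is a minimal sketch of that one check, purely illustrative; the file name mi.xpt, the use of pandas to read the transport file, and the columns I pull out are my own assumptions, not how any verification service actually works. The Trial Design questions she lists are exactly the part that does not reduce to a rule like this.

```python
# Illustrative sketch only, not a real verification tool: flag the specific
# MISTRESC issue mentioned above, i.e. 'NORMAL' used where SEND IG v3.1
# controlled terminology expects 'UNREMARKABLE'.
# Assumptions: the MI domain lives in "mi.xpt" and pandas is available.
import pandas as pd

def flag_normal_in_mistresc(xpt_path: str) -> pd.DataFrame:
    """Return MI records where MISTRESC is 'NORMAL' rather than 'UNREMARKABLE'."""
    mi = pd.read_sas(xpt_path, format="xport", encoding="utf-8")
    is_normal = mi["MISTRESC"].str.strip().str.upper() == "NORMAL"
    # Keep a few identifying variables so the records can be traced back.
    return mi.loc[is_normal, ["USUBJID", "MISPEC", "MISTRESC"]]

if __name__ == "__main__":
    findings = flag_normal_in_mistresc("mi.xpt")
    if findings.empty:
        print("No 'NORMAL' results found in MISTRESC.")
    else:
        print(f"{len(findings)} record(s) use 'NORMAL' where the IG expects 'UNREMARKABLE':")
        print(findings.to_string(index=False))
```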

I also asked Debra about the quality of the SEND datasets she sees, and she said, “That is something else you have touched on in your previous blogs…getting to the point of having a large volume of good quality datasets.  We have been fortunate that the FDA has been willing to feed back to industry some of the pain points they have encountered when reviewing datasets.  Their webinars are so useful to the industry and to the CDISC organization when working on the next version of the IG or, proactively, when creating standards for a new domain…making sure not to add the same pain point!

“We’ve heard people say that they don’t believe the datasets are being used at the agency.  Certainly, there are many of us who have been in enough CDISC meetings with FDA representatives to KNOW that SEND datasets are being used whenever possible…perhaps not all of them, or all parts of all of them, and certainly that comes back, at least in part, to quality.  If the datasets do not follow the standard, if they do not align with the study report, or if each of three single-dose tox studies has been presented differently, that makes it much more difficult to get reviewers trained and bought into the idea that ‘standardized’ data will make their lives easier.

“I would say that, overall, the quality of the datasets we review has improved over time.  We can tell when vendors have made improvements to their systems (both collection systems and SEND generation systems) that have been implemented at different facilities; we can tell when processes have been adapted to make ‘good SEND’ easier to produce; we can tell when protocol and report templates have been improved to include some of the metadata that makes the dataset more traceable.  There is certainly still room for improvement, but the direction is very encouraging.”

I really appreciated getting her point of view, and I thought you might find it insightful too.

Till next time

Marc

Published by Marc Ellison

Self-confessed SEND nerd who loves geeking out about everything to do with SEND. Active CDISC volunteer and member of the CDISC SEND extended leadership team. Director of SEND solutions at Instem, responsible for all our industry-leading SEND products and services.