There’s a theme developing here

Human beings have an inherent ability to see patterns in everyday objects, like recognizing shapes and faces in clouds. While that might seem ridiculous, pattern recognition is vital for us. Without it, we could not do things as varied as recognizing faces, finding the answers in a word search, or even appreciating music. However, this week no advanced pattern recognition skills were needed to see a common thread emerging.

My last blog post discussed the details of the FDA’s Technical Rejection Criteria (TRC). With a little over one month to go (September 15, 2021) before the agency activates the TRC and begins rejecting submissions, it’s unsurprising that it’s proven to be my most popular post. Since it went live two weeks ago, there’s been a Federal Register Notice confirming the September effective date for the TRC.

In addition, this week we’ve seen CDISC launch the CORE project, a joint CDISC and Microsoft effort to automate the application of the Conformance Rules. The project launched with an hour-and-a-half presentation introducing the vision and timeline for an exciting new technology that will automatically check conformance to CDISC standards. CDISC started by creating standards, including SEND, and then added a written set of rules to ensure compliance with each standard. However, different tools may interpret the standard, and even those rules, in subtly, or possibly dramatically, different ways. Regular readers of my blog will be well aware of my opinion on the difficulties caused by having a flexible, subjective standard that is open to interpretation. CORE sees CDISC take back ownership of the application of the rules to ensure that conformance is anything but subjective. Into this project they are also adding the execution of non-CDISC rules, including the FDA Business Rules. CORE is something I’ll be watching and reporting back on in my blog as it takes shape.

This past week also saw CDISC publish SEND Conformance Rules v4.0 for evaluating the conformance of SENDIG-DART v1.1, SENDIG v3.0, SENDIG v3.1, and SENDIG v3.1.1 datasets.

Throughout the week, I’ve been poring over the FDA’s feedback on a few different studies in order to assist some of our customers. When the FDA receive a SEND study, they respond privately to the sponsor with feedback about their SEND datasets. This private feedback is one of the main methods the FDA employ to continually improve the quality of SEND datasets being produced.

At first, these things (the TRC, CDISC CORE, new SEND Conformance Rules, and the act of reviewing FDA feedback for several studies) may seem like a disparate set of unrelated topics. However, standing back and reflecting at the end of the week, a clear pattern starts to emerge. Independently, and seemingly coincidentally, the quality of SEND datasets is being questioned and the bar is being raised. To that end, technology is being developed to ensure that we are all compliant, and that compliance is no longer a matter of opinion.

Till next time,

Marc

How to avoid rejection

It’s a basic human need: we want our work accepted and valued. Nobody wants to see their work rejected. It’s so obvious, it almost goes without saying. Worse still would be being rejected by a cold, heartless automated system. That would be soul-destroying. However, with the FDA’s latest deadline, automated rejections will start to occur.

In this post I’d like to discuss how to avoid rejection of your electronic study data – especially since we are less than two months (September 15, 2021) away from the enforcement of the Technical Rejection Criteria (TRC).

Up to now, failure of the TRC has not resulted in rejection; instead, the FDA have simply issued warnings alerting the submitting sponsor to the issue. At the FDA’s recent SBIA (Small Business Industry Assistance) webcast, the agency presented data showing that over 200 studies had been issued such warnings in a single month (March 15, 2021, to April 17, 2021). From September, all such studies will be rejected instead of warned.

I’ve included a link to the TRC at the end of this post, and at first glance the rules can seem quite complicated. There’s mention of a TS domain, certain eCTD sections where the TRC does not apply, discussion about the study tagging file, and so on.

At Instem, we are well versed in helping our customers navigate the intricacies of the TRC, and for certain studies this can get quite complicated. However, in its simplest form, the principle is first to check whether the eCTD section is applicable, and then to check the study start date.

Certain sections of the eCTD are exempt from the TRC. These sections are listed in the TRC, so we can easily check whether a particular study would be exempt. If the study is to be placed in an applicable section, then the next things to consider are the study start date, submission type and study type.

For an IND submission:

  • Single Dose Toxicity, Repeat Dose Toxicity, and Carcinogenicity Studies starting after December 16, 2017, require a full SEND package.
  • Cardiovascular and Respiratory Safety Pharmacology Studies starting after March 15, 2020, also require a full SEND package.

The NDA/BLA requirement is exactly the same as for IND, except in each case the cut-off is a year earlier, so December 16, 2016, for Repeat Dose Toxicity, for example. A minimal sketch of the overall decision flow follows below.
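To tie the principle together, here’s a minimal sketch of that decision flow in Python. The cut-off dates are those summarized above; the function and category names are purely illustrative, so always confirm the details against the TRC itself:

```python
from datetime import date

# First SEND-required study start dates, keyed by submission type and
# study category, as summarized above ("after" a date means on or after
# the following day). Illustrative only; confirm against the TRC.
SEND_CUTOFFS = {
    ("IND", "TOX_OR_CARC"): date(2017, 12, 17),
    ("IND", "SAFETY_PHARM"): date(2020, 3, 16),
    ("NDA/BLA", "TOX_OR_CARC"): date(2016, 12, 17),
    ("NDA/BLA", "SAFETY_PHARM"): date(2019, 3, 16),
}

def trc_requirement(ectd_section_applicable: bool,
                    submission_type: str,
                    study_category: str,
                    study_start: date) -> str:
    """Hypothetical helper: what the TRC expects for a single study."""
    if not ectd_section_applicable:
        return "Exempt: eCTD section not covered by the TRC"
    cutoff = SEND_CUTOFFS.get((submission_type, study_category))
    if cutoff and study_start >= cutoff:
        return "Full SEND package required"
    # Earlier studies in an applicable section still need the
    # single-record 'Simplified ts.xpt' (explained below).
    return "Simplified ts.xpt required"

print(trc_requirement(True, "IND", "TOX_OR_CARC", date(2018, 3, 1)))
# -> Full SEND package required
```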

However, even if the study starts before these dates, it is not fully exempt if it falls within an applicable eCTD section. In that case, a ‘Simplified ts.xpt’ file is still required. This is an electronic data file stating the Study ID and the study start date. While not a full SEND package in itself, its format generally follows the SEND standard for the TS domain, albeit with just a single record, along the lines of the sketch below.
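Here’s a minimal sketch of what that single record might contain, using a hypothetical study identifier; confirm the exact variable list against the current TRC specification before relying on it:

```python
import pandas as pd

# Illustrative single-record 'Simplified ts.xpt' content: the Study ID
# plus the study start date (SSTDTC) in ISO 8601 format.
simplified_ts = pd.DataFrame([{
    "STUDYID": "STUDY-001",         # hypothetical study identifier
    "TSSEQ": 1,
    "TSPARMCD": "SSTDTC",           # parameter code for study start date
    "TSPARM": "Study Start Date",
    "TSVAL": "2015-06-01",          # the study's actual start date
}])

# The frame would then be written out as a SAS transport (xpt) file,
# e.g. with pyreadstat.write_xport(simplified_ts, "ts.xpt", table_name="TS").
```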

As I said when I started this post, nobody likes being rejected, so it’s vital that sponsors understand the TRC requirement as enforcement begins in September. The TRC is just the first step in the process the agency employs to ensure it receives good-quality, usable SEND datasets. Maybe in a future post we can discuss some of the further measures they take.

Till next time,

Marc

Access the full TRC here: https://www.fda.gov/media/100743/download

A unique insight into SEND

This week I caught up with Debra Oetzman from our SEND Services team. As one of the authors of the SEND standard and an active CDISC and PHUSE volunteer, she may well have crossed paths with many of you already. At Instem, Debra performs verifications of datasets from suppliers right across the industry.

We got talking about the blog and some of the issues it raises, and she said to me, “One of the things you’ve mentioned in your blog previously, Marc, is how flexible the SEND standard is and how that is both good and challenging at the same time. When doing a verification, this is actually one of the things that makes it more difficult…and more interesting…at the same time!

“Our Verification service provides an opportunity to see datasets created for an array of study types by a fair number of providers. Certainly, when reviewing a v3.1 dataset, if you see ‘NORMAL’ in MISTRESC instead of ‘UNREMARKABLE,’ you can look at the SEND IG and determine ‘that is incorrect.’ However, if you look at some of the Trial Designs…well, that is a gray area. Does the setup represent the study? Would it cause an issue with analysis? Does the presentation cause (or increase) ambiguity…or maybe it is unusual, but it decreases ambiguity? Does it align with the study report and protocol? Are there errata, best practices, TCG statements, or other documentation that may influence a recommendation one way or the other? When reviewing a dataset package, we try to put ourselves in the shoes of the reviewer…if we, who live and breathe SEND every day, are struggling to understand the package, perhaps we can provide some suggestions to make it easier once it is submitted.”

I also asked Debra about the quality of the SEND datasets she sees and she said “That is something else you have touched on in your previous blogs…getting to the point of having a large volume of good quality datasets.  We have been fortunate that the FDA has been willing to feed back to industry some of the pain points they have encountered when reviewing datasets.  Their Webinars are so useful to the industry and to the CDISC organization when working on the next version of the IG or, proactively, when creating standards for a new domain…making sure not to add the same pain point!

“We’ve heard people say that they don’t believe the datasets are being used at the agency.  Certainly, there are many of us who have been in enough CDISC meetings with FDA representatives to KNOW that SEND datasets are being used whenever possible…perhaps not all of them, or all parts of all of them, and certainly that comes back, at least in part, to quality.  If the datasets do not follow the standard, if they do not align with the study report, or if out of three single-dose tox studies each has been presented differently, that makes it much more difficult to get reviewers trained and buying into the fact that ‘standardized’ data will make their lives easier.

“I would say that, overall, the quality of the datasets we review has improved over time.  We can tell when vendors have made improvements to their systems (both collection systems and SEND generation systems) that have been implemented at different facilities; we can tell when processes have been adapted to make ‘good SEND’ easier to produce; we can tell when protocol and report templates have been improved to include some of the metadata that makes the dataset more traceable.  There is certainly still room for improvement, but the direction is very encouraging.”

I really appreciated getting her point of view and I thought you might find it insightful too.

Till next time

Marc

Is SEND really that exciting?!

It was one of the finest moments of my career, but it didn’t exactly get off to a great start. At the Safety Pharmacology Society, I was invited to speak at the DSI Data Blast (think Safety Pharmacology meets WrestleMania). I introduced myself and explained that I was going to be talking about the Standard for the Exchange of Nonclinical Data. They booed. Actually booed. As anyone familiar with the Data Blast will know, it was a forum where booing wasn’t exactly discouraged, but still, this seemed more vehement than usual, and I swear I heard someone shout “No! We hate Standards!”. Actually, putting that in writing makes it sound far worse than it really was. In truth, it was more jovial than aggressive. I guess you needed to be there.

Still, the idea of ‘standards’ doesn’t exactly set the world alight. SEND is a data standard: a formal set of rules dictating how data are to be represented. Can you think of anything more dull, dry and uninspiring?

Why is it, then, that when I heard someone say this week, “SEND is almost a religion to some people, they get so passionate about it,” the remark was greeted with knowing smiles and affectionate nods? I thought, “Yes, to some people it is, and you are addressing a few of us right now.”

In my first blog post, I admitted I was a total geek for SEND. That shows no signs of dissipating, and I know I’m not alone. It’s true, some people get really passionate about SEND. What to some would seem a dry, dull topic can get some of us fully animated and soapbox-preaching for hours.

For the uninitiated, it is a strange phenomenon to behold, but those of us up to our elbows in SEND really, really care about it.  We will give chapter and verse on the correct population of a variable, how that variable is to be used and the implications of the data representation.

Yes, we like feeling good about knowing how SEND helps FDA reviewers be far more efficient, enabling them to spend more time evaluating drugs than manipulating PDFs. Yes, we get excited by the possibilities of SEND: the opportunities it brings for data mining, cross-study analysis and historical control data. Yes, we’ll get enthusiastic about the benefits of being able to exchange clear, accurate electronic data. But, more than that, we are completely obsessive about the standard itself. We believe that this is our standard, and so we are completely zealous in our dedication to ensuring that our variables are used and never misused.

Of all the things we could be so emotionally invested in, who would have thought that our obsession would be the standardization of the exchange of nonclinical data?

I started this post by recalling the most fervent display of resistance to SEND that I’ve ever witnessed. I fully accept that opinion, and I’m sure some readers of this blog share it. Yet I’m also certain that there’ll be readers who can fully relate to my efforts to capture in words just how energized we can get about SEND.

Till next time

Marc

Proudly, I still have the T-shirt from the wonderful Data Blast!

The pros and cons of having a flexible standard

I feel I need to make a confession. I need to admit that being a vendor of SEND software and services drives a strong bias in how I believe the SEND standard should be defined and implemented.

In the last couple of postings, we’ve been discussing how flexible, or ‘subjective’, the SEND standard is. You may have noticed from my tone that I have a particular dislike for such flexibility. I’ve tried to hide it, but unfortunately my own bias seeps through.

To explain how being a vendor drives that bias, I thought we’d have a quick look at a few of the pros and cons of having such a flexible standard.

Pros

Flexibility was meant to make the SEND Standard easier to adopt. Due to the vast differences in how some data are collected and reported, the standard couldn’t be too prescriptive in what it expected. It needed to be loose enough that if a study had a particular piece of data or metadata associated with a result, then there was a place for it, but if it wasn’t captured, then that was okay too.

That means all organizations can embrace SEND, regardless of whether or not they capture a particular piece of information. The same principle applies to timing variables. Looking at the study data, if there are only calendar dates, then that’s fine. If there are dates and times, then that’s fine too. However, if there are just study day numbers instead, then that’s equally acceptable. Just populate what timing information there is and don’t worry about what’s missing. The sketch below illustrates the idea.
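As a quick illustration, here are three representations of the same hypothetical body weight result, each populating only the timing information that was actually collected (the variable names follow the SEND BW domain; the values are invented):

```python
# Three equally acceptable ways to populate the timing of the same
# body-weight record, reflecting the flexibility described above.
# BWDTC takes ISO 8601 dates/times; BWDY takes the study day number.

records = [
    {"BWTESTCD": "BW", "BWORRES": "251", "BWDTC": "2021-03-15", "BWDY": None},        # calendar date only
    {"BWTESTCD": "BW", "BWORRES": "251", "BWDTC": "2021-03-15T08:30", "BWDY": None},  # date and time
    {"BWTESTCD": "BW", "BWORRES": "251", "BWDTC": None, "BWDY": 8},                   # study day number only
]
```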

Taking this approach means we have a far more inclusive standard that should allow for a wider range of studies and collection methods.

That sounds great, right? So, what’s the problem?

Cons

For many organizations consuming SEND datasets, the lack of consistency from study to study presents various issues. The obvious one is the hamstringing of cross-study analysis. How can we compare two similar studies when the data are rendered so differently? This is a common struggle for anyone trying to develop such tools; it is difficult when we can’t rely on variables being populated in a consistent manner. As an example, just last week we were debating the Nominal Label, a variable that, as well as containing either a day or week number, may or may not also include timepoint information, categorical information, or scheduling information (a few illustrative values follow below). All of these are acceptable uses, but what does that mean for the tool consuming the data?
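To make that concrete, here’s the kind of variety a consuming tool has to cope with. These Nominal Label values are all invented, but each is the sort of population the standard permits:

```python
# All of these are plausible Nominal Label values (e.g. LBNOMLBL in
# SEND 3.1), yet a tool parsing them must cope with day numbers, week
# numbers, timepoints, categories and scheduling detail in one
# free-text variable. Examples are hypothetical.

nominal_labels = [
    "Day 1",                       # plain study day
    "Week 4",                      # week number instead of a day
    "Day 1, 2h Post-dose",         # day plus timepoint information
    "Week 13, Recovery",           # week plus categorical information
    "Day 28, Scheduled Necropsy",  # day plus scheduling information
]
```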

Another difficulty is that each organization has developed its own interpretation of the standard. That makes things difficult for the humble vendor, who needs to produce tools to both create and consume SEND in a range of different ways to suit the various organizations. So, obviously my bias is coming through strongly here, as I’d much prefer it if we could all just do SEND the same way. No options, no flexibility.

I started this post with the confession that, as a vendor, I have a poorly concealed bias against allowing SEND to be so flexible, and by the end, the post lays that bias out for all to see. As well as making things much easier for tool developers, I also think the whole industry would benefit from SEND being more restrictive. Whether you agree or disagree, I’d love to know your opinion. Email me at the usual address: marc.ellison@instem.com.

Till next time

Marc

SEND: Can a standard be subjective?

You don’t need me to tell you that SEND is the standard for representing nonclinical data, but what would you think if I told you that, just by looking at the data, it’s quite easy to tell which software and organization produced the SEND datasets? How ‘standardized’ does that sound?

In my last post, I touched on the idea that interpretation of the SEND Standard is quite subjective. SEND veterans are well aware of this, but it often comes as a surprise to the newbie. This subjectivity leads to organizations interpreting SEND differently, meaning that SEND is a standard that everyone does slightly differently.

The result is that SEND is a ‘Subjective Standard’, which itself sounds like an oxymoron.

There are various things I could point at to illustrate this subjectivity; however, the easiest example is that the SEND Implementation Guide contains many, many statements that either imply, or outright state, that there is a choice in how data are represented in SEND. It does this with statements along the lines of “In this case, the sponsor chose to represent the data like so…”. A quick search shows over fifty examples where such a choice is either stated or implied.

So how did we get here? Those defining the early versions of the SEND Standard set out with the best of intentions and needed to address the particular landscape before them. That landscape included the various ways different organizations represented what were essentially the same data, and often the only source of data was legacy studies in PDF format. It was also intended that no organization would need to change its internal practices – but hindsight has taught us something very different. I’ve discussed in this blog before the naivety of thinking that SEND would not change data collection practices, but back then, that was the intention. Therefore, the standard needed to be flexible. It couldn’t be too prescriptive. We couldn’t have a standard that required information, data or metadata, that was never collected. As a result, many SEND variables offered some level of choice as to whether or not they would need to be populated.

This resulted in a standard where some concepts are defined very loosely. As any SEND expert knows, probably the biggest offender is the Clinical Signs domain. The only variable truly standardized there is the Category; the other variables offer a considerable amount of choice in how data are represented and which variables to populate.

There are plenty of other areas we could also point at for examples of SEND requiring a subjective opinion. Timing variables, for example. Just this week, we’ve again been debating population of the Nominal Label. Yet, we could also be debating which combination of timing variables could and should be populated.

We could also say so much about the subjective nature of Trial Design, but I think I’ve said enough to convey my general point:

We are all grappling with how we choose to represent our data in a standard that can be very subjective.

Maybe in a future blog post I’ll discuss the advantages and disadvantages of having such a flexible standard.

Till next time,

Marc

The SEND Traffic Jam

I hope you enjoyed my last blog post that discussed the differences in the analogue world versus the digital world, and how this gets us into a debate about whether SEND should represent the data we collect, or represent the data we report.

Following on from that, this analogue-versus-digital discussion has got me feeling a little nostalgic. If, like me, you are of a certain vintage, then I’m sure you can remember having to travel to a VHS rental shop to enjoy a movie for the evening, or having to wait days to have your photos developed. Back then, living in an analogue world without streaming meant things took time.

Yet now it seems the whole world has got faster thanks to digital data and standardized formats. So why are we hearing about studies getting stuck in the SEND Traffic Jam? Why are we seeing wait times for SEND conversions increasing, with some organizations seeming to have an ever-growing SEND backlog?

At a recent industry meeting, the FDA asked about the impact of SEND and the amount of time it is taking to convert a single study. While Instem’s average is 4 weeks, the consensus was an average of 8 weeks of elapsed time.

Living in the digital world, we are expecting things to be faster, not slower, so why are we seeing this extra time needed after the creation of the antiquated analogue appendices?

Clearly, we are talking about safety assessment data and not simply our holiday photos. As such, SEND datasets are not naively accepted as they first roll out of the machine. They’re checked for accuracy and compliance, and human beings decide whether the data are rendered in a way that best suits the study. This subjective element is often the most time consuming, and that’s because interpretation of the SEND standard itself is highly subjective.

These are just some of the factors, and all of these things take time. So, there are good, legitimate reasons why SEND conversions are not as instant as streaming Netflix on a Friday night. It’s more like restoring an old 35mm movie into a high-definition multimedia experience. Just like SEND, it requires time and expertise, and the results can certainly vary in quality.

Organizations have more choices than ever for creating SEND, including internal IT departments. It’s vital that they have the right SEND partners and collaborators to ensure they are meeting the requirements of sponsors, the FDA and any other consumers of the data.

As always, if you’d like to continue this conversation, or have a topic you’d like me to address, drop me a line.

Till next time

Marc

Skeuo-what now?!

Does SEND include any skeuomorphs?

Okay, strange word, I know – stay with me…

Why does my phone make the sound of a shutter when I take a photo? It doesn’t have a shutter, and even if it did, I wouldn’t be able to control its volume. Yet I hear that satisfying old-school shutter sound every time I click. Do you think there are kids alive today who have only ever known digital cameras and don’t even know what that artificial sound is trying to mimic?

When I read a PDF on my tablet, why am I swiping to turn imaginary pages? Why are there even imaginary pages?

I guess it’s human nature to try to mimic the analogue in the digital world. It’s familiar. It makes the transition easier.

By the way, such idiosyncrasies are called skeuomorphs: strange functional artifacts carried from one medium into another, where they no longer have a function. In this case, replicating an analogue quirk in a digital format.

Does SEND include any skeuomorphs? It’s an interesting question. At face value we’d probably say not, but scratch beneath the surface and I think we get into the debate about whether SEND should be a representation of the collected data or the reported data. Most of the time, in the nonclinical world, these are the same thing, but not always. Food and Water Consumption is the entrance to this particular rabbit hole. We collect Given Amount and Residue Amount, but report a calculated consumption. Okay, that’s not a skeuomorph in itself, but we are just getting warmed up here. What about scenarios where food is collected daily but reported weekly, or by some other interval, usually to save space on the page? If the SEND dataset reflects the weekly reporting, then are we representing our data a certain way in SEND to save space on a page?

Now, that’s definitely starting to sound like a skeuomorph.

Well, here’s something that cropped up here at Instem in the last week or two as we were converting a DART study, but don’t worry, you don’t need to be a DART expert to accompany me through this example. Due to the volume of data on Embryo-Fetal Development studies, it’s common practice to report only positive findings, and not list pages of ‘Unremarkable’ or ‘Normal’ or whatever term your system uses. So, should SEND include all of these unreported ‘Normals’? If they are omitted from SEND, and therefore match the report, we’d be omitting data simply to save space on a page. Clearly a skeuomorph. So, in our software, we have chosen to include these records (sketched below). We are going with what is collected, not what is reported. Yet, remember that differences between the SEND datasets and the study report need to be clearly explained in the nonclinical Study Data Reviewer’s Guide (nSDRG).
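To picture the difference, here’s a minimal sketch of the two representations. The subjects, fetus identifiers and finding terms are all hypothetical:

```python
# The report lists only the positive finding; the collected data also
# include a 'NORMAL' record for every examined fetus. Subject IDs,
# fetus numbers and variable names are illustrative, not a real domain.

reported_findings = [
    {"USUBJID": "DAM-101", "FETUSID": "03", "ORRES": "Cleft palate"},
]

# Going with what was collected, not what was reported:
collected_findings = reported_findings + [
    {"USUBJID": "DAM-101", "FETUSID": "01", "ORRES": "NORMAL"},
    {"USUBJID": "DAM-101", "FETUSID": "02", "ORRES": "NORMAL"},
    # ...one record per examined fetus, even though the report omits them
]
```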

We also encountered other examples where collected and reported data differ, and sometimes it seemed appropriate to go with what was collected, while at other times we went with what was reported, but I’ll save those examples for another day.

Finally, I’ll leave you with the next question on this particular trail: what about rationalized data? Whether it’s pathology data, clinical signs, or fetal pathology, data are often rationalized at report time for ease of analysis and/or the best use of the available space on the page. Which do we include in SEND?

And now you’ll forever be reminded of SEND whenever you hear the artificial shutter sound on your phone’s camera. That’s one of those skeuo-what-now thingies.

Till next time…

Marc

Is the world of SEND moving too fast?

Well, March 2021 turned out to be a big month in the world of SEND. The FDA issued their 90-day notice for the enforcement of the Technical Rejection Criteria (TRC) for Study Data. This means that the FDA will begin rejecting studies that don’t meet the criteria effective September 15, 2021.

We also had a new Technical Conformance Guide (TCG) drop from the FDA, and yes, as this blogger speculated, immune response data, in an optional custom IS domain, were added to the Guide.

We saw the Federal Register Notice (FRN), and then the obligatory update to the Data Standards Catalogue, to support, and later require, SENDIG-DART v1.1, the CDISC SEND standard for Developmental and Reproductive Toxicology. This coincides with a lot of FDA activity around their Fit For Use (FFU) pilot for that DART standard, including the selection of sponsors participating in the pilot and then the scramble to produce the datasets in record time.

That was all just in one month. Damn, do those FDA folks ever sleep?!

Not wanting to be left out of the party, CDISC published SENDIG v3.1.1. While limited to changes to the PC and PP domains, the publication of a new version of SEND is a very significant moment.

We also, of course, squeezed in the Society of Toxicology’s ToxExpo. Even when virtual, as it was again this year, it remains for many of us the largest and most influential industry event of the year. Like many of you, I am very much looking forward to being back on-site next March in the San Diego sun!

How do you feel about so much happening in a single month?

Personally, I’m very proud and excited. I was part of both of the teams that authored the SENDIG-DART v1.1 and SENDIG v3.1.1 standards. Having volunteered so many hours to developing these standards, I want to see them adopted and required as soon as possible, and applied to as wide a range of studies as is feasible. But I know that’s not everyone’s opinion. I know there will be many who struggle to keep up with the latest changes, and many who do not even know where to look to find out what’s changing (maybe that’s how you first stumbled upon my blog!).

I’m sure there are many that see all of these updates and announcements and feel that it’s too much, too soon. I’m sure there are many also still getting to grips with some of the challenges of SEND 3.1, and the idea of SENDIG 3.1.1 and SENDIG-DART probably fills them with dread.

I know there’s an opinion in some quarters that SEND is a burden, a necessary evil, an unwelcome expense. Those are valid opinions and I understand what leads people to those conclusions.

Yes, I understand that point of view, but I still see the benefits outweighing the cost. Most of industry is on board with SEND. Whether your solution includes new tools, partnering with a service provider, or a combination of the two, most of our industry is over the initial hurdle and now up and running. I’m continually reminded of the benefits that SEND brings both to industry and to FDA. As I look back at March, I’m really excited and encouraged to see the pace of SEND adoption is picking up.

How do you feel about it? Email marc.ellison@instem.com to let me know.

Till next time

Marc

Supported but not required by FDA – What’s the point?

SENDIG-DART v1.1 is currently supported by FDA, but not actually required, so should anyone really care? What’s the point of a standard that isn’t required yet?

In this context, supported means that a sponsor could choose to include SEND datasets, and the FDA are willing and able to receive them and would expect to use them when reviewing the submission.

However, as they would not be required, why would any organization actually do this?

I suppose the first thing that comes to mind is the fact that SEND costs money. It just does. I think we’ve all got accustomed to that idea. It also takes time. Time and money are two very persuasive arguments, yet there are organizations already getting on board with this new standard.

We have seen this before. Whenever the agency adds to the Data Standards Catalogue, they always add a support date in advance of the requirement date. Typically, they issue verbiage along the lines of “…strongly encourage the use of…” to indicate their desire to receive the standardized data ahead of the requirement. The FDA cannot simply add a standard and require it immediately, so industry gets a period in which to implement it.

Anyone who remembers their similar position for SEND 3.0 and SEND 3.1 will know that any implementation period comes and goes far too quickly. Two years can feel like a long time, but it will be over all too soon.

So ‘Supported but not Required’ is a siren. A great, noisy, clanging call to action, telling us that this requirement will drop onto our desks with all the grace and elegance of a sledgehammer.

Some of you are already underway with implementation. Many more are getting educated on the new standard, which is something I’d highly recommend if you haven’t begun this already. Being better informed about its scope, content and expectation sooner rather than later will enable you to identify the impact of this new requirement on your organization.

Some organizations have already thought about how many studies will be impacted and, given that it’s a relatively small number compared with general toxicology, have set about identifying an outsourcing partner to handle the requirement on their behalf.

However, would any organization actually provide SEND datasets as part of a submission where they are supported, and even strongly encouraged, but not actually required?

Well, I believe that several organizations did exactly that for SEND 3.0 and 3.1. Granted, not a great number of organizations, but a significant group for sure. Once ready, these organizations saw value in putting their own capability, and the capability of their CROs and solutions partners, to the test, secure in the knowledge that there would be no refusal to file. Knowing there are so many complexities and moving parts, there’s certainly an argument to be made for testing out a new standard during the limbo period of ‘Supported but not Required’.

As usual, please send your comments, thoughts, disagreements or questions to marc.ellison@instem.com

Till next time

Marc