FDA provides more detail on the Scope of SEND

This past week was the CDISC SEND Virtual Face-to-Face meeting, and yes, I still snicker like a schoolboy at calling something a ‘virtual face-to-face’, but hopefully it’s the last one and by next Spring we can be face-to-face in person. Anyways, as is usual, the highlight of the week was the FDA public meeting. As you may imagine, this time it was dominated by discussions around the recent addition to the Technical Conformance Guide (TCG), momentously entitled the “Scope of SEND”.

I previously wrote about this when the FDA mentioned it at last year’s public meeting, and again when the recent TCG dropped. Then last week, the agency took the opportunity to unpack it, explaining their thinking and adding clarity.

The public meeting started with a reference back to the binding guidance, serving as a reminder that this is not optional or subjective, and that it flows directly from that oh-so-important 2014 Guidance. This reinforced the message that the recent TCG update is not adding anything new; it’s simply clarifying a position that harks back to that 2014 publication.

It was this time last year that the FDA clearly stated to industry that, although the Technical Rejection Criteria references the eCTD sections, the guidance doesn’t limit SEND requirements in the same way. A year later, we get a very detailed insight.

Without going into excessive detail, the guiding principle is this:

  • If the study starts after the requirement date; and
  • The study is needed to assess and support the safety of clinical investigations; and
  • The study data can be represented in SEND;

Then the study requires SEND for submission.
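
To make that logic concrete, here’s a minimal sketch of the principle as Python (purely my own illustration, not an FDA algorithm; the requirement dates come from the TRC, and the final word on any individual study rests with the agency):

```python
from datetime import date

def send_required(study_start: date,
                  requirement_date: date,
                  supports_clinical_safety: bool,
                  representable_in_send: bool) -> bool:
    """Sketch of the guiding principle described at the public meeting.

    Illustrative only: the real determination rests with the FDA and the
    binding guidance, not with a simple boolean check like this.
    """
    return (study_start > requirement_date
            and supports_clinical_safety
            and representable_in_send)

# e.g. a repeat-dose toxicity study for an IND, started March 2021
print(send_required(date(2021, 3, 1), date(2017, 12, 16), True, True))  # True
```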

Things like the age of the subjects, whether or not the study is GLP and the study report status were all discussed. It was clearly stated that these things do not affect whether or not the study requires SEND.

For some sponsors, this now causes a significant issue for certain studies. They may have studies already completed by their CRO where, at the time, it was incorrectly assumed that the study wouldn’t require SEND because it was in an eCTD section not covered by the Technical Rejection Criteria. So, they may have completed studies without SEND which actually require SEND. Fortunately for them, specialist SEND organisations like Instem can take completed studies and create SEND from the study report, allowing them to be compliant even if they didn’t realise that SEND would be required when they initiated the study.

The FDA presentation also went to great lengths to talk about the value of SEND for regulatory purposes. They described its value for primary reviews, how it also assists in secondary and tertiary reviews, and how SEND is “…used to conduct reviews that will impact regulatory decisions”.

Actually, it’s really inspiring hearing this, at a CDISC meeting focused on driving the SEND standard forward. The general feeling in the (virtual) room was really appreciative of the FDA’s openness in this way.

Till next time,

Marc

What’s the deal with the eCTD Study Tagging File (STF), SEND and the Technical Rejection Criteria?


As you are probably aware, Instem recently acquired PDS Life Sciences and so I’ve been working very closely with my new colleague, Mike Wasko. Mike is a prominent figure within our industry, and this week as we were catching up, we started discussing the question: What is the relationship between the eCTD Study Tagging File (STF) and SEND datasets and the impact on the Technical Rejection Criteria (TRC)? As this is an area that causes some confusion and can lead to an automated Technical Rejection of the submission, I asked Mike to write up his thoughts, so they could be shared in this blog. Here’s what he had to say:

As outlined in the Technical Conformance Guide (TCG), TRC and eCTD Guidance, the study-id used in the STF must match either the STUDYID or the Sponsor’s Reference ID (SPREFID) value in the TS domain. Additionally, this study-id must be present in the define.xml file in the StudyName element. This connection is clearly understood; however, knowing what study-id the Sponsor will use in the STF is another story.
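
To make that concrete, here’s a rough sketch of where those values live and how they might be checked (a sketch only, assuming the pyreadstat library to read the XPT file; the namespace shown is the ODM v1.3 namespace used by Define-XML 2.x, and details vary by version):

```python
import xml.etree.ElementTree as ET
import pyreadstat  # assumption: pyreadstat is used to read the SAS XPORT file

ODM = "{http://www.cdisc.org/ns/odm/v1.3}"  # Define-XML 2.x builds on ODM v1.3

def collect_study_ids(ts_path: str, define_path: str) -> dict:
    """Gather the identifiers the STF study-id could be matched against."""
    ts, _ = pyreadstat.read_xport(ts_path)
    ids = {
        "TS STUDYID": set(ts["STUDYID"].dropna()),
        # SPREFID is carried as a TS parameter: TSPARMCD='SPREFID', value in TSVAL
        "TS SPREFID": set(ts.loc[ts["TSPARMCD"] == "SPREFID", "TSVAL"].dropna()),
    }
    root = ET.parse(define_path).getroot()
    names = root.findall(f".//{ODM}GlobalVariables/{ODM}StudyName")
    ids["define.xml StudyName"] = {el.text for el in names}
    return ids

stf_study_id = "ab-1234"  # hypothetical value taken from the STF
for source, values in collect_study_ids("ts.xpt", "define.xml").items():
    print(source, "match" if stf_study_id in values else "NO MATCH", values)
```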

Usually, study conduct and the subsequent SEND dataset creation are managed by the Study Director and a SEND team. Whether the SEND team is internal to the CRO or external, it typically has little to no contact with the submission team creating the STF.

Now that the FDA’s TRC is in effect (September 15, 2021), the connection between the SEND datasets and the STF has become essential, and communication regarding the study-id in the STF must adapt. In an ideal setting, the team creating the SEND dataset and the associated define.xml file will be informed of the study-id to be used in the STF.

One would think that the study identifier should be obvious, but that is not always the case. This, coupled with the fact that the STF is usually created after the SEND datasets are completed, creates a unique problem. For studies conducted by CROs, a study might have several different identifiers: the Sponsor study id, the CRO study id, a report number or even something else. In this situation, it is difficult for the SEND team to determine which identifier will be used in the STF, and remember that wrong assumptions will cause a technical rejection.

Even if a single study number is used in the report and SEND dataset package, another use case can cause a challenge. The eCTD and STF limit the types of characters that can be used, due to the folder structure. Special characters such as “/”, “\”, “.” and others should not be used. If the study report and protocol use such characters for the study number, how will it be represented in the STF? Was the character simply removed? Was the character replaced with an acceptable character such as “_” or “-”?
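
As a small illustration of why this becomes a guessing game, here are the kinds of candidate renderings a SEND team might have to consider (the study number and the mappings are hypothetical; the eCTD guidance governs which characters are actually allowed):

```python
raw_study_number = "TX/2021.004"  # hypothetical study number from the protocol

# Three plausible ways a submission team might have sanitized it for the STF
candidates = {
    "characters removed":  raw_study_number.replace("/", "").replace(".", ""),
    "underscore replaced": raw_study_number.replace("/", "_").replace(".", "_"),
    "hyphen replaced":     raw_study_number.replace("/", "-").replace(".", "-"),
}
print(candidates)
# {'characters removed': 'TX2021004',
#  'underscore replaced': 'TX_2021_004',
#  'hyphen replaced': 'TX-2021-004'}
```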

For submissions that occur near the time of SEND dataset generation, proactive communication from the submission team can resolve the issue. However, consider that SEND datasets can sit for years before submission, say for an NDA. Staff may have changed, or maybe the agreement wasn’t well documented. Now the Sponsor will need to ensure that the STF and the STUDYID or SPREFID align. If they do not, the ts.xpt and define.xml files will have to be updated accordingly. Currently, we are not aware of any validators that do this check automatically, so Sponsors will also need to know where to look in the define.xml and ts.xpt to ensure the study-id specified in the STF matches the dataset and define.xml file (a check along the lines of the sketch above). As time progresses, it seems more and more people will need to become SEND aware, as it is no longer strictly a SEND team and Study Director responsibility.

I thought that was a great explanation for Mike to share, and you can expect that I will be asking Mike for future contributions as we continue to navigate the challenges and opportunities that SEND presents.

If you’d like to continue this conversation, drop me a line at marc.ellison@instem.com

Till next time,

Marc

Is SEND the key to unlocking Historical Control Data?

“You wait all day for a bus and then three come along at once.” It’s a phrase I used to hear a lot in my younger days when I would often ride public transport. There’s been some of that going on this week, though not with buses. For me, this week it’s been the role of SEND in Historical Control Data (HCD) systems.

It must have been a year or two since this topic last cropped up, but this week, completely coincidentally, it’s been raised several times by different people in completely different contexts. When that topic is raised, immediately people start considering the role of standardized data, and the possibilities SEND brings.

Any potential HCD system has three components:

  1. A large wealth of data to draw on
  2. A way of harmonizing those data, particularly when the data come from different sources utilizing different structures and terms
  3. A tool to query, aggregate and visualize the data

As an organization that was, for many years, simply a software vendor, we found tool development and data harmonization sat well within our comfort zone. Yet without a significant volume of electronic data to draw on, there wasn’t much value in developing such tools.

The SEND standard is now opening that up as a possibility. This is because more data are becoming available since CROs are no longer just supplying the PDF study report, but also providing standardized electronic data.

Standardized data means consistent data, regardless of CRO or data collection system. That’s the idea that really opens up the possibilities for HCD systems. That final stumbling block to the value of such a system is now overcome.
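
As a toy illustration of what that consistency buys you, consider pooling body weight data from several SEND studies; because the variable names, codes and units follow the standard, the aggregation is trivial. This is a sketch under big assumptions: the folder layout is hypothetical, pyreadstat is assumed for reading the XPT files, and a real HCD system would first isolate control groups via the TX/DM domains and account for strain, sex, age and units.

```python
import glob
import pandas as pd
import pyreadstat  # assumption: pyreadstat is used to read the XPT files

frames = []
for path in glob.glob("studies/*/bw.xpt"):  # hypothetical folder layout
    bw, _ = pyreadstat.read_xport(path)
    frames.append(bw[["STUDYID", "USUBJID", "BWTESTCD", "BWSTRESN", "BWSTRESU"]])

pooled = pd.concat(frames)
# Because BWTESTCD and the standardized result variables follow SEND CT,
# a cross-study summary is a one-liner rather than a mapping exercise.
summary = (pooled[pooled["BWTESTCD"] == "BW"]
           .groupby(["STUDYID", "BWSTRESU"])["BWSTRESN"]
           .describe()[["count", "mean", "std", "min", "max"]])
print(summary)
```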

As well as some organizations drawing on their own data, others are considering the possibility of pooling their control data. It’s an intriguing possibility. Some SEND tools, like Instem’s SEND Explorer, already have built-in visualizations for querying Historical Control Ranges. These would provide far more value when hooked up to such a vast database. This then raised the question of whether there’s a need for independent curation of the data, and maintenance of the database.

Anyways, having not thought about HCD for a while, I was asked about it in the context of our own data collection systems, then came a query about SEND Explorer’s functionality, and then a question about the possibility of data curation. “Just like buses, three come along at once.”

Till next time,

Marc

Is the latest Technical Conformance Guide update the most important to date?

It was late 2020, during the FDA public webinar, as part of the CDISC face-to-face meeting, that the agency made the simplest of statements, which seemed to turn the world of SEND upside down:

The placement of a study into the eCTD format does not determine the SEND requirement

It was just one little line sitting innocuously amongst many others. Online chat lit up. For years, the CDISC fraternity had proclaimed the eCTD section, together with the study start date, as the key foundations for determining, absolutely, if a study required SEND datasets or not.

On that October morning, without fanfare or forewarning, the agency let us all know that this simply was not the case. What followed was a period of questions and, honestly, confusion. Then, last week, the FDA’s Technical Conformance Guide v4.8 was published. Three additional pages of text were added under a new section called “Scope of SEND”.

Well, what does it say? It opens with the line, “The following is the Agency’s current thinking of the scope of SEND…”. So, everything stated is just the ‘current thinking’, and this target is moving.

It then broadly states that SEND datasets are required for as many studies as possible, an idea probably best captured in the line, “If the nonclinical pharmacology or toxicology study is required to support a regulatory decision by the Agency… then the nonclinical study would require SEND.”

Within this sweeping statement, the document then deals with some of the specifics, particularly focusing on areas that we’ve heard questioned over the past year. What happens when one study type incorporates endpoints from another study type? The document uses the example of general toxicity studies that incorporate cardiovascular safety pharmacology or genetic toxicity endpoints. It states that the study is still in scope for SEND, with the expectation that any data which can be rendered in SEND should be rendered in SEND.

The guide states that the age of the subject doesn’t impact the decision as to whether or not SEND is required. It states that SEND is required for juvenile studies as long as the study does not “…include multiple phases [which] cannot currently be modelled…”.

It further states that “The requirement for SEND is not limited to the drug substance”, nor to “…study report status or the finalization of the study report…”. Also, regarding GLP status: “As both GLP and non-GLP toxicity studies may be submitted to the FDA to support clinical safety, the decision for inclusion of SEND is independent of GLP status.”

Over the course of three pages, the agency is attempting to be as clear and helpful as they can: if a study informs the decision on the clinical safety of the drug, and that study can be rendered in SEND, then those study data should be in SEND.

However, the text is littered with caveats and exceptions, and so appears both explicit and vague at the same time. For this reason, it encourages sponsors to enter into “discussion with the review division when there is any ambiguity on the SEND requirement…”. We have to appreciate that this is an effort to clarify the expectations, and though it seems to leave many questions unanswered, one thing is certain: they are encouraging the submission of SEND datasets.

So, does your study require SEND? That question just got a whole lot harder to answer, but certainly the answer is now more likely to be, “Yes.”

Till next time,

Marc

Why Define-XML files give me the Happy Mondays

During my formative years, there was a band from the north of England, not far from where I was growing up, called the Happy Mondays. Probably the most notable thing about them was one band member called ‘Bez’. His contribution was somewhere between cheerleading and performance art, as all he appeared to do was dance like a maniac while shaking maracas like his life depended on it. Nobody really knew why, yet he was a key member of the band. We couldn’t hear the maracas, so he wasn’t adding anything musically, but somehow the band would not have been the same without him.

Bez is the best metaphor I can think of to describe how most of our community views that strange little XML file that accompanies the SEND xpt files as part of the study package: the Define-XML file. It doesn’t contain data, just information about the data. We are not quite sure what it’s adding; we just know the package wouldn’t be complete without it.
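
For the curious, here’s roughly what ‘information about the data’ looks like: a few lines of Python that list the datasets a define.xml describes (a sketch assuming Define-XML 2.x, which builds on the ODM v1.3 namespace; element and attribute details vary by version):

```python
import xml.etree.ElementTree as ET

ODM = "{http://www.cdisc.org/ns/odm/v1.3}"  # ODM v1.3 namespace used by Define-XML 2.x

root = ET.parse("define.xml").getroot()
# Each ItemGroupDef holds the metadata for one dataset (TS, BW, LB, ...)
for igd in root.iter(f"{ODM}ItemGroupDef"):
    n_vars = len(igd.findall(f"{ODM}ItemRef"))
    print(igd.get("Name"), "-", igd.get("SASDatasetName"), f"({n_vars} variables)")
```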

I’ve heard it said that the FDA does not use the Define files. I’ve also heard it said that they do use them, and they are necessary for loading the data. I have seen FDA feedback provided to sponsors which remarks on the Define file. Still, I think there’s some confusion across the industry regarding what the FDA actually do with the Define file.

Most commercial SEND solutions produce a default Define file populated with some basic information, usually based on a generic template and often not particularly study-specific. The file follows the Define-XML standard, but it isn’t really adding any value. Usually, the intention is that such files are a starting point, and it’s assumed that the organization will complete them manually to tailor them to the individual study. From what I have seen, some organizations are doing exactly this, and manually editing the XML. However, some organizations are not, and instead simply supply the default file that has been auto-generated.

The three reasons why some organizations only supply the default file are quite clear:

  • They are unsure of the value of a well-formed Define file
  • They do not have the necessary tools to produce a well-formed Define file
  • They do not have the necessary expertise

This has been the way for some time now, but I’m starting to see a change in the tide here. As mentioned earlier, the FDA often include discussion of the quality of the Define file in their sponsor feedback. Also, tools will continue to be developed and improved to accommodate this.

The Define-XML standard is something separate from the SEND standard; it is the same for both clinical datasets (SDTM) and nonclinical (SEND). For this reason, this week I’ve been learning a little about how Define files are both produced and used in the clinical world. It appears that the clinical world has tapped into the value and purpose of the Define file, yet in nonclinical we often still view it in much the way my teenage self viewed that maraca-shaking curio.

Maybe in a future post, we can discuss some of the opportunities afforded by having well-formed Define files.

Till next time,

Marc

Can SEND datasets be fully compliant…but still wrong?

In a recent post, we discussed how there’s quite a bit of emphasis at the moment on ensuring SEND datasets are compliant with the SEND standard. Obviously, the main driver here is the activation of the FDA’s Technical Rejection Criteria, which will result in the agency automatically rejecting applications that do not meet the required criteria. We’ve also seen CDISC’s move into automated compliance checking with the initiation of the ‘CORE’ project. In fact, the whole topic of ‘compliance’ seems to be the biggest talking point in the world of SEND right now.

It should go without saying that no matter how compliant the datasets are, they also need to be a correct representation of the study data. However, it’s clear that some organizations are putting effort into compliance but forgetting the more basic principle of making sure the data are correct. I mean, there’s no point having beautifully rendered SEND datasets, with all variables populated and formatted in accordance with the standard, if the actual values are incorrect.

For clarification, I’m referring to occasions when a result in a SEND dataset doesn’t match the value in the study report. Typical issues include things like body weights being reported in SEND for dates long after the subject was terminated, or negative lab results that should never be negative. These are all examples of data which can be rendered in a compliant manner, but which are still just obviously ‘wrong’.
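
Both of those examples are mechanically detectable, even without the study report. Here’s a rough sketch of the kind of sanity check I mean (assuming pyreadstat for the XPT files, simplifying the date handling considerably, and using a hypothetical list of never-negative lab tests):

```python
import pyreadstat  # assumption: pyreadstat is used to read the XPT files

bw, _ = pyreadstat.read_xport("bw.xpt")
lb, _ = pyreadstat.read_xport("lb.xpt")
ds, _ = pyreadstat.read_xport("ds.xpt")

# Last disposition date per subject (DSSTDTC is ISO 8601, so string
# comparison works for fully populated dates)
last_dispo = (ds.groupby("USUBJID")["DSSTDTC"].max()
              .rename("DISPODTC").reset_index())

# Body weights recorded after the subject's disposition
bw = bw.merge(last_dispo, on="USUBJID", how="left")
ghost_weights = bw[bw["BWDTC"] > bw["DISPODTC"]]

# Lab results that should never be negative (hypothetical test code list)
never_negative = ["GLUC", "ALT", "AST"]
bad_labs = lb[lb["LBTESTCD"].isin(never_negative) & (lb["LBSTRESN"] < 0)]

print(f"{len(ghost_weights)} body weights after disposition, "
      f"{len(bad_labs)} impossible lab results")
```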

Some of us may struggle to believe that could really occur, but it’s surprisingly easy to see how it happens. Some organizations will collect data, complete the study, produce the PDF tables, review them, and correct any errors found in the tables directly. At some point later, the SEND datasets are produced from an export from the data collection system. At this point, any corrections made in the tables are not reflected in the data collection system, and therefore not in the SEND datasets.

Other practices and processes which don’t consider SEND from the outset can result in similar issues. For this reason, I continually quote what I consider to be one of the most important, and most often overlooked, statements in the FDA’s Technical Conformance Guide (Section 4.1.3.2 General Considerations): “The ideal time to implement SEND is prior to the conduct of the study as it is very important that the results presented in the accompanying study report be traceable back to the original data collected.”

So, the first issue is that without proper forethought, incorrect results can occur. The second issue then is that they may be difficult and expensive to detect.

To a large degree, automated tools can be developed to check conformance, particularly with the publication of CDISC conformance rules and FDA validation rules. However, checking that a result in a SEND dataset matches the corresponding PDF table is something that still requires a human touch.

So yes, we need to ensure conformance to the standard, but how much more important is it to ensure that results themselves are correct?

Till next time,

Marc

Did you see the recent paper from the JPMA SEND Taskforce Team?

Okay, first – some context…

Without the FDA requiring SEND datasets, we would not have seen the industry-wide adoption and implementation of the standard. The change made by the industry continues to fascinate me, in terms of both speed and scale.

This drive for submission provides us with a well-defined standard, and one that is well suited to single study review. However, it is becoming increasingly apparent that there are some shortcomings for cross-study analysis and data mining. The reason for this is that, while SEND allows for accurate representation of a study’s results in electronic, machine-readable form, it also allows for significant variability from study to study.

Now that I have set the scene, I’d like to discuss a recent paper by the Japan Pharmaceutical Manufacturers Association (JPMA) SEND Taskforce Team. It describes their analysis of multiple SEND packages from a variety of suppliers, and details key areas of variability in how data are represented from study to study. If data mining and cross-study analysis get you as giddy as a kiddy on Christmas morning, then I’d highly recommend that you take a deep dive into the paper for yourself. It contains very detailed results, calling out things like the specific variables that are most prone to variation between providers.

One of the key areas that the paper discusses is the scope and application of SEND Controlled Terminology (CT). It will not come as a surprise to anyone who is routinely working with SEND datasets that many key variables do not have CT defined for them; they allow for a free text description. The paper calls out many examples, including Clinical Signs, where even variables like the test name and the severity are not controlled.

Stepping away from Clinical Signs, the discussion on CT reminded me of work being conducted by PHUSE regarding the lack of CT for the Vehicle being used on the study. While, for single study analysis, a free text description is perfectly adequate, when it comes to data mining, the lack of CT proves problematic. For this specific issue, PHUSE are recommending a particular structure, format and nomenclature be used to describe the vehicle.

Such recommendations, to enforce supplementary rules and standardization – essentially, further CT in addition to the regular SEND CT – add complexity to the creation of SEND datasets. That complexity will then increase the time and cost to produce SEND datasets. That discussion will open up another debate, which I’ll leave for a different day.

Suffice to say, SEND provides an accurate representation of a study’s results in electronic form, well suited to single study review. There are shortcomings relating to multi-study usage, but these can be overcome. The JPMA paper does a very good job of calling out the issues to address.

As usual, drop me a note if you’d like to discuss this further.

Till next time,

Marc

There’s a theme developing here

Human beings have an inherent ability to see patterns in everyday objects, like recognizing shapes and faces in clouds. While that might seem ridiculous, pattern recognition is vital for us. Without it, we would not be able to do things as varied as recognizing faces, finding the answers in a word search, or even appreciating music. However, this week no advanced pattern recognition skills were needed to see a common thread emerging.

My last blog post discussed the details of the FDA’s Technical Rejection Criteria (TRC). With a little over one month to go (September 15, 2021) before the agency activates the TRC and begins rejecting submissions, it’s unsurprising that it’s proven to be my most popular post. Since it went live two weeks ago, there’s been a Federal Register Notice to confirm the September effective date for the TRC.

In addition, this week we’ve seen CDISC launch the CORE project. This is a joint CDISC and Microsoft project to automate the application of the Conformance Rules. It launched with an hour-and-a-half presentation that introduced the vision and timeline for an exciting new technology that will automatically check for conformance to CDISC standards. CDISC started by creating standards, including SEND, and then added a written set of rules to ensure conformance to those standards. However, different tools may interpret the standard, and even those rules, in subtly or possibly dramatically different ways. Regular readers of my blog will be well aware of my opinion on the difficulties caused by having a flexible, subjective standard that is open to interpretation. CORE sees CDISC take back ownership of the application of the rules, to ensure that conformance is anything but subjective. Into this project they are also adding the execution of non-CDISC rules, including the FDA Business Rules. CORE is something I’ll be watching and reporting back on in my blog as it takes shape.

This past week also saw CDISC publish SEND Conformance Rules v4.0 for evaluating the conformance of SENDIG-DART v1.1, SENDIG v3.0, SENDIG v3.1, and SENDIG v3.1.1 datasets.

Throughout the week, I’ve been poring over the FDA’s feedback on a few different studies, in order to assist some of our customers. When the FDA receive a SEND study, they will respond privately to the sponsor with feedback about their SEND datasets. This private feedback is one of the main methods employed by the FDA to continually improve the quality of SEND datasets being produced.

At first, these things – the TRC, CDISC CORE, the new SEND Conformance Rules and the act of reviewing FDA feedback for several studies – may seem like a disparate set of unrelated topics. However, standing back and reflecting at the end of the week, a clear pattern starts to emerge. Independently, and seemingly coincidentally, the quality of SEND datasets is being questioned and the bar is being raised. To that end, technology is being developed to ensure that we are all compliant, and that compliance is no longer a matter of opinion.

Till next time,

Marc

How to avoid rejection

It’s a basic human need: We want our work accepted and valued. Nobody wants to see their work rejected. It’s so obvious, it almost goes without saying. Worse still would be being rejected by a cold, heartless, automated system. That would be soul-destroying. However, with the FDA’s latest deadline, automated rejections will start to occur.

In this post I’d like to discuss how to avoid rejection of your electronic study data – especially since we are less than two months (September 15, 2021) away from the enforcement of the Technical Rejection Criteria (TRC).

Up to now, failure of the TRC has not resulted in rejection; instead, the FDA have simply issued warnings alerting the submitting sponsor to the issue. At the FDA’s recent SBIA (Small Business Industry Assistance) webcast, the agency presented data showing over 200 studies had been issued with such warnings in a single month (March 15, 2021, to April 17, 2021). From September, all such studies will result in a rejection instead of a warning.

I’ve included a link to the TRC at the end of this post and at first look, the rules can seem quite complicated. There’s mention of a TS domain, certain eCTD sections where the TRC does not apply, discussion about the study tagging file and so on.

At Instem, we are well versed in helping our customers navigate the intricacies of the TRC, and for certain studies this can get quite complicated. However, in its simplest form, the principle is first to check whether the eCTD section is applicable, and then to check the study start date.

Certain sections of the eCTD are exempt from the TRC. These sections are listed in the TRC, so we can easily check whether a particular study would be exempt. If the study is to be placed in an applicable section, then the next things to consider are the study start date, submission type and study type.

For an IND submission:

  • Single Dose Toxicity, Repeat Dose Toxicity, and Carcinogenicity Studies starting after December 16, 2017 require a full SEND package
  • Cardiovascular and Respiratory Safety Pharmacology Studies starting after March 15, 2020 also require a full SEND package.

The NDA/BLA requirement is exactly the same as IND, except in each case it is a year earlier, so December 16, 2016, for Repeat Dose Toxicity, for example.
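
For illustration, the date logic boils down to something like the sketch below (my own encoding of the published dates; the TRC document linked at the end of this post is the authority):

```python
from datetime import date

# Study-start cut-off dates from the TRC, keyed by submission and study type
SEND_CUTOFFS = {
    ("IND",     "tox_or_carcinogenicity"): date(2017, 12, 16),
    ("IND",     "cv_resp_safety_pharm"):   date(2020, 3, 15),
    ("NDA/BLA", "tox_or_carcinogenicity"): date(2016, 12, 16),
    ("NDA/BLA", "cv_resp_safety_pharm"):   date(2019, 3, 15),
}

def full_send_package_required(submission: str, study_type: str,
                               study_start: date) -> bool:
    """Sketch only: assumes the study sits in a TRC-applicable eCTD section."""
    cutoff = SEND_CUTOFFS.get((submission, study_type))
    return cutoff is not None and study_start > cutoff

# e.g. a repeat-dose toxicity study for an IND started mid-2018
print(full_send_package_required("IND", "tox_or_carcinogenicity",
                                 date(2018, 6, 1)))  # True
```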

However, even if the study starts before these dates, it is still not fully exempt if it falls in an applicable eCTD section. In that case, a ‘Simplified ts.xpt’ file is still required. This is an electronic data file stating the Study ID and the start date. While not a SEND file itself, its format generally follows the SEND standard for the TS domain, albeit with just a single record.
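
As a sketch of what such a file contains, here’s one way it might be built (assuming pandas plus pyreadstat’s XPORT writer; the study id and date are hypothetical, and the exact content expected is defined in the TRC documentation):

```python
import pandas as pd
import pyreadstat  # assumption: pyreadstat can write SAS XPORT v5 files

# A single TS record carrying the study identifier and its start date
ts = pd.DataFrame([{
    "STUDYID":  "AB-1234",     # hypothetical study id - must match the STF
    "DOMAIN":   "TS",
    "TSSEQ":    1,
    "TSPARMCD": "SSTDTC",      # parameter code for Study Start Date
    "TSVAL":    "2015-06-01",  # ISO 8601 study start date
}])

pyreadstat.write_xport(ts, "ts.xpt", table_name="TS", file_format_version=5)
```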

As I said when I started this post, nobody likes being rejected, so it’s vital that sponsors understand the TRC requirement as it becomes enforced in September. The TRC is just the first step in the process employed by the agency in ensuring they are receiving good quality, usable SEND datasets. Maybe in a future post we can discuss some of the further measures they employ.

Till next time,

Marc

Access the full TRC here: https://www.fda.gov/media/100743/download