Is there really that much to say about SEND?

Back in 2020, when the idea of blogging about SEND was first suggested to me, my initial thought was “What is there to talk about?”. This is now my 61st bi-weekly post. If you have been with me along that journey, then you will have spotted a few recurring themes. I like discussing scope, future standards development and how much SEND has changed the nonclinical world. Yet rather than simply repeating myself, I always feel there is something new to say, something new to add to the discussion, and I think that is largely driven by having the unique viewpoint of being a vendor.

At Instem we serve organizations large and small; CROs and sponsors; specialists and full-service providers; SEND experts and SEND newbies. Our customers are varied and cover all corners of the safety assessment world. Most CROs have one main data collection system, one well-curated lexicon and a fairly consistent set of SOPs, study designs and tests. Instem is not a CRO. We see a wide variety of study designs, data collection practices and the like, so each study can feel like a fresh challenge as we wrestle the data into shape.

And that’s just the data we are converting to SEND for submissions. We also provide SEND education and consultancy, as well as services like bulk conversions for warehousing and data mining. Aside from SEND services, we supply software and tools to convert data, view SEND files, and generate Define-XML and the nonclinical Study Data Reviewers Guide. So again, when working on our customers’ software needs, we see a wide variety of data collection practices and related systems. We hear the different interpretations of SEND that are prevalent across our industry.

This blog post is not intended to sound like an advertisement for our products and services, but rather to explain the unique viewpoint of being a vendor to our industry. Having passed the milestone of 60 blog posts, I think the reason there’s always something more to discuss is that I’m often talking to a new customer, seeing a new presentation of data or hearing a new viewpoint, and that is what provokes a new blog post.

Back in 2020 I could never have imagined that there was so much to talk about, but being a vendor keeps things fresh. Also, being a CDISC volunteer keeps me on the leading edge of standards development, and that often inspires a blog post or two. So, when it comes to SEND and “What is there to talk about?”: actually, quite a lot.

For anyone attending the Society of Toxicology in Nashville next week, I’ll be co-presenting on the life of a SEND dataset, which includes the challenges and best practices of preparing SEND, and the opportunities that electronic data provide beyond submission. If you are attending, please stop by and say ‘Hi’.

‘til next time

Marc

Who uses the CoDEx document anyway?

Recently the question was asked, “Are bone length measurements in scope for SEND?”, which leads to another interesting question: “Who gets to decide what is in scope for SEND?”. There are two parts to this. Firstly, which data were intended to be covered by the various CDISC documents, and secondly, which data do the FDA require in SEND?

I often reference the ‘Scope of SEND’ section in the FDA’s Technical Conformance Guide (TCG), section 4.1.3.4. As the title suggests, in this section the FDA state, as explicitly as they can, the breadth of studies they consider in scope and therefore required for submission. So, the FDA can require whatever they choose. However, this section mainly deals with the study as a whole. We are not down in the weeds, mulling over individual endpoints. To dig down that deep, we need to consult the CoDEx.

I know, ‘CoDEx’. It’s not the most intuitive name for a document, is it? I’ll be honest, whenever anyone asks me about it, I can never remember exactly what CoDEx stands for. I mean, I know the terms ‘Confidently Modelled’, ‘Exchange’ and ‘Endpoints’ are in there somewhere, but I always seem to fluff my lines when trying to explain it. (The actual answer is “Confirmed Data Endpoints for Exchange”. I’m not sure why I keep thinking of the phrase ‘Confidently Modelled’.)

For those not familiar with the document, it is CDISC’s method of addressing the scope of SEND, down to the individual endpoints. Though not involved with authoring the document myself, I do remember how much care was given to the words chosen. While it was intended to define and communicate scope, it was also intended not to limit scope. By that I mean, if an endpoint was not listed as ‘in scope’ but could easily be represented in SEND, the document was not to imply that the endpoint shouldn’t be rendered in SEND. Consequently, the wording emphasized the idea that CDISC could confidently state that certain endpoints have been modelled, and therefore can be exchanged using SEND. But just because CDISC have not confirmed that an endpoint can be exchanged in SEND doesn’t mean it’s out of scope, and it doesn’t tell us what the FDA may or may not consider as required.

So, how useful is the document, given so many caveats? Who uses it anyway? For the opening question that prompted this topic, “Are bone length measurements in scope for SEND?”, the CoDEx document states, “Although there are no examples in SENDIG, morphological measurement on specimens that result in numeric data can be modeled in this domain.”

We don’t have an example of exactly how to do this, but despite all the caveats we can tell from CoDEx that the endpoints are in scope. In this case, the often-overlooked CoDEx was really quite helpful.
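For illustration, here’s roughly what such a record might look like in a findings-style layout. To be clear, this is my own sketch, not an official example: the domain choice, test code and every value are assumptions on my part, since the IG has no worked example for this endpoint.

```python
# A hypothetical bone length measurement laid out as a generic
# findings-style SEND record: a test, a numeric result and its timing.
# The domain, test names and values are illustrative assumptions only.
bone_length = {
    "STUDYID":  "STUDY001",
    "DOMAIN":   "OM",              # assumed home, e.g. Organ Measurements
    "USUBJID":  "STUDY001-0001",
    "OMSEQ":    1,
    "OMTESTCD": "LENGTH",
    "OMTEST":   "Length",
    "OMSPEC":   "BONE, FEMUR",     # the specimen that was measured
    "OMORRES":  "36.2",            # result as originally collected
    "OMORRESU": "mm",
    "OMSTRESN": 36.2,              # standardized numeric result
    "OMSTRESU": "mm",
    "OMDY":     91,                # study day of the measurement
}
```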

Were you aware of the CoDEx document? Have you found it helpful? Let me know your thoughts at marc.ellison@instem.com

‘til next time

Marc

Are you ready for a tighter standard?

From my earliest introductions to SEND, one of the things that surprised me most was just how loose it was. It seemed that at every turn there were choices. The phrase “The sponsor chose…” occurs throughout the Implementation Guide when describing a particular representation of data. Several years ago, I was presenting at the Society of Toxicology meeting on this topic and I used the phrase “SEND feels like more of an art form than a standard”. I’d gone through the Implementation Guide and found over 50 instances where the guide implied some sort of choice as to how data could be represented.

When the SEND Implementation Guide was first written, it was intended to be flexible enough to accommodate the fact that data are collected differently in different labs and in different systems. Different organizations define and structure their glossaries and lexicons in different ways, and so the guide was written with adaptability in mind. There was a feeling that if the standard was too inflexible it would hamper adoption.

From what I have seen over the past few years, such flexibility has become its own hindrance. Adopters and implementers of the standard want clear and simple answers on how to represent data. The more subjective the reading of the Implementation Guide, the more time, effort and consideration need to be given.

It’s also worth remembering that the first versions of the Implementation Guide were written before anyone had ever thought of introducing CDISC conformance rules. The introduction of such rules highlighted the difficulty of having loose verbiage containing words such as “usually” or “typically”. I mean, if a variable is “usually expressed as the number”, should a rule fire when it is not numeric?
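To make the problem concrete, here’s a little sketch of the dilemma (my own illustration, not an official CDISC rule engine): the most a rule can do with “usually numeric” is warn, because a perfectly legitimate result like “TRACE” would otherwise be flagged as an error.

```python
# A minimal sketch of why "usually expressed as the number" is hard to
# turn into a conformance rule. The record layout is an illustrative
# assumption, not an official rule implementation.

def check_numeric(records, variable):
    """Flag records where `variable` is not numeric.

    Because the IG only says the value is *usually* numeric, the most a
    rule can safely produce is a warning, never a hard error.
    """
    findings = []
    for i, record in enumerate(records):
        value = record.get(variable, "")
        try:
            float(value)
        except ValueError:
            findings.append(
                {"row": i, "variable": variable, "value": value,
                 "severity": "warning"}  # "usually" forbids an error here
            )
    return findings

# Example: two lab results, one numeric, one legitimately categorical.
records = [{"LBORRES": "5.2"}, {"LBORRES": "TRACE"}]
print(check_numeric(records, "LBORRES"))
```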

Today, as work continues on the content and updates for SEND 4.0, there’s a definite change in approach. The tone of the narrative sections is changing. Loose, flexible language is giving way to more specific, prescriptive language. While mainly driven by the need to have text that supports conformance rules, it feels like there’s also a general appetite to allow less variability between SEND providers, to make study representation more uniform, and to give implementers much clearer direction.

To me, this slight change in direction is a really positive move, and I think it will be warmly welcomed when SEND 4.0 is published. Not only will it be much easier to define conformance rules, but it should also remove much of the subjective interpretation. Sponsors using multiple CROs and SEND providers should see a reduction in the variability of data representation. All of which seems a genuine step forward to me.

‘til next time

Marc

How creative should we be with SEND?

Recently, more and more, I’m being asked my opinion about the scope of SEND and whether a certain study, or even a certain endpoint, should be provided in SEND. Clearly, I’m referring to data which do not have an obvious example in the SEND IG. However, the lack of an example cannot be the defining criterion, as it would be impossible to include an example of every possible endpoint. So, the answer becomes more subjective and relies on experience and good judgement; for that reason, I find these very interesting questions. I love the discussion that they provoke.

Often, there is an unusual endpoint that was not considered when the existing domain was designed; however, it’s clear that the data easily fit. We still just have a test, a result value and its timing. To me, that makes it easy to say “yes, such data should be represented in SEND”.

Then there are times when a type of data almost fits. Maybe there’s an additional property of the data that doesn’t have a natural variable. We could ignore it or put it in a supplemental qualifier. There may even be an appropriate SDTM variable that already exists which we could add into our SEND domain. And then there are times when important data just don’t fit in any of the existing domains. We could represent those data in a custom domain. Should we?
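As a quick illustration of the supplemental qualifier option, here’s a sketch of how an extra property might travel alongside its parent record. The SUPP-- variable set itself comes from the SDTM; the study, qualifier name and values are hypothetical.

```python
# A hypothetical supplemental qualifier record carrying a property
# ("which side was measured") that has no natural variable in the
# parent domain. QNAM/QLABEL/QVAL are invented for illustration.
supp_om = {
    "STUDYID":  "STUDY001",
    "RDOMAIN":  "OM",            # the parent domain of the finding
    "USUBJID":  "STUDY001-0001",
    "IDVAR":    "OMSEQ",         # points back to the parent record...
    "IDVARVAL": "1",             # ...with this sequence number
    "QNAM":     "SIDE",          # hypothetical qualifier name
    "QLABEL":   "Side of Body",
    "QVAL":     "LEFT",
    "QORIG":    "COLLECTED",
}
```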

The ‘Scope of SEND’ section in the FDA’s Technical Conformance Guide is our best reference point for such questions, and I often summarize it as “if the data are important for determining the safety assessment, and the data can be represented in SEND, then they should be in SEND”. However, it’s the subjective nature of asking whether “the data can be represented in SEND” that may be troublesome. Does that really mean going as far as creating a custom domain?

Not all organizations have the knowledge or tools to get as creative as including additional variables or using custom domains. In addition, consideration should be given to the FDA and whether their tools would be able to consume and present the data for analysis. Simply being able to place the data somewhere in SEND doesn’t mean that their tools will be able to do anything useful with them.

So, can a certain study or data type be represented in SEND? Most of the time it probably can, though maybe not easily. And how useful would the result be? The more ‘creative’ we need to get to place data in SEND, the less useful those data potentially become.

This is the conversation I seem to be having more and more. I’d love to hear some feedback from the agency regarding things like custom domains. Do they have tools that could be used to analyze the data in such a domain? Would they expect to receive custom domains? Would they be useful in determining the safety of the compound? I’d really like to know.

To let me know your opinion on the matter, email me at marc.ellison@instem.com

‘til next time

Marc

SEND in 2023

Being the first post of the year, I thought it would be good to consider what the year ahead looks like for the world of SEND.

The obvious place to start is the standards that are about to become required by the FDA for submission. March will see SEND 3.1 become required for submissions to CBER. This follows the successful pilot, and it’s great to see SEND being used by another FDA center. SEND 3.1.1 also becomes required for both CDER and CBER. While having two versions required at the same time may seem confusing, the agency clearly states in their Data Standards Catalogue that the sponsor can select either version, as both are supported. The interesting point here is that when SEND 3.1 follows the latest study data Technical Conformance Guide, it essentially represents the data as described in SEND 3.1.1, so the two are effectively the same. That said, the text and examples in version 3.1.1 for PC and PP data are far more useful, and so I always advise people to consult SEND 3.1.1 when populating the PC and PP domains.

March also sees SENDIG-DART become a requirement for CDER. Limited to embryo-fetal development studies, this has been a long time coming, but it’s great to finally see it become a requirement. Also, let us not forget that Define-XML 2.1 becomes required from March.

As well as new requirements, this year we’ll also see some new standards published by CDISC. We have SENDIG-GeneTox 1.0 for the representation of in vivo genetox data, and SENDIG-DART 1.2, which provides details and examples for representing juvenile toxicology studies. More than just expanding the scope to include juvenile studies, though, major sections of the guide relating to embryo-fetal development were heavily rewritten to clarify how data are expected to be represented. Additional examples were also provided for embryo-fetal development studies, so those producing SENDIG-DART 1.1 datasets for such studies would be well advised to consult version 1.2 for the improved examples and clarifications.

Following on from the ‘Scope of SEND’ section recently added to the study data Technical Conformance Guide, we expect to see a continued increase in the demand for SEND as organizations realize that more and more of their studies are required to be in SEND.

Another trend that we’ve begun to see, and that I think will expand further in 2023, is the desire to use the standardized data for warehousing, mining, sharing and various other uses that go beyond submission. Over the past year or two, I’ve seen the appetite for this type of usage grow, and while it is currently a bit of a niche interest (as most organizations are still just focusing on submission compliance), it’s an area that is gaining more interest and activity all the time.

However things end up playing out in the world of SEND, it certainly looks to be another interesting year ahead, with continued expansion of the scope, requirements and usage of the data.

‘til next time

Marc

So, was SEND Sensible in 2022?

Greetings of the season everyone! As this will be my last post of the year, I thought it would be interesting to look back at a few of the topics featured in the blog during 2022. Back in January I was hoping to see the publication of the SEND standards for juvenile toxicology (SENDIG-DART 1.2) and genetic toxicology (SENDIG-GT 1.0), as well as the addition of SEND 3.1.1 to the FDA’s Data Standards Catalogue. While SENDIG-DART 1.2 has been approved for publication, it will actually come out in early 2023, shortly followed by SENDIG-GT. However, SEND 3.1.1 did make the catalogue in time, resulting in a flurry of activity to prepare for the requirement in March 2023.

Throughout the year, I’ve often discussed the passion and enthusiasm that I, and many other ‘SEND nerds’, have for SEND. In the post “SEND: it’s not everyone’s cup of tea…” I also acknowledged that not all of my readers feel as strongly about this standard as some of us do.

In the post entitled “SEND Conformance Rules are NOT Boring!”, guest blogger Christy Kubin described how creating conformance rules allows her and her team to properly “nerd-out” on SEND. In the “Who Cares About Define Files Anyway?” post, another guest blogger, Charuta Bapat, shared her passion for the Define-XML standard.

As you’d expect, I’ve given updates and opinions on the development of SEND and provided feedback from various CDISC and FDA webcasts and conferences in my attempt to keep this community informed.

There certainly have been one or two slightly controversial posts along the way too. Most notably this year, in my post called “The Headache of Clinical Signs”, I talked about the amount of effort required to standardize Clinical Signs and dared to ask whether it was really worth it. I occasionally receive some correspondence following my posts, but this one provoked far more reaction than usual. A few months later, a softly spoken gentleman, in the most respectful manner, gently told me that he strongly disagreed with my post. It was great to then have a discussion about it.

Since starting this blog over two years ago, I’ve treated it more like an editorial: a vehicle of sorts for putting across my opinions and asking provocative questions. I love it when the blogs generate debate and discussion.

Late in the year, in “Nervous System Tests and SEND: What’s going on?”, we dug into another controversial topic when I raised the question of the missing NV domain for nervous system tests. Clearly, I was not alone; the topic was timely, as others have been asking similar questions and provoking similar discussions. Hopefully, early in 2023 there’ll be an update on that for us to discuss…

Other recurring themes that held significant value for me throughout the year were the scope of SEND and the quality of SEND, the latter being the focus of the very popular post “How Good is Good Enough?”.

So, we certainly have covered a lot of ground this year, and I look forward to 2023 when we’ll see even more developments in the world of SEND.

What has SEND meant to you during 2022? A challenge, an opportunity or a little of both? As always, I’d love to hear from you…

‘til next time

Marc

Nervous System Tests and SEND: what’s going on?

There have been rumors of a proposed Nervous System domain (NV) for many years, yet still nothing has been published and it’s not listed in the content for SEND 4.0. So, just what is going on?

At the recent joint FDA and CDISC SEND meeting, the full scope of SEND 4.0 was presented, and so in my last post I described that content (read it here) and mentioned my frustration that we still won’t have a dedicated domain for representing CNS data such as Functional Observation Battery (FOB) and Irwin tests.

Following on from that post, I thought it would be a good idea to discuss some of the checkered history of SEND and the elusive NV domain. The domain was originally created for inclusion in SEND 3.1, which was published back in 2016, but it didn’t make it in. It then came close to inclusion in SENDIG-DART 1.2 but again was pulled late in the process. SEND 4.0 is planned to be published in 2023, yet NV still didn’t make the cut.

At the moment, the intention is for it to be included in a dedicated Safety Pharmacology Implementation Guide (SENDIG-SP), due for publication in 2025. Assuming all goes to plan, that’s nearly a full decade later than originally intended when the domain was first written.

So, why has it been continually delayed? The main issue cited is disagreement on how to represent the full list of available values. Clearly, in order to make use of the data, the scores and values themselves are not sufficient; we need the context. We need to know what values were available for the assessment. Personally, I’ve always advocated that the Define-XML file should be used for this. Many don’t like using the Define file due to its incompatibility with Excel. There were suggestions for inclusion in the nonclinical Study Data Reviewer’s Guide (nSDRG), but the counter-arguments were overwhelming.
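For anyone wondering what that might look like, here’s a rough sketch of a scoring scale carried as a Define-XML style codelist. The gait scale and its values are invented for illustration, and this builds only the bare CodeList element, not a complete, valid Define-XML file (which needs the full ODM envelope and metadata).

```python
# A minimal sketch of a hypothetical scoring scale expressed as a
# Define-XML style codelist: each coded value is paired with its meaning,
# giving the reviewer the context of what values were available.
import xml.etree.ElementTree as ET

codelist = ET.Element("CodeList", OID="CL.GAIT", Name="Gait Score",
                      DataType="integer")
for value, meaning in [("0", "Normal"),
                       ("1", "Slightly abnormal"),
                       ("2", "Moderately abnormal"),
                       ("3", "Severely abnormal")]:
    item = ET.SubElement(codelist, "CodeListItem", CodedValue=value)
    decode = ET.SubElement(item, "Decode")
    text = ET.SubElement(decode, "TranslatedText")
    text.set("xml:lang", "en")
    text.text = meaning

print(ET.tostring(codelist, encoding="unicode"))
```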

So, some have driven ahead with a solution to have a dedicated domain in SEND called Scoring Scales (SX), housing lists of all available values for such assessments. NV was removed from SENDIG-DART 1.2 for juvenile studies because it needed the SX domain for the scoring scales. With SX now included in SEND 4.0, why do we still not have the NV domain?

The answer given is that nervous system tests are predominantly used in safety pharmacology, and as such belong in their own IG (SENDIG-SP) rather than the main SEND IG, which is now supposed to focus on general toxicology. So, “what about the domains for ECG, cardiovascular and respiratory?” I hear you ask. Good point. They will still be in SEND 4.0, but without NV.

Having raked over the history and issues, I come back to the statement in the FDA’s Technical Conformance Guide: “Overall, the expectation of SEND datasets for nonclinical studies is linked to the pharmacological and toxicological information required to provide FDA with the data needed to assess and support the safety of the proposed clinical investigations”, which then goes on to talk about how SEND is needed to “…assess the risks to human subjects…”.

As nervous system data can be vital to the safety assessment, shouldn’t we include them in SEND? And if we needed to do that without the NV domain, what other options do we have? Maybe that becomes the topic of a future blog.

I know feelings vary on this topic, so if you’d like to continue this conversation with me, just drop me a note at marc.ellison@instem.com

‘til next time

Marc

SEND 4.0 – Just how big is this change going to be?

It was going to be called SEND 3.2, but now it’s going to be called SEND 4.0. Why the change, and does it really matter? As you may have guessed, the change is due to the scope of the updates. The addition of several new domains and data types means this is seen as a more significant update to SEND than previously imagined. Not only that, but we will see the removal of a couple of domains too.

So, let’s break down the scope of changes for SEND 4.0.

We’ll start with the removal of a couple of domains. Firstly, Body Weight Gain (BG) is being dropped. With the currently available tools offering such flexibility in the presentation of the body weight data in BW, there’s been little need for BG for quite some time. The FDA’s Technical Conformance Guide was updated a while ago to state this, so it’s only natural that its days in the SEND IG are numbered.

The Tumor Findings (TF) domain is also going away, but this one is a little more complex. It is disappearing as a separate domain, but the crucial piece of data it holds, the time in days to detection of a tumor, is moving to Microscopic Findings (MIDETECT will replace TFDETECT). I think we can all agree that this is a simpler and more elegant solution.
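To picture the change, imagine a tumor detected on study day 92 (a purely hypothetical value): the number doesn’t change, only where it lives.

```python
# Hypothetical, simplified records: the detection time is unchanged,
# only the variable name and the domain carrying it differ.
tf_send_3_1 = {"DOMAIN": "TF", "TFDETECT": 92}  # SEND 3.1: separate TF domain
mi_send_4_0 = {"DOMAIN": "MI", "MIDETECT": 92}  # SEND 4.0: on the MI finding itself
```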

So, BG and TF are leaving us, but we are getting 5 new domains:

  • Pharmacokinetic Input (PI) is going to give us a place to present the data that were used as the input for TK analysis.
  • Scoring Scales (SX) will provide a domain to list out all available values for any test that uses a scoring scale. It paves the way for future nervous system data while, in this version, providing useful context for certain lab tests.
  • Cell Phenotyping (CP) and Immunogenicity Specimen Assessments (IS) are going to be introduced.
  • Also, there is a new domain for Ophthalmic Examinations (OE). It will take ophthalmology data that are currently presented in the Clinical Signs (CL) domain and give them a more appropriate home of their own.

After the publication of SEND 4.0, there will be subsequent guides for both safety pharmacology, and dermal and ocular studies. The OE and SX domains are both intended for use in those future implementation guides.

So, there we have an overview of many years of work by a vast number of volunteers, all described in around 500 words. I hardly feel that I’ve done it justice. As we near publication and the eventual regulatory requirement, I’ll dig into each of these in more detail and give you my own thoughts and opinions. Well, it wouldn’t be a blog without that, would it?

And on that note, I’ll leave you with my one controversial parting thought: having discussed what is ‘in’ SEND 4.0, it still pains me that the Nervous System (NV) domain didn’t make the cut. After all these years, we still will not have a proper home for our Functional Observation Battery (FOB) data. Maybe that particular rant can fill another blog some other time.

‘til next time

Marc

How good is good enough?

There’s an argument I occasionally hear that absolutely infuriates me. It’s usually some variation on the line, “The FDA doesn’t care about the quality of SEND, so why should I?” I know I’m paraphrasing, and probably stating it a little too bluntly, but the argument rests on the fact that the FDA’s Technical Rejection Criteria (TRC) sets the quality bar rather low, and so there’s an assumption that as long as the TRC is satisfied, that’s ‘good enough’.

As a committed long-term CDISC volunteer developing the SEND standard, I’m highly motivated by the fact that the agency uses SEND datasets to make critical safety decisions about experimental new drugs, and that this is the last checkpoint before those drugs are administered to the first human being. Therefore, it is vital that the SEND datasets are a complete and accurate representation of the study data.

The TRC requires little more than the inclusion of a demographics file accompanied by the correct study name and study start date in a Trial Summary file with an appropriate define.xml. Providing those data alone will not allow for the review and analysis required to determine the safety implications for the participants of clinical trials.
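To put that in perspective, here’s a sketch of roughly what that minimum amounts to. The study values are invented, and this is my simplified reading of the TRC rather than the full rule set.

```python
# A hypothetical, pared-down view of what the TRC checks for: a trial
# summary record carrying the study identifier and start date, plus the
# presence of the demographics dataset and a define.xml beside it.
ts_records = [
    {"STUDYID": "STUDY001", "DOMAIN": "TS", "TSSEQ": 1,
     "TSPARMCD": "STSTDTC", "TSPARM": "Study Start Date",
     "TSVAL": "2022-06-01"},
]

submission_files = ["ts.xpt", "dm.xpt", "define.xml"]  # the bare minimum named above
```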

This question about passing TRC was again raised in the recent FDA SBIA webcast:

Question: Once the data passes the Technical Rejection Criteria, can it be rejected later during review?

FDA Answer: …If the data comes in and the reviewer finds that it’s not clear enough for them to proceed with reviewing or doing analysis, that could still be rejected by the review division.

(The webcast is here and the quote is taken from 50:30 into the video)

The webcast then goes on to implore the listeners to provide “really good quality data” and to do “…whatever you can do to help reviewers to review the data [and conduct] analysis of the data.” Clearly the agency knows the importance of providing high-quality SEND datasets.

I know that some organizations still view SEND datasets as an unwelcome burden and expense. I know some would like to keep the cost and time of producing SEND to a minimum. I know that this question about quality and the TRC will continue to crop up occasionally, I’ll continue to get annoyed, and maybe I’ll continue to use this blog as a way of venting my frustration. However, I think that across the industry there needs to be an acknowledgement that the TRC is just the first automated gate. Passing the TRC is not an assurance of the quality or usability of the SEND datasets. That said, at the recent joint CDISC and FDA public webinar it was shocking to learn that over 500 studies submitted to the agency between 15th September 2021 and 15th September 2022 failed the TRC. Even such a low bar is a little too high for some.

As you can see, this is a subject I feel very strongly about and so I’d love to hear your feelings on the topic. As usual you can let me know your thoughts at marc.ellison@instem.com

‘til next time

Marc

Virtual Control Groups and more – Update from the CDISC Fall meeting

Last week was the Fall 2022 CDISC meeting. As usual, the highlight was the public joint presentation between CDISC and FDA. With each event, cross-study analysis has slowly become more of a focus. This time, there were multiple presentations referring to the topic and to the opportunity to use SEND beyond just FDA compliance.

The meeting began with a presentation on behalf of the Japan Pharmaceutical Manufacturers Association (JPMA). While many interesting points were raised, probably the most thought-provoking was the discussion around the use of virtual control groups. While this is an idea that has occasionally been mooted in nonclinical safety assessment circles, it was fascinating to see it stated in relation to SEND. JPMA said quite clearly that they understand this is a highly controversial suggestion. According to the presentation, the idea is partly driven by the 3Rs, but it was also noted that certain types of subjects are difficult to obtain. They also stated that “In clinical field, there is a movement to replace the placebo control group with historical control and/or real world data”. Carrying this principle into nonclinical will certainly provoke some interesting discussions!

Regarding the development and expansion of the SEND standard, the single biggest piece of news to drop from CDISC was that the next version of the SEND Implementation Guide will be called version 4.0, not version 3.2 as previously communicated. What is the significance of this? CDISC leadership said they wanted to signal that this release includes quite substantial changes. There will be 5 new domains available:

  • Pharmacokinetic Input (PI)
  • Scoring Scales (SX)
  • Cell Phenotyping (CP)
  • Immunogenicity Specimen Assessments (IS)
  • Ophthalmic Examinations (OE)

This version will also drive a new version of the underlying SDTM model as 18 new variables are being added across the guide.

In addition to SEND 4.0, work continues on both SENDIG-DART 1.2 for juvenile toxicology and SENDIG-GT 1.0 for genetic toxicology. Both standards have made good progress towards their 2023 publication dates. It’s also worth mentioning the good progress being made by the Tumor Combinations working group, which is creating, publishing and maintaining recommendations for combining tumors for analysis in nonclinical studies using SEND Controlled Terminology.

Finally, the event concluded with an excellent presentation from the BioCelerate – FDA CDER SEND Harmonization/Cross Study Analysis Working Group, who are looking to use SEND for cross-study analysis in relation to the drug target, specifically “Understanding off-target toxicity”. The presentation included several novel visualizations that they have developed specifically for this application of cross-study analysis.

While this joint presentation was the highlight of the week, significant progress was also made by all the CDISC SEND workstreams. I think many of those involved will agree that it was an exhausting but very productive week.

‘til next time

Marc