The symbiotic relationship between the SEND Implementation Guide and the FDA’s Technical Conformance Guide

In theory, the relationship between CDISC’s SEND Implementation Guide (IG) and the FDA’s study data Technical Conformance Guide (TCG) is pretty straightforward: CDISC defines the IG and then the TCG communicates “general considerations on how to submit standardized study data using FDA-supported data standards located in the FDA Data Standards Catalog” (section 1.1).

The IG states how to standardize data and the TCG talks about the submission requirements for those standardized data. So, the IG leads with the standardization and the TCG follows with the submission requirement. However, recently it feels like things are working the other way around, with IG development following the direction laid out by the TCG.

Let’s talk about a few specific examples. In SEND 4.0 the BG (bodyweight gain) domain is being removed as a direct consequence of the TCG stating “It is not necessary to include a BG domain in submissions”.

In 2017, the agency added a section describing the specific variable population for PC (pharmacokinetic concentrations) needed to allow the agency to produce their visualizations and perform their analysis. This was to overcome an issue identified after the SEND 3.1 Fit-For-Use pilot, where the agency struggled with these data. This then led to the development of SEND 3.1.1, which brought the PC domain in line with the recommendation in the TCG. This is why the FDA’s Data Standards Catalog currently allows either SEND 3.1 or SEND 3.1.1 to be used for submission: when SEND 3.1 follows the TCG, there should be no real difference between the two versions.

Then in September 2021, the agency added the Scope of SEND text, which stated that certain juvenile studies were required for submission, despite no examples or guidance having been published for such studies. This led the CDISC DART team to pivot away from our planned work and quickly produce SENDIG-DART v1.2 to include such examples and guidance. These examples were required because juvenile studies require data to be analyzed by age rather than by study day or by the number of days exposed.

Most recently, a new TCG was issued which further widened the scope of SEND for DART to include combination studies: studies where only a certain period of the study can be represented in the current SEND modeling. Again, no examples or guidance currently exist to describe how to handle such studies. So again, the CDISC DART team has pivoted, and we are working on quickly producing the necessary examples and guidance. This time, we are not looking to publish a new version of the IG with this information but are looking for a more immediate publication method.

I started this post with the theory that the IG leads and the TCG follows, but here we’ve looked at specific examples where the IG is playing catchup to the TCG. I guess that just shows that the relationship between the SEND Implementation Guide and the FDA’s Technical Conformance Guide has become more symbiotic than we previously anticipated.

‘til next time


Can we make cheaper, faster SEND without compromising quality?

Recently, I asked one of our Senior Information Scientists, “If a customer asked you what they could do to make data easier to convert to SEND, what would you say?” Honestly, I was expecting maybe two or three bullet points of slightly vague suggestions. What I got was about three pages of very specific and detailed recommendations. Oh, and he barely had to think about it. Didn’t need time to go away and contemplate, just rattled off the list immediately.

Before I share some of those details, let me first provide a little context. Our customers are a mixture from all sides of the industry: both sponsors and CROs; from very small to very large; from experienced in SEND to complete SEND newbies; from those with a strong SEND capability using us for their overflow, to those who have no in-house SEND capability. Whoever they are, they are supplying us data and receiving back a submission-ready SEND package.

However, one commonality is that we would all like more efficient SEND production, without compromising quality. To that end, we are committed to continual process improvement, tool improvement and we invest heavily in training and staff development, doing what we can with the things that are within our control. But what about the things outside of our control, namely the data supplied?

And this was the point of my question. What could our customers do to help themselves? What can they control that could help us make SEND faster and cheaper, without negatively impacting quality?

Let’s start with the PDF study report itself. Whether we are extracting the data directly out of the tables in the PDF or converting data from a system and then comparing back to the study report, we are constantly working with the PDF study report. Often, we run into issues with the way the PDF has been created. Without things like present and correct bookmarks and hyperlinks, the process really slows. The same is true if there are text sections or tables that are not fully searchable. Oh, and page orientation can be a big deal too. Imagine QC’ing a SEND dataset and comparing results back to the PDF tables when some are landscape and some are portrait, but the PDF has been rendered in such a way that it doesn’t orientate correctly on screen. Such a simple thing, but it really impacts the efficiency of the QC process.

These were just the first few items that related to the PDF, but let’s quickly mention some of the issues we see when converting data from a data collection system or Excel file. You’d be surprised how often we still see modifiers and other important information captured simply as comments. Comments that would need to be read by a conversion consultant and then parsed out to ensure that the correct piece of the data was placed in the correct variable. For each result. For each subject. This is a common scenario, particularly in relation to: chronicity, specimen condition, BLQ values or laterality.
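To make the parsing problem concrete, here is a minimal sketch of what pulling a modifier out of a free-text comment involves. The comment patterns, field names and function are all hypothetical, invented for illustration; real comments vary widely between collection systems, which is exactly why a consultant has to read them.

```python
import re

def parse_comment(result, comment):
    """Split a free-text comment into structured SEND-style fields.

    Returns the original result plus any modifiers recovered from the
    comment. The patterns below are illustrative only; real-world
    comments are far less predictable.
    """
    parsed = {"ORRES": result, "LAT": None, "BLQ": False}
    if comment:
        text = comment.strip().lower()
        # Below-limit-of-quantitation flags often appear as 'BLQ' or '<LOQ'
        if re.search(r"\bblq\b|<\s*loq", text):
            parsed["BLQ"] = True
        # Laterality is frequently buried in a comment rather than a field
        lat = re.search(r"\b(left|right|bilateral)\b", text)
        if lat:
            parsed["LAT"] = lat.group(1).upper()
    return parsed

print(parse_comment("0", "BLQ"))            # recovers the BLQ modifier
print(parse_comment("2.3", "left kidney"))  # recovers laterality
```

Multiply this by every result for every subject, and for comment styles no regular expression anticipates, and the cost of capturing modifiers as free text becomes clear.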

Again, this was just one issue from a list of things he told me. So, for anyone working with a SEND provider, with a desire to have SEND produced faster and cheaper, then I’d highly recommend talking to your SEND partner and seeing what recommendations they have for improving efficiency. Often small tweaks to data collection can have a big impact downstream.

I’d like to thank my friend, Sagar Hadawale, for his help with this particular post.

‘til next time


Latest updates on SEND v4.0

This week’s blog will cut straight to the chase: the scope of SEND 4.0 has significantly increased of late, and so the timeline has been impacted too. In the next update to the main Implementation Guide, we are getting 8 new domains:

  • Cell Phenotyping (CP)
  • Immunogenicity Specimen Assessments (IS)
  • Ophthalmic Examinations (OE)
  • Pharmacokinetic Input (PI)
  • Scoring Scales (SX)
  • Nervous System Test Results (NV)
  • Skin Test Results (SK)
  • Genetic Toxicology – In Vivo (GV)

Since my introduction to SEND back in the SEND 3.0 days, I don’t think we’ve ever introduced that many new domains. SENDIG-DART 1.0 came close with 7 new domains, but this sets a new record. Also, I’m taken aback by the breadth here. Unlike SEND 3.1, whose new domains were all focused on safety pharmacology, this new version is hitting multiple targets.

In addition to the domains, there are 16 new variables, as well as a change that promotes many of our nonstandard variables to standard variables. Many will be glad of this last point, as it should significantly reduce our reliance on supplemental domains.

Another significant change that’s worth mentioning is that Microscopic Findings (MI) will see new variables and an increased scope with additional tests to cover Sexual Maturity, Reproductive Cycle Phase, Targeted Staining and Macroscopic Examination Follow-up. Also, the Tumor Findings domain is being retired and so for carcinogenicity studies, the relevant data such as the time to detection, will now reside in MI.

Oh, and we should finally see the end of Bodyweight Gains in SEND as the BG domain is also to be retired.

Much of the above has been known for some time; however, at the recent FDA public meeting, the changes for nervous system and skin tests were formally announced along with the inclusion of in vivo genetic toxicology. The addition means a further delay to the timeline, which is now estimated to be published on 1st April 2025. I’m sure the fact that this is April Fools’ Day is purely a coincidence!

At this point it seems pretty pointless to estimate when it might become a submission requirement, except to say that we have several years from now to get to grips with the significant implementation burden that this will bring.

The guide is due to go out for public review this coming August, and I’d highly recommend that any regular reader of this blog set aside the time to go through this new implementation guide. It will be our first real opportunity to start to assess the impact on our individual organizations.

Does it mean more studies are going to come into scope for SEND? Yes, but it also means some significant changes to those studies already in scope. We are going to need new tools and processes. It’ll be a little scary for a while and I’m sure there’ll be some bumps in the road, but this is going to be a huge improvement to SEND and another step forward in getting more nonclinical data into a standardized format.

‘til next time


What is the impact of the new FDA Technical Conformance Guide?

There’s been some significant news since my last blog post. Firstly, the FDA released a new version of the study data Technical Conformance Guide (TCG), and then we had the combined CDISC and FDA public webcast where some major changes to SEND 4.0 were announced.

The biggest update to the TCG was the addition of a section describing the scope of SEND for SENDIG-DART for CDER submissions. At the public webcast, the FDA then gave a presentation going through this update in more detail. The new section is very similar to the existing section for SEND 3.1/3.1.1. The usual verbiage is present regarding the GLP status, the drug substance, etc. and how such factors do not impact whether or not SEND is required. No surprises there.

The interesting text comes a little later. It starts with a very specific statement that preliminary studies (smaller-scale studies) used in lieu of a regular study for assessing clinical safety are in scope. This makes perfect sense and is hardly unexpected.

Studies which are used to determine dose levels or dose schedules for a SENDIG-DART study are not required in SEND, the reasoning being that such studies do not impact clinical safety. It’s great to see this explicitly stated.

The surprises then come in the next sections:

Combo studies are in scope. Combo fertility/EFD and combo EFD/PPND studies are required in SEND, at least for the EFD portion. I’m calling this a surprise, but is it really? The equivalent section for SEND 3.1/3.1.1 has always described the situation, “When general toxicity studies incorporate other study types…”; however, the examples provided are ones where both study types are already in scope for SEND (it gives the example of general tox combined with cardiovascular). Still, the implication is that the principle can be applied whenever two study types are combined and the general tox study design is in scope: put in SEND what can be put in SEND and document the rest.

So now we have this principle applied to SENDIG-DART. However, for such studies, this is potentially more troubling. It’s not a case of simply omitting certain endpoints. We are potentially talking about omitting entire phases of the study. While this is quite possible to achieve, what concerns me is that this could be interpreted in different ways, resulting in inconsistent representation of similar studies from one organization to the next. For this reason, I believe that we need good guidance and examples to be produced as soon as possible.

The TCG also specifically called out enhanced PPND studies. These are studies where many of the same endpoints could be collected via the use of something like an ultrasound. The agency is stating that such studies are not required in SEND at this point. However, they do state “voluntary submission of these datasets is encouraged”. They are asking for SEND, but not requiring SEND. Does this mean they will make them required in the future? Maybe.

One thing that I don’t think is stated in the TCG, but was called out in the webcast, is that SENDIG-DART studies starting prior to the 15th of March 2023 currently do not require a simplified ts.xpt. Obviously, this is a significant difference to SEND 3.0/3.1/3.1.1 studies. The reason is that the technical rejection for SENDIG-DART doesn’t exist yet. The implication here is that this situation may change in future.
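For readers less familiar with the simplified ts.xpt, it is, as I understand it, a single-record trial summary carrying little more than the study start date (TSPARMCD = “SSTDTC”), used by the agency’s technical rejection checks. Here is a rough sketch of that content; the study identifier and date are invented, and producing the actual SAS transport file would require a library such as pyreadstat or SAS itself:

```python
# A simplified TS domain is essentially one record identifying the
# study and its start date. All values below are invented examples.
ts_record = {
    "STUDYID": "STUDY-001",  # hypothetical study identifier
    "DOMAIN": "TS",
    "TSSEQ": 1,
    "TSPARMCD": "SSTDTC",    # parameter code: study start date
    "TSVAL": "2023-03-15",   # study start date in ISO 8601 format
}

# Column order as it would appear in the transport file
columns = ["STUDYID", "DOMAIN", "TSSEQ", "TSPARMCD", "TSVAL"]
row = [ts_record[c] for c in columns]
print(row)
```

The point of such a small file is simply to let automated checks date the study and decide which submission requirements apply.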

My 500 words are up and we didn’t get time to talk about the changes to SEND 4.0. Come back next time where I’ll discuss those.

‘til next time


It’s Sensible SEND Live! this week

I am not posting the usual written blog this week. Instead, we are having a ‘Sensible SEND Live!’ event on Thursday 13th April, at:

  • 3:00pm UK
  • 10:00am Eastern
  • 4:00pm Central Europe

If you would like to attend, but have not registered already, please use this link:

The event will be a live blog style presentation followed by a time for Q&A so please come with some questions and comments regarding burning topics in SEND!

’til then


SEND @SOT 2023: Same, but different

After my 4 years away, I finally got to attend a Society of Toxicology (SOT) meeting in person again, and while much of it felt familiar, I was surprised by just how different some things were: how much our industry has moved on in those 4 COVID-impacted years.

The event felt like it was numerically back to the pre-COVID attendance. I don’t know if it really was, but it certainly felt busy. It was really nice catching up with many people who I’d only seen online during that time. And it just felt nice to be back. Familiarity implied normality, and that was certainly comforting after what the world had just been through.

However, there were some significant differences too. From a SEND perspective, I think this was my first SOT in almost a decade where someone didn’t approach me and start a conversation with the line “I’ve got a study, does it need to be in SEND?”. Today, there’s a general acceptance that studies require SEND. That’s different.

Another difference is the audience for SEND. Over the past decade, my organization, amongst many others, has spent a great deal of effort educating industry on the SEND standard. I’d felt we’d got to a level in the past few years where we didn’t need to explain what it is, but rather focus on the value and opportunities for SEND. With that in mind, while presenting ‘The life of a SEND dataset’ at this year’s event I deliberately did not, at any point, use the phrase ‘Standard for the Exchange of Nonclinical Data’. I just felt we all knew this now and re-stating it again would be a little patronizing.

It turns out I was wrong. It turns out that SEND has now opened up to a wider audience. It turns out that this new audience may be more accepting that SEND is required but are still at the point of asking ‘and what is SEND?’. Whether they are new to the world of SEND due to working in biologics or DART, or for whatever other reason, there are now more SEND newbies grappling with the concepts many of us were first introduced to a decade ago.

Another difference this year was the increased interest in data sharing and translational science. Using SEND (amongst other formats) to share data and ultimately drive improvements in drug development. Topics like virtual control groups are generating more discussion than ever. There was also much talk of Artificial Intelligence and Machine Learning.

While such topics existed last time I attended SOT, I don’t remember them really impacting the program. If they were ever mentioned, it would only ever be in passing and they’d be greeted with a certain level of skepticism. This year felt different.

So, in many ways this year’s SOT had a welcome sense of familiarity and normality, yet I was pleasantly surprised by just how different it felt too. And on that note, I now need to go and prepare for my next SEND presentation: Sensible SEND Live!

I’m assuming you got your invite?

‘til next time


Is there really that much to say about SEND?

Back in 2020 when the idea of blogging about SEND was first suggested to me, my initial thought was “What is there to talk about?”. This is now my 61st bi-weekly post. If you have been with me along that journey, then you will have spotted a few recurring themes. I like discussing scope, future standards development and how much SEND has changed the nonclinical world. However, more than simply repeating myself, I always feel there is something new to say. Something new to add to the discussion, and I think that is largely driven by having the unique viewpoint of being a vendor.

At Instem we serve organizations large and small; CROs and sponsors; specialists and full-service providers; SEND experts and SEND newbies. Our customers are varied and cover all corners of the safety assessment world. Most CROs have one main data collection system, one well-curated lexicon and a fairly consistent set of SOPs, study designs and tests. Instem is not a CRO. We see a wide variety of study designs, data collection practices and the like. Therefore, we get to see a wide variety of data and each study can feel like a new challenge to wrestle the data into shape.

And that’s just the data we are converting to SEND for submissions. We also provide SEND education and consultancy as well as services like bulk conversions for warehousing and data mining. And aside from SEND services, we also supply software and tools to convert data, view SEND files, and generate Define-XML and the nonclinical Study Data Reviewer’s Guide. So again, when working with our customers’ software needs, we see a wide variety of data collection practices and related systems. We hear the different interpretations of SEND that are prevalent across our industry.

This blog post is not intended to sound like an advertisement for our products and services, rather to explain the unique viewpoint of being a vendor to our industry. Having passed the milestone of 60 blog posts, I think the reason why there’s always something more to discuss is because I’m often talking to a new customer or seeing a new presentation of data or hearing a new viewpoint which provokes a new blog post.

Back in 2020 I could never have imagined that there was so much to talk about but being a vendor keeps things fresh. Also, being a CDISC volunteer keeps me on the leading edge of standards development, and again that often inspires a blog post or two. So, when it comes to SEND “What is there to talk about?”, actually quite a lot.

For anyone attending the Society of Toxicology in Nashville next week, I’ll be co-presenting on the life of a SEND dataset, which includes the challenges and best practices of preparing SEND, and the opportunities that electronic data provide beyond submission. If you are attending, please stop by and say ‘Hi’.

’til next time


Who uses the CoDEx document anyway?

Recently the question was asked “Are bone length measurements in scope for SEND?”, which leads to another interesting question: “Who gets to decide what is in scope for SEND?”. There are two parts to this. Firstly, which data were intended to be covered by the various CDISC documents, and secondly, which data do FDA require in SEND?

I often reference the ‘Scope of SEND’ section in the FDA’s Technical Conformance Guide (TCG). As the title suggests, in this section the FDA state, as explicitly as they can, the breadth of studies they consider in scope and therefore required for submission. So, the FDA can require whatever they choose. However, this section is mainly dealing with the study as a whole. We are not down in the weeds, mulling over individual endpoints. To dig down that deep, we need to consult the CoDEx.

I know, ‘CoDEx’. It’s not the most intuitive name for a document, is it? I’ll be honest, whenever anyone asks me about it, I can never remember exactly what CoDEx stands for. I mean I know the terms Confidently Modelled, Exchange and Endpoints are in there somewhere, but I always seem to fluff my lines when trying to explain it. (The actual answer is “Confirmed Data Endpoints for Exchange”. I’m not sure why I keep thinking of the phrase ‘Confidently Modelled’.)

For those not familiar with the document, it is CDISC’s method of addressing the scope for SEND, down to the individual endpoints. Though not involved with authoring the document myself, I do remember how much care was given to the words chosen. While it was intended to define and communicate scope, it was also intended to not limit scope. By that I mean, if an endpoint was not listed as ‘in scope’ but could easily be represented in SEND, the document was not to imply that the endpoint shouldn’t be rendered in SEND. Consequently, the wording emphasized the idea that CDISC could confidently state that certain endpoints have been modelled, and therefore can be exchanged using SEND. But, just because CDISC have not confirmed that the endpoint can be exchanged in SEND, doesn’t mean it’s out of scope, and doesn’t tell us what the FDA may or may not consider as required.

So, how useful is the document, given so many caveats? Who uses it anyway? For the opening question that prompted this topic: “Are bone length measurements in scope for SEND?”, the CoDEx document states, “Although there are no examples in SENDIG, morphological measurement on specimens that result in numeric data can be modeled in this domain.”

We don’t have an example of exactly how to do this, but despite all the caveats, we can tell from CoDEx that the endpoints are in scope. In this case, the often-overlooked CoDEx was really quite helpful.

Were you aware of the CoDEx document? Have you found it helpful? Let me know your thoughts at

‘til next time


Are you ready for a tighter standard?

From my earliest introductions to SEND, one of the things that surprised me most was just how loose it was. It seemed that at every turn there were choices. The phrase “The sponsor chose…” occurs throughout the implementation guide when describing a particular representation of data. Several years ago, I was presenting at the Society of Toxicology meeting on this topic and I used the phrase “SEND feels like more of an art form than a standard”. I’d gone through the Implementation Guide and found over 50 instances where the guide implied some sort of choice as to how data could be represented.

When the SEND Implementation Guide was first written, it was intended to be flexible enough to accommodate the fact that data are collected differently in different labs and in different systems. Different organizations define and structure their glossaries and lexicons in different ways, and so the guide was written with adaptability in mind. There was a feeling that if the standard was too inflexible it would hamper adoption.

From what I have seen over the past few years, such flexibility has become its own hindrance. Adopters and implementers of the standard want clear and simple answers on how to represent data. The more subjective the reading of the Implementation Guide, the more time, effort and consideration need to be given.

It’s also worth remembering that the first versions of the Implementation Guide were written before anyone had ever thought of introducing CDISC conformance rules. The introduction of such rules highlighted the difficulty of having loose verbiage containing words such as “usually” or “typically”. I mean, if a variable is “Usually expressed as the number”, does that mean it should break a rule if it is not numeric?
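To make that ambiguity concrete, imagine writing a conformance check against that sentence: the rule author has to pick an interpretation. This sketch is a hypothetical check, not an actual CDISC rule; it takes one possible reading, treating ‘usually numeric’ as worth a warning rather than an error:

```python
def check_usually_numeric(values, variable="--STRESN"):
    """Flag non-numeric values in a variable the IG describes as
    'usually expressed as the number'. Because 'usually' is not
    'always', this reading emits warnings rather than errors --
    another rule author might legitimately decide otherwise."""
    findings = []
    for i, value in enumerate(values):
        try:
            float(value)  # accept anything parseable as a number
        except (TypeError, ValueError):
            findings.append(
                f"WARNING: {variable} record {i + 1}: "
                f"non-numeric value {value!r}"
            )
    return findings

print(check_usually_numeric(["1.2", "3", "BLQ"]))
```

Whether such a finding should be a warning, an error, or nothing at all is exactly the kind of question loose verbiage leaves open, and why the move to prescriptive language matters.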

Today, as work continues on the content and updates for SEND 4.0, there’s a definite change in approach. The tone of the narrative sections is changing. Loose, flexible language is giving way to more specific, prescriptive language. While mainly driven by the need to have text that drives conformance rules, it feels like there’s also a general appetite to allow less variability between SEND providers. To make study representation more uniform, and to give implementers much clearer direction.

To me, this slight change in direction is a really positive move. I think this approach will be warmly welcomed when SEND 4.0 is published. Not only will it be much easier to define conformance rules, but it should also remove much of the subjective interpretation. Sponsors using multiple CROs and SEND providers should see a reduction in the variability in data representation. All of which seems a really positive step forward to me.

‘til next time


How creative should we be with SEND?

Recently, more and more, I’m being asked my opinion about the scope of SEND and if a certain study or even a certain endpoint should be provided in SEND. Clearly, I’m referring to data which do not have an obvious example in the SEND IG. However, the lack of example cannot be the defining criteria as it would be impossible to include an example of every possible endpoint. So, the answer becomes more subjective and relies on experience and good judgement and for that reason, I find these very interesting questions. I love the discussion that they provoke.

Often, there is an unusual endpoint that was not considered when the existing domain was designed; however, it’s clear that the data easily fit. We still just have a test, a result value and its timing. To me, that becomes an easy answer to say “yes, such data should be represented in SEND”.
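To illustrate what ‘a test, a result value and its timing’ looks like in practice, here is a sketch of mapping an unusual endpoint into generic findings-style variables. The endpoint, test code, values and helper function are all invented for illustration, not taken from the IG:

```python
# Generic findings-style record: any endpoint with a test, a result
# and a timing can usually be expressed this way. Values are invented.
def to_findings_record(subject, testcd, test, result, unit, day):
    return {
        "USUBJID": subject,    # unique subject identifier
        "TESTCD": testcd,      # short test code
        "TEST": test,          # test name
        "ORRES": str(result),  # result as originally collected
        "ORRESU": unit,        # original result units
        "DY": day,             # study day of collection
    }

rec = to_findings_record("S001", "TAILLEN", "Tail Length", 4.2, "cm", 8)
print(rec)
```

When an endpoint decomposes this cleanly, the case for representing it in SEND is easy; the harder questions start when the data carry properties that have no natural variable.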

Then there’s times where a type of data almost fit. Maybe there’s an additional property of the data that doesn’t have a natural variable. We could ignore it or put it in a supplemental qualifier. There may even be an appropriate SDTM variable that already exists which we could add into our SEND domain. Then there’s times when there’s important data that just don’t fit in any of the existing domains. We could represent those data in a custom domain. Should we?

The Scope of SEND section in the FDA’s Technical Conformance Guide is our best reference point for such questions and I often summarize this as “if the data are important for determining the safety assessment, and the data can be represented in SEND, then they should be in SEND”. However, it’s the subjective nature of asking, “if the data can be represented in SEND,” that may be troublesome. Does that really mean going as far as creating a custom domain?

Not all organizations have the knowledge or tools to get as creative as including additional variables or using custom domains. In addition, consideration should be given to the FDA and if their tools would be able to consume and present the data for analysis. Simply being able to place the data somewhere in SEND doesn’t mean that their tools will be able to do anything useful with the data.

So, can a certain study or data type be represented in SEND? Most of the time they probably can, though maybe not easily, and how useful would it be? The more ‘creative’ we need to get to place data in SEND, then potentially the less useful it is.

This is the conversation I seem to be having more and more. I’d love to hear some feedback from the agency regarding things like custom domains. Do they have tools that could be used to analyze the data in such a domain? Would they expect to receive custom domains? Would they be useful in determining the safety of the compound? I’d really like to know.

To let me know your opinion on the matter, email me at

‘til next time