SEND in 2023

Being the first post of the year, I thought it would be good to consider what the year ahead looks like for the world of SEND.

The obvious place to start is looking at the standards that are going to become required by the FDA for submission. March will see SEND 3.1 become required for submissions to CBER. This follows the successful pilot and it’s great to see SEND being used by another FDA center. SEND 3.1.1 also becomes required for both CDER and CBER. While having two versions required at the same time may seem confusing, the agency clearly states in its Data Standards Catalogue that the sponsor can select either of these versions, as both are supported. The interesting point here is that when SEND 3.1 follows the latest Study Data Technical Conformance Guide, it essentially represents the data as described in SEND 3.1.1, so the two are effectively the same. That said, the text and examples in version 3.1.1 for PC and PP data are far more useful, so I always advise people to consult SEND 3.1.1 when populating the PC and PP domains.

March also sees SENDIG-DART become a requirement for CDER. Limited to embryo fetal development studies, this has been a long time coming but it’s great to finally see it become a requirement. Also, let us not forget that Define-XML 2.1 becomes required from March.

As well as new requirements, this year we’ll also see some new standards being published by CDISC. We have SENDIG-GeneTox 1.0 for the representation of in vivo genetox data, and SENDIG-DART 1.2, which provides details and examples for representing juvenile tox studies. However, more than just expanding the scope to include juvenile studies, major sections of the guide relating to embryo fetal development were heavily rewritten to provide clarity on how data are expected to be represented. Additional examples were also provided for embryo fetal development studies, so those producing SENDIG-DART 1.1 datasets for such studies would be well advised to consult version 1.2 for the improved examples and clarification.

Following on from the ‘Scope of SEND’ section recently added to the Study Data Technical Conformance Guide, we expect to see a continued increase in demand for SEND as organizations realize that more and more of their studies are required to be in SEND.

Another trend that we’ve begun to see (and that I think will expand further in 2023) is the desire to use the standardized data for warehousing, mining, sharing and various other uses that go beyond submission. Over the past year or two, I’ve seen the appetite grow for this type of usage, and while it is currently a bit of a niche interest (as most organizations are still just focusing on submission compliance), it’s an area that is gaining more interest and activity all the time.

However things end up playing out in the world of SEND, it certainly looks to be another interesting year ahead with continued expansion of scope, requirement and usage of the data.

‘til next time


So, was SEND Sensible in 2022?

Greetings of the season everyone! As this will be my last post of the year, I thought it would be interesting to look back at a few of the topics featured in the blog during 2022. Back in January I was hoping to see the publication of the SEND standards for juvenile toxicology (SENDIG-DART 1.2) and genetic toxicology (SENDIG-GT 1.0), as well as the addition of SEND 3.1.1 to the FDA’s data standards catalogue. While SENDIG-DART 1.2 has been approved for publication, it will actually come out in early 2023, shortly followed by SENDIG-GT. SEND 3.1.1, however, did make the catalogue in time, resulting in a flurry of activity preparing for the March 2023 requirement.

Throughout the year, I’ve often discussed the passion and enthusiasm that I, and many other ‘SEND nerds’, have for SEND. In the post “SEND: it’s not everyone’s cup of tea…” I also acknowledged that not all of my readers feel as strongly about this standard as some of us do.

In the post entitled “SEND Conformance Rules are NOT Boring!”, guest blogger Christy Kubin described how creating conformance rules allows her and her team to properly “nerd-out” on SEND. In the “Who Cares About Define Files Anyway?” post, another guest blogger, Charuta Bapat, shared her passion for the Define-XML standard.

As you’d expect, I’ve given updates and opinions on the development of SEND and provided feedback from various CDISC and FDA webcasts and conferences in my attempt to keep this community informed.

There certainly have been one or two slightly controversial posts along the way too. Most notably this year, in my post “The Headache of Clinical Signs”, I talked about the amount of effort required to standardize Clinical Signs and dared to ask whether it was really worth it. I occasionally receive some correspondence following my posts, but this one in particular provoked far more reaction than usual. A few months later, a softly spoken gentleman, in the most respectful manner, gently told me that he strongly disagreed with my post. It was great to then have a discussion about it.

Since starting this blog over 2 years ago, I’ve treated it more like an editorial. A vehicle of sorts for putting across my opinions and asking provocative questions. I love it when the blogs generate debate and discussion.

Late in the year in “Nervous System Tests and SEND: What’s going on?” we dug into another controversial topic when I raised the question of the missing NV domain for nervous system tests. Clearly, I was not alone, and the topic was timely as others have been asking similar questions and provoking similar discussions. Hopefully, early in 2023 there’ll be an update on that for us to discuss…

Other recurring themes that held significant value for me throughout the year were the scope of SEND and the quality of SEND, the latter being the focus of the very popular post “How Good is Good Enough?”

So, we certainly have covered a lot of ground this year, and I look forward to 2023 when we’ll see even more developments in the world of SEND.

What has SEND meant to you during 2022? A challenge, an opportunity or a little of both? As always, I’d love to hear from you…

‘til next time


Nervous System Tests and SEND: what’s going on?

There have been rumors of a proposed Nervous System domain (NV) for many years, yet still nothing has been published and it’s not listed in the content for SEND 4.0. So, just what is going on?

At the recent joint FDA and CDISC SEND meeting, the full scope of SEND 4.0 was presented, and so in my last post I described that content (read it here), and I mentioned my frustration that we still won’t have a dedicated domain for representing CNS data such as Functional Observation Battery (FOB) and Irwin tests.

Following on from that post, I thought it would be a good idea to discuss some of the checkered history of SEND and the elusive NV domain. The domain was originally drafted for inclusion in SEND 3.1, which was published back in 2016, but it was pulled before publication. It then came close to inclusion in SENDIG-DART 1.2 but was again dropped late in the process. SEND 4.0 is planned for publication in 2023, yet NV still didn’t make the cut.

At the moment, the intention is for it to be included in a dedicated Safety Pharmacology Implementation Guide (SENDIG-SP), due for publication in 2025. Assuming that all goes to plan, that’s nearly a full decade later than originally expected when it was written.

So, why has it been continually delayed? The main issue cited is disagreement on how to represent the full list of available values. Clearly, in order to make use of the data, the scores themselves are not sufficient; we need the context. We need to know what values were available for the assessment. Personally, I’ve always advocated that the Define-XML file should be used for this, but many dislike using the Define file because it can’t easily be opened in Excel. There were suggestions for inclusion in the nonclinical Study Data Reviewer’s Guide (nSDRG), but the counter-arguments were overwhelming.

So, some have driven ahead with a solution to have a dedicated domain in SEND called Scoring Scales (SX), housing lists of all available values for such assessments. NV was removed from SENDIG-DART 1.2 for juvenile studies because it needed the SX domain for the scoring scales. With SX now included in SEND 4.0, why do we still not have the NV domain?

The answer given is that nervous system tests are predominantly used in safety pharmacology and, as such, belong in their own IG (SENDIG-SP) rather than the main SEND IG, which is now supposed to focus just on general toxicology. So, “what about the domains for ECG, cardiovascular and respiratory?” I hear you ask. Good point. They will still be in SEND 4.0, but without NV.

Having dredged through the history and issues, I come back to the statement in the FDA’s Study Data Technical Conformance Guide: “Overall, the expectation of SEND datasets for nonclinical studies is linked to the pharmacological and toxicological information required to provide FDA with the data needed to assess and support the safety of the proposed clinical investigations”, which then goes on to talk about how SEND is needed to “…assess the risks to human subjects…”.

As nervous system data can be vital to the safety assessment, shouldn’t we include them in SEND? And if we need to do that without the NV domain, what other options do we have? Maybe that becomes the topic of a future blog.

I know feelings vary on this topic, so if you’d like to continue this conversation with me, just drop me a note at

‘til next time


SEND 4.0 – Just how big is this change going to be?

It was going to be called SEND 3.2 but now it’s going to be called SEND 4.0. Why the change, and does it really matter? As you may have guessed, the change is due to the scope of the updates. The addition of several new domains and data types means this is seen as a more significant update to SEND than previously imagined. Not only that, but we will see the removal of a couple of domains too.

So, let’s break down the scope of changes for SEND 4.0.

We’ll start with the removal of a couple of domains. Firstly, Body Weight Gain (BG) is being dropped. With the currently available tools offering such flexibility in the presentation of the body weight data in BW, there’s been little need for BG for quite some time. The FDA’s Technical Conformance Guide was updated a while ago to state this, so it’s only natural that its days in the SEND IG are numbered.

The Tumor Findings (TF) domain is also going away, but this one is a little more complex. It is disappearing as a separate domain, but the crucial piece of data it holds, the time in days to detection of a tumor, is moving to the Microscopic Findings (MI) domain, where MIDETECT will replace TFDETECT. I think we can all agree that this is a simpler and more elegant solution.
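To make the change concrete, here is a rough before-and-after sketch. The records below are invented and heavily trimmed for illustration; they are not taken from the published IGs.

```python
# Invented, trimmed records illustrating the planned move of the
# time-to-tumor-detection value from TF into MI. Real SEND records carry
# many more variables; only those relevant to the change are shown, and
# the values are made up.

# SEND 3.x: a separate Tumor Findings (TF) record holds the detection time
tf_record = {"DOMAIN": "TF", "USUBJID": "STUDY01-0001",
             "TFSTRESC": "ADENOMA", "TFDETECT": 372}   # days to detection

# SEND 4.0: the detection time rides on the microscopic finding itself
mi_record = {"DOMAIN": "MI", "USUBJID": "STUDY01-0001",
             "MISTRESC": "ADENOMA", "MIDETECT": 372}   # same value, new home
```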

So, BG and TF are leaving us, but we are getting 5 new domains:

  • Pharmacokinetic Input (PI) is going to give us a place to present the data that were used as the input for TK analysis.
  • Scoring Scales (SX) will provide a domain to list out all available values for any test that uses a scoring scale. It paves the way for future nervous system data while, in this version, providing useful context for certain lab tests.
  • Cell Phenotyping (CP) and Immunogenicity Specimen Assessments (IS) are going to be introduced.
  • Also, there is a new domain for Ophthalmic Examinations (OE). This OE domain will take ophthalmology data that are currently presented in the Clinical Signs (CL) domain, and instead give them a more appropriate home in their own domain.
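
The point of a Scoring Scales domain is that a raw score is meaningless without the full set of values it was chosen from. As a loose illustration of that idea (the structure and names below are my own invention, not the published SEND 4.0 domain definitions):

```python
# Hypothetical sketch of how a Scoring Scales (SX) style entry could give
# context to a scored result. Variable names and structure are illustrative
# guesses only, not the published SEND 4.0 definitions.

scoring_scale = {
    "test": "SALIVATION",
    "values": {0: "Absent", 1: "Slight", 2: "Moderate", 3: "Marked", 4: "Severe"},
}

finding = {"test": "SALIVATION", "score": 2}  # a single scored observation

def describe(finding, scale):
    """Resolve a raw score against the full list of values the scale allowed."""
    label = scale["values"][finding["score"]]
    allowed = ", ".join(f"{k}={v}" for k, v in sorted(scale["values"].items()))
    return f"{finding['test']} = {finding['score']} ({label}); scale: {allowed}"

summary = describe(finding, scoring_scale)
```

Without the scale entry, a reviewer seeing only "SALIVATION = 2" cannot tell whether 2 is mild on a five-point scale or severe on a three-point one; that is the context SX is meant to carry.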

After the publication of SEND 4.0, there will be subsequent guides for both safety pharmacology, and dermal and ocular studies. The OE and SX domains are both intended for use in those future implementation guides.

So, there we have an overview of many years of work by a vast number of volunteers, all described in around 500 words. I hardly feel that I’ve done it justice. As we near publication and eventual regulatory requirement, I’ll dig into each of these in more detail and give you my own thoughts and opinions. Well, it wouldn’t be a blog without that, would it?

And on that note, I’ll leave you with one controversial parting thought: having discussed what is ‘in’ SEND 4.0, it still pains me that the Nervous System (NV) domain didn’t make the cut. After all these years, we still will not have a proper home for our Functional Observation Battery (FOB) data. Maybe that particular rant can fill another blog some other time.

‘til next time


How good is good enough?

There’s an argument I occasionally hear that absolutely infuriates me. It’s usually some variation on the line, “The FDA doesn’t care about the quality of SEND, so why should I?” I know I’m paraphrasing, and probably stating this a little too bluntly, but it’s based on the fact that the FDA’s Technical Rejection Criteria (TRC) sets the quality bar rather low and so there’s an assumption that so long as the TRC is satisfied, that’s ‘good enough’.

As a committed long-term CDISC volunteer developing the SEND standard, I’m highly motivated by the fact that the agency uses SEND datasets to make critical safety decisions about experimental new drugs, and this is the last checkpoint before they are administered to the first human being. Therefore, it is vital that the SEND datasets are a complete and accurate representation of the study data.

The TRC requires little more than the inclusion of a demographics file accompanied by the correct study name and study start date in a Trial Summary file with an appropriate define.xml. Providing those data alone will not allow for the review and analysis required to determine the safety implications for the participants of clinical trials.
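For illustration, the kind of minimal gate described above could be sketched as follows. This is my own simplified caricature of the TRC, with assumed file and parameter names; it is not the agency’s actual validation logic.

```python
# Illustrative sketch only: a minimal pre-check in the spirit of the FDA's
# Technical Rejection Criteria (TRC). File names, parameter names and the
# logic itself are simplified assumptions for this post, not the agency's
# actual rules.

REQUIRED_FILES = {"dm.xpt", "ts.xpt", "define.xml"}
REQUIRED_TS_PARAMS = ("STUDYID", "STSTDTC")  # study identifier, study start date

def trc_precheck(files, ts_params):
    """Return a list of problems; an empty list means this minimal gate passes."""
    problems = [f"missing file: {f}" for f in sorted(REQUIRED_FILES - set(files))]
    for parm in REQUIRED_TS_PARAMS:
        if not ts_params.get(parm):
            problems.append(f"missing Trial Summary value: {parm}")
    return problems

# A submission can sail through this gate while still being unreviewable:
issues = trc_precheck({"dm.xpt", "ts.xpt", "define.xml"},
                      {"STUDYID": "STUDY-001", "STSTDTC": "2022-06-01"})
```

The point of the sketch is how little it checks: nothing about findings domains, completeness or accuracy ever enters the picture.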

This question about passing TRC was again raised in the recent FDA SBIA webcast:

Question: Once the data passes the Technical Rejection Criteria, can it be rejected later during review?

FDA Answer: …If the data comes in and the reviewer finds that it’s not clear enough for them to proceed with reviewing or doing analysis, that could still be rejected by the review division.

(The webcast is here and the quote is taken from 50:30 into the video)

The webcast then goes on to implore the listeners to provide “really good quality data” and to do “…whatever you can do to help reviewers to review the data [and conduct] analysis of the data.” Clearly the agency knows the importance of providing high-quality SEND datasets.

I know that some organizations still view SEND datasets as an unwelcome burden and expense. I know some would like to keep the cost and time of producing SEND to a minimum. I know that this question about quality and the TRC will continue to crop up occasionally, and I’ll continue to get annoyed, and maybe I’ll continue to use this blog as a way of venting my frustration. However, I think there needs to be an acknowledgement across the industry that the TRC is just the first automated gate: passing TRC is not an assurance of the quality or usability of the SEND datasets. And yet, at the recent joint CDISC and FDA public webinar, it was shocking to learn that over 500 studies submitted to the agency in the past year (between 15th September 2021 and 15th September 2022) failed the TRC. Even such a low bar is a little too high for some.

As you can see, this is a subject I feel very strongly about and so I’d love to hear your feelings on the topic. As usual you can let me know your thoughts at

‘til next time


Virtual Control Groups and more – Update from the CDISC Fall meeting

Last week was the Fall 2022 CDISC meeting. As usual, the highlight of the meeting was the public joint presentation between CDISC and FDA. With each event, cross study analysis has slowly become more of a focus. This time, there were multiple presentations referring to this topic and the opportunity to use SEND beyond just FDA compliance.

The meeting began with a presentation on behalf of the Japan Pharmaceutical Manufacturers Association (JPMA). While many interesting points were raised, probably the most thought-provoking topic was the discussion around the use of virtual control groups. While this is an idea that has occasionally been mooted in nonclinical safety assessment circles, it was fascinating to see it stated in relation to SEND. JPMA stated quite clearly that they understand this is a highly controversial suggestion. According to the presentation, the idea is partly driven by the 3Rs, but it was also noted that certain types of subjects can be difficult to obtain. They also stated that “In clinical field, there is a movement to replace the placebo control group with historical control and/or real world data”. Carrying this principle into nonclinical will certainly provoke some interesting discussions!

Regarding the development and expansion of the SEND standard, the single biggest piece of news to drop from CDISC was the fact that the next version of the SEND Implementation Guide will be called version 4.0 and not version 3.2 as previously communicated. What is the significance of this? CDISC leadership stated that they wanted to communicate that this release includes quite substantial changes. There will be 5 new domains available:

  • Pharmacokinetic Input (PI)
  • Scoring Scales (SX)
  • Cell Phenotyping (CP)
  • Immunogenicity Specimen Assessments (IS)
  • Ophthalmic Examinations (OE)

This version will also drive a new version of the underlying SDTM model as 18 new variables are being added across the guide.

In addition to SEND 4.0, work continues on both SENDIG-DART 1.2 for juvenile toxicology and SENDIG-GT 1.0 for genetic toxicology. Both standards have made good progress towards their 2023 publication dates. It’s also worth mentioning the Tumor Combinations working group, which is creating, publishing and maintaining recommendations for combining tumors for analysis in nonclinical studies using SEND Controlled Terminology.

Finally, the event concluded with an excellent presentation from the BioCelerate – FDA CDER SEND Harmonization/Cross Study Analysis Working Group who are looking to use SEND for cross study analysis in relation to the Target, and specifically “Understanding off-target toxicity”. The presentation included several novel visualizations that they have developed specifically for this application of cross study analysis.

While this joint presentation was the highlight of the week, significant progress was being made by all the CDISC SEND workstreams. I think that many involved will agree that it was an exhausting but very productive week.

‘til next time


Go Together

“If you want to go fast, go alone; but if you want to go far, go together.” (Proverb)

I don’t like moving slowly. I like to feel that things are moving quickly and that the end is in sight. I can become frustrated when things do not move quickly enough. Yet, I continually see the value of the slower pace of going together.

While this can be applied to many areas of life, it’s particularly true of standards development. I oversaw the development and eventual publication of SEND 3.1.1. Despite being an important yet relatively minor change, it took several years to go from idea to eventual publication. Many committees needed to approve it, and many key stakeholders needed to be on board. The consultation was extensive, soliciting feedback across both nonclinical and clinical. Much time was spent debating the possible use of ADaM (clinical analysis datasets) instead of SEND. Though it was slow, there is a certain rigor which comes with going together.

SENDIG-DART v1.2 is just completing its 3-month public review. It feels like it’s taken an age to complete. Even getting to public review felt slow and arduous. We are currently scheduled for publication early in 2023, and even then we’ll be years away from it being required by the Data Standards Catalogue.

Again, though it has felt slow, the truth is that the reviews and scrutiny result in a significantly improved standard. The multiple reviews and reviewers have ultimately been a huge benefit to the implementation guide. CDISC now has a more rigorous process that demands an example ‘proof of concept’ study along with conformance rules. Previously, such things would be produced after publication; now they are required before public review, far in advance of publication. The result is a better product, but at the expense of speed.

The eventual publication of SEND 3.2 will have taken significantly longer than the publication of SEND 3.1, but again with the benefit of having a far wider range of contribution and a more rigorous review process.

So, yes, the standards development process seems slower than ever and seems to require more effort now than it ever has before. But that’s the cost of going together.

‘til next time


SENDIG-DART is more tricky than regular SEND

I write this as I arrive at the European Teratology Society in Antwerp, Belgium to present a poster relating to the SENDIG-DART. This feels significant for a couple of reasons. Firstly, and personally, this is my first in-person conference or meeting since the COVID lockdown. To be travelling for business really feels like things are getting back to normal again.

Secondly, and more relevant for this blog, is the fact that my poster, “Preparation for the regulatory requirement for SEND for developmental and reproductive toxicology studies”, is being shown with just months to go before the March 15th, 2023 requirement date kicks in.

It describes the SENDIG-DART in terms of the scope of studies covered, and how the electronic data are to be standardized for tabulation. Put simply, a very brief introduction for the uninitiated.

However, the main focus of it is to provide advice and feedback from the learnings of the FDA’s Fit-For-Use (FFU) pilot. Instem prepared the SEND datasets for one of the studies that took part in the pilot. The lessons we learnt through that process were invaluable in preparing us ahead of the expected demand for conversions. Though we had our Submit™ software ready well in advance and had trained our team in the new standard, there’s nothing like working with a real study to see just how ready we are. Real data always throw up unexpected complications and challenges.

As well as describing the learnings from Instem’s participation, I also describe the main takeaways from the FFU. My personal summary is simply: SENDIG-DART is more tricky than regular SEND. I would say that though SENDIG-DART is quite focused and specific, it certainly shouldn’t be underestimated. Some of the organizations taking part in the FFU hit a few pitfalls. When the CDISC team took a look at the FFU results, it was obvious that the Implementation Guide wasn’t clear enough in certain areas. While I wouldn’t go as far as saying it was incorrect, or even misleading, on re-reading it we could see where some of the guidance may have been a little too subtle. For SENDIG-DART v1.2, much of the text was revised to make the meaning far clearer. The meaning hasn’t changed, but hopefully now it’s far less likely to be misinterpreted.

I’m sure I’ve mentioned this before, but version 1.2 is currently out for public review, and only has a week or two left, so anyone intending to review it who has not done so yet, would be well advised not to delay.

My personal opinion when it comes to SENDIG-DART: unless an organization has a high volume of studies needing conversion, I’d advise using a specialist SEND vendor to produce the datasets, as the complexity and cost of implementing SENDIG-DART in-house simply wouldn’t be worth it for only a small number of studies per year. Those with a higher volume would be well advised to engage a SEND vendor with SENDIG-DART experience for specialist training and consultancy.

‘til next time


Reflecting on the last 10 years of SEND

Spoiler Warning: This week’s blog post is even more self-indulgent than usual.

I’m recognizing two very significant anniversaries in my professional career. This month marks 25 years of working in the nonclinical space. September 1997 saw me take a programming job at a preclinical software vendor, fully expecting that it would be a temporary position of no more than two years, just to pay the bills until I became a rock star. I’m not being flippant: through a combination of naivety and arrogance, I genuinely believed I was destined to be a guitar hero. 25 years later, I’m still here, so you can see how well that worked out for me.

In the context of SensibleSEND, the more significant anniversary is marking 10 years since I joined CDISC and was first exposed to SEND. So, it seems a good time to take a moment to see how much has been accomplished.

Thinking back to 10 years ago, while the FDA were making all the right noises about their enthusiasm for SEND, much of our industry never really believed SEND would be mandated for submission. Today, the inclusion of SEND datasets for submitted studies has just become business as usual, having been a requirement since 2016 for NDAs and 2017 for INDs.

Initially covering just general toxicology and carcinogenicity studies, SEND expanded to cover cardiovascular and respiratory safety pharmacology studies with the introduction of SEND 3.1.

My own involvement with SEND was triggered by the move to cover Developmental and Reproductive studies. It may have taken 10 years, but we are now only months away from these studies requiring SEND datasets for submission too, and I’m sure you are well aware of the upcoming requirement for SEND for CBER submissions.

So, for me it’s been 10 years. In any given year, we seem to make less progress than I would like. However, looking back over a decade, I see just how far we have come. SEND is now my full-time job. Even 10 years ago, I’m not sure how many of us would have imagined that SEND could be anyone’s full-time job. Yet, today within nonclinical, it’s a serious career path. My organization, like many others, has entire departments dedicated to SEND. We have SEND experts, consultants, trainers and other service providers, for whom SEND is their full-time job.

I’ve often quoted the line “SEND has been the biggest change to our industry since the introduction of GLP”. Someone once said this to me, and looking back at my 25-year anniversary, it certainly rings true. So much has changed, so much is still changing, and so here’s to the next 10 years.

‘til next time


New standards on review

The CDISC SEND team have several new standards edging their way towards publication.

SENDIG-Genetox is the CDISC SEND standard for the representation of in vivo genetic toxicology studies. It has completed an internal CDISC review and should soon be heading to a full public review. For those unaware, this first version of SENDIG-Genetox focuses on in vivo Micronucleus and in vivo Comet assays.

Currently in internal CDISC review, and also on the path towards its public review, is version 1.0 of the SEND Tumor Combinations. While not an Implementation Guide, it has been produced at the request of the FDA. It was created by a working group of biopharmaceutical experts from the Society of Toxicologic Pathology, the FDA, and members of the CDISC SEND team. Its primary goal is to assist pharmacology/toxicology reviewers and biostatisticians in the statistical analysis of nonclinical tumor data. It is a spreadsheet providing a user-friendly, high-level hierarchy of tumor types and categories, correlating the tumor names from the INHAND publications with those available in the CDISC NEOPLASM Controlled Terminology code list used in SEND.

Further along in its publication process we have SENDIG-DART v1.2. Regular readers will be well aware that I lead the CDISC team that produced this next version of the SEND standard for DART studies. This version was created in direct response to the ‘Scope of SEND’ section of the FDA’s Study Data Technical Conformance Guide. That addition by the agency let the industry know that juvenile toxicology studies are in scope of the SEND requirements. However, for studies where the data need to be analyzed by postnatal day (i.e. analyzed by age rather than by dose), there were no examples in any of the previously published SEND IGs.

SENDIG-DART v1.2 adds one new domain: DP, for developmental milestones. While the domain is new, no new variables are added to the Implementation Guide (IG); the update shows how the existing concepts introduced in v1.1 can now be applied to juvenile studies.

As well as this addition, some existing sections were updated to add clarification, particularly in light of the results of the FDA’s Fit-For-Use (FFU) pilot. Feedback from the FFU pilot made it apparent that there was confusion around the representation of the study day, the dose day and the reproductive phase day. A new section has been added which describes how these are independent concepts and how each should be represented. I hope that this addition really clears things up for those implementing SENDIG-DART.
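To see why these are independent concepts, consider a single observation measured against three different reference dates. The dates below are invented; the day-numbering convention shown (day 1 is the reference date, with no day 0) is the standard SEND convention:

```python
# Made-up example of the three independent "day" concepts clarified in
# SENDIG-DART v1.2. The dates are invented; the arithmetic follows the
# standard SEND convention that day 1 is the reference date (no day 0).
from datetime import date

def relative_day(event, reference):
    """Day number of `event` counted from `reference` (day 1 = reference)."""
    delta = (event - reference).days
    return delta + 1 if delta >= 0 else delta

observation = date(2022, 6, 20)                               # one observation...
study_day     = relative_day(observation, date(2022, 6, 1))   # ...vs. study start
dose_day      = relative_day(observation, date(2022, 6, 15))  # ...vs. first dose
postnatal_day = relative_day(observation, date(2022, 6, 10))  # ...vs. birth date
```

One calendar date thus yields three different day numbers, which is exactly why representing them as a single "day" caused confusion.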

SENDIG-DART 1.2 is currently out for public review, so please take the time to review and comment on the document. Once the public review is complete, and assuming that no significant issues are found, the document is due for publication in early 2023. For the public review, the document is accompanied by an updated version of the SEND Conformance Rules and a proof-of-concept study. The CDISC SEND DART team has put a huge amount of work and many volunteer hours into producing this update and its supporting materials, and I’m very thankful for all the effort and expertise they offer.

‘til next time