So, what did we learn from the FDA’s DART Fit For Use Pilot?

For most organizations, the FDA’s SENDIG-DART v1.1 Fit-For-Use (FFU) pilot started with the Federal Register notice being posted back in October 2020. For some of us it began earlier when it became apparent that the FDA had an appetite to adopt the SEND standard for Developmental and Reproductive Toxicology (DART). That meant a pilot was inevitable.

Fast forward several years to 2022. We can now see what lessons were learned from that pilot, as CDISC have published the details on their wiki. There is a good deal of material to digest: the study data, study reports, FDA reports and a wealth of supporting documents. However, the learnings from the exercise are summarized in a set of PowerPoint slides compiled by CDISC. Still, that’s 21 slides crammed with information and recommendations, which is a lot to consume. Yet it is very timely, considering we are less than a year away from the standard becoming a regulatory submission requirement.

So, for those implementing SENDIG-DART v1.1, I’ll highlight just one area that seemed particularly troublesome to some of the organizations participating in the FFU pilot: the representation of the timing of results.

While for a General Toxicology study the study day and dose day are practically synonymous and interchangeable, this is not typically the case with a DART study. Here we have the Dose Day, the Study Day and the Gestation Day as three independent concepts. They may happen to occur on the same calendar date, but they are independent. Of these, the Gestation Day is the concept typically used to report data. Obviously, anyone coming to SENDIG-DART from a background in SEND 3.1 is going to find such a concept quite jarring. In General Toxicology, Study Day is king, yet in DART it takes a back seat as the repro phase day – in this case the gestation day – takes precedence for reporting and analysis. This is described in the SENDIG-DART, but it was not always understood by all FFU participants.
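To make the distinction concrete, here is a toy Python sketch of the idea. The dates, the day-numbering helpers and the “GD 0 = day of mating evidence” convention are purely illustrative (day conventions vary between labs), and these are not SEND variable names:

```python
from datetime import date

def study_day(target: date, reference: date) -> int:
    """SDTM-style relative day: the reference date itself is day 1,
    and there is no day 0 (the day before the reference is day -1)."""
    delta = (target - reference).days
    return delta + 1 if delta >= 0 else delta

# Entirely hypothetical dates for one dam on an EFD study:
reference_start = date(2022, 3, 1)   # the subject's reference start date
mating_date     = date(2022, 3, 4)   # start of gestation
obs_date        = date(2022, 3, 10)  # date a result was collected

sd = study_day(obs_date, reference_start)  # study day 10
gd = (obs_date - mating_date).days         # GD 6, under a "GD 0 = mating" convention

# The two day numbers diverge even though they describe the same calendar date:
print(sd, gd)  # 10 6
```

The point of the sketch is simply that the same observation carries two independent day numbers, and it is the gestation day, not the study day, that drives DART reporting and analysis.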

While there are many details noted and points to be considered in the FFU learnings, I wanted to highlight this issue as it is so crucial. Like many of the other issues highlighted by the pilot, the guidance and instructions are in the SENDIG-DART but are not always well understood. Therefore, it seems to me that the SENDIG-DART is even trickier to interpret than the main SEND Implementation Guide. Yes, SENDIG-DART includes everything from the main guide and all the pitfalls we’ve all spent years learning to navigate, but DART brings with it its own set of complications and subtleties. I strongly recommend that any organization implementing SENDIG-DART pay close attention to every part of the Implementation Guide. Every detail is important and not to be glossed over. For many organizations, perhaps those with an already stretched SEND team, or with a low volume of DART studies, I’d go as far as to suggest that they shouldn’t look to implement SENDIG-DART themselves, and instead consider outsourcing it to an organization already experienced in and capable of supporting this tricky standard. For those that are implementing, I’d advise them to take advantage of the training, support and assistance that their partners can provide.

Till next time


Silence speaks volumes

In my last posting, we were talking about the CDISC working conference, where I mentioned that one of the highlights of the week was the FDA public webcast. In the past, these have sometimes included surprising announcements which have generated a flurry of activity as we all try to digest and respond to the latest news. This time around was a little different, but nonetheless curious.

Over the years, this public meeting has grown and changed in format. Back when I first became a CDISC volunteer, this meeting seemed like a cozy fireside chat between some FDA reps and some CDISC volunteers. Full of candor, it had the air of old friends catching up over a coffee. Maybe I have nostalgic goggles on. Maybe.

Today it has grown into a far more formal and inclusive event. Presenters now include representatives from the likes of PMDA in Japan and BioCelerate. The discussion between CDISC and FDA is also now more formalized with written questions being submitted in advance to eData, and an FDA representative giving pre-prepared answers with no provision for further questions or follow up discussion.

To the casual observer, the meeting itself would perhaps have seemed a little uneventful, with the agency choosing to answer a few questions with fairly uncontroversial answers. However, scratch beneath the surface and there’s more to this story. To me, far more interesting is what was not said. I’m referring to the fact that the questions submitted included detailed background and assumptions regarding the scope of SEND before the seven individual questions themselves. The agency then chose to answer only two of these and did not respond to the other five. Obviously, it is highly speculative to try to guess why the agency chose to respond to some questions and not others, but my personal opinion is that in not answering those questions, the message being given is that the ‘Scope of SEND’ section in the Technical Conformance Guide is clear and should be followed, regardless of the challenges it may bring. I say this because the questions that were not answered generally relate to the difficulty a few organizations face in following the Scope of SEND statement. In the presentation, the agency acknowledged that they didn’t answer the specific questions, but again reiterated the point that if the study is required to support safety decisions, and if the study can be modeled in SEND, then it should be submitted in SEND. No matter the challenges that brings.

Some organizations and specialist SEND service providers are in a good place and already comply with the Scope of SEND statement. Yet I am aware that there are some organizations that really struggle with it due to a lack of investment, resources or expertise. Also, some may not have adequately planned SEND into their timelines.

As I mentioned earlier, the event is more inclusive now, and interestingly it was the BioCelerate presentation regarding cross-study analysis that seemed the most controversial and provoked the most discussion. Soon the discussion turned to the question “is SEND for analysis, harmonization or exchange?” but I think that’s probably a great blog discussion for another day.

Till next time


A week of working on 3 separate Implementation Guides

This is the week of the CDISC Spring SEND Virtual Face-to-Face meeting, and yes, that’s still an oxymoron, but hopefully this will be the final time, as future Face-to-Face meetings are planned to be back in person again. While the week mainly focuses CDISC volunteers on progressing the standard, one of the highlights of the event is the FDA public meeting on Wednesday 27th April. This sees the FDA give feedback on SEND and is open to anyone wishing to attend.

For the Spring Face-to-Face there are three Implementation Guides being worked on in parallel as each grows nearer completion. Personally, I’m heavily focused on SENDIG-DART v1.2, scheduled for publication in early 2023. This widens the scope of the DART IG to include Juvenile Tox studies reported by Post-Natal Day. In other words, reporting by age rather than by study day. This was in direct response to the FDA requirement in the Technical Conformance Guide:

The age of the animal at study start does not impact whether the SEND requirement applies. Dedicated juvenile animal studies that typically include multiple phases cannot currently be modelled in FDA-supported SENDIGs and therefore would not require SEND. However, when general toxicology studies (single- or repeat-dose) are conducted with juvenile animals (e.g., young, post-weaning animals), SEND is required as outlined above.

FDA Technical Conformance Guide

So, our team are not going as far as modeling “…studies that typically include multiple phases”, but we are providing examples to represent “general toxicology studies (single- or repeat-dose) … conducted with juvenile animals”. Of course, this can all be represented in the current version of SEND; however, there are no examples of representing results in a way that allows them to be analysed relative to the age of the subject. So, aside from Developmental Milestones, this version of the DART IG doesn’t really introduce any new concepts, but simply provides examples of best practice in using the existing concepts.
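The difference between reporting by study day and reporting by age can be sketched in a few lines of Python. The pup identifiers, dates and the “PND 1 = day of birth” convention here are all hypothetical (conventions vary by lab), and this is an illustration of the concept, not of SEND variables:

```python
from datetime import date

# Two hypothetical pups born on different calendar dates:
births = {"PUP-001": date(2022, 5, 1), "PUP-002": date(2022, 5, 3)}

def post_natal_day(obs: date, birth: date) -> int:
    # Using a "PND 1 = day of birth" convention; conventions vary by lab.
    return (obs - birth).days + 1

# The same calendar date (and hence the same study day) falls on a
# different PND for each pup, so age-aligned analysis must group results
# by PND rather than by date or study day:
obs = date(2022, 5, 10)
pnd = {pup: post_natal_day(obs, born) for pup, born in births.items()}
print(pnd)  # {'PUP-001': 10, 'PUP-002': 8}
```

This is exactly why examples showing results aligned to age, rather than to study day, are the focus of the new version.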

Also being worked on this week is the SENDIG-GT for Genetic Tox, which models Micronucleus and Comet Assay data in SEND, also due for publication in 2023.

However, by far the most effort is focused on SEND 3.2, the next iteration of the main Implementation Guide. Many, if not all, sections of the guide are being worked on in some way, as the work involves dealing with a plethora of suggested minor improvements. In addition, a few areas are receiving more significant changes. Probably the most noteworthy is the addition of new domains to facilitate the modeling of Immune System data, with a couple of new domains being developed for these data. There’s also a new domain for TK data and significant work on both EX and Trial Design. In fact, there are many subteams busily beavering away in their own areas, too numerous to mention individually in this one blog post. Though not due for publication until 2024, this version has been many years in development already, and so this feels like the final push to get the content complete so the CDISC publication process and public review can begin.

Till next time,


In the room where it happens

For a variety of different reasons, at several times this last week, I’ve reiterated my motivations for being a CDISC volunteer. I’ve discussed how I first fell into volunteering almost accidentally, not knowing what I was getting into, and discovering I had a real passion for SEND. I’ve also been talking about the importance of being involved in the process of shaping SEND.

With lockdowns and restrictions easing, I recently got to take my daughter to see Hamilton. Originally booked for 2020, it is finally back in production. One of the big musical numbers has the hook “in the room where it happens”, referring to being party to important decisions – in the case of Hamilton, decisions that shaped the United States.

For me, and my fellow CDISC volunteers, it’s important that we are “in the room where it happens”. It’s one of the things that allows us to speak about SEND with such confidence and authority. It’s no coincidence that many of the SEND-related presentations this year at SOT in San Diego were given by CDISC volunteers. Those who were “in the room where it happens”.

While we are talking about SOT, I was unable to attend myself, but the reports from my on-site team suggested that for many individuals and organizations, the conversation had moved on from “what is SEND?”, “Do I have to have SEND?” and “Is SEND really required?”, to discussions around visualizing SEND and using the standardized data for mining, cross study analysis and generally getting greater value from the data. Indeed, for many organizations, there seems to have been a shift away from converting only the minimum required amount of data, to an appetite to have more and more data in SEND, such is its value.

I started this post saying that I’ve been reiterating my motivations for volunteering, and hearing about the value of SEND speaks to my biggest motivation of all: I believe in SEND. I believe that it ensures that the experimental drugs given to vulnerable patients in clinical trials are safer. I believe that having more studies in SEND leads to better decisions, and ultimately that means safer drugs and better treatments. I believe that more data in SEND ultimately speeds up drug development, and to paraphrase my own organization’s mission statement, it brings new treatments to people, faster.

I hope that idea comes across to even the most casual reader of this blog. Ultimately, SEND, though just one piece in a vast jigsaw, really does improve the drug development process. For that reason, just like I’m hearing from many organizations, I want to see more SEND. I want to see more studies in SEND. I want to see more datatypes in SEND. I love working with the other pioneers who share the desire to see beyond compliance.

So, I love being “in the room where it happens”, I love debating SEND, and I love seeing SEND put to greater use because I believe in the value of SEND.

As usual, if you would like to discuss any of these points, please feel free to drop me a line

Till next time,


What’s the deal with the FDA Feedback Letters?

This week I caught up with my colleague and fellow SEND obsessive, Mike Wasko. I asked him for his opinions on the FDA feedback letters that sponsors receive after IND submissions – specifically, his thoughts on their importance and impact. Here’s what he had to say:

Whenever a sponsor receives a letter or feedback from the FDA, the initial knee-jerk response of many is to panic. I’d say approximately 95% of the time, there is no need to panic. A little context on what these letters/reports are, and why the FDA provides them, may be helpful.

When SEND first started being required, the industry wanted feedback from the FDA, and it still does to this day, to help with the moving target of best practices and new standards. In the first few years of the SEND requirements, only the inner circle of the SEND industry was able to get feedback on quality and preferences. Granted, extensive validation rules did not exist at this time, so the feedback was even more important. But this feedback only reached the participants (approximately 50 people) of the CDISC SEND Face-to-Face meetings with the FDA, which occur twice a year. To increase communication and feedback to the industry, the FDA introduced some new communication methods. First, they would start holding SBIA (CDER Small Business and Industry Assistance) webinars to give feedback to the community on submissions, areas of improvement, SEND at the FDA and more. The second was study-specific feedback to sponsors. The assessment was provided by the eData team at the FDA, as part of the Kickstart services. The assessment included automated and manual review of the SEND package for compliance, rules, recommendations and consistency, and to ensure the agency can reproduce the summarizations in the actual study report.

So, the next logical question for a sponsor that has received a letter of feedback from the FDA is “what do I have to do?”. Most of the time, nothing. The document itself states something along the lines of no action or response being needed unless specified. It also states the feedback is to aid in improving future submissions of SEND datasets. Great news, right? Yes, although if there are things that are very wrong, the report will indicate which findings must be fixed and a new package submitted. Industry would be well advised to implement the recommendations so that future studies don’t have similar issues. With that said, the question still exists: if one receives such feedback from an IND submission, should you fix it for the NDA submission? The time between an IND and an NDA can be many years, and with the rate of change in SEND in terms of best practices, whether and when to update an implementation is really a sponsor decision. My recommendation would be to implement the changes recommended at IND in order to ensure a smoother review at NDA. It is helpful to share the feedback with your SEND teams and your SEND partners/CROs/vendors to help the whole industry advance and improve, especially since it is easier to apply small amounts of change to your SEND practices as you go, rather than many all at once.

I thought that was a great update from Mike.

As usual, feel free to join the conversation and drop me a note at

Till next time,


Countdown to the next SEND Requirements

So here we are again in March, probably the most significant month in the SEND calendar. At Instem we are busy gearing up for the Society of Toxicology meeting, which will finally be in person again after a two-year hiatus, what with you-know-what. March also means that we are due to have the latest version of the FDA’s Study Data Technical Conformance Guide (TCG) land. What surprises will the latest version hold for us? Further clarification on the scope of SEND, perhaps? Possibly an update to the list of ‘desired’ Trial Summary study parameters? I guess we’ll have to wait and see.

Most important for SEND, though, is the significance of March 15th when it comes to the FDA’s Data Standards Catalogue. This is the date when the clock starts ticking on new requirements. If you are not fully up to speed with what I’m referring to, I won’t bore you with the technicalities, but suffice it to say that March 2022 signals the 12-month countdown to some new FDA requirements for SEND.

Of these, the latest addition to the Data Standards Catalogue is SEND 3.1.1, which includes minor improvements to the PC and PP domains to bring them in line with the FDA’s TCG. Any organization that has been following the TCG regarding these domains should have no issues adopting this new version. As for the Implementation Guide itself, it now includes much-improved examples for PC and PP to show how these domains function together and how they relate to EX.

One very curious thing to note with this addition to the Data Standards Catalogue is the fact that SEND 3.1.1 has been added without an ‘End Requirement Date’ for SEND 3.1. This means that come next March, organizations have the choice of which version of SEND to use for their submission. This is quite different to the hard changeover we saw between SEND 3.0 and 3.1. However, if we consider SEND 3.1 following the TCG as effectively the same as 3.1.1, then there’s no real difference.

More significantly for some will be the start of the requirement for SENDIG-DART 1.1. This brings Embryo Fetal Development studies into scope for the SEND requirement. Following the successful recent FDA Fit-For-Use pilot, tools and organizations should all be ready to create and consume these new SEND packages.

Although far more specialized, SENDIG-AR also becomes a requirement for Animal Rule studies. These are clinical efficacy studies where animals are used as surrogates for humans because human clinical trials would not be possible.

Also, we are now 12 months away from SEND being required for CBER. There are no special SEND considerations specifically for CBER in the SEND Implementation Guide or TCG; the only real additional requirement for CBER regards immune response data. When these data are collected, they should be submitted either in a custom IS domain or as part of the LB domain.

Finally, the requirement that will impact us all is the move to Define-XML 2.1. There’s probably a broader topic for us to discuss at some point, to consider the wider implications of the value of the define file.

So, it’s March and the clock is ticking. The next 12 months are going to see a flurry of activity as we all gear up for these new requirements to take effect in March 2023.

Till next time


SEND Conformance Rules are NOT Boring!

Hey everyone – I’ve asked Christy Kubin over at Charles River Laboratories to share some of her thoughts here on Sensible SEND. As well as being a long-term CDISC volunteer and co-author of the SEND IG, Christy now leads the SEND Conformance Rules Team and, like me, she is a bit of a SEND nerd. Enjoy her post and until next time, Marc

By Christy Kubin:

When Marc first asked me to put together some thoughts about SEND Conformance Rules, I’ll admit I was a bit intimidated. Marc’s love of SEND is contagious, and his blog talks about the best and worst parts of working with SEND in a way that is approachable and fun, which are not words most people associate with either “conformance” or “rules”. While some people might assume that working with conformance rules is boring, it has quickly grown to be one of my favorite projects. Conformance Rules are impacted by all aspects of SENDIG development, and working from the conformance rules perspective has allowed me to better see how all aspects of the SEND standard work together.

Like Marc, I truly love donating my time to CDISC to help develop the SEND standard. The SEND Conformance Rules team was my first opportunity to lead a team, and I am lucky to work with such an amazing group (all of whom are “SEND Nerds”). SEND Conformance Rules are the translation of the SEND Implementation Guide text into logical statements documenting the characteristics a SEND dataset must have in order to conform to the SEND standard. Writing SEND Conformance Rules is the ultimate opportunity to “nerd out” on SEND as we pore over the SENDIG line by line and analyze every word. One of the most interesting things I found about writing conformance rules is the opportunity it gives the team to really dig into implementation guide text and challenge our preconceptions. This has certainly led to many fascinating and passionate conversations! In the end, SEND Conformance Rules must be about fact, and rules must be attributable to definitive statements in the SENDIG. No conformance rules are written for assumed best practices or unwritten rules; there is no room in conformance rules for reading between the lines. Only definitive SENDIG statements can be translated into rules.
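To illustrate the idea of turning definitive guide text into a logical statement, here is a toy Python sketch. The rule text is invented for this example, not quoted from the published SEND Conformance Rules, and the check is deliberately simplified:

```python
# Toy illustration: suppose an implementation guide definitively states
# "STUDYID must be populated in every record". That single statement
# translates into a machine-checkable conformance rule:

def check_studyid_populated(records):
    """Return the indices of records that violate the rule."""
    return [i for i, rec in enumerate(records)
            if not str(rec.get("STUDYID", "")).strip()]

# A tiny, hypothetical DM-like dataset:
dm = [
    {"STUDYID": "STUDY01", "USUBJID": "STUDY01-001"},
    {"STUDYID": "",        "USUBJID": "STUDY01-002"},  # violation
]
failures = check_studyid_populated(dm)
print(failures)  # [1]
```

The hard part of the team’s work is exactly what the sketch glosses over: deciding which sentences of the SENDIG are definitive enough to be expressed this precisely.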

When the CDISC requirement for conformance rules began, our team was tasked with writing rules for the published SENDIGs (SENDIG v3.0, SENDIG v3.1, and SENDIG-DART v1.1). As you might imagine, that proved to be an interesting challenge as those implementation guides were not written with the idea of rules in mind.  The team is taking what we have learned from those initial rule sets to suggest best practices for future SENDIGs that would ensure that future published implementation guides (such as the upcoming SENDIG-GeneTox) clearly and definitively state what is necessary for conformance to the SEND standard. This should result in cleaner, unambiguous versions of SENDIGs and conformance rules moving forward.

Thank you for allowing me to share my thoughts on SEND Conformance Rules with you.

Best regards,


Are we there yet – Where’s the GeneTox IG?!

This week I caught up with my colleague and fellow SEND obsessive, Mike Wasko, who leads the CDISC team developing the SEND standard for Genetic Toxicology. I asked him, “What is the current scope and timeline of the SEND GeneTox Implementation Guide?”, and here’s what he had to say:

“Some context is in order first.  The CDISC SEND GeneTox team, founded in 2015, has been working in an evolving state, as there have been many changes to scope, leadership and even CDISC practices.  The initial scope and intent was to model and create domains to support in vivo Micronucleus and Comet assays, as well as in vitro Micronucleus and in vitro Ames assays.  In addition to the representation of the raw data and trial design for in vivo data, the team planned to find a way to represent historical control data.  In vitro data and historical control data are two concepts which no CDISC team had ever ventured to undertake. 

As the team worked through the scope initiatives, it became apparent that in vitro and historical control data had not been attempted for a good reason. It was extremely challenging to represent data without a subject attached to the data. After many trials and tribulations, it was decided to reduce the scope to the smallest effective scope, which would provide investigators and agencies a way to standardize genetox data at least for in vivo. This could set a foundation for the future as well as lessons learned. This would be similar to the process of SENDIG v3.0 being released: industry implementing, and then lessons learned driving future releases. Thus, the scope was reduced to in vivo Micronucleus and in vivo Comet assays. Then, with agency feedback, it was determined historical data did not need to be included. The rationale was that either the methodologies to represent historical data would be too complex, or they would severely increase the size/width of the GeneTox SEND domain representing the raw data. Neither was ideal; thus it was agreed to remove historical data.

With the refined scope decided, work on the SEND GeneTox IG has been moving quickly. Granted, new CDISC practices have been introduced along the way. Currently the draft IG is almost complete, including three detailed examples to represent the following:

  • in vivo Micronucleus slide based assay,
  • in vivo Micronucleus flow based assay and
  • in vivo Comet assay. 

The team has already worked with the Controlled Terminology team to develop controlled terminology and is working with the SEND Conformance Rules team. As part of newer CDISC processes, not only must an IG contain examples, but for CDISC Internal and Public Review it must be accompanied by the Conformance Rules as well as Proof of Concept datasets. This will allow a more thorough review, as well as allowing agencies and vendors to make assessments with their SEND tools. The current roadmap puts the SENDIG-GeneTox out for public review in early Q3 2022, with approval and publication hopefully by the end of Q1 2023. It has been a road of challenges and changes, but the team came together and are getting it done.”

I thought that was a great update from Mike.

As usual, feel free to join the conversation and drop me a note at

Till next time,


SEND: it’s not everyone’s cup of tea…

I make no secret of how much I love working with the SEND standard. I really enjoy both the time I donate to CDISC to help develop the standard and the time I spend helping develop solutions that allow organizations to create and consume SEND datasets. This year marks my 10-year anniversary of working with SEND. Back in 2012, I attended my first CDISC conference, and I’ve discussed in this blog how strange that experience was.

Prior to that meeting, I would have never believed that I’d devote the rest of my career to the world of data standards. “Data Standards”, even the phrase itself sounds dull and boring. Could there be anything less creative, less inspiring than “Data Standards”? Yet for some reason, I love nothing more than debating the best way to standardize the representation of nonclinical data.

Without seeing firsthand two individuals passionately argue opposing opinions on SEND, it’s hard to imagine that anyone could get so worked up about such things, but these debates happen all the time. And I love them.

And I’m not alone. Many organizations in our industry have individuals just like me, who care deeply for SEND. Who want to see their organization creating the best quality datasets possible, to provide the FDA with the cleanest, clearest data so the reviewer can focus on evaluating the safety, and not waste time grappling with poorly rendered data. We see SEND as vital to helping bring about faster and safer products, and that’s something that touches everyone.

I know many that read my blog, are equally SEND obsessed. It’s good to know we are amongst friends.

I also know that there are many who read this who are only on the fringes of SEND. Those who may be reluctantly engaging with the standard simply because they can escape it no longer. They do not share the passion or obsession. Understandably, their interests lie in other areas, and SEND is simply an additional burden.

Every organization needs a SEND obsessive. Either their own SEND nerd, or a partner who can just deal with all of this ‘SEND stuff’ for them. They need them so that everyone else can take comfort in knowing that there’s at least one person taking care of this. Worrying about the SEND datasets so others don’t have to. Ensuring that the next looming regulation date is going to be met. Poring over every update to the FDA’s Technical Conformance Guide and making sure their organization remains compliant.

So, yes, when it comes to SEND, there are obsessives, but it’s not everyone’s cup of tea. Fortunately, there are enough of us who love to nerd out on this stuff that everyone else can rest assured that someone, somewhere, is taking care of it.

‘Til next time,


SEND in 2022 – so what’s next?

Looking ahead to the significant events and milestones that lie ahead over the next 12 months for SEND: barring any major issues, we should see the publication of the next version of the SENDIG-DART. That is the CDISC SEND standard for Reproductive and Developmental studies, and this time around it will focus on Juvenile Tox in direct response to the recent TCG update.

We should also see the publication of the SEND standard for Genetic Toxicology studies. In its first version, this standard is limited to Micronucleus and Comet Assays for In Vivo studies only, but even so, this is a significant publication, further widening the scope of studies now covered by SEND.

Also, we’d expect at least two updates to the FDA’s Technical Conformance Guide (TCG), based on their standard schedule, and I’m hoping to see the addition of SEND 3.1.1 to the Data Standards Catalogue before March to start the ball rolling on its adoption by the agency.

The reason that I’m so keen to see this added to the catalogue is that SEND 3.1.1 arose out of a significant issue seen during the original FDA pilot. Although seemingly small, it was a serious flaw in the definition of the Pharmacokinetic Concentration (PC) domain. A CDISC team was quickly assembled to resolve the issue, and the solution was released in the next TCG in the form of guidance on how best to represent the data in the PC domain. This was back in 2017. Finally, in 2021, SEND 3.1.1 was published, bringing the Implementation Guide in line with the FDA’s TCG, and hopefully in 2022 we will see it added to the catalogue.

2021 also saw the TCG add the much-debated “Scope of SEND” section. Juvenile studies that include multiple phases are still out of scope, while all other juvenile studies are in scope. Yet, the SEND Implementation Guides give no examples or assistance on how to represent such studies, and in particular, how to denote the Post-Natal Day number, which is a key piece of information for many of these studies. Resolving such issues is the focus of the next version of the SENDIG-DART, and so this is another example of the standard chasing something already published in TCG.

Personally, I’m a big fan of how quickly the TCG moves, and sometimes that means that the SEND standard needs to play catch-up. CDISC’s development and publication process is slow, yet the TCG appears far more agile, and so we end up in these situations of the standard needing to chase something already adopted by the agency.

I’ve made no secret of my opinion on the slow progress of CDISC, and this is another example where it seems out of step with the pace of the agency. As last year saw the publication of SEND 3.1.1 to bring SEND in line with the TCG, I’m looking forward to SENDIG-DART being brought in line this year.

Drop me a note to share your opinion(s) at

‘Til next time,