A picture paints a thousand words

I know the title is the obvious cliché, but I had to do it. Sometimes things are obvious for a reason.

I’ve just finished putting together my SOT/ToxExpo presentation and I’m still lamenting the fact that it’s virtual again this year. Anyway, as I alluded to in my previous blog post, this year’s presentation looks at both where we are today, and where we might be going tomorrow with SEND. I’m convinced that Visualization is key to that conversation.

It’s at this point I’m reminded of an experience at SOT a couple of years ago, when we were demonstrating our own visualization solution, SEND Explorer®, to someone who had previously drawn their conclusions using only the PDF tables. The individual was shown all manner of graphs, charts and other visualizations, and the demonstrator was able to dynamically drill into the results as the person asked questions of the data.

They became more and more engaged and enthusiastic as they saw the speed and ease with which the demonstrator could bring up data, customize how it was viewed and perform various on-screen comparisons.

Yes, of course nonclinical data has been displayed in graphical form before. I’m not suggesting that this is all due to the advent of SEND, but SEND does change the landscape because the data are standardized. It’s this standardization that has paved the way for commercial off-the-shelf tools that can work with the data regardless of the CRO responsible for running the study, and regardless of the data collection system used to capture it.

Also, the introduction of services to convert historic study data, often locked away in old PDFs, into SEND has opened new possibilities for accessing the value found in those old studies. Legacy conversions like this give visualization tools like SEND Explorer a wealth of study information to work with.

This has got me wondering: How long until this becomes just the run-of-the-mill way of analyzing nonclinical data? How long until SEND simply replaces the PDF appendices in the study report? Back when SEND was first being introduced, that seemed to be the logical assumption for the next significant step. One day, there won’t even be PDF appendices.

That’s an idea I’ve been thinking about a lot. There are very good reasons why that won’t happen in the near term, but I think that there are ways to overcome these roadblocks. I think that will have to be the topic for a separate blog post….

For today, I’m really pleased whenever I get to see people getting excited about discovering the potential for working with visualizations of standardized nonclinical study data. I’m longing for in-person conferences and shows when we can have these types of conversations face to face again.

Till next time

Marc

Beyond Compliance?

Beyond /bɪˈjɒnd/ Preposition “more extensive or extreme than; further-reaching than.”

This week I have been writing the abstract for my talk at this year’s SOT meeting. Even though the event is again virtual, as you may have guessed by now, I’ll take any opportunity to inflict my opinions about SEND on anyone who’ll show up to listen. In writing the abstract, I think back to previous SOT expos and I’m reminded that across our industry, we are a wide and varied crowd when it comes to SEND.

There are still a significant number who are asking “What is this SEND thing, and does my study really need it?”. There are also the Many and the Few. The Many are up and running with SEND, but for them, it is harder than they imagined, more expensive and it’s slow. Oh boy, does it slow things down sometimes! In an industry where time, speed and expertise are often in short supply, the last thing anyone needed was something like SEND. Right?

Then there are the Few. Those exploring the opportunities of standardized study data. The small number for whom SEND is not a burden, but a mechanism by which they can unlock a world of new possibilities.

The Many may have heard talk of this mythical place Beyond Compliance, but the Few are already heading out on the journey of exploration.

So, is there a SEND El Dorado that lies beyond compliance? Certainly, Big Pharma believe so. We can see their investment in consortia where they are pooling their knowledge and building warehouses of shared study data in SEND format. Obviously, such activity comes with considerable investment, and they would only do this if they believed that the benefits and opportunities outweigh the cost.

However, we don’t have to be part of an exclusive consortium in order to see the benefits of working with standardized electronic data. Visualization tools like SEND Explorer allow regulators, sponsors, and others to interact with nonclinical study data in a fast, dynamic way. Previously, hours would have been spent leafing through PDF pages of a study report, but now we click a button and graphs, charts and other visualizations appear. See something interesting? Then just another click or two and we are drilling down to see what’s really going on in the study. Or zoom out and summarize at a higher level: look across a set of studies to see whether trends are consistent from study to study, or whether there is something peculiar about one particular study.

It doesn’t matter which CRO supplied the data, it’s standardized. It doesn’t matter which collection system was used, it’s standardized. That standardization has allowed for tools like SEND Explorer.

However, that means that their value can only be seen when they are fed a good volume of standardized data. I’ve often heard the opinion “Wow, that looks great, but I just don’t have enough SEND data to make use of it”. I’ve spoken to those who know they have a wealth of historic study data that they would love to use with SEND Explorer, but unfortunately those study data are stuck in PDF. So, the challenge became: how do we scrape data from PDFs into SEND in order to feed the warehouses and visualization tools? The vast majority of CROs can’t entertain doing this work. Understandably, they are grappling with the challenges of today, without trying to unlock the study data of the past. However, for those who are endeavoring to pitch camp in this land of Beyond Compliance, we’ve developed the tools and techniques to do just this. And we are doing it, today. In parallel with our work converting data for submission, we are also handling data that are going Beyond Compliance.
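For the curious, here’s a very rough sketch of what that first scraping step can look like: pulling a tabulated appendix out of a PDF so it can later be mapped to SEND. It’s purely illustrative and certainly not our production tooling; the file name, page number and library choice (pdfplumber) are all assumptions, and a real legacy conversion also involves terminology mapping, unit handling and a great deal of QC.

```python
# Minimal, illustrative first step of a legacy conversion: extract a tabulated
# appendix from a PDF into a DataFrame. File name and page number are made up.
import pdfplumber
import pandas as pd

with pdfplumber.open("legacy_study_report.pdf") as pdf:   # hypothetical report
    page = pdf.pages[42]                                   # page holding a body weight appendix table
    rows = page.extract_table()                            # list of rows, header row first

df = pd.DataFrame(rows[1:], columns=rows[0])
# From here, each column would be mapped to SEND variables (subject ID, study
# day, result, unit, ...) before any domain records are actually built.
print(df.head())
```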

You can probably tell that I’d be quite excited to have my SOT presentation focus on the possibilities of the land beyond compliance, but I acknowledge that I need to deliver something that is going to be as relevant for the Many as for the Few.

Till next time…..

Marc

ADA & SEND: I’m conflicted

In the last CDISC SEND Core Team meeting, we got to discussing Anti-Drug Antibody (ADA) data, and its representation in SEND. We were debating what we’d recommend, and this is the cause of my conflict.

The purist within me wants to scream “Custom Domain!”. The pragmatist knows the industry isn’t ready for this. I want to shout about the fact that at Instem, we have the tools and services to support custom domains; but I’m not supposed to use CDISC for such promotion. I want to tell everyone listening that we have the products and services available to help industry create these custom domains; but that would be overstepping the mark. So, as a card-carrying member of the CDISC SEND Core Team, I’m forced to admit to myself that industry just isn’t there yet.

So, I’m conflicted.

This arose due to the work over at FDA CBER and their recent pilot of the SEND standard. They have a real need to see ADA in SEND. In the current version of SEND, there isn’t a domain for these results; however, SDTMIG (the implementation guide for the clinical equivalent of SEND) has such a domain. It’s called IS (Immunogenicity Specimen Assessment), but it’s not in the SENDIG.

PHUSE have previously looked into this situation and came up with two solutions, which can be summarized as showing how to either force-fit ADA data into the existing LB domain, or borrow IS from SDTMIG and add it to a SEND package as a custom domain. (I’m probably doing a huge disservice to its authors by reducing an entire white paper to a single sentence!)

At this point, I’ll acknowledge but completely sidestep the question of whether or not this would actually be considered in scope; and totally dodge the issues around SDTM version compatibility and the fact that CDISC are adding an IS domain to SEND 3.2. If you want to dive into that, then drop me a line, marc.ellison@instem.com, because I’d be happy to take the discussion down that particular rabbit hole!

So, I’m conflicted; and a big part of that conflict is due to the fact that I relish being at the cutting edge of SEND. The ‘right’ answer is that, as these data are important for safety evaluation, they should be represented in SEND. This being the case, the IS domain is a better fit than a misused LB domain. However, by and large, the industry isn’t at the cutting edge, and as such many probably don’t have the tools or expertise needed. This is unsurprising; at CDISC, I think we could have done a better job of making the information available. We have an Implementation Guide that includes custom domains, but at no point actually guides the implementation of a custom domain. That being the case, I’ll hold my hand up as a CDISC volunteer and take my share of the responsibility for that.

Being at the cutting-edge means having developed the tools and services already. Not only does our Submit™ software create and consume such domains, but our SEND Explorer software can visualize them, and our services team can create and verify them. So, we are now sitting pretty with our ducks all aligned. But I know that doesn’t really apply to many others.

So, I’m still conflicted.

This is not supposed to sound like shameless self-promotion of “look what we can do”; it’s about the personal challenge of taking my CDISC responsibility seriously and ensuring I don’t overstep the privileged position of having a seat at the SEND Leadership table.

Till next time

Marc

SEND versus the Study Report

“SEND is just an electronic representation of your study report.”

This used to be my stock phrase. I actually believed it. It was how the world seemed a few years ago. I meant every word of it.

The concept is admirable, but the reality seems to be drifting further from it all the time. The intention was that industry would have a standardized representation of the study’s results so that they could be exchanged electronically, because previously they existed only in PDF reports which, while very human readable, are far from machine readable. Data would be collected and reports produced as normal. The only change would be that the study report would now have corresponding files containing the data.

So, “SEND is just an electronic version of the study report” is what I used to tell people. Oh, how naïve. The world must have seemed like such a simple place.

Today I heard about a CRO who now refuses to produce SEND 3.0 because they’ve changed their data collection practices for SEND 3.1. My younger self would have been outraged at the idea that not only the SEND standard, but a particular version of SEND, would dictate how data are collected.

Yet, for where we are today, I know this is pretty reasonable for various reasons. Probably the main one is the significant change to histopathology data that was introduced in SEND 3.1. As part of a move to better compare microscopic findings from one study to another, SEND 3.1 brought in Controlled Terms and a much tighter specification of histopathology findings. If these data were not collected using a SEND 3.1-compatible glossary, then retrospectively converting them into SEND would cause challenges. At best this would be time-consuming, and at worst it could be near impossible without the pathologist on hand to unpick all the findings.

Another point worth considering is the idea that we don’t always want the report and the SEND dataset to match. “You mean the SEND data would be different from the data in the study report?!” I hear my younger self ask in horror. Well, actually yes. There are occasions where data on a page may be filtered, formatted or even summarized in order to best fit that page. A SEND dataset doesn’t have such constraints. This notion came up again this week. There are occasions where subjects are observed, and only abnormalities are listed in the report in order to save space. The report would also include a line to explain this. However, in the SEND dataset, all records should appear, and the consumer of the data can then filter them out using their own tools as they so desire.

The same principle could apply to something like food consumption on a longer-term study, which may well be collected daily, yet reported weekly. So, it’s entirely possible that the data on the page is summarized at a slightly higher level than the data in SEND.
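To illustrate the point, here’s a minimal sketch, assuming simplified stand-in column names rather than the real FW domain variables, of how a consumer could derive a report-style weekly summary from the daily records held in SEND.

```python
# Illustrative only: collapse daily food consumption records (as they might sit
# in a SEND dataset) into the weekly means a study report might show.
import pandas as pd

daily = pd.DataFrame({
    "subject":   ["A001"] * 14,
    "study_day": range(1, 15),
    "food_g":    [21.0, 20.5, 22.1, 21.8, 20.9, 21.3, 22.0,
                  23.1, 22.6, 23.4, 22.9, 23.0, 23.8, 24.1],
})

# Map each study day to a study week (days 1-7 -> week 1, days 8-14 -> week 2, ...)
daily["study_week"] = (daily["study_day"] - 1) // 7 + 1
weekly = daily.groupby(["subject", "study_week"])["food_g"].mean().reset_index()
print(weekly)
```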

The examples we’ve discussed here are probably the most obvious ones, yet there are many more.

So, here we are, and I look back on the naivety of the opinions of my former self. Conceptually, there’s still some truth in the notion that “SEND is just an electronic representation of your study report”, yet once we get into the detail, we see so many exceptions to this notion that the very idea can seem farcical at times.

As usual, hit me with an email (marc.ellison@instem.com) with your own thoughts on the matter, whether you agree, disagree or just wonder what on earth I’m rambling on about.

I’ll leave you with one final thought, “How do you write a good joke?”, “Well, I start with a laugh and work backwards!”. While a cheap gag in its own right, there’s a lesson there: Start with the end in mind. The end used to be a Study Report, but today that’s the SEND Dataset too.

Till next time,

Marc

Out with 2020, SEND in 2021

Well Happy New Year one and all!

After saying a final sayonara to 2020, I’m sure that you, like me and the rest of the world, are hoping that 2021 ends up being so much better than its predecessor. That’s got me thinking about what’s happening with SEND in 2021.

As discussed a couple of postings ago, the recent Federal Register Notice specified the coming February 26 as the date by which Sponsors must register their application to participate in the FDA’s Fit-For-Use pilot for SENDIG-DART. The expectation is that March through May should see participants creating datasets. Later in the year, the findings of the pilot, together with any recommendations, should be made public by FDA.

As usual, we can expect FDA to publish updates to the Technical Conformance Guide in March and October. Will they use these to further clarify their position on requesting that more studies be submitted in SEND format (see this posting for more details)? Will we see updates to reflect the needs of CBER and that center’s recent pilot of SEND? I’m thinking specifically here about representing ADA in SEND.

I’m sure we’ll see an update to the Technical Rejection Criteria too.

Somewhere around the end of the first quarter, we should finally see the publication of SEND 3.1.1 from CDISC, with its associated Conformance Rules. For me personally, this will be a huge milestone. SEND 3.1.1 is a change to the PC and PP domains, and the reason it means so much to me personally is that at CDISC, I lead the SEND PC/PP team. We have been working on this change since about 2017.

In its plainest terms, the change is simply that two timing variables in the PC domain moved from Permissible to Expected, while two others moved the other way, from Expected to Permissible. Also, the PCUSCHFL variable was added as permissible in order to make the domain consistent with the other findings domains.

SEND 3.1.1 has completed 2 rounds of public review, so the actual changes have been public for a while, but if you’d like further information, then feel free to drop me a line.

While on the face of it, this seems only a very slight change, the team have also completely reworked all of the examples and supporting text in order to be more applicable for nonclinical data. So yes, only a minor tweak to the actual variables, but a significant reworking of these sections of the IG and something that should improve the IG’s usability in this area.

2020 was the strangest of years. Here’s hoping that 2021 is better for us all.

Till the next post

Marc

Should SEND dictate your choice of CRO?

This month marks the anniversary of the start of the requirement for SEND. It has been 3 years since SEND became required for INDs and 4 years since its requirement for NDAs. In our world, that really is not a long time. Yet, for many of us, SEND has completely changed our world. So, in this blog post, I’d like to take a quick look at how much has changed and how much of that was unexpected, and offer some insights on how best to deal with that change.

There was a time when we used to hear some people say things like:

  • SEND is just an electronic representation of the Study Report Appendices for the Individual Animal Data Tables
  • If you didn’t collect those data before, you won’t need to start to collect them just for SEND
  • SEND is just a file format

I’ve even said some of those things myself. That first line was almost my mantra at conferences in the early days.

Yet today we are electronically capturing more data than ever before, and we seem to be making all manner of changes to accommodate SEND. So, what happened? How big of an impact is SEND having?

A director at a large CRO once told me:

“SEND has had the biggest impact on the industry since the introduction of GLP”

The magnitude of that statement has always stuck with me.

SEND has actually impacted the entire process: from placing a study at a CRO; to how data are collected; to dictating changes to lexicons to make them more SEND-friendly; to electronically capturing data that previously may have only been in an SOP or noted in the study report; to providing new tools and methods for how studies can be analyzed; right through to giving standardized electronic data a value beyond just the submission.

Obviously, that’s far too much to tackle in one blog post, so here I’ll look at how SEND impacts the decisions when placing a study with a CRO and I’ll return to the other topics in future posts.

Sponsors now need to understand their CRO’s capability and standard practices around SEND to ensure that both parties have aligned expectations. For some sponsors, this can be a challenge as they do not have in-house SEND expertise, and therefore may not know what their expectations around SEND should be, and are unclear about what the CRO is providing to them and whether it meets their needs.

Some CROs have a policy stating that sponsors must state their SEND requirements upfront as trying to accommodate them retrospectively may be quite challenging (to put it mildly), so it is important that such things are discussed and agreed from the outset. To be fair to the CROs, this is a fairly understandable position and at this point it’s worth noting a line from the FDA’s Technical Conformance Guide document:

“The ideal time to implement SEND is prior to the conduct of the study as it is very important that the results presented in the accompanying study report be traceable back to the original data collected.”

The FDA themselves are advising that we don’t let SEND become an afterthought. We are advised to consider SEND prior to conducting the study.

Also, what if you are a Sponsor with an established relationship with one of the smaller CROs who does not have any SEND capability? You may have been advised to stop using this CRO in favor of placing studies at one of the larger CROs; a CRO with an established track record in SEND. While on the face of it this may seem like sensible advice, you may find it unpalatable due to rising costs, or because the smaller CRO has a specialism in a certain area, or simply because you have an established relationship and work well together. Let’s face it, nobody wants SEND production to dictate which CRO they use, yet the SEND datasets shouldn’t be overlooked as they are the very data that the FDA will use in their review.

So, at this point you may be asking yourself:

  • What if I don’t even know what my SEND requirements are? How can I discuss them with the CRO?
  • What do I do if I’ve left it too late and my study is already complete?
  • What do I do if I want to work with a CRO who cannot provide SEND?

Don’t worry, that’s why we’re here. Uh oh, a commercial is coming… Instem provides SEND conversion, verification and consultancy services; and yes, that even includes creating SEND datasets from a final PDF study report. I’m not a salesperson, but we are the leading provider of SEND services with the foremost experts (sorry for the advert, I couldn’t resist, and our Marketing VP will love it).

Seriously, as always please email me with comments, questions or even disagreements at  marc.ellison@instem.com

Till next time…

Marc

The DART pilot is getting ready to fly: Part 2

In Part 1, I went into the detail of what the new SENDIG-DART standard covered; and now I’ll conclude by describing what’s involved in ensuring the industry is equipped with the necessary tools and services in order to successfully participate in the FDA’s Fit For Use (FFU) pilot for SENDIG-DART.

The view from the trenches

My involvement, from co-authoring the standard through to developing software and services for producing SENDIG-DART datasets, gives me a fairly unique point of view when it comes to the pilot.

These are the tools and the services on which the industry depends. Fortunately, earlier this year, we had some indication from CDISC and FDA that this pilot was coming. This forewarning was invaluable because the software can take the best part of a year to develop, and this needs to be done while still developing additional functionality and improvements to support SEND 3.1. This new DART functionality comes straight after we completed work on two key SEND 3.1 items: supporting the creation and consumption, firstly, of Custom Domains; and secondly, of Latin Square Safety Pharmacology studies. It felt like there was no time to draw breath before we began adding the 7 new DART domains.

As solutions director, arguably my most critical task is evaluating and prioritizing our development plans. Over the past few years, we prioritized SEND 3.1 over SENDIG-DART, even though both were published around the same time. This year, we have prioritized SENDIG-DART over SENDIG-AR and have had to delay some of the myriad other improvements that we’d love to implement to get us closer to that elusive ‘push button SEND’. It’s a difficult choice to implement support for a standard that is likely not to be required for at least another 3 years. That’s a huge investment for a vendor: to create something that will get used briefly for a pilot, but then sit dormant until it eventually becomes a submission requirement.

Getting ready for the FFU has been challenging, at times difficult but also wonderful to finally see this standard come into reality. Yes, for me this is a journey that started in 2012, but finally we have the tools to create the datasets. Finally, we can provide the services for DART that the industry will rely on to create and verify these datasets.

Implementing such solutions puts the standard under pressure, and we see just how robust it is. I discovered there are one or two items that could have had a clearer definition. One or two concepts that could have been better realized. Now I’m seeing things from the other side. Actually, I’m thrilled to finally have these datasets. We imagined them in CDISC meetings years ago and now they are a reality. Despite now knowing some improvements I’d like to see in SENDIG-DART, the truth is that it does hold up to the pressure, and I’m confident that this FDA FFU pilot will be very successful. It’s been a long, and at times difficult, road, but I’m really excited about it.

For reference, the Federal Register Notice is here:

https://www.federalregister.gov/documents/2020/10/22/2020-23393/fit-for-use-pilot-program-invitation-for-the-clinical-data-interchange-standards-consortium-for

Yes, this 2-part posting was a little more technical than usual and I brought us all into the weeds, even listing and describing actual domains. For my next post, normal service will be resumed, and you’ll be treated to my usual ramblings and single-minded opinions on the impact that SEND is having on our industry, looking at the whole process from placing a study at a CRO through to reporting and analyzing the study results.

As always, if you want to continue this conversation – drop me a line at marc.ellison@instem.com

Till next time

Marc

The DART pilot is getting ready to fly

It’s here. It’s finally here. It’s actually real. This is going to happen.

This is part 1 of a 2-part mini-series, so hang in there…

I’m referring to the FDA Federal Register Notice for the pilot of SENDIG-DART (that’s SEND for Reproductive Tox). It’s a new SEND standard, and the FDA are going to run a Fit-For-Use (FFU) pilot throughout 2021 as the first step in the process of adopting it and ultimately making it a required standard. Excitement and trepidation in equal measure.

For me, this was the slippery slope that ultimately caused me to be consumed by SEND. Back in 2012, I was asked to contribute to the development of the standard due to my background in developing the Repro Tox functionality in our study management solution, Provantis®. Four years later, in 2016, the standard was published; incidentally, within about a month of SEND 3.1 also being published. That actually makes for an interesting comparison: SEND 3.1 widened the scope to include both Safety Pharmacology studies and the use of Custom Domains for non-modeled data, and it has been required since March 2019 (though 2020 for IND submissions), yet SENDIG-DART has just quietly sat in the background, waiting for its day in the sun.

Time to Get Technical

I hope you are sitting comfortably, because this is where things get a little more technical than I’d normally include in a blog.

Let’s start by answering the burning question: What’s in SENDIG-DART v1.1?

It only covers Embryo-Fetal Development (EFD) studies. It doesn’t support Fertility, Juvenile Tox or any kind of multi-generation study. So, it expects females to arrive on study already pregnant, to be dosed during gestation, and for the study to end at C-section. As far as DART is concerned, it’s just about the simplest and most common study in the package.

To support EFD, the standard itself includes 7 new domains:

  • 2 Trial Design domains: Trial Stages and Trial Paths – Stages and Paths are to reproduction what Elements and Arms are to treatment.
  • 1 Special Purpose domain: Subject Stages – equivalent to Subject Elements, but for reproduction stages instead of treatment elements
  • 4 new findings domains:
    • Implantation Classification: IC – a little bit like demographics for the contents of the uterus.
    • Nonclinical Pregnancy Results: PY – results against the female, like pregnancy status and pre- and post-implantation loss
    • Fetal Measurements: FM – Results against individual fetuses, like the Fetal Weight
    • Fetal Pathology Findings: FX – Examination data for each fetus

In addition to this, SENDIG-DART v1.1 inherits everything from SEND 3.1, so all the existing domains (BW, CL, LB, etc.) are included. They have been enhanced with the addition of repro timing variables, so the timing of a result is shown as a number of days of gestation as well as still showing the study day.

Also, additional test codes are available in BW for Gravid Uterus Adjusted Bodyweight.
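To make that a little more concrete, here’s a rough sketch of what a couple of Fetal Measurements (FM) records might look like, following the usual SEND findings-domain pattern. The variable names, test code and values below are my own illustrative assumptions; the authoritative variable lists and controlled terms are those in the published SENDIG-DART v1.1 and its Controlled Terminology.

```python
# Illustrative only: two made-up Fetal Measurements (FM) rows following the
# general SEND findings-domain pattern (--TESTCD, --TEST, --ORRES, --STRESN, ...).
# The exact variable set and controlled terms come from SENDIG-DART v1.1 / CT.
import pandas as pd

fm = pd.DataFrame([
    {"STUDYID": "EFD-001", "DOMAIN": "FM", "USUBJID": "EFD-001-F012",
     "FMTESTCD": "FETUSWT", "FMTEST": "Fetal Weight",
     "FMORRES": "3.61", "FMORRESU": "g", "FMSTRESN": 3.61, "FMSTRESU": "g",
     "FMDY": 21},   # study day; DART also adds gestation-based timing alongside this
    {"STUDYID": "EFD-001", "DOMAIN": "FM", "USUBJID": "EFD-001-F013",
     "FMTESTCD": "FETUSWT", "FMTEST": "Fetal Weight",
     "FMORRES": "3.48", "FMORRESU": "g", "FMSTRESN": 3.48, "FMSTRESU": "g",
     "FMDY": 21},
])
print(fm)
```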

Come back in 2 weeks for Part 2; here’s an opening excerpt:

The view from the trenches

My involvement, from co-authoring the standard through to developing software and services for producing SENDIG-DART datasets, gives me a fairly unique point of view when it comes to the pilot.

As always, if you want to continue this conversation – drop me a note at marc.ellison@instem.com

Till then

Marc

There is nothing permanent except change

There’s a saying around here: “If you don’t like the weather… wait 10 minutes!”. Pre-COVID, I was fortunate that my work allowed me to travel, especially around the US. This is a phrase I’ve heard several times, and everyone thinks it’s unique to their area, but it’s actually pretty common.

I was reminded of the phrase this week as it seems pretty applicable to the world of SEND. Those of us whose world revolves around SEND have needed to get accustomed to change. Things can change rather quickly, and sometimes uncomfortably; but they do keep changing.

It’s only been 6 years since the guidance was published in 2014, yet in that time so much has changed. SEND is mandated for submission and a whole new industry – and in some cases brand new companies – have sprung up to service that need. My SOT presentation this year was entitled “Is the tail wagging the dog? Tackling the impact of SEND”, looking at 3 key ways that SEND has drastically changed everything from data collection through to data analysis. We see new Technical Conformance and Business Rules continually being published by FDA, and this year saw a new SEND standard hit the data standards catalogue (SENDIG-AR v1.0). Just last week saw the Federal Register Notice for a Fit-For-Use (FFU) Pilot for SENDIG-DART 1.1 (that’s SEND for Repro studies, in case you are not familiar with the IG). This sets us on the path to seeing Repro studies being required in SEND by FDA.

So, we’ve got used to change. We’ve learnt to be the type of people that are comfortable adapting. Comfortable embracing the new. Comfortable with having our horizons redrawn and our thinking challenged.

Last week threw up the latest seismic shift. It wasn’t the FRN about Repro studies; it was actually a passing line on a single slide of an FDA Public Meeting that changed everything. One bullet point, hidden amongst many others, seemed to tear up the rule book. Am I being overly dramatic? Perhaps, but I know I’m not alone in feeling this way.

On Wednesday morning, I’d just published my previous blog post, where I talked about the question “Does my study require SEND?” and how answering that simplest of questions seemed overly complex but ultimately led us to the eCTD, and how I’d praised the FDA for using the eCTD structure as a good, clear indicator to determine exactly which studies are in scope and which are out of scope.

I published it just before the FDA Public Meeting, which then included the line “The placement of a study into the eCTD does not determine the SEND requirement”. It’s since become clear that the agency has always considered SEND to have a wider scope than just a few key eCTD sections. FDA reviewers are embracing SEND and see it as massively useful. They do not want the requirement to be limited in this way. They are requiring more SEND. The message is clear: “If your study could be represented in SEND, then it should be in SEND”.

This begs the question: How do I know if my study could be represented in SEND? Answering that now takes a lot more SEND expertise (otherwise known as SEND-nerdiness!). This new definition is certainly a lot more difficult to work with than simply referencing the eCTD sections.

FDA may have always had this understanding of the scope, but it’s certainly taken industry by surprise. This means that FDA clearly expect more studies to be rendered in SEND than they are currently seeing.

So, it feels like this is the latest, in a long line of changes where SEND is further impacting industry. As sizable as this change is, I know we’ll adapt. We always do.

Till next time

Marc

SEND: Why did they have to make it so damn hard?!

In my previous blog, I recalled the tale of my first experience with SEND. I’d assumed that standardizing nonclinical data would be straightforward, after all we have standard glossaries and lexicons from the likes of INHAND, and isn’t one 28-day study pretty much like another? Yet, this SEND thing seemed so alien and impenetrable to me at first. I suspect it is for a lot of other people too.

The suspicious amongst us may wonder if those creating the standards intentionally made it as difficult as possible to keep themselves in a job for life. However, having now got on the inside of this CDISC thing, I can reassure you that this is not the case.

I think for most people, the first time they get a hint that this thing is going to be more complicated than they expected is when they ask the deceptively simple question: “Does my study require SEND?”. You’d be forgiven for expecting a straight Yes or No answer, but instead you’d be hit with a barrage of unexpected questions like “What is the study start date?”; “Where will it be placed in the eCTD structure?”; “Which FDA center will it be submitted to?”; “What type of submission is it?”. While these questions may take us by surprise at first, the reason behind them is that not all study types are applicable for SEND, and the eCTD sections are used as a good indicator of what’s in scope and what’s out of scope. Similarly, not all centers within FDA have adopted SEND, and so the list goes on. The agency has tried to be as gentle as it can with industry, giving us time to implement SEND and making sure that it’s only required where it is applicable.
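Just to show how many moving parts sit behind that one question, here’s a deliberately over-simplified sketch of the kind of triage logic involved. The cut-over dates and scope checks are rough illustrations of the sort of thing that matters; the real answer always comes from the current FDA guidance, the Technical Conformance Guide and the study data standards catalogue, not from a few lines of Python.

```python
# Deliberately simplified sketch of "does my study require SEND?" triage.
# Dates and checks are illustrative assumptions, not regulatory advice.
from datetime import date

def send_probably_required(study_start: date, submission_type: str,
                           center: str, study_type_in_scope: bool) -> bool:
    if center.upper() != "CDER":            # other centers have their own adoption timelines
        return False
    if not study_type_in_scope:             # e.g. study types not yet covered by a SEND IG
        return False
    if submission_type.upper() in ("NDA", "ANDA", "BLA"):
        return study_start > date(2016, 12, 17)
    if submission_type.upper() == "IND":
        return study_start > date(2017, 12, 17)
    return False

print(send_probably_required(date(2021, 3, 1), "IND", "CDER", True))  # True
```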

Getting past that simplest of questions is often like taking the Matrix’s Red Pill, and as the film’s character Morpheus says, “You take the red pill…you stay in Wonderland, and I show you how deep the rabbit hole goes”, because what comes next is a whole new world of interconnecting standards, guidance, regulations, publications, terminology and the like. We suddenly realize that SEND isn’t a single standard, it’s multiple standards. For example, a single study may have data using the SEND standard, but also requires the Define-XML standard. It needs to apply Controlled Terminology (known as CT, and at the time of writing there are almost 40 different versions to choose from). The study also includes an nSDRG and should follow the FDA’s TCG and comply with their TRC, and we suddenly find ourselves in a head-spinning world of acronyms (that’s Nonclinical Study Data Reviewers Guide, Technical Conformance Guide and Technical Rejection Criteria, just in case you were wondering).
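To give a feel for how many separate pieces travel together in a single package, here’s a tiny, illustrative sketch that checks a dataset folder for some of the usual suspects. The folder and file names are typical examples I’ve assumed for illustration, not an authoritative or complete list.

```python
# Illustrative check of a SEND package folder for a few of the expected pieces:
# domain .xpt files, a define.xml and an nSDRG PDF. Names are examples only.
from pathlib import Path

def missing_pieces(folder: str) -> list[str]:
    root = Path(folder)
    expected = ["define.xml", "nsdrg.pdf", "ts.xpt", "dm.xpt"]  # minimal, illustrative set
    return [name for name in expected if not (root / name).exists()]

print(missing_pieces("my-send-package"))   # hypothetical folder name
```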

Now drowning in multiple standards, you’d be forgiven for asking “Why didn’t they just write it all down in one place?” The answer is that each piece of this spinning jigsaw is written by a different organization or group. For example, CDISC writes the standard, but FDA requires the standard, PHUSE provides implementation guidance and NCI provides the CT (what? more acronyms!). It was important for FDA that they used an existing standard, created by industry themselves. So, we have multiple moving parts, each separately version controlled and each on different release cycles and timelines. For example, CT is published quarterly; FDA’s TCG is published twice annually; while the main SEND IG is updated roughly every 5 years.

Then we come to the thing that makes SEND so damn hard, by far: clinical baggage. The fact is that SEND is not really a nonclinical standard. It’s actually a clinical standard that has been butchered, I mean amended, into supporting nonclinical studies. This brings with it many, many advantages, but it also makes SEND really difficult for the novice.

Subjects on human clinical trials visit their clinic and are examined, questioned and have results taken. Often, they don’t turn up exactly when they should, and this is all captured in the concept of a visit, which turns up in SEND because it was baked into the clinical standard upon which SEND was based. This is just one of many examples where we may feel like we’ve slipped into a surreal parallel world. (For the most surreal nonclinical example, check out page 215 of the SEND IG, where it talks about noting why a result is NULL and actually gives the example “patient was asked but didn’t know”.) Yes, I’m being a little flippant here, but these are extreme examples of an underlying truth: SEND is based on clinical, and so it has concepts that seem a little out of whack with nonclinical. In order to correctly render our nonclinical data, we need to understand this and know how to use these concepts to best represent our data.

Oh, and I didn’t even get time to explain why SEND has its own language and ask if you knew what an “Ex-Teepee Tea” might be.

Till next time

Marc