Questioning my own SEND vocabulary

This week I’ve been thinking about some of the terminology we use to describe how we work with SEND. I’m worried that sometimes it may be misleading, misused or misunderstood.

An obvious example that springs to mind is the use of the word ‘dataset’. The correct use of this term in the context of SEND is to describe a single .xpt file of data. However, the word is often misused to describe the entire SEND package, which itself could contain 20 or 30 datasets, plus the define.xml and the nonclinical study data reviewer’s guide.
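
To make the distinction concrete, here’s a minimal sketch in Python (folder and file names are hypothetical): each .xpt file in the package is one dataset, while the package is the whole folder, define.xml and all.

```python
# A minimal sketch, assuming a hypothetical package folder layout.
# Each .xpt file is one dataset; the SEND package is the whole folder.
from pathlib import Path

import pandas as pd

package = Path("send_package")           # hypothetical submission folder
xpt_files = sorted(package.glob("*.xpt"))
print(f"This package contains {len(xpt_files)} datasets")

# Reading a single dataset, e.g. body weights, gives one table.
bw = pd.read_sas(package / "bw.xpt", format="xport", encoding="utf-8")
print(bw[["USUBJID", "BWTESTCD", "BWSTRESN", "BWSTRESU"]].head())
```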

Another word which causes me some discomfort in the context of SEND is ‘converting’, as in ‘converting the study data to SEND’. I must admit that I try to avoid it and instead use the phrase ‘representing the study data in SEND’. The data are not converted. They are the same data whether they are shown in a PDF table in the appendices of a study report, or in a SEND .xpt file. These are just two different representations of the same data. To me the most accurate description would be ‘to render data in SEND’; however, that is not a phrase I’ve ever heard anyone else use.

I am fully aware that I’m likely being overly fussy here. While I strive for a level of accuracy in how I describe the way we work with SEND, I understand that language evolves, and I am a strong believer in the idea that language is correct by usage. So, I accept that most people understand and are comfortable with the phrase ‘converting data to SEND’. And so, from time to time I find myself using it too.

This week was one of those times, as I’ve largely been focused on discussing the comprehensive range of services we offer around SEND. Like other organizations offering services around SEND, our mainstay has always been our ability to ‘convert data to SEND’. That term has become a useful shorthand for our main service. So, despite my reservations, I’ve found myself using it quite a lot this week in the various industry presentations I’ve been involved with.

In case you are interested in watching one of those recent presentations, you can check it out here: https://go.instem.com/webcast_send_advantage_services

So, yes, spending a week talking about the services we offer has got me questioning the vocabulary we use to describe the work.

‘till next time

Marc

The headache of clinical signs

How do you create a standardized model for data that vary drastically from lab to lab? Of course, I’m talking about clinical signs. When the clinical signs domain (CL) was first defined in SEND, it was clear to the modellers that every lab collects and reports clinical signs differently. So, the answer was to have a super-flexible domain with very little controlled terminology. No matter how data are collected or reported, they can somehow be fitted into this domain. Which means that any study can be represented. However, it also means that it’s near impossible to perform any kind of meaningful cross-study analysis.

Several brave souls banded together to finally address the headache of clinical signs and solve the unsolvable. Trying to impose a level of standardization which would mean that the clinical signs data could be used in cross-study analysis. To attempt to impose rules on the unruliest of domains.

And so, there is currently a highly controversial proposal on the table. To take the permissible and often ignored Result Category (CLRESCAT), add controlled terminology, and suddenly elevate its importance. Data can still be collected as they currently are. Symptoms, descriptors and modifiers do not need standardization; we simply assign each result to a category from a controlled list, which can then be used in cross-study analysis. It sounds like a simple and elegant solution to an impossible problem.
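
To illustrate the idea (and only the idea: the category values below are invented, since the controlled terminology is still just a proposal), here’s a small Python sketch of assigning each result to a category:

```python
# A hedged sketch of the CLRESCAT proposal. The results and the
# category list are purely illustrative -- no controlled terminology
# for CLRESCAT has been published at the time of writing.
import pandas as pd

cl = pd.DataFrame({
    "USUBJID": ["STUDY01-1001", "STUDY01-1002", "STUDY01-1003"],
    "CLSTRESC": ["HUNCHED POSTURE", "FUR STAINED, RED", "SALIVATION"],
})

# Hypothetical mapping from standardized result to result category.
category_map = {
    "HUNCHED POSTURE": "POSTURE",
    "FUR STAINED, RED": "FUR/SKIN",
    "SALIVATION": "SECRETION",
}
cl["CLRESCAT"] = cl["CLSTRESC"].map(category_map)
print(cl)  # categories, not verbatim terms, feed cross-study analysis
```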

However, the proposal has met with robust critique, making it one of the most controversial I’ve seen put forward to the CDISC SEND team. The concerns are valid. We all know that this domain really needs true standardization of the CLSTRESC variable (that’s the Clinical Signs Standardized Result in Character format, for those not yet fully fluent in SEND-ese); however, nobody is really prepared to attempt to boil that particular ocean.

It has given voice to the question that some may have thought, but never uttered out loud: What’s the point of cross-study analysis of clinical signs? I mean, the data are so subjective. One person’s mild symptom is another person’s moderate. How much value can we really put on these data?

I really admire the effort and ambition of those trying to add such standardization to clinical signs, yet I also question the value of it.

Such thoughts have led to wider discussions questioning the value of certain nonclinical data. Food and water consumption data, for example. In hushed circles, in a surreptitious discussion, someone described the comical scenario of group-housed subjects making sure the food is not spilled and is shared out equally, so as to give an accurate measure of consumption per subject. We know this is not what happens, and therefore the data are not accurate, so why do we place such value on them?

It reminds me of the words a senior scientist told me early in my career: You know, for nonclinical data, the only things that really matter are pathology and clinical pathology data. Everything else is too subjective.

Surely clinical signs are the most subjective and variable of the nonclinical data. Trying to add any truly useful standardization is admirable, but is it an impossible task?

I’d love to hear your thoughts. You can contact me at the usual address marc.ellison@instem.com

‘till next time

Marc

Still confused about the scope of SEND?

Since the FDA publicly presented clarification on the scope of SEND back in October 2020, the topic has been continually discussed and queried. The Technical Conformance Guide (TCG) was updated in September 2021 to give yet more detail and address specific areas that continue to cause confusion. It seems to me that prior to that point, there had been quite widespread misunderstanding around areas such as the GLP status of a study and whether that impacted the agency’s requirement for SEND datasets.

I think that the TCG update is extremely helpful in clarifying many areas of confusion. It clearly states that studies require SEND datasets regardless of study report status, GLP status, the drug substance or the age of the subject. On that last point, it means that many juvenile studies are in scope. It also clearly states that carcinogenicity studies are in scope, as are combined study types.

Aside from the requirement regarding the study start date, the document is essentially saying that the only studies out of scope of the SEND requirement are those that are not modelled in the current SEND Implementation Guide (IG). I’ve often paraphrased this as simply “If the study can be represented in SEND, then it should be represented in SEND”.

I believe it is widely acknowledged that, for many organizations, this is quite, well, unpalatable. Our industry is now at a point where SEND feels just about manageable for most typical studies. Having to generate SEND datasets for less-typical studies becomes problematic. These studies will, and do, require greater levels of SEND expertise and individual attention. Obviously, specialist SEND vendors exist that can assist in these situations.

At this point it’s worth considering the question “Which studies can in fact be represented in SEND?”. It may seem like a straightforward question; however, the answer is open to debate. Not every endpoint will have an example in the Implementation Guide. Looking at the Controlled Terminology (CT) gives us long lists of tests that can be included in SEND datasets, yet these CT lists are extensible, meaning that a test not on the list can still be included in SEND. For data which do not naturally fit in a published domain, SEND provides the facility to create custom domains, and of course we can include additional detail through non-standard variables.
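
For anyone unfamiliar with non-standard variables, they travel in a supplemental qualifier (SUPP--) dataset alongside the parent domain. Here’s a minimal illustrative sketch, with invented values:

```python
# An illustrative SUPP-- record attaching a non-standard variable to a
# parent CL record. All values here are invented for illustration.
import pandas as pd

suppcl = pd.DataFrame([{
    "STUDYID":  "STUDY01",        # hypothetical study
    "RDOMAIN":  "CL",             # the parent domain being supplemented
    "USUBJID":  "STUDY01-1001",
    "IDVAR":    "CLSEQ",          # links back to the parent record...
    "IDVARVAL": "3",              # ...where CLSEQ = 3
    "QNAM":     "OBSERVER",       # non-standard variable name (illustrative)
    "QLABEL":   "Observing Technician",
    "QVAL":     "TECH-42",
    "QORIG":    "COLLECTED",
}])
print(suppcl)
```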

Therefore, as we have such a flexible and customizable standard, answering this question becomes difficult, as almost any study can be represented in SEND depending upon the lengths the data provider is willing to go to in order to provide customized datasets.

So, despite being nearly two years since the agency first clarified the scope of the SEND requirement, we still see many organizations questioning it and seeking further clarification. While the TCG does provide specifics, the reality is that different data providers are able to interpret it differently depending upon their capability with SEND.

As usual, if you would like to share your thoughts on the subject or would like some help in clearing up any confusion, you can reach me at marc.ellison@instem.com

‘till next time

Marc

Everyone is talking about it, but is anyone doing it?

It’s the thing that immediately springs to mind when we first learn that SEND and SDTM are so closely related. The idea that there may be shared databases and tools, and most importantly, the nonclinical data and clinical data could be compared.

I’m guessing that anyone reading this blog will know that SEND is the standard for the representation of nonclinical data, but not all of us will be familiar with SDTM. It is the Study Data Tabulation Model, and the SDTM Implementation Guide (IG) is the clinical equivalent of SEND. Both the SEND IG and the SDTM IG are implementations of the same model, meaning that they share the same variables, concepts, and rules. Well, mostly. Enough that SDTM domains and SEND domains look the same. At least at first glance.

A lab result in the SDTM IG has pretty much the same variables and concepts as an equivalent result in SEND.
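
If you happen to have both kinds of data to hand, the overlap is easy to see for yourself. A quick sketch, with hypothetical file paths:

```python
# A rough way to see the SDTM/SEND overlap: load a clinical LB dataset
# and a nonclinical LB dataset and compare variable names. File paths
# are hypothetical.
import pandas as pd

sdtm_lb = pd.read_sas("clinical/lb.xpt", format="xport")
send_lb = pd.read_sas("nonclinical/lb.xpt", format="xport")

shared = sorted(set(sdtm_lb.columns) & set(send_lb.columns))
print(shared)  # expect the likes of USUBJID, LBTESTCD, LBSTRESN, LBSTRESU
```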

Upon realizing this, it’s no great leap to start to wonder about the possibilities of shared tools and visualizations. Both SDTM and SEND data could quite comfortably live in the same warehouse, which brings us on to the really big question: “If we see something unexpected in clinical, could we look back and see if we could have predicted this from the nonclinical data?”.

Yes, unsurprisingly, lots of individuals are talking about this possibility, at least on the nonclinical side, but is anyone actually doing it?

One of the counter-arguments often cited is that while SEND and SDTM may be almost identical in form, they are quite different in function. In the clinical world, tables, summaries and statistics are generally not run directly against the SDTM data; rather, the data are first represented in ADaM (the Analysis Data Model). In the nonclinical world, we generate tables, summaries and statistics straight off the SEND datasets. So, in terms of function, SEND is closer to ADaM than SDTM, yet ADaM is not based on the same underlying model, and so can look very different to SEND.
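
For example, a typical ‘straight off the SEND datasets’ calculation – group means per test, joined to dose groups from Demographics – is only a few lines of pandas. A sketch, assuming hypothetical files:

```python
# A minimal sketch of analysis run directly against SEND datasets:
# mean and SD per lab test and dose group. File contents hypothetical.
import pandas as pd

lb = pd.read_sas("lb.xpt", format="xport", encoding="utf-8")
dm = pd.read_sas("dm.xpt", format="xport", encoding="utf-8")

# Attach each subject's dose group (ARMCD) from Demographics.
merged = lb.merge(dm[["USUBJID", "ARMCD"]], on="USUBJID")

summary = (merged.groupby(["LBTESTCD", "ARMCD"])["LBSTRESN"]
                 .agg(["count", "mean", "std"]))
print(summary)  # the sort of table a study report would show
```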

However, when we see that SDTM and SEND look almost identical, we can’t help but be tempted to consider the possibilities of exploring what one could tell us about the other. So, I return to my opening thought. Everyone is talking about it, but nobody knows how to do it. We can all see the possibilities, and so we all think everyone else is doing it. We think we should be doing it too. Everyone is talking about it, but is anyone actually doing it?

As usual, please reach out to me to let me know your thoughts. If this is an area you are working in, I’d love to discuss these ideas further. Drop me a line at marc.ellison@instem.com

‘Till next time

Marc

Thinking about the challenges facing today’s SEND newbie

I started Sensible SEND by thinking back to 2012 and my initial impressions and experiences of SEND. You can read that early post here, but as a fish out of water I really struggled and was completely disoriented by this new world I’d landed in. I remember thinking to myself “These people speak a different language! What’s a Tee-Pee Tea Ref anyway?”

However, thinking about it now, I had it easy. Back in 2012 there was only one SEND Implementation Guide, and it was SEND 3.0. There were no FDA Business Rules, Validator Rules or CDISC Conformance Rules. Nobody talked about custom domains. I’m not sure there was even much discussion about the nSDRG. Back then, the define.xml was typically assumed to be some simple auto-generated table of contents. SEND was still several years away from becoming a submission requirement.

My biggest challenge was simply learning this new language. Just understanding why people kept talking about the differences between cats and rez-cats was as much as I could handle.

I know how long it took for me to feel that I was finally getting a handle on SEND, so I want to take a moment to spare a thought for those who are today’s SEND newbies.

They have all the same challenges of getting to grips with learning the IG and its disorienting variable names. However, they also have to get to grips with a regulatory landscape which seems far more complex now. We have multiple IGs, some of which inherit from a parent IG, as in the case of SENDIG-DART being the child of the main SEND 3.1 IG. We have different IGs with different requirement dates, and in the case of the main SEND IG, we have the same IG with different dates depending upon the FDA center receiving the submission.

Also, the situation regarding validator rules and tools is far more complex these days. Understanding the difference between FDA Business Rules, Validator Rules and CDISC Conformance Rules is challenging enough, but they keep changing, and all the tools are soon to change as CDISC CORE rolls out for SEND. CORE is the CDISC Open Rules Engine – just one more on the ever-growing list of acronyms that the SEND newbie needs to learn.

Fortunately, the quality of SEND training available today has significantly improved, and while available SEND experts are in limited supply, various organizations now offer consultancy and other services to assist in all aspects of managing SEND.

So, while SEND is more complex, there is more support available than ever before.

Thinking back to when I fell headfirst into SEND, it was tough, but I wouldn’t have missed it for anything!

‘Till next time

Marc

Who cares about Define files anyway?

For this week’s blog post, I thought it would be good to discuss the current use of the define.xml file within the nonclinical industry, so I’ve asked Charuta Bapat to discuss it with us, as her enthusiasm for Define-XML is bordering on infectious. Charuta works alongside me; she is responsible for our define tools and is a member of the CDISC nonclinical define team. Here’s her take on the subject:

Some people are passionate about music, some like art, some love nature, and in our pre-clinical world there are many SEND-addicts. I enjoy working with SEND, but ever since I entered the regulatory industry, my heart has been set on Define-XML. Yes, you read that right! I am one of those rare specimens who is excited by the define.xml and strongly believe in the value of creating great define files.

Imagine you are in a restaurant looking at the menu – the menu has a great cover with attractive colours. As you open it, you expect to see a list of your favourite dishes and their prices. But when you look closely, the menu doesn’t make sense at all. There are spelling mistakes, the descriptions are unreadable, the appetizers are mixed in with the desserts, and you cannot understand the price list. How would that make you feel? Would you start doubting the quality of the food? Would you rather go to a different restaurant? Would you feel frustrated at having to ask the waiter a series of questions? I suppose regulatory reviewers get the same sort of feeling when they receive an improperly structured define file. The define.xml is much like my menu: how the file is laid out shapes a reviewer’s first impression. It is basically a table of contents for your study, and most useful in describing the structure and content of the submitted data.
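
To show how literally the ‘table of contents’ idea holds, here is a small Python sketch that lists the datasets a define.xml describes (assuming Define-XML v2.0 namespaces; the file path is hypothetical):

```python
# A sketch that reads a define.xml as a table of contents, listing the
# datasets it describes. Assumes Define-XML v2.0 namespaces.
import xml.etree.ElementTree as ET

ns = {"odm": "http://www.cdisc.org/ns/odm/v1.3",
      "def": "http://www.cdisc.org/ns/def/v2.0"}

tree = ET.parse("define.xml")  # hypothetical path
for igd in tree.iter(f"{{{ns['odm']}}}ItemGroupDef"):
    label = igd.find("odm:Description/odm:TranslatedText", ns)
    structure = igd.get(f"{{{ns['def']}}}Structure")
    print(igd.get("Name"),
          "-", label.text if label is not None else "(no label)",
          "-", structure)
```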

Despite Define-XML being part of data submission for more than a decade, it’s evident that the nonclinical industry is still struggling with it. It is either a neglected or a worrying part of the submission, and regulatory authorities have not been transparent about their expectations for it. Sadly, many organizations treat the define.xml like a household chore.

As a result, many choose an approach that suits them: some don’t give it much importance, some submit system-generated default files, and some don’t understand the specification. But I have also seen those who go to great lengths to create truly good define files. Of course, they are the ones who see the benefit they deserve – their regulatory reviews are much smoother than others’.

Everyone feels the Define-XML specification is quite technical in nature, and that’s true, but I think this offers an advantage as well – it brings discipline and homogeneity. The file can not only be consumed by software but can also be read by a person. It creates a framework in a reviewer’s mind, as well as in the software, for receiving the actual data.

In my opinion, the define.xml has tremendous potential and we all need to work together with CDISC and the regulatory authorities to make the best use of it and unveil its true ability. All we need is the right mindset and efficient tools. We could turn this once laborious household chore into a key piece of valuable data for SEND packages. I know I could never stop talking about define. If you are interested in learning more about improving define files or chatting about the FDA’s feedback on this topic, get in touch with me – I’d love to connect with you!

Charuta

So, what did we learn from the FDA’s DART Fit For Use Pilot?

For most organizations, the FDA’s SENDIG-DART v1.1 Fit-For-Use (FFU) pilot started with the Federal Register notice being posted back in October 2020. For some of us it began earlier when it became apparent that the FDA had an appetite to adopt the SEND standard for Developmental and Reproductive Toxicology (DART). That meant a pilot was inevitable.

Fast forward several years to 2022. We can now see what lessons were learned from that pilot, as CDISC have published the details here on their wiki. There is a good deal of material to digest: the study data, study reports, FDA reports and a wealth of supporting documents. However, the learnings from the exercise are summarized in a set of PowerPoint slides compiled by CDISC. Still, that’s 21 slides crammed with information and recommendations, which is a lot to consume. Yet it is very timely, considering we are less than a year away from the standard becoming a regulatory submission requirement.

So, for those implementing SENDIG-DART v1.1, I’ll highlight just one area that seemed particularly troublesome for some of the organizations participating in the FFU pilot: the representation of the timing of results.

While for a General Toxicology study the study day and dose day are practically synonymous and interchangeable, this is not typically the case with a DART study. Here we have the Dose Day, the Study Day and the Gestation Day as three independent concepts. They may happen to occur on the same calendar date, but they are independent. Of these, the Gestation Day is the concept typically used to report data. Obviously, anyone coming to SENDIG-DART from a background in SEND 3.1 is going to find such a concept quite jarring. In General Toxicology, Study Day is king, yet in DART it takes a back seat as the repro phase day – in this case the gestation day – takes precedence for reporting and analysis. This is described in the SENDIG-DART, but it was not always understood by all FFU participants.
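
Here’s a hedged sketch of those three independent day scales. The dates and the gestation day convention (mating date as GD 0) are assumptions for illustration, not SENDIG-DART terminology:

```python
# Three day scales computed independently for the same observation.
# Dates are invented; GD 0 = mating date is an assumed convention.
from datetime import date

ref_start = date(2022, 3, 1)   # subject's reference start (RFSTDTC)
first_dose = date(2022, 3, 8)  # date of first dose
mating = date(2022, 3, 5)      # gestation day 0 reference

def send_day(d, start):
    # SEND/SDTM study-day convention: no day 0; the start date is day 1.
    delta = (d - start).days
    return delta + 1 if delta >= 0 else delta

obs = date(2022, 3, 12)        # date of some observation
print("Study day:    ", send_day(obs, ref_start))   # -> 12
print("Dose day:     ", send_day(obs, first_dose))  # -> 5
print("Gestation day:", (obs - mating).days)        # -> 7
```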

While there are many details noted and points to be considered in the FFU learnings, I wanted to highlight this issue because it is so crucial. Like many of the other issues highlighted by the pilot, the guidance and instructions are in the SENDIG-DART but are not always well understood. Therefore, it seems to me that the SENDIG-DART is even trickier to interpret than the main SEND Implementation Guide. Yes, SENDIG-DART includes everything from the main guide and all the pitfalls we’ve all spent years learning to navigate, but DART brings with it its own set of complications and subtleties. I’m strongly recommending that any organization implementing SENDIG-DART pay close attention to every part of the Implementation Guide. Every detail is important and not to be glossed over. For many organizations, perhaps those with an already stretched SEND team or a low volume of DART studies, I’d go as far as to suggest that they shouldn’t look to implement SENDIG-DART themselves, and should instead consider outsourcing to an organization already experienced in and capable of supporting this tricky standard. For those that are implementing it, I’d advise taking advantage of the training, support and assistance that their partners can provide.

‘Till next time

Marc

Silence speaks volumes

In my last post, we were talking about the CDISC working conference, where I mentioned that one of the highlights of the week was the FDA public webcast. In the past, these have sometimes included surprising announcements which generated a flurry of activity as we all tried to digest and respond. This time around was a little different, but nonetheless curious.

Over the years, this public meeting has grown and changed in format. Back when I first became a CDISC volunteer, this meeting seemed like a cozy fireside chat between some FDA reps and some CDISC volunteers. Full of candor, it had the air of old friends catching up over a coffee. Maybe I have nostalgic goggles on. Maybe.

Today it has grown into a far more formal and inclusive event. Presenters now include representatives from the likes of PMDA in Japan and BioCelerate. The discussion between CDISC and FDA is also now more formalized with written questions being submitted in advance to eData, and an FDA representative giving pre-prepared answers with no provision for further questions or follow up discussion.

To the casual observer, the meeting itself would perhaps have seemed a little uneventful, with the agency choosing to answer a few questions with fairly uncontroversial answers. However, scratch beneath the surface and there’s more to this story. To me, far more interesting is what was not said. I’m referring to the fact that the questions submitted included detailed background and assumptions regarding the scope of SEND before we got to the seven actual questions. The agency then chose to answer only two of these and chose not to respond to the other five. Obviously, it is highly speculative to try to guess why the agency chose to respond to some questions and not others, but my personal opinion is that in not answering those questions, the message being given is that the ‘Scope of SEND’ section in the Technical Conformance Guide is clear and should be followed, regardless of the challenges it may bring. I say this because the unanswered questions generally relate to the difficulty a few organizations face in following the Scope of SEND statement. In the presentation, the agency acknowledged that they didn’t answer the specific questions, but again reiterated the point that if the study is required to support safety decisions, and if the study can be modeled in SEND, then it should be submitted in SEND. No matter the challenges that brings.

Some organizations and specialist SEND service providers are in a good place and already comply with the Scope of SEND statement. Yet I am aware that there are some organizations that really struggle with it, due to a lack of investment, resources or expertise. Also, some may not have adequately planned SEND into their timelines.

As I mentioned earlier, the event is more inclusive now, and interestingly it was the BioCelerate presentation regarding cross-study analysis that seemed the most controversial and provoked the most discussion. Soon the discussion turned to the question “is SEND for analysis, harmonization or exchange?” but I think that’s probably a great blog discussion for another day.

‘Till next time

Marc

A week of working on 3 separate Implementation Guides

This is the week of the CDISC Spring SEND Virtual Face-to-Face meeting – and yes, that’s still an oxymoron, but hopefully for the final time, as future Face-to-Face meetings are planned to be back in person again. While the week mainly focuses CDISC volunteers on progressing the standard, one of the highlights of the event is the FDA public meeting on Wednesday 27th April. This sees the FDA give feedback on SEND and is open to anyone wishing to attend.

For the Spring Face-to-Face there are three Implementation Guides being worked on in parallel, as each grows nearer completion. Personally, I’m heavily focused on SENDIG-DART v1.2, scheduled for publication in early 2023. This widens the scope of the DART IG to include juvenile toxicology studies reported by Post-Natal Day – in other words, reporting by age rather than by study day. This is in direct response to the FDA requirement in the Technical Conformance Guide:

The age of the animal at study start does not impact whether the SEND requirement applies. Dedicated juvenile animal studies that typically include multiple phases cannot currently be modelled in FDA-supported SENDIGs and therefore would not require SEND. However, when general toxicology studies (single- or repeat-dose) are conducted with juvenile animals (e.g., young, post-weaning animals), SEND is required as outlined above.

FDA Technical Conformance Guide

So, our team are not going as far as modeling “…studies that typically include multiple phases”, but we are providing examples representing “general toxicology studies (single- or repeat-dose)… conducted with juvenile animals”. Of course, this can all be represented in the current version of SEND; however, there are no examples of representing results in a way that allows them to be analysed relative to the age of the subject. So, aside from Developmental Milestones, this version of the DART IG doesn’t really introduce any new concepts, but simply provides examples of best practice using the existing concepts.
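
As a tiny illustration of reporting by age, here’s how a Post-Natal Day might be derived from a birth date. The convention shown (birth date as PND 0) is an assumption; conventions vary between labs:

```python
# Deriving an age-based day scale (Post-Natal Day) for a result, so
# data can be analysed by age rather than study day. PND 0 at birth
# is an assumed convention here; some labs count birth as PND 1.
from datetime import date

birth = date(2022, 4, 2)   # subject's birth date (hypothetical)
obs = date(2022, 4, 23)    # date of an observation

pnd = (obs - birth).days   # PND 0 on the day of birth
print("Post-Natal Day:", pnd)  # -> 21
```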

Also being worked on this week is the SENDIG-GT for Genetic Tox, which models Micronucleus and Comet Assay data in SEND, also due for publication in 2023.

However, by far the most effort is focused on SEND 3.2, the next iteration of the main Implementation Guide. Many, if not all, sections of the guide are being worked on in some way, as the work involves dealing with a plethora of suggested minor improvements. In addition, a few areas are receiving more significant change. Probably the most noteworthy is the addition of a couple of new domains being developed to facilitate the modeling of Immune System data. There’s also a new domain for TK data, and significant work on both EX and Trial Design. In fact, there are many subteams busily beavering away in their own areas, too numerous to mention individually in this one blog post. Though not due for publication until 2024, this version has been many years in development already, and so this feels like the final push to get the content complete so the CDISC publication process and public review can begin.

‘Till next time,

Marc

In the room where it happens

For a variety of different reasons, at several times this last week, I’ve reiterated my motivations for being a CDISC volunteer. I’ve discussed how I first fell into volunteering almost accidentally, not knowing what I was getting into, and discovering I had a real passion for SEND. I’ve also been talking about the importance of being involved in the process of shaping SEND.

With lockdowns and restrictions easing, I recently got to take my daughter to see Hamilton. Originally booked for 2020, it’s finally back in production. One of the big musical numbers has the hook “in the room where it happens”, referring to being party to important decisions – in the case of Hamilton, decisions that shaped the United States.

For me, and my fellow CDISC volunteers, it’s important that we are “in the room where it happens”. It’s one of the things that allows us to speak about SEND with such confidence and authority. It’s no coincidence that many of the SEND-related presentations this year at SOT in San Diego were given by CDISC volunteers. Those who were “in the room where it happens”.

While we are talking about SOT: I was unable to attend myself, but the reports from my on-site team suggested that for many individuals and organizations, the conversation had moved on from “what is SEND?”, “Do I have to have SEND?” and “Is SEND really required?” to discussions around visualizing SEND and using the standardized data for mining, cross-study analysis and generally getting greater value from the data. Indeed, for many organizations, there seems to have been a shift away from converting only the minimum required amount of data, towards an appetite to have more and more data in SEND, such is its value.

I started this post saying that I’ve been reiterating my motivations for volunteering, and hearing about the value of SEND speaks to my biggest motivation of all: I believe in SEND. I believe it ensures that the experimental drugs given to vulnerable patients in clinical trials are safer. I believe that having more studies in SEND leads to better decisions, and ultimately that means safer drugs and better treatments. I believe that more data in SEND ultimately speeds up drug development and, to paraphrase my own organization’s mission statement, brings new treatments to people, faster.

I hope that idea comes across to even the most casual reader of this blog. Ultimately, SEND, though just one piece in a vast jigsaw, really does improve the drug development process. For that reason, just like I’m hearing from many organizations, I want to see more SEND. I want to see more studies in SEND. I want to see more datatypes in SEND. I love working with the other pioneers who share the desire to see beyond compliance.

So, I love being “in the room where it happens”, I love debating SEND, and I love seeing SEND put to greater use because I believe in the value of SEND.

As usual, if you would like to discuss any of these points, please feel free to drop me a line marc.ellison@instem.com

Till next time,

Marc