How do you create a standardized model for data that vary drastically from lab to lab? Of course, I’m talking about clinical signs. When the clinical signs domain (CL) was first defined in SEND, it was clear to the modellers that every lab collects and reports clinical signs differently. So, the answer was to have a super-flexible domain with very little controlled terminology. No matter how data are collected or reported, they can somehow be fitted into this domain, which means that any study can be represented. However, it also means that it’s nearly impossible to perform any kind of meaningful cross-study analysis.

Several brave souls banded together to finally address the headache of clinical signs and solve the unsolvable: to impose a level of standardization that would allow clinical signs data to be used in cross-study analysis. To attempt to impose rules on the unruliest of domains.

And so, there is currently a highly controversial proposal on the table. To take the permissible and often ignored Result Category (CLRESCAT), add controlled terminology, and suddenly elevate its importance. Data can still be collected as they currently are. Symptoms, descriptors and modifiers do not need standardization; we simply assign each result to a category from a controlled list, which can then be used in cross-study analysis. It sounds like a simple and elegant solution to an impossible problem.
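To make the idea concrete, here is a minimal sketch of how verbatim results from different labs might be assigned to a controlled result category. The category names and keyword lists below are entirely hypothetical, invented for illustration; they are not actual CDISC controlled terminology, and a real implementation would use the agreed CLRESCAT codelist.

```python
# Hypothetical sketch: mapping lab-specific verbatim clinical sign results
# to a small controlled list of result categories (CLRESCAT).
# The category names and keywords below are illustrative only,
# NOT actual CDISC controlled terminology.

CATEGORY_KEYWORDS = {
    "ACTIVITY DECREASED": ["hypoactiv", "lethargic", "subdued"],
    "SALIVATION": ["salivation", "drooling"],
    "SKIN FINDING": ["erythema", "scab", "alopecia"],
}

def assign_clrescat(verbatim_result: str) -> str:
    """Assign a controlled result category to a verbatim finding."""
    text = verbatim_result.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "OTHER"  # fall through when no keyword matches

# Two labs report the same observation in different words,
# but both records land in the same category, which is what
# makes cross-study pooling possible.
print(assign_clrescat("Animal appears lethargic"))      # ACTIVITY DECREASED
print(assign_clrescat("Slight hypoactivity observed"))  # ACTIVITY DECREASED
```

The collected verbatim text stays exactly as the lab recorded it; only the derived category is standardized, which is what makes the proposal feel lightweight compared to standardizing the results themselves.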

However, it has drawn robust critique, and it is one of the most controversial proposals I’ve seen put forward to the CDISC SEND team. The concerns are valid. We all know that this domain really needs true standardization of the CLSTRESC variable (that’s the Clinical Signs Standardized Result in Character format, for those not yet fully fluent in SEND-ese); however, nobody is really prepared to attempt to boil that particular ocean.

It has given voice to the question that some may have thought, but never uttered out loud: What’s the point of cross-study analysis of clinical signs? I mean, the data are so subjective. One person’s mild symptom is another person’s moderate. How much value can we really put on these data?

I really admire the effort and ambition of those trying to add such standardization to clinical signs, yet I also question the value of it.

Such thoughts have led to wider discussions questioning the value of certain nonclinical data. Food and water consumption data, for example. In hushed circles, one surreptitious discussion described the comical scenario of group-housed subjects ensuring that food is never spilled and is shared out equally, so as to give an accurate measure of consumption per subject. We know this is not the case, and therefore the data are not accurate, so why do we place such value on them?

It reminds me of the words a senior scientist told me early in my career: You know, for nonclinical data, the only things that really matter are pathology and clinical pathology data. Everything else is too subjective.

Surely clinical signs are the most subjective and variable of the nonclinical data. Trying to add any truly useful standardization is admirable, but is it an impossible task?

I’d love to hear your thoughts. You can contact me at the usual address.

‘till next time


Published by Marc Ellison

Self-confessed SEND nerd who loves geeking out about everything to do with SEND. Active CDISC volunteer and member of the CDISC SEND extended leadership team. Director of SEND solutions at Instem, responsible for all our industry-leading SEND products and services.
