A weekly summary of chat.fhir.org to help you stay up to date on important discussions in the FHIR world.
@Grahame Grieve when you can share any updates, we are about to have a conversation with HL7 CG WG (Tuesday April 4th) on this topic - from a broader perspective of "FHIR Test Cases" in general. Just wondering if any notes can be shared.
not at this point
only that I am planning to add a set of test cases for $expand and $validate-code to the test cases github repo that explore known issues around expansion and validation
@Grahame Grieve, @Michael Lawley: If this page is to be believed: https://confluence.hl7australia.com/display/COOP/2023-03+Sydney+Connectathon, the Australian connectathon was on March 23rd... Do you already have some news about the Publisher/Terminology server interface?
nothing final yet. I will update when there's news
ok there's news. There's test cases here: https://github.com/FHIR/fhir-test-cases/tree/master/tx
From the next release of the validator, you can run them like this:
java -jar validator.jar -txTests -source https://github.com/FHIR/fhir-test-cases -output /Users/grahamegrieve/temp/txTests -tx http://tx-dev.fhir.org -version 4.0
there's a fair bit of work to go here, but this is the shape of where things are going
@Grahame Grieve What's the preferred way to provide feedback on these tests - questions, apparent bugs, etc?
Currently I have issues with the REGEX test, a bunch of the language tests, and the big-echo-no-limit test which seems to require a system to refuse to return an expansion with more than 1000 codes?
Wrt the language tests, language-echo-en-en, language-echo-de-de seem to suggest that the expansion should set ValueSet.language based on the displayLanguage parameter to $expand. But, that would then imply that the entire result ValueSet is in that language rather than just the ValueSet.expansion.contains.display values (which is all that parameter is really requesting).
For the translated CodeSystems in the language tests, none of the translations have a use value, so I (Ontoserver) can't know that they should be used as the preferredForLanguage display value.
Last question: is there a branch available with the -txTests option
What's the preferred way to provide feedback on these tests - questions, apparent bugs, etc?
discussion here first, I think.
Currently I have issues with the REGEX test
what?
the big-echo-no-limit test which seems to require a system to refuse to return an expansion with more than 1000 codes?
well, this is something we'll have to figure out. The test reflects how my servers work. It's not necessarily how other systems have to work, so we'll have to figure out how to say that in the tests
Wrt the language tests, language-echo-en-en, language-echo-de-de seem to suggest that the expansion should set ValueSet.language based on the displayLanguage parameter to $expand. But, that would then imply that the entire result ValueSet is in that language rather than just the ValueSet.expansion.contains.display values (which is all that parameter is really requesting).
I sure expected some discussion on this. There are two different things that you might want - languages on display, and languages on the response. The way the tests work, if you specify one or more display languages, you get displays defined for those languages
But the language of the response - the ValueSet.language - that's based on the language parameter of the parameters, which controls how the available displays are represented in the response, based on the value of ValueSet.language
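To make the distinction concrete, a sketch of an $expand request asking for German displays on one of the test value sets (illustrative only; displayLanguage is a code in the R4 operation definition):
{
  "resourceType" : "Parameters",
  "parameter" : [{
    "name" : "url",
    "valueUri" : "http://hl7.org/fhir/test/ValueSet/simple-all"
  },{
    "name" : "displayLanguage",
    "valueCode" : "de"
  }]
}
On the behaviour described above, this affects the contains.display values that come back, while ValueSet.language on the response is governed separately by the language parameter.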
with regard to the use parameter, I don't believe that the spec says anywhere that there is a preferredForLanguage code, so how can that be in the tests?
is there a branch available with the -txTests option
the master has that now
I now have the validator test runner going, but I think it is being really overzealous in the level of alignment it's looking for between the expected response and the actual response.
First two issues: .meta and .id -- I don't think either of these should be included in the comparison.
Next one: ValueSet.expansion.id -- that's purely a server-specific value
.meta and .id... I'm not producing them, right?
.expansion.id? or expansion.identifier?
Regarding the regex issue, we're limited to Lucene's flavour which does not include character classes like '\S' or '\d'.
.id is in simple/simple-expand-all-response-valueSet.json for example. I produce .meta but not .id
ouch. would you like to propose an alternative regex?
".{4}[0-9]"
would work for me in this example, but it's not quite the same. The more accurate "[^ \t\r\n\f]{4}[0-9]"
would also work.
And yes, I did mean expansion.identifier, but I think this was a false negative -- me misreading the output
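For context, a regex filter in a ValueSet compose looks roughly like this (a sketch only; the property name, system and regex value here are illustrative, not the actual content of the REGEX test):
{
  "resourceType" : "ValueSet",
  "url" : "http://example.org/fhir/ValueSet/regex-example",
  "status" : "active",
  "compose" : {
    "include" : [{
      "system" : "http://hl7.org/fhir/test/CodeSystem/simple",
      "filter" : [{
        "property" : "code",
        "op" : "regex",
        "value" : ".{4}[0-9]"
      }]
    }]
  }
}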
I will commit some changes when I can
btw, what are you putting in meta?
Require no id, or just don't care? I think a bunch of things should be don't care
Meta was including a version (doesn't really make sense) but also a lastUpdated
I think for id, it shouldn't have an id? I just stopped regurgitating the id, which was basically an oversight
It would also potentially propagate tags
What if it's using a stored expansion?
in this context?
Well, no, but I'm thinking that these tests should really only be looking for things that are known to be wrong
perhaps. They're also my own internal qa tests. that might be too much, I guess, but I'm hoping not
I was thinking that the expected response in the test would set the scope of required elements, and other things would just be ignored
you assume that I'm sure what the answer is there
I'm guessing there's a way to require an element but ignore the value
I'm not even sure that it can have a known answer
there is, yes
I've got a bunch of time later today to dig into this in detail, so I can hopefully provide coherent feedback rather than piecemeal reactions
ok great
Back quickly to .expansion.identifier, this is what I'm seeing:
Group simple-cases
Test simple-expand-all: Fail
string property values differ at .expansion.identifier
Expected :$uuid$
Actual :4aa6f81f-ab79-41b2-96e2-6faa0aadc38c
well, that's not a valid value
oh! it needs the urn:uuid: prefix?
yes
But the type is uri? which can be absolute or relative
well... a URI can be, but in this case:
uniquely identifies this expansion of the valueset
I think it should be absolute
There are several places in the spec where we missed this when we allowed relative URIs
uniquely in what scope though? wrt that specific tx server endpoint, or globally, or in some deployment environment?
I don't think you can legitimately enforce it to be a UUID (it might be something like [base]/expansion/[UUID], which would be "unique" and absolute)
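For reference, the fix for the failure above would be as simple as prefixing the UUID, e.g. (a minimal fragment, other expansion content omitted):
{
  "expansion" : {
    "identifier" : "urn:uuid:4aa6f81f-ab79-41b2-96e2-6faa0aadc38c"
  }
}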
This one is perhaps tricky:
Several tests expect an expansion parameter for excludeNested but Ontoserver always behaves as if this was true, and so omits it because its value does not affect Ontoserver's behaviour.
That's less than ideal from my pov, and probably excludes Ontoserver from serving for HL7 IGs. Maybe. I'll think about the testing ramifications. Is that fact visible in the terminology capabilities statement?
You have it as a uuid anyway, so prefixing isn’t going to be a problem? And the intent is global since expansions are sometimes cached and reused. Sometimes at scale
Globally unique is fine, but then I'd be tempted to adopt a URI based on the template [base]/expansion/[UUID], e.g., https://tx.ontoserver.csiro.au/expansion/4aa6f81f-ab79-41b2-96e2-6faa0aadc38c.
But in principle, if the spec says URI, unique identifier, then I don't think it's good form to impose additional constraints.
Ontoserver does return TerminologyCapabilities.expansion.hierarchical = false
But the meaning of excludeNested is only about the result representation (true => MUST return a flat expansion), it does not affect the logical content of the expansion.
Is there a reason you think that parameter should be included?
Conversely, Ontoserver redundantly includes offset and count values in the expansion parameters even if they haven't had any impact on paging.
I fully expect we'll need to do some adjustments in this space
Is there a reason you think that parameter should be included?
IG Authors have raised issues before when the expansion in the IG loses the hierarchy
@Michael Lawley I've been thinking about this one:
omits it because its value does not affect Ontoserver's behaviour.
That's wrong - the parameters are to inform a consumer how the value set was expanded. Whether or not Ontoserver can or can't is not the point, it's how it acted when doing the expansion
Ontoserver redundantly includes offset and count values in the expansion parameters even if they haven't had any impact on paging
offset = 0, presumably, but what's count in that case?
But the presence/absence/value of excludeNested doesn't affect "expansion" (i.e., which codes are present), it only potentially affects how those codes are returned in the ValueSet.expansion.contains.
Grahame Grieve said:
Ontoserver redundantly includes offset and count values in the expansion parameters even if they haven't had any impact on paging
offset = 0, presumably, but what's count in that case?
MAXINT
it still affects the expansion even if it doesn't affect which codes are present
If a consumer is looking through a set of expansions, instead of just generating a new one, then it's going to be input into their choice
I had been approaching it from the perspective of judging whether or not a persisted expansion is re-usable for a different expansion request.
(Which is something that Ontoserver does when it has a ValueSet with a stored expansion.)
indeed, but you're only thinking of it in your context, it could/would also be done in expansion users that can't make the assumption you're making
I'm trying to think about this from the perspective of a client / consumer of ValueSet.expansion -- under what circumstances do they need to know excludeNested = true? What is it actually telling them?
One answer might be "this value was provided for this expansion parameter in the original request"?
that this expansion will not contain nested contains, even if that might be relevant for this value set
Also, what should Ontoserver do if the request was $expand?excludeNested=false? Should it state that in the parameters even though the actual expansion may have (if it was present) flattened any nesting? Or, should it change it to true because flattening might have happened?
Perhaps the message is just "as a client, you do not have to look for nested codes when processing this expansion"?
well I think that the server should return an exception if the client asked it to do something it can't do
But that's not what excludeNested=false means. It's not the same as saying "include nested"
no that's true
and you don't know whether flattening is a thing that happened or not, I presume
correct
Now looking at all the validation test cases, the system parameter has the wrong type (valueString not valueUri) and, in the responses, code also has the wrong type (valueString instead of valueCode), and similarly for system in the responses
wow, that's bad on my part. Fixed
nearly - still problems with the system parameter
diff --git a/tx/validation/simple-code-bad-code-request-parameters.json b/tx/validation/simple-code-bad-code-request-parameters.json
index 077c424..59d292a 100644
--- a/tx/validation/simple-code-bad-code-request-parameters.json
+++ b/tx/validation/simple-code-bad-code-request-parameters.json
@@ -8,6 +8,6 @@
"valueCode" : "code1x"
},{
"name" : "system",
- "valueString" : "http://hl7.org/fhir/test/CodeSystem/simple"
+ "valueUri" : "http://hl7.org/fhir/test/CodeSystem/simple"
}]
}
In validation/simple-code-implied-good-request-parameters.json, there is a non-standard parameter implySystem:
{
"resourceType" : "Parameters",
"parameter" : [{
"name" : "url",
"valueUri" : "http://hl7.org/fhir/test/ValueSet/simple-all"
},{
"name" : "code",
"valueCode" : "code1"
},{
"name" : "implySystem",
"valueBoolean" : true
}]
}
indeed there is, and there should be, right?
it indicates that it is intentional that there's no system and the server should infer what the system is
But that is an invented non-standard parameter?
The use-case here seems to be that the system isn't knowable by the calling client, but in the context of validation, why wouldn't the system be known; there should be bindings available?
it's a code type, so there's only a code, and the server is asked to imply the system from the code and the value set
agree I haven't proposed that parameter, but it's still needed
Yes, its a code type, but that must exist in some context, right? The context should provide the system?
the value set itself is the context
What are the boundaries here? Can the ValueSet contain codes from > 1 code system? Can the code be non-unique in the valueset expansion?
The value set can contain codes from more than one code system, yes. A number of them do. The code must be unique in the value set else it's an error
Presumably the system parameter does also need to be provided (from the documentation of $validate-code.code: "the code that is to be validated. If a code is provided, a system or a context must be provided"). Does the client just pass a dummy system that is ignored?
no the system is not provided in this case
since there isn't one
and yes, that violates the documentation on that parameter
And is it only ever used when supplying the code parameter?
yes. it must be accompanied by a code and a value set
I fixed the remaining system parameters
for examples like validation/simple-code-bad-display-response-parameters.json, why is the result true when the display is invalid? The specification for the result output parameter is:
True if the concept details supplied are valid
Another test case issue: mis-named input parameter. See, for example, validation/simple-code-bad-version1-request-parameters.json, which includes a parameter version that should instead be systemVersion.
why is the result true when the display is invalid?
Because that's just a warning - the code has been judged to be in the value set
a parameter version that should instead be systemVersion
ouch
why is the result true when the display is invalid?
Because that's just a warning - the code has been judged to be in the value set
But if a display is provided it should be validated, and if it's not any of the displays listed by the CodeSystem, then it is invalid -- the definition of result is not "True if the code is a member of the ValueSet/CodeSystem", but rather "if the concept details supplied are valid"; display is one of these details.
I am very uncomfortable about relaxing display validation affecting the outcome due to the prevalence of EHRs that allow for the display to be edited arbitrarily.
well, I'm very sure that if I changed to an error instead of a warning, the IG authoring community would completely rebel, but I guess TI might want to have an opinion. So what do other people think?
There are lots of reasons for display not being valid. (E.g. If someone has a code system supplement the validator doesn't know about.)
Why is the IG authoring community using non-valid displays?
there's 4 reasons that I've seen:
Note, I am more concerned about the clinical community than the IG community.
If this is an impasse, perhaps the mode flag should be used to relax things?
Either way, I think we need an explicitly agreed mechanism to use the "issues" to flag the invalid display text.
Also, I think the test extensions-echo-all is wrong at least in assuming supplements will be automagically included
ValueSet display should succeed
TI decided otherwise; that's no longer allowed
I expect that TI will choose to decide this in NOLA. You going to be there?
Either way, I think we need an explicitly agreed mechanism to use the "issues" to flag the invalid display text.
the tests are doing that now
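As a sketch of what that mechanism might look like in a $validate-code response (shape based on the issues/message/result structure shown later in this thread; the issue code, location and exact wording here are illustrative assumptions, with the display text taken from the bad-display error message below):
{
  "resourceType" : "Parameters",
  "parameter" : [{
    "name" : "result",
    "valueBoolean" : true
  },{
    "name" : "message",
    "valueString" : "The code exists in the ValueSet, but the display \"Anzeige 1\" is incorrect"
  },{
    "name" : "issues",
    "resource" : {
      "resourceType" : "OperationOutcome",
      "issue" : [{
        "severity" : "warning",
        "code" : "invalid",
        "details" : {
          "text" : "The code exists in the ValueSet, but the display \"Anzeige 1\" is incorrect"
        },
        "location" : ["display"]
      }]
    }
  }]
}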
This has been discussed, at some length, with regard to SNOMED CT descriptions, and I recall that @Dion McMurtrie produced a table with various permutations in the early days of SNOMED on FHIR.
Unless the edition and version of SCT is provided, it's not possible to determine the validity of an unrecognized description. Otherwise, the best a server can do is return the preferred term from its default edition & version and a warning.
well, this discussion is not just about SCT that's for sure
Also, I think the test extensions-echo-all is wrong at least in assuming supplements will be automagically included
why?
That's precisely the intent of this test - make sure that supplements such as this are automagically included
language supplements
Grahame Grieve said:
well, this discussion is not just about SCT that's for sure
Sure - but things are a lot more straightforward for single edition, single language Code Systems.
that's not much of a hill to climb given how complex SCT is
It's far more complex with things like LOINC where the same complexity (different national editions and local extensions) exists, but where everyone does it differently and often poorly.
Re extensions-echo-all, the supplement contains extensions (some I think are technically not valid where they're being used), and the test then expects corresponding property values in the output (eg weight)
which ones are not valid?
ItemWeight - only goes on Coding and in a Questionnaire
I think I created a task about that one
I used a property where I could, and an extension where I had to
We can force the overhead of a CodeSystem supplement, but we can't count on the supplement being available when performing production-time validation. And that means that non-matching display names shouldn't be treated as an error.
If you're doing prod time validation without all the base info, then you're only going to get half answers - do you tolerate missing profiles? But, if your use case is tolerant of bad displays, just omit them from the validate-code calls, or let's have an explicit parameter that the client passes telling the server to only treat as warnings
@Michael Lawley to increase your happiness, I'm just adding tests for supporting these 3 parameters from $expand for $validate-code: system-version, check-system-version, force-system-version, and as I'm doing that, I'm checking that they apply to Coding.version as well
One of the other challenges with the txTests is that Ontoserver returns additional expansion.parameters and this causes the test to report a false failure
Most implementations don't care about the display values - and will be sloppy with them. So the default behavior should be warnings - errors should require the explicit parameter.
One of the other challenges with the txTests is that Ontoserver returns additional expansion.parameters and this causes the test to report a false failure
I'm assuming that this is something we'll sort out, so I'm not worrying about that today
but it's a test problem, not an implementation problem
Given that displays are what clinicians see and interpret, being sloppy is bad -- we've seen real clinical risks here.
And just because (a group of) ppl are sloppy doesn't mean we should enable that by default.
but it's a test problem, not an implementation problem
It's a test problem yes, but it's making it very hard for me to work through the cases because it bails out early and hides potential actual problems in the rest of the response.
fair.
Do you have a list of the extra parameters? In general, some extra parameters would be fine but others might not be, and I don't want to simply let anything go by
The reality is that the displays in many code systems are not appropriate for clinician display. By 'sloppy' I mean that systems make the displays what they need to be for appropriate user interface, not worrying too much about diverging from the 'official' display names if the 'official' names aren't useful for the purpose. I'm not saying that the display names chosen are typically inappropriate/wrong.
Do you have a list of the extra parameters?
version is the main one, and it seems strange that it's not expected in the result
Also, I'm getting a missing error for includeDesignations. Again, this seems like our interpretations of "parameters that affected expansion" are mis-aligned. I interpret this as being the calculation of the matching codes, not the specific representation that gets returned (noting that displayLanguage is counted since it affects the computed display value)
Coding.display A representation of the meaning of the code in the system, following the rules of the system.
"following the rules of the system", not "following the rules of some system implementer".
Also, if a display is not appropriate, then get it fixed -- either at source (in HL7 / THO) or with the external party. If the external party won't play ball, then fix it in a shared supplement so everyone can benefit rather than lots of (potentially incompatible) fixes spread over many different IGs.
it sounds so easy when you say it like that
Sure. Except that's not what systems do today. They just load the codes into their databases and make the display names say what they want them to say. And they're not going to change that just because we might like them to.
If that's all they did I'd be less concerned. What they REALLY DO is allow people to change the display text on-the-fly to absolutely anything (and people do this), and the results sometimes bear zero resemblance to the code's meaning. This is why I say we're concerned about the clinical use case over the IG use case, and why I want the caller to explicitly request that an invalid display not return an error; then the onus is on the caller.
version is the main one, and it seems strange that it's not expected in the result
where is it missing? I just spent a while hunting for it, and yes, it was missing from the validate-code results, but I can't see where it's missing from the $expand results
Let's start with simple/simple-expand-all-response-valueSet.json -- it only has:
"parameter" : [{
"name" : "excludeNested",
"valueBoolean" : true
}],
.. and ..?
Where is the version of the CodeSystem that was used in the expansion?
that code system doesn't have a version, so there's no parameter saying what it is
These days I guess that should be called system-version? But it's a canonical, so I would expect http://hl7.org/fhir/test/CodeSystem/simple| as the value
really? I would not expect that
That says "I use a version-less instance of this code system", rather than just not saying anything.
so firstly, it's not system-version - that's something else, an instruction about the default version to use. version is the actual version used. Though I just spent 15min verifying that for myself, and it could actually be documented
At least it's "not wrong"
+1 for documenting these :)
That says "I use a version-less instance of this code system", rather than just not saying anything
I'm not sure that it does. I just read the section on canonicals again, and at least we can say that this is not clear
I don't see another way to say it -- the trailing | might be optional, but is, I think, in the spirit of things?
I think that the IG publisher would blow up on this:
"parameter" : [{
"name" : "version",
"valueCanonical" : "http://hl7.org/fhir/test/CodeSystem/simple|"
}]
If you want a versionless canonical, you omit the '|'. I would expect (and have only ever seen) the '|' there if there's a trailing version.
no, it wouldn't blow up, just wouldn't make sense in the page, because the code makes the same assumption as Lloyd
Hmm, that looks like it might be HAPI behaviour -- I'm guessing if you set the version to "" rather than null.
Investigating...
Yep, that is the issue.
Would IG publisher cope sensibly without the trailing |?
"parameter" : [{
"name" : "version",
"valueCanonical" : "http://hl7.org/fhir/test/CodeSystem/simple"
}]
it'll ignore that one. As it will ignore http://hl7.org/fhir/test/CodeSystem/simple| from the next release
if there's no version, there's nothing to say
And I'll work around HAPI to leave off the |
but you won't leave the parameter out?
what about in the response to $validate-code when there's no version on the code system?
that's why you should leave it out
I'll have to look & think deeper - if the ValueSet has two code systems but one has no version, then it could be misleading / confusing to have only one "version" reported? I think leaving it out means clients may have to work harder.
why would clients have to work harder?
Just looking now at extensions/expand-echo-bad-supplement-parameters.json -- we've used PROCESSING as the code rather than BUSINESS-RULE; seems a somewhat arbitrary distinction
It is but I don't mind changing
clients (that care) have to know that a missing version means a code-system didn't have a version. And, they have to scan the expansion to find all the code systems in scope (and this may not be a complete set if paging).
Additionally, what if the valueset references two "versions" of the same code system, and one is empty...hmm, not sure if that is possible with Ontoserver.
Re PROCESSING vs BUSINESS-RULE, ideally the test would allow either
what if the valueset references two "versions" of the same code system, and one is empty
You should go bang on that case
clients (that care) have to know that a missing version means a code-system didn't have a version
But they have to scan to decide that either way
Not if all the code systems are listed directly in expansion.parameters."version".
Another edge case - a code system is referenced in the compose, but no codes actually match - you'd never know it was in scope
Regarding setting the ValueSet.language to the value of $expand's displayLanguage parameter, will this not be misleading if only some of the codes have translations in the requested language?
sticking to version for now... you're really using it as more than a version - you're using it as a dependency list
I'm thinking that clients might be doing that, yes
well, if we're going to use it to report things that don't contain versions, then we should change its name. Or would you not consider that?
Regarding setting the ValueSet.language to the value of $expand's displayLanguage parameter, will this not be misleading if only some of the codes have translations in the requested language?
possibly, if that's what was going on, but it's not
well, the tests now have version as optional
though I think we should consider renaming it
did you want to talk about other parameters before we talk about language?
and going back, I sure don't understand this:
Also, I'm getting a missing error for includeDesignations. Again, this seems like our interpretations of "parameters that affected expansion" are mis-aligned. I interpret this as being the calculation of the matching codes, not the specific representation that gets returned (noting that displayLanguage is counted since it affects the computed display value)
what's it got to do with the calculation of matching codes?
https://github.com/hapifhir/org.hl7.fhir.core/pull/1246 - work to date if you don't want to wait for some weird testing thing to be resolved
Ignore the includeDesignations thing - I'm just including it if a value was supplied.
Back on display validation, the example in the spec suggests that the appropriate response is to fail:
http://www.hl7.org/fhir/valueset-operation-validate-code.html#examples
Is there appetite for adding another mode, e.g. ALLOW_INVALID_DISPLAY ?
the example certainly does suggest failure is appropriate
As a status update, I think we're very close to passing except for the errors relating to unexpected "version" values, which manifest like:
Group simple-cases
Test simple-expand-all: Fail
array properties count differs at .expansion.parameter
Expected :1
Actual :2
and also some spurious validation of the actual error message strings:
Test validation-simple-codeableconcept-bad-system: Fail
string property values differ at .parameter[0].resource.issue[0].details.text
and
Test validation-simple-codeableconcept-bad-version1: Fail
string property values differ at .parameter[0].resource.issue[0].details.text
I figured the question of the actual error messages would come up at some point
but good to hear, thanks
Is there appetite for adding another mode, e.g. ALLOW_INVALID_DISPLAY ?
I don't think I'd like to add another mode for this. Or at least, not this alone. I'm considering the ramifications of just saying that's an error, and then picking through the issues in the IG publisher and downgrading it to a warning if the issues are only about displays.
Either way, I'll be putting this question to the two communities (TI and IG editors) in New Orleans
I think we're very close to passing
Well, too soon :-)
Seems the test harness complains about Ontoserver including extensions.
It also doesn't account for the expansion.contains being flat when excludeNested is not true.
But I believe these are txTests issues, not Ontoserver issues
A new spec issue -- expansion.parameter.value[x] doesn't support canonical, only uri.
Which means the test responses that have an expansion.parameter like:
{
"name" : "version",
"valueCanonical" : "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
}],
are invalid.
yeah I discovered that last night. I'm midway through revising them for other reasons and then I'll make another commit
@Michael Lawley I committed fixed up tests.
with regard to error messages, can you share a copy of the different error messages with me? I'm going to set the tests up so that the messages have to contain particular words. (I think)
I'm going to set the tests up so that the messages have to contain particular words. (I think)
Um, ok.
The specified code 'code1x' is not known to belong to the specified code system 'http://hl7.org/fhir/test/CodeSystem/simple'
A version for a code system with URL http://hl7.org/fhir/test/CodeSystem/simple was not supplied and the system could not find its latest version.
A version for a code system with URL http://hl7.org/fhir/test/CodeSystem/simplex was not supplied and the system could not find its latest version.
None of the codes in the codeable concept were valid.
The provided code "#code1x" was not found in value set http://hl7.org/fhir/test/ValueSet/simple-all
The provided code "http://hl7.org/fhir/test/CodeSystem/en-multi#code1" exists in the ValueSet, but the display "Anzeige 1" is incorrect
The provided code "http://hl7.org/fhir/test/CodeSystem/simple#code2a" was not found in value set http://hl7.org/fhir/test/ValueSet/simple-filter-regex
Another test case error:
validation-simple-code-good-display: The ValueSet specifies version 1.0.0 for the code system, but the display value supplied in the request ("good-display") is that from version 1.2.0, AND the response says that version 1.2.0 was used in the validation.
I think that's fixed up now?
No - https://github.com/FHIR/fhir-test-cases/blob/master/tx/validation/simple-code-good-display-response-parameters.json still shows version 1.2.0, last updated 20 hrs ago
but what's the request?
duh. I forgot to push :sad:
and now the request has valueString not valueUri for the system :man_facepalming:
ah, that's an ongoing issue -- I just have local changes to work around :-)
I'll fix
ok pushed
Thanks! At least with my test harness the main outstanding issue is the display validation issue.
Now looking at extensions-echo-enumerated:
Why are the top-level ValueSet.extension in the output expansion ValueSet?
(Not just for this expansion, but all) - ValueSet.compose, ValueSet.date, and ValueSet.publisher should all be optional.
the display validation issue?
whether an invalid display causes result to be false
oh right. yes
Why are the top-level ValueSet.extension in the output expansion ValueSet?
Because they might matter, so the server should echo them
(Not just for this expansion, but all) - ValueSet.compose, ValueSet.date, and ValueSet.publisher should all be optional.
I guess. I don't think it matters to me? I'll check if I care
Why are the top-level ValueSet.extension in the output expansion ValueSet?
Because they might matter, so the server should echo them
That suggests a stronger link between specification and the expansion than I expect. This appears to be the key statement from 4.9.8 Value Set Expansion
A resource that represents a value set expansion includes the same identification details as the definition of the value set
What is the scope of "identification details"?
regarding ValueSet.compose: I have a parameter includeCompose for whether it should be returned or not, but I don't ever use it, and I wouldn't currently miss the compose
Is that not what includeDefinition is for?
Also, looking at the OperationOutcomes, why use .details.text rather than .diagnostics (given that there's no .details.coding values)
dear me it is
diagnostics is for things like stack dumps etc. The details of the issue go in details.text
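A minimal sketch of that convention, reusing one of the error messages listed earlier (the issue code here is an assumption); diagnostics would be left for technical detail such as a stack trace, if populated at all:
{
  "resourceType" : "OperationOutcome",
  "issue" : [{
    "severity" : "error",
    "code" : "code-invalid",
    "details" : {
      "text" : "The specified code 'code1x' is not known to belong to the specified code system 'http://hl7.org/fhir/test/CodeSystem/simple'"
    }
  }]
}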
That suggests a stronger link between specification and the expansion than I expect. This appears to be the key statement from 4.9.8 Value Set Expansion
I didn't understand that
What is the scope of "identification details"?
url + version + identifiers, I think
OperationOutcome.issue.diagnostics
Comment: This may be a description of how a value is erroneous [...]
But happy to update - it's all new
Stronger link...
Why would an extension on a ValueSet definition be relevant to its expansion (as a general rule)?
it shouldn't be but it might be relevant to the usage of the expansion
hence why I echo it
Hmm, ok
Should that be a requirement here?
no, in fact, they are only included if includeDefinition is true.
pushed new tests. code for running the tests is in the gg-202305-more-tx-work2 branch of core
my local copy of tx-fhir-org still fails one of the tests... might have more work to do on the tester
open issues - text details, + the display validation question which is going to committee in New Orleans
So, turns out that it is HAPI's code that's populating the OperationOutcome and putting the text into diagnostics and not details.text
This is only in the case of things like code system (supplement) or value set not found/resolvable since that's a 404 response
this one definitely matters.
Yep, I'll have to take over from the default interceptor behaviour
Thanks @Grahame Grieve I have the new tests and the gg-202305-more-tx-work2 branch running locally.
A bunch of tests are failing because the expected expansion is hierarchical, but Ontoserver returns a flat expansion so there are errors like:
Group parameters
Test parameters-expand-all-hierarchy: Fail
array properties count differs at .expansion.contains
Expected :3
Actual :7
so why is Ontoserver returning a flat expansion? does it need a parameter?
Because it's allowed to, and unless you're returning "all codes", it's a hard problem to cut nodes out of a tree/graph
Let alone order them
but that one is all codes
All codes is very low on our priority list (infrequent use case) so we haven't done special-case work to preserve hierarchy.
It's also something that we've rarely been asked about.
it's certainly come up from the IG developers
and I'm surprised... structured expansions are a real thing for UI work
What we have heard is that some people want to have an explicit hierarchy on expansion that doesn't match the code system's hierarchy (eg where things are grouped differently from the normal isa hierarchy). In these cases the simplest approach we've found is to have them express the desired hierarchy in the stored expansion.
that might be, but as you see, there's reasons people want a heirarchy
But for IG developers, why do they care about the (on the wire) expansion; if the IG tooling needs to render the hierarchy, then it's in the CodeSystem already, or can be recovered from the ValueSet with $expand?property=parent.
the IG tooling defers to the tx service on this matter. It doesn't try to impose hierarchy on what the tx server chooses to return
@Michael Lawley we're going to do triage here on our open issues tomorrow. What I have in my mind:
have I missed anything?
Wrt "How a server reports that it doesn't do heirarchical expansions", a server may do this in some circumstances but not others. For example, Ontoserver (currently) does not do them when calculating the expansion itself, but may return them if its (re-)using a stored expansion.
Other examples like this would be where, for example, a ValueSet includes the implicit all-codes ValueSet and the result is flat, but if it instead includes the CodeSystem directly then it is hierarchical
the IG tooling defers to the tx service on this matter. It doesn't try to impose hierarchy on what the tx server chooses to return
Then it's effectively choosing to be happy with what the tx server returns, and in that case anything that is in-spec with the general FHIR tx services spec should be acceptable.
Then it's effectively choosing to be happy with what the tx server returns, and in that case anything that is in-spec with the general FHIR tx services spec should be acceptable.
It is acceptable from the infrastructure's pov, but not acceptable from the consumer's pov
it might be acceptable to some consumers, the ones who choose to use Ontoserver, but I think that would mean many editors would not be ok with HL7 using Ontoserver
Other examples like this would be where, for example, a ValueSet includes the implicit all-codes ValueSet and the result is flat, but if it instead includes the CodeSystem directly then it is hierarchical
but that's how the test case we're talking about works
Other examples like this would be where, for example, a ValueSet includes the implicit all-codes ValueSet and the result is flat, but if it instead includes the CodeSystem directly then it is hierarchical
If $expand?url=vs1 returns a hierarchical expansion, then I define vs2 as "include vs1", should it not also return a hierarchical expansion?
It is acceptable from the infrastructure's pov, but not acceptable from the consumer's pov
From my perspective, the consumer here is using a tool that could provide this behaviour itself by using the CodeSystem directly (or by reconstructing the hierarchy from parent relationships), but the tool chooses to hand it off to the tx server. Since this is a context-specific behaviour, why not have the tool that wants it, implement it?
Of course, if Ontoserver users call for this behaviour, then that's something we would strongly consider, but otherwise it seems like there's an undocumented set of use-cases where a specific behaviour is desired that we have to discover in a trial-and-error manner.
well, here you are, discovering it :grinning:
returning an hierarchical expansion when the value set includes all of a hierarchical code system is a required feature for HL7 IG publication
Probably because I'm grounded in HL7 culture, but for me that's totally obvious and hardly needs to be stated as a requirement, so there you go. However, Ontoserver doesn't need to do that to be used by the eco-system as an additional terminology server
I'm thinking about how to handle that in the tests - that's why I asked whether this is a feature that surfaces in the metadata anywhere. But it doesn't :sad:
other than parameters-expand-all-hierarchy, parameters-expand-enum-hierarchy, and parameters-expand-isa-hierarchy, does this affect any other tests?
on the subject of display error/warning, I'll be advocating for a parameter that defaults to leaving the tx server returning an error.
is it another mode flag? or something else?
I think another mode flag works. With the default being return error, and the flag saying don't error on displays, just warn.
I've just updated https://r4.ontoserver.csiro.au/fhir with the work-in-progress changes to align better with the requirements as expressed in the txTests
I believe that many of the reported failures are false negatives, and for some it is very hard to understand what's going on, e.g.:
Test validation-simple-code-good-version: ... Exception: Error from server: Error:org.hl7.fhir.r4.model.CodeableConcept@11b455e5
org.hl7.fhir.r4.utils.client.EFhirClientException: Error from server: Error:org.hl7.fhir.r4.model.CodeableConcept@11b455e5
at org.hl7.fhir.r4.utils.client.network.FhirRequestBuilder.unmarshalReference(FhirRequestBuilder.java:263)
at org.hl7.fhir.r4.utils.client.network.FhirRequestBuilder.execute(FhirRequestBuilder.java:230)
at org.hl7.fhir.r4.utils.client.network.Client.executeFhirRequest(Client.java:194)
at org.hl7.fhir.r4.utils.client.network.Client.issuePostRequest(Client.java:119)
at org.hl7.fhir.r4.utils.client.FHIRToolingClient.operateType(FHIRToolingClient.java:279)
at org.hl7.fhir.convertors.txClient.TerminologyClientR4.validateVS(TerminologyClientR4.java:137)
at org.hl7.fhir.validation.special.TxTester.validate(TxTester.java:252)
at org.hl7.fhir.validation.special.TxTester.runTest(TxTester.java:191)
at org.hl7.fhir.validation.special.TxTester.runSuite(TxTester.java:163)
at org.hl7.fhir.validation.special.TxTester.execute(TxTester.java:95)
at org.hl7.fhir.validation.ValidatorCli.parseTestParamsAndExecute(ValidatorCli.java:227)
at org.hl7.fhir.validation.ValidatorCli.main(ValidatorCli.java:148)
I'll investigate
it's sure not a useful error message
I noticed also that the test fixtures are not automatically created?
Also language/codesystem-de-multi.json has elements like title:en which fails when I tried to load it in (using the 5->4 converter in HAPI)
oh. right
you can't use those directly, no
I forgot - I was playing around with that format and left it in
in the case of that test, the error should be
Error from server: Error:[0a8c6743-42a8-43fe-bca5-1138aa91595d]: Could not find value set http://hl7.org/fhir/test/ValueSet/version-all-1 and version null. If this is an implicit value set please make sure the url is correct. Implicit values sets for different code systems are specified in https://www.hl7.org/fhir/terminologies-systems.html.
I noticed also that the test fixtures are not automatically created?
I'm not sure what that means
All the test code systems, and valuesets identified in test-cases.json, are not automatically loaded into Ontoserver when I run the txTests thing. Instead, I needed to run my own loader
no they're passed in a tx-resource parameter with each request
I didn't notice this until just now, running against the new r4.ontoserver deployment since previously I was testing against a local server that I'd already loaded things onto
Aha! Another magic parameter -- is support for that part of the test?
this is already known. You and I discussed it in the past. see FHIR-33944. It's very definitely required
The test cases do it this way since support is required to support the IG publisher
https://github.com/hapifhir/org.hl7.fhir.core/pull/1255 for the execution problem
Yes, I recall the proposal.
The test cases do it this way since support is required to support the IG publisher
that's effectively what I was asking.
Does this also extend to FHIR-33946 and the cache-id parameter?
that one is optional - the client looks in the capability statement to see if cache-id is stated to be supported before deciding that the server is capable of doing that
though the test cases don't try that
I'm going to have to put some considered thought into how we support tx-resource.
Non-exhaustive list of considerations:
None of these are a problem for us with ValueSet resources (we already support contained ValueSets), but they are for CodeSystems.
for me, those are not a thing - they are never written. You probably can't avoid that. But what's 'name clashes' about?
What happens when the resource passed via tx-resource has the same URL as one that is already on the server? Does it shadow it? It may have an older version than the one on the server and the reference from the request may not be version-specific; should the older version supplied via tx-resource be preferred over the newer one?
here's what I drafted about that:
One or more additional resources that are referred to from the value set provided with the $expand or $validate-code invocation. These may be additional value sets or code systems that the client believes will or may be necessary to perform the operation. Resources provided in this fashion are used preferentially to those known to the system, though servers may return an error if these resources are already known to the server (by URL and version) but differ from that information on the server
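A sketch of how this looks on the wire, assuming the supporting resource is carried in Parameters.parameter.resource under the name tx-resource (the value set content here is illustrative):
{
  "resourceType" : "Parameters",
  "parameter" : [{
    "name" : "url",
    "valueUri" : "http://hl7.org/fhir/test/ValueSet/simple-all"
  },{
    "name" : "code",
    "valueCode" : "code1"
  },{
    "name" : "system",
    "valueUri" : "http://hl7.org/fhir/test/CodeSystem/simple"
  },{
    "name" : "tx-resource",
    "resource" : {
      "resourceType" : "ValueSet",
      "url" : "http://hl7.org/fhir/test/ValueSet/simple-all",
      "status" : "active",
      "compose" : {
        "include" : [{
          "system" : "http://hl7.org/fhir/test/CodeSystem/simple"
        }]
      }
    }
  }]
}
Any CodeSystem resources the value set depends on would be passed the same way, as additional tx-resource parameters.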
@Michael Lawley I updated the test cases for the new mode parameter
Thanks. I note that it is still complaining about extension content (Ontoserver includes some of its own extensions). I would have expected additional extension content to be generally ignored?
which extensions?
Michael Lawley said:
Coding.display A representation of the meaning of the code in the system, following the rules of the system.
"following the rules of the system", not "following the rules of some system implementer".
Also, if a display is not appropriate, then get it fixed -- either at source (in HL7 / THO) or with the external party. If the external party won't play ball, then fix it in a shared supplement so everyone can benefit rather than lots of (potentially incompatible) fixes spread over many different IGs.
Remembering that things get done according to the path of the least resistance, I see very little instruction and zero examples of using supplements in http://hl7.org/fhir/valueset.html - so chances of them being used for this purpose are very slim. Any changes in this area must offer a path of less or at most equal resistance compared to trimming the display text to what you mean.
well, we can provide examples, that's for sure.
Yep, at the same time, there is dragon text on the supplements:
The impact of Code System supplements on value set expansion - and therefore value set validation - is subject to ongoing experimentation and implementation testing, and further clarification and additional rules might be proposed in future versions of this specification.
That would need to go away as well to get confidence in using them
Otherwise hard to say 'this is what you shall use' when it's an experimental thing.
we're coming out of the experimentation phase :grinning:
and talking about the additional rules
Michael Lawley said:
If that's all they did I'd be less concerned. What they REALLY DO is allow people to change the display text on-the-fly to absolutely anything (and people do this), and the results sometimes bear zero resemblance to the code's meaning. This is why I say we're concerned about the clinical use case over the IG use case, and why I want the caller to explicitly request that an invalid display not return an error; then the onus is on the caller.
I don't see how this will improve the situation. It would just become an almost mandatory thing you do "just because the spec requires it" and it wouldn't carry the intended meaning.
Good use of supplements would, that way the IG can be explicit about the display codes it is tweaking to better fit the purpose. I'd be happy to do that in my IGs!
@Michael Lawley I finally got to a previously reported issue:
However, I'm trying to use tx.fhir.org/r4 as a reference point but I can't get it to behave.
For example http://tx.fhir.org/r4/ValueSet/$validate-code?system=http://snomed.info/sct&code=22298006&url=http://snomed.info/sct?fhir_vs=isa/118672003 gives a result=true even though the code is not in the valueset. In fact the url parameter seems to be totally ignored?
Indeed. It's an issue in the parser because there are 2 '=' in the parameter - it's splitting on the second not the first
it works as expected if you escape the second =
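For example, with the second '=' percent-encoded as %3D: http://tx.fhir.org/r4/ValueSet/$validate-code?system=http://snomed.info/sct&code=22298006&url=http://snomed.info/sct?fhir_vs%3Disa/118672003 (fully percent-encoding the whole url parameter value would be the more conservative option).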
I believe the correct strategy is to take the query part (everything from the 1st ?) and split on &, then split each of these on the first = only
I didn't say I was happy with what it's doing
ah, not your parser code then?
it is. it's the oldest code I have. I think I haven't touched it since 1997 or so
PR time?
maybe. The URL itself is invalid so the behaviour isn't wrong, but I don't like it much
Why is that URL invalid?
an unescaped = in it. I think that's not valid according to the http spec. But I upgraded the server anyway, and it should be OK now
according to https://www.rfc-editor.org/info/rfc3986 it is valid, and '=' is considered to be a sub-delimiter.
that doesn't really relate to its use in key/value pairs
I don't see where an unescaped = is illegal?
@Michael Lawley a new issue has raised its ugly head.
consider the situation where a value set refers to an unknown code system, and just includes all of it, and a client asks to validate the code
e.g.
{
"resourceType" : "ValueSet",
"id" : "unknown-system",
"url" : "http://hl7.org/fhir/test/ValueSet/unknown-system",
"version" : "5.0.0",
"name" : "UnknownSystem",
"title" : "Unknown System",
"status" : "active",
"experimental" : true,
"date" : "2023-04-01",
"publisher" : "FHIR Project",
"compose" : {
"include" : [{
"system" : "http://hl7.org/fhir/test/CodeSystem/simpleX"
}]
}
}
and
{
"resourceType" : "Parameters",
"parameter" : [{
"name" : "url",
"valueUri" : "http://hl7.org/fhir/test/ValueSet/unknown-system"
},{
"name" : "code",
"valueCode" : "code1"
},{
"name" : "system",
"valueUri" : "http://hl7.org/fhir/test/CodeSystem/simpleX"
}]
}
This is a pretty common situation in the IG world, and the IG publisher considers this a warning not an error.
but it's very clearly an error validating:
{
"resourceType" : "Parameters",
"parameter" : [{
"name" : "issues",
"resource" : {
"resourceType" : "OperationOutcome",
"issue" : [{
"severity" : "error",
"code" : "not-found",
"details" : {
"text" : "The CodeSystem http://hl7.org/fhir/test/CodeSystem/simpleX is unknown"
},
"location" : ["code.system"]
}]
}
},
{
"name" : "message",
"valueString" : "The CodeSystem http://hl7.org/fhir/test/CodeSystem/simpleX is unknown"
},
{
"name" : "result",
"valueBoolean" : false
}]
}
... only... the validator decides that this is one of those cases because there's a parameter "cause" : "not-found", where cause is taken from OperationOutcome.issue.type.
but I removed cause from the returned parameters, and now I have no way to know that the valueset validation failed because of an unknown code system
the case above says that there is an unknown code system, but it doesn't explicitly say that the result is false because of the unknown code system.
This is a "fail to validate" rather than a "validate = false" situation -- I'd expect a 4XX series error from the Tx and an OperationOutcome about the CodeSystem not found.
Will that work?
I'm pretty sure Ontoserver does something like this
I don't think that's right - other issues can still be detected and returned
So I don't follow why you have removed cause?
it wasn't a standard parameter. And it was pretty loose anyway
it's kind of weird to just put 'cause : not found' and assume everyone knows that means validation failed because the code system needed to determine value set membership wasn't found
I need a better way to say it...
you also have location: ["code.system"]
and the details.text
I do have that, but I'm going to be second guessing the server to decide whether that's the cause, or an incidental finding
Does this come down to identifying which one (or more?) of the issues was the trigger for result = false?
yes that's one way to look at it
Can it be as simple as "all the issues with severity = error"?
no I don't think it can. There's plenty of scope of issues with severity = error whether or not the code is in the value set
Doesn't that depend on how you interpret things? For example, if validating a codeableConcept, then you validate each contained Coding. If they all fail, then each contributes an issue with severity of error, but if any passes, then the issues from the others would just be warning?
This seems to be in line with
Indicates how relevant the issue is to the overall success of the action
I certainly don't think levels work like that. If a system is wrong, or a code is invalid, then that's an error
at the local level, but not at the level of the overall operation
issue.code has this comment:
For example, code-invalid might be a warning or error, depending on the context
really?
really
Comments:
Code values should align with the severity. For example, a code of forbidden generally wouldn't make sense with a severity of information or warning. Similarly, a code of informational would generally not make sense with a severity of fatal or error. However, there are no strict rules about what severities must be used with which codes. For example, code-invalid might be a warning or error, depending on the context
(my emphasis)
oh I believed you. And I probably did write that. But I've noodled on it for a couple of hours, and in the context of the validator, invalid codes are invalid codes, whether they're in the scope of the value set or not.
and on further noodling, I think this is OK to be an extension for tx.fhir.org - the notion of 'it's not an error because the code system is unknown' is kind of centric to the base tx service, and not to additional ones. So I'm going with a parameter name of x-caused-by-unknown-system for the link, and the tests won't require that
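As a sketch of how that could surface alongside the result/message/issues parameters in the example above (the value type for the new parameter is an assumption):
{
  "name" : "x-caused-by-unknown-system",
  "valueCanonical" : "http://hl7.org/fhir/test/CodeSystem/simpleX"
}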
also @Jonathan Payne
@Grahame Grieve Looks nice... :+1:
other/codesystem-dual-filter.json is invalid -- it has a duplicate code: AA
Also, HAPI is complaining about language/codesystem-de-multi.json:
HAPI-0450: Failed to parse request body as JSON resource. Error was: HAPI-1825: Unknown element 'title:en' found during parse
hmm
hapi probably doesn't support JSON 5 either. can you try commenting that line out?
So, the testing/comparison aspect is complaining about / rejecting extensions that Ontoserver includes that are not part of the expected result.
e.g.,
Group simple-cases
Test simple-expand-all: Fail
properties differ at .expansion.contains[1]: missing property extension
Test simple-expand-enum: Fail
properties differ at .expansion.contains[1]: missing property extension
Test simple-expand-isa: Fail
properties differ at .expansion.contains[0]: missing property extension
Test simple-expand-prop: Fail
properties differ at .expansion.contains[0]: missing property extension
Test simple-expand-regex: Fail
properties differ at .expansion.contains[1]: missing property extension
what extensions are you including?
One is http://ontoserver.csiro.au/profiles/expansion
what is it?
Why does that matter? It's an extension, if you don't understand it you can (should) ignore it.
(It's actually legacy from DSTU2_1 to indicate inactive status)
it doesn't matter for the tests, no, but I'm just interested for the sake of being nosy
:laughing:
I'll think about the testing part
@Michael Lawley https://github.com/hapifhir/org.hl7.fhir.core/pull/1303
I have rewritten these two pages:
I have removed the section on registration - I'm rewriting that after talking to @Michael Lawley, more on that soon
I reconciled the two pages and changed the way the web source reference works
@Grahame Grieve Hi, I am running the fhir tx testsuite against Snowstorm. For some tests, there are complaints about a missing "id" property, and the test fails. Turns out that the resource that is returned contains an "id" whereas the "reference" resource does not contain an "id". Is this a real "fail", or is the "id" property supposed to be optional?
Expected:
{
"$optional-properties$" : ["date", "publisher", "compose"],
"resourceType" : "ValueSet",
"url" : "http://hl7.org/fhir/test/ValueSet/simple-all",
"version" : "5.0.0",
"name" : "SimpleValueSetAll",
"title" : "Simple ValueSet All",
"status" : "active",
"experimental" : false,
"date" : "2023-04-01",
"publisher" : "FHIR Project",
"compose" : {
"include" : [{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple"
}]
},
"expansion" : {
"identifier" : "$uuid$",
"timestamp" : "$instant$",
"total" : 7,
"parameter" : [{
"name" : "excludeNested",
"valueBoolean" : true
},
{
"name" : "used-codesystem",
"valueUri" : "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
},
{
"$optional$" : true,
"name" : "version",
"valueUri" : "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
}],
"contains" : [{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code1",
"display" : "Display 1"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"abstract" : true,
"inactive" : true,
"code" : "code2",
"display" : "Display 2"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code2a",
"display" : "Display 2a"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code2aI",
"display" : "Display 2aI"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code2aII",
"display" : "Display 2aII"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code2b",
"display" : "Display 2b"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code3",
"display" : "Display 3"
}]
}
}
Actual:
{
"resourceType": "ValueSet",
"id": "simple-all",
"url": "http://hl7.org/fhir/test/ValueSet/simple-all",
"version": "5.0.0",
"name": "SimpleValueSetAll",
"title": "Simple ValueSet All",
"status": "active",
"experimental": false,
"publisher": "FHIR Project",
"expansion": {
"id": "f4b71bf6-3ef4-4c30-a4ea-ab3f4ae3dad6",
"timestamp": "2024-10-09T15:08:23+02:00",
"total": 7,
"offset": 0,
"parameter": [
{
"name": "version",
"valueUri": "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
},
{
"name": "displayLanguage",
"valueString": "en"
}
],
"contains": [
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code1",
"display": "Display 1"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2",
"display": "Display 2"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2a",
"display": "Display 2a"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2aI",
"display": "Display 2aI"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2aII",
"display": "Display 2aII"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2b",
"display": "Display 2b"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code3",
"display": "Display 3"
}
]
}
}
it's not an error to return a populated id element. It doesn't even have to be the same id. Probably it shouldn't be, but that's a style question
which means that the test is wrong, really
I updated the tests to allow id, but you'll have to wait for the release of a new validator to use them, unfortunately
about 24 hours
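(For illustration only: the expected files already use a $optional-properties$ list, so allowing id presumably amounts to something like the following; a sketch, not necessarily the actual change:)
"$optional-properties$" : ["id", "date", "publisher", "compose"],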
As you may know from other messages, I am investigating the options to make Snowstorm compliant with the FHIR tx test suite. As our reference server for terminology in Belgium is an Ontoserver (now 6.20.1 since yesterday), and I want the Snowstorm behaviour to be as similar as possible to the Ontoserver behaviour, I also ran the FHIR tx test suite against Ontoserver. I got a result of 16% fails.
I know from #Announcements > Using Ontoserver with Validator / IG Publisher that Ontoserver is considered compatible. How should I interpret the 16% failed tests? Is any software allowed to fail 16% of the tests? Any 16%, or only that specific 16%? What is also strange is that the highest number of failures is in the "simple-cases" test group. Is the "simple-cases" test group the test of _basic_ behaviour, and do these tests carry greater weight? What does this say about the interplay between the IG Publisher and the tested terminology server?
I don't know about a 16% failure rate. What version are you running? I test the public ontoserver every day and get a 100% pass rate
is that the highest amount of failures is in the "simple-cases" test group
hmm. maybe you need to set a parameter for flat rather than nested? Ontoserver doesn't do nested expansions, and that's a setting you pass to the test cases
try -mode flat
Ah yes, I had forgotten about that option
@Michael Lawley @Grahame Grieve Errors have gone down to 10% with
-mode flat
But that is still a lot... Any other suggestions? Since there are 'only' 21 failed testcases now, I'll post a list of their names here.
{
"name" : "simple-expand-isa-o2",
"status" : "fail",
"message" : "properties differ at .expansion.contains[0]: missing property abstract"
},
{
"name" : "simple-expand-isa-c2",
"status" : "fail",
"message" : "properties differ at .expansion: missing property offset"
},
{
"name" : "simple-expand-isa-o2c2",
"status" : "fail",
"message" : "string property values differ at .expansion.contains[0].code\nExpected:\"code2aI\" for simple-expand-isa-o2c2\nActual :\"code2a\""
},
{
"name" : "simple-lookup-1",
"status" : "fail",
"message" : "string property values differ at .parameter[6].part[2].valueCode\nExpected:\"code2aI\" for simple-lookup-1\nActual :\"code2aII\""
},
{
"name" : "simple-lookup-2",
"status" : "fail",
"message" : "array item count differs at .parameter[9].part\nExpected:\"2\" for simple-lookup-2\nActual :\"3\""
},
{
"name" : "validation-simple-code-bad-valueSet",
"status" : "fail",
"message" : "array item count differs at .issue\nExpected:\"1\" for validation-simple-code-bad-valueSet\nActual :\"2\""
},
{
"name" : "validation-simple-coding-bad-valueSet",
"status" : "fail",
"message" : "array item count differs at .issue\nExpected:\"1\" for validation-simple-coding-bad-valueSet\nActual :\"2\""
},
{
"name" : "validation-simple-codeableconcept-bad-valueSet",
"status" : "fail",
"message" : "array item count differs at .issue\nExpected:\"1\" for validation-simple-codeableconcept-bad-valueSet\nActual :\"2\""
},
{
"name" : "validation-simple-codeableconcept-bad-version2",
"status" : "fail",
"message" : "string property values differ at .parameter[1].resource.issue[1].details.text\nExpected:\"A definition for CodeSystem 'http://hl7.org/fhir/test/CodeSystem/simpleXX' version '1.0.4234' could not be found, so the code cannot be validated. Valid versions: []\" for validation-simple-codeableconcept-bad-version2\nActual :\"A definition for CodeSystem 'http://hl7.org/fhir/test/CodeSystem/simpleXX|1.0.4234' could not be found, so the code cannot be validated\""
},
{
"name" : "validation-simple-code-bad-language",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"6\" for validation-simple-code-bad-language\nActual :\"4\""
},
{
"name" : "validation-simple-coding-bad-language",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"6\" for validation-simple-coding-bad-language\nActual :\"4\""
},
{
"name" : "validation-simple-coding-bad-language-header",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"6\" for validation-simple-coding-bad-language-header\nActual :\"4\""
},
{
"name" : "validation-simple-coding-bad-language-vs",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"6\" for validation-simple-coding-bad-language-vs\nActual :\"4\""
},
{
"name" : "validation-simple-coding-bad-language-vslang",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"6\" for validation-simple-coding-bad-language-vslang\nActual :\"4\""
},
{
"name" : "validation-simple-codeableconcept-bad-language",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"7\" for validation-simple-codeableconcept-bad-language\nActual :\"5\""
},
{
"name" : "big-echo-no-limit",
"status" : "fail",
"message" : "string property values differ at .resourceType\nExpected:\"OperationOutcome\" for big-echo-no-limit\nActual :\"ValueSet\""
},
{
"name" : "notSelectable-reprop-true",
"status" : "fail",
"message" : "number property values differ at .expansion.total\nExpected:\"1\" for notSelectable-reprop-true\nActual :\"0\""
},
{
"name" : "notSelectable-reprop-false",
"status" : "fail",
"message" : "number property values differ at .expansion.total\nExpected:\"1\" for notSelectable-reprop-false\nActual :\"0\""
},
{
"name" : "notSelectable-reprop-true-true",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"5\" for notSelectable-reprop-true-true\nActual :\"6\""
},
{
"name" : "notSelectable-reprop-false-false",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"5\" for notSelectable-reprop-false-false\nActual :\"6\""
},
{
"name" : "act-class",
"status" : "fail",
"message" : "properties differ at .expansion.contains[10]: missing property property"
}
well, that's weird. like I said, 100% on the public ontoserver. is that what you get testing that one?
No, that is what I get testing the Belgian one. Sadly I cannot give you the URL, because it is not publicly accessible for the moment. But it's newly set up, so its setup might differ from the Australian setup.
well, how about you test the public Australian one. If that passes, then you have a baseline. Btw, the output will point you at a temp directory where you can use a diff program to look at the difference between expected and actual
@Michael Lawley I know you are in contact with our terminology man David Op de Beeck and his team. Could you suggest any possible modifications in the Belgian setup to get the test cases working?
it'd be a lot easier if you'd look at the differences and tell us why... a language thing?
that seems most likely to me
And is there a bit of documentation available on that topic?
which topic?
I mean from the Ontoserver side, how to pass the tests...
@Grahame Grieve
These are the cli options I am using now:
-txTests -source ./tx -output ./output -tx https://belgian.tx.server -version 4.0 -mode flat
Do I find the temp directory in test-results.json? Or in the stdout/stderr of the validator_cli.jar?
your output should start something like this:
Run terminology service Tests
Source for tests: /Users/grahamegrieve/work/test-cases/tx
Output Directory: /Users/grahamegrieve/temp/local.fhir.org
Term Service Url: http://local.fhir.org
External Strings: false
Test Exec Modes: []
Tx FHIR Version: true
look in the output directory
Just catching up with this thread. We too get an error for:
{
"name" : "simple-expand-isa-o2",
"status" : "fail",
"message" : "properties differ at .expansion.contains[0]: missing property abstract"
},
for example, because Ontoserver includes "abstract = true" for "code2" (implied because it has the property notSelectable = true). The expected response in the test doesn't include this, but I think it should be updated (in line with a number of the other expected responses in the "simple" set).
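(A hypothetical fragment, not the actual test fixture, showing the mapping being described: a concept carrying the standard notSelectable property, e.g.
{
  "code" : "code2",
  "display" : "Display 2",
  "property" : [{
    "code" : "notSelectable",
    "valueBoolean" : true
  }]
}
is what leads a server to return an expansion entry like
{
  "system" : "http://hl7.org/fhir/test/CodeSystem/simple",
  "abstract" : true,
  "code" : "code2",
  "display" : "Display 2"
}
while the expected response for simple-expand-isa-o2 currently omits the abstract flag.)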
@Grahame Grieve do you perhaps have a "quarantine" list that whitelists a bunch of reported (but wrong/trivial) failures?
not to my knowledge. I'll investigate
hmm so this is where I eat humble pie. it turns out that I don't check the outcomes at all. Here's the code in my JUnit test cases:
String err = tester.executeTest(setup.suite, setup.test, modes);
Assertions.assertTrue(true); // we don't care what the result is, only that we didn't crash
and when I looked at that in surprise, I remembered what I was thinking. You (@Michael Lawley) might recall that occasionally, the tester crashed testing ontoserver. so I added the ontoserver tests to ensure that it didn't crash on you (which it hasn't since I added the tests)
But I erroneously got it in my mind that it was testing ontoserver, as is evident earlier in the thread. Now that I've changed it, I'm getting 100 failures
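(The fix presumably amounts to actually asserting on the returned message; a sketch, assuming executeTest returns null on success:)
String err = tester.executeTest(setup.suite, setup.test, modes);
// fail the JUnit test whenever the terminology test reports a mismatch
Assertions.assertNull(err, err);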
I'll dig into them over the weekend. @Bart Decuypere my apologies for giving you the run around on this
100?!? Are you setting mode flat?
yep
some of them are tx.fhir.org-only tests; I don't know why they're running, but that's only maybe 20. I haven't looked at the others
how often do you run them?
Every build runs the tests, but we have a quarantine list so that certain failures are tolerated.
I can share that - the intention had been to work through that list and get the corner cases sorted out (some have crept in as new tests were added), but with Jim on paternity leave we've not had the bandwidth
well, we better work through them then
and congratulations @Jim Steel btw
I'm not pointing at the Ontoserver message file. I better do that
what is this?
"extension" : [{
"extension" : [{
"url" : "inactive",
"valueBoolean" : true
}],
"url" : "http://ontoserver.csiro.au/profiles/expansion"
}],
lots of the fails are because of this
Yes, it's a "private" extension, kept for backward compatibility, that predates the R5 property stuff.
But it should be ignored for test purposes; unknown extensions that are not must-understand are safe to ignore
it didn't used to be present
and this one is weird given that there's also inactive = true directly
Grahame Grieve said:
@Bart Decuypere my apologies for giving you the run around on this
No offense taken, I've seen worse in my life...
I am still eager to see the actual differences:
Java: 18 from C:\openjdk-18\jdk-18 on amd64 (64bit). 8148MB available
Paths: Current = C:\Temp\toy\fhir-test-cases, Package Cache = C:\Users\eh089\.fhir\packages
Params: -txTests -source ./tx -output ./output -tx https://belgian.tx.server -version 4.0 -mode flat
Run terminology service Tests
Source for tests: ./tx
Output Directory: ./output
Term Service Url: https://belgian.tx.server
External Strings: false
Test Exec Modes: [flat]
Tx FHIR Version: 4.0
Load Tests from ./tx
I can't find any files to diff in the output directory, only test-results.json
really? weird.
BTW: I forgot to paste the version:
FHIR Validation tool Version 6.3.32 (Git# 54bf319161d4). Built 2024-10-14T06:04:19.383Z (3 days old)
if you run it with these parameters:
-txTests -source /Users/grahamegrieve/work/test-cases/tx -tx https://tx.ontoserver.csiro.au/fhir -mode flat
you should get this in your output directory:
OK, I'll try...
The -output option will need an overhaul, I presume... without it, it works as you described. The percentage of failures however is identical (10%).
@Michael Lawley So the Australian and the Belgian Ontoserver seem to have the same setup with regard to the FHIR tx testcases.
I'm not understanding that bit about the -output option.
If I specify the -output option, the "actual" files do not get written to the output directory, but to another directory (which is not visible in the stdout/stderr). Only the test-results.json file is written to the directory specified in the -output option.
it works for me? Weird. I don't know how to investigate that
no I happened to have it set to the value it's hardcoded to use. Fixed next release
New Publication: STU 1 of the FHIR Shorthand Implementation Guide: http://hl7.org/fhir/uv/shorthand/STU1
New Publication: STU 1 of the FHIR Da Vinci Unsolicited Notifications Implementation Guide: http://hl7.org/fhir/us/davinci-alerts/STU1
New Publication: STU 1.1 of the C-CDA on FHIR Implementation Guide: http://hl7.org/fhir/us/ccda/STU1.1
New Publication: STU 1 of the Vital Records Mortality and Morbidity Reporting FHIR Implementation Guide: http://hl7.org/fhir/us/vrdr/STU1/index.html
New Publication: STU1 of the CARIN Consumer Directed Payer Data Exchange (CARIN IG for Blue Button®) FHIR Implementation Guide: http://hl7.org/fhir/us/carin-bb/STU1
New Publication: STU1 of the HL7 Payer Data Exchange (PDex) Payer Network, Release 1 - US Realm Implementation Guide: hl7.org/fhir/us/davinci-pdex-plan-net/STU1
New Publication: STU1 of the HL7 Prior-Authorization Support (PAS), Release 1- US Realm FHIR® Implementation Guide: http://hl7.org/fhir/us/davinci-pas/STU1
New Publication: STU1 of the HL7 Payer Data Exchange (PDex), Release 1 - US Realm FHIR® Implementation Guide: http://hl7.org/fhir/us/davinci-pdex/STU1
New Publication: STU1 of the HL7 Da Vinci - Coverage Requirements Discovery (CRD), Release 1- US Realm FHIR® Implementation Guide: http://hl7.org/fhir/us/davinci-crd/STU1
New Publication: STU1 of the HL7 FHIR® Implementation Guide: Payer Coverage Decision Exchange, R1 - US Realm: http://hl7.org/fhir/us/davinci-pcde/STU1
New Publication: STU1 of the FHIR® Implementation Guide: Documentation Templates and Payer Rules (DTR), Release 1- US Realm: http://hl7.org/fhir/us/davinci-dtr/STU1/index.html
New Publication: STU1 of the HL7 FHIR® Implementation Guide: Risk Based Contract Member Identification, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-atr/STU1
New Publication: STU1 of the HL7 FHIR® Implementation Guide: Pharmacist Care Plan Document, Release 1 - US Realm: http://hl7.org/fhir/us/phcp/STU1
New Publication: STU1 of the HL7 FHIR® Implementation Guide: Clinical Guidelines, Release 1: http://hl7.org/fhir/uv/cpg/STU1
Newly Posted: FHIR R4B Ballot #1: http://hl7.org/fhir/2021Mar
New Publication: Normative Release 1 of the HL7 Cross-Paradigm Specification: Clinical Quality Language (CQL), Release 1: http://cql.hl7.org/N1
New Publication: STU Release 1 of the HL7/NCPDP FHIR® Implementation Guide: Specialty Medication Enrollment, Release 1 - US Realm: http://hl7.org/fhir/us/specialty-rx/STU1.
New Publication: STU Release 3 of HL7 FHIR® Implementation Guide: Data Exchange for Quality Measures STU3 for FHIR R4: http://hl7.org/fhir/us/davinci-deqm/STU3
Lynn Laakso said:
New Publication: STU Release 3 of HL7 FHIR® Implementation Guide: Data Exchange for Quality Measures STU3 for FHIR R4: http://hl7.org/fhir/us/cqfmeasures/STU3
new Publication: STU Release 1 of the HL7 Immunization Decision Support Forecast (ImmDS) Implementation Guide: http://hl7.org/fhir/us/immds/STU1
New Publication: STU Release 4 of the HL7 FHIR® US Core Implementation Guide STU 4 Release 4.0.0: http://hl7.org/fhir/us/core/STU4
File not found ;-)
well that's not supposed to happen
it'll work now
The change log appears to be empty? http://hl7.org/fhir/us/core/history.html
Grahame has to fix that, it'll be 12 hours
fixed
New Publication: STU Update Release 1.1 of HL7 FHIR® Implementation Guide: Consumer Directed Payer Data Exchange (CARIN IG for Blue Button®), Release 1 - US Realm: http://www.hl7.org/fhir/us/carin-bb/STU1.1
I don't know as it matters but the directory of published versions doesn't show this version. http://hl7.org/fhir/us/carin-bb/history.html
it does for me. You might have a caching problem
New Publication: STU Update Release 1.1 of HL7 FHIR® Profile: Occupational Data for Health (ODH), Release 1 - US Realm: http://hl7.org/fhir/us/odh/STU1.1
New Publication: STU Release 1 of HL7 FHIR® Implementation Guide: Vital Records Common FHIR Profile Library, Release 1: http://hl7.org/fhir/us/vr-common-library/STU1
New publication: STU Release 1 of HL7 FHIR® Implementation Guide: NHSN Inpatient Medication COVID-19 Administration Reports, Release 1- US Realm: http://hl7.org/fhir/us/nhsn-med-admin/STU1
New Publication: STU Release 1 of HL7 FHIR® Implementation Guide: NHSN Adverse Drug Event - Hypoglycemia Report, Release 1- US Realm: http://hl7.org/fhir/us/nhsn-ade/STU1
New Publication: STU Update (STU1.1) of HL7 FHIR® Implementation Guide: DaVinci Payer Data Exchange US Drug Formulary, Release 1 - US Realm: http://hl7.org/fhir/us/Davinci-drug-formulary/STU1.1
New Publication: STU Publication of HL7 FHIR® Implementation Guide: Vital Records Birth and Fetal Death Reporting, Release 1 - US Realm: http://hl7.org/fhir/us/bfdr/STU1
New Publication: STU Publication of HL7 FHIR® Implementation Guide: Dental Data Exchange, Release 1 - US Realm: http://hl7.org/fhir/us/dental-data-exchange/STU1
New Publication: STU Publication of HL7 FHIR® Implementation Guide: Post-Acute Care Cognitive Status, Release 1- US Realm: http://hl7.org/fhir/us/pacio-cs/STU1
New Publication: STU Publication of HL7 FHIR® Implementation Guide: Post-Acute Care Functional Status, Release 1- US Realm: http://hl7.org/fhir/us/pacio-fs/STU1
New Publication: Release 4.0.1 of the CQF FHIR® Implementation Guide: Clinical Quality Framework Common FHIR Assets: http://fhir.org/guides/cqf/common/4.0.1/. (note: this is not a guide published through the HL7 consensus process, but according to the FHIR Community Process, so it's posted on fhir.org)
STU Update Publication of HL7 FHIR® Implementation Guide: Prior-Authorization Support (PAS), Release 1- US Realm: http://hl7.org/fhir/us/davinci-pas/STU1.1
STU Publication of HL7 FHIR Implementation Guide: minimal Common Oncology Data Elements (mCODE) Release 1 STU 2 – US Realm: http://hl7.org/fhir/us/mcode/STU2
STU Publication of HL7 FHIR® Implementation Guide: Electronic Case Reporting (eCR), Release 2: http://hl7.org/fhir/us/ecr/STU2
STU Update Publication of HL7 FHIR® Profile: Quality, Release 1 STU 4.1- US Realm: http://hl7.org/fhir/us/qicore/STU4.1
STU Publication of HL7 FHIR Implementation Guide: Profiles for ICSR Transfusion and Vaccination Adverse Event Detection and Reporting, Release 1 - US Realm: www.hl7.org/fhir/us/icsr-ae-reporting/STU1
Normative Publication of HL7 FHIR® Implementation Guide: FHIR Shorthand, Release 2: http://hl7.org/fhir/uv/shorthand/N1
STU Publication of HL7 FHIR® Structured Data Capture (SDC) Implementation Guide, Release 3: http://hl7.org/fhir/uv/sdc/STU3
STU Publication of HL7 FHIR® Implementation Guide: Clinical Data Exchange (CDex), Release 1- US Realm: http://hl7.org/fhir/us/davinci-cdex/STU1
STU Publication of HL7 FHIR® Implementation Guide: Health Record Exchange (HRex) Framework, Release 1- US Realm: http://hl7.org/fhir/us/davinci-hrex/STU1
STU Errata Publication of HL7 FHIR® Profile: Quality, Release 1 - US Realm STU 4.1.1: http://hl7.org/fhir/us/qicore/STU4.1.1
@David Pyke and @John Moehrke are pleased to announce the release of HotBeverage #FHIR Implementation Guide release April 1st - Based on IETF RFC 2324 allows for the fulfillment of a device request for an artfully brewed caffeinated beverage. http://fhir.org/guides/acme/HotBeverage/1.4.2022
STU Update Publication for HL7 FHIR® Implementation Guide: Payer Data Exchange (PDex) Payer Network, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-pdex-plan-net/STU1.1
STU Publication of HL7 FHIR® Implementation Guide: Quality Measures, Release 1 STU 3 - US Realm: http://hl7.org/fhir/us/cqf-measures/STU3
Informative Publication of HL7 EHRS-FM Release 2.1 – Pediatric Care Health IT Functional Profile Release 1 – US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=593
STU Publication of HL7 FHIR® IG: SMART Web Messaging Implementation Guide, Release 1: http://hl7.org/fhir/uv/smart-web-messaging/STU1
STU Publication of HL7 FHIR® Implementation Guide: Clinical Genomics, STU 2: http://hl7.org/fhir/uv/genomics-reporting/STU2
STU Publication of HL7 Domain Analysis Model: Vital Records, Release 5- US Realm: see http://www.hl7.org/implement/standards/product_brief.cfm?product_id=466
STU Publication of HL7 FHIR® Implementation Guide: Personal Health Device (PHD), Release 1: http://hl7.org/fhir/uv/phd/STU1
STU Publication of HL7 CDA® R2 IG: C-CDA Templates for Clinical Notes STU Companion Guide, Release 3 – US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=447
STU Publication of HL7 FHIR® US Core Implementation Guide STU5 Release 5.0.0: http://hl7.org/fhir/us/core/STU5
STU Update Publication of HL7 CDA® R2 Implementation Guide: National Health Care Surveys (NHCS), Release 1, STU Release 2.1 and STU Release 3.1 – US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=385
STU Publication of HL7 FHIR® Implementation Guide: Risk Adjustment, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-ra/STU1
Informative Guidance Publication of HL7 Short Term Solution - V2: SOGI Data Exchange Profile: http://www.hl7.org/permalink/?SOGIGuidance
Errata Publication of CDA® R2.1 (HL7 Clinical Document Architecture, Release 2.1): https://www.hl7.org/documentcenter/private/standards/cda/2019CDAR2_1_2022JUNerrata.zip
Errata Publication of US Core STU5 Release 5.0.1: http://hl7.org/fhir/us/core/STU5.0.1
STU Publication of HL7 FHIR® Implementation Guide: Digital Insurance Card, Release 1 - US Realm: http://hl7.org/fhir/us/insurance-card/STU1
STU Publication of HL7 FHIR® Implementation Guide: Subscription R5 Backport, Release 1: http://hl7.org/fhir/uv/subscriptions-backport/STU1
STU Update Publication of HL7 CDA® R2 Implementation Guide: Reportability Response, Release 1 STU Release 1.1- US Realm: http://www.hl7.org/implement/standards/product_brief.cfm?product_id=470
STU Update Publication Request of HL7 CDA® R2 Implementation Guide: Public Health Case Report - the Electronic Initial Case Report (eICR) Release 2, STU Release 3.1 - US Realm: http://www.hl7.org/implement/standards/product_brief.cfm?product_id=436
Informative Publication of HL7 FHIR® Implementation Guide: COVID-19 FHIR Clinical Profile Library, Release 1 - US Realm: http://hl7.org/fhir/us/covid19library/informative1
STU Publication of HL7 CDA® R2 Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports, Release 4, STU 2 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=570
STU Publication of HL7 FHIR® Implementation Guide: Clinical Data Exchange (CDex), Release 1 STU1.1.0 - US Realm: http://hl7.org/fhir/us/davinci-cdex/STU1.1
STU Update Publication of HL7 FHIR Profile: Occupational Data for Health (ODH), Release 1.2: http://hl7.org/fhir/us/odh/STU1.2
STU Publication of HL7 FHIR® Implementation Guide: Payer Data Exchange (PDex) Drug Formulary, Release 1 STU2 - US Realm: http://hl7.org/fhir/us/davinci-drug-formulary/STU2
STU Publication of HL7 FHIR® Implementation Guide: Vital Records Death Reporting (VRDR), Release 1 STU2 - US Realm: http://hl7.org/fhir/us/vrdr/STU2
STU Update Publication of HL7 FHIR® Implementation Guide: Electronic Case Reporting (eCR), Release 2.1 - US Realm: http://hl7.org/fhir/us/ecr/STU2.1
R5 Ballot is published. http://hl7.org/fhir/2022Sep/
STU Publication of HL7 FHIR® Implementation Guide: Vital Signs, Release 1- US Realm: http://hl7.org/fhir/us/vitals/STU1/
STU Publication of HL7 Cross Paradigm Specification: CDS Hooks, Release 1: https://cds-hooks.hl7.org/2.0/
New release of HL7 Terminology (THO) v4.0.0: https://terminology.hl7.org/4.0.0. (Thanks @Marc Duteau)
STU Publication of HL7 FHIR® Implementation Guide: Hybrid/Intermediary Exchange, Release 1- US Realm: http://www.hl7.org/fhir/us/exchange-routing/STU1
Errata publication of C-CDA (HL7 CDA® R2 Implementation Guide: Consolidated CDA Templates for Clinical Notes - US Realm): https://www.hl7.org/implement/standards/product_brief.cfm?product_id=492
STU Publication of HL7 FHIR® Implementation Guide: Security for Registration, Authentication, and Authorization, Release 1- US Realm: http://hl7.org/fhir/us/udap-security/STU1/
STU Publication of HL7 FHIR® Implementation Guide: FHIR for FAIR, Release 1: http://hl7.org/fhir/uv/fhir-for-fair/STU1
STU Publication of HL7 FHIR® Implementation Guide: PACIO Re-assessment Timepoints, Release 1 - US Realm: http://hl7.org/fhir/us/pacio-rt/STU1
STU Publication of HL7 FHIR® Implementation Guide: Medicolegal Death Investigation (MDI), Release 1 - US Realm: http://hl7.org/fhir/us/mdi/STU1
STU Publication of HL7 CDA® R2 Implementation Guide: ePOLST: Portable Medical Orders About Resuscitation and Initial Treatment, Release 1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=600
STU Publication of HL7 Version 2.5.1 Implementation Guide: Laboratory Results Interface, Release 1 STU Release 4 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=279
STU Publication of HL7 Version 2.5.1 Implementation Guide: Laboratory Orders (LOI) from EHR, Release 1, STU Release 4 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=152
STU Publication of HL7 Version 2 Implementation Guide: Laboratory Value Set Companion Guide, Release 2- US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=413
New release of HL7 Terminology (THO) v5.0.0: https://terminology.hl7.org/5.0.0
This also means that the THO freeze has been lifted.
You can view the UTG tickets that were implemented in this release using the following dashboard and selecting 5.0.0 in the first pie chart. https://jira.hl7.org/secure/Dashboard.jspa?selectPageId=16115
Informative Publication of HL7 V2 Implementation Guide Quality Criteria, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=608
STU Publication of HL7 FHIR® Implementation Guide: SDOH Clinical Care, Release 2.0 - US Realm: http://hl7.org/fhir/us/sdoh-clinicalcare/STU2
STU Update Publication of HL7 FHIR® Implementation Guide: Data Exchange for Quality Measures, STU3.1 for FHIR R4 - US Realm: http://hl7.org/fhir/us/davinci-deqm/STU3.1/
STU Update Publication of HL7 FHIR® Implementation Guide: International Patient Summary, Release 1.1: http://hl7.org/fhir/uv/ips/STU1.1
STU Publication of HL7 FHIR® Implementation Guide: Consumer-Directed Payer Exchange (CARIN IG for Blue Button®), Release 1 STU2: http://hl7.org/fhir/us/carin-bb/STU2
STU Publication Request for HL7 Domain Analysis Model: Nutrition Care, Release 3 STU 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=609
Errata Publication of HL7 CDA® R2 Implementation Guide: Quality Reporting Document Architecture - Category I (QRDA I) - US Realm, STU 5.3: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=35
Snapshot3 of FHIR Core spec: http://hl7.org/fhir/5.0.0-snapshot3. This is published to support the Jan 2023 connectathon, and help prepare for the final publication of R5, which is still scheduled for March 2023
Informative Publication of HL7 EHRS-FM R2.0.1: Usability Functional Profile, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=611
STU Publication of NHSN Healthcare Associated Infection (HAI) Reports Long Term Care Facilities (HAI-LTCF-FHIR), Release 1 - US Realm: http://hl7.org/fhir/us/hai-ltcf/STU1
STU Update Publication of HL7 FHIR® Implementation Guide: Subscription R5 Backport, Release 1, STU 1.1: http://hl7.org/fhir/uv/subscriptions-backport/STU1.1/
New release of HL7 Terminology (THO) v5.1.0: https://terminology.hl7.org/5.1.0
The Final Draft version of FHIR R5 is now published for QA : http://hl7.org/fhir/5.0.0-draft-final. There's a two week period to do QA on it. In particular, we'd like to focus on the invariants - there'll be another announcement about that shortly
STU Update Publication of minimal Common Oncology Data Elements (mCODE) Implementation Guide 2.1.0 - STU 2.1: http://hl7.org/fhir/us/mcode/STU2.1/
STU Update Publication of HL7 FHIR Profile: Occupational Data for Health (ODH), Release 1.3: https://hl7.org/fhir/us/odh/STU1.3/
STU Publication of HL7 FHIR® Implementation Guide: Clinical Data Exchange (CDex), Release 1 STU 2 - US Realm: http://hl7.org/fhir/us/davinci-cdex/STU2/
STU Update Publication of HL7 FHIR® Implementation Guide: Vital Records Death Reporting (VRDR), Release 1 STU2.1 - US Realm: https://hl7.org/fhir/us/vrdr/STU2.1/
I have started publishing R5. Unlike the IGs, R5 is rather a big upload - it will take me a couple of days. In the meantime, you might find discontinuities and broken links on the site, and confusion between R4 and R5 as bits are copied up. Also you may find missing and broken redirects too. I will make another announcement once it's all uploaded
STU Publication of HL7 FHIR® Implementation Guide: International Patient Access (IPA), Release 1: http://hl7.org/fhir/uv/ipa/STU1
STU Publication of HL7 FHIR® Implementation Guide: Longitudinal Maternal & Infant Health Information for Research, Release 1 - US Realm: http://hl7.org/fhir/us/mihr/STU1/
STU Publication of HL7 FHIR® Implementation Guide: Patient Cost Transparency, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-pct/STU1
STU Publication of HL7 FHIR® Profile: Quality, Release 1 - US Realm (qicore) STU Release 5: http://hl7.org/fhir/us/qicore/STU5
Normative Publication of HL7 CDA® R2 Implementation Guide: Emergency Medical Services; Patient Care Report Release 3 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=438
STU Publication of HL7 Consumer Mobile Health Application Functional Framework (cMHAFF), Release 1, STU 2: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=476
STU Publication of HL7 FHIR® Implementation Guide: Data Segmentation for Privacy (DS4P), Release 1: http://hl7.org/fhir/uv/security-label-ds4p/STU1
STU Publication of HL7 FHIR® IG: SMART Application Launch Framework, Release 2.1: http://hl7.org/fhir/smart-app-launch/STU2.1
STU Publication of HL7 Version 2 Implementation Guide: Diagnostic Audiology Reporting, Release 1- US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=620
STU Publication of HL7 FHIR® R4 Implementation Guide: Clinical Study Schedule of Activities, Edition 1: http://hl7.org/fhir/uv/vulcan-schedule/STU1/
STU Update Publication of HL7 FHIR® Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports for Long Term Care Facilities (HAI-LTCF-FHIR), Release 1 STU 1.1 - US Realm: http://hl7.org/fhir/us/hai-ltcf/STU1.1
STU Publication of HL7 CDA® R2 Implementation Guide: Personal Advance Care Plan (PACP) Document, Edition 1, STU3 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=434
STU Publication of HL7 CDA® R2 Implementation Guide: Pharmacy Templates, Edition 1 STU Release 2: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=514
STU Publication of HL7 FHIR® R4 Implementation Guide: Single Institutional Review Board Project (sIRB), Edition 1- US Realm: http://hl7.org/fhir/us/sirb/STU1
STU Publication of HL7 CDA® R2 Implementation Guide: C-CDA Templates for Clinical Notes STU Companion Guide Release 4 - US Realm Standard for Trial Use May 2023: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=447
STU Publication of HL7 FHIR® US Core Implementation Guide STU6 Release 6.0.0: http://hl7.org/fhir/us/core/STU6
STU Publication of HL7/NCPDP FHIR® Implementation Guide: Specialty Medication Enrollment, Release 1 STU 2 - US Realm: http://hl7.org/fhir/us/specialty-rx/STU2/
STU Publication of Vulcan's HL7 FHIR® Implementation Guide: Retrieval of Real World Data for Clinical Research STU 1 - UV Realm: http://hl7.org/fhir/uv/vulcan-rwd/STU1
Version 6.1.0-snapshot1 of US Core for public review of the forthcoming STU update to STU6 - US Realm: http://hl7.org/fhir/us/core/STU6.1-snapshot1
STU Publication of HL7 FHIR® Implementation Guide: Military Service History and Status, Release 1 - US Realm: http://hl7.org/fhir/us/military-service/STU1
STU Publication of HL7 FHIR® Implementation Guide: Identity Matching, Release 1 - US Realm: http://hl7.org/fhir/us/identity-matching/STU1
STU Publication of HL7 FHIR® Implementation Guide: Making Electronic Data More Available for Research and Public Health (MedMorph) Reference Architecture, Release 1- US Realm: http://hl7.org/fhir/us/medmorph/STU1/
STU Update Publication of HL7 CDA® R2 Implementation Guide: National Healthcare Safety Network (NHSN) Healthcare Associated Infection (HAI) Reports for Long Term Care Facilities (HAI-LTCF-CDA), Release 1, STU 1.1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=546
STU Update Publication of HL7 CDA® R2 Implementation Guide: C-CDA Templates for Clinical Notes Companion Guide, Release 4.1 STU - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=447
STU Update Publication of HL7 FHIR® US Core Implementation Guide STU6 Release 6.1.0: http://hl7.org/fhir/us/core/STU6.1
STU Publication of HL7 FHIR® Implementation Guide: Cancer Electronic Pathology Reporting, Release 1 - US Realm: https://hl7.org/fhir/us/cancer-reporting/STU1
STU Publication of HL7 FHIR Implementation Guide: Electronic Medicinal Product Information, Release 1: http://hl7.org/fhir/uv/emedicinal-product-info/STU1
Unballoted STU Update Publication of HL7 FHIR® Implementation Guide: SDOH Clinical Care, Release 2.1 - US Realm: http://hl7.org/fhir/us/sdoh-clinicalcare/STU2.1
STU Publication of HL7 FHIR® Implementation Guide: CodeX™ Radiation Therapy, Release 1- US Realm: http://hl7.org/fhir/us/codex-radiation-therapy/STU1
STU Publication of HL7 FHIR® Implementation Guide: US Public Health Profiles Library, Release 1 - US Realm: http://hl7.org/fhir/us/ph-library/STU1
STU Publication of HL7 FHIR® Implementation Guide: ICHOM Patient Centered Outcomes Measure Set for Breast Cancer, Edition 1: http://hl7.org/fhir/uv/ichom-breast-cancer/STU1
STU Publication of HL7 FHIR® Implementation Guide: Health Care Surveys Content, Release 1 - US Realm: http://hl7.org/fhir/us/health-care-surveys-reporting/STU1
STU Publication of HL7 FHIR® Implementation Guide: Physical Activity, Release 1 - US Realm: http://hl7.org/fhir/us/physical-activity/STU1
STU Publication of HL7 FHIR® Implementation Guide: Quality Measures, Release 1 STU4 - US Realm: http://hl7.org/fhir/us/cqfmeasures/STU4
Unballoted STU Update Publication of HL7 FHIR® Implementation Guide: Healthcare Associated Infection Reports, Release 1, STU 2.1 —US Realm: http://hl7.org/fhir/us/hai/STU2.1
STU Publication of HL7 Cross Paradigm Specification: Health Services Reference Architecture (HL7-HSRA), Edition 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=632
Errata publication of HL7 CDA® R2 Attachment Implementation Guide: Exchange of C-CDA Based Documents, Release 2 US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=464
Informative Publication of HL7 EHR-S FM R2.1 Functional Profile: Problem-Oriented Health Record (POHR) for Problem List Management (PLM), Edition 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=630
STU Publication of HL7 CDA R2 Implementation Guide: Gender Harmony - Sex and Gender representation, Edition 1 - Component of: HL7 Cross-Paradigm Implementation Guide: Gender Harmony - Sex and Gender representation, Edition 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=633
Informative Publication of HL7 Cross-paradigm Implementation Guide: Gender Harmony - Sex and Gender Representation, Edition 1: http://hl7.org/xprod/ig/uv/gender-harmony/informative1
STU Publication of HL7 FHIR® Implementation Guide: Data Exchange for Quality Measures, Edition 1 STU4 - US Realm: http://hl7.org/fhir/us/davinci-deqm/STU4
STU Publication of HL7 FHIR® Implementation Guide: Human Services Directory, Release 1 - US Realm: http://hl7.org/fhir/us/hsds/STU1
STU Update Publication of HL7 CDA® R2 Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports, Release 4, STU 2.1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=570
STU Update Publication of HL7 FHIR® Implementation Guide: Vital Records Common FHIR Profile Library R1.1: http://hl7.org/fhir/us/vr-common-library/STU1.1
Errata:
I wrongly wrote:
STU Publication of HL7 Cross-Product Implementation Guide: HL7 Cross Paradigm Implementation Guide: Gender Harmony - Sex and Gender Representation, Edition 1: http://hl7.org/xprod/ig/uv/gender-harmony/
This was a copy paste error on my part, sorry. This is an informative publication, not a trial-use publication
STU Update Publication of HL7 FHIR® Implementation Guide: Vital Records Birth and Fetal Death Reporting, Release 1.1: http://hl7.org/fhir/us/bfdr/STU1.1
STU Update Publication of Vital Records Death Reporting FHIR Implementation Guide, STU2.2 - US Realm: http://hl7.org/fhir/us/vrdr/STU2.2
STU Publication of HL7 FHIR® Implementation Guide: Coverage Requirements Discovery, Edition 2 - US Realm: http://hl7.org/fhir/us/davinci-crd/STU2
STU Publication of HL7 FHIR Implementation Guide: minimal Common Oncology Data Elements (mCODE) Release 1 STU 3 - US Realm: http://hl7.org/fhir/us/mcode/STU3
STU Publication of HL7 FHIR® Implementation Guide: Documentation Templates and Rules, Edition 2 - US Realm: http://hl7.org/fhir/us/davinci-dtr/STU2
STU Update Publication of HL7 CDA R2 Implementation Guide: Personal Advance Care Plan (PACP), Edition 1 STU 3.1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=434
STU Publication of HL7 FHIR® Implementation Guide: Protocols for Clinical Registry Extraction and Data Submission (CREDS), Release 1 - US Realm: http://hl7.org/fhir/us/registry-protocols/STU1
Informative Publication of HL7 Informative Document: Patient Contributed Data, Edition 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=638
STU Update Publication of HL7 FHIR® Implementation Guide: Medicolegal Death Investigation (MDI), Release 1.1 - US Realm: http://hl7.org/fhir/us/mdi/STU1.1
STU Publication of HL7 FHIR® Implementation Guide: Prior-Authorization Support (PAS), Edition 2 - US Realm: http://hl7.org/fhir/us/davinci-pas/STU2
FHIR Foundation Publication: HRSA 2023 Uniform Data System (UDS) Patient Level Submission (PLS) (UDS+) FHIR IG, Release 1- see http://fhir.org/guides/hrsa/uds-plus/
HL7 DK Publication: DK Core version 3.0 is now published at https://hl7.dk/fhir/core/index.html
STU Publication of Health Level Seven Arden Syntax for Medical Logic Systems, Edition 3.0: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=639
STU Publication of HL7 FHIR® Implementation Guide: Integrating the Healthcare Enterprise (IHE) Structured Data Capture/electronic Cancer Protocols on FHIR, Release 1- US Realm: http://hl7.org/fhir/uv/ihe-sdc-ecc/STU1
1st Draft Ballot of HL7 FHIR® R6: http://hl7.org/fhir/6.0.0-ballot1
Release of HL7 FHIR® Tooling IG (International): http://hl7.org/fhir/tools/0.1.0
Ballot for the next versions of the FHIR Extensions Pack (5.1.0-ballot1): http://hl7.org/fhir/extensions/5.1.0-ballot/
Ballot for CCDA 3.0.0: http://hl7.org/cda/us/ccda/2024Jan/
This is a particularly important milestone for the publishing process. Quoting from the specification itself:
Within HL7, since 2020, an initiative to develop the same underlying publication process tech stack across all HL7 standards has been underway. The intent is to provide the same look and feel, to leverage inherent validation and versioning, to ease annual updates, and to avoid the unwieldy word and pdf publication process. This publication of C-CDA R3.0 is the realization of that intent for the CDA product family.
Many people have contributed to this over a number of years, and while I'm hesitant to call attention to any particular individuals because of the certainty of missing some others who also deserve it, it would not have got across the line without a significant contribution from @Benjamin Flessner
Informative Publication of HL7 FHIR® Implementation Guide: Record Lifecycle Events (RLE), Edition 1: http://hl7.org/fhir/uv/ehrs-rle/Informative1
STU Update Publication of HL7 FHIR® Implementation Guide: Patient Cost Transparency, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-pct/STU1.1
STU Publication of HL7 FHIR® Implementation Guide: PACIO Personal Functioning and Engagement, Release 1 - US Realm: http://hl7.org/fhir/us/pacio-pfe/STU1
STU Publication of HL7 FHIR® Implementation Guide: Payer Data Exchange (PDex), Release 2 - US Realm: http://hl7.org/fhir/us/davinci-pdex/STU2
STU Publication of HL7 FHIR® Implementation Guide: Member Attribution List, Edition 2- US Realm: http://hl7.org/fhir/us/davinci-atr/STU2
STU Publication of HL7 FHIR® Implementation Guide: PACIO Advance Directive Interoperability, Edition 1 - US Realm: http://hl7.org/fhir/us/pacio-adi/STU1
STU Publication of HL7 FHIR® R4 Implementation Guide: QI-Core, Edition 1.6 - US Realm: http://hl7.org/fhir/us/qicore/STU6
STU Update Publication of HL7 CDA® R2 Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports, Release 4, STU 2.2 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=570
Interim Snapshot 5.1.0-snapshot1 of the Extensions package (hl7.fhir.uv.extensions#5.1.0-snapshot1) has been published to support publication requests waiting for a new release of the extensions package @ http://hl7.org/fhir/extensions/5.1.0-snapshot1/
STU Publication of HL7 FHIR® Implementation Guide: C-CDA on FHIR, STU 1.2.0 - US Realm: http://hl7.org/fhir/us/ccda/STU1.2
STU Update Publication of HL7 CDA® R2 Implementation Guide: National Healthcare Safety Network (NHSN) Healthcare Associated Infection (HAI) Reports for Long Term Care Facilities (HAI-LTCF-CDA), Release 1, STU 1.2 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=546
STU Publication of HL7 CDS Hooks: Hook Library, Edition 1: https://cds-hooks.hl7.org/
STU Publication of HL7 FHIR® R5 Implementation Guide: Adverse Event Clinical Research, Edition 1: http://hl7.org/fhir/uv/ae-research-ig/STU1
STU Update Publication of HL7 FHIR® Implementation Guide: Digital Insurance Card, Release 1 - US Realm: http://hl7.org/fhir/us/insurance-card/STU1.1/
STU Publication of HL7 FHIR® R4 Implementation Guide: Adverse Event Clinical Research R4 Backport, Edition 1: http://hl7.org/fhir/uv/ae-research-backport-ig/STU1
STU Update Publication of HL7 FHIR® Implementation Guide: Central Cancer Registry Reporting Content IG, Edition 1- US Realm: https://hl7.org/fhir/us/cancer-reporting/STU1.0.1
STU Publication of HL7 FHIR® Implementation Guide: SMART Application Launch Framework, Release 2.2: http://hl7.org/fhir/smart-app-launch/STU2.2
STU Publication of HL7 FHIR® Implementation Guide: Pharmaceutical Quality (Industry), Edition 1: http://hl7.org/fhir/uv/pharm-quality/STU1
STU Publication of HL7 FHIR® US Core Implementation Guide STU 7 Release 7.0.0 - US Realm: http://hl7.org/fhir/us/core/STU7
STU Publication of HL7 Version 2.5.1 Implementation Guide: Laboratory Orders from EHR (LOI) Edition 5 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=152
STU Publication of HL7 Version 2.5.1 Implementation Guide: Laboratory Results Interface (LRI), Edition 5 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=279
Ok, a significant milestone has been reached with two new publications:
STU Publication of the HL7 FHIR® R4 Implementation Guide: Electronic Long-Term Services and Supports (eLTSS) Edition 1 STU2 - US Realm: http://hl7.org/fhir/us/eltss/STU2
STU Publication of HL7 CDA® R2 Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports for Antimicrobial Use in Long Term Care Facilities (AULTC), Edition 1.0, STU1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=646
STU Publication of HL7 FHIR® Implementation Guide: Central Cancer Registry Reporting Content IG, Edition 1- US Realm: http://hl7.org/fhir/us/central-cancer-registry-reporting/STU1
STU Publication of HL7 FHIR® Implementation Guide: Using CQL With FHIR, Edition 1: http://hl7.org/fhir/uv/cql/STU1
STU Publication of HL7 FHIR® Implementation Guide: Canonical Resource Management Infrastructure (CRMI), Edition 1: http://hl7.org/fhir/uv/crmi/STU1
STU Publication of HL7 FHIR® Implementation Guide: Value Based Performance Reporting (VBPR), Edition 1 - US Realm: http://hl7.org/fhir/us/davinci-vbpr/STU1
STU Update Publication of HL7 FHIR® R4 Implementation Guide: At-Home In-Vitro Test Report, Edition 1.1: http://hl7.org/fhir/us/home-lab-report/STU1.1
STU Publication of MCC eCare Plan Implementation Guide, Edition 1 - US Realm: http://hl7.org/fhir/us/mcc/STU1
Normative Reaffirmation Publication of HL7 Version 3 Standard: Event Publish & Subscribe Service Interface, Release 1 - US Realm and HL7 Version 3 Standard: Unified Communication Service Interface, Release 1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=390 or https://www.hl7.org/implement/standards/product_brief.cfm?product_id=388
Normative Reaffirmation Publication of HL7 Version 3 Standard: Regulated Studies - Annotated ECG, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=70
Normative Reaffirmation Publication of Health Level Seven Arden Syntax for Medical Logic Systems, Version 2.10: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=372
Normative Reaffirmation Publication of HL7 Healthcare Privacy and Security Classification System, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=345
Normative Reaffirmation Publication of HL7 EHR Clinical Research Functional Profile, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=16
Normative Reaffirmation Publication of HL7 EHR Child Health Functional Profile, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=15
Normative Reaffirmation Publication of HL7 Version 3 Standard: XML Implementation Technology Specification - Wire Format Compatible Release 1 Data Types, Release 1 and HL7 Version 3 Standard: XML Implementation Technology Specification - V3 Structures for Wire Format Compatible Release 1 Data Types, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=357 and https://www.hl7.org/implement/standards/product_brief.cfm?product_id=358
Normative Reaffirmation Publication of HL7 Version 3 Standard: Privacy, Access and Security Services; Security Labeling Service, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=360
Reaffirmation Publication of HL7 Version 3 Implementation Guide: Context-Aware Knowledge Retrieval Application (Infobutton), Release 4: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=22
Normative Publication of HL7 FHIR® Implementation Guide: FHIR Shorthand, Edition 3.0.0: http://hl7.org/fhir/uv/shorthand/N2
STU Publication Request for HL7 FHIR® Implementation Guide: Medication Risk Evaluation and Mitigation Strategies (REMS), Edition 1- US Realm: http://hl7.org/fhir/us/medication-rems/STU1
Normative Reaffirmation Publication of HL7 Cross-Paradigm Specification: FHIRPath, Release 1: http://hl7.org/FHIRPath/N2
STU Update Publication of HL7 FHIR® Implementation Guide: Security for Registration, Authentication, and Authorization (FAST), Edition 1 - US Realm: http://hl7.org/fhir/us/udap-security/STU1.1
Informative Publication of HL7 Guidance: AI/ML Data Lifecycle, Edition 1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=658
Unballoted STU Update of HL7 FHIR® Implementation Guide: SDOH Clinical Care, Release 2.2 - US Realm: http://hl7.org/fhir/us/sdoh-clinicalcare/STU2.2
Normative Publication of HL7 Clinical Document Architecture R2.0 Specification Online Navigation, Edition 2024: https://hl7.org/cda/stds/online-navigation/index.html
Normative Publication of Health Level Seven Standard Version 2.9.1 - An Application Protocol for Electronic Data Exchange in Healthcare Environments: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=649
STU Publication of HL7 FHIR® Implementation Guide: Vital Records Common Library, Edition 2 - US Realm: http://hl7.org/fhir/us/vr-common-library/STU2
Normative Retirement Publication of HL7 V3 Patient Registry R1, Person Registry R1, Personnel Management R1 and Scheduling R2: Patient Registry, Person Registry, Personnel Management and Scheduling.
STU Publication of HL7 FHIR® Implementation Guide: Vital Records Birth and Fetal Death Reporting, Edition 2 - US Realm: http://hl7.org/fhir/us/bfdr/STU2
STU Publication of HL7 FHIR® Implementation Guide: Prescription Drug Monitoring Program (PDMP), Edition 1 - US Realm: http://hl7.org/fhir/us/pdmp/STU1
Normative Retirement Publication of HL7 Version 3 Standard: Security and Privacy Ontology, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=348
STU Publication of HL7 FHIR® Implementation Guide: Vital Records Death Reporting (VRDR), Edition 3 - US Realm: http://hl7.org/fhir/us/vrdr/STU3
STU Update Publication of HL7 FHIR® Implementation Guide: Personal Health Device (PHD), Release 1.1: http://hl7.org/fhir/uv/phd/STU1.1
tx.fhir.org is not responding
working for me
trying again. thanks
Is tx.fhir.org down? I don't seem to be able to reach it.
it is. restarted it
Still down?
Terminology server http://tx.fhir.org
Exception in thread "main" org.hl7.fhir.exceptions.FHIRException: Unable to connect to terminology server. Use parameter '-tx n/a' to run without using terminology services to validate LOINC, SNOMED, ICD-X etc. Error = Error fetching the server's capability statement: Connect timed out
It appears to still be down for me as well
back
tx.fhir.org appears to be down. I'm checking if I can restart it; @Grahame Grieve
I'm restarting it
tx.fhir.org seems to be down again.
I'll check it now.
Hmm. The server itself appears to be offline. I can't get a remote connection to it - and without that, I can't do anything. I think this probably will require @Grahame Grieve or @David Otasek or @Mark Iantorno to restart the server VM instance.
@Chris Moesel
Thanks for trying, @Rob Hausam!
@Grahame Grieve appears to be online now.
Nah. Phone only. I’ll have a look when I get back to the hotel in a few hours
Is the server still down? Or is there something else going on...
Bad gateway 502
at tx.fhir.org
yup. working on it
back
I'm getting 502s when trying to connect to tx.fhir.org.
Seems to be responsive now?
Yeah, it's back for me.
tx or tho might not be fully populated?
A definition for CodeSystem 'http://terminology.hl7.org/CodeSystem/icd9cm' could not be found, so the code cannot be validated
that's because you should be using http://hl7.org/fhir/sid/icd-9-cm
I'll change it, but I didn't see that before
did you consider looking in THO? https://terminology.hl7.org/ICD.html
I'm sure I found it somewhere before. I would not have known to use terminology.hl7.org at all.
what would you know to use? How do we get people to look in the right place? Where else would you look?
I note I fixed this in a different IG back in January. so I guess I have been chastised more than once now
I better go look at my other IGs.
the IG Publisher seems to be getting a new response from tx.fhir.org for codeSystems that are not in tx.fhir.org. For example, with an inaccessible codeSystem (proprietary) I am now getting an ERROR vs the WARNING that I used to be able to put into ignorewarnings.txt
Error reading Http Response from http://tx.fhir.org/r4: Error parsing JSON source: Unexpected char '<' in json stream at Line 1 (path=[null])
clear your txcache
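(That usually means deleting the local terminology cache before re-running the publisher; assuming the default layout where the cache lives in a txCache folder in the IG root, something like:)
rm -rf txCache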
yup... :-( sorry
restarting the server again to deal with what was seen in the last crash
back
Grahame Grieve said:
what would you know to use? How do we get people to look in the right place? Where else would you look?
It would be really nice if the old links like http://hl7.org/fhir/icd.html, http://hl7.org/fhir/R4/icd.html, http://hl7.org/fhir/R4/snomed.html redirected to the new less-memorable places at THO
do you have a full list?
(no, but for a partial list, I'd love it if https://build.fhir.org/snomedct-usage.html could link prominently to https://terminology.hl7.org/SNOMEDCT.html)
you can make a PR to do that one
https://hl7.org/fhir/R4/terminologies-systems.html#4.3.0 The comment column of the table of external vocabularies seems like a good source of the special-purpose pages we used to have
I can also make a list of the pages that do exist, but changing any of them is a pretty big deal - a technical correction.
the pages that don't exist anymore, those I can just fix, but that's why I wondered if there was a list
Yeah, the /R4/ ones I think are fine, because they are fixed to the FHIR version. Rather it's the unversioned ones from exactly that column
well, technically, they are fixed to R5
The ones that used to exist and are listed in that table at /fhir/r4 and don't exist in / ... those URLs should be added as redirects in /fhir
:point_up: I mean http://hl7.org/fhir/R4/icd.html should not change, but http://hl7.org/fhir/icd.html should redirect to https://terminology.hl7.org/ICD.html rather than giving a 404
If you were feeling generous, then yes, http://hl7.org/fhir/R5/icd.html could also redirect to https://terminology.hl7.org/ICD.html
the server is restarting again to fix an issue with R2 support
26 messages were moved here from #committers/announce > tx.fhir.org by David Pyke.
ok @Michael Lawley I think they do now
I feel like I should know this, but what are the intended terms of use for tx.fhir.org? Is it intended only for testing and supporting things like IG builds, or for more than that (production use-cases)?
@Grahame Grieve
testing, validation, IG publication. It's not for supporting unit testing or production usage. The terms of service include that I can take it down at any time without warning (and that it can do that to itself :sad:)
Would production usage that subscribes to changes to code systems or value sets to update a different production tx server be reasonable?
(With allowance for the fact that it's only a "moderately available" service)
Would production usage that subscribes to changes to code systems or value sets to update a different production tx server be reasonable?
I don't know what this means
If Canada were to have a tx server intended for production use that subscribed to tx.fhir.org to retrieve updates to certain value set definitions or code system definitions, would that be a violation of usage terms?
I don't think that tx.fhir.org supports subscriptions like that
but if it did, no it wouldn't be
Is the content loaded in from packages?
yes
and some external terminologies
There should be some way for downstream servers targeted to production uses to leverage the loading that we do with tx.fhir.org. Saying that everyone needs to repeat that effort doesn't scale well.
if you run a clone of tx.fhir.org using the same software, you just point it at the same config. No subscription needed.
if you're running a different piece of software... well, then, the terminologies supported by tx.fhir.org fall into two categories. The first is large external terminologies, which are supported using custom formats; other terminology servers have their own approach for this, so there's nothing to reuse. All other terminology content is delivered using packages, so just load the latest version of each package; the tx.fhir.org config is available if you want the list of relevant packages.
so I haven't been thinking about this problem at all
National terminology services do think about this, and it's a governance question; they're hardly going to listen to what we do on tx.fhir.org
Pretty much where AU lands is: where possible, relevant FHIR terminology content is loaded from packages; SNOMED and LOINC are special, so they get loaded from native formats; and then there's a bunch of FHIR resources that (currently) exist outside of IGs, so there's config that points at them directly.
All of this is built around an open extension to the ATOM syndication file format (also adopted by SNOMED International).
So, just like tx.fhir.org, it's all config based and available for any conforming system to consume the same config, but different governance requirements lead to different config.
The format is documented here: https://www.healthterminologies.gov.au/specs/v3/conformant-server-apps/syndication-api/syndication-feed/
Hi Folks,
New to this chat, so saying to all :)
I have a couple of questions regarding setting up and running a clone of tx.fhir.org (https://confluence.hl7.org/display/FHIR/Running+your+own+copy+of+tx.fhir.org).
I've been able to set this service up and run it in console mode on a Windows server; I also have a Docker service that is running the Inferno Validator (https://github.com/inferno-framework/fhir-validator-wrapper)
One curious thing that came up: when I point my local validator at my local terminology server, I get these odd "Access violation" errors that I don't get when I point it at tx.fhir.org.
For example, certain Observation validations silently fail, e.g.:
Validate Observation against http://hl7.org/fhir/StructureDefinition/bodyweight|4.0.1.. ..Access violation
Validate Observation against http://hl7.org/fhir/StructureDefinition/vitalsigns|
Validate Observation against http://hl7.org/fhir/StructureDefinition/bodyweight|4.0.1.. ..Access violation
And there are some Access violations that aren't thrown in the logs, but are surfaced via the OperationOutcome, e.g.:
....
"severity": "error",
"code": "code-invalid",
"details": {
"text": "Error from http://10.0.0.209:8099/r4: Access violation"
},
"expression": [
"Bundle.entry[6].resource.medication.ofType(CodeableConcept).coding[0]"
]
....
When using tx.fhir.org, I never see any of these Access violations... I figure it's something to do with my local configuration; alas, in my web searching, I've not been able to find anything.
I created the below composite screen shot to illustrate the config (via the fhirconsole exec) :
terminology_server_config.png
Question: what am I missing configuration-wise? Happy to provide any further artefacts that may be of interest to review.
I probably should add why I'm standing up a clone of this; it's for a Canadian project, and there are very strict rules about network traffic leaving (and entering) the private data center, hence the need to have a local copy so nothing related to any FHIR submissions being validated leaves the "four walls" of the data center.
My second question is around building a Docker service. My initial work with the Windows service was to get a 'feel' for this service. I have been able to build a Docker container using the provided source code (https://github.com/HealthIntersections/fhirserver) and, post build, I was happy to see it start; however, I was faced with an odd exception about a missing lang file - I believe this is the lang.dat file that ships with the Windows service.
]:/opt/fhir_tx_server/fhirserver-master# docker container logs --tail 1000 fhir_tx_server
07:16:40 00:00:00 1815b 0% FHIR Server 3.4.3 Linux/FreePascal, Development Build
07:16:40 00:00:00 1911b 0% Running on "f3920d21f4a4": "Ubuntu" v"22.04.4 LTS (Jammy Jellyfish)". 20.7 GB/ 0 bytes memory
07:16:40 00:00:00 1981b 0% Logging to /tmp/fhirserver.log. No Debugger.
07:16:40 00:00:00 2091b 0% /work/fhirserver/exec/64/fhirserver -cmd console -cfg /config/config.ini -local /terminology (dir=/work/fhirserver)
07:16:40 00:00:00 2091b 0% Command Line Parameters: see https://github.com/HealthIntersections/fhirserver/wiki/Command-line-Parameters-for-the-server
07:16:40 00:00:00 1793b 0% Loading Dependencies
07:16:40 00:00:00 282kb 0% TimeZone: UTC @ UTC
07:16:40 00:00:00 282kb 0% Loaded
07:16:40 00:00:00 282kb 0% Local config: /config/config.ini (exists = False)
07:16:40 00:00:00 282kb 0% Actual config: /work/fhirserver/exec/64/fhirserver.cfg
07:16:40 00:00:00 315kb 0% Using Configuration file /work/fhirserver/exec/64/fhirserver.cfg
07:16:40 00:00:00 316kb 0% Start Telnet Server on Port 44123
07:16:41 00:00:01 328kb 0% Run Number 40
07:16:41 00:00:01 328kb 0% Load Terminologies
07:16:41 00:00:01 1119kb 0% Error starting: EFslException: Unable to find the lang file ""
07:16:41 00:00:01 1119kb 0% stopping:
07:16:41 00:00:01 1119kb 0% close web server
07:16:41 00:00:01 1119kb 0% stop internal thread
07:16:41 00:00:01 1119kb 0% stop web server
07:16:41 00:00:01 1119kb 0% closing
07:16:41 00:00:01 1119kb 0% stopped
07:16:41 00:00:01 1119kb 0% Exception [EFslException] in Service Execution: Unable to find the lang file ""
07:16:40 00:00:00 321kb 0% Thread start Telnet Server 00007B372B366640
07:16:41 00:00:01 327kb 0% Thread Finish Telnet Server
I figure it's a matter of importing that lang.dat file; I'm just not entirely familiar with the project setup and the Dockerfile, so I'm not sure where that file should go.
Is it just in the config/ directory ?
eg:
image.png
Any other helpful hints folks here might know on building this docker container ? Any pointers to documentation will be highly appreciated :)
it's in the exec/pack folder, and the files in there need to go in the same folder as the server
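(A sketch, not from the thread: assuming the layout shown in the log above, where the server binary runs from /work/fhirserver/exec/64/, one way to get the pack files in place is a Dockerfile line like:)
COPY exec/pack/ /work/fhirserver/exec/64/
(adjust the target path to wherever your image actually puts the fhirserver executable)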
Tx server is down. @Grahame Grieve @Rob Hausam @Mark Iantorno
Lynn Laakso has marked this topic as resolved.
Lynn Laakso has marked this topic as unresolved.
of course it goes down while I'm on a 17-hour flight
back
WARNING: Running without terminology server - terminology content will likely not publish correctly
curl: (22) The requested URL returned error: 502
Offline (or the terminology server is down), unable to update. Exiting
I have restarted the server. It should be up again soon.
sorted, thanks
Hi Everyone,
New to this chat.
Working on running our own copy of tx.fhir.org on an EC2 instance and have all the configuration in place.
Previously we were running version 3.3.10.
We are trying to get to the latest version, 3.4.6, but are running into the error below:
2kb 2% FHIR Server 3.4.6 Windows/FreePascal, Production Build
14:03:20 00:00:00 1283kb 2% Running on "EC2AMAZ-1D44V9M": Windows NT 6 [6.2.9200]. 15.8 GB/ 34.5 GB memory
14:03:20 00:00:00 1283kb 2% Logging to C:\ProgramData\TerminologyServer\logs\tx-server.log. No Debugger. No Leak Dialog
14:03:20 00:00:00 1283kb 2% FHIR Server running as a Service
14:03:20 00:00:00 1283kb 2% Command Line Parameters: see https://github.com/HealthIntersections/fhirserver/wiki/Command-line-Parameters-for-the-server
14:03:20 00:00:00 1282kb 2% Loading Dependencies
14:03:20 00:00:00 1651kb 2% TimeZone: America/New_York @ -04:00
14:03:20 00:00:00 1651kb 2% Loaded
14:03:20 00:00:00 1651kb 2% Local config: C:\Program Files\TerminologyServer\fhirserver.ini (exists = True)
14:03:20 00:00:00 1651kb 2% Actual config: C:\ProgramData\TerminologyServer\tx-server-config.cfg
14:03:20 00:00:00 1689kb 2% Using Configuration file C:\ProgramData\TerminologyServer\tx-server-config.cfg
14:03:20 00:00:00 1691kb 2% Start Telnet Server on Port 44123
14:03:20 00:00:00 1698kb 2% Thread start Telnet Server 000015A4
14:03:20 00:00:01 1714kb 12% Run Number 1
14:03:20 00:00:01 1714kb 12% Load Terminologies
14:03:21 00:00:01 4Mb 96% load ucum-essence-2.0.1 from C:\ProgramData\TerminologyServer\ucum-essence-2.0.1.xml
14:03:21 00:00:01 5Mb 96% load loinc_274_a from C:\ProgramData\TerminologyServer\loinc_274_a.cache
14:03:21 00:00:01 5Mb 96% Error starting: ESQLite3Error: fdb_sqlite3_objects error: file is not a database
We were able to get 3.4.1 working with the current config
This is the config for SQLite
what's the size of loinc_274_a.cache?
try deleting it. If it still doesn't work, can you open that file in the sqlite db browser?
Deleted the config for the loinc and that part worked. Is the loinc cache required for the tx server ?
yes
it'll be redownloaded
Just to clarify: I'll remove the source (LOINC file reference) from the config file for loinc_274 and it will re-download?
image.png
yes
I'll try that. Thank you!
Trial 1:
We tried removing the source file under the LOINC config, but that didn't help. Got the error below:
11:28:41 00:00:01 5Mb 100% Error starting: EFslException: Unable to find the loinc file ""
Trial 2:
We removed the source and also tried specifying version 2.78, but it didn't download anything. The server started fine and didn't return any error, but there was no log entry w.r.t. LOINC - it seemed to skip the LOINC step.
Will you be able to provide an example of what the config looks like when it can auto-download the LOINC cache? And if we want to download the latest version of LOINC (2.78), how do we specify the version in the config file?
Thank you.
I don't see that we've processed LOINC 2.78 yet
We tried removing the source file under the LOINC config, but that didn't help. Got the error below:
I guess you can't do that
LOINC 2.78 is now available (as of early this morning - my time).
@Chirag Kular @Grahame Grieve
Hello Everyone!
I am validating a mCode Primary Cancer Condition profile (with Inferno validator and locally hosted copy of tx.fhir.org) which resulted in this:
"Error from <terminology server> : Error:A definition for the value Set 'http://hl7.org/fhir/us/mcode/ValueSet/mcode-primary-cancer-disorder-vs|3.0.0' could not be found."
Though the Condition.code validates fine, the missing value set definition fails the resource. I tried to $expand mcode-primary-cancer-disorder-vs and it's not found.
So I tried loading the mCode IG by adding "hl7.fhir.us.mcode#3.0.0" under the r4 packages in tx-server-config.cfg. For some reason, as soon as I start the server with this config, the "hl7.fhir.us.mcode#3.0.0" entry is automatically removed from the config before the terminologies are loaded.
How do I load the mCode IG? Am I missing something else?
Thanks for any help!
if you are running a local copy of tx.fhir.org, it will be exactly the same as the master; you can't change the setup. You have to clone the setup if you want to do that
but why is it trying to validate that on the server? Usually, the validator would handle that internally - can you change what inferno loads? (ask on #inferno?)
Thank you. I have asked on #inferno.
Running the production release with local config file (of course, the first load was with zero config pulling the terminologies from tx.fhir.org).
I assumed I should be able to add or remove the packages in the local config file as needed.
If I need to load any other published IG, can't I do that with the local config?
I've completed a draft of IPS-AU - the Australian IPS. I wrote it to support IPS Adoption in Australia; hopefully it will become an HL7 Australia spec, but I also wrote it to show countries how I think this should be done
The spec is here:
it's very literally a profile in IPS that does nothing but say, 'not only must this resource conform to IPS, it must also conform to the Australian Core profiles'
the release of the validator that's about to come out supports this fully with the parameter -ips:au - just that parameter and the document file name, and it'll validate properly
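(So, presumably, something like the following - the file name is illustrative only:)
java -jar validator.jar -ips:au my-ips-document.json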
(so far the only additional constraint I've found from AU Core is that Condition.category is mandatory in AU Core and it isn't in IPS)
I can add support for ips:XX easily - countries just have to point me at the IPS profiles they wish to use in that case
I'd be interested to see how practical the bindings are on Medication.code - if they're restricted to SNOMED CT codes that are in the IPS Terminology, that may not provide sufficient coverage. Certainly, in NZ not all of our NZMT codes map to SNOMED CT medicinal products, so the alternative is to incur the wrath of the Validator or put the description in the text element of the CodeableConcept.
where would the wrath of the validator come from?
Code System URI 'http://nzmt.org.nz' is unknown so the code cannot be validated. Actually, that generates an information message, so not really wrath-inducing. :)
The meds in my own IPS instance do map to SNOMED MP concepts in the IPS Terminology, so it validates fine against both the IPS and NZ Base IGs - but I'd get the above message if I slipped in an NZMT concept.
that generates an information message, so not really wrath-inducing. :)
hah. Might still produce some wrath :grinning:
Only reference elements with constrained referents in IPS are constrained in the new IG.
For example Allergy Intolerance (AU IPS).patient is constrained but Allergy Intolerance (AU IPS).encounter is not.
AU Core Allergy Intolerance.encounter is constrained to AU Core Encounter.
Is this intended?
it is intended, but it's certainly something to discuss
Must support is interesting here. AU IPS does not require systems to support all of the things that AU Core does. That might be useful, but it needs to be made clear.
how does it not do that?
The snapshot in https://build.fhir.org/ig/HealthIntersections/au-ips/StructureDefinition-AllergyIntolerance-au-ips.html does not have any must supports beyond those in AllergyIntoleranceUvIps.
So I take that as saying that MS is not incorporated from AU Core AllergyIntolerance
you shouldn't
Interested in people's thoughts on the use of the _summary=data request parameter which (if the Server implements it - not all do) results in all of the text elements (other than those within other elements such as CodeableConcept) being removed from a response. Should this be permissible within an IPS (or any Composition.section)?
no it shouldn't work in that context
Most of the IPS-AU profiles only restrict references to Patient to be references to au-core-patient (and likewise for some Practitioner, Org etc.). Most national profiles would do the same; I looked at NL, DE, UK, US a while ago and a lot of profiling is done on Patient, Practitioner, Organization and much less on others, except restricting those to xx-core-patient etc. That leads to a proliferation of profiles which all do more or less the same. Would it not be easier to state at IG level: all references to patient in this IG must be to xx-core-patient?
And thinking further: does it even make sense to restrict references in say Condition or Observation to be references to xx-core-patient? The Patient itself is already profiled to be only xx-core-patient, so the ecosystem behind the IG only allows xx-core-patients. Since all patients in the ecosystem already will be xx-core-patient, why add numerous profiles which only state that references to Patient must be to xx-core-patient - which all patients already will be.
@Grahame Grieve. In that case, should servers ignore the Summary Type parameter or reject that request with an OperationOutcome stating that the parameter is invalid within the context?
One of the two. I don’t know which
@Marc de Graauw this is a good question but I'm deferring to IPS, which doesn't have a position on this. I'll do whatever IPS does. @John D'Amore
For the time being, I've decided to ignore it if the operation is $summary.
It looks like we need to decide what IPS will do with this. I'm happy to be corrected if all or part of what I say here is incorrect, but I believe that the situation that @Marc de Graauw describes where "the ecosystem behind the IG only allows xx-core-patients" would be true only if the IPS or IPS-AU Patient profile is based on xx-core-patient and is declared as 'global' in the IG. And at present neither IPS or IPS-AU are declaring any global profiles. So, unless that changes, I think if we want to ensure that the xx-core-patient profile will be used throughout the IG, then each reference to Patient in the other resource profiles within the IG will need to be explicitly constrained to xx-core-patient. We can certainly discuss what we think this should be in IPS - either (or both) on an upcoming call, or possibly have some initial discussion on it in our IPS quarter tomorrow Q3 in EHR (if there is time).
@John D'Amore @Grahame Grieve
Well, let’s keep away from “global” which is potentially problematic and just confine our language to what’s in the ips. Would we say that we want only one patient resource in the ips? I don’t think we can actually say that. But we could say that if there’s any patients they must be ips patients. Would we say that for all the other profiled resources?
If we want to say that, I’ll think about how to say that
Ok. And that is the question. We avoided specifically saying that before - I think to avoid being overly (unnecessarily) constraining. We can revisit that. And happy to hear how you think we can/should say that, if we do.
And I agree with keeping away from global profiles.
I think if we want to ensure that the xx-core-patient profile will be used throughout the IG, then each reference to Patient in the other resource profiles within the IG will need to be explicitly constrained to xx-core-patient
Not sure that is really true. In effect, the IPS IG describes a Bundle with a patient's IPS. If the Patient in the Bundle is xx-core-patient, and the IG for country xx requires the Patient in xx-IPS to be xx-core-patient, where is the need to ensure that xx-core-patient is used throughout the IG?
The drawback in lots of profiles which only constrain Reference to Patient, is that implementers still have to look at all profiles just to see what's in there. "Oh, just ref to xx-core-patient, which we already have, next one". We had a lot of trouble with proliferation of templates in CDA, which made it really hard on implementers. The situation with profiles is much better, since they have diffs, but it still seems unnecessary overhead. Happy to discuss this further.
We do constrain Composition.subject as 1..1 Patient (IPS). And, given the FHIR Document rules, that does effectively constrain the references in the Bundle, as you say. So I agree with you on that. And maybe your earlier question of "why add numerous profiles which only state that references to Patient must be to xx-core-patient" also makes sense. But since I don't think we have (or will have) any profiles that "only state that references to Patient must be to xx-core-patient", I don't know if that's actually an issue?
that does effectively constrain the references in the Bundle, as you say
I don't think this is true. I can just add another patient resource, and reference that
or I can just reference it and not add it to the IPS
nothing about IPS says that the other resources have to reference the same resource as Composition.subject
perhaps everyone just assumes that, but we should decide whether it's true and say it explicitly if we do
Fair enough. You could do that, since the Bundle profile doesn't impose any constraints on Patient. So that's getting back toward my original thought. And, particularly in light of this discussion, we should try to make it explicit.
You guys are making my neurotransmitters salivate ... maybe this will become the first FHIR thing where I actually get into the weeds :smile:
See you in Q1!
In the Dutch PS we do state that it's about a single patient. I think in the CDA IPS it will almost by default be about a single patient (does CDA even allow multiple patients in a single doc?). ISO 27269:2021 does not have cardinalities, but just a single required patient section. So it's fair to make explicit that the FHIR IPS is also about a single patient, which I believe should be true. (Would not be true if someone added say family history or child delivery to the IPS, but that's not the case now.)
It seems reasonable to me, too, to make it clear and specify in the profile(s) that the iPS document is for a single patient. If someone ends up coming forward with a credible use case for a "multi-patient IPS", then I expect we could consider that. But that's a separate use case which I'm guessing probably isn't very likely - and if it would happen, it would make sense for that to be a different profile.
In the Dutch PS we do state that it's about a single patient.
is that the same as 'only one patient resource'? What about transplants etc
does CDA even allow multiple patients in a single doc?
sure does, at every level
Yes, in the Dutch PS we state it's about a single patient. We don't handle transplants (well, as a general Procedure but not with donor). Should have looked up CDA, silly question from me
about a single patient
is not quite the same as 'only one patient resource' - is that an explicit rule?
It's explicit in our functional docs. In FHIR we don't really say anything about it, but our PS is not a FHIR Document, just a bunch of queries which will constitute a PS. A bit like what $summary does, but with queries. None would normally return another Patient - at least not within what we specified in the functional specs. It's imaginable (and probably real) that, say, a caesarean would reference another Patient - not _include it though.
The EN ISO 27269 standard defines the IPS as an electronic patient summary, which is defined as an electronic health record extract. A patient summary is defined as a health record extract [...] of a subject of care's health information and healthcare. With "a subject of care" it explicitly excludes more subjects of care. There are no references to other patients in the hierarchy of (up to) seven levels of data elements within an IPS section, so according to the EN ISO IPS standard there should be no reference to other patients in an IPS. Of course, the IPS is non-exhaustive, so additional information could be added, but it is also minimal, which means that there should be a well identified need (specialty-agnostic and condition-independent) to include references to other patients.
In short: In my mind it would be safe to assume that only one patient resource will appear in an IPS bundle.
How about with maternity cases where the mother and unborn child are both present?
Should the unborn child(ren) be included? Along with the data relevant to those too? (medications/treatments of the unborn)
Other than pregnancy status and brief history, I don't see IPS as the appropriate medium for exchanging detailed information about maternity cases. Certainly not based on my experiences in working on 3 different maternity systems during my career.
(Just a question only, not a request)
If the subject of the procedure or medication administration is a group (and you can't have more than 1 patient without using Group), then I would guess these procedures would belong to the patient summaries of all those patients, and there would be no indication of who the other patients in that group were?
If that is the correct logic for groups, then it would also apply in case one patient is inside the other.
How about with maternity cases where the mother and unborn child are both present?
I think the conclusion I draw from this is that they are involved in each other's care and likely to appear as RelatedPerson in each other's summary, but not as Patient
And therefore we should be explicit about this: 1 patient resource only in the IPS
So the mother's IPS shouldn't have anything that is associated with the unborn patient resource (that isn't directly related to her patient resource).
What about Observation where she's the subject but not the focus?
We need to discuss and further think through the pregnancy situation (particularly current pregnancy, and also some pregnancy history details potentially) for IPS. That may lend to needing to revise the proposed "1 patient resource only" rule. The CHOICE group (which I participate in) may have some thoughts on this.
RelatedPerson in that case, surely?
As an obstetrician, I can tell you that clinically, the line between the maternal and fetal patient is very blurry. Quick example: who does the placenta belong to? However, if I understand IPS (and I probably don't), I think that @Peter Jordan is probably on the right track re: pregnancy status and brief history. I think there is a working group in HL7 working on modelling the maternal / fetal relationship(s); they have likely spent some time thinking through this stuff and may have some insight.
EDIT: oops... I missed @Rob Hausam's comment as he already pointed out the CHOICE group. I would love to hear his thoughts, and CHOICE's thoughts.
@Carl Severson I also agree that @Peter Jordan's comments are on the right track generally. But at the moment I am leaving open the possibility that in some cases there might be a need for some (not all) further details regarding pregnancy and the fetus in the patient summary context. The CHOICE call originally scheduled for Sept 26 needed to be cancelled, and the next one will be Oct 10. I should be able to bring this up then, and we could consider having some broader Zulip or email discussion beforehand.
I agree @Rob Hausam, there are likely cases where further details about the maternal patient, fetus, or the dyad, are needed. Just hard to figure out what those are! I have been stalking the CHOICE meeting minutes and keep an eye on the google doc, thanks for doing this important work.
as a follow-up to this, beyond Patient, can we say that any Device, Medication, Organization, Practitioner, and PractitionerRole present in an IPS are required to conform to the IPS library profiles?
So far (as of the IPS STU 1.1 publication), we do specify but do not require the use of the IPS library profiles for a conformant IPS document instance. The Bundle entry and Composition section.entry slices specify the IPS profiles (where they exist - not all of them do for a few of the optional sections). But the slicing rules are open. And the element level constraints on section.entry are to the base resource(s) plus also DocumentReference, which allows an IPS document to be a conformant IPS instance even if it contains a resource which is of the appropriate type but where that resource instance doesn't conform to the IPS-specific profile.
Whether that's what we should continue to do in the upcoming IPS 2.0 ballot and publication is another matter. If we would want to consider changing this, now would be the time to do it. So feedback on that is definitely welcome.
Right. it's allowed outside what's explicitly profiled. I believe that we should be clear in a way that the validator can test: all the resources have to conform to the profiles defined by IPS if it profiles the resources
Would you allow for a DocumentReference "exception" - e.g., someone wants to use a PDF (rather than structured data in a resource) for one or more of the entries?
I'm not following how that would make sense. not allowed now, right?
DocumentReference for the entries is explicitly included in the Composition section.entry slices, and it's always allowed anyway by the open slicing in the Composition and Bundle profiles. That was the outcome of some earlier discussions. I don't think anyone has actually asked for or implemented that, as far as I know - but the idea was about allowing as much flexibility as (reasonably?) possible and giving guidance (not requirements) on how to do it, particularly when an implementation may need to deal with legacy data. But, again, maybe it's time to revisit this?
@Grahame Grieve
well, i see that the IPS doesn't constrain DocumentReference, nor does it discuss the ramifications of providing a document reference in the section entries. I suppose that the intent is that we at least say that the document reference points at a logical equivalent of the logical entry alternatives
I hope we can do that. And that means that a document reference can't be used for one of the kinds I listed above, right?
@Grahame Grieve it appears as though IPS-AU is missing from the build server:
org.hl7.fhir.exceptions.FHIRException: The package 'hl7.fhir.au.ips' has no entry on the current build server
I think someone just needs to do an empty commit to get it back.
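(For reference, the usual way to do that, from a clone of the IG repo, is something like: git commit --allow-empty -m "trigger build" && git push - which kicks off the auto-build without changing any content.)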
done
What is the latest on the story of generating Java data models for profiles?
(I checked fhir-codegen but I see it has this issue)
( blast from the past https://github.com/jkiddo/hapi-fhir-profile-converter )
@Vadim Peretokin I might have a colleague that would like to pitch in some effort
To the MS codegen project
works, but there's some open issues with it
What are those?
don't remember :-(
@Grahame Grieve and this class here https://github.com/hapifhir/org.hl7.fhir.core/blob/master/org.hl7.fhir.r5/src/test/java/org/hl7/fhir/r5/profiles/PETests.java illustrates how it can be used, correct? It isn't wrapped in any executable or something like that already, right?
that tests out the underlying engine.
I don't think it tests out the generated code itself
This is a great start!
I've played around with the code generation and found the following issues, sorted by priority:
Would you like me to file them so we can keep track? Both @Jens Villadsen and I agree this is something worth developing further; perhaps we can get some community traction on this :)
This is gonna be a fun ride!
ca.uhn.fhir.model.api.annotation.* are used in the generated results.
6 missed a 'not'. But you explained it in 8, so nvm
@Vadim Peretokin how to reproduce #2?
I'll have a look again. On the road atm, so it'll be in a few days. Thanks for checking it out
@Grahame Grieve try generating something for e.g. https://hl7.dk/fhir/core/StructureDefinition-dk-core-gln-identifier.html
also these slices: https://hl7.dk/fhir/core/StructureDefinition-dk-core-patient-definitions.html#diff_Patient.identifier
hmm ... wait ... I'll share some code that can reproduce it ...
I thought I set up for the validator to do the code generation, but I can't see that now
where ?
I didn't do it
but what does this have to do with the validator?
it has all the knowledge etc, so it can do the code generation
java -jar validator.jar -codegen -ig x -ig y -profiles a,b,c -output {dir}
mmmkay ...
never tried that
it doesn't work now. Cause I never did it
lol
I'll most likely do some wrapping of it as well and put it somewhere public
but it will be a few days
why not put it in the validator where everyone can use it?
separation of concerns
also ... I'd like to be able to use whatever libraries I see fit
also ... I don't know the release cycle of the validator
rarely more than a week
but if the code produced fits into the validator then I'll gladly make a PR
it's the generation that goes in the validator, not the generated code
yes
("the code produced" -> the wrapping code that I'll be producing - not the generated code )
I'm also considering building it as a Maven plugin
@Vadim Peretokin
polymorphic types not supported
is that:
Attempt to get children for an element that doesn't have a single type
?
the next version of the validator will generate code on request:
--
The easiest way to generate code is to use the FHIR Validator, which can generate java classes for profiles. Parameters:
-codegen -version r4 -ig hl7.fhir.dk.core#3.2.0 -profiles http://hl7.dk/fhir/core/StructureDefinition/dk-core-gln-identifier,http://hl7.dk/fhir/core/StructureDefinition/dk-core-patient -output /Users/grahamegrieve/temp/codegen -package-name org.hl7.fhir.test
Parameter Documentation:
Options
-option {name}: a code generation option, one of:
narrative: generate code for the resource narrative (recommended: don't - leave that for the native resource level)
and it fixes a couple of those problems, though I have no doubt there's plenty more work to do
that looks funky
nvm .... didn't use the version property before. It's kinda odd though. Why isn't that property automatically set, since the PECodeGenerator is already package-specific?
it is now
it's getting there now -> https://github.com/jkiddo/espresso
So far it supports R4 and R5 (it automatically detects the version), by default selects all profiles, and it works as a Maven plugin
It supports IGs from the registries as well as any IG that has a publicly available package.tgz file
and local files as well ofc
let me know if you find other generation issues
you can't have '-' in the naming. ENTERED-IN-ERROR, // "Entered in Error" = http://hl7.org/fhir/observation-status#entered-in-error
With ClinicalUseDefinition being able to model most of the clinical particulars a drug database might have, it'd be nice to be able to package that knowledge up and share it through CRMI. This'd be simpler if the subject could be a CodeableReference, as it'd allow terminology to be used in place of a shared substance register (likely included in or depended on by the package). For example, we have some 1500 substances with some 30000 interactions across them. An enormous package either way, but the Substance resources don't really add much value, given that we'd still have to fall back to terminology in order to map between any local substance register and them.
I suppose this is a case of a global / local problem, where the clinical knowledge is authored against a global (canonical) substance, and is then used against a local substance. For example, in Finland, prescribing is done through a combination of ATC codes and a package identifier called VNR. These are mapped to a local substance identifier. Now, both EMA and the Finnish Medicines Agency Fimea are working on centralised knowledge bases focusing on the FHIR medication definition module. Neither seems to be directly tackling clinical particulars at the moment, leaving a need for third-party drug databases and CDS services around them. One such example would be interactions, as mentioned above. So far, our approach for bridging this gap has been focused on terminology; through ValueSet (of, say, ATC, RxNorm, and SNOMED CT codes) and/or ConceptMap resources for extracting the global substance from a local resource, like MedicationRequest. (As well as the administration routes, but that's a different discussion, and can be handled rather well with an extension on ClinicalUseDefinition.)
There was a previous discussion about using a code as the subject of a ClinicalUseDefinition being a direction people have been wanting to avoid. Am I missing some context or an obvious solution here? What's the alternative? A national substance register? A regional one, like the EMA SPOR SMS? A custom substance register for each drug database? A shared base CRMI package for canonical substances? Not trying to step on the toes of regulatory work and national knowledge bases, but from my point of view, terminology seems like an easier match for drug databases in the CDS context. Any previous work, ideas or discussions on the topic?
At the moment you could achieve this using a reference to a contained resource that just has a code.
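(For illustration only - a rough R5 sketch; the ids and SNOMED CT codes are purely illustrative. A contained Substance carrying nothing but a code, referenced from .subject, with the other interactant given as a CodeableConcept:)
{
  "resourceType": "ClinicalUseDefinition",
  "id": "interaction-example",
  "contained": [{
    "resourceType": "Substance",
    "id": "subst1",
    "instance": false,
    "code": { "concept": { "coding": [{ "system": "http://snomed.info/sct", "code": "372756006", "display": "Warfarin" }] } }
  }],
  "type": "interaction",
  "subject": [{ "reference": "#subst1" }],
  "interaction": {
    "interactant": [{
      "itemCodeableConcept": { "coding": [{ "system": "http://snomed.info/sct", "code": "387207008", "display": "Ibuprofen" }] }
    }]
  }
}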
Sure, but is that going against the intended usage of the resource? I'm trying to understand why a resource reference is preferred. Most of this experimentation we're working on stems from seeing ClinicalUseDefinition on the CRMI IG roadmap, and trying to figure out how we might provide clinical particulars in that form.
Would you be able to search by a code that is in a contained resource? Not entirely sure, but I think you can't.
Some (more) reasons why resource references are seen as preferred:
It's not the first time we're discussing this, and in between the current and previous discussions I've talked to other people struggling with the same problem: the resource looks like it would be great for an interaction catalogue, but usually we don't build them between resources, but between terminology concepts.
@Kari Heinonen, your arguments have a theoretical point. However, the resource allows CodeableConcept as the interactant. So... If you can have a concept from terminology as one interactant, why should it not be allowed for the other one (in subject)? Just to make searching more difficult?
So, I created a Jira ticket: https://jira.hl7.org/browse/FHIR-48630
Yes. But a) is it explicitly prohibited to "repeat" the .subject reference resource(s) as an .interactant using CodeableConcept ? And b) references form a graph i.e. .subject could list references to multiple "targets" (forming links that can be back tracked) of different types - something that is much harder to accomplish with CodeableConcept semantics for codings contained within.
Kari Heinonen said:
Yes. But a) is it explicitly prohibited to "repeat" the .subject reference resource(s) as an .interactant using CodeableConcept ? And b) references form a graph i.e. .subject could list references to multiple "targets" (forming links that can be back tracked) of different types - something that is much harder to accomplish with CodeableConcept semantics for codings contained within.
b)
There are many use cases where you'd find a reference is a better solution.
We're saying CodeableConcept should be allowed - CodeableReference would allow implementers to go with CodeableConcept OR Reference according to what they need to achieve.
a)
It's not explicitly prohibited but semantically it would make exactly zero sense. It would basically say the subject has an interaction with itself. :)
Is there any implementation / material available where I could better understand what kind of (typed) subject graphs are used in practice? Or could you maybe summarise some experiences? I can't really see what kinds of different resource types a particular ClinicalUseDefinition might be pointing to. Not an expert on that, though, so I may well be missing something.
Our work covering indications, contraindications, interactions, risks, and various warnings tends to all be authored against substances (and administration routes, which don't seem to belong in the Substance resource either). Now the usage side of things has the issue of whatever local prescribing system we're dealing with. So far, the best bet there has been terminology mapping, or using a ValueSet. (Edit: certainly, indications and contraindications have the Observation / Condition component to them as well, but those have been typically dealt with as terminologies too on our end.)
IMHO ClinicalUseDefinition is a part of much much bigger module that uses other Medication Definitional resources to form a self-standing graph/database. So not just Substances, it would e.g. have *Definition resources for both actual and abstract medical products. Each "product" resource instance having direct reference links (without searching as such) to backtrack to all relevant ClinicalUseDefinition instances.
An additional issue that came to mind :smile: Concerning directly identifying .subject using terminology instead of a reference - it happens when multiple codings are needed to achieve the necessary level of fidelity. Doing the search directly on ClinicalUseDefinition might not (?) be that straightforward in FHIR. Of course, sometimes this is actually desired (thinking ATC here), sometimes some "custom known corrections and post-search clean-ups" are needed. In a way, references "shift" this matching to happen on the "target resource" side, based on their properties, for better or worse.
AFAIK the bigger picture you outlined aligns with how the larger knowledge bases like EMA SPOR approach things. It is very reasonable, but there still seems to be a need to include third-party content for clinical particulars within that graph/database, if we are to have e.g. interaction checks, pharmacogenomics, etc. included. We'll need a way to publish compatible content, which can then be slotted into the graph. Hence the global/local problem I've been on about. We'd need to produce ClinicalUseDefinition resources (and other medication definition resources) that fit into the particular local knowledge base, preferably through CRMI. Maybe EMA SPOR streamlines this in the EU, and we might reference those as canonical. For a clinical knowledge author, it's a lot more manageable when the authoring can be done against a single canonical substance (or product) register. Now, in a perfect world, we'd get the definitional resources straight from the regulatory processes for the clinical particulars as well, but that's still a ways off, I'm afraid.
Perhaps it's still too early to see how things'll pan out. We're not really dead set on using terminologies, but we are very interested in trying to see if we could publish clinical particulars in a way that complements the existing (national) knowledge bases. In the short term, it's looking like terminology is the way.
I might be harassing :smile: you at this rather late hour with a solution where .subject contains, say, more product-related references, and .interactant is based on (global) terminology, where some component or combination of the .subject product might be the actual .interactant given by code. And then have these referenced .subject product parts as contained, potentially unsearchable, resources with minimal data content (mainly identifiers). That would keep the core of ClinicalUseDefinition searchable using terminology concepts, at the cost of making linking to the product side more arduous? Contained Med/Prod resources could possibly represent multiple local product registries (source identified by some property), if needed, keeping the actual ClinicalUseDefinition core knowledge content intact and purely terminology-based. Maybe? <Insert Big Disclaimer Here>
Yes. Every authorised product has a list of indications, contraindications and interactions, which in the future will hopefully be coded and distributed as ClinicalUseDefinition resources. That future is not just around the corner. It takes time.
However, even this would not help a clinician who is prescribing in a generic manner - not a product, but a substance or a virtual concept. The need for a more generic terminology-based decision support catalogue will remain.
For now, it would maybe be the easiest for you to use Medication resource - this would combine substance and route of administration. I do see the benefit of using a terminology with appropriate concept properties, though.
Going for a contained Medication, the searchability would have some limitations, but it should be workable (see this). Estonian attempt at the same thing can be seen here (it's a draft, don't trust too much).
Thank you for the discussion, the pointers, and for making the Jira ticket! I'll be sure to give each a proper review as we move forward. ClinicalUseDefinition is one of the more exciting FHIR resources in recent memory, particularly for us dealing with MDR here in EU.
You're welcome, except that I lied. Medication resource doesn't have route of administration :)
MedicationKnowledge or AdministrableProductDefinition would have ingredient + route but neither of those resources is allowed to be referenced in .subject :D
So needs more interlinked contained resources then :grinning_face_with_smiling_eyes: Plus there's the more serious issue of existing systems using FHIR R4, not R5, which has ClinicalUseDefinition among other potentially relevant definitional resources ...
Hmmm, looking into the ePI interaction FHIR IG for Vulcan at
http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-ClinicalUseDefinition-interaction-uv-epi.html
gives me a strong impression that .subject and .interactant are intended to be/allow concepts at different levels of fidelity, and having a CodeableReference might present some issues in enforcing that. The former has a description of "The medication, product, substance ..." and the latter talks about "The specific medication ... that interacts" or "The specific substance that interacts" for CodeableConcept. But maybe I'm just splitting semantic hairs with this.
For a definitional resource, there isn't really a way of including the specific interacting instance at authoring-time. A code can get us to a Ph. Eur. Monograph, a CAS number, or whatever level of specificity one might need for the global substance code. We'll likely start from SNOMED CT, and go from there.
We're looking at using an extension on ClinicalUseDefinition for the administration routes for now. We'll figure something out for interactions, where we need one for the interactant as well. For a CodeableReference, we'll have to evaluate whether it makes sense to include both the substance and the administration route or not. It'd certainly be useful if a particular ClinicalUseDefinition had multiple subjects. This isn't really the case for the knowledge we author, where each specific article is for a specific substance (or pair of substances). I suppose this is one of those places where we might be misusing the ClinicalUseDefinition resource if its intended use is to point at a variety of subjects from a single resource. If that's the case, I'd like to hear more about how that actually works. I suppose it'd be useful for condensing something like a long list of subject substances interacting with grapefruit.
Joonatan Vuorinen said:
For a definitional resource, there isn't really a way of including the specific interacting instance at authoring-time.
In that context "specific" does not necessarily mean the resource instance per se. What IMO the spec is trying to say is that .subject (which, by the way, seem to have cardinality of 0..*) for example could be a product definition having multiple Ingredients and then .interactant "names" those that are relevant either using reference (and annoyingly needs to follow multiple links to make the "connection" to Ingredient as it is not directly allowed either) or terminology.
Right, I get your point about specificity. Not so sure I understand the benefits of having both (re-)defined in a ClinicalUseDefinition if the interactant is more specific. I guess it'd make the graph more explicit, but also deconstructs information about the ingredients of a particular medicinal product into a different resource.
FHIR in general tends to do that sort of deconstructing a lot - and developers tend to push back either by adding numerous extensions to "bubble up" data elements from deeper layers of FHIR model and/or using contained resources :smile:
Interesting. Just noticed that according to the official spec, ClinicalUseDefinition does NOT define a standard SearchParameter for interactant.item[x] at all? I wonder if that is actually correct - this, of course, being something that can be remedied by a custom FHIR server implementation.
We could definitely improve the list of standard search parameters. I don't think we've had a lot of feedback on what people need to search on. In fact many standard servers allow custom search parameters, so in the short term it may not even need a custom server implementation.
Btw you can also often add a custom search parameter for contained resources. Searching contained resources is supported by FHIR but not well supported by current servers, I understand. But custom search parameters are well supported. The result is that in practice you can usually search into contained resources just fine. And of course an alternative to contained resources is "uncontained" (normal) resources. They work fine right now, but are just a little verbose.
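(To make the custom-search-parameter idea concrete - an illustrative sketch only; the url, name, and code are made up, and the FHIRPath expression may need tuning for a given server. It targets the interactant.item[x] gap noted above:)
{
  "resourceType": "SearchParameter",
  "id": "clinicalusedefinition-interactant-code",
  "url": "http://example.org/fhir/SearchParameter/clinicalusedefinition-interactant-code",
  "name": "InteractantCode",
  "status": "draft",
  "description": "Search ClinicalUseDefinition by coded interactant (illustrative custom parameter)",
  "code": "interactant-code",
  "base": ["ClinicalUseDefinition"],
  "type": "token",
  "expression": "ClinicalUseDefinition.interaction.interactant.item.ofType(CodeableConcept)"
}
(A client could then search with something like GET [base]/ClinicalUseDefinition?interactant-code=http://snomed.info/sct|387207008)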
The downside of changing subject from reference to CodeableReference is that it will break every current implementation.
At the moment, afaik, the Medication Definition resources can go from R5 to R6 with no breaking changes. This would be the thing that prevents that. Something to consider. Our implementers may not thank us. We are not in a position where breaking changes are not permitted, but we do have to consider all factors.
I don't think it's reasonable to refrain from changes in FMM2 resources out of fear of breaking implementations that are not even there yet. You are only starting to hear feedback from people who are implementing those resources.
CodeableReference only became available in R5, so with your logic, it should have never been used almost anywhere. But it is used, even on MedicationRequest and Procedure, which have way more implementers than ClinicalUseDefinition.
Obviously we can't expect every change request to be approved, but a maturity level 2 resource should not avoid breaking changes if they bring value to new implementers.
Just thinking aloud to double-check my understanding, please bear with me :smile: And shoot me cruelly down if needed.
Currently the impact is that 100% "out-of-the-box" searching to find interactions for given product(s) etc. would be based solely on the resources referenced by the .subject list. Soooo - would that mean that, for example, to represent an interaction between,
it needs to
a) have at least 5 outgoing links in .subject to cover search both by product and by substance, plus
b) "duplicate" the two actually interacting substances (likely using a code) as .interactants
c) potentially (i.e. without normal resources to reference) have a rather large bunch (depending on whether Medication or MedicinalProductDefinition etc. is used as the search anchor in .subject) of contained, interlinked resources repeated in each ClinicalUseDefinition - provided that support exists in the server in the first place
IMHO that is a rather complex structure to maintain, at least at the scale of a comprehensive FHIR interaction knowledge base.
The one thing we would absolutely want to avoid is coupling the publishing process of an interaction catalogue with all of the local knowledge bases (read: substance & product registers) it interfaces with. Substances tend to have a publishing cadence of roughly up to four times a year. Local registers update much more frequently; as I understand it, the biweekly cadence of the Finnish basic register is on the slower side of things. If we had to map each of the products as subjects for each ClinicalUseDefinition, it would be a no-go for now. Us producing our own generic substance register and distributing that as part of, say, an interaction catalogue is a workaround that might work fine -- but it still disconnects the references made by this third-party knowledge about clinical particulars from the actual graph a particular local knowledge base has built for itself.
(Edit: This is especially important for CDS services operating under MDR, where it just isn't possible to publish biweekly; much less daily. Quarterly would work fine, if we can defer the local mappings to terminologies that we can handle separately.)
@Kari Heinonen re a) Why would you use subjects of substances when you already have that substance in the product ingredient? Products can already be linked to their ingredient substances.
@Joonatan Vuorinen are you saying you only want to map to substances and not products? That is allowed. No one is forcing you to use products I think.
Rik Smithies said:
re a) Why would you use subjects of substances when you already have that substance in the product ingredient. Products can already linked to their ingredient substances.
Because depending on where you start on the product side there are multiple and, more importantly, different "hoops" to go through to associate product and substance - some use Substance directly, some go the Ingredient/SubstanceDefinition route; some references, so to say, "go this way and some that way" in relation to where one wants the search to be directed. That is not particularly nice from a developer perspective.
@Rik Smithies Yes. If we were doing this with terminologies, we'd have, say SNOMED CT 386963006 as a subject, which is the Ph. Eur. Monograph 2769 and the INN 6539. We've had success automatically mapping from the Finnish basic register to these substances through the product identifiers or ATC codes used in prescribing today.
If there eventually is some substance register that does this, and to which we can point a canonical reference, it'd work for us. AFAIK, EMA SPOR SMS is aiming to be that in the EU, but I haven't found how that would work in practice. Then there's the relation to a local knowledge base, such as the one Fimea is working on in Finland. Would we still reference SMS substances, or something else?
Granted, these are early days, and I'm sure there are still many things to see through. Just trying to give context around how terminology would make this easier for us at present.
EMA SPOR SMS is available and I'm quite sure that FIMEA and KELA both have mappings to it, so in a way it makes sense to use it.
However, it's still a code system, and more difficult to use than SNOMED CT as it's still quite raw.
@Rik Smithies Also, given the critical nature of the information itself, a controlled amount of data redundancy might not be a bad thing: to be able to check that the .interactant substance matches one of the referenced substances, that a referenced substance does in fact belong to one of the referenced products, etc. Simply to catch obvious errors that are otherwise buried much deeper within the FHIR model.
And then there's the case of an interacting Substance that is not an ingredient of any medicinal product - either the product does not currently belong to a (national) product registry, or it is out of scope of medicinal products altogether. I seem to recall there being some quite well-known cases of that - and if .interactants cannot be searched directly based on terminology, which usually does have these ...
@Kari Heinonen the models allow different levels of detail in some places e.g. just a code for an ingredient or a reference to a bigger structure. But a given implementation would not generally use both methods. So it is uncommon to have to write code for two methods and you don't tend to get "go this way and some that way" in actual use.
I don't personally like the idea of de-normalizing the data to avoid a somewhat complicated search. That duplicated data may itself be confusing for clients. If that is your choice you are free to do so, but you would also have to accept the consequences of making the data larger, redundant, and more complex to maintain.
You could always create a custom operation to make searching easier.
That does not cover the use case of @Joonatan Vuorinen where the "operational medicinal product database" is NOT governed by the same organization supplying the interaction knowledge, correct? So ClinicalUseDefinition would have only a rather limited idea of what level/path between medicinal resources is used in the environment it is integrated into. Hence the preference for using de-normalization or a "lowest common denominator" for any medicinal registry.
I suppose if you are pointing at data that in effect has different implementations within it, then yes you will need to allow for that.
I would imagine you would know what data you were pointing at beforehand, and would notice if the implementation changed (anything could change in theory - they may start using contained resources one day ;-) ).
But ultimately yes there is some flexibility in the method/level of detail that all FHIR resources capture (e.g. dumb example, but someone's name may be found via reference.reference or in reference.display, so you need to code for both in theory).
So, unless you are able to constrain one of the versions out, then you will need to accommodate both. If it is too hard for your clients you could add an operation that does the hard work behind the scenes (and still allow more sophisticated clients to do it the "vanilla" way).
I would not clone the data to make that work. There is no need. But if you want a different workaround then feel free.
The longer this topic has gotten, the more I feel this is a relevant question:
Is the medication definition module intended to be used such that a ClinicalUseDefinition implements a third-party drug database for decision support? I.e. is it fundamentally designed to be such that a single (national) implementer builds the graph and that's it -- or is it supposed to support a scenario where multiple clinical content providers add to that graph?
Looking at the current documentation, I get the impression that what is marked as "prescribing support" is exactly the kind of thing I've been trying to describe. Is that the case, or have I mistaken the intended purpose? If I am mistaken, then perhaps some rewording of the docs might help avoid further confusion, especially around the CRMI roadmap, which otherwise aligns with the kind of knowledge we author around clinical reasoning.
We have no real trouble with using a Substance for now, whether it be a contained resource or not. We could reference some canonical substance, if and when such a register exists and is accessible both to us and the prescribing system (e.g. an EHR). Sounds like EMA SPOR SMS is just an export for non-regulatory actors. What to reference, then, if we'd like to make a ClinicalUseDefinition for SMS_ID 100000092656
(which is the example INN I used earlier)? The actual substance we are pointing to is known down to the molecular structure at authoring time. How a particular local system decides to build their own registers isn't. I cannot read between the lines whether we are doing something that is simply wrong.
We're not trying to be difficult here, and if it turns out that what we're trying to do is fundamentally incompatible with the module, then we'll find another way of modelling CDS for medications, and leave the FHIR integration to something like a CDS Hooks API.
It is for all such uses. Basically any situation where you need to talk about indications, interactions etc. Resources are data oriented. Whenever you have that data, in any setting or architecture, you use that resource. We don't always predict or document all use cases, but that doesn't mean it is not appropriate.
At this point I'll play the "That's a good question" card :big_smile: and see myself out. Maybe the obvious should also be noted: resource ids essential to creating references are server dependent (and I don't think we are going to have canonicals for all substances, medications etc.). Sorry for dragging this discussion on.
So the 3rd party CDS content package needs to be somehow "imported" into the prescribing system UNLESS the ids that the ClinicalUseDefinition uses in references come from that same system (so that CDS content can be directly POSTed to the server endpoint). Which kinda speaks for an architecture of a central medication database with a real-time API for EHRs instead of "publishing and distributing CDS content" separately. Alternatively the CDS content package must contain a big and detailed enough fragment of its own relevant FHIR resource graph to allow the client to figure out the mapping.
Right but there is nothing to stop you using the id (and server address) of a resource on another server, pre-allocated by that server. Obviously you need to know what that id is, before referencing it. But that applies to clients making references to resources on your "own" server also. Naturally if your system involves several servers you will need some assurances that the resources are going to stick around, but that is a business level problem.
Here are two examples, one that Rutt provided earlier, and one from Finland. These are the kinds of "local registers" we want to interface with:
Both of these are drafts of the Medication resources that a prescribing system and a national medication list would use. You might say that these are the resources that a ClinicalUseDefinition should refer to as a subject, but I'd like to underline that for any medical device under MDR, the lead time from a notified body alone is easily over a month. As such, there's no chance we could have a satisfactory publishing cadence with these resources being the subject. The authored knowledge is tied to substances, and for any region using Ph. Eur., those change three times a year. INN lists have similar cadences, as well. Local product registers might change daily. There's a massive discrepancy, regardless of how efficient we might try to be. Sure, this could be a moot point, if we could somehow determine that CDS based on these resources isn't a medical device -- but I'd rather not dive into that ditch here.
Neither example has a Substance resource defined at all. The Estonian example points to a separate CodeSystem containing the substances, and the Finnish example seems to just go by ATC. As such, there is no substance register (i.e. FHIR server accessible to both us and the prescribing system) to reference.
It is possible for us to map from either of these resources to our own Substance definition, i.e. to a SNOMED CT code, or some other precise definition. For the Estonian example, it's a mapping from this CodeSystem, likely with a ConceptMap. For the Finnish example, it's a mapping from either the ATC code or the product identifier (VNR). The product identifier can be mapped to an internal substance id (like the Estonian one) from a separate database export that Fimea publishes. From there, we can build similar ConceptMap resources. With terminology, we can integrate with both local systems today.
Without a CodeableReference, we'll have to publish our own Substance register as a part of our drug databases. It isn't a huge deal, but it really is just an extra step with little added value. Every clinical content provider has to provide their own, given that there is no incentive to share these canonical substance registers. And even if we do share, the connection to any local register still happens through terminology or some external id, losing the nice property of being able to navigate the graph through references.
Furthermore, EMA SPOR SMS doesn't seem to provide a FHIR server that non-regulatory actors (i.e. clinical content providers and EHRs) can access, and that we could then point canonical Substance references to. The Finnish medication knowledge base (lääketietovaranto, described in Finnish here) doesn't directly mention a FHIR server for substances either. It may well end up such that both provide CSV and/or XML exports, like they do today, even when the project is completed in ~2026 or onwards. This leaves us with terminologies in that case as well.
I hope I could illustrate the benefits of using a CodeableReference as the subject. I get that it's a tradeoff, and the negative side of the breaking change is something that has to be weighed, as well.
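As a rough illustration of the ConceptMap approach mentioned above, here is a minimal sketch (R5 shape; the local CodeSystem URL and source code are made up, and the SNOMED CT target is only an illustrative substance code):
{
  "resourceType": "ConceptMap",
  "id": "local-substance-to-sct",
  "url": "http://example.org/fhir/ConceptMap/local-substance-to-sct",
  "status": "draft",
  "group": [
    {
      "source": "http://example.org/fhir/CodeSystem/local-substances",
      "target": "http://snomed.info/sct",
      "element": [
        {
          "code": "LOCAL-12345",
          "target": [
            {
              "code": "387517004",
              "display": "Paracetamol",
              "relationship": "equivalent"
            }
          ]
        }
      ]
    }
  ]
}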
hi Joonatan
Thanks. It is easy to see that someone may want to refer to external content that doesn't have a Substance resource defined.
Currently, in R5 (which is likely to be the only version with software support for a couple more years, and so will represent a significant amount of implementations), you would need to create a "dummy" resource to be able to reference this.
This is verbose, but since no user ever sees it, and data is large and verbose anyway, it doesn't seem to matter all that much. The resource can be contained, or not. There isn't much advantage in using contained. It may look slightly neater, but how neat things look isn't all that important. I would really not call this "making your own substance catalogue". It's just some plumbing - a shim. The data is full of such connections.
The advantage of this approach is that it works now and all references will be the same (in your system and in others that do define their own substances). Searching will work just fine, out of the box.
I can see that using a direct "code reference" has some advantages, but it won't be practical for a couple of years probably, because of the timeline for R6, and the fact that software support tends to have a significant time lag after a version of FHIR is developed. Any solution based on this code reference would likely never be able to exchange these resources with systems that exist or are in development now (until the R5 system was updated - and if R6 has no big functional advantage, the effort, including migrating all the existing data, would be unlikely to ever happen, imho).
Consequently a single, well intentioned, change like this may be responsible for splitting the world's data into two incompatible versions.
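For readers following along, here is a minimal sketch of the contained-Substance "shim" described above, assuming the R5 shapes of ClinicalUseDefinition and Substance; the SMS code system URI is a placeholder and should be replaced with whatever identifier system you actually use:
{
  "resourceType": "ClinicalUseDefinition",
  "id": "interaction-example",
  "contained": [
    {
      "resourceType": "Substance",
      "id": "sub",
      "instance": false,
      "code": {
        "concept": {
          "coding": [
            {
              "system": "https://spor.ema.europa.eu/sms",
              "code": "100000092656"
            }
          ]
        }
      }
    }
  ],
  "type": "interaction",
  "subject": [
    {
      "reference": "#sub"
    }
  ]
}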
Thank you all for the discussion once more. I don't really have much more to add. For now, we will extract our own canonical Substance definitions into a separate FHIR package, which will then be a dependency for all of our drug databases containing ClinicalUseDefinition resources.
I am trying to make a minimal example using observation-based population. I created a patient (got id=2), and then created the observation and questionnaire (got id=6) as follows, and then sent a GET request to [base url]/Questionnaire/6/$populate?subject=2. I get a QuestionnaireResponse back, however the item did not include an answer field. Any help is appreciated @Brenin Rhodes and others!
patient
{
"resourceType": "Patient",
"meta": {
"profile": [
"http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient|7.0.0"
]
},
"name": [
{
"family": "Shaw",
"given": [
"Amy"
]
}
]
}
observation
{
"resourceType": "Observation",
"id": "f002",
"status": "final",
"code": {
"coding": [
{
"system": "http://loinc.org",
"code": "21112-8",
"display": "Birth Date"
}
]
},
"subject": {
"reference": "Patient/2"
},
"effectivePeriod": {
"start": "2013-04-02T10:30:10+01:00",
"end": "2013-04-05T10:30:10+01:00"
},
"issued": "2013-04-03T15:30:10+01:00",
"valueDateTime": "1970-01-01"
}
questionnaire
{
"resourceType": "Questionnaire",
"id": "questionnaire-sdc-no_obs_window",
"meta": {
"profile": [
"http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-pop-obsn"
]
},
"url": "http://hl7.org/fhir/uv/sdc/Questionnaire/questionnaire-sdc-profile-example-ussg-fht",
"version": "1.0.0",
"name": "questionnaire-sdc-no_obs_window",
"title": "Questionnaire with no Observation Window",
"status": "active",
"subjectType": [
"Patient"
],
"item": [
{
"linkId": "1",
"definition": "http://loinc.org/fhir/DataElement/21112-8",
"code": [
{
"system": "http://loinc.org",
"code": "21112-8"
}
],
"text": "Date of Birth",
"type": "dateTime"
}
]
}
got
questionnaireResponse
{
"resourceType": "QuestionnaireResponse",
"id": "6-2",
"contained": [
{
"resourceType": "Questionnaire",
"id": "6",
"meta": {
"source": "#KTMtHdFc0t1W48aK",
"profile": [
"http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-pop-obsn"
]
},
"url": "http://hl7.org/fhir/uv/sdc/Questionnaire/questionnaire-sdc-profile-example-ussg-fht",
"version": "1.0.0",
"name": "questionnaire-sdc-no_obs_window",
"title": "Questionnaire with no Observation Window",
"status": "active",
"subjectType": [
"Patient"
],
"item": [
{
"linkId": "1",
"definition": "http://loinc.org/fhir/DataElement/21112-8",
"code": [
{
"system": "http://loinc.org",
"code": "21112-8"
}
],
"text": "Date of Birth",
"type": "dateTime"
}
]
}
],
"extension": [
{
"url": "http://hl7.org/fhir/us/davinci-dtr/StructureDefinition/dtr-questionnaireresponse-questionnaire",
"valueReference": {
"reference": "#6"
}
}
],
"questionnaire": "http://hl7.org/fhir/uv/sdc/Questionnaire/questionnaire-sdc-profile-example-ussg-fht",
"status": "in-progress",
"subject": {
"reference": "Patient/2"
},
"item": [
{
"linkId": "1",
"definition": "http://loinc.org/fhir/DataElement/21112-8",
"text": "Date of Birth"
}
]
}
The Clinical Reasoning module implementation of $populate available in the JPA Starter Server only supports Expression based population.
super thanks @Brenin Rhodes ! Is there any implementation of this or StructureMap-based population that you know of, since HAPI doesn't support it?
I think the csiro renderer can do it too.
And my server has support too.
Your questionnaire definition is missing the extension at the root that indicates observation-based pre-population should be performed.
@Brian Postlethwaite besides the following, what extension do I need? thanks!
"meta": {
"profile": [
"http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-pop-obsn"
]
}
You need the one that indicates the period of observations to scan.
(I need to look up what it is)
Observation LinkPeriod
thanks! good to know, I thought it was optional
https://build.fhir.org/ig/HL7/sdc/populate.html#observation-based-population
Bullet point 3. is the section to refer to in the spec to see that.
thanks @Brian Postlethwaite ! Just to learn more, is there any reason for not making this a required field, given that it does not work without it?
That's actually a really good idea, can you log that as a change request?
would love to. I am really new to FHIR, could you let me know how to do that
Excellent - at the bottom of the spec page you referenced above there's a link to propose a change; follow that and then fill out the fields.
If you don't have a user account in jira there, I believe there's a link to create a new account. (@Lloyd McKenzie )
Then you can post the jira issue number back here so we can follow it up easily.
thanks! will do
If you have any troubles, let us know and we can assist.
the jira ticket created
Thanks.
Brian Postlethwaite said:
That's actually a really good idea, can you log that as a change request?
If you make observationLinkPeriod required in Populatable Questionnaire, then wouldn't it be required to have the extension on every item in the Questionnaire?
We could require it with an invariant that allows the value at the root to inherit
You're right, I've misread things.
But I do think that should be at the root.
There should be at least 1 in the questionnaire.
Lloyd McKenzie said:
We could require it with an invariant that allows the value at the root to inherit
In the Questionnaires where I've used observationLinkPeriod, I only wanted a couple of questions to be prepopulated, not all of the questions in the Questionnaire.
Do you need a linkPeriod if you're happy regardless of how old the data is?
Lloyd McKenzie said:
Do you need a linkPeriod if you're happy regardless of how old the data is?
Yes. observationLinkPeriod is the signal that you want Observation-based prepopulation to occur for the item.
So my proposal for tomorrow would be to include an invariant requiring that extension somewhere, and also permit it at the root of the questionnaire (which then applies to any item)
What would the convention be for saying "I don't care about the period"?
99years?
There are at least some patients for whom that's not old enough.
I hope to be one of them (eventually)...
Lloyd McKenzie said:
There are at least some patients for whom that's not old enough.
Which is I why I use 100 years, not just 99. :-) You could always pick 1000 years.
Re-reading the spec, it already states:
For observations where how recent they are does not matter (e.g. blood type), simply set the duration to a long period of time - e.g. 200 years.
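Putting the thread together, the item from the Questionnaire above would look roughly like this with observation-based pre-population enabled - a sketch assuming the SDC observationLinkPeriod extension (valueDuration) and using the spec's suggested long period; check the exact extension URL against the current SDC build:
"item": [
  {
    "extension": [
      {
        "url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-observationLinkPeriod",
        "valueDuration": {
          "value": 200,
          "unit": "years",
          "system": "http://unitsofmeasure.org",
          "code": "a"
        }
      }
    ],
    "linkId": "1",
    "code": [
      {
        "system": "http://loinc.org",
        "code": "21112-8"
      }
    ],
    "text": "Date of Birth",
    "type": "dateTime"
  }
]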
Canada Health Infoway is preparing an analysis of the different architectural approaches to managing language translations in FHIR, including making recommendations around best practices for use in Canada. We plan to share the document in case others find it useful.
As part of that work, we're interested in learning what approaches other jurisdictions have taken and what's worked well/not so well with those approaches. The project is particularly focused on language support for terminology, but information about language support for non-coded elements is also welcome. Specific questions include:
If you don't feel comfortable sharing your comments publicly, private messages are also welcome.
@Giorgio Cangioli @Oliver Egger @Rutt Lindström @João Almeida
We have a central national clinical data repository and central terminology management. A lot of the terminology is coded locally and only available in Estonian. Of course, we use international terminologies as well, and they are always translated into Estonian. All data recorded in the national EHR repository must be available in Estonian.
Are display names sent in instances,
Yes
or are clients and servers expected to look up displays from terminologies?
They could, but not sure they have the habit.
If display names are included in instances, does the client control what they are, and if so, how do they do so?
Display names and additional designations are published in central value sets. These have been published as csv files so far, but are now also available in FHIR format (not in production yet, but nearly there).
Are systems expected to store and retain display names they receive?
The source of truth is the central repository. Local processing of data is not under our control.
Who is responsible for maintaining language translations for codes for different code systems?
This differs by code systems. Normally, the translations made by the original publisher are used by other systems as well.
Are translations maintained in FHIR, and if so, do you use ValueSet, CodeSystem or CodeSystem supplements to do so?
In FHIR, the translations are given in CodeSystems and CodeSystem supplements. Supplements are used when a) the CodeSystem is maintained outside our reach (like HL7 code systems); b) when a ValueSet needs to use translations that would not be acceptable in the CodeSystem itself (e.g. extreme abbreviations of terms of SNOMED CT concepts). All ValueSets use Estonian display names.
I find it cumbersome to use supplements, I would prefer to have translations in the original CS or straight in the VS. We try to avoid supplements, but we still have plenty. :)
Any other insights/advice?
I'm not sure what it means to "maintain translations in FHIR", so maybe my answers went offtopic a bit.
We have CodeSystems where the translations are authored/maintained in a dedicated system and FHIR is just used for publishing. But we also have a basic tool for authoring/maintaining specifically FHIR terminology, and it also supports multiple languages. It's designed so that we can use it for creating FHIR resources from existing non-FHIR sources, but also actively maintain existing FHIR terminology resources.
Happy to give a demo of our free-range CSV to FHIR migration work or the new authoring tool if anyone's interested. Or explain any other aspects further.
What I really like is the possibility in R5 to define default properties to be included in the ValueSet expansion.
And what I really miss is the similar option for designations. I would like to include certain types of designations in the default expansion, but those types would be different for every value set, so I would like to have it configurable like I can do it with properties.
I think you can
In that case, let me rephrase my issue: "I wish I was smarter".
I don't want to hijack the thread with my issue though.
Let me split my answers in two: how this is done (or planned to be done) in the European cross-border services (MyHealth@EU), and some personal considerations.
Please consider that the MyHealth@EU services are based on some specific assumptions that don't apply to all cases.
Are display names sent in instances, or are clients and servers expected to look up displays from terminologies?
MyH@EU display names are recorded in the exchanged instances. During the exchange the used designations are checked against the information maintained in a central terminology server.
No additional expectations on the receiver side.
my opinion: as said, MyHealth@EU is a very special case. IMO, in general, display names should be recorded in the instances, allowing the receiver to look up designations against terminology services if able to do so.
If display names are included in instances, does the client control what they are, and if so, how do they do so?
MyH@EU: no specific expectations.
my opinion: The behavior should be flexible, because it strongly depends on the context of use and on the capabilities of the receivers. A simple display tool may rely on the information included in the instances; a receiving system operating in a mature infrastructure integrated with terminology services may take advantage of the capabilities offered by this context.
Are systems expected to store and retain display names they receive?
MyH@EU: no expectations beyond the capability of displaying the content by using the translated information included in the exchanged instance (with one exception)
my opinion: if provided, the original designations should be somehow kept
Who is responsible for maintaining language translations for codes for different code systems?
MyH@EU: EU Member States
my opinion: depends on the context
Are translations maintained in FHIR, and if so, do you use ValueSet, CodeSystem or CodeSystem supplements to do so?
MyH@EU: not for the time being, even though the terminology server used supports the concepts of ValueSet and CodeSystem. The plan is to provide FHIR support in the near future.
my opinion: this is what I expect should happen
Any other insights/advice?
Enhance somehow the capability of instances to support different kinds of designations, because real systems are not always connected to terminology services.
In Belgium, for the moment, there are no carved-in-stone rules for the use of different languages. However, there is a real concern to have the "original at the time of entry" description available, because that is considered to be the legally valid one if problems arise. So next to the current "display" for a code, which has no real status linked to it and can be in any language, people here are requesting a field that contains the description "as it was at the time of the data entry" and is also marked as such.
.display is supposed to be the "defined by the dictionary" value. Shouldn't the CodeableConcept.text represent "as it was at time of entry"?
Lloyd McKenzie said:
for :flag_switzerland:
Are display names sent in instances,
yes
or are clients and servers expected to look up displays from terminologies?
yes if they want to translate to other languages
If display names are included in instances, does the client control what they are, and if so, how do they do so?
the display names for the different languages are provided, so the client should not control, however currently we raise only a warning during validation
Are systems expected to store and retain display names they receive?
For the Swiss EPR yes (implicitly), otherwise there are no requirements up till now
Who is responsible for maintaining language translations for codes for different code systems?
eHealth Suisse, a government organization which is responsible for the translation of SNOMED CT and of CodeSystems / ValueSets for the national patient record
Are translations maintained in FHIR, and if so, do you use ValueSet, CodeSystem or CodeSystem supplements to do so?
For SNOMED CT, eHealth Suisse works with other countries on translations; for other CodeSystems (or until they are integrated with SNOMED CT) we use CodeSystem supplements and ValueSets
Daniel Venton said:
.display is supposed to be the "defined by the dictionary" value. Shouldn't the CodeableConcept.text represent "as it was at time of entry"?
Well, maybe, but if I read the definition of ".text", it captures two things:
1) the original text at the point of data entry
2) some specification if the chosen entry does not fully cover the intent of the practitioner.
This might be difficult if you have both needs at the same time. Concatenation of both could be a solution, but it is highly inelegant. A separate field for the original text at the point of data entry (maybe by means of an extension) would be preferable.
why would you need both at the same time? How would you know what the intent is if it's not the text entered by the practitioner? (If there's no text, then you get to ask that question)
One case is: a valueset with an "other"-like value: there you would want to specify which choice you made (in fact the color was 'blue'), and you would want to know what the original phrasing was in the original language. There are subtle differences between "neither" and "none" as "other"-like values in English as a source language. Or for the discovery of false encodings: e.g. in French the difference between "variole" and "varicelle" is small when choosing from a drop-down list. If there is an additional spec for what is exactly affected, you have the combination of both items. (I believe the same is true in English for smallpox and chickenpox.) These types of mistakes are currently being discussed in the SNOMED on FHIR workgroup.
CodeableConcept.text is what the user saw or typed when the concept was selected. It might be one of the formal displays for one of the codings, or it might be something else.
OK, that means that if you need both
1) the original text at the point of data entry
2) some specification if the chosen entry does not fully cover the intent of the practitioner.
you have to concatenate the two values. If that is what the standard says, so be it.
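To make the display-versus-text distinction concrete with the variole/varicelle example above, a rough sketch of a CodeableConcept (the SNOMED CT code is shown for illustration only): coding.display carries the terminology-defined display, while text carries whatever the practitioner actually saw or typed at data entry.
{
  "coding": [
    {
      "system": "http://snomed.info/sct",
      "code": "38907003",
      "display": "Varicella"
    }
  ],
  "text": "varicelle"
}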
In France, the situation is very close to the Estonian description, thanks @Rutt Lindström :)
I have a related question: in a local profile (in another language than en), when we want to refer to an international CS/VS (as a whole), what is the best way to keep these international CS/VS while adding local displays?
For example, say we want to add a local display to maritalStatus: code M = marié.
We can :
A lot of CS/VS are in that configuration. It could be very efficient to reduce the total number of resources describing the same concepts by adding the localized displays, imho.
The terminology validation rules state that the only display values allowed in an instance are those defined in the original code system or those defined in a CodeSystem supplement that's available to the validator. Displays introduced in a value set can't be used in an instance.
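As a rough illustration of that rule applied to the maritalStatus example above (code M = marié), a CodeSystem supplement could carry the French designation - a sketch assuming the v3 MaritalStatus code system; the supplement's own url is made up:
{
  "resourceType": "CodeSystem",
  "id": "marital-status-supplement-fr",
  "url": "http://example.org/fhir/CodeSystem/marital-status-supplement-fr",
  "status": "active",
  "content": "supplement",
  "supplements": "http://terminology.hl7.org/CodeSystem/v3-MaritalStatus",
  "concept": [
    {
      "code": "M",
      "designation": [
        {
          "language": "fr",
          "value": "marié"
        }
      ]
    }
  ]
}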
Thanks @Lloyd McKenzie for the supplement point that I forgot.
Let me reformulate my question concerning the best practice for having localized CS/VS (I suppose this is a question in many countries).
If in a local profile we refer to an international CS/VS (say v2-002 for the example) but need the localized displays, we have 2 main options :
Is there a best place to put the translated codes?
Another point, if only 1 local language is required: we can set the translated terms in the display of each code and use the global language attribute, or add a designation on each code.
Is there a best practice?
for me, best practice is :
Lloyd McKenzie said:
Canada Health Infoway is preparing an analysis of the different architectural approaches to managing language translations in FHIR, including making recommendations around best practices for use in Canada. We plan to share the document in case others find it useful.
@Lloyd McKenzie, was this analysis you mentioned ever published or shared by Infoway? I did a bit of googling but haven't been able to find anything. I'd be interested to read it if it's available somewhere!
Thanks for the poke @Rob Nickerson. The final 'pretty' version isn't posted anywhere that I'm aware of, but the near-final content can be found here: https://docs.google.com/document/d/12Qb2Eu5dNk85TspFjgLSVxiY3Ygzejpt36ODnJt07NE
Much appreciated @Lloyd McKenzie , this looks really helpful!
this is not quite true:
The ValueSet resource contains a similar concept.designation element to that found in CodeSystem and it serves a similar purpose. However, while CodeSystem can define designations in various languages, ValueSet only exposes those already defined. It cannot introduce new designations.
"It cannot introduce new designations allowed to appear in instances" I guess would be the correct framing?
Hello everyone,
I have a few questions around Section 2.2.2.3, which relates to finer-grained resource constraints using search parameters.
patient/Observation.rs?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory
Should we consider it to be:
patient/Observation.r
patient/Observation.s?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory
Or this:
patient/Observation.r?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory
patient/Observation.s?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory
patient/Observation.rs?category=laboratory
is this a valid scope (with just a code and no system)? Or should we only support code values when requested with a full system|code url like
patient/Observation.rs?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory
These constraints can be used with any combination of cruds. They limit the subset of resources on which these interactions can be invoked.
Thanks Josh for clarifying that.
That makes this scope
patient/Observation.rs?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory
as:
patient/Observation.r?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory
patient/Observation.s?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory
Is this correct?
That's right. Scopes can be combined or exploded on "cruds" interactions when everything else is identical.
Understood. Also, it would be great if you could check the second question as well and clarify it.
One more thing,
When requested with scope
patient/Observation.rs?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory
If Observation 1 is of category laboratory and Observation 2 is of category exam
In this case, GET Observation/1 should give a valid response whereas GET Observation/2 will get a Not Authorized (invalid request) kind of response. Is this correct?
Basically, yes. We don't mandate what kind of error response the server returns, but generally it is good practice not to reveal information about whether a resource exists if the caller is not authorized to see it, so a 404 could be a good recommendation here.
Hello @Josh Mandel. I'd like to raise these important concerns about applying FHIR search parameter logic within SMART scopes for create, update, and delete operations and highlight significant considerations regarding FHIR server performance, resource validation, and the practical implementation of access control mechanisms.
I understand the requirement to limit the searched resources for some categories of Observations or Conditions, because they are very broad and contextual resources. As you and @Dan Cinnamon mentioned here (#smart > SMART 2 Fine-Grained Scopes with search parameters), this shouldn't be overstated.
However, following this approach from the FHIR Server perspective to process deletion — for example, [base url]/Observation/[id] with the scope [user|patient|system]/Observation.d?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory — the system needs to perform the following actions under the hood:
So, we always double the actions on the FHIR server side.
Another issue I see is using the same scope parameter for the "c" (create) scope. To achieve this we need to move resource validation from the FHIR API to precede scope evaluation, which is what we call FHIR profile validation. Another issue encountered here is: who will guarantee the consistency of the combination of scope parameters and its values? The FHIR specification defines what the FHIR system should do if search parameters are incorrect, but not for some string linked with & applying to create resources.
I'd like to ask @Grahame Grieve and @Lloyd McKenzie : What are your thoughts about applying search parameters logic and the FHIR search specification as SMART scope parameters for create, delete, or update operations?
Hmm, are you proposing a change to the grammar or just calling out that servers may only want to support a subset of the scope language that makes sense in their context of use?
My interpretation of the above-mentioned syntax is that basically all requests end up being conditionally executed :thinking:
Serverside that is
That would be a pretty big bummer to do dynamically ...
Because you need human-friendly language, icons etc., I would expect the mapping between the SMARTv2 scopes and the set of criteria you use to share data under that scope would happen at dev time, not at run time. I can't imagine how you could possibly create a user-friendly authorization page based on fully dynamic SMARTv2 scopes.
Who determines what user friendly is?
Just visiting any slightly popular website these days users need to navigate in all kinds of weird cookie T&C's about what they are willing to share and how they are tracked (or not)
That didn't really seem to be an issue even though the user experience is totally tanked
What I mean is that when you have a user making a decision to approve a scope or not, you'd want this:
Vital Signs - Information about basic diagnostic measurements including heart rate, temperature, and blood pressure, such as the type, status, date, and value of the measurement and any associated comments by your care provider.
not this:
Observation.rs?category=vitalsigns
This (scope descriptions and user-facing language) is a side issue, but FWIW I think there are reasonable and scalable ways to present user-friendly authz screens from standardized scope language expressions. These approaches involve some work, but you can have a base of "common" scopes covering your 80%, then generate algorithmic descriptions of a wider class of additional scopes, expanding the common set over time based on real-life usage.
At baseline these (fallback) algorithmic descriptions could be based on terminologies and FHIR metadata (e.g., concatenating strings from resource, search param, and terminology metadata); and depending on your perception of the trade-offs, these could potentially be augmented with 1) showing users any data in their current record that matches the scopes, and 2) using LLMs to generate best-effort friendly descriptions.
Overall, servers that want to generate the friendliest possible user-facing descriptions have plenty of things they can try. Servers that don't want to don't have to.
I'm not sure how servers could generally generate algorithmic descriptions, since the human-semantic meaning of scopes could be based on codes from value sets that the server may not have access to at run-time. I agree this is a side issue, but if it ends up being true that you always need a dev-time assessment of SMARTv2 scopes, then that seems to avoid some of the problems being presented.
meaning of scopes could be based on codes from value sets that the server may not have access to at run-time
This technique depends on the (authz) server having access to the terminologies that are used in the (fhir) server. This may not be something you have automatically or for free, but it's not an exotic requirement -- and it's not all-or-nothing. It scales down... so to put it in a "glass half full" perspective, this technique works for any scopes that use terminology you do have access to.
Jens Villadsen said:
My interpretation of the above-mentioned syntax is that basically all requests end up being conditionally executed :thinking:
One trick that I can share from Firely Server is that we evaluate these scopes against an in-memory repository. So we index the resources (in case of an update or create), check if the authorization system would allow the execution of the action (incorporating the fine-granular scopes in that decision) and then execute the operation against the real database. Yes, it doesn't come for free, but it's not super slow either.
Andrew Krylov said:
Another issue I see is using the same scope parameter for the "c" (create) scope. To achieve this we need to move resource validation from the FHIR API to precede scope evaluation, which is what we call FHIR profile validation. Another issue encountered here is: who will guarantee the consistency of the combination of scope parameters and its values? The FHIR specification defines what the FHIR system should do if search parameters are incorrect, but not for some string linked with & applying to create resources.
In case you want to support arbitrary scopes, you need to check the syntax with some kind of parser. I personally gave up checking in the authorization server whether the value is allowed for the search parameter. Everything else is doable. In any case it's not a requirement that the request already fails at auth time; you could also leave it up to the FHIR server to reject the scope.
Alexander Zautke said:
Jens Villadsen said:
My interpretation of the above-mentioned syntax is that basically all requests end up being conditionally executed :thinking:
Yes, it doesn't come for free, but it's not super slow either.
I agree ... It wont come for free, thats for sure.
@Alexander Zautke Could you explain or share more details (like a step-by-step algorithm) on how we can perform a request for DELETE /Observation/123 with the scope <scope level>/Observation.d?category=laboratory? I understand that we should:
With this parameter, we generally perform three steps instead of one without scopes. Am I right?
@Josh Mandel This scope parameter <scope level>/Observation.crd?category=laboratory should be applied to all actions listed in the scope. From this example, if we perform a create interaction, do we need to apply the expression category=laboratory for create as well as for read and delete? So, in my example, d is the last action in the scope, then the parameter follows. And this parameter doesn't relate only to d; it is also applied to c and r?
Yes. These are the steps. I was saying that when we perform a match of the scopes in our server, we 1) retrieve the resource, 2) index the resource in-memory, 3) execute the matching against that in-memory resource, by essentially seeing if a virtual search request using the parameters from the scopes would match the resource, and 4) execute the delete
Yes, the parameters apply to all the interactions that you include in your scope. If you need to apply different parameters to different interactions you would break up the scope accordingly (e.g. decomposing into one patient/Observation.cr?... and one patient/Observation.d?...).
Jens Villadsen said:
My interpretation of the above-mentioned syntax is that basically all requests end up being conditionally executed :thinking:
Yes, but applying these conditions is not part of the generic FHIR specification for servers. Everything is possible. Applying these FHIR search criteria for search requests is okay—the cost of these requests will be identical to the initial client request. The client asks for a search, and the server performs the search by just changing the search parameters following the scopes. But for create, update, delete, or history requests, the server needs to fetch the requested data (the client doesn't ask for it but has to "pay"—wait for the response, utilize infrastructure) and then perform some checks to apply the search parameter logic. And for creating a new resource, it will be another approach. Imagine you have 100,000,000 Observations, or even 100 mln x 10 (which is absolutely realistic), in the database. Implementing these requirements on the server will be too costly for the clients.
Thank you @Alexander Zautke and @Josh Mandel for the clarification
@Andrew Krylov what do you mean? There's a lot of 'conditionals' in the spec: https://build.fhir.org/http.html - which can be inferred server side based on whatever token was handed out to the client and eventually resolved server side, which can then be embedded for evaluation.
@Grahame Grieve is http://hl7.org/fhir/5.0/StructureDefinition/extension-ObservationDefinition.component supported as a backport extension to R4? Sushi seems to allow it but the IGP chokes in the snapshot generation:
Publishing Content Failed: Error generating snapshot for /Users/jkiddo/work/thp-core-ig/fsh-generated/resources/StructureDefinition-THPObservationDefinition(THPObservationDefinition): Unable to generate snapshot for http://thp.trifork.com/fhir/core/StructureDefinition/THPObservationDefinition in /Users/jkiddo/work/thp-core-ig/fsh-generated/resources/StructureDefinition-THPObservationDefinition because StructureDefinition http://hl7.org/fhir/5.0/StructureDefinition/extension-ObservationDefinition.component at Extension.extension.value[x]: invalid constrained type @ObservationDefinition.qualifiedValue from base64Binary, boolean, canonical, code, date, dateTime, decimal, id, instant, integer, markdown, oid, positiveInt, string, time, unsignedInt, uri, url, uuid, Address, Age, Annotation, Attachment, CodeableConcept, Coding, ContactPoint, Count, Distance, Duration, HumanName, Identifier, Money, Period, Quantity, Range, Ratio, Reference, SampledData, Signature, Timing, ContactDetail, Contributor, DataRequirement, Expression, ParameterDefinition, RelatedArtifact, TriggerDefinition, UsageContext, Dosage, Meta in http://hl7.org/fhir/5.0/StructureDefinition/extension-ObservationDefinition.component (00:00.206 / 00:54.603, 484Mb)
(00:00.002 / 00:54.605, 484Mb)
Use -? to get command line help (00:00.000 / 00:54.606, 484Mb)
(00:00.000 / 00:54.606, 484Mb)
Stack Dump (for debugging): (00:00.000 / 00:54.606, 484Mb)
java.lang.Exception: Error generating snapshot for /Users/jkiddo/work/thp-core-ig/fsh-generated/resources/StructureDefinition-THPObservationDefinition(THPObservationDefinition): Unable to generate snapshot for http://thp.trifork.com/fhir/core/StructureDefinition/THPObservationDefinition in /Users/jkiddo/work/thp-core-ig/fsh-generated/resources/StructureDefinition-THPObservationDefinition because StructureDefinition http://hl7.org/fhir/5.0/StructureDefinition/extension-ObservationDefinition.component at Extension.extension.value[x]: invalid constrained type @ObservationDefinition.qualifiedValue from base64Binary, boolean, canonical, code, date, dateTime, decimal, id, instant, integer, markdown, oid, positiveInt, string, time, unsignedInt, uri, url, uuid, Address, Age, Annotation, Attachment, CodeableConcept, Coding, ContactPoint, Count, Distance, Duration, HumanName, Identifier, Money, Period, Quantity, Range, Ratio, Reference, SampledData, Signature, Timing, ContactDetail, Contributor, DataRequirement, Expression, ParameterDefinition, RelatedArtifact, TriggerDefinition, UsageContext, Dosage, Meta in http://hl7.org/fhir/5.0/StructureDefinition/extension-ObservationDefinition.component
at org.hl7.fhir.igtools.publisher.Publisher.generateSnapshots(Publisher.java:6888)
at org.hl7.fhir.igtools.publisher.Publisher.loadConformance(Publisher.java:5672)
at org.hl7.fhir.igtools.publisher.Publisher.createIg(Publisher.java:1227)
at org.hl7.fhir.igtools.publisher.Publisher.execute(Publisher.java:1066)
at org.hl7.fhir.igtools.publisher.Publisher.main(Publisher.java:13615)
Caused by: org.hl7.fhir.exceptions.FHIRException: Unable to generate snapshot for http://thp.trifork.com/fhir/core/StructureDefinition/THPObservationDefinition in /Users/jkiddo/work/thp-core-ig/fsh-generated/resources/StructureDefinition-THPObservationDefinition because StructureDefinition http://hl7.org/fhir/5.0/StructureDefinition/extension-ObservationDefinition.component at Extension.extension.value[x]: invalid constrained type @ObservationDefinition.qualifiedValue from base64Binary, boolean, canonical, code, date, dateTime, decimal, id, instant, integer, markdown, oid, positiveInt, string, time, unsignedInt, uri, url, uuid, Address, Age, Annotation, Attachment, CodeableConcept, Coding, ContactPoint, Count, Distance, Duration, HumanName, Identifier, Money, Period, Quantity, Range, Ratio, Reference, SampledData, Signature, Timing, ContactDetail, Contributor, DataRequirement, Expression, ParameterDefinition, RelatedArtifact, TriggerDefinition, UsageContext, Dosage, Meta in http://hl7.org/fhir/5.0/StructureDefinition/extension-ObservationDefinition.component
at org.hl7.fhir.igtools.publisher.Publisher.generateSnapshot(Publisher.java:6969)
at org.hl7.fhir.igtools.publisher.Publisher.generateSnapshots(Publisher.java:6886)
... 4 more
Caused by: org.hl7.fhir.exceptions.DefinitionException: StructureDefinition http://hl7.org/fhir/5.0/StructureDefinition/extension-ObservationDefinition.component at Extension.extension.value[x]: invalid constrained type @ObservationDefinition.qualifiedValue from base64Binary, boolean, canonical, code, date, dateTime, decimal, id, instant, integer, markdown, oid, positiveInt, string, time, unsignedInt, uri, url, uuid, Address, Age, Annotation, Attachment, CodeableConcept, Coding, ContactPoint, Count, Distance, Duration, HumanName, Identifier, Money, Period, Quantity, Range, Ratio, Reference, SampledData, Signature, Timing, ContactDetail, Contributor, DataRequirement, Expression, ParameterDefinition, RelatedArtifact, TriggerDefinition, UsageContext, Dosage, Meta in http://hl7.org/fhir/5.0/StructureDefinition/extension-ObservationDefinition.component
at org.hl7.fhir.r5.conformance.profile.ProfileUtilities.checkTypeDerivation(ProfileUtilities.java:2996)
at org.hl7.fhir.r5.conformance.profile.ProfileUtilities.updateFromDefinition(ProfileUtilities.java:2827)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processSimplePathWithOneMatchingElementInDifferential(ProfilePathProcessor.java:688)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processSimplePath(ProfilePathProcessor.java:251)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processPaths(ProfilePathProcessor.java:181)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processPathWithSlicedBaseDefault(ProfilePathProcessor.java:1227)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processPathWithSlicedBase(ProfilePathProcessor.java:1016)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processPaths(ProfilePathProcessor.java:187)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processSimplePathWithEmptyDiffMatches(ProfilePathProcessor.java:888)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processSimplePath(ProfilePathProcessor.java:248)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processPaths(ProfilePathProcessor.java:181)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processPaths(ProfilePathProcessor.java:155)
at org.hl7.fhir.r5.conformance.profile.ProfileUtilities.generateSnapshot(ProfileUtilities.java:750)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processSimplePathWithOneMatchingElementInDifferential(ProfilePathProcessor.java:613)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processSimplePath(ProfilePathProcessor.java:251)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processPaths(ProfilePathProcessor.java:181)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processSimplePathDefault(ProfilePathProcessor.java:379)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processSimplePath(ProfilePathProcessor.java:255)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processPaths(ProfilePathProcessor.java:181)
at org.hl7.fhir.r5.conformance.profile.ProfilePathProcessor.processPaths(ProfilePathProcessor.java:155)
at org.hl7.fhir.r5.conformance.profile.ProfileUtilities.generateSnapshot(ProfileUtilities.java:750)
at org.hl7.fhir.igtools.publisher.Publisher.generateSnapshot(Publisher.java:6961)
... 5 more
I don't know. weird error. how would I reproduce?
I'll make a sample IG for you ...
Here you are: https://github.com/jkiddo/sample-ig
but you can't do this:
it's not a valid R5 extension definition
the exception isn't sensible, but what you're doing is not sensible
Should it be listed here (https://build.fhir.org/ig/HL7/fhir-extensions/extension-registry.html#) to be sensible?
cross version extensions aren't published yet
So how would I know what's 'sensible'?
well, Observation.component exists in both R4 and R5, so it's not a valid cross-version extension
This is ObservationDefinition
not Observation
oh so it is
let me check again
no, this is just a bug in my code. I'll put it on my list to fix
Any branch I could watch for?
I'm not sure how quickly I'll get to it. Depends on @Gino Canessa because it's dead code once he starts producing a revised set of cross-version extensions
But it's the snapshot generation that fails :thinking:
so what's your point?
you're saying the extensions will change definition?
the definition of the cross-version extension is the problem. I'm defining it wrongly
Got it
Getting this done is my primary focus right now, so hopefully not too long ( :fingers_crossed: ). The last round of decisions invalidated large swaths of my previous iteration, so it's a bit of a grind at the moment.
Hi,
One of my implementers is thinking about his codesystems/valuesets, and he finds himself in trouble deciding which CodeSystem content mode to use (https://hl7.org/fhir/R4/valueset-codesystem-content-mode.html).
He would like to know what the influence of each of the content modes is on validation. What are the error / warning / information conditions for each of the values in the content mode? Based on that, he would like to make his decision. Is there some documentation on that? Could someone draw the table of all combinations?
If content is complete, unknown codes are treated as errors
For fragments and examples, it’s a hint
I don’t think it makes a difference otherwise
Both #fragment and #example produce errors, but different ones, and #example far more than #fragment.
#fragment: https://build.fhir.org/ig/hl7-be/mycarenet/qa.html
#example: https://build.fhir.org/ig/hl7-be/mycarenet/branches/issue-93/qa.html
@Grahame Grieve A table might be necessary to bring some light into the darkness...
well, this combination does not make me happy:
@Grahame Grieve How do we proceed? Do I make a Jira-issue?
well, that one should be just one warning - the error is spurious
How do you feel about these, #example should only produce warnings, shouldn't it? Why is it nagging about #complete?
image.png
well, it's not complete because it's just an example. But (a) the message could be clearer and (b) that's supposed to be a warning too
thanks for drawing those to my attention
@Grahame Grieve I created https://github.com/hapifhir/org.hl7.fhir.core/issues/1524
@Grahame Grieve I have a follow-up question here. If I want to validate a CodeSystem and the content mode is #example or #fragment, would the ideal validation result be warnings only?
In case I want to expand or look up a code inside the CodeSystem via a ValueSet, should that be a problem?
Eg. http://hl7.org/fhir/ValueSet/service-type - this ValueSet includes https://www.hl7.org/fhir/R4/codesystem-service-type.json.html as the CodeSystem. How should it behave ideally?
sure you should get warnings for that
and lookup and expand should work, sure
Dave Hill created a new channel #PACIO Personal Functioning and Engagement.
Nikolai Ryzhikov created a new channel #Babylon (Aggregate FHIR terminology).
Jean Duteau created a new channel #Da Vinci PR.
Sanja Berger created a new channel #german/dguv.
Alejandro Benavides created a new channel #HL7 CAM.
Biswaranjan Mohanty created a new channel #Enhancing Oncology.
Abbie Watson created a new channel #fsh-tooling.
deadwall created a new channel #google-cql-engine.
Artur Novek created a new channel #FHIRest.
Aaron Nusstein created a new channel #US Behavioral Health Profiles.
Nagesh Bashyam created a new channel #UDS-Plus.
Grahame Grieve created a new channel #FHIR Foundation.
Grahame Grieve created a new channel #FHIR for Pets.
Koray Atalag created a new channel #Digital Twins on FHIR.
Preston Lee created a new channel #Meld.
A standard means unification, but since the introduction of $operations there are two levels for the operation name: the HTTP verb, i.e. PUT/DELETE/etc., and the URL part, i.e. $validate/$submit/etc., which is actually an OPERATION on $operation, like POST $submit
A unified solution might include $put/$delete/etc. as part of the FHIR API, while HTTP REST verbs like GET and POST would only be necessary at the transport level.
A standard means unification, so far-future versions of resource-oriented FHIR might use resources for the API as well, like a Request with operationName and operationParameter properties and a Response, while HTTP (or other transport protocols) would be used only to transport requests/responses.
hi Alexander, interesting ideas but I am not seeing the advantages.
Part of what makes FHIR successful is that people (and software) already know HTTP and REST. Why replace PUT with $put when HTTP (and all the tools etc) support PUT already?
What do you mean by "only necessary on transport level"? Is GET transport level but $put not somehow?
Operations are intended for the things that HTTP doesn't support out of the box.
What would be the advantage of replacing the basic REST verbs with these other things?
Are you trying to do SOA, where everything is a named operation?
REST seems like a race to the bottom in terms of sophistication (a few dumb verbs), but it was successful.
Some other paradigm will come along, but it would need to be a popular one if FHIR was to get leverage from it as it has from REST.
While not prohibited, hopefully people are not defining operations to do things exactly equivalent to what can be done via a simple RESTful interaction. I generally delineate 'simple' interactions for REST and more complicated requests for operations. For example, when creating a Bulk Data Export request we could have designated a resource that is POSTed with a request and used to track progress, but the community chose to generally model those things as operations instead of as discrete request data objects.
-
Arguments could be made over which is better, purer, etc., but I think we have a pretty good balance between the simplicity of RESTful calls for 'general' interactions and good frameworks for the more complicated stuff.
-
Note that none of this prevents you from defining a different paradigm / API surface / etc., and even proposing it for inclusion in the spec. But given the normative state of those areas of the specs and the adoption they have seen, I would discourage something that is a mostly a modification to the existing REST API (I doubt it would get enough support to be core and thus would only hurt interop instead of expanding it).
I am not sure exactly what you are driving towards here. We already have other transport paradigms in FHIR (e.g., Messaging). Since RESTful calls are ambiguous without the verb, it is included in the resources used (Bundle.entry.request.method).
-
I agree that it could have been done differently (e.g., by using URL segments as you describe, headers, other elements, etc.), but it would be a very breaking change to a lot of implementations to modify that today. If there is a compelling reason (e.g., as Rik talks about), it does not hurt to describe it. There is text around a few different approaches on the FHIR Services page (see Implementation Approaches). I will also note the Orchestration, Services, and Architecture Work Group (formerly Service-Oriented Architecture) which (I think =) covers some of these types of interests.
The one area where I see operations as being superior to REST is if you want to avoid having to do orchestration of multiple REST APIs for something that needs to be done atomically. So if you need several resources and other information like an event code that must be handled as a unit, operations are a good alternative to REST (alongside batches, transactions, and messages).
Hi Rik, Gino, Cooper, thank you for being willing to answer my question. I see I must clarify myself.
1. Unification of operation placement may simplify FHIR server implementation, usage and support. As far as I know, out of the box HTTP PUT is rarely used and can only write a file to the filesystem, so special software for FHIR-specific things like validation and database manipulation must be developed anyway. So, I don't see the simplicity; on the contrary, I see server configuration for PUT, more complex operation resolving, and a more complex PUT-based user client instead of a simple GET/POST-based web browser. The same goes for other verbs. I can imagine a REST API with the operation as the verb and parameters in the URL, like POST CoverageEligibilityRequest + GET CoverageEligibilityResponse. I can imagine an SOA API with the verb as the transportation method (with or without body) and the URL as operation+parameters, like POST Patient/$put or GET Patient/$delete, but I cannot understand the idea of a combined REST+SOA API with both DELETE Patient and POST CoverageEligibilityRequest/$submit. Even backward compatibility looks irrelevant for FHIR.
2. Keeping all the data about a request/response together in a single JSON may simplify manipulating that data. Using Bundle with .timestamp, .entry.request.method, .entry.request.url, etc. is a really good solution.
hi Alexander
I don't know why you would say that PUT is rarely used. Do you mean because a single resource update is not useful? I agree that other orchestrations may be needed, and that is where operations do come in. But PUT is useful it seems.
PUT writes to the FHIR server. PUT is not limited to some sort of first-level commit, as you might be suggesting. However you choose to let clients update data, the server will need to do the same work (be it filesystem, database etc), so I don't yet know why PUT would not be sufficient.
Also the server can already do whatever validation it chooses to, on a PUT.
So I don't yet see any rationale for change based on these factors.
We use the base HTTP verbs without operations when the semantics fit - create, update and delete. We use operations for things that don't fit the semantics of the HTTP verbs. $submit is not the same thing as 'create'. No resource is created. No resource id is returned. No existing instance is revised. There are a lot of situations that don't fit into the limited CRUD semantics of the HTTP verbs. But that doesn't mean we should avoid using those verbs when we can. I'd say that 75% plus of FHIR interoperability is over the base HTTP verbs. Custom operations is 15-20% and the rest is messaging exchanges that don't involve HTTP at all. As soon as you get into the operation space, there's the challenge of standardizing the input and output arguments. The benefit of the HTTP verbs is that there's no possibility for customization. What goes in and what comes out is quite nailed down. That may feel limiting, but it's great in terms of robust interoperability (which is our primary objective).
As a side note, FHIR's approach to REST has been in use for over 10 years and is pretty widespread, so an alternative would have to have a huge upside to have a hope of justifying the transition effort to the market. At the moment, I'm not seeing it in what you've proposed.
Hi Rik
Rik Smithies said:
I don't know why you would say that PUT is rarely used. Do you mean because a single resource update is not useful?
PUT is just an example. Unlike FHIR, most HTTP servers serve browsers' GET and POST requests, while the other verbs are not in use. REST is rarely used, and, as we see, REST does not cover all FHIR requirements, so $operations are necessary.
Also the server can already do whatever validation it chooses to, on a PUT.
Sure, existing FHIR servers are what they are. But I care about the process of creating FHIR servers itself.
Hi Lloyd
Lloyd McKenzie said:
As a side note, FHIR's approach to REST has been in use for over 10 years and is pretty widespread, so an alternative would have to have a huge upside to have a hope of justifying the transition effort to the market. At the moment, I'm not seeing it in what you've proposed.
This "side note" looks like the main point. I completely agree that rebuilding steady working things is a bad idea, but I think that FHIR is not so far from the beginning. It's already clear that REST is not enough for API, while more features will be added. And I'm sure, many new FHIR servers will be developed, while existing ones will be redeveloped once as a part of regular cycle. Anyway, I also agree that everything should be done at the right moment.
Thank you for the answer.
hi @Alexander Breusov, maybe you have the answers you need now, but I don't really understand your reply to my points.
You say "REST is rarely used".
If you mean that FHIR rarely uses REST then I don't think that is true. If the argument hinges on this then it would be good to understand why you say this.
It is true that FHIR APIs often add operations to plain REST, to round out the API. So 100% REST FHIR may not be all that common (e.g. $validate is often used, which arguably is not REST). I don't see why that would mean it is better to abandon the existing HTTP primitives and replace them with something that no one is familiar with.
I would like to create a datatype profile on the CodeableConcept datatype that includes a binding on CodeableConcept level itself, not the .coding. This would prevent me from having to add the binding in the profile that will use this CodeableConcept profile.
Is this (or should this be) possible?
I think so
I can't add the binding within Forge on the root of the CodeableConcept, and I also don't seem to be able to do this with FSH.
You can do it in FSH, but you need to drop into the caret syntax to do it:
Profile: MyBoundCodeableConcept
Parent: CodeableConcept
* . ^binding.strength = #required
* . ^binding.valueSet = Canonical(MyValueSet)
Ideally, it would be nice if you could say * . from MyValueSet inside the CodeableConcept profile, but it seems SUSHI does not like that, probably because the root element does not have a type.
@Chris Moesel thanks. That seems to work.
@Ward Weistra we should perhaps deep dive into why this is not supported by Forge.
Bumped into other problems further down the line. The Java and the .NET validator do not pick up the value set binding.
When generating a snapshot in Forge or using the HL7 Validator, the binding from the datatype profile is ignored and reverts to the default from the base resource.
Got some test data attached in a zip file here :)
The reason we erase it is that it is forbidden to put a binding on the root, see here: https://hl7.org/fhir/elementdefinition.html#interpretation
Column "Constraint definition, first element", row "binding".
I'm not saying the use case isn't valid, and it would not even be hard to implement, but it would require a change to the spec.
Thanks Ewout for pointing to the right place in the spec...
As I really think this is a nice-to-have for profilers, I have submitted: https://jira.hl7.org/browse/FHIR-48664
Oh. Good find, @Ewout Kramer! For required bindings, I think profilers could approximate this constraint by doing an open slicing on CodeableConcept.coding by value and specifying a 1..1 slice with the bound value set. It's not pretty, but it might get the job done.
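For anyone wanting to try that workaround, a rough sketch of what the differential could look like is below; the profile URL, value set URL and slice name are placeholders rather than anything from the discussion above:
{
  "resourceType": "StructureDefinition",
  "url": "https://example.org/StructureDefinition/my-bound-codeable-concept",
  "name": "MyBoundCodeableConcept",
  "status": "draft",
  "kind": "complex-type",
  "abstract": false,
  "type": "CodeableConcept",
  "baseDefinition": "http://hl7.org/fhir/StructureDefinition/CodeableConcept",
  "derivation": "constraint",
  "differential": {
    "element": [
      {
        "id": "CodeableConcept.coding",
        "path": "CodeableConcept.coding",
        "slicing": {
          "discriminator": [
            {
              "type": "value",
              "path": "$this"
            }
          ],
          "rules": "open"
        }
      },
      {
        "id": "CodeableConcept.coding:fromMyValueSet",
        "path": "CodeableConcept.coding",
        "sliceName": "fromMyValueSet",
        "min": 1,
        "max": "1",
        "binding": {
          "strength": "required",
          "valueSet": "https://example.org/ValueSet/my-value-set"
        }
      }
    ]
  }
}
The slice is discriminated "by value" against the required binding on the slice itself, which is why the discriminator path is $this.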
When using definition-based extraction to extract resources, what is the right way to set the definition for a questionnaire item when the element to be extracted on the resource is an extension?
Ex. the following is a sample questionnaire:
{
"type" : "Questionnaire",
...
"item":[
{
"type": "choice",
"extension": [
{
"url": "http://hl7.org/fhir/StructureDefinition/questionnaire-itemControl",
"valueCodeableConcept": {
"coding": [
{
"system": "http://hl7.org/fhir/questionnaire-item-control",
"code": "drop-down",
"display": "Drop Down"
}
              ]
}
}
],
"definition" : "???????",
"linkId": "religion",
"text": "Religion",
"answerOption": [
{
"valueCoding": {
"code": "1013",
"display": "Christian",
"system": "http://terminology.hl7.org/CodeSystem/v3-ReligiousAffiliation"
}
},
{
"valueCoding": {
"code": "1023",
"display": "Islam",
"system": "http://terminology.hl7.org/CodeSystem/v3-ReligiousAffiliation"
}
    }
]
}
]
}
The Patient resource (http://build.fhir.org/ig/WorldHealthOrganization/smart-anc/StructureDefinition-anc-patient.html) has a field patient-religion, which is an extension.
Now I want to know what value I need to put in the definition field (???????) to get the answer extracted.
Let me know if there are any working examples I can look at. anc-patient.PNG
I don't think we've talked about this. My leaning would be for the FHIRPath to allow the 'extension' element - so you could use extension('whatever').value. Can you submit a tracker item for us to talk about this and make it explicit in the spec?
It's the value from the elementdefinition.id in the referenced StructureDefinition, isn't it?
If you use that form and create a QR, you can try out the $extract on that server.
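For illustration only, the religion item might then look something like the snippet below; the canonical URL and the extension slice name here are placeholders and would need to be taken from the actual anc-patient StructureDefinition:
{
  "linkId": "religion",
  "text": "Religion",
  "type": "choice",
  "definition": "https://example.org/StructureDefinition/anc-patient#Patient.extension:religion.value[x]"
}
i.e. the definition is the profile's canonical URL, then "#", then the id of the target ElementDefinition in that profile.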
Lloyd McKenzie said:
I don't think we've talked about this. My leaning would be for the FHIRPath to allow the 'extension' element - so you could use extension('whatever').value. Can you submit a tracker item for us to talk about this and make it explicit in the spec?
Hi @Lloyd McKenzie , should we raise a tracker item in Jira?
Brian Postlethwaite said:
It's the value from the elementdefinition.id in the referenced StructureDefinition isn't it?
Thanks, the example that you shared with the server works! We did some debugging and found that the implementation of the extraction piece in the Android FHIR SDK is restricted to fields present in the resource Java class (i.e. org.hl7.fhir.r4.model.*). So that means extensions can't be extracted using slicing, since that is not supported in the SDK.
cc: @Jing Tang
@Kashyap Jois - yes please
Wondering if the Android SDK has been updated to support this now?
I believe that the SMILE one now supports it.
I've also just updated my server (via the open source dotnet Q validator) to validate the Definition property too - which is called by the form tester in the fhirpath lab...
I have a QuestionnaireResponse with a contained Questionnaire, and I'm getting a validation failure when answering with a valueString against the contained Questionnaire's item[x].type being set to open-choice. When validating against validator.fhir.org R4 4.0.1, I get back "Option list has no option values of type string".
Below is a trimmed-down example failing validation. From reading other discussions here, it sounds like this is supported in R4 and open-choice is the correct type on the question item; I just haven't found an example of how to properly build this. I assume the Questionnaire being a contained resource on the QuestionnaireResponse shouldn't matter.
Hopefully unrelated - the itemControl code we are using does not belong in the ValueSet (known issue with a third party).
{
"id":"9bb747d7-2666-47f2-9c79-20bc05198448",
"meta":{
"versionId":"5"
},
"contained":[
{
"id":"ed364266b937bb3bd73082b1",
"item":[
{
"extension":[
{
"url":"http://hl7.org/fhir/StructureDefinition/questionnaire-itemControl",
"valueCodeableConcept":{
"coding":[
{
"code":"editableDropdown"
}
]
}
}
],
"id":"specimen-source",
"answerOption":[
{
"valueCoding":{
"code":"U",
"display":"Urine"
}
},
{
"valueCoding":{
"code":"B",
"display":"Blood"
}
},
{
"valueCoding":{
"code":"S",
"display":"Saliva"
}
}
],
"code":[
{
"code":"specimen-source"
}
],
"linkId":"specimen-source",
"text":"Source of specimen",
"type":"open-choice"
}
],
"name":"Test Open Choice question",
"status":"active",
"subjectType":[
"Patient"
],
"resourceType":"Questionnaire"
}
],
"item":[
{
"answer":[
{
"valueString":"spinal tap"
}
],
"linkId":"specimen-source",
"text":"Source of specimen"
}
],
"questionnaire":"#ed364266b937bb3bd73082b1",
"status":"in-progress",
"subject":{
"display":"Test Patient",
"identifier":{
"value":"4"
},
"reference":"Patient/4",
"type":"Patient"
},
"resourceType":"QuestionnaireResponse"
}
That looks valid to me.
why? The definition says it has a list of values that are of type Coding, but the answer has a type of string
open-choice is a set of either codes from the referenced set, or a string if no value is appropriate from the set.
This was changed in R5, but is valid in R4/R4B
https://hl7.org/fhir/r4/codesystem-item-type.html#item-type-open-choice
Answer is a Coding drawn from a list of possible answers (as with the choice type) or a free-text entry in a string (valueCoding or valueString).
The choice type matches the description you gave.
The 2 control types that often use this type are a combo-box that has an edit control, or an auto-complete style search control.
ok. I missed that. fixed next release
Where's that code so I can do a review on it to see which parts are missing (and compare to my validation)
Any ideas for workarounds to save this? We're on HAPI FHIR 6.8.0, and I assume this is something that will take a while to make it all the way through the pipeline for HAPI to consume. I was thinking of just adding the provided valueString answer into the Questionnaire's answer options list just to pass validation, but that feels very wrong. I'm not sure if HAPI allows overriding base HL7 rules.
Interestingly, HAPI has support for open-choice to override validation, but all the code is commented out. I can post on GitHub to find out why.
@Grahame Grieve I just saw the fix and release, we'll test it out today
task "ready" says: The task is ready to be performed, but no action has yet been taken. Used in place of requested/received/accepted/rejected when request assignment and acceptance is a given.
what does that last part mean?
a) "when request assignment and acceptance are part of the workflow but are not yet done"
b) "when there is no need for such a thing as request assignment and workflow"
in other words, how do I do "here is a task for someone to maybe pick up"? I would assume it's "ready" but given the above description, I am not so sure
b
You use it when the filler doesn't get the option to say "yes" or "no".
what about the case of "I don't care whether there is a filler to accept this or not; right now I just want to say that perhaps this should be done"?
For "here is a task for someone to maybe pick up", wouldn't Task.status="requested"?
I think so, but that is not what I read from the description, especially the "maybe pick up".
"The task is ready to be acted upon and action is sought."
Your use of status will depend on the workflow you are supporting, i.e. directed tasks, undirected claimed tasks, fulfilment offers, etc. For diagnostic orders, we use requested, accepted, in-progress, completed, but that is a directed/undirected model that might also involve the claiming of fulfilment based on patient choice.
If the Task isn't directed but is available for someone to claim, it would have a status of 'requested' and 'owner' would be empty.
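As a rough sketch of that undirected "available to claim" case (the resource contents are illustrative only, not taken from any of the workflows above):
{
  "resourceType": "Task",
  "status": "requested",
  "intent": "order",
  "code": {
    "text": "Review incoming lab results"
  },
  "for": {
    "reference": "Patient/example"
  }
}
There is no owner here; a potential filler claims the task by setting owner and updating the status.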
Given the timing of the event (evening/night in Europe), will there be any recordings? @Nikolai Ryzhikov 🐬
René Spronk said:
Given the timing of the event (evening/night in Europe), will there be any recordings? Nikolai Ryzhikov 🐬
Sure
Sorry for such timing - we tried to cover 3 continents :/
IMHO covering 3 continents never works - better to optimize for 2 out of 3 (and have recordings for the 'lost' continent ;-) ).
Which TZ is Atlantis in again? ;-)
Same as Washington D.C.
'Null Island' https://en.wikipedia.org/wiki/Null_Island
René Spronk said:
IMHO covering 3 continents never works - better to optimize for 2 out of 3 (and have recordings for the 'lost' continent ;-) ).
North America, South America, Europe and Africa - at one time - is a possibility. Asia and Oceania - remains the Exclusion Zone.
Hello everyone,
I have a question regarding the implementation of CDS Hooks in Epic. I have developed a CDS service application that supports the patient-view, order-select, and order-sign hooks. How can I integrate this with Epic?
So far, I have created an application in Epic where the intended audience is clinicians and administrative users. I have selected all the necessary APIs and checked the "Uses CDS Hooks" option.
However, I am now looking for guidance on how to create the CDS Hook request in Epic and how to interact with the CDS hooks application that I built. I have reviewed the Epic documentation, but I couldn't find specific instructions on how to use or implement CDS Hooks in an Epic app.
Could you please explain how the CDS Hooks implementation works within Epic and the steps I should take to complete the integration?
Thanks
Best place to seek Epic support is to email open@epic.com
Hi @Lloyd McKenzie
In the Epic documentation, I noticed that Epic supports three CDS hooks. I would like to understand how to use this feature. Specifically, how can I utilize the patient-view and order-select CDS hooks in Epic?
Thanks
This forum is focused on FHIR, not on any vendor/product-specific problems or issues. As Lloyd stated: the best place to seek Epic support is to email open@epic.com
Hi,
I am new to CDS Hooks and have started CDS Hooks service development. Can anyone help me understand the core functionality?
It would be very helpful if someone could provide some thoughts on this.
@Rakesh Das - there is a #cds hooks channel where you might get answers to your questions.
Hello,
I need to use the Communication.payload.content[x]:contentCodeableConcept element, which is part of the pre-adopted R5 specification, in our R4 implementation. According to http://hl7.org/fhir/R5/versions.html#extensions I'd need to add the package hl7.fhir.extensions.r4:4.0.1, but I'm not able to find it (the link to this package on the aforementioned page does not resolve).
Is there an alternative package or solution that would allow me to handle this scenario?
Thanks!
not right now - work in progress
@Grahame Grieve For now we decided to create a custom extension, mimicking the extension url as much as possible (we use nictiz.nl instead of hl7.org in the url), so that it can be replaced easily by the core extension as soon as the package becomes available. Since the element path contains brackets, is it correct to assume that the url of the core extension will be http://hl7.org/fhir/5.0/StructureDefinition/extension-Communication.payload.content%5Bx%5D:contentCodeableConcept (i.e. with the brackets URL-escaped)?
Moreover, we would like to mimic the id as much as possible. How will the corresponding id be constructed, since it's not allowed to include characters such as [, ], % and : in an id? Currently we have omitted the content[x]: part altogether and use extension-Communication.payload.contentCodeableConcept as the id.
Thanks in advance!
@Gino Canessa
Regarding the cross-version extensions generally, we made a lot of great progress during the connectathon and WGM. The ball is in my court to implement all of those changes and generate the next pass of definitions for review. That will take some time on the order of weeks (1-4?).
@Gino Canessa Is it okay to base the id of the extension on something like extension-Communication.payload.contentCodeableConcept, where we omit the brackets and other special characters since they aren't allowed in an id? Additionally, for the url of the extension, would it be correct to assume http://hl7.org/fhir/5.0/StructureDefinition/extension-Communication.payload.content%5Bx%5D:contentCodeableConcept, where the brackets are URL-encoded? Thanks!
That does not look quite right to me, but I have not gotten to reconciling changes in URL and structure yet (working on changes in differentiation right now).
I need to get to that section of changes before I can give an answer on the new content.