A weekly summary of chat.fhir.org to help you stay up to date on important discussions in the FHIR world.
@Grahame Grieve when you can share any updates, we are about to have a conversation with HL7 CG WG (Tuesday April 4th) on this topic - from a broader perspective of "FHIR Test Cases" in general. Just wondering if any notes can be shared.
not at this point
only that I am planning to add a set of test cases for $expand and $validate-code to the test cases github repo that explore known issues around expansion and validation
@Grahame Grieve, @Michael Lawley: If I'm to believe this: https://confluence.hl7australia.com/display/COOP/2023-03+Sydney+Connectathon , the Australian connectathon was on March 23rd... Do you already have some news about the Publisher/Terminology server interface?
nothing final yet. I will update when there's news
ok there's news. There's test cases here: https://github.com/FHIR/fhir-test-cases/tree/master/tx
From the next release of the validator, you can run them like this:
java -jar validator.jar -txTests -source https://github.com/FHIR/fhir-test-cases -output /Users/grahamegrieve/temp/txTests -tx http://tx-dev.fhir.org -version 4.0
there's a fair bit of work to go here, but this is the shape of where things are going
@Grahame Grieve What's the preferred way to provide feedback on these tests - questions, apparent bugs, etc?
Currently I have issues with the REGEX test, a bunch of the language tests, and the big-echo-no-limit test, which seems to require a system to refuse to return an expansion with more than 1000 codes?
Wrt the language tests, language-echo-en-en and language-echo-de-de seem to suggest that the expansion should set ValueSet.language based on the displayLanguage parameter to $expand. But that would then imply that the entire result ValueSet is in that language, rather than just the ValueSet.expansion.contains.display values (which is all that parameter is really requesting).
For the translated CodeSystems in the language tests, none of the translations have a use value, so I (Ontoserver) can't know that they should be used as the preferredForLanguage display value.
Last question: is there a branch available with the -txTests option?
What's the preferred way to provide feedback on these tests - questions, apparent bugs, etc?
discussion here first, I think.
Currently I have issues with the REGEX test
what?
the big-echo-no-limit test which seems to require a system to refuse to return an expansion with more than 1000 codes?
well, this is something we'll have to figure out. They're my tests, and that's how my servers work. It's not necessarily how other systems have to work, so we'll have to figure out how to say that in the tests
Wrt the language tests, language-echo-en-en, language-echo-de-de seem to suggest that the expansion should set ValueSet.language based on the displayLanguage parameter to $expand. But, that would then imply that the entire result ValueSet is in that language rather than just the ValueSet.expansion.contains.display values (which is all that parameter is really requesting).
I sure expected some discussion on this. There are two different things that you might want - languages on the displays, and a language for the response. The way the tests work, if you specify one or more display languages, you get displays defined for those languages
But the language of the response - ValueSet.language - is based on the language parameter of the Parameters resource, which controls how the available displays are represented in the response
with regard to the use parameter, I don't believe that the spec says anywhere that there is a preferredForLanguage code, so how can that be in the tests?
is there a branch available with the -txTests option?
the master has that now
I now have the validator test runner going, but I think it is being really overzealous in the level of alignment it's looking for between the expected response and the actual response.
First two issues: .meta and .id -- I don't think either of these should be included in the comparison.
Next one: ValueSet.expansion.id -- that's purely a server-specific value
.meta and .id... I'm not producing them, right?
.expansion.id? or expansion.identifier?
Regarding the regex issue, we're limited to Lucene's flavour which does not include character classes like '\S' or '\d'.
.id is in simple/simple-expand-all-response-valueSet.json, for example. I produce .meta but not .id
ouch. would you like to propose an alternative regex?
".{4}[0-9]" would work for me in this example, but it's not quite the same. The more accurate "[^ \t\r\n\f]{4}[0-9]" would also work.
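For what it's worth, the difference between the patterns can be sanity-checked outside Lucene. A quick sketch using Python's re module as a stand-in (Python supports \S, which Lucene's flavour doesn't; the sample codes are made up):

```python
import re

original = re.compile(r"\S{4}[0-9]")          # uses \S, unavailable in Lucene's regex flavour
loose = re.compile(r".{4}[0-9]")              # Lucene-safe, but '.' also matches spaces
strict = re.compile(r"[^ \t\r\n\f]{4}[0-9]")  # Lucene-safe negated class, closer to \S

for sample in ["abcd5", "ab d5"]:
    print(sample,
          bool(original.fullmatch(sample)),
          bool(loose.fullmatch(sample)),
          bool(strict.fullmatch(sample)))
# "abcd5" matches all three; "ab d5" matches only the loose pattern,
# which is why ".{4}[0-9]" is "not quite the same"
```

(Note that \S also excludes vertical tabs and Unicode whitespace, so even the explicit negated class is only an approximation.)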
And yes, I did mean expansion.identifier, but I think this was a false negative -- me misreading the output
I will commit some changes when I can
btw, what are you putting in meta?
Require no id, or just don't care? I think a bunch of things should be don't care
Meta was including a version (doesn't really make sense) but also a lastUpdated
I think for id, it shouldn't have an id? I just stopped regurgitating the id, which was basically an oversight
It would also potentially propagate tags
What if it's using a stored expansion?
in this context?
Well, no, but I'm thinking that these tests should really only be looking for things that are known to be wrong
perhaps. They're also my own internal qa tests. that might be too much, I guess, but I'm hoping not
I was thinking that the expected response in the test would set the scope of required elements, and other things would just be ignored
you assume that I'm sure what the answer is there
I'm guessing there's a way to require an element but ignore the value
I'm not even sure that it can have a known answer
there is, yes
I've got a bunch of time later today to dig into this in detail, so I can hopefully provide coherent feedback rather than piecemeal reactions
ok great
Back quickly to .expansion.identifier, this is what I'm seeing:
Group simple-cases
Test simple-expand-all: Fail
string property values differ at .expansion.identifier
Expected :$uuid$
Actual :4aa6f81f-ab79-41b2-96e2-6faa0aadc38c
well, that's not a valid value
oh! it needs the urn:uuid: prefix?
yes
But the type is uri? which can be absolute or relative
well... a URI can be, but in this case:
uniquely identifies this expansion of the valueset
I think it should be absolute
There are several places in the spec where we missed this when we allowed relative URIs
uniquely in what scope though? wrt that specific tx server endpoint, or globally, or in some deployment environment?
I don't think you can legitimately enforce it to be a UUID (it might be something like [base]/expansion/[UUID], which would be "unique" and absolute)
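Both shapes under discussion are absolute URIs. A minimal sketch of the two (helper names and the base URL are invented for illustration):

```python
import uuid

def urn_style(u: uuid.UUID) -> str:
    # the urn:uuid: form the test cases expect
    return f"urn:uuid:{u}"

def url_style(base: str, u: uuid.UUID) -> str:
    # the [base]/expansion/[UUID] form -- also absolute, and unique per endpoint
    return f"{base}/expansion/{u}"

u = uuid.uuid4()
print(urn_style(u))                            # e.g. urn:uuid:4aa6f81f-...
print(url_style("https://tx.example.org", u))  # e.g. https://tx.example.org/expansion/4aa6f81f-...
```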
This one is perhaps tricky: several tests expect an expansion parameter for excludeNested, but Ontoserver always behaves as if this was true, and so omits it because its value does not affect Ontoserver's behaviour.
That's less than ideal from my pov, and probably excludes Ontoserver from serving for HL7 IGs. Maybe. I'll think about the testing ramifications. Is that fact visible in the terminology capabilities statement?
You have it as a uuid anyway, so prefixing isn’t going to be a problem? And the intent is global since expansions are sometimes cached and reused. Sometimes at scale
Globally unique is fine, but then I'd be tempted to adopt a URI based on the template [base]/expansion/[UUID], e.g., https://tx.ontoserver.csiro.au/expansion/4aa6f81f-ab79-41b2-96e2-6faa0aadc38c.
But in principle, if the spec says URI, unique identifier, then I don't think it's good form to impose additional constraints.
Ontoserver does return TerminologyCapabilities.expansion.hierarchical = false
But the meaning of excludeNested is only about the result representation (true => MUST return a flat expansion); it does not affect the logical content of the expansion.
Is there a reason you think that parameter should be included?
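To make the representational point concrete, the flattening that excludeNested = true mandates can be sketched as a recursive walk that keeps every code but discards the hierarchy (the function name and system URL are hypothetical):

```python
def flatten_contains(contains):
    """Flatten nested expansion.contains entries: the logical content
    (which codes are present) is unchanged, only the nesting is lost."""
    flat = []
    for entry in contains:
        children = entry.pop("contains", [])
        flat.append(entry)
        flat.extend(flatten_contains(children))
    return flat

nested = [{"system": "http://example.org/cs", "code": "a",
           "contains": [{"system": "http://example.org/cs", "code": "a1"},
                        {"system": "http://example.org/cs", "code": "a2"}]}]
print([e["code"] for e in flatten_contains(nested)])  # → ['a', 'a1', 'a2']
```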
Conversely, Ontoserver redundantly includes offset and count values in the expansion parameters even if they haven't had any impact on paging.
I fully expect we'll need to do some adjustments in this space
Is there a reason you think that parameter should be included?
IG authors have raised issues before when the expansion in the IG loses the hierarchy
@Michael Lawley I've been thinking about this one:
omits it because its value does not affect Ontoserver's behaviour.
That's wrong - the parameters are there to inform a consumer how the value set was expanded. Whether Ontoserver can or can't vary this is not the point; it's how it acted when doing the expansion
Ontoserver redundantly includes offset and count values in the expansion parameters even if they haven't had any impact on paging
offset = 0, presumably, but what's count in that case?
But the presence/absence/value of excludeNested doesn't affect "expansion" (i.e., which codes are present), it only potentially affects how those codes are returned in ValueSet.expansion.contains.
Grahame Grieve said:
Ontoserver redundantly includes offset and count values in the expansion parameters even if they haven't had any impact on paging
offset = 0, presumably, but what's count in that case?
MAXINT
it still affects the expansion even if it doesn't affect which codes are present
If a consumer is looking through a set of expansions, instead of just generating a new one, then it's going to be input into their choice
I had been approaching it from the perspective of judging whether or not a persisted expansion is re-usable for a different expansion request.
(Which is something that Ontoserver does when it has a ValueSet with a stored expansion.)
indeed, but you're only thinking of it in your context, it could/would also be done in expansion users that can't make the assumption you're making
I'm trying to think about this from the perspective of a client / consumer of ValueSet.expansion -- under what circumstances do they need to know excludeNested = true? What is it actually telling them?
One answer might be "this value was provided for this expansion parameter in the original request"?
that this expansion will not contain nested contains, even if that might be relevant for this value set
Also, what should Ontoserver do if the request was $expand?excludeNested=false? Should it state that in the parameters even though the actual expansion may have flattened any nesting (if it was present)? Or should it change it to true, because flattening might have happened?
Perhaps the message is just "as a client, you do not have to look for nested codes when processing this expansion"?
well I think that the server should return an exception if the client asked it to do something it can't do
But that's not what excludeNested=false means. It's not the same as saying "include nested"
no that's true
and you don't know whether flattening is a thing that happened or not, I presume
correct
Now looking at all the validation test cases, the system parameter has the wrong type (valueString not valueUri) and, in the responses, code also has the wrong type (valueString instead of valueCode); and similarly for system in the responses
wow, that's bad on my part. Fixed
nearly - still problems with the system parameter
diff --git a/tx/validation/simple-code-bad-code-request-parameters.json b/tx/validation/simple-code-bad-code-request-parameters.json
index 077c424..59d292a 100644
--- a/tx/validation/simple-code-bad-code-request-parameters.json
+++ b/tx/validation/simple-code-bad-code-request-parameters.json
@@ -8,6 +8,6 @@
"valueCode" : "code1x"
},{
"name" : "system",
- "valueString" : "http://hl7.org/fhir/test/CodeSystem/simple"
+ "valueUri" : "http://hl7.org/fhir/test/CodeSystem/simple"
}]
}
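These value[x] type mix-ups are mechanical enough to catch with a small script. A sketch of such a check (the expected-type table covers just the three input parameters mentioned here, and the checker itself is invented, not part of the validator tooling):

```python
# Expected value[x] keys for a few $validate-code input parameters:
# url and system are uri-typed, code is code-typed.
EXPECTED = {"url": "valueUri", "system": "valueUri", "code": "valueCode"}

def check_parameter_types(parameters):
    """Report parameters whose value[x] key doesn't match the expected type."""
    problems = []
    for p in parameters.get("parameter", []):
        expected = EXPECTED.get(p["name"])
        if expected and expected not in p:
            actual = next(k for k in p if k.startswith("value"))
            problems.append(f"{p['name']}: {actual} should be {expected}")
    return problems

bad = {"resourceType": "Parameters", "parameter": [
    {"name": "code", "valueCode": "code1x"},
    {"name": "system", "valueString": "http://hl7.org/fhir/test/CodeSystem/simple"}]}
print(check_parameter_types(bad))  # → ['system: valueString should be valueUri']
```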
In validation/simple-code-implied-good-request-parameters.json, there is a non-standard parameter implySystem:
{
"resourceType" : "Parameters",
"parameter" : [{
"name" : "url",
"valueUri" : "http://hl7.org/fhir/test/ValueSet/simple-all"
},{
"name" : "code",
"valueCode" : "code1"
},{
"name" : "implySystem",
"valueBoolean" : true
}]
}
indeed there is, and there should be, right?
it indicates that it is intentional that there's no system and the server should infer what the system is
But that is an invented non-standard parameter?
The use-case here seems to be that the system isn't knowable by the calling client, but in the context of validation, why wouldn't the system be known? There should be bindings available?
it's a code type, so there's only a code, and the server is asked to imply the system from the code and the value set
agree I haven't proposed that parameter, but it's still needed
Yes, it's a code type, but that must exist in some context, right? The context should provide the system?
the value set itself is the context
What are the boundaries here? Can the ValueSet contain codes from > 1 code system? Can the code be non-unique in the valueset expansion?
The value set can contain codes from more than one code system, yes. A number of them do. The code must be unique in the value set, else it's an error
Presumably the system parameter does also need to be provided (from the documentation of $validate-code.code: "the code that is to be validated. If a code is provided, a system or a context must be provided"). Does the client just pass a dummy system that is ignored?
no the system is not provided in this case
since there isn't one
and yes, that violates the documentation on that parameter
And is it only ever used when supplying the code parameter?
yes. it must be accompanied by a code and a value set
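A sketch of what a server honouring implySystem might do: look the bare code up in the value set and require uniqueness, per "the code must be unique in the value set else it's an error". The function name and system URLs are illustrative, and nested contains entries are ignored for brevity:

```python
def imply_system(code, expansion_contains):
    """Infer the code system for a bare code from a (flat) value set expansion.
    The code must match exactly one system; otherwise it's an error."""
    systems = {e["system"] for e in expansion_contains if e.get("code") == code}
    if len(systems) != 1:
        kind = "ambiguous" if systems else "not found"
        raise ValueError(f"code {code!r} is {kind} in the value set")
    return systems.pop()

contains = [{"system": "http://hl7.org/fhir/test/CodeSystem/simple", "code": "code1"},
            {"system": "http://hl7.org/fhir/test/CodeSystem/other", "code": "code2"}]
print(imply_system("code1", contains))  # → http://hl7.org/fhir/test/CodeSystem/simple
```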
I fixed the remaining system parameters
For examples like validation/simple-code-bad-display-response-parameters.json, why is the result true when the display is invalid? The specification for the result output parameter is:
True if the concept details supplied are valid
Another test case issue: a mis-named input parameter. See, for example, validation/simple-code-bad-version1-request-parameters.json, which includes a parameter version that should instead be systemVersion.
why is the result true when the display is invalid?
Because that's just a warning - the code has been judged to be in the value set
a parameter version that should instead be systemVersion
ouch
why is the result true when the display is invalid?
Because that's just a warning - the code has been judged to be in the value set
But if a display is provided it should be validated, and if it's not any of the displays listed by the CodeSystem, then it is invalid -- the definition of result is not "True if the code is a member of the ValueSet/CodeSystem", but rather "True if the concept details supplied are valid"; display is one of those details.
I am very uncomfortable about relaxing display validation so that it doesn't affect the outcome, given the prevalence of EHRs that allow the display to be edited arbitrarily.
well, I'm very sure that if I changed to an error instead of a warning, the IG authoring community would completely rebel, but I guess TI might want to have an opinion. So what do other people think?
There are lots of reasons for display not being valid. (E.g. If someone has a code system supplement the validator doesn't know about.)
Why is the IG authoring community using non-valid displays?
there's 4 reasons that I've seen:
Note, I am more concerned about the clinical community than the IG community.
If this is an impasse, perhaps the mode flag should be used to relax things?
Either way, I think we need an explicitly agreed mechanism to use the "issues" to flag the invalid display text.
Also, I think the test extensions-echo-all is wrong, at least in assuming supplements will be automagically included
ValueSet display should succeed
TI decided otherwise; that's no longer allowed
I expect that TI will choose to decide this in NOLA. You going to be there?
Either way, I think we need an explicitly agreed mechanism to use the "issues" to flag the invalid display text.
the tests are doing that now
This has been discussed, at some length, with regard to SNOMED CT descriptions, and I recall that @Dion McMurtrie produced a table with various permutations in the early days of SNOMED on FHIR.
Unless the edition and version of SCT is provided, it's not possible to determine the validity of an unrecognized description. Otherwise, the best a server can do is return the preferred term from its default edition & version and a warning.
well, this discussion is not just about SCT that's for sure
Also, I think the test extensions-echo-all is wrong at least in assuming supplements will be automagically included
why?
That's precisely the intent of this test - make sure that supplements such as this are automagically included
language supplements
Grahame Grieve said:
well, this discussion is not just about SCT that's for sure
Sure - but things are a lot more straightforward for single edition, single language Code Systems.
that's not much of a hill to climb given how complex SCT is
It's far more complex with things like LOINC where the same complexity (different national editions and local extensions) exists, but where everyone does it differently and often poorly.
Re extensions-echo-all, the supplement contains extensions (some, I think, are technically not valid where they're being used), and then expects corresponding property values in the output (eg weight)
which ones are not valid?
ItemWeight - only goes on Coding and in a Questionnaire
I think I created a task about that one
I used a property where I could, and an extension where I had to
We can force the overhead of a CodeSystem supplement, but we can't count on the supplement being available when performing production-time validation. And that means that non-matching display names shouldn't be treated as an error.
If you're doing prod time validation without all the base info, then you're only going to get half answers - do you tolerate missing profiles? But, if your use case is tolerant of bad displays, just omit them from the validate-code calls, or let's have an explicit parameter that the client passes telling the server to only treat as warnings
@Michael Lawley to increase your happiness, I'm just adding tests for supporting these 3 parameters from $expand for $validate-code: system-version, check-system-version, force-system-version, and as I'm doing that, I'm checking that they apply to Coding.version as well
One of the other challenges with the txTests is that Ontoserver returns additional expansion.parameters and this causes the test to report a false failure
Most implementations don't care about the display values - and will be sloppy with them. So the default behavior should be warnings - errors should require the explicit parameter.
One of the other challenges with the txTests is that Ontoserver returns additional expansion.parameters and this causes the test to report a false failure
I'm assuming that this is something we'll sort out, so I'm not worrying about that today
but it's a test problem, not an implementation problem
Given that displays are what clinicians see and interpret, being sloppy is bad -- we've seen real clinical risks here.
And just because (a group of) ppl are sloppy doesn't mean we should enable that by default.
but it's a test problem, not an implementation problem
It's a test problem yes, but it's making it very hard for me to work through the cases because it bails out early and hides potential actual problems in the rest of the response.
fair.
Do you have a list of the extra parameters? In general, some extra parameters would be fine but others might not be, and I don't want to simply let anything go by
The reality is that the displays in many code systems are not appropriate for clinician display. By 'sloppy' I mean that systems make the displays what they need to be for appropriate user interface, not worrying too much about diverging from the 'official' display names if the 'official' names aren't useful for the purpose. I'm not saying that the display names chosen are typically inappropriate/wrong.
Do you have a list of the extra parameters?
version is the main one, and it seems strange that it's not expected in the result
Also, I'm getting a missing error for includeDesignations. Again, this seems like our interpretations of "parameters that affected the expansion" are mis-aligned. I interpret this as being the calculation of the matching codes, not the specific representation that gets returned (noting that displayLanguage is counted since it affects the computed display value)
Coding.display: "A representation of the meaning of the code in the system, following the rules of the system."
"following the rules of the system", not "following the rules of some system implementer".
Also, if a display is not appropriate, then get it fixed -- either at source (in HL7 / THO) or with the external party. If the external party won't play ball, then fix it in a shared supplement so everyone can benefit rather than lots of (potentially incompatible) fixes spread over many different IGs.
it sounds so easy when you say it like that
Sure. Except that's not what systems do today. They just load the codes into their databases and make the display names say what they want them to say. And they're not going to change that just because we might like them to.
If that's all they did I'd be less concerned. What they REALLY DO is allow people to change the display text on-the-fly to absolutely anything (and people do this), and the results sometimes bear zero resemblance to the code's meaning. This is why I say we're concerned about the clinical use case over the IG use case, and why I want the caller to explicitly request that an invalid display not return an error; then the onus is on the caller.
version is the main one, and it seems strange that it's not expected in the result
where is it missing? I just spent a while hunting for it, and yes, it was missing from the validate-code results, but I can't see where it's missing from the $expand results
Let's start with simple/simple-expand-all-response-valueSet.json -- it only has:
"parameter" : [{
"name" : "excludeNested",
"valueBoolean" : true
}],
.. and ..?
Where is the version of the CodeSystem that was used in the expansion?
that code system doesn't have a version, so there's no parameter saying what it is
These days I guess that should be called system-version? But it's a canonical, so I would expect http://hl7.org/fhir/test/CodeSystem/simple| as the value
really? I would not expect that
That says "I use a version-less instance of this code system", rather than just not saying anything.
so firstly, it's not system-version - that's something else, an instruction about the default version to use. version is the actual version used. Though I just spent 15 min verifying that for myself, and it could actually be documented
At least it's "not wrong"
+1 for documenting these :)
That says "I use a version-less instance of this code system", rather than just not saying anything
I'm not sure that it does. I just read the section on canonicals again, and at least we can say that this is not clear
I don't see another way to say it -- the trailing | might be optional, but is, I think, in the spirit of things?
I think that the IG publisher would blow up on this:
"parameter" : [{
"name" : "version",
"valueCanonical" : "http://hl7.org/fhir/test/CodeSystem/simple|"
}]
If you want a versionless canonical, you omit the '|'. I would expect (and have only ever seen) the '|' there if there's a trailing version.
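For reference, the underlying convention is that a canonical takes the form url|version, splitting on the first '|'. A sketch showing why the trailing bare '|' (the HAPI empty-string-version behaviour) is the odd case out:

```python
def split_canonical(canonical):
    """Split a FHIR canonical into (url, version); version is None when absent."""
    url, sep, version = canonical.partition("|")
    return url, (version if sep else None)

print(split_canonical("http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"))
# → ('http://hl7.org/fhir/test/CodeSystem/simple', '0.1.0')
print(split_canonical("http://hl7.org/fhir/test/CodeSystem/simple"))
# → ('http://hl7.org/fhir/test/CodeSystem/simple', None)
print(split_canonical("http://hl7.org/fhir/test/CodeSystem/simple|"))
# → ('http://hl7.org/fhir/test/CodeSystem/simple', '') -- the ambiguous empty version
```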
no, it wouldn't blow up, it just wouldn't make sense on the page, because the code makes the same assumption as Lloyd
Hmm, that looks like it might be HAPI behaviour -- I'm guessing if you set the version to "" rather than null.
Investigating...
Yep, that is the issue.
Would the IG publisher cope sensibly without the trailing |?
"parameter" : [{
"name" : "version",
"valueCanonical" : "http://hl7.org/fhir/test/CodeSystem/simple"
}]
it'll ignore that one. As it will ignore http://hl7.org/fhir/test/CodeSystem/simple| from the next release
if there's no version, there's nothing to say
And I'll work around HAPI to leave off the |
but you won't leave the parameter out?
what about in the response to $validate-code when there's no version on the code system?
that's why you should leave it out
I'll have to look & think deeper - if the ValueSet has two code systems but one has no version, then it could be misleading / confusing to have only one "version" reported? I think leaving it out means clients may have to work harder.
why would clients have to work harder?
Just looking now at extensions/expand-echo-bad-supplement-parameters.json -- we've used PROCESSING as the code rather than BUSINESS-RULE; seems a somewhat arbitrary distinction
It is but I don't mind changing
clients (that care) have to know that a missing version means a code system didn't have a version. And they have to scan the expansion to find all the code systems in scope (and this may not be a complete set if paging).
Additionally, what if the valueset references two "versions" of the same code system, and one is empty...hmm, not sure if that is possible with Ontoserver.
Re PROCESSING vs BUSINESS-RULE, ideally the test would allow either
what if the valueset references two "versions" of the same code system, and one is empty
You should go bang on that case
clients (that care) have to know that a missing version means a code-system didn't have a version
But they have to scan to decide that either way
Not if all the code systems are listed directly in expansion.parameters."version".
Another edge case - a code system is referenced in the compose, but no codes actually match - you'd never know it was in scope
Regarding setting the ValueSet.language to the value of $expand's displayLanguage parameter, will this not be misleading if only some of the codes have translations in the requested language?
sticking to version for now... you're really using it as more than a version - you're using it as a dependency list
I'm thinking that clients might be doing that, yes
well, if we're going to use it to report things that don't contain versions, then we should change its name. Or would you not consider that?
Regarding setting the ValueSet.language to the value of $expand's displayLanguage parameter, will this not be misleading if only some of the codes have translations in the requested language?
possibly, if that's what was going on, but it's not
well, the tests now have version as optional
though I think we should consider renaming it
did you want to talk about other parameters before we talk about language?
and going back, I sure don't understand this:
Also, I'm getting a missing error for includeDesignations. Again, this seems like our interpretations of "parameters that affected expansion" are mis-aligned. I interpret this as being the calculation of the matching codes, not the specific representation that gets returned (noting that displayLanguage is counted since it affects the computed display value)
what's it got to do with the calculation of matching codes?
https://github.com/hapifhir/org.hl7.fhir.core/pull/1246 - work to date, if you don't want to wait for some weird testing thing to be resolved
Ignore the includeDesignations thing - I'm just including it if a value was supplied.
Back on display validation, the example in the spec suggests that the appropriate response is to fail:
http://www.hl7.org/fhir/valueset-operation-validate-code.html#examples
Is there appetite for adding another mode, e.g. ALLOW_INVALID_DISPLAY ?
the example certainly does suggest failure is appropriate
As a status update, I think we're very close to passing, except for the errors relating to unexpected "version" values, which manifest like:
Group simple-cases
Test simple-expand-all: Fail
array properties count differs at .expansion.parameter
Expected :1
Actual :2
and also some spurious validation of the actual error message strings:
Test validation-simple-codeableconcept-bad-system: Fail
string property values differ at .parameter[0].resource.issue[0].details.text
and
Test validation-simple-codeableconcept-bad-version1: Fail
string property values differ at .parameter[0].resource.issue[0].details.text
I figured the question of the actual error messages would come up at some point
but good to hear, thanks
Is there appetite for adding another mode, e.g. ALLOW_INVALID_DISPLAY ?
I don't think I'd like to add another mode for this. Or at least, not this alone. I'm considering the ramifications of just saying that's an error, and then picking through the issues in the IG publisher and downgrading it to a warning if the issues are only about displays.
Either way, I'll be putting this question to the two communities (TI and IG editors) in New Orleans
I think we're very close to passing
Well, too soon :-)
Seems the test harness complains about Ontoserver including extensions.
It also doesn't account for the expansion.contains being flat when excludeNested is not true.
But I believe these are txTests issues, not Ontoserver issues
A new spec issue -- expansion.parameter.value[x] doesn't support canonical, only uri.
Which means the test responses that have an expansion.parameter like:
{
"name" : "version",
"valueCanonical" : "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
}],
are invalid.
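Since expansion.parameter.value[x] allows uri but not canonical, one mechanical fix for such test responses is to re-key the offending entries. A sketch (the function is illustrative, not part of the test tooling):

```python
def rekey_canonicals(parameters):
    """Replace valueCanonical with valueUri in expansion.parameter entries,
    since expansion.parameter.value[x] permits uri but not canonical."""
    fixed = []
    for p in parameters:
        if "valueCanonical" in p:
            p = dict(p)                             # copy, leave the input untouched
            p["valueUri"] = p.pop("valueCanonical")
        fixed.append(p)
    return fixed

params = [{"name": "version",
           "valueCanonical": "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"}]
print(rekey_canonicals(params))
# → [{'name': 'version', 'valueUri': 'http://hl7.org/fhir/test/CodeSystem/simple|0.1.0'}]
```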
yeah I discovered that last night. I'm midway through revising them for other reasons and then I'll make another commit
@Michael Lawley I committed fixed up tests.
with regard to error messages, can you share a copy of the different error messages with me? I'm going to set the tests up so that the messages have to contain particular words. (I think)
I'm going to set the tests up so that the messages have to contain particular words. (I think)
Um, ok.
The specified code 'code1x' is not known to belong to the specified code system 'http://hl7.org/fhir/test/CodeSystem/simple'
A version for a code system with URL http://hl7.org/fhir/test/CodeSystem/simple was not supplied and the system could not find its latest version.
A version for a code system with URL http://hl7.org/fhir/test/CodeSystem/simplex was not supplied and the system could not find its latest version.
None of the codes in the codeable concept were valid.
The provided code "#code1x" was not found in value set http://hl7.org/fhir/test/ValueSet/simple-all
The provided code "http://hl7.org/fhir/test/CodeSystem/en-multi#code1" exists in the ValueSet, but the display "Anzeige 1" is incorrect
The provided code "http://hl7.org/fhir/test/CodeSystem/simple#code2a" was not found in value set http://hl7.org/fhir/test/ValueSet/simple-filter-regex
Another test case error:
validation-simple-code-good-display: the ValueSet specifies a version for the code system (1.0.0), but the display value supplied in the request ("good-display") is that from version 1.2.0, AND the response says that version 1.2.0 was used in the validation.
I think that's fixed up now?
No - https://github.com/FHIR/fhir-test-cases/blob/master/tx/validation/simple-code-good-display-response-parameters.json still shows version 1.2.0, last updated 20 hrs ago
but what's the request?
duh. I forgot to push :sad:
and now the request has valueString not valueUri for the system :man_facepalming:
ah, that's an ongoing issue -- I just have local changes to work around :-)
I'll fix
ok pushed
Thanks! At least with my test harness, the main outstanding issue is the display validation issue.
Now looking at extensions-echo-enumerated: why are the top-level ValueSet.extension in the output expansion ValueSet? (Not just for this expansion, but all.) ValueSet.compose, ValueSet.date, and ValueSet.publisher should all be optional.
the display validation issue?
whether an invalid display causes result to be false
oh right. yes
Why are the top-level ValueSet.extension in the output expansion ValueSet?
Because they might matter, so the server should echo them
(Not just for this expansion, but all.) ValueSet.compose, ValueSet.date, and ValueSet.publisher should all be optional.
I guess. I don't think it matters to me? I'll check if I care
Why are the top-level ValueSet.extension in the output expansion ValueSet?
Because they might matter, so the server should echo them
That suggests a stronger link between specification and the expansion than I expect. This appears to be the key statement from 4.9.8 Value Set Expansion
A resource that represents a value set expansion includes the same identification details as the definition of the value set
What is the scope of "identification details"?
regarding ValueSet.compose: I have a parameter includeCompose for whether it should be returned or not, but I don't ever use it, and I wouldn't currently miss the compose
Is that not what includeDefinition is for?
Also, looking at the OperationOutcomes, why use .details.text rather than .diagnostics (given that there are no .details.coding values)?
dear me it is
diagnostics is for things like stack dumps etc. The details of the issue go in details.text
That suggests a stronger link between specification and the expansion than I expect. This appears to be the key statement from 4.9.8 Value Set Expansion
I didn't understand that
What is the scope of "identification details"?
url + version + identifiers, I think
OperationOutcome.issue.diagnostics
Comment: This may be a description of how a value is erroneous [...]
But happy to update - it's all new
Stronger link...
Why would an extension on a ValueSet definition be relevant to its expansion (as a general rule)?
it shouldn't be but it might be relevant to the usage of the expansion
hence why I echo it
Hmm, ok
Should that be a requirement here?
no, in fact, they are only included if includeDefinition is true.
pushed new tests. code for running the tests is in the gg-202305-more-tx-work2 branch of core
my local copy of tx-fhir-org still fails one of the tests... might have more work to do on the tester
open issues - text details, + the display validation question which is going to committee in New Orleans
So, turns out that it is HAPI's code that's populating the OperationOutcome and putting the text into diagnostics and not details.text
This is only in the case of things like code system (supplement) or value set not found/resolvable since that's a 404 response
this one definitely matters.
Yep, I'll have to take over from the default interceptor behaviour
Thanks @Grahame Grieve I have the new tests and the gg-202305-more-tx-work2 branch running locally.
A bunch of tests are failing because the expected expansion is hierarchical, but Ontoserver returns a flat expansion so there are errors like:
Group parameters
Test parameters-expand-all-hierarchy: Fail
array properties count differs at .expansion.contains
Expected :3
Actual :7
so why is Ontoserver returning a flat expansion? does it need a parameter?
Because it's allowed to, and unless you're returning "all codes", it's a hard problem to cut nodes out of a tree/graph
Let alone order them
but that one is all codes
All codes is very low on our priority list (infrequent use case) so we haven't done special-case work to preserve hierarchy.
It's also something that we've rarely been asked about.
it's certainly come up from the IG developers
and I'm surprised... structured expansions are a real thing for UI work
What we have heard is that some people want to have an explicit hierarchy on expansion that doesn't match the code system's hierarchy (eg where things are grouped differently from the normal isa hierarchy). In these cases the simplest approach we've found is to have them express the desired hierarchy in the stored expansion.
that might be, but as you see, there are reasons people want a hierarchy
But for IG developers, why do they care about the (on the wire) expansion; if the IG tooling needs to render the hierarchy, then it's in the CodeSystem already, or can be recovered from the ValueSet with $expand?property=parent.
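Rebuilding the hierarchy client-side from parent relationships might look like this (a hypothetical sketch, not any real tool's code; it assumes the caller has already extracted each code's parent from the returned properties, and the codes here are made up):

```python
def build_hierarchy(flat, parent_of):
    """Turn a flat list of expansion.contains entries into a nested one.
    flat: list of dicts, each with at least a 'code' key.
    parent_of: maps a code to its parent code (absent for roots)."""
    by_code = {entry["code"]: dict(entry) for entry in flat}
    roots = []
    for code, entry in by_code.items():
        parent = parent_of.get(code)
        if parent and parent in by_code:
            # attach the child under its parent's 'contains' list
            by_code[parent].setdefault("contains", []).append(entry)
        else:
            roots.append(entry)
    return roots

# Hypothetical flat expansion; code2a/code2b sit under code2
flat = [{"code": "code2"}, {"code": "code2a"}, {"code": "code2b"}, {"code": "code1"}]
tree = build_hierarchy(flat, {"code2a": "code2", "code2b": "code2"})
```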
the IG tooling defers to the tx service on this matter. It doesn't try to impose hierarchy on what the tx server chooses to return
@Michael Lawley we're going to do triage here on our open issues tomorrow. What I have in my mind:
have I missed anything?
Wrt "How a server reports that it doesn't do hierarchical expansions", a server may do this in some circumstances but not others. For example, Ontoserver (currently) does not do them when calculating the expansion itself, but may return them if it's (re-)using a stored expansion.
Other examples like this would be where, for example, a ValueSet includes the implicit all-codes ValueSet and the result is flat, but if it instead includes the CodeSystem directly then it is hierarchical
the IG tooling defers to the tx service on this matter. It doesn't try to impose hierarchy on what the tx server chooses to return
Then it's effectively choosing to be happy with what the tx server returns, and in that case anything that is in-spec with the general FHIR tx services spec should be acceptable.
Then it's effectively choosing to be happy with what the tx server returns, and in that case anything that is in-spec with the general FHIR tx services spec should be acceptable.
It is acceptable from the infrastructure's pov, but not acceptable from the consumer's pov
it might be acceptable to some consumers, the ones who choose to use Ontoserver, but I think that would mean many editors would not be ok with HL7 using Ontoserver
Other examples like this would be where, for example, a ValueSet includes the implicit all-codes ValueSet and the result is flat, but if it instead includes the CodeSystem directly then it is hierarchical
but that's how the test case we're talking about works
Other examples like this would be where, for example, a ValueSet includes the implicit all-codes ValueSet and the result is flat, but if it instead includes the CodeSystem directly then it is hierarchical
If $expand?url=vs1 returns a hierarchical expansion, then I define vs2 as "include vs1", should it not also return a hierarchical expansion?
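i.e., vs2 might be defined like this (an illustrative sketch; the vs1/vs2 URLs are placeholders):

```json
{
  "resourceType" : "ValueSet",
  "url" : "http://example.org/ValueSet/vs2",
  "status" : "active",
  "compose" : {
    "include" : [{
      "valueSet" : ["http://example.org/ValueSet/vs1"]
    }]
  }
}
```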
It is acceptable from the infrastructure's pov, but not acceptable from the consumer's pov
From my perspective, the consumer here is using a tool that could provide this behaviour itself by using the CodeSystem directly (or by reconstructing the hierarchy from parent relationships), but the tool chooses to hand it off to the tx server. Since this is a context-specific behaviour, why not have the tool that wants it, implement it?
Of course, if Ontoserver users call for this behaviour, then that's something we would strongly consider, but otherwise it seems like there's an undocumented set of use-cases where a specific behaviour is desired that we have to discover in a trial-and-error manner.
well, here you are, discovering it :grinning:
returning a hierarchical expansion when the value set includes all of a hierarchical code system is a required feature for HL7 IG publication
Probably because I'm grounded in HL7 culture, but for me that's totally obvious and hardly needs to be stated as a requirement, so there you go. However, Ontoserver doesn't need to do that to be used by the eco-system as an additional terminology server
I'm thinking about how to handle that in the tests - that's why I asked whether this is a feature that surfaces in the metadata anywhere. But it doesn't :sad:
other than parameters-expand-all-hierarchy, parameters-expand-enum-hierarchy, and parameters-expand-isa-hierarchy, does this affect any other tests?
on the subject of display error/warning, I'll be advocating for a parameter that defaults to leaving the tx server returning an error.
is it another mode flag? or something else?
I think another mode flag works. With the default being return error, and the flag saying don't error on displays, just warn.
I've just updated https://r4.ontoserver.csiro.au/fhir with the work-in-progress changes to align better with the requirements as expressed in the txTests
I believe that many of the reported failures are false negatives, and some are very hard to understand what's going on, e.g.:
Test validation-simple-code-good-version: ... Exception: Error from server: Error:org.hl7.fhir.r4.model.CodeableConcept@11b455e5
org.hl7.fhir.r4.utils.client.EFhirClientException: Error from server: Error:org.hl7.fhir.r4.model.CodeableConcept@11b455e5
at org.hl7.fhir.r4.utils.client.network.FhirRequestBuilder.unmarshalReference(FhirRequestBuilder.java:263)
at org.hl7.fhir.r4.utils.client.network.FhirRequestBuilder.execute(FhirRequestBuilder.java:230)
at org.hl7.fhir.r4.utils.client.network.Client.executeFhirRequest(Client.java:194)
at org.hl7.fhir.r4.utils.client.network.Client.issuePostRequest(Client.java:119)
at org.hl7.fhir.r4.utils.client.FHIRToolingClient.operateType(FHIRToolingClient.java:279)
at org.hl7.fhir.convertors.txClient.TerminologyClientR4.validateVS(TerminologyClientR4.java:137)
at org.hl7.fhir.validation.special.TxTester.validate(TxTester.java:252)
at org.hl7.fhir.validation.special.TxTester.runTest(TxTester.java:191)
at org.hl7.fhir.validation.special.TxTester.runSuite(TxTester.java:163)
at org.hl7.fhir.validation.special.TxTester.execute(TxTester.java:95)
at org.hl7.fhir.validation.ValidatorCli.parseTestParamsAndExecute(ValidatorCli.java:227)
at org.hl7.fhir.validation.ValidatorCli.main(ValidatorCli.java:148)
I'll investigate
it's sure not a useful error message
I noticed also that the test fixtures are not automatically created?
Also language/codesystem-de-multi.json has elements like title:en, which fails when I tried to load it in (using the 5->4 converter in HAPI)
oh. right
you can't use those directly, no
I forgot - I was playing around with that format and left it in
in the case of that test, the error should be
Error from server: Error:[0a8c6743-42a8-43fe-bca5-1138aa91595d]: Could not find value set http://hl7.org/fhir/test/ValueSet/version-all-1 and version null. If this is an implicit value set please make sure the url is correct. Implicit values sets for different code systems are specified in https://www.hl7.org/fhir/terminologies-systems.html.
I noticed also that the test fixtures are not automatically created?
I'm not sure what that means
All the test code systems and value sets identified in test-cases.json are not automatically loaded into Ontoserver when I run the txTests thing. Instead, I needed to run my own loader
no, they're passed in a tx-resource parameter with each request
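For instance, a $validate-code request carrying a tx-resource might look something like this (an illustrative sketch; the CodeSystem content here is made up from the test resource names used elsewhere in these tests):

```json
{
  "resourceType" : "Parameters",
  "parameter" : [{
    "name" : "url",
    "valueUri" : "http://hl7.org/fhir/test/ValueSet/simple-all"
  },{
    "name" : "code",
    "valueCode" : "code1"
  },{
    "name" : "system",
    "valueUri" : "http://hl7.org/fhir/test/CodeSystem/simple"
  },{
    "name" : "tx-resource",
    "resource" : {
      "resourceType" : "CodeSystem",
      "url" : "http://hl7.org/fhir/test/CodeSystem/simple",
      "status" : "active",
      "content" : "complete",
      "concept" : [{ "code" : "code1", "display" : "Display 1" }]
    }
  }]
}
```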
I didn't notice this until just now, running against the new r4.ontoserver deployment since previously I was testing against a local server that I'd already loaded things onto
Aha! Another magic parameter -- is support for that part of the test?
this is already known. You and I discussed it in the past. see FHIR-33944. It's very definitely required
The test cases do it this way since support is required to support the IG publisher
https://github.com/hapifhir/org.hl7.fhir.core/pull/1255 for the execution problem
Yes, I recall the proposal.
The test cases do it this way since support is required to support the IG publisher
that's effectively what I was asking.
Does this also extend to FHIR-33946 and the cache-id parameter?
that one is optional - the client looks in the capability statement to see if cache-id is stated to be supported before deciding that the server is capable of doing that
though the test cases don't try that
I'm going to have to put some considered thought into how we support tx-resource.
Non-exhaustive list of considerations:
None of these are a problem for us with ValueSet resources (we already support contained ValueSets), but they are for CodeSystems.
for me, those are not a thing - they are never written. You probably can't avoid that. But what's 'name clashes' about?
What happens when the resource passed via tx-resource has the same URL as one that is already on the server? Does it shadow it? It may have an older version than the one on the server, and the reference from the request may not be version-specific; should the older version supplied via tx-resource be preferred over the newer one?
here's what I drafted about that:
One or more additional resources that are referred to from the value set provided with the $expand or $validate-code invocation. These may be additional value sets or code systems that the client believes will or may be necessary to perform the operation. Resources provided in this fashion are used preferentially to those known to the system, though servers may return an error if these resources are already known to the server (by URL and version) but differ from that information on the server
@Michael Lawley I updated the test cases for the new mode parameter
Thanks. I note that it is still complaining about extension content (Ontoserver includes some of its own extensions). I would have expected additional extension content to be generally ignored?
which extensions?
Michael Lawley said:
Coding.display A representation of the meaning of the code in the system, following the rules of the system.
"following the rules of the system", not "following the rules of some system implementer".
Also, if a display is not appropriate, then get it fixed -- either at source (in HL7 / THO) or with the external party. If the external party won't play ball, then fix it in a shared supplement so everyone can benefit rather than lots of (potentially incompatible) fixes spread over many different IGs.
Remembering that things get done according to the path of the least resistance, I see very little instruction and zero examples of using supplements in http://hl7.org/fhir/valueset.html - so chances of them being used for this purpose are very slim. Any changes in this area must offer a path of less or at most equal resistance compared to trimming the display text to what you mean.
well, we can provide examples, that's for sure.
Yep, at the same time, there is dragon text on the supplements:
The impact of Code System supplements on value set expansion - and therefore value set validation - is subject to ongoing experimentation and implementation testing, and further clarification and additional rules might be proposed in future versions of this specification.
That would need to go away as well to get confidence in using them
Otherwise it's hard to say 'this is what you shall use' when it's an experimental thing.
we're coming out of the experimentation phase :grinning:
and talking about the additional rules
Michael Lawley said:
If that's all they did I'd be less concerned. What they REALLY DO is allow people to change the display text on-the-fly to absolutely anything (and people do this), and the results sometimes bear zero resemblance to the code's meaning. This is why I say we're concerned about the clinical use case over the IG use case, and why I want the caller to explicitly request that an invalid display not return an error; then the onus is on the caller.
I don't see how this will improve the situation. It would just become an almost mandatory thing you do "just because the spec requires it" and it wouldn't carry the intended meaning.
Good use of supplements would, that way the IG can be explicit about the display codes it is tweaking to better fit the purpose. I'd be happy to do that in my IGs!
@Michael Lawley I finally got to a previously reported issue:
However, I'm trying to use tx.fhir.org/r4 as a reference point but I can't get it to behave.
For example http://tx.fhir.org/r4/ValueSet/$validate-code?system=http://snomed.info/sct&code=22298006&url=http://snomed.info/sct?fhir_vs=isa/118672003 gives a result=true even though the code is not in the valueset. In fact the url parameter seems to be totally ignored?
Indeed. It's an issue in the parser because there are two = in the parameter - it's splitting on the second, not the first
it works as expected if you escape the second =
I believe the correct strategy is to take the query part (everything from the 1st ?) and split on &, then split each of these on the first = only
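That strategy can be sketched in a few lines (a hypothetical helper, not the server's actual parser):

```python
def parse_query(url: str) -> dict:
    """Parse a URL's query string: take everything after the first '?',
    split on '&', then split each pair on the FIRST '=' only, so an
    unescaped '=' inside a value is preserved."""
    query = url.split('?', 1)[1] if '?' in url else ''
    params = {}
    for pair in query.split('&'):
        if pair:
            key, _, value = pair.partition('=')  # splits on the first '=' only
            params[key] = value
    return params

url = ("http://tx.fhir.org/r4/ValueSet/$validate-code"
       "?system=http://snomed.info/sct&code=22298006"
       "&url=http://snomed.info/sct?fhir_vs=isa/118672003")
# The url parameter keeps its embedded '?fhir_vs=...' intact
print(parse_query(url)["url"])  # http://snomed.info/sct?fhir_vs=isa/118672003
```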
I didn't say I was happy with what it's doing
ah, not your parser code then?
it is. it's the oldest code I have. I think I haven't touched it since 1997 or so
PR time?
maybe. The URL itself is invalid so the behaviour isn't wrong, but I don't like it much
Why is that URL invalid?
an unescaped = in it. I think that's not valid according to the http spec. But I upgraded the server anyway, and it should be OK now
according to https://www.rfc-editor.org/info/rfc3986 it is valid, and '=' is considered to be a sub-delimiter.
that doesn't really relate to its use in key/value pairs
I don't see where an unescaped = is illegal?
@Michael Lawley a new issue has reared its ugly head.
consider the situation where a value set refers to an unknown code system, and just includes all of it, and a client asks to validate the code
e.g.
{
"resourceType" : "ValueSet",
"id" : "unknown-system",
"url" : "http://hl7.org/fhir/test/ValueSet/unknown-system",
"version" : "5.0.0",
"name" : "UnknownSystem",
"title" : "Unknown System",
"status" : "active",
"experimental" : true,
"date" : "2023-04-01",
"publisher" : "FHIR Project",
"compose" : {
"include" : [{
"system" : "http://hl7.org/fhir/test/CodeSystem/simpleX"
}]
}
}
and
{
"resourceType" : "Parameters",
"parameter" : [{
"name" : "url",
"valueUri" : "http://hl7.org/fhir/test/ValueSet/unknown-system"
},{
"name" : "code",
"valueCode" : "code1"
},{
"name" : "system",
"valueUri" : "http://hl7.org/fhir/test/CodeSystem/simpleX"
}]
}
This is a pretty common situation in the IG world, and the IG publisher considers this a warning not an error.
but it's very clearly an error validating
{
"resourceType" : "Parameters",
"parameter" : [{
"name" : "issues",
"resource" : {
"resourceType" : "OperationOutcome",
"issue" : [{
"severity" : "error",
"code" : "not-found",
"details" : {
"text" : "The CodeSystem http://hl7.org/fhir/test/CodeSystem/simpleX is unknown"
},
"location" : ["code.system"]
}]
}
},
{
"name" : "message",
"valueString" : "The CodeSystem http://hl7.org/fhir/test/CodeSystem/simpleX is unknown"
},
{
"name" : "result",
"valueBoolean" : false
}]
}
... only... the validator decides that this is one of those cases because there's a parameter
"cause" : "not-found"
where cause is taken from OperationOutcome.issue.type.
but I removed cause from the returned parameters, and now I have no way to know that the valueset validation failed because of an unknown code system
the case above says that there is an unknown code system, but it doesn't explicitly say that the result is false because of the unknown code system.
This is a "fail to validate" rather than a "validate = false" situation -- I'd expect a 4XX series error from the Tx and an OperationOutcome about the CodeSystem not found.
Will that work?
I'm pretty sure Ontoserver does something like this
I don't think that's right - other issues can still be detected and returned
So I don't follow why you have removed cause
?
it wasn't a standard parameter. And it was pretty loose anyway
it's kind of weird to just put 'cause : not found' and assume everyone knows that means validation failed because the code system needed to determine value set membership wasn't found
I need a better way to say it...
you also have location: ["code.system"] and the details.text
I do have that, but I'm going to be second guessing the server to decide whether that's the cause, or an incidental finding
Does this come down to identifying which one (or more?) of the issues was the trigger for result = false?
yes that's one way to look at it
Can it be as simple as "all the issues with severity = error"?
no I don't think it can. There's plenty of scope for issues with severity = error whether or not the code is in the value set
Doesn't that depend on how you interpret things? For example, if validating a codeableConcept, then you validate each contained Coding. If they all fail, then each contributes an issue with severity of error, but if any passes, then the issues from the others would just be warning?
This seems to be in line with
Indicates how relevant the issue is to the overall success of the action
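The interpretation above could be sketched like this (hypothetical logic, not any server's actual behaviour; `in_valueset` stands in for real value set membership testing):

```python
def validate_codeable_concept(codings, in_valueset):
    """Check each Coding of a CodeableConcept against the value set.
    If at least one Coding is in the value set, the overall result is
    true and the other Codings' failures are reported as warnings;
    if none match, each failure is an error."""
    results = [in_valueset(c) for c in codings]
    ok = any(results)
    issues = []
    for coding, valid in zip(codings, results):
        if not valid:
            issues.append({
                "severity": "warning" if ok else "error",
                "code": "code-invalid",
                "details": {"text": f"Code {coding} is not in the value set"},
            })
    return ok, issues

# One valid Coding: overall result is true, the bad code is only a warning
ok, issues = validate_codeable_concept(["code1", "codeX"], lambda c: c == "code1")
```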
I certainly don't think levels work like that. If a system is wrong, or a code is invalid, then that's an error
at the local level, but not at the level of the overall operation
issue.code has this comment: "For example, code-invalid might be a warning or error, depending on the context"
really?
really
Comments:
Code values should align with the severity. For example, a code of forbidden generally wouldn't make sense with a severity of information or warning. Similarly, a code of informational would generally not make sense with a severity of fatal or error. However, there are no strict rules about what severities must be used with which codes. For example, code-invalid might be a warning or error, depending on the context
(my emphasis)
oh I believed you. And I probably did write that. But I've noodled on it for a couple of hours, and in the context of the validator, invalid codes are invalid codes, whether they're in the scope of the value set or not.
and on further noodling, I think this is OK to be an extension for tx.fhir.org - the notion of 'it's not an error because the code system is unknown' is kind of centric to the base tx service, and not to additional ones. So I'm going with a parameter name of x-caused-by-unknown-system for the link, and the tests won't require that
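So a failing response for the unknown-system case above might then look something like this (illustrative only; the exact value type for the new parameter is a guess):

```json
{
  "resourceType" : "Parameters",
  "parameter" : [{
    "name" : "result",
    "valueBoolean" : false
  },{
    "name" : "message",
    "valueString" : "The CodeSystem http://hl7.org/fhir/test/CodeSystem/simpleX is unknown"
  },{
    "name" : "x-caused-by-unknown-system",
    "valueUri" : "http://hl7.org/fhir/test/CodeSystem/simpleX"
  }]
}
```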
also @Jonathan Payne
@Grahame Grieve Looks nice... :+1:
other/codesystem-dual-filter.json is invalid -- it has a duplicate code: AA
Also, HAPI is complaining about language/codesystem-de-multi.json:
HAPI-0450: Failed to parse request body as JSON resource. Error was: HAPI-1825: Unknown element 'title:en' found during parse
hmm
hapi probably doesn't support JSON 5 either. can you try commenting that line out?
So, the testing/comparison aspect is complaining about / rejecting extensions that Ontoserver includes that are not part of the expected result.
e.g.,
Group simple-cases
Test simple-expand-all: Fail
properties differ at .expansion.contains[1]: missing property extension
Test simple-expand-enum: Fail
properties differ at .expansion.contains[1]: missing property extension
Test simple-expand-isa: Fail
properties differ at .expansion.contains[0]: missing property extension
Test simple-expand-prop: Fail
properties differ at .expansion.contains[0]: missing property extension
Test simple-expand-regex: Fail
properties differ at .expansion.contains[1]: missing property extension
what extensions are you including?
One is http://ontoserver.csiro.au/profiles/expansion
what is it?
Why does that matter? It's an extension, if you don't understand it you can (should) ignore it.
(It's actually legacy from DSTU2_1 to indicate inactive status)
it doesn't matter for the tests, no, but I'm just interested for the sake of being nosy
:laughing:
I'll think about the testing part
@Michael Lawley https://github.com/hapifhir/org.hl7.fhir.core/pull/1303
I have rewritten these two pages:
I have removed the section on registration - I'm rewriting that after talking to @Michael Lawley, more on that soon
I reconciled the two pages and changed the way the web source reference works
@Grahame Grieve Hi, I am running the fhir tx testsuite against Snowstorm. For some tests, there are complaints about a missing "id" property, and the test fails. Turns out that the resource that is returned contains an "id" whereas the "reference" resource does not contain an "id". Is this a real "fail", or is the "id" property supposed to be optional?
Expected:
{
"$optional-properties$" : ["date", "publisher", "compose"],
"resourceType" : "ValueSet",
"url" : "http://hl7.org/fhir/test/ValueSet/simple-all",
"version" : "5.0.0",
"name" : "SimpleValueSetAll",
"title" : "Simple ValueSet All",
"status" : "active",
"experimental" : false,
"date" : "2023-04-01",
"publisher" : "FHIR Project",
"compose" : {
"include" : [{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple"
}]
},
"expansion" : {
"identifier" : "$uuid$",
"timestamp" : "$instant$",
"total" : 7,
"parameter" : [{
"name" : "excludeNested",
"valueBoolean" : true
},
{
"name" : "used-codesystem",
"valueUri" : "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
},
{
"$optional$" : true,
"name" : "version",
"valueUri" : "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
}],
"contains" : [{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code1",
"display" : "Display 1"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"abstract" : true,
"inactive" : true,
"code" : "code2",
"display" : "Display 2"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code2a",
"display" : "Display 2a"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code2aI",
"display" : "Display 2aI"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code2aII",
"display" : "Display 2aII"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code2b",
"display" : "Display 2b"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code3",
"display" : "Display 3"
}]
}
}
Actual:
{
"resourceType": "ValueSet",
"id": "simple-all",
"url": "http://hl7.org/fhir/test/ValueSet/simple-all",
"version": "5.0.0",
"name": "SimpleValueSetAll",
"title": "Simple ValueSet All",
"status": "active",
"experimental": false,
"publisher": "FHIR Project",
"expansion": {
"id": "f4b71bf6-3ef4-4c30-a4ea-ab3f4ae3dad6",
"timestamp": "2024-10-09T15:08:23+02:00",
"total": 7,
"offset": 0,
"parameter": [
{
"name": "version",
"valueUri": "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
},
{
"name": "displayLanguage",
"valueString": "en"
}
],
"contains": [
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code1",
"display": "Display 1"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2",
"display": "Display 2"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2a",
"display": "Display 2a"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2aI",
"display": "Display 2aI"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2aII",
"display": "Display 2aII"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2b",
"display": "Display 2b"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code3",
"display": "Display 3"
}
]
}
}
it's not an error to return a populated id element. It doesn't even have to be the same id. Probably it shouldn't be, but that's a style question
which means that the test is wrong, really
I updated the tests to allow id, but you'll have to wait for the release of a new validator to use them, unfortunately
about 24 hours
As you may know from other messages, I am investigating the options to make Snowstorm fhir tx testsuite compliant. As our reference server for terminology in Belgium is an Ontoserver (now 6.20.1 since yesterday), and I want the Snowstorm behaviour to be as similar as possible to the Ontoserver behaviour, I also ran the fhir tx testsuite against Ontoserver. I got a result of 16% fails.
I know from #Announcements > Using Ontoserver with Validator / IG Publisher that Ontoserver is considered compatible. How should I interpret the 16% failed tests? Is any software allowed to fail 16% tests? Any 16%, or only that specific 16% of the tests? What is also strange, is that the highest amount of failures is in the "simple-cases" test group. Is the "simple-cases" test group the test of _basic_ behaviour, and are these tests of a greater weight? What does this say about the interplay between IGPublisher and the tested terminology server?
I don't know about 16% failure. What version are you running? I test the public ontoserver everyday and get 100% pass rate
the highest amount of failures is in the "simple-cases" test group
hmm. maybe you need to set a parameter for flat rather than nested? Ontoserver doesn't do nested expansions, and that's a setting you pass to the test cases
try -mode flat
New Publication: STU 1 of the FHIR Shorthand implementation guide: http://hl7.org/fhir/uv/shorthand/STU1
New Publication: STU 1 of the FHIR Da Vinci Unsolicited Notifications Implementation Guide: http://hl7.org/fhir/us/davinci-alerts/STU1
New Publication: STU 1.1 of the C-CDA on FHIR Implementation Guide: http://hl7.org/fhir/us/ccda/STU1.1
New Publication: STU 1 of the Vital Records Mortality and Morbidity Reporting FHIR Implementation Guide: http://hl7.org/fhir/us/vrdr/STU1/index.html
New Publication: STU1 of the CARIN Consumer Directed Payer Data Exchange (CARIN IG for Blue Button®) FHIR Implementation Guide: http://hl7.org/fhir/us/carin-bb/STU1
New Publication: STU1 of the HL7 Payer Data Exchange (PDex) Payer Network, Release 1 - US Realm Implementation Guide: hl7.org/fhir/us/davinci-pdex-plan-net/STU1
New Publication: STU1 of the HL7 Prior-Authorization Support (PAS), Release 1- US Realm FHIR® Implementation Guide: http://hl7.org/fhir/us/davinci-pas/STU1
New Publication: STU1 of the HL7 Payer Data Exchange (PDex), Release 1 - US Realm FHIR® Implementation Guide: http://hl7.org/fhir/us/davinci-pdex/STU1
New Publication: STU1 of the HL7 Da Vinci - Coverage Requirements Discovery (CRD), Release 1- US Realm FHIR® Implementation Guide: http://hl7.org/fhir/us/davinci-crd/STU1
New Publication: STU1 of the HL7 FHIR® Implementation Guide: Payer Coverage Decision Exchange, R1 - US Realm: http://hl7.org/fhir/us/davinci-pcde/STU1
New Publication: STU1 of the FHIR® Implementation Guide: Documentation Templates and Payer Rules (DTR), Release 1- US Realm: http://hl7.org/fhir/us/davinci-dtr/STU1/index.html
New Publication: STU1 of the HL7 FHIR® Implementation Guide: Risk Based Contract Member Identification, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-atr/STU1
New Publication: STU1 of the HL7 FHIR® Implementation Guide: Pharmacist Care Plan Document, Release 1 - US Realm: http://hl7.org/fhir/us/phcp/STU1
New Publication: STU1 of the HL7 FHIR® Implementation Guide: Clinical Guidelines, Release 1: http://hl7.org/fhir/uv/cpg/STU1
Newly Posted: FHIR R4B Ballot #1: http://hl7.org/fhir/2021Mar
New Publication: Normative Release 1 of the HL7 Cross-Paradigm Specification: Clinical Quality Language (CQL), Release 1: http://cql.hl7.org/N1
New Publication: STU Release 1 of the HL7/NCPDP FHIR® Implementation Guide: Specialty Medication Enrollment, Release 1 - US Realm: http://hl7.org/fhir/us/specialty-rx/STU1.
New Publication: STU Release 3 of HL7 FHIR® Implementation Guide: Data Exchange for Quality Measures STU3 for FHIR R4: http://hl7.org/fhir/us/davinci-deqm/STU3
Lynn Laakso said:
New Publication: STU Release 3 of HL7 FHIR® Implementation Guide: Data Exchange for Quality Measures STU3 for FHIR R4: http://hl7.org/fhir/us/cqfmeasures/STU3
New Publication: STU Release 1 of the HL7 Immunization Decision Support Forecast (ImmDS) Implementation Guide: http://hl7.org/fhir/us/immds/STU1
New Publication: STU Release 4 of the HL7 FHIR® US Core Implementation Guide STU 4 Release 4.0.0: http://hl7.org/fhir/us/core/STU4
File not found ;-)
well that's not supposed to happen
it'll work now
The change log appears to be empty? http://hl7.org/fhir/us/core/history.html
Grahame has to fix that, it'll be 12 hours
fixed
New Publication: STU Update Release 1.1 of HL7 FHIR® Implementation Guide: Consumer Directed Payer Data Exchange (CARIN IG for Blue Button®), Release 1 - US Realm: http://www.hl7.org/fhir/us/carin-bb/STU1.1
I don't know as it matters but the directory of published versions doesn't show this version. http://hl7.org/fhir/us/carin-bb/history.html
it does for me. You might have a caching problem
New Publication: STU Update Release 1.1 of HL7 FHIR® Profile: Occupational Data for Health (ODH), Release 1 - US Realm: http://hl7.org/fhir/us/odh/STU1.1
New Publication: STU Release 1 of HL7 FHIR® Implementation Guide: Vital Records Common FHIR Profile Library, Release 1: http://hl7.org/fhir/us/vr-common-library/STU1
New Publication: STU Release 1 of HL7 FHIR® Implementation Guide: NHSN Inpatient Medication COVID-19 Administration Reports, Release 1 - US Realm: http://hl7.org/fhir/us/nhsn-med-admin/STU1
New Publication: STU Release 1 of HL7 FHIR® Implementation Guide: NHSN Adverse Drug Event - Hypoglycemia Report, Release 1 - US Realm: http://hl7.org/fhir/us/nhsn-ade/STU1
New Publication: STU Update (STU1.1) of HL7 FHIR® Implementation Guide: DaVinci Payer Data Exchange US Drug Formulary, Release 1 - US Realm: http://hl7.org/fhir/us/Davinci-drug-formulary/STU1.1
New Publication: STU Publication of HL7 FHIR® Implementation Guide: Vital Records Birth and Fetal Death Reporting, Release 1 - US Realm: http://hl7.org/fhir/us/bfdr/STU1
New Publication: STU Publication of HL7 FHIR® Implementation Guide: Dental Data Exchange, Release 1 - US Realm: http://hl7.org/fhir/us/dental-data-exchange/STU1
New Publication: STU Publication of HL7 FHIR® Implementation Guide: Post-Acute Care Cognitive Status, Release 1 - US Realm: http://hl7.org/fhir/us/pacio-cs/STU1
New Publication: STU Publication of HL7 FHIR® Implementation Guide: Post-Acute Care Functional Status, Release 1 - US Realm: http://hl7.org/fhir/us/pacio-fs/STU1
New Publication: Release 4.0.1 of the CQF FHIR® Implementation Guide: Clinical Quality Framework Common FHIR Assets: http://fhir.org/guides/cqf/common/4.0.1/. (note: this is not a guide published through the HL7 consensus process, but according to the FHIR Community Process, so it's posted on fhir.org)
STU Update Publication of HL7 FHIR® Implementation Guide: Prior-Authorization Support (PAS), Release 1 - US Realm: http://hl7.org/fhir/us/davinci-pas/STU1.1
STU Publication of HL7 FHIR Implementation Guide: minimal Common Oncology Data Elements (mCODE) Release 1 STU 2 – US Realm: http://hl7.org/fhir/us/mcode/STU2
STU Publication of HL7 FHIR® Implementation Guide: Electronic Case Reporting (eCR), Release 2: http://hl7.org/fhir/us/ecr/STU2
STU Update Publication of HL7 FHIR® Profile: Quality, Release 1 STU 4.1- US Realm: http://hl7.org/fhir/us/qicore/STU4.1
STU Publication of HL7 FHIR Implementation Guide: Profiles for ICSR Transfusion and Vaccination Adverse Event Detection and Reporting, Release 1 - US Realm: http://www.hl7.org/fhir/us/icsr-ae-reporting/STU1
Normative Publication of HL7 FHIR® Implementation Guide: FHIR Shorthand, Release 2: http://hl7.org/fhir/uv/shorthand/N1
STU Publication of HL7 FHIR® Structured Data Capture (SDC) Implementation Guide, Release 3: http://hl7.org/fhir/uv/sdc/STU3
STU Publication of HL7 FHIR® Implementation Guide: Clinical Data Exchange (CDex), Release 1- US Realm: http://hl7.org/fhir/us/davinci-cdex/STU1
STU Publication of HL7 FHIR® Implementation Guide: Health Record Exchange (HRex) Framework, Release 1- US Realm: http://hl7.org/fhir/us/davinci-hrex/STU1
STU Errata Publication of HL7 FHIR® Profile: Quality, Release 1 - US Realm STU 4.1.1: http://hl7.org/fhir/us/qicore/STU4.1.1
@David Pyke and @John Moehrke are pleased to announce the April 1st release of the HotBeverage #FHIR Implementation Guide. Based on IETF RFC 2324, it allows for the fulfillment of a device request for an artfully brewed caffeinated beverage. http://fhir.org/guides/acme/HotBeverage/1.4.2022
STU Update Publication for HL7 FHIR® Implementation Guide: Payer Data Exchange (PDex) Payer Network, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-pdex-plan-net/STU1.1
STU Publication of HL7 FHIR® Implementation Guide: Quality Measures, Release 1 STU 3 - US Realm: http://hl7.org/fhir/us/cqf-measures/STU3
Informative Publication of HL7 EHRS-FM Release 2.1 – Pediatric Care Health IT Functional Profile Release 1 – US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=593
STU Publication of HL7 FHIR® IG: SMART Web Messaging Implementation Guide, Release 1: http://hl7.org/fhir/uv/smart-web-messaging/STU1
STU Publication of HL7 FHIR® Implementation Guide: Clinical Genomics, STU 2: http://hl7.org/fhir/uv/genomics-reporting/STU2
STU Publication of HL7 Domain Analysis Model: Vital Records, Release 5- US Realm: see http://www.hl7.org/implement/standards/product_brief.cfm?product_id=466
STU Publication of HL7 FHIR® Implementation Guide: Personal Health Device (PHD), Release 1: http://hl7.org/fhir/uv/phd/STU1
STU Publication of HL7 CDA® R2 IG: C-CDA Templates for Clinical Notes STU Companion Guide, Release 3 – US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=447
STU Publication of HL7 FHIR® US Core Implementation Guide STU5 Release 5.0.0: http://hl7.org/fhir/us/core/STU5
STU Update Publication of HL7 CDA® R2 Implementation Guide: National Health Care Surveys (NHCS), Release 1, STU Release 2.1 and STU Release 3.1 – US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=385
STU Publication of HL7 FHIR® Implementation Guide: Risk Adjustment, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-ra/STU1
Informative Guidance Publication of HL7 Short Term Solution - V2: SOGI Data Exchange Profile: http://www.hl7.org/permalink/?SOGIGuidance
Errata Publication of CDA® R2.1 (HL7 Clinical Document Architecture, Release 2.1): https://www.hl7.org/documentcenter/private/standards/cda/2019CDAR2_1_2022JUNerrata.zip
Errata Publication of US Core STU5 Release 5.0.1: http://hl7.org/fhir/us/core/STU5.0.1
STU Publication of HL7 FHIR® Implementation Guide: Digital Insurance Card, Release 1 - US Realm: http://hl7.org/fhir/us/insurance-card/STU1
STU Publication of HL7 FHIR® Implementation Guide: Subscription R5 Backport, Release 1: http://hl7.org/fhir/uv/subscriptions-backport/STU1
STU Update Publication of HL7 CDA® R2 Implementation Guide: Reportability Response, Release 1 STU Release 1.1- US Realm: http://www.hl7.org/implement/standards/product_brief.cfm?product_id=470
STU Update Publication Request of HL7 CDA® R2 Implementation Guide: Public Health Case Report - the Electronic Initial Case Report (eICR) Release 2, STU Release 3.1 - US Realm: http://www.hl7.org/implement/standards/product_brief.cfm?product_id=436
Informative Publication of HL7 FHIR® Implementation Guide: COVID-19 FHIR Clinical Profile Library, Release 1 - US Realm: http://hl7.org/fhir/us/covid19library/informative1
STU Publication of HL7 CDA® R2 Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports, Release 4, STU 2 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=570
STU Publication of HL7 FHIR® Implementation Guide: Clinical Data Exchange (CDex), Release 1 STU1.1.0 - US Realm: http://hl7.org/fhir/us/davinci-cdex/STU1.1
STU Update Publication of HL7 FHIR Profile: Occupational Data for Health (ODH), Release 1.2: http://hl7.org/fhir/us/odh/STU1.2
STU Publication of HL7 FHIR® Implementation Guide: Payer Data Exchange (PDex) Drug Formulary, Release 1 STU2 - US Realm: http://hl7.org/fhir/us/davinci-drug-formulary/STU2
STU Publication of HL7 FHIR® Implementation Guide: Vital Records Death Reporting (VRDR), Release 1 STU2 - US Realm: http://hl7.org/fhir/us/vrdr/STU2
STU Update Publication of HL7 FHIR® Implementation Guide: Electronic Case Reporting (eCR), Release 2.1 - US Realm: http://hl7.org/fhir/us/ecr/STU2.1
R5 Ballot is published. http://hl7.org/fhir/2022Sep/
STU Publication of HL7 FHIR® Implementation Guide: Vital Signs, Release 1- US Realm: http://hl7.org/fhir/us/vitals/STU1/
STU Publication of HL7 Cross Paradigm Specification: CDS Hooks, Release 1: https://cds-hooks.hl7.org/2.0/
New release of HL7 Terminology (THO) v4.0.0: https://terminology.hl7.org/4.0.0. (Thanks @Marc Duteau)
STU Publication of HL7 FHIR® Implementation Guide: Hybrid/Intermediary Exchange, Release 1- US Realm: http://www.hl7.org/fhir/us/exchange-routing/STU1
Errata publication of C-CDA (HL7 CDA® R2 Implementation Guide: Consolidated CDA Templates for Clinical Notes - US Realm): https://www.hl7.org/implement/standards/product_brief.cfm?product_id=492
STU Publication of HL7 FHIR® Implementation Guide: Security for Registration, Authentication, and Authorization, Release 1- US Realm: http://hl7.org/fhir/us/udap-security/STU1/
STU Publication of HL7 FHIR® Implementation Guide: FHIR for FAIR, Release 1: http://hl7.org/fhir/uv/fhir-for-fair/STU1
STU Publication of HL7 FHIR® Implementation Guide: PACIO Re-assessment Timepoints, Release 1 - US Realm: http://hl7.org/fhir/us/pacio-rt/STU1
STU Publication of HL7 FHIR® Implementation Guide: Medicolegal Death Investigation (MDI), Release 1 - US Realm: http://hl7.org/fhir/us/mdi/STU1
STU Publication of HL7 CDA® R2 Implementation Guide: ePOLST: Portable Medical Orders About Resuscitation and Initial Treatment, Release 1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=600
STU Publication of HL7 Version 2.5.1 Implementation Guide: Laboratory Results Interface, Release 1 STU Release 4 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=279
STU Publication of HL7 Version 2.5.1 Implementation Guide: Laboratory Orders (LOI) from EHR, Release 1, STU Release 4 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=152
STU Publication of HL7 Version 2 Implementation Guide: Laboratory Value Set Companion Guide, Release 2- US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=413
New release of HL7 Terminology (THO) v5.0.0: https://terminology.hl7.org/5.0.0
This also means that the THO freeze has been lifted.
You can view the UTG tickets that were implemented in this release using the following dashboard and selecting 5.0.0 in the first pie chart. https://jira.hl7.org/secure/Dashboard.jspa?selectPageId=16115
Informative Publication of HL7 V2 Implementation Guide Quality Criteria, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=608
STU Publication of HL7 FHIR® Implementation Guide: SDOH Clinical Care, Release 2.0 - US Realm: http://hl7.org/fhir/us/sdoh-clinicalcare/STU2
STU Update Publication of HL7 FHIR® Implementation Guide: Data Exchange for Quality Measures, STU3.1 for FHIR R4 - US Realm: http://hl7.org/fhir/us/davinci-deqm/STU3.1/
STU Update Publication of HL7 FHIR® Implementation Guide: International Patient Summary, Release 1.1: http://hl7.org/fhir/uv/ips/STU1.1
STU Publication of HL7 FHIR® Implementation Guide: Consumer-Directed Payer Exchange (CARIN IG for Blue Button®), Release 1 STU2: http://hl7.org/fhir/us/carin-bb/STU2
STU Publication Request for HL7 Domain Analysis Model: Nutrition Care, Release 3 STU 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=609
Errata Publication of HL7 CDA® R2 Implementation Guide: Quality Reporting Document Architecture - Category I (QRDA I) - US Realm, STU 5.3: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=35
Snapshot3 of FHIR Core spec: http://hl7.org/fhir/5.0.0-snapshot3. This is published to support the Jan 2023 connectathon, and help prepare for the final publication of R5, which is still scheduled for March 2023
Informative Publication of HL7 EHRS-FM R2.0.1: Usability Functional Profile, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=611
STU Publication of NHSN Healthcare Associated Infection (HAI) Reports Long Term Care Facilities (HAI-LTCF-FHIR), Release 1 - US Realm: http://hl7.org/fhir/us/hai-ltcf/STU1
STU Update Publication of HL7 FHIR® Implementation Guide: Subscription R5 Backport, Release 1, STU 1.1: http://hl7.org/fhir/uv/subscriptions-backport/STU1.1/
New release of HL7 Terminology (THO) v5.1.0: https://terminology.hl7.org/5.1.0
The Final Draft version of FHIR R5 is now published for QA: http://hl7.org/fhir/5.0.0-draft-final. There's a two-week period to do QA on it. In particular, we'd like to focus on the invariants - there'll be another announcement about that shortly
STU Update Publication of minimal Common Oncology Data Elements (mCODE) Implementation Guide 2.1.0 - STU 2.1: http://hl7.org/fhir/us/mcode/STU2.1/
STU Update Publication of HL7 FHIR Profile: Occupational Data for Health (ODH), Release 1.3: https://hl7.org/fhir/us/odh/STU1.3/
STU Publication of HL7 FHIR® Implementation Guide: Clinical Data Exchange (CDex), Release 1 STU 2 - US Realm: http://hl7.org/fhir/us/davinci-cdex/STU2/
STU Update Publication of HL7 FHIR® Implementation Guide: Vital Records Death Reporting (VRDR), Release 1 STU2.1 - US Realm: https://hl7.org/fhir/us/vrdr/STU2.1/
I have started publishing R5. Unlike the IGs, R5 is a rather big upload - it will take me a couple of days. In the meantime, you might find discontinuities and broken links on the site, and confusion between R4 and R5 as bits are copied up. You may also find missing and broken redirects. I will make another announcement once it's all uploaded
STU Publication of HL7 FHIR® Implementation Guide: International Patient Access (IPA), Release 1: http://hl7.org/fhir/uv/ipa/STU1
STU Publication of HL7 FHIR® Implementation Guide: Longitudinal Maternal & Infant Health Information for Research, Release 1 - US Realm: http://hl7.org/fhir/us/mihr/STU1/
STU Publication of HL7 FHIR® Implementation Guide: Patient Cost Transparency, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-pct/STU1
STU Publication of HL7 FHIR® Profile: Quality, Release 1 - US Realm (qicore) STU Release 5: http://hl7.org/fhir/us/qicore/STU5
Normative Publication of HL7 CDA® R2 Implementation Guide: Emergency Medical Services; Patient Care Report Release 3 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=438
STU Publication of HL7 Consumer Mobile Health Application Functional Framework (cMHAFF), Release 1, STU 2: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=476
STU Publication of HL7 FHIR® Implementation Guide: Data Segmentation for Privacy (DS4P), Release 1: http://hl7.org/fhir/uv/security-label-ds4p/STU1
STU Publication of HL7 FHIR® IG: SMART Application Launch Framework, Release 2.1: http://hl7.org/fhir/smart-app-launch/STU2.1
STU Publication of HL7 Version 2 Implementation Guide: Diagnostic Audiology Reporting, Release 1- US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=620
STU Publication of HL7 FHIR® R4 Implementation Guide: Clinical Study Schedule of Activities, Edition 1: http://hl7.org/fhir/uv/vulcan-schedule/STU1/
STU Update Publication of HL7 FHIR® Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports for Long Term Care Facilities (HAI-LTCF-FHIR), Release 1 STU 1.1 - US Realm: http://hl7.org/fhir/us/hai-ltcf/STU1.1
STU Publication of HL7 CDA® R2 Implementation Guide: Personal Advance Care Plan (PACP) Document, Edition 1, STU3 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=434
STU Publication of HL7 CDA® R2 Implementation Guide: Pharmacy Templates, Edition 1 STU Release 2: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=514
STU Publication of HL7 FHIR® R4 Implementation Guide: Single Institutional Review Board Project (sIRB), Edition 1- US Realm: http://hl7.org/fhir/us/sirb/STU1
STU Publication of HL7 CDA® R2 Implementation Guide: C-CDA Templates for Clinical Notes STU Companion Guide Release 4 - US Realm Standard for Trial Use May 2023: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=447
STU Publication of HL7 FHIR® US Core Implementation Guide STU6 Release 6.0.0: http://hl7.org/fhir/us/core/STU6
STU Publication of HL7/NCPDP FHIR® Implementation Guide: Specialty Medication Enrollment, Release 1 STU 2 - US Realm: http://hl7.org/fhir/us/specialty-rx/STU2/
STU Publication of Vulcan's HL7 FHIR® Implementation Guide: Retrieval of Real World Data for Clinical Research STU 1 - UV Realm: http://hl7.org/fhir/uv/vulcan-rwd/STU1
Version 6.1.0-snapshot1 of US Core for public review of the forthcoming STU update to STU6 - US Realm: http://hl7.org/fhir/us/core/STU6.1-snapshot1
STU Publication of HL7 FHIR® Implementation Guide: Military Service History and Status, Release 1 - US Realm: http://hl7.org/fhir/us/military-service/STU1
STU Publication of HL7 FHIR® Implementation Guide: Identity Matching, Release 1 - US Realm: http://hl7.org/fhir/us/identity-matching/STU1
STU Publication of HL7 FHIR® Implementation Guide: Making Electronic Data More Available for Research and Public Health (MedMorph) Reference Architecture, Release 1- US Realm: http://hl7.org/fhir/us/medmorph/STU1/
STU Update Publication of HL7 CDA® R2 Implementation Guide: National Healthcare Safety Network (NHSN) Healthcare Associated Infection (HAI) Reports for Long Term Care Facilities (HAI-LTCF-CDA), Release 1, STU 1.1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=546
STU Update Publication of HL7 CDA® R2 Implementation Guide: C-CDA Templates for Clinical Notes Companion Guide, Release 4.1 STU - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=447
STU Update Publication of HL7 FHIR® US Core Implementation Guide STU6 Release 6.1.0: http://hl7.org/fhir/us/core/STU6.1
STU Publication of HL7 FHIR® Implementation Guide: Cancer Electronic Pathology Reporting, Release 1 - US Realm: https://hl7.org/fhir/us/cancer-reporting/STU1
STU Publication of HL7 FHIR Implementation Guide: Electronic Medicinal Product Information, Release 1: http://hl7.org/fhir/uv/emedicinal-product-info/STU1
Unballoted STU Update Publication of HL7 FHIR® Implementation Guide: SDOH Clinical Care, Release 2.1 - US Realm: http://hl7.org/fhir/us/sdoh-clinicalcare/STU2.1
STU Publication of HL7 FHIR® Implementation Guide: CodeX™ Radiation Therapy, Release 1- US Realm: http://hl7.org/fhir/us/codex-radiation-therapy/STU1
STU Publication of HL7 FHIR® Implementation Guide: US Public Health Profiles Library, Release 1 - US Realm: http://hl7.org/fhir/us/ph-library/STU1
STU Publication of HL7 FHIR® Implementation Guide: ICHOM Patient Centered Outcomes Measure Set for Breast Cancer, Edition 1: http://hl7.org/fhir/uv/ichom-breast-cancer/STU1
STU Publication of HL7 FHIR® Implementation Guide: Health Care Surveys Content, Release 1 - US Realm: http://hl7.org/fhir/us/health-care-surveys-reporting/STU1
STU Publication of HL7 FHIR® Implementation Guide: Physical Activity, Release 1 - US Realm: http://hl7.org/fhir/us/physical-activity/STU1
STU Publication of HL7 FHIR® Implementation Guide: Quality Measures, Release 1 STU4 - US Realm: http://hl7.org/fhir/us/cqfmeasures/STU4
Unballoted STU Update Publication of HL7 FHIR® Implementation Guide: Healthcare Associated Infection Reports, Release 1, STU 2.1 —US Realm: http://hl7.org/fhir/us/hai/STU2.1
STU Publication of HL7 Cross Paradigm Specification: Health Services Reference Architecture (HL7-HSRA), Edition 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=632
Errata publication of HL7 CDA® R2 Attachment Implementation Guide: Exchange of C-CDA Based Documents, Release 2 US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=464
Informative Publication of HL7 EHR-S FM R2.1 Functional Profile: Problem-Oriented Health Record (POHR) for Problem List Management (PLM), Edition 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=630
STU Publication of HL7 CDA R2 Implementation Guide: Gender Harmony - Sex and Gender representation, Edition 1 - Component of: HL7 Cross-Paradigm Implementation Guide: Gender Harmony - Sex and Gender representation, Edition 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=633
Informative Publication of HL7 Cross-paradigm Implementation Guide: Gender Harmony - Sex and Gender Representation, Edition 1: http://hl7.org/xprod/ig/uv/gender-harmony/informative1
STU Publication of HL7 FHIR® Implementation Guide: Data Exchange for Quality Measures, Edition 1 STU4 - US Realm: http://hl7.org/fhir/us/davinci-deqm/STU4
STU Publication of HL7 FHIR® Implementation Guide: Human Services Directory, Release 1 - US Realm: http://hl7.org/fhir/us/hsds/STU1
STU Update Publication of HL7 CDA® R2 Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports, Release 4, STU 2.1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=570
STU Update Publication of HL7 FHIR® Implementation Guide: Vital Records Common FHIR Profile Library R1.1: http://hl7.org/fhir/us/vr-common-library/STU1.1
Errata:
I wrongly wrote:
STU Publication of HL7 Cross-Product Implementation Guide: HL7 Cross Paradigm Implementation Guide: Gender Harmony - Sex and Gender Representation, Edition 1: http://hl7.org/xprod/ig/uv/gender-harmony/
This was a copy paste error on my part, sorry. This is an informative publication, not a trial-use publication
STU Update Publication of HL7 FHIR® Implementation Guide: Vital Records Birth and Fetal Death Reporting, Release 1.1: http://hl7.org/fhir/us/bfdr/STU1.1
STU Update Publication of Vital Records Death Reporting FHIR Implementation Guide, STU2.2 - US Realm: http://hl7.org/fhir/us/vrdr/STU2.2
STU Publication of HL7 FHIR® Implementation Guide: Coverage Requirements Discovery, Edition 2 - US Realm: http://hl7.org/fhir/us/davinci-crd/STU2
STU Publication of HL7 FHIR Implementation Guide: minimal Common Oncology Data Elements (mCODE) Release 1 STU 3 - US Realm: http://hl7.org/fhir/us/mcode/STU3
STU Publication of HL7 FHIR® Implementation Guide: Documentation Templates and Rules, Edition 2 - US Realm: http://hl7.org/fhir/us/davinci-dtr/STU2
STU Update Publication of HL7 CDA R2 Implementation Guide: Personal Advance Care Plan (PACP), Edition 1 STU 3.1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=434
STU Publication of HL7 FHIR® Implementation Guide: Protocols for Clinical Registry Extraction and Data Submission (CREDS), Release 1 - US Realm: http://hl7.org/fhir/us/registry-protocols/STU1
Informative Publication of HL7 Informative Document: Patient Contributed Data, Edition 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=638
STU Update Publication of HL7 FHIR® Implementation Guide: Medicolegal Death Investigation (MDI), Release 1.1 - US Realm: http://hl7.org/fhir/us/mdi/STU1.1
STU Publication of HL7 FHIR® Implementation Guide: Prior-Authorization Support (PAS), Edition 2 - US Realm: http://hl7.org/fhir/us/davinci-pas/STU2
FHIR Foundation Publication: HRSA 2023 Uniform Data System (UDS) Patient Level Submission (PLS) (UDS+) FHIR IG, Release 1- see http://fhir.org/guides/hrsa/uds-plus/
HL7 DK Publication: DK Core version 3.0 is now published at https://hl7.dk/fhir/core/index.html
STU Publication of Health Level Seven Arden Syntax for Medical Logic Systems, Edition 3.0: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=639
STU Publication of HL7 FHIR® Implementation Guide: Integrating the Healthcare Enterprise (IHE) Structured Data Capture/electronic Cancer Protocols on FHIR, Release 1- US Realm: http://hl7.org/fhir/uv/ihe-sdc-ecc/STU1
1st Draft Ballot of HL7 FHIR® R6: http://hl7.org/fhir/6.0.0-ballot1
Release of HL7 FHIR® Tooling IG (International): http://hl7.org/fhir/tools/0.1.0
Ballot for the next versions of the FHIR Extensions Pack (5.1.0-ballot1): http://hl7.org/fhir/extensions/5.1.0-ballot/
Ballot for CCDA 3.0.0: http://hl7.org/cda/us/ccda/2024Jan/
This is a particularly important milestone for the publishing process. Quoting from the specification itself:
Within HL7, since 2020, an initiative to develop the same underlying publication process tech stack across all HL7 standards has been underway. The intent is to provide the same look and feel, to leverage inherent validation and versioning, to ease annual updates, and to avoid the unwieldy word and pdf publication process. This publication of C-CDA R3.0 is the realization of that intent for the CDA product family.
Many people have contributed to this over a number of years, and while I'm hesitant to call attention to any particular individuals because of the certainty of missing some others who also deserve it, it would not have got across the line without a significant contribution from @Benjamin Flessner
Informative Publication of HL7 FHIR® Implementation Guide: Record Lifecycle Events (RLE), Edition 1: http://hl7.org/fhir/uv/ehrs-rle/Informative1
STU Update Publication of HL7 FHIR® Implementation Guide: Patient Cost Transparency, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-pct/STU1.1
STU Publication of HL7 FHIR® Implementation Guide: PACIO Personal Functioning and Engagement, Release 1 - US Realm: http://hl7.org/fhir/us/pacio-pfe/STU1
STU Publication of HL7 FHIR® Implementation Guide: Payer Data Exchange (PDex), Release 2 - US Realm: http://hl7.org/fhir/us/davinci-pdex/STU2
STU Publication of HL7 FHIR® Implementation Guide: Member Attribution List, Edition 2- US Realm: http://hl7.org/fhir/us/davinci-atr/STU2
STU Publication of HL7 FHIR® Implementation Guide: PACIO Advance Directive Interoperability, Edition 1 - US Realm: http://hl7.org/fhir/us/pacio-adi/STU1
STU Publication of HL7 FHIR® R4 Implementation Guide: QI-Core, Edition 1.6 - US Realm: http://hl7.org/fhir/us/qicore/STU6
STU Update Publication of HL7 CDA® R2 Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports, Release 4, STU 2.2 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=570
Interim Snapshot 5.1.0-snapshot1 of the Extensions package (hl7.fhir.uv.extensions#5.1.0-snapshot1) has been published to support publication requests waiting for a new release of the extensions package, at http://hl7.org/fhir/extensions/5.1.0-snapshot1/
STU Publication of HL7 FHIR® Implementation Guide: C-CDA on FHIR, STU 1.2.0 - US Realm: http://hl7.org/fhir/us/ccda/STU1.2
STU Update Publication of HL7 CDA® R2 Implementation Guide: National Healthcare Safety Network (NHSN) Healthcare Associated Infection (HAI) Reports for Long Term Care Facilities (HAI-LTCF-CDA), Release 1, STU 1.2 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=546
STU Publication of HL7 CDS Hooks: Hook Library, Edition 1: https://cds-hooks.hl7.org/
STU Publication of HL7 FHIR® R5 Implementation Guide: Adverse Event Clinical Research, Edition 1: http://hl7.org/fhir/uv/ae-research-ig/STU1
STU Update Publication of HL7 FHIR® Implementation Guide: Digital Insurance Card, Release 1 - US Realm: http://hl7.org/fhir/us/insurance-card/STU1.1/
STU Publication of HL7 FHIR® R4 Implementation Guide: Adverse Event Clinical Research R4 Backport, Edition 1: http://hl7.org/fhir/uv/ae-research-backport-ig/STU1
STU Update Publication of HL7 FHIR® Implementation Guide: Central Cancer Registry Reporting Content IG, Edition 1- US Realm: https://hl7.org/fhir/us/cancer-reporting/STU1.0.1
STU Publication of HL7 FHIR® Implementation Guide: SMART Application Launch Framework, Release 2.2: http://hl7.org/fhir/smart-app-launch/STU2.2
STU Publication of HL7 FHIR® Implementation Guide: Pharmaceutical Quality (Industry), Edition 1: http://hl7.org/fhir/uv/pharm-quality/STU1
STU Publication of HL7 FHIR® US Core Implementation Guide STU 7 Release 7.0.0 - US Realm: http://hl7.org/fhir/us/core/STU7
STU Publication of HL7 Version 2.5.1 Implementation Guide: Laboratory Orders from EHR (LOI) Edition 5 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=152
STU Publication of HL7 Version 2.5.1 Implementation Guide: Laboratory Results Interface (LRI), Edition 5 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=279
Ok, a significant milestone has been reached with two new publications:
STU Publication of the HL7 FHIR® R4 Implementation Guide: Electronic Long-Term Services and Supports (eLTSS) Edition 1 STU2 - US Realm: http://hl7.org/fhir/us/eltss/STU2
STU Publication of HL7 CDA® R2 Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports for Antimicrobial Use in Long Term Care Facilities (AULTC), Edition 1.0, STU1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=646
STU Publication of HL7 FHIR® Implementation Guide: Central Cancer Registry Reporting Content IG, Edition 1- US Realm: http://hl7.org/fhir/us/central-cancer-registry-reporting/STU1
STU Publication of HL7 FHIR® Implementation Guide: Using CQL With FHIR, Edition 1: http://hl7.org/fhir/uv/cql/STU1
STU Publication of HL7 FHIR® Implementation Guide: Canonical Resource Management Infrastructure (CRMI), Edition 1: http://hl7.org/fhir/uv/crmi/STU1
STU Publication of HL7 FHIR® Implementation Guide: Value Based Performance Reporting (VBPR), Edition 1 - US Realm: http://hl7.org/fhir/us/davinci-vbpr/STU1
STU Update Publication of HL7 FHIR® R4 Implementation Guide: At-Home In-Vitro Test Report, Edition 1.1: http://hl7.org/fhir/us/home-lab-report/STU1.1
STU Publication of MCC eCare Plan Implementation Guide, Edition 1 - US Realm: http://hl7.org/fhir/us/mcc/STU1
Normative Reaffirmation Publication of HL7 Version 3 Standard: Event Publish & Subscribe Service Interface, Release 1 - US Realm and HL7 Version 3 Standard: Unified Communication Service Interface, Release 1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=390 or https://www.hl7.org/implement/standards/product_brief.cfm?product_id=388
Normative Reaffirmation Publication of HL7 Version 3 Standard: Regulated Studies - Annotated ECG, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=70
Normative Reaffirmation Publication of Health Level Seven Arden Syntax for Medical Logic Systems, Version 2.10: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=372
Normative Reaffirmation Publication of HL7 Healthcare Privacy and Security Classification System, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=345
Normative Reaffirmation Publication of HL7 EHR Clinical Research Functional Profile, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=16
Normative Reaffirmation Publication of HL7 EHR Child Health Functional Profile, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=15
Normative Reaffirmation Publication of HL7 Version 3 Standard: XML Implementation Technology Specification - Wire Format Compatible Release 1 Data Types, Release 1 and HL7 Version 3 Standard: XML Implementation Technology Specification - V3 Structures for Wire Format Compatible Release 1 Data Types, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=357 and https://www.hl7.org/implement/standards/product_brief.cfm?product_id=358
Normative Reaffirmation Publication of HL7 Version 3 Standard: Privacy, Access and Security Services; Security Labeling Service, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=360
Reaffirmation Publication of HL7 Version 3 Implementation Guide: Context-Aware Knowledge Retrieval Application (Infobutton), Release 4: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=22
Normative Publication of HL7 FHIR® Implementation Guide: FHIR Shorthand, Edition 3.0.0: http://hl7.org/fhir/uv/shorthand/N2
STU Publication Request for HL7 FHIR® Implementation Guide: Medication Risk Evaluation and Mitigation Strategies (REMS), Edition 1- US Realm: http://hl7.org/fhir/us/medication-rems/STU1
Normative Reaffirmation Publication of HL7 Cross-Paradigm Specification: FHIRPath, Release 1: http://hl7.org/FHIRPath/N2
STU Update Publication of HL7 FHIR® Implementation Guide: Security for Registration, Authentication, and Authorization (FAST), Edition 1 - US Realm: http://hl7.org/fhir/us/udap-security/STU1.1
Informative Publication of HL7 Guidance: AI/ML Data Lifecycle, Edition 1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=658
Unballoted STU Update of HL7 FHIR® Implementation Guide: SDOH Clinical Care, Release 2.2 - US Realm: http://hl7.org/fhir/us/sdoh-clinicalcare/STU2.2
Normative Publication of HL7 Clinical Document Architecture R2.0 Specification Online Navigation, Edition 2024: https://hl7.org/cda/stds/online-navigation/index.html
Normative Publication of Health Level Seven Standard Version 2.9.1 - An Application Protocol for Electronic Data Exchange in Healthcare Environments: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=649
STU Publication of HL7 FHIR® Implementation Guide: Vital Records Common Library, Edition 2 - US Realm: http://hl7.org/fhir/us/vr-common-library/STU2
Normative Retirement Publication of HL7 V3 Patient Registry R1, Person Registry R1, Personnel Management R1 and Scheduling R2: Patient Registry, Person Registry, Personnel Management and Scheduling.
@Ward Weistra My colleague tried to grab the us.core#7.0.0 package from https://packages.fhir.org/hl7.fhir.us.core The package lists versions up to 6.1.0 but does not have 7.0.0. I tried using the direct url: https://packages.simplifier.net/hl7.fhir.us.core/7.0.0 which also shows that this version does not exist. How should I get the US Core v7 package into Simplifier?
Thanks and have a safe travel back home.
I had the same issue. Looking at the package on https://build.fhir.org/ig/HL7/US-Core/downloads.html, in the package.json file we have:
{
  "name" : "hl7.fhir.us.core",
  "version" : "7.0.0",
  "tools-version" : 3,
  "type" : "IG",
  "date" : "20240627054756",
  "license" : "CC0-1.0",
  "canonical" : "http://hl7.org/fhir/us/core",
  "notForPublication" : true
}
with the notForPublication line ensuring it isn't published on the package registry. I guess the only option is to download it manually
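For illustration, a registry feed processor that honours this flag might apply a check like the sketch below (the helper name is invented; this is not the actual packages.fhir.org logic):

```python
def is_publishable(package_json: dict) -> bool:
    # A package whose package.json carries "notForPublication": true
    # (as the US Core 7.0.0 CI build above does) should be skipped by
    # any registry feed processor. Illustrative logic only.
    return not package_json.get("notForPublication", False)

# Fields taken from the package.json quoted above:
meta = {"name": "hl7.fhir.us.core", "version": "7.0.0", "notForPublication": True}
print(is_publishable(meta))  # → False
```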
@Eric Haas is this a publication bug?
I don't have control over package creation. The link is the same as in the publication history page. Simplifier only lists 6.1.0 and 3.1.1. Where are you downloading it manually?
@Ryan May @Eric Haas I think that may just be a result of looking at build.fhir.org. The package registry gets released versions from https://github.com/FHIR/ig-registry/blob/master/package-feeds.json -> https://hl7.org/fhir/package-feed.xml and the one there looks fine.
@Yunwei Wang The real issue is here: US Core 7 builds on VSAC 0.18.0 and that package is huge, so it has been refused by the package registry infrastructure for now. US Core is in turn refused because of the missing dependency.
The consensus is now that VSAC should indeed no longer be distributed as a FHIR package, but we're investigating if an exception can be made for the existing VSAC packages, and perhaps one or more future ones, and only those. And we want to agree at FHIR-I on a package size limit.
But this will take a moment...
The versioned US Core links all point to the correct package version, and the current version points to the current package, so I think this will all be sorted out in the next versions. :fingers_crossed:
In case it helps - for the short term, any VSAC value set that is used in US Core (or C-CDA) has "HL7 US Realm Program Management" as Author and "HL7 US Realm Program Management" as Steward (Role: Steward). This should be part of the VS metadata
Also - minimally - IMHO the package should include only value sets that have status "active"
OK, for the bigger issue though - I think my points stand. Also, we could probably find all the IGs that use VSAC for value set build and source of truth, find out who the authors/stewards are, and limit the VSAC package to that
@Grahame Grieve I'll continue to explore whether we can still load and serve VSAC 0.18.0+ for now.
But Gay has a suggestion above for a logical filtering of VSAC. Would this work for upcoming releases, at least until a VSAC FHIR server is set up? For new VSAC packages it would be clear to users if they are missing a VS/CS when validating their IG.
(Or I'd welcome doing that retroactively for VSAC 0.18.0 and up too. Potentially we could run checks for all packages you know to depend on VSAC 0.18.0+ if they miss anything)
I'm going to investigate
here's my data:
we use value sets from the following stewards:
so @Gay Dolin's suggestion does not work
indeed, but does that make any difference?
I guess these could still be 19 separate packages. If need be, the next iteration of the VSAC package could depend on all of those.
I have no idea how many VS/CS those Stewards have in VSAC in total, but if that's a manageable amount you could include all for a Steward in such a subpackage so you don't need a new edition when someone needs one more.
hl7.fhir.us.vsac.hl7-usrpm, for example.
or Simplifier could simply allow bigger packages, which would be way easier for everyone
How big are the 19 together? I think that could be "the" package.
There is no need to pull in all the other sets - so many are really poor value sets
US Realm Program Mgmt (US Core/C-CDA) sets could REALLY take advantage of a separate package. We possibly could then do away with depending on VSAC for the "Annual Releases".
It would save HL7 about $40,000 a year, and ONC possibly even more
if people have used those 19, why would they not be allowed to use others?
Generally it's IG authors who are building the sets
There is no need to pull in all the other sets - so many are really poor value sets
if you can get formal agreement from #terminology that no one has a valid reason to use any other stewards, I can remove them, sure
US Realm Program Mgmt (US Core/C-CDA) sets could REALLY take advantage of a separate package. We possibly could then do away with depending on VSAC for the "Annual Releases". It would save HL7 about $40,000 a year, and ONC possibly even more
I don't know anything about annual releases, but US realm could just use THO or define its own package. I'm told that VSAC is used because it's a better authoring environment
It is a better authoring environment
Heading into SD but would love to chat more about why this WOULD work
Grahame Grieve said:
if you can get formal agreement from #terminology that no one has a valid reason to use any other stewards, I can remove them, sure
Terminology would never agree to that, as they should not. But perhaps the short term solution is using those current stewards, so publishing can get going again
Maybe the long term solution is making Simplifier allow bigger packages, but maybe each package update could still be based on: 1) which IGs have VSAC value sets 2) who the stewards are - and then it will always be manageable
If not the above: if we only pulled in value sets with status "Active", and some already-published IGs have sets that have flipped to "Not Maintained", will that be problematic wrt validation in implementations?
If not, we could just pull in "active" sets - but that would still be pretty large, since VSAC now forces maintenance (https://www.nlm.nih.gov/vsac/support/authorguidelines/valuesetstatus.html) (though I'm guessing the stewards/authors of most of the "crap" value sets don't pay attention to the emails that warn you your set is getting flagged as not maintained)
WRT the C-CDA annual release (which includes the shared US Core sets whose source of truth is VSAC): since 2016 HL7/ONC has provided an annual release, basically to make up for the fact that all of C-CDA had not been balloted since 2015: https://vsac.nlm.nih.gov/download/ccda
At a cost to HL7 and ONC
We were hoping to get VSAC to create an ability for small releases to "push a button" and create a release, but they said "maybe we could start working on that in 2026"
So, @Brett Marquard and I are exploring if vendors (or vendor customers) even need a "release" and/or if we could offer it another way - hl7.fhir.us.vsac.hl7-usrpm, for example. :-)
WRT using THO - 1) not until the authoring space gets better 2) US Core and C-CDA increasingly share sets, so we prefer the sets are either in THO or VSAC rather than in one IG or another
Lastly, we are working with the OCL folks (OCL is all, or mostly all, FHIR based, and the goal is to be all FHIR based) so that maybe someday in the future their tooling could rival VSAC authoring, be used in the THO space, and we would not have to use VSAC at all.
@Grahame Grieve - I know you are familiar with OCL, but in case others are not: https://openconceptlab.org/
But perhaps the short term solution is using those current stewards, so publishing can get going again
this is something that has already happened - the packages are already published. There is only one short term solution, which is for simplifier to remove the size limit for at least the vsac package
even if I can restrict to active only, that's for future publications
here's the value sets that are used that are not actively maintained:
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1032.115 (Not Maintained) @ MITRE used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1099.46 (Not Maintained) @ BSeR used by [hl7.fhir.us.bser#2.0.0-ballot]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1111.95 (Not Maintained) @ The Joint Commission used by [hl7.fhir.us.bser#2.0.0-ballot]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1142.10 (Not Maintained) @ SAMHSA Steward used by [ihe.iti.pcf#1.1.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1142.41 (Not Maintained) @ SAMHSA Steward used by [ihe.iti.pcf#1.1.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1142.50 (Not Maintained) @ SAMHSA Steward used by [ihe.iti.pcf#1.1.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1142.57 (Not Maintained) @ SAMHSA Steward used by [ihe.iti.pcf#1.1.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1146.1144 (Not Maintained) @ CSTE Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1146.1152 (Not Maintained) @ CSTE Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1146.1154 (Not Maintained) @ CSTE Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1146.1157 (Not Maintained) @ CSTE Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1146.1223 (Not Maintained) @ CSTE Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1146.1270 (Not Maintained) @ CSTE Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1196.309 (Not Maintained) @ IMPAQ used by [hl7.fhir.us.nhsn-ade#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1196.310 (Not Maintained) @ IMPAQ used by [hl7.fhir.us.nhsn-ade#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1223.9 (Not Maintained) @ CareEvolution Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1240.1 (Active) @ HL7 USRPM used by [hl7.cda.us.ccda#3.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113883.3.464.1003.111.11.1021 (Not Maintained) @ NCQA used by [hl7.fhir.us.mihr#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113883.3.464.1003.111.12.1015 (Not Maintained) @ NCQA used by [hl7.fhir.us.mihr#1.0.0]
so active only doesn't seem to be a goer either
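The report above was presumably produced by cross-referencing package dependencies with each value set's VSAC status; the grouping step might look roughly like this sketch (two rows abridged from the list above; the function name is invented):

```python
from collections import defaultdict

# Rows abridged from the report above: (url, status, steward, using package)
rows = [
    ("http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1142.10",
     "Not Maintained", "SAMHSA Steward", "ihe.iti.pcf#1.1.0"),
    ("http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1240.1",
     "Active", "HL7 USRPM", "hl7.cda.us.ccda#3.0.0"),
]

def inactive_in_use(rows):
    # Collect value sets that are referenced by published IGs but are no
    # longer "Active" - exactly the sets an active-only package would drop.
    by_steward = defaultdict(list)
    for url, status, steward, pkg in rows:
        if status != "Active":
            by_steward[steward].append((url, pkg))
    return dict(by_steward)

print(inactive_in_use(rows))  # only the SAMHSA row survives the filter
```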
Grahame Grieve said:
so active only doesn't seem to be a goer either
OK - I thought that might be the case
@Yunwei Wang so we're stuck here. packages.fhir.org won't handle the package. But packages2.fhir.org/packages handles it with no problems, so people should always use the alternative server
Did you forget this part or did you think I abandoned it?
Ward Weistra said:
I'll continue to explore whether we can still load and serve VSAC 0.18.0+ for now.
However, I would still like to keep any size exception, if at all feasible, as small as possible. So if we could switch for any next VSAC publication from one supersize VSAC package to a more tailored hl7.fhir.us.vsac.hl7-usrpm per publisher, that would be a great way to not make the future impact any bigger than necessary.
So if we could switch for any next VSAC publication from one supersize VSAC package to a more tailored hl7.fhir.us.vsac.hl7-usrpm per publisher, that would be a great way to not make the future impact any bigger than necessary.
I'm sure not interested in that, given that then I'd be managing 19+ packages, everyone would have to go hunting for a piece of metadata that's not easily available to decide which package they have to depend on, and you'd end up with 19 packages that together occupy the same size. So what did all that achieve?
Grahame Grieve said:
I tried http://packages2.fhir.org/packages/hl7.fhir.us.core.v700/7.0.0, which redirects to download hl7.fhir.us.core.v700#7.0.0.tgz. This tgz file is 641 bytes and the extraction contains two files: package.json and .index.json. The package contents are not in it.
no you don't want that package
Thanks. What's the correct way to find the package url on packages2? I opened http://packages2.fhir.org/packages/catalog and searched.
and then you got a set of packages listed (http://packages2.fhir.org/packages/catalog?op=find&name=us.core&pkgcanonical=&canonical=&fhirversion=), but didn't choose the first one, which is the correct one
OK I have reported FHIR#48547 to get an explicit, sensible package size limit in place.
Also discussed the proposal we agreed on before:
We agree in principle to make it work, now figuring out how it can technically be done without too much impact on performance + figure out what the impact is on bandwidth/processing costs. Possibly bring it in like the tooling packages, without actually processing the contents. It won't be overnight, but we're working on it.
Grahame Grieve said:
or Simplifier could simply allow bigger packages, which would be way easier for everyone
I'm really unhappy with the "you're just being difficult" framing. Let me know if I'm misunderstanding.
I think it is being underestimated how much impact this is going to have on every single FHIR tool and consumer down the line. This will lead to FHIR being perceived as slow and bloated in the future if we're not vigilant.
Hence I'd also urge finding an interim solution for at least any VSAC version newer than 0.18.0 and 0.19.0. Otherwise we are still forcing 15k ValueSets and CodeSystems on everyone for even longer, while currently only <300 are being used in the whole ecosystem.
I think the CRMI operations that will return a Bundle with just the dependencies you need are the long-term solution. We can't realistically control how many artifacts will get included in a given package. That's driven by the publishing needs of whoever is putting out that specification. Inevitably, there will be a lot of content in package dependencies that isn't needed. Just as you don't want to load all of SNOMED into memory and instead just hit a terminology server to say "hey, is this code valid", it'll make sense to avoid the cost of loading large packages into memory and instead say "hey, get me the artifacts I need to process Y" to a conformance resource service and let someone else look after loading everything into memory.
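As a rough sketch of that idea - resolving only the artifacts reachable from what you actually need, instead of loading a whole package - assuming a pre-built index mapping each canonical URL to the URLs it references (all names and URLs below are invented for illustration):

```python
def resolve_needed(index: dict, roots: set) -> set:
    # Walk only the artifacts transitively reachable from the roots,
    # rather than loading every resource in a package into memory.
    needed, stack = set(), list(roots)
    while stack:
        url = stack.pop()
        if url in needed:
            continue
        needed.add(url)
        stack.extend(index.get(url, ()))
    return needed

# Toy index: a profile that references one value set out of many in a package.
index = {
    "http://example.org/StructureDefinition/my-patient":
        {"http://example.org/ValueSet/needed-vs"},
    "http://example.org/ValueSet/needed-vs": set(),
    "http://example.org/ValueSet/unused-vs": set(),
}
print(resolve_needed(index, {"http://example.org/StructureDefinition/my-patient"}))
```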
I'm really unhappy with the "you're just being difficult" framing. Let me know if I'm misunderstanding.
Well, we can't change the past, so what else can I say?
we are still forcing 15k ValueSets and CodeSystems on everyone for even longer, while currently only <300 are being used in the whole ecosystem
The problem is: 'which 300'? We've found that there's no way to prospectively decide which ones people will want to use. I'm not interested in only doing value sets by request, where I have to issue a new VSAC package every time anyone wants a value set added. Nor am I interested in splitting it up by steward and having a whole slew of new packages to manage.
The right solution is to stop having a package altogether, but that is something that requires real $$ and no one's standing up to fund it. For now, anyway, though that might change
Hi All, we are reviewing our use of a HAPI FHIR validator built into our Git pipeline, because keeping it up to date requires the time of a skilled developer, and as a busy team we may not always have this resource. We were considering buying an off-the-shelf cloud-based solution like AWS HealthLake, but we're not sure it would meet our needs; at a quick glance it looks more focused on analytics than validation. Does anyone use any FHIR validators that they can recommend, that would fit easily within our Git pipeline, requires little customization, has a good user interface, produces human readable validation reports and can work with our terminology server? The Hammer FHIR validator looks promising, utilizing the best of both worlds by running both the .NET and Java validators side by side, but I wasn't sure if it was still experimental. I was looking for a website that would help me make a decision by independently testing the FHIR validators and listing the advantages and disadvantages of their features; does such a site exist?
A few things to keep in mind:
Taken together, it means that any tool you run locally is going to have an associated maintenance (and likely configuration) requirement. Sometimes that maintenance will have to happen in very short order.
You don’t say why the hapi validator takes a skilled developer to maintain, nor what terminology server you’re using. Nor whether you’re using the command line or you wrapped it in something
PRs are welcome, but my expectation is that any validator will require skilled maintenance.
Note that the Java FHIR validator is the most thorough validator by a long shot, and I can't recommend the others because they don't pass the test cases. The test cases are unfriendly, so the mere fact that they don't pass isn't necessarily a problem, but I do know the others are less thorough
Hi @Grahame Grieve and @Lloyd McKenzie, thanks for replying. It's more that we are a small team of public sector employees, busy working on FHIR interoperability solutions, and updating our existing customized HAPI FHIR validator that is built into our Git pipeline (in future we were hoping to tie this to our NHS Terminology Server, a customised version of CSIRO Ontoserver technology) takes a certain amount of effort. That's why we wondered if there was an off-the-shelf solution that would pass the test cases mentioned and make validation easier. It doesn't sound like there is, as the validator would need to be customised to our needs: being valid against the latest or appropriate version of the UK Core, and valid against the SNOMED CT terminology we use.
I was listening to @Vadim Peretokin's excellent presentation on FHIR validation at a past DevDays event, and I think he says pretty much the same as you: that the Java validator is the most battle-tested validator in the FHIR ecosystem. @Vadim Peretokin can we use the Hammer FHIR validator in a git pipeline? Currently, we have a FHIR validator that runs at least once a day and will check for changes when pull requests are made.
Grahame Grieve said:
You don’t say why the hapi validator takes a skilled developer to maintain, nor what terminology server you’re using. Nor whether you’re using the command line or you wrapped it in something
PRs are welcome, but my expectation is that any validator will require skilled maintenance.
Note that the Java FHIR validator is the most thorough validator by a long shot, and I can't recommend the others because they don't pass the test cases. The test cases are unfriendly, so the mere fact that they don't pass isn't necessarily a problem, but I do know the others are less thorough
as the validator would need to be customised to our needs: being valid against the latest or appropriate version of the UK Core, and valid against the SNOMED CT terminology we use.
neither of those things should require a customised validator, unless UK core isn't conformant with FHIR itself, in which case you've got a huge problem irrespective of which validator you use
since hammer is a wrapper around the validator, I don't understand what it gets you in a pipeline like that?
@Grahame Grieve the UK Core is conformant with the FHIR standard, but as an example, we have constrained Patient.identifier to use an extension, NHSNumberVerificationStatus. In an instance example where the NHS Number is present and verified, we would expect a validator to check that the extension is present and that the code/display value of "01" / "Number present and verified" is present in that instance. That is the kind of custom validation I am referring to that goes beyond the capabilities of an "out of the box" FHIR validator: for example, catching a mistake where someone used "02" / "NHS Number present and verified" instead of "01" / "Number present and verified".
Regarding Hammer, I only heard about it this week, and I need to do some background reading to understand its full capabilities and potential. If anyone can point to any documentation or sites that give advice on FHIR validators, feel free to comment.
Grahame Grieve said:
neither of those things should require a customised validator, unless UK core isn't conformant with FHIR itself, in which case you've got a huge problem irrespective of which validator you use
the out of the box validator will validate the proper extension if the profiles you are using declare them and have constraints like you mentioned.
ok sure, a validator isn't going to enforce business logic like that if it's not expressed anywhere, though such logic can usually be expressed using FHIRPath, and then it will be enforced
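For instance, the NHS-number rule described above could be captured as a profile invariant; the sketch below shows, in plain Python, the kind of check such an invariant would drive (the extension URL, FHIRPath expression, and NHS number are illustrative only, not taken from UK Core):

```python
def check_nhs_number_rule(identifier: dict) -> bool:
    # Hypothetical rule: if the NHSNumberVerificationStatus extension carries
    # code "01" ("Number present and verified"), the identifier must have a
    # value. A FHIRPath invariant might express roughly the same thing, e.g.
    #   extension.where(url.endsWith('NHSNumberVerificationStatus'))
    #     .value.coding.where(code = '01').exists() implies value.exists()
    # (a sketch only - not the real UK Core constraint).
    verified = False
    for ext in identifier.get("extension", []):
        if ext.get("url", "").endswith("NHSNumberVerificationStatus"):
            for coding in ext.get("valueCodeableConcept", {}).get("coding", []):
                if coding.get("code") == "01":
                    verified = True
    return "value" in identifier if verified else True

# Illustrative identifiers (extension URL and number value are dummies):
ext = [{"url": "https://example.org/NHSNumberVerificationStatus",
        "valueCodeableConcept": {"coding": [{"code": "01"}]}}]
ok = {"value": "9999999999", "extension": ext}
bad = {"extension": ext}  # claims "verified" but has no NHS number value
print(check_nhs_number_rule(ok), check_nhs_number_rule(bad))  # → True False
```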
Hi @Jean Duteau where you mention profiles declaring an extension, is that the same as defining an extension: https://www.hl7.org/fhir/defining-extensions.html ? For example: use a canonical URL that uniquely identifies the extension, specify its context, set its cardinality, publish, and reference the extension's canonical URL in the profile. We do that; it's the business rules, alongside the standard FHIR rules, that we need a validator to work on.
Jean Duteau said:
the out of the box validator will validate the proper extension if the profiles you are using declare them and have constraints like you mentioned.
There are two steps:
@John George would you be using the validator during IG development in the ci-build? Because as far as I see you have described the required rules already in https://simplifier.net/guide/uk-core-implementation-guide-stu2/Home/ProfilesandExtensions/Profile-UKCore-Patient?version=2.0.1 and could use that package to validate any examples by specifying the needed profile. If you could share your git pipeline requirements, that would be helpful.
The original post mentioned a desire to have a validator that "has a good user interface, produces human readable validation reports" - apart from a few web frontends for $validate, and Simplifier and Hammer, there are not a lot of options. "Human readable validation reports" - I don't think any of the tools support that (on the assumption that one could create human readable reports for deep validation).
John George said:
in future we were hoping to tie this to our NHS Terminology Server (Customised version of CSIRO Ontoserver technology)
Let me know what help you need and I'll make sure you get it (there isn't anything customised about the Ontoserver being used for the NHS Terminology Server so anything you can do with Ontoserver, you'll be able to do with the NHS Terminology Server)
"Human readable validation reports"
what's a human readable validation report? The java validator has various output formats, one of which is html, which everyone is used to looking at in the IG publisher wrapper
But otherwise, what? It seems like not a big lift to add something to the validator for an output of what is desired, and there's already a framework for multiple outputs, so that exists
the other challenge which I'm always sweating on is how to make the messages more comprehensible
I'd estimate that "99% of implementers don't use IG Publisher". Authoring an IG is only done by a very small % of the community, and one can't assume FHIR implementers to be familiar with it.
When it comes to validation, you'd want both a very precise indication where in a resource the error occurs, with a comprehensible error message for someone not terribly well versed in FHIR and strucDefs.
ok, not everyone, that's true
with a comprehensible error message for someone not terribly well versed in FHIR and strucDefs.
I've given up on that. I don't know how to explain stuff. I mean, I try, but the language is rooted in the FHIR definitions, and I don't expect much from non-FHIR developers
Thanks @René Spronk for your input. Yes, we want a validator whose error messages not only my team at NHS England can understand, but possibly in future one that can be widely used throughout the NHS, so implementors of our FHIR specification who may not be so well versed in FHIR can easily understand an error message and pinpoint where exactly the problem is. Having worked in a hospital on pathology messaging many years ago, I can relate to this: it would have been useful to have a validator whose messages were easy to understand, rather than escalating to our IT supplier. I want to find out if the HAPI FHIR validator that we use alongside the .NET FHIR validator in Simplifier.net is sufficient, or if there have been any recent developments that mean there are better FHIR validation solutions out there.
René Spronk said:
When it comes to validation, you'd want both a very precise indication where in a resource the error occurs, with a comprehensible error message for someone not terribly well versed in FHIR and strucDefs.
suggestions to improve the error messages are always welcome
To help understand validation issues we are supporting a student project at the Bern University of Applied Sciences, which is trying to determine, with the help of an LLM, what the underlying problem is based on the error messages from the Java validator/matchbox and how it could be rectified. The background is that there can be a lot of follow-up warnings/errors with specific FHIR documents due to slicing, and we want to find out whether an LLM can make a direct recommendation on what needs to be corrected.
slicing is particularly difficult, yes
Despite best efforts, messages will never be perfectly easy to understand, by everyone.
As well as making them as simple as is practical, perhaps a link to an online FAQ could be given, which could spell out some common explanations (e.g. what a slice is). Also the FAQ can give a link to Zulip, as a last recourse. We don't want to replace an automated tool with humans, but, otoh, it is always good to get people into the community.
well, Rik, there's only 1152 messages that validator can produce :grinning:
Grahame Grieve said:
suggestions to improve the error messages are always welcome
For grouping of bulk FHIR validation results, as we do in https://git.uni-greifswald.de/CURDM/Bulk-FHIR-Validation/src/branch/main/README.md, IMHO it would help to have (the main part of) error messages additionally available in a structured form (maybe an extension) without the specific code / only at code system level. In bulk validation there are sometimes too many different but roughly identical error messages (one for each code used from a code system, e.g. ICD codes across many validated resources), which I sometimes group into a single aggregated error per FHIR element for the whole code system (at the moment by removing the code from the message with a regex, which could fail if the validator's message/output format changes).
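The regex-based grouping described here might look like the following sketch; the message wording below is invented for illustration, which is exactly why this approach is fragile if the validator's output format changes:

```python
import re
from collections import Counter

# Hypothetical validator messages - the real wording differs.
messages = [
    "The code 'I10' is not valid in the value set 'http://example.org/vs/icd'",
    "The code 'E11.9' is not valid in the value set 'http://example.org/vs/icd'",
    "The code 'Z99' is not valid in the value set 'http://example.org/vs/icd'",
]

def group_by_message_shape(messages):
    # Replace the quoted code with a placeholder so that messages differing
    # only in the code collapse into a single aggregated bucket.
    pattern = re.compile(r"The code '[^']+'")
    return Counter(pattern.sub("The code '<code>'", m) for m in messages)

print(group_by_message_shape(messages))  # one bucket with count 3
```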
that actually sounds like a suggestion to not change the messages!
maybe giving you the message id will make it easier? But how can I remove the code from the message? That doesn't sound like a useful thing to do
Changing messages to improve them / make them easier to understand is very good. Removing the code generally would be very bad (it helps, and I want to see it very often). :)
Just wanted to mention that it would be good to be able to omit/remove it for some (not all) further analysis.
So if no additional/separate $validate OperationOutcome element with the code-free message is available, it would help to have output patterns that are as stable as possible, to be able to detect/separate/extract the code system and code parts of the output string.
Another thing: it could help some users without a deep understanding of validator internals if they could distinguish whether an error occurs because of the validator config/external profiles (e.g. if a value set could not be expanded) or because the error is in the validated resource.
The core problem is our ontology service requires authorisation to be performed.
So we wrote an interceptor to handle this.
It also has some code to handle packages held on simplifier, not sure if that is needed anymore.
Other than that it is just the Java FHIR validator
The core problem is our ontology service requires authorisation to be performed.
You could make a PR to the core - others are interested in this. Though I'm not sure that we'd accept it - we'd need a good test case
It also has some code to handle packages held on simplifier, not sure if that is needed anymore.
no not needed
Agree.
We also use this as a facade/application service to the server (and AWS FHIRWorks) for FHIR apps like the NLM Form Builder
Hi @Oliver Egger our main validator is the IOPS GitHub CI/CD Test Script; it validates all FHIR resources in a GitHub repo on commit, and fails on warnings (by design). We use it once a team member feels FHIR resources are ready to be added to the repo/Simplifier (we would expect that before this the team member would have done some validation in Forge/Simplifier (.NET) before committing FHIR resources to the repository). It's used on any repositories produced by my team on GitHub (Actions tab). Runs automatically on commit.
The details are:
GitHub Action IOPS-FHIR-Validation-Terminology Workflow
Push from FHIR repo e.g. FHIR-R4-UKCORE-STAGING-MAIN with ontoserver credentials
Check out IOPS-Validation
Adds IOPS-FHIR-Test-Scripts as folder inside FHIR-R4-UKCORE-STAGING-MAIN folder in the local ubuntu machine.
Check out validation-service-fhir-r4
Adds IOPS-FHIR-Validation-Service as folder inside FHIR-R4-UKCORE-STAGING-MAIN folder in the local ubuntu machine.
Install npm
Install npm in IOPS-FHIR-Test-Scripts folder
Configure FHIR Validator
Runs npm start in the IOPS-FHIR-Test-Scripts folder (configures the FHIR validator using ontoserver credentials). The start script is defined in package.json as "ts-node src/configureValidator.ts"
Build FHIR Validator
Runs mvn clean install inside IOPS-FHIR-Test-Scripts
Clean: removes the target folder
Package: follows the lifecycle phases validate >> compile >> test (optional) >> package
(for reference, install: validate >> compile >> test (optional) >> package >> verify >> install)
Run FHIR Validator
nohup java -jar validation-service-fhir-r4/target/fhir-validator.jar --terminology.url=https://ontology.nhs.uk/production1/fhir --terminology.authorization.tokenUrl=https://ontology.nhs.uk/authorisation/auth/realms/nhs-digital-terminology/protocol/openid-connect/token --terminology.authorization.clientId=${{ secrets.ONTO_CLIENT_ID }} --terminology.authorization.clientSecret=${{ secrets.ONTO_CLIENT_SECRET }} --aws.validationSupport=false --aws.queueEnabled=false & sleep 120
nohup: "no hang up" is a command on Linux systems that keeps processes running even after exiting the shell or terminal
java -jar: run the jar file passing the parameters listed after each --
Run Test
Runs npm test inside the IOPS-FHIR-Test-Scripts folder. The test script is defined in package.json as jest --runInBand src/validate.test.ts
Kevin Mayfield said:
The core problem is our ontology service requires authorisation to be performed.
There is potentially a simple decoupled approach for this (using a locked down authenticating proxy) that would avoid needing to change the Validator ... just need to make sure that the content licensing requirements are sufficiently met in the context of use
@John George it sounds like your developers should be running a complete validation before committing. It is possible to use the java command line validator on each developers machine, before any git pipeline based validation. I use it from a button in my IDE (Oxygen). It completely validates the current file against whatever set of profiles I have configured.
Hi @Rik Smithies, thanks for your suggestion of using the Java command line validator. Perhaps I oversimplified the process; I've updated my response to @Oliver Egger's question about our git pipeline. Personally, before committing an instance example to the Git repo, aside from checking it's valid in Simplifier, I will run it past the FHIR Development and Testing (FHIR Validation) Skunkworks product (http://lb-fhir-validator-924628614.eu-west-2.elb.amazonaws.com/swagger-ui/index.html#/). It's good for debugging and fault finding.
Rik Smithies said:
John George it sounds like your developers should be running a complete validation before committing. It is possible to use the java command line validator on each developers machine, before any git pipeline based validation. I use it from a button in my IDE (Oxygen). It completely validates the current file against whatever set of profiles I have configured.
@Rik Smithies I like the sound of an FAQ that could decipher an error message. For example, I have only learnt from experience of validating instance examples that an error message like "is not a correct literal for a URI" means, in layman's terms, that there is a typo in the URI, probably an extra . or / or -, but this might not be obvious to everyone.
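To make that layman's explanation concrete, here's a quick pre-check one could run on a URI before validating. This is purely illustrative: the patterns below are our own guesses at common typos, not the validator's actual rules.

```javascript
// Illustrative only: flag the kind of typos that tend to trigger
// "is not a correct literal for a URI" (patterns are assumptions).
function findUriTypos(uri) {
  const problems = [];
  if (/\s/.test(uri)) problems.push("contains whitespace");
  if (/\.\./.test(uri)) problems.push("doubled '.'");
  if (/(?<!:)\/\//.test(uri)) problems.push("doubled '/' (outside the scheme)");
  if (/--/.test(uri)) problems.push("doubled '-'");
  return problems;
}
```

A real check would defer to the validator itself; this only catches the obvious slips before you get there.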
Rik Smithies said:
Despite best efforts, messages will never be perfectly easy to understand, by everyone.
As well as making them as simple as is practical, perhaps a link to an online FAQ could be given, which could spell out some common explanations (e.g. what a slice is). Also the FAQ can give a link to Zulip, as a last recourse. We don't want to replace an automated tool with humans, but, otoh, it is always good to get people into the community.
@John George just to be clear, the java command line validator, is the same engine as the HAPI one that you were suggesting to integrate. I am all for "belt and braces" but if you can validate with HAPI before checking into git then you may not have the same need to integrate it into the pipeline. It doesn't really need a "skilled developer" to use, or update, the command line version (though there is still the somewhat separate "messages" issue). Once you know the command line (and maybe put it into a batch file that anyone can use) then updating it is a simple matter of downloading it and saving a new .exe into place. Also the IG publisher will ultimately validate your instances again, so that is yet another check.
The github action is effectively automating most of what you've said.
It's not a validator like was said earlier but just a CI/CD test script which uses a validator.
Maybe not exactly, because it sounds like it happens after check in. I wouldn't trust a programmer who didn't test their code and just commits it and sees if it breaks the build. Desk check first I say.
Rik Smithies said:
Maybe not exactly, because it sounds like it happens after check in. I wouldn't trust a programmer who didn't test their code and just commits it and sees if it breaks the build. Desk check first I say.
Depends on how onerous the desk check is and if there is other stuff I can be doing.
Takes 20 mins to desk check... takes 10 min for the check in to fail?
I have 18 things in my inbox, submit to build and work on item #2 while it churns.
yeah in theory. But in reality it only takes about a minute to validate one file (the one you just edited), and you can read your email during that minute. Anyway, it's an option. Apparently the github route is hard to manage. My point is that this route is not.
It depends on the kind of file you are editing. If you edit a profile, you need to validate any resources that apply the profile, any profiles that inherit it, and any resource that applies any of those.
Editing a single file != needing to validate a single file.
There is nothing like testing in production! If your validation takes a long time you might want to check your java heap. There was a thread about that on Builds.
@Rik Smithies if you go here you can see HL7 UK (igPublisher) is doing roughly the same as us.
https://build.fhir.org/ig/HL7-UK/UK-Core-Access/qa.html
We don't allow resources with errors and warnings to go onto Simplifier. We do more than just use FHIR validation in our test script.
right, I get it. The build process will do a full check of everything. I think I was the one that set up that pipeline you mention, so I am aware of it :-) But it seems good if developers have a more interactive way to do it also. Checking one file (and I mean a single example file instance) is a lot quicker than doing a whole IG build. Of course I would also build the whole IG locally, before checking it in, if I was not changing an instance but a profile. Going further, I use schemas and schematrons when editing files, so I can see the errors interactively before I even save the file. I generate schematrons from profiles. This is not as thorough as the IG build, but the more interactive the process the better, I find (which is why IDEs are popular).
When registering a patient-facing app at fhir.epic.com it can be tricky to choose all (and only) the USCDI scopes, which is necessary to have your app registration broadcast to all participating providers.
I'm hoping Epic might add a feature to make this easier, because the cost of choosing incorrectly is high (if you discover you included too many scopes or too few after finalizing a registration, you need to re-start a new app registration from scratch and effectively throw out your old client). But in the meantime, here's a tip to help you select the right scopes:
Array.from(document.querySelectorAll("#WebServicesChosen option"))
.filter(e => e.getAttribute("data-uscdi-readonly") == 'True')
.forEach(e => e.setAttribute("selected", true))
Yeah, I've found this annoying too (we have to create new clients any time we do a dry run for ONC certification). I really should have submitted an enhancement request for this a long time ago. Anyway, I just submitted one now, so we at least have the request on our backlog. No promises on if/when we'll be able to get it done (lots of other competing priorities...).
And if you don't want all USCDI APIs, but just a subset, I crafted this snippet inspired by Josh's example that adds a USCDI tag to the web view so you can use the search box to narrow down to the USCDI APIs and pick the ones you want from there. No guarantees this will work forever. Disclaimer aside:
Array.from(document.querySelectorAll("#WebServicesChosen option"))
.filter(e => e.getAttribute("data-uscdi-readonly") == 'True')
.forEach(e => {
var apiId=$(e).attr('value');
var apiEntry = $('#availableWebServices a[id=' + apiId + ']');
apiEntry.append(' (USCDI)');
apiEntry.attr('filter-term',apiEntry.attr('filter-term') + ";USCDI");
});
Josh Mandel said:
When registering a patient-facing app at fhir.epic.com it can be tricky to choose all (and only) the USCDI scopes, which is necessary to have your app registration broadcast to all participating providers.
I'm hoping Epic might add a feature to make this easier, because the cost of choosing incorrectly is high (if you discover you included too many scopes or too few after finalizing a registration, you need to re-start a new app registration from scratch and effectively throw out your old client). But in the meantime, here's a tip to help you select the right scopes:
- open the "Edit" page for your app, starting with no scopes selected
- open Chrome dev tools and run the following snippet, and
- click "Save" at the bottom of the form
Array.from(document.querySelectorAll("#WebServicesChosen option"))
.filter(e => e.getAttribute("data-uscdi-readonly") == 'True')
.forEach(e => e.setAttribute("selected", true))
@Josh Mandel -- would over-specifying scope cause OAuth to fail w/ production client ids?
This comment was about how to register a client to be associated with certain scopes. If you are asking about what happens at runtime if you request scopes that you are not registered for... that's probably a question for the Epic team. Historically Epic ignored the list of scopes requested at runtime. I'm not sure if that's true today.
Oh, I'm wondering what happens if you over-specify non-uscdi scopes in a particular "https://fhir.epic.com/Developer/<whatever>" app configuration page -- could that cause authentication to fail ?
(to a real EHR, eg: https://fhir.mah.org/prd-fhir/api/FHIR/R4/) with what appears to be a valid production client id.
I guess it's a question for Epic, but I don't see why that would cause anything to fail if you are in fact registered for that scope and your client registration has been completed
In my experience, the failure mode with registering for too many scopes is that you cannot have your app's registration automatically propagated to Epic clients
Well, I've been trying to figure it out with them (on the open.epic free tier), and it's possible that "Automatic Client ID Distribution" is involved? (is this applicable to S4S?)
described here:
https://fhir.epic.com/Documentation?docId=patientfacingfhirapps
Were you able to get production client ids working w/ real Epic portals (eg: for a test or demo, possibly looking at your own data?)
(If we need to enable and configure auto-syncing, then what would the production client ID be for?)
I've registered another app, using the mechanism you described to add only USCDI scopes (and further limiting those to R4) -- wonder if that will make the production client id work.
Oh, BTW: the _non_-production client id for the same app(s) works with the free-tier Epic sandbox.
eg: https://fhir.epic.com/interconnect-fhir-oauth/api/FHIR/R4/
Were you able to get production client ids working w/ real Epic portals
Yes, that definitely works
Yes, for me too.
Oh, my, filtering and only adding uscdi scopes actually makes a difference, here:
Yeah, there's an indication at the bottom of your app details page showing you whether it is or isn't eligible for automated registration with all the sites. Pick too many scopes and the indication flips to no.
Yeah -- only the last app configuration filters out non-USCDI -- and it has >0 Client ID Downloads: 422. Previously, I had been adding all R4 "Read" scopes.
this has been very helpful -- thanks a lot!
Oh, I see what you're talking about, here:
Yeah, if you scroll all the way up to the top of this thread, that was why I documented this technique
"will" is green -- for every other app, it's:
But, you're totally right: the selection of USCDI scopes should be automated and validated in that app configuration UI, somehow...
or maybe: the _non_-production client id should fail to work, if there were some sort of "USCDI" checkbox...
I've commented in a few other spots, but we (Epic) upgraded our sandbox to a new version recently, and had a few hiccups. Those should be resolved now.
I've just recorded an overview video discussing app registration (thanks @John Moehrke for joining me!) but we hit a snag in registering a demo app. In the past this has worked, but we couldn't today find any combination of params to get to "Will be automatically downloaded". Screenshots of the full details below in case this helps debug.
Recorded discussion: https://youtu.be/Kl6UGGrNy4o
Screenshots
In this example I've selected only AllergyIntolerance.search
which should be a USCDIv1 scope (I tried several others too; I just wanted to make the simplest failing example I could).
@Cooper Thompson or @Christopher Schaut is this a new bug in Epic or am I doing something wrong? The docs at https://fhir.epic.com/Documentation?docId=patientfacingfhirapps don't appear to have changed...
Cooper is out of the office starting last week for 3 weeks (sabbatical) and Chris is OOO until next week. Josh, would you mind emailing open@epic.com ?
Sure, I submitted and cc'd you @Isaac Vetter. Will share an update here if I hear back.
Response indicates a UI bug, with fix in progress
We inadvertently introduced a UI bug late last week that causes the [WILL] / [WILL NOT] labels to incorrectly state that auto-downloaded clients would not be automatically downloaded. This is incorrect. Clients that meet the criteria for automatic distribution continue to be automatically distributed (per the typical conditions, described in this documentation). Once a client is submitted, the developer-facing UI does correctly indicate that the client is auto-downloaded.
We’re working hard on fixing this issue. Thank you for reaching out and reporting this important usability bug.
FYI - the UI fix was released on Thursday, July 14. If you're seeing any other irregularity, let us know, and thanks for highlighting! @Josh Mandel
Thanks @Christopher Schaut -- confirmed this is now working as expected (follow-up discussion at https://youtu.be/_Uthk-nYUvc). At some point it'd be great to chat about the automation of client secret generation per-site, or support for SMARTv2-style asymmetric authn for confidential clients (I don't think https://fhir.epic.com/Documentation?docId=oauth2 supports this yet, but I'm not 100% sure).
My snippet for selecting all USCDI permissions in https://fhir.epic.com/Developer/Create has rotted away... so here's an updated one:
document.querySelectorAll('li.apiListItem[filter-term*="USCDI"]').forEach(li => {
li.dispatchEvent(new MouseEvent('click', {bubbles: true, cancelable: true, view: window, ctrlKey: true}));
});
I've also maintained a version of this that just tags all the APIs with 'USCDI' so you can search for that string and see which of the APIs you may already have added aren't USCDI APIs:
Array.from(document.querySelectorAll("#WebServicesChosen option"))
.filter(e => e.getAttribute("data-uscdi-readonly") == 'True')
.forEach(e => {
var apiId = $(e).attr('value');
// tag the entry in the "available APIs" list
var apiEntry = $('#availableWebServices li[id=' + apiId + ']');
var apiEntryName = $('#availableWebServices li[id=' + apiId + '] span:first span:first');
apiEntryName.append(' (USCDI)');
apiEntry.attr('filter-term', apiEntry.attr('filter-term') + ";USCDI");
// tag the matching entry in the "selected APIs" list too
apiEntry = $('#selectedWebServices li[id=' + apiId + ']');
apiEntryName = $('#selectedWebServices li[id=' + apiId + '] span:first span:first');
apiEntryName.append(' (USCDI)');
apiEntry.attr('filter-term', apiEntry.attr('filter-term') + ";USCDI");
});
Wish there was just a button for it
These are nice though
In the PHD IG we are introducing the requirement to use a category element in observations generated by personal health devices (PHDs). This replaces the previous requirement to have a meta.profile element set to the canonical URL of the profile.
So, we had this in a numeric observation:
"meta" : {
"profile" : [
"http://hl7.org/fhir/uv/phd/StructureDefinition/PhdNumericObservation"
]
}
This is considered an anti-pattern in FHIR and lacks a good way of dealing with versions of the profile.
Instead we will have this:
"category" : [
{
"coding" : [
{
"system" : "http://hl7.org/fhir/uv/phd/CodeSystem/PhdObservationCategories",
"code" : "phd-observation"
}
]
},
...
]
The new approach will work just as well to help (but not guarantee) identify PHD IG compliant observation resources.
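As a sketch of what the category approach buys a consumer (the helper name is ours; the system and code are taken from the example above), the check becomes a simple scan of Observation.category:

```javascript
// Sketch: consumer-side check for the PHD category coding shown above.
// The helper name is hypothetical; system/code come from the PHD IG example.
const PHD_SYSTEM = 'http://hl7.org/fhir/uv/phd/CodeSystem/PhdObservationCategories';

function isPhdObservation(obs) {
  return (obs.category ?? []).some(cat =>
    (cat.coding ?? []).some(c =>
      c.system === PHD_SYSTEM && c.code === 'phd-observation'));
}
```

Unlike a meta.profile string match, this reads an ordinary signable element, though (as noted above) it still only helps rather than guarantees that the resource conforms to the IG.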
Recently we found out that the US Core IG is following a different approach for a similar goal. See Writing Vital Signs.
Here Meta.tag is used to identify patient supplied observations.
Meta.tag and Meta.profile can also be modified relatively easily by a server (link). Meta.tag seems designed to support workflow processes. Meta.profile seems designed to help a consumer validate the (Observation) resources it receives from a compliant producer of such resources.
From a distance, these three approaches achieve the same goal, but are still different.
Should a single approach to categorize / tag / identify subsets of (Observation) Resources be harmonised across FHIR IGs?
@Dan Gottlieb @Josh Mandel :eyes:
From the server perspective, it is a workflow issue because the patient-produced observations are handled differently.
From my perspective, it's great to move away from requiring the population of meta.profile, for the reasons you highlighted -- and both options you described (tags and categories) can successfully achieve that goal.
Tags are a bit more flexible for use cases that go beyond observations -- for example, in our Argonaut discussions last year, which led into that futures page on US Core, we identified a need to talk about observations but also about devices and provenance resources that might be associated. All of these things can be tagged in theory, but they can't all have a category.
In most resources, we represent that data came from a patient as an explicit element. E.g. MedicationRequest.reported[x]. The problem with tags is that they're not considered part of the signable content of the resource and are typically expected to be different on each server the resource is copied to.
From these comments I tend to stick to the use of a category element for the PHD IG.
I think that the fact that an observation came from a PHD device should not be seen as part of the workflow, but as a persistent qualifier of the observation. The same might hold for the tag "patient-supplied" in the US Core Write scenarios.
Lloyd McKenzie said:
In most resources, we represent that data came from a patient as an explicit element. E.g. MedicationRequest.reported[x]. The problem with tags is that they're not considered part of the signable content of the resource and are typically expected to be different on each server the resource is copied to.
Do you mean in terms of digital signatures? https://build.fhir.org/json.html#canonical describes 4 distinct canonicalization modes including a mode that preserves tags. (From what I can tell, these have not been implemented, so this discussion is a bit esoteric.)
It’s certainly possible to have a signature that includes the tags, but tags are specifically intended to be transient and server-specific.
I think that's a fair description of how we want to handle these based on our discussions about US Core. We discussed that a server might initially tag these results and then could strip the tags after some level of review was performed.
It's important to keep in mind that the meaning of the "patient-supplied" tag proposed in US Core is not necessarily that the data were generated by the patient; it also applies to data routed through the patient but potentially generated elsewhere.
Why would patient-supplied be stripped after review? How does this notion relate to “reported” for resources that have that element?
Why would patient-supplied be stripped after review?
We heard from healthcare providers who indicated that they want the flexibility to accept data that is easy to distinguish, but then potentially merge it later into the rest of their data set. Tagging is a way to accomplish this.
If the only purpose of the tag in your particular workflow is to give you the opportunity to review, then the tag becomes irrelevant after you have reviewed.
We're not trying to standardize these kinds of workflow decisions, and I'm not trying to weigh in on whether I think they are an excellent idea. Merely trying to share the design discussions that led to a tag in our documentation.
How does this notion relate to “reported” for resources that have that element?
I don't think any of the resources we're talking about have that element. I just looked at observation, device, and provenance.
Ok, but if you were to extend the concept to, say, MedicationRequest, how would the concepts relate?
Why isn't Provenance an acceptable solution to both patient-supplied, and PHD data?
It's heavy
There is a .meta.security tag for this very purpose.
part of the security tag vocabulary on integrity https://terminology.hl7.org/ValueSet-v3-SecurityIntegrityObservationValue.html
Provenance could be used, but it is more verbose.
It’s certainly possible to have a signature that includes the tags, but tags are specifically intended to be transient and server-specific.
@Lloyd McKenzie I don't believe that this is quite correct. tags are transient and workflow specific. Servers SHOULD preserve them unless they are an active part of the workflow
but it is true that :
Applications are not required to consider the tags when interpreting the meaning of a resource.
so it would be wrong for information that has persistent meaning to only be in a tag.
The fact that a specific resource was routed to the EHR through a patient mediated workflow is not persistently meaningful. (Re: meaning: If the patient performed the Observation, we have a slot for that already, etc.)
In the US Core futures page, the idea is that the tags enable / inform EHR workflow. The EHR is free to apply or discard them.
At the Dallas WGM, FHIR-I agreed to an extension to track that a resource was produced conformant to an IG. Is there a ticket or other work item to track this? @Grahame Grieve, @Lloyd McKenzie (FYI @Erik Moll. @Martin Rosner. @Marti Velezis)
There’s no agreement in committee so far as I know. I have a draft to publish - I’ll get it done this week
We agreed that you would sort out the appropriate path forward in committee. The draft extension I understood to be a potential path forward. If committee changes their mind — just let us know. We will look for something once you have time to push the draft - I don’t think there is a rush on this - but maybe it will make it into the next Extension pack for review(no promises I understand). Thanks!! Marti
@Marti Velezis, “you” = who?
Marti Velezis said:
We agreed that you would sort out the appropriate path forward in committee.
Well — @Grahame Grieve and @Lloyd McKenzie were in the room as reps for FHIR-I. So “you” in that context is FHIR-I, and they would sort it out and get back to us….
That’s my recollection too. I wanted to confirm you didn’t mean me or Dev.
Oh - yeah sorry — I should have replied to Grahame — it’s a miracle I replied at all :laughing:
Np.
I agree that I said FHIR-I would look at this, and a group of us scoped something out, which I drafted, but it isn't a FHIR-I agreement, yet
@Erik Moll theres a draft of an extension - not agreed to by FHIR-I yet - here: https://build.fhir.org/ig/HL7/fhir-extensions/branches/2024-06-gg-profile-extensions/StructureDefinition-obligations-profile.html
and a partner extension here:
The links to the obligations profile do not work....
The original PHD IG issue is this one: FHIR-24875
That branch has been merged, so the URLs with the branch are no longer valid. https://build.fhir.org/ig/HL7/fhir-extensions/StructureDefinition-obligations-profile.html looks correct to me.
With ClinicalUseDefinition being able to model most of the clinical particulars a drug database might have, it'd be nice to be able to package that knowledge up and share it through CRMI. This'd be simpler if the subject could be a CodeableReference, as it'd allow terminology to be used in place of a shared substance register (likely included in or depended on by the package). For example, we have some 1500 substances with some 30000 interactions across them. An enormous package either way, but the Substance resources don't really add much value, given that we'd still have to fall back to terminology in order to map between any local substance register and them.
I suppose this is a case of a global / local problem, where the clinical knowledge is authored against a global (canonical) substance, and is then used against a local substance. For example, in Finland, prescribing is done through a combination of ATC codes and a package identifier called VNR. These are mapped to a local substance identifier. Now, both EMA and the Finnish Medicines Agency Fimea are working on centralised knowledge bases focusing on the FHIR medication definition -module. Neither seems to be directly tackling clinical particulars at the moment, leaving a need for third-party drug databases and CDS -services around them. One such example would be interactions, as mentioned above. So far, our approach for bridging this gap has been focused on terminology; through ValueSet (of, say ATC, RxNorm, and SNOMED CT codes) and/or ConceptMap resources for extracting the global substance from a local resource, like MedicationRequest. (As well as the administration routes, but that's a different discussion, and can be handled rather well with an extension on ClinicalUseDefinition.)
There was a previous discussion about using a code as the subject of a ClinicalUseDefinition being a direction people have been wanting to avoid. Am I missing some context or an obvious solution here? What's the alternative? A national substance register? A regional one, like the EMA SPOR SMS? A custom substance register for each drug database? A shared base CRMI package for canonical substances? Not trying to step on the toes of regulatory work and national knowledge bases, but from my point of view, terminology seems like an easier match for drug databases in the CDS context. Any previous work, ideas or discussions on the topic?
At the moment you could achieve this using a reference to a contained resource that just has a code.
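A rough R5-shaped sketch of that workaround, shown as a JS object literal; all ids and codes here are made up for illustration:

```javascript
// Sketch (R5 shapes assumed): the interaction's subject points at a
// contained Substance that carries nothing but a code. Ids and codes
// below are illustrative only.
const interaction = {
  resourceType: 'ClinicalUseDefinition',
  type: 'interaction',
  contained: [{
    resourceType: 'Substance',
    id: 'sub1',
    // R5 Substance.code is a CodeableReference; only the concept is used here
    code: { concept: { coding: [{ system: 'http://www.whocc.no/atc', code: 'B01AA03' }] } }
  }],
  subject: [{ reference: '#sub1' }],          // local reference into `contained`
  interaction: {
    interactant: [{
      itemCodeableConcept: { coding: [{ system: 'http://www.whocc.no/atc', code: 'J01EE01' }] }
    }]
  }
};
```

The open question is whether servers let you search on the code inside the contained resource.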
Sure, but is that going against the intended usage of the resource? I'm trying to understand why a resource reference is preferred. Most of this experimentation we're working on stems from seeing ClinicalUseDefinition on the CRMI IG roadmap, and trying to figure out how we might provide clinical particulars in that form.
Would you be able to search by a code that is in a contained resource? Not entirely sure, but I think you can't.
Some (more) reasons why resource references are seen as preferred:
It's not the first time we're discussing this, and in between the current and previous discussions I've talked to other people struggling with the same problem: the resource looks like it would be great for an interaction catalogue, but usually we don't build them between resources, but between terminology concepts.
@Kari Heinonen, your arguments have a theoretical point. However, the resource allows CodeableConcept as the interactant. So... If you can have a concept from terminology as one interactant, why should it not be allowed for the other one (in subject)? Just to make searching more difficult?
So, I created a Jira ticket: https://jira.hl7.org/browse/FHIR-48630
Yes. But a) is it explicitly prohibited to "repeat" the .subject reference resource(s) as an .interactant using CodeableConcept ? And b) references form a graph i.e. .subject could list references to multiple "targets" (forming links that can be back tracked) of different types - something that is much harder to accomplish with CodeableConcept semantics for codings contained within.
Kari Heinonen said:
Yes. But a) is it explicitly prohibited to "repeat" the .subject reference resource(s) as an .interactant using CodeableConcept ? And b) references form a graph i.e. .subject could list references to multiple "targets" (forming links that can be back tracked) of different types - something that is much harder to accomplish with CodeableConcept semantics for codings contained within.
b)
There are many use cases where you'd find a reference is a better solution.
We're saying CodeableConcept should be allowed - CodeableReference would allow implementers to go with CodeableConcept OR Reference according to what they need to achieve.
a)
It's not explicitly prohibited but semantically it would make exactly zero sense. It would basically say the subject has an interaction with itself. :)
Is there any implementation / material available where I could better understand what kind of (typed) subject graphs are used in practice? Or could you maybe summarise some experiences? I can't really see what kinds of different resource types a particular ClinicalUseDefinition might be pointing to. Not an expert on that, though, so I may well be missing something.
Our work covering indications, contraindications, interactions, risks, and various warnings tends to all be authored against substances (and administration routes, which don't seem to belong in the Substance resource either). Now the usage side of things has the issue of whatever local prescribing system we're dealing with. So far, the best bet there has been terminology mapping, or using a ValueSet. (Edit: certainly, indications and contraindications have the Observation / Condition component to them as well, but those have been typically dealt with as terminologies too on our end.)
IMHO ClinicalUseDefinition is part of a much, much bigger module that uses other Medication Definitional resources to form a self-standing graph/database. So not just Substances; it would e.g. have *Definition resources for both actual and abstract medicinal products. Each "product" resource instance would have direct reference links (without searching as such) to backtrack to all relevant ClinicalUseDefinition instances.
An additional issue that came to my mind :smile: concerning directly identifying .subject using terminology instead of a reference: this happens when multiple codings are needed to achieve the necessary level of fidelity. Doing the search directly on ClinicalUseDefinition might not (?) be that straightforward in FHIR. Of course, sometimes this is actually desired (thinking ATC here); sometimes some "custom known corrections and post-search clean-ups" are needed. In a way, references "shift" this matching to happen on the "target resource" side, based on their properties, for better or worse.
AFAIK the bigger picture you outlined aligns with how the larger knowledge bases like EMA SPOR approach things. It is very reasonable, but there still seems to be a need to include third-party content for clinical particulars within that graph/database, if we are to have e.g. interaction checks, pharmacogenomics, etc. included. We'll need a way to publish compatible content, which can then be slotted into the graph. Hence the global/local problem I've been on about. We'd need to produce ClinicalUseDefinition resources (and other medication definition resources) that fit into the particular local knowledge base, preferably though CRMI. Maybe EMA SPOR streamlines this in the EU, and we might reference those as canonical. For a clinical knowledge author, it's a lot more manageable when the authoring can be done against a single canonical substance (or product) register. Now, in a perfect world, we'd get the definitional resources straight from the regulatory processes for the clinical particulars as well, but that's still a ways off, I'm afraid.
Perhaps it's still too early to see how things'll pan out. We're not really dead set on using terminologies, but we are very interested in trying to see if we could publish clinical particulars in a way that complements the existing (national) knowledge bases. In the short term, it's looking like terminology is the way.
I might be harassing :smile: you at this rather late hour with a solution where .subject contains, say, more product related references and .interactant is based on (global) terminology where some component or combination thereof of .subject product might be the actual .interactant given by code. And then have these referenced .subject product parts as contained, potentially unsearchable, resources with minimum data content (mainly identifiers). That would keep the core of ClinicalUseDefinition searchable using terminology concepts at the cost of making linking to product side more arduous ? Contained Med/Prod resources could possibly represent multiple local product registries (source identified by some property), if so needed, keeping the actual ClinicalUseDefinition core knowledge content intact and purely terminology based. Maybe ? <Insert Big Disclaimer Here>
Yes. Every authorised product has a list of indications, contraindications and interactions, which in the future will hopefully be coded and distributed as ClinicalUseDefinition resources. That future is not just around the corner. It takes time.
However, even this would not help a clinician who is prescribing in a generic manner - not a product, but a substance or a virtual concept. The need for a more generic terminology-based decision support catalogue will remain.
For now, it would maybe be the easiest for you to use Medication resource - this would combine substance and route of administration. I do see the benefit of using a terminology with appropriate concept properties, though.
Going for a contained Medication, the searchability would have some limitations, but it should be workable (see this). Estonian attempt at the same thing can be seen here (it's a draft, don't trust too much).
Thank you for the discussion, the pointers, and for making the Jira ticket! I'll be sure to give each a proper review as we move forward. ClinicalUseDefinition is one of the more exciting FHIR resources in recent memory, particularly for us dealing with MDR here in EU.
You're welcome, except that I lied. Medication resource doesn't have route of administration :)
MedicationKnowledge or AdministrableProductDefinition would have ingredient + route but neither of those resources is allowed to be referenced in .subject :D
So needs more interlinked contained resources then :grinning_face_with_smiling_eyes: Plus there's the more serious issue of existing systems using FHIR R4, not R5, which has ClinicalUseDefinition among other potentially relevant definitional resources ...
Hmmm, Looking into ePI interaction FHIR IG for Vulcan at
http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-ClinicalUseDefinition-interaction-uv-epi.html
gives me a strong impression that .subject and .interactant are intended to be/allow concepts at different levels of fidelity and having a CodeableReference might present some issues to enforce that. Former has description of "The medication, product, substance ..." and latter talks about "The specific medication ... that interacts" or "The specific substance that interacts" for CodeableConcept. But maybe I'm just splitting semantic hairs with this.
For a definitional resource, there isn't really a way of including the specific interacting instance at authoring-time. A code can get us to a Ph. Eur. Monograph, a CAS number, or whatever level of specificity one might need for the global substance code. We'll likely start from SNOMED CT, and go from there.
We're looking at using an extension on ClinicalUseDefinition for the administration routes for now. We'll figure something out for interactions, where we need one for the interactant as well. For a CodeableReference, we'll have to evaluate whether it makes sense to include both the substance and the administration route or not. It'd certainly be useful if a particular ClinicalUseDefinition had multiple subjects. This isn't really the case for the knowledge we author, where each specific article is for a specific substance (or pair of substances). I suppose this is one of those places where we might be misusing the ClinicalUseDefinition resource if its intended use is to point at a variety of subjects from a single resource. If that's the case, I'd like to hear more about how that actually works. I suppose it'd be useful for condensing something like a long list of subject substances interacting with grapefruit.
Joonatan Vuorinen said:
For a definitional resource, there isn't really a way of including the specific interacting instance at authoring-time.
In that context "specific" does not necessarily mean the resource instance per se. What IMO the spec is trying to say is that .subject (which, by the way, seems to have a cardinality of 0..*) could for example be a product definition having multiple Ingredients, and then .interactant "names" those that are relevant, either using a reference (which annoyingly needs to follow multiple links to make the "connection" to Ingredient, as it is not directly allowed either) or terminology.
Right, I get your point about specificity. Not so sure I understand the benefits of having both (re-)defined in a ClinicalUseDefinition if the interactant is more specific. I guess it'd make the graph more explicit, but also deconstructs information about the ingredients of a particular medicinal product into a different resource.
FHIR in general tends to do that sort of deconstructing a lot - and developers tend to push back either by adding numerous extensions to "bubble up" data elements from deeper layers of FHIR model and/or using contained resources :smile:
Interesting. Just noticed that according to the official spec ClinicalUseDefinition does NOT define standard SearchParameter for interactant.item[x] at all ? I wonder if that is actually correct - this, of course, being something that can be remedied by custom FHIR server implementation.
We could definitely improve the list of standard search parameters. I don't think we've had a lot of feedback on what people need to search on. In fact many standard servers allow custom search parameters, so in the short term it may not even need a custom server implementation.
Btw you can also often add a custom search parameter for contained resources. Searching contained resources is supported by FHIR but not well supported by current servers, I understand. But custom search parameters are well supported. The result is that in practice you can usually search into contained resources just fine. And of course an alterative to contained resources is "uncontained" (normal) resources. They work fine right now, but are just a little verbose.
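As an illustration of such a custom search parameter, a definition along these lines could expose interactant codes as a token search (the url, name, and code are hypothetical, and the FHIRPath expression would need to be verified against a given server's choice-type handling):

```python
# Sketch of a custom SearchParameter for searching ClinicalUseDefinition
# by interactant code. url/name/code are made-up examples; the expression
# uses ofType() to select the CodeableConcept choice of interactant.item[x].
interactant_search_parameter = {
    "resourceType": "SearchParameter",
    "url": "https://example.org/SearchParameter/cud-interactant-code",  # hypothetical
    "name": "CUDInteractantCode",
    "status": "draft",
    "description": "Search ClinicalUseDefinition interactions by interactant code",
    "code": "interactant-code",
    "base": ["ClinicalUseDefinition"],
    "type": "token",
    "expression": "ClinicalUseDefinition.interaction.interactant.item.ofType(CodeableConcept)",
}
print(interactant_search_parameter["code"])  # → interactant-code
```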
The downside of changing subject from reference to CodeableReference is that it will break every current implementation.
At the moment, afaik, the Medication Definition resources can go from R5 to R6 with no breaking changes. This would be the thing that prevents that. Something to consider. Our implementers may not thank us. Breaking changes are not forbidden for us, but we do have to consider all factors.
I don't think it's reasonable to refrain from changes in FMM2 resources out of fear of breaking implementations that are not even there yet. You are only starting to hear feedback from people who are implementing those resources.
CodeableReference only became available in R5, so with your logic, it should have never been used almost anywhere. But it is used, even on MedicationRequest and Procedure, which have way more implementers than ClinicalUseDefinition.
Obviously we can't expect every change request to be approved, but maturity level 2 resource should not avoid breaking changes if they bring value to new implementers.
Just thinking aloud to double-check my understanding, please bear with me :smile: And shoot me cruelly down if needed.
Currently the impact is that 100% "out-of-the-box" searching to find interactions for given product(s) etc. would be solely based on the resources referenced by the .subject list. Soooo - would that mean that, for example, to represent an interaction between two products, one needs to:
a) have at least 5 outgoing links in .subject to cover search both by product and substance plus
b) "duplicate" the two actually interacting substances (likely using code) as .interactants
c) potentially (i.e. without normal resources to reference) have a rather large bunch (depending on whether Medication or MedicinalProductDefinition etc. is used as search anchor in .subject) of contained, interlinked resources repeated in each ClinicalUseDefinition - provided that support does exist in server in the first place
IMHO that is a rather complex structure to maintain, at least in the scale of comprehensive FHIR interaction knowledge base.
The one thing we would absolutely want to avoid is coupling the publishing process of an interaction catalogue with all of the local knowledge bases (read: substance & product registers) it interfaces with. Substances tend to have a publishing cadence of roughly up to four times a year. Local registers update much more frequently; as I understand it, the biweekly cadence of the Finnish basic register is on the slower side of things. If we had to map each of the products as subjects for each ClinicalUseDefinition, it'd be a no-go for now. Us producing our own generic substance register and distributing that as a part of, say, an interaction catalogue is a workaround that might work fine -- but it still disconnects the references made by this third-party knowledge about clinical particulars from the actual graph a particular local knowledge base has built for itself.
(Edit: This is especially important for CDS services operating under MDR, where it just isn't possible to publish biweekly; much less daily. Quarterly would work fine, if we can defer the local mappings to terminologies that we can handle separately.)
@Kari Heinonen re a) Why would you use subjects of substances when you already have that substance in the product ingredient. Products can already linked to their ingredient substances.
@Joonatan Vuorinen are you saying you only want to map to substances and not products? That is allowed. No one is forcing you to use products I think.
Rik Smithies said:
re a) Why would you use subjects of substances when you already have that substance in the product ingredient. Products can already linked to their ingredient substances.
Because depending on where you start on product side there are multiple and more importantly different "hoops" to go through to associate product and substance - some use Substance directly, some go Ingredient/SubstanceDefinition route; some references, so to say, "go this way and some that way" in relation to where one wants the search to be directed. That is not particularly nice from developer perspective.
@Rik Smithies Yes. If we were doing this with terminologies, we'd have, say SNOMED CT 386963006 as a subject, which is the Ph. Eur. Monograph 2769 and the INN 6539. We've had success automatically mapping from the Finnish basic register to these substances through the product identifiers or ATC codes used in prescribing today.
If there eventually is some substance register that does this, and to which we can point a canonical reference, it'd work for us. AFAIK, EMA SPOR SMS is aiming to be that in the EU, but I haven't found how that would work in practice. Then there's the relation to a local knowledge base, such as the one Fimea is working on in Finland. Would we still reference SMS substances, or something else?
Granted, these are early days, and I'm sure there are still many things to see through. Just trying to give context around how terminology would make this easier for us at present.
EMA SPOR SMS is available and I'm quite sure that FIMEA and KELA both have mappings to it, so in a way it makes sense to use it.
However, it's still a code system, and more difficult to use than SNOMED CT as it's still quite raw.
@Rik Smithies Also, given the critical nature of the information itself, controlled amount of data redundancy might not be a bad thing. To be able to check that .interactant substance matches one of the referenced, referenced substance does in fact belong to some of the referenced product etc. Simply to catch obvious errors that are otherwise buried much deeper within the FHIR model.
And then there's the case of interacting Substance without it being an ingredient of any medicinal product - either the product does not currently belong to (national) product registry or it is out-of-scope of medicinal products altogether. I seem to recall there being some quite well known cases of that - and if .interactants can not be searched directly based on terminology that usually do have these ...
@Kari Heinonen the models allow different levels of detail in some places e.g. just a code for an ingredient or a reference to a bigger structure. But a given implementation would not generally use both methods. So it is uncommon to have to write code for two methods and you don't tend to get "go this way and some that way" in actual use.
I don't personally like the idea of de-normalizing the data to avoid a somewhat complicated search. That duplicated data may itself be confusing for clients. But if that is your choice you are free to do so, but you would also have to accept your own consequences of making the data larger, redundant, and more complex to maintain.
You could always create a custom operation to make searching easier.
That does not cover the use case of @Joonatan Vuorinen where the "operational medicinal product database" is NOT governed by the same organization supplying the interaction knowledge, correct? So ClinicalUseDefinition would have only a rather limited idea of what level/path between medication resources is used in the environment it is integrated into. Hence the preference for de-normalization or a "lowest common denominator" for any medicinal registry.
I suppose if you are pointing at data that in effect has different implementations within it, then yes you will need to allow for that.
I would imagine you would know what data you were pointing at before hand, and would notice if the implementation changed (anything could change in theory - they may start using contained resources one day ;-) ).
But ultimately yes there is some flexibility in the method/level of detail that all FHIR resources capture (e.g. dumb example, but someone's name may be found via reference.reference or in reference.display, so you need to code for both in theory).
So, unless you are able to constrain one of the versions out, then you will need to accommodate both. If it is too hard for your clients you could add an operation that does the hard work behind the scenes (and still allow more sophisticated clients to do it the "vanilla" way).
I would not clone the data to make that work. There is no need. But if you want a different workaround then feel free.
The longer this topic has gotten, the more I feel this is a relevant question:
Is the medication definition module intended to be used such that a ClinicalUseDefinition implements a third-party drug database for decision support? I.e. is it fundamentally designed to be such that a single (national) implementer builds the graph and that's it -- or is it supposed to support a scenario where multiple clinical content providers add to that graph?
Looking at the current documentation, I get the impression that what is marked as "prescribing support" is exactly the kind of thing I've been trying to describe. Is that the case, or have I mistaken the intended purpose? If I am mistaken, then perhaps some rewording of the docs might help avoid further confusion, especially around the CRMI roadmap, which otherwise aligns with the kind of knowledge we author around clinical reasoning.
We have no real trouble with using a Substance for now, whether it be a contained resource or not. We could reference some canonical substance, if and when such a register exists and is accessible both to us and the prescribing system (e.g. an EHR). Sounds like EMA SPOR SMS is just an export for non-regulatory actors. What to reference, then, if we'd like to make a ClinicalUseDefinition for SMS_ID 100000092656
(which is the example INN I used earlier)? The actual substance we are pointing to is known down to the molecular structure at authoring-time. How a particular local system decides to build its own registers isn't. I cannot read between the lines whether we are doing something that is simply wrong.
We're not trying to be difficult here, and if it turns out that what we're trying to do is fundamentally incompatible with the module, then we'll find another way of modelling CDS for medications, and leave the FHIR integration to something like a CDS Hooks API.
It is for all such uses. Basically any situation where you need to talk about indications, interactions etc. Resources are data oriented. Whenever you have that data, in any setting or architecture, you use that resource. We don't always predict or document all use cases, but that doesn't mean it is not appropriate.
At this point I'll play the "That's a good question" card :big_smile: and see myself out. Maybe the obvious should also be noted: resource ids essential to creating references are server dependent (and I don't think we are going to have canonicals for all substances, medications etc.). Sorry for dragging this discussion on.
So the 3rd party CDS content package needs to be somehow "imported" into the prescribing system UNLESS the ids that ClinicalUseDefinition use in references come from that same system (so that CDS content can be directly POSTed to server endpoint). Which kinda speaks for an architecture of a central medication database with real-time API for EHRs instead of "publishing and distributing CDS content" separately. Alternatively the CDS content package must contain big and detailed enough fragment of its own relevant FHIR resource graph to allow the client to figure out the mapping.
Right but there is nothing to stop you using the id (and server address) of a resource on another server, pre-allocated by that server. Obviously you need to know what that id is, before referencing it. But that applies to clients making references to resources on your "own" server also. Naturally if your system involves several servers you will need some assurances that the resources are going to stick around, but that is a business level problem.
Here are two examples, one that Rutt provided earlier, and one from Finland. These are the kinds of "local registers" we want to interface with:
Both of these are drafts of the Medication resources that a prescribing system and a national medication list would use. You might say that these are the resources that a ClinicalUseDefinition should refer to as a subject, but I'd like to underline that for any medical device under MDR, the lead time from a notified body alone is easily over a month. As such, there's no chance we could have a satisfactory publishing cadence with these resources being the subject. The authored knowledge is tied to substances, and for any region using Ph. Eur., those change three times a year. INN lists have similar cadences as well. Local product registers might change daily. There's a massive discrepancy, regardless of how efficient we might try to be. Sure, this could be a moot point, if we could somehow determine that CDS based on these resources isn't a medical device -- but I'd rather not dive into that ditch here.
Neither example has a Substance resource defined at all. The Estonian example points to a separate CodeSystem containing the substances, and the Finnish example seems to just go by ATC. As such, there is no substance register (i.e. FHIR server accessible to both us and the prescribing system) to reference.
It is possible for us to map from either of these resources to our own Substance definition, i.e. to a SNOMED CT code, or some other precise definition. For the Estonian example, it's a mapping from this CodeSystem, likely with a ConceptMap. For the Finnish example, it's a mapping from either the ATC code or the product identifier (VNR). The product identifier can be mapped to an internal substance id (like the Estonian one) from a separate database export that Fimea publishes. From there, we can build similar ConceptMap resources. With terminology, we can integrate with both local systems today.
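For illustration, such a mapping could be published as a ConceptMap along these lines (the local code system URL and source code are made up; 386963006 is the SNOMED CT concept mentioned earlier in the thread, and the structure follows the R5 ConceptMap, which uses `relationship` rather than R4's `equivalence`):

```python
# Illustrative ConceptMap from a hypothetical local substance code system
# to SNOMED CT. Only the SNOMED CT system URI and target code come from
# the discussion above; everything else is a placeholder.
substance_concept_map = {
    "resourceType": "ConceptMap",
    "status": "draft",
    "group": [
        {
            "source": "https://example.ee/fhir/CodeSystem/substances",  # hypothetical
            "target": "http://snomed.info/sct",
            "element": [
                {
                    "code": "LOCAL-0001",  # hypothetical local substance code
                    "target": [
                        {"code": "386963006", "relationship": "equivalent"}
                    ],
                }
            ],
        }
    ],
}
print(substance_concept_map["group"][0]["target"])  # → http://snomed.info/sct
```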
Without a CodeableReference, we'll have to publish our own Substance register as a part of our drug databases. It isn't a huge deal, but it really is just an extra step with little added value. Every clinical content provider has to provide their own, given that there is no incentive to share these canonical substance registers. And even if we do share, the connection to any local register still happens through terminology or some external id, losing the nice property of being able to navigate the graph through references.
Furthermore, EMA SPOR SMS doesn't seem to provide a FHIR server that non-regulatory actors (i.e. clinical content providers and EHRs) can access, and that we could then point canonical Substance references to. The Finnish medication knowledge base (lääketietovaranto, described in Finnish here) doesn't directly mention a FHIR server for substances either. It may well end up such that both provide CSV and/or XML exports, like they do today, even when the project is completed in ~2026 or onwards. This leaves us with terminologies in that case as well.
I hope I could illustrate the benefits of using a CodeableReference as the subject. I get that it's a tradeoff, and the negative side of the breaking change is something that has to be weighed as well.
hi Joonatan
Thanks. It is easy to see that someone may want to refer to external content that doesn't have a Substance resource defined.
Currently, in R5 (which is likely to be the only version with software support for a couple more years, and so will represent a significant amount of implementations), you would need to create a "dummy" resource to be able to reference this.
This is verbose, but since no user ever sees it, and data is large and verbose anyway, it doesn't seem to matter all that much. The resource can be contained, or not. There isn't much advantage in using contained. It may look slightly neater, but how neat things look isn't all that important. I would really not call this "making your own substance catalogue". It's just some plumbing - a shim. The data is full of such connections.
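In R5 terms, such a shim could be as small as this (ids are illustrative; in R5, Substance.code is a CodeableReference, hence the nested `concept`, and `instance: false` marks it as a kind of substance rather than a physical instance):

```python
# A minimal "shim" Substance, contained in the ClinicalUseDefinition so that
# plain Reference-based subject search keeps working in R5. The id and the
# choice of SNOMED CT code are illustrative.
shim_substance = {
    "resourceType": "Substance",
    "id": "subst-shim",
    "instance": False,  # a kind of substance, not a physical instance
    "code": {
        "concept": {
            "coding": [{"system": "http://snomed.info/sct", "code": "386963006"}]
        }
    },
}

cud_with_shim = {
    "resourceType": "ClinicalUseDefinition",
    "type": "interaction",
    "contained": [shim_substance],
    "subject": [{"reference": "#subst-shim"}],
}
print(cud_with_shim["subject"][0]["reference"])  # → #subst-shim
```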
The advantage of this approach is that it works now and all references will be the same (in your system and in others that do define their own substances). Searching will work just fine, out of the box.
I can see that using a direct "code reference" has some advantages, but it won't be practical for a couple of years probably, because of the timeline for R6, and the fact that software support tends to have a significant time lag after a version of FHIR is developed. Any solution based on this code reference would likely never be able to exchange these resources with systems that exist or are in development now (until the R5 system was updated - and if R6 has no big functional advantage, the effort, including migrating all the existing data, would be unlikely to ever happen, imho).
Consequently a single, well intentioned, change like this may be responsible for splitting the world's data into two incompatible versions.
As part of the consumer and regulator-facing IPA website, we're creating a logo. We've created the following options and need to decide between them. They're not finalized; the chosen one will be used as a design direction for further refinement. Please vote in the poll below to indicate your preference.
Option A: Globe and flame
smaller_variation_3_background_removed.png
Option B: Pixelated heart-shape with flame
own_identity_3.png
Option C: Torch as I in IPA
torch_2.webp
Option D: Hands cupping flame
hands_2.png
(Shoutout to @Andrew Fagan for creating these logos!)
cc/ @Mikael Rinnetmäki , @Sheridan Cook , @John D'Amore , @Rob Hausam , @Brett Marquard , @Ricky Bloomfield , @Andrew Fagan , @Rashid Kolaghassi , @Vassil Peytchev , @Jason Vogt
/poll Which logo should IPA adopt?
Option A: Globe and flame
Option B: Pixelated heart-shape with flame
Option C: Torch as I in IPA
Option D: Hands cupping flame
A looks like a cannonball :thinking:
ChatGPT images aren't as good, but I was playing around with the idea of something other than fire, since "burning earth" and "burning hands" can both be problematic. So I asked for an earth with blue/orange rings around it, implying connecting the world. The "FHIR" reference is in the color of the rings, not actual burning fire. A real artist could make something much nicer, but maybe this is a better middle ground?
DALL·E 2024-10-02 13.12.19 - A logo featuring a stylized Earth at the center, surrounded by orbiting rings similar to those of a planet. The Earth is modern and sleek in design, w.webp
Maybe this also has less copyright risk if that's a concern.
Jens Villadsen said:
A looks like a cannonball :thinking:
Because if it's not love,
then it's the bomb, the bomb, the bomb...
that will bring us together..
by The Smiths: Ask, 1986
Personally not keen on any of the four logos under vote
I'm not deeply involved in this, but I agree with @Kari Heinonen. I see no indication of "patient" or "access" in any of these. And I wonder how much FHIR is the background mechanics of patient access, rather than the headline.
That said I lean towards the torch, but worry that the "IPA" of the logo is a little anglo-centric for a spec that is supposedly "international".
The torch icon makes me think of the Olympic torch.
Logos are hard.
I love the enthusiasm and brainstorming in this thread! Please do post alternative ideas here. Please weigh in on your favorites, and issues with others.
We've got until end of day Sunday to make a decision. I'll take the best ideas to our web design firm then -- at which point we'll be locked into logo and color scheme.
Ok - tried to incorporate comments so far in a design that blends what folks like about Option A, C, and D. I know we won't make everyone happy but I like the symbolism of the torch and hands ("I'm an advocate for myself in bringing my data wherever I go, and healthcare systems are there to support my journey"). Better?
First: too "busy" i.e. too many elements in a small package, for example I couldn't make out the hands at first glance. Second: Don't like the torch to begin with, that's for The Olympics :smile: , and I believe "hands holding earth" is not a particularly original idea, right ?
Maybe modify previous "rings around world" to include a smallish flame "in orbit" leaving multicolored "trace" behind ? So sorry, can't do actual graphical design to save my life ...
maybe focus more on patient access more and international less
this stuff is hard
Richard, good idea! What would that look like?
༼ つ ◕_◕ ༽つ :fire:
(that's a logo that everybody understands :big_smile: - a person requesting FHIR)
A colleague of mine suggested a theme of the patient "bringing data with them", and generated this with copilot --
image.png
Thank you, Grahame! The "galactic FHIR badge" is visible now, yes?
I don't think this is what you actually wanted for the second image:
but wow, that's perfect for IPA...
How about something like this?
Did not want to put too much effort into it yet, but the heart is supposed to pixellate a bit...
For what it's worth, in our call we discussed the torch and the prometheus aspect - which I kind of liked. Most gods think that ordinary people should not get access to FHIR, but not everyone agrees...
I'd like to perhaps explore the torch idea a bit further too. But the torch does not need to be the I of the IPA. Just a standalone symbol. And perhaps in a hand of the patient.
Grahame Grieve said:
but wow, that's perfect for IPA...
I do love @Grahame Grieve's accurate and concise illustration of the current state of affairs, but I don't feel it accurately captures the full ambition and the intent of IPA...
ipa-logo-hand.svg
SVG version, if anyone wants to utilize some of that.
IP Team -- it was great seeing all of the contributions, thought and care. Personally, I love the metaphors - Prometheus, intergalactic, hand-delivered care, carrying the torch - towards the goal of communicating to consumers and regulators. I've passed along the globe with colored rings to the web firm. https://chat.fhir.org/user_uploads/10155/KPjgxabLsJG76TEIhYAIEI20/DALLE-2024-10-02-13.12.19-A-logo-featuring-a-stylized-Earth-at-the-center-surrounded-by-orbiting-rings-similar-to-those-of-a-planet.-The-Earth-is-modern-and-sleek-in-design-w.webp
Next up -- we're determining the structure of the site, then we'll need to write content.
Items retrieved with _include and _revinclude are limited to 100. This is proving to be a serious limitation on requests for MedicationRequest with _revinclude=MedicationDispense in cases where a patient has multiple medications and/or daily dispensing regimes (e.g. Methadone scripts)...and the client wants to see it all (GOK why, but they do).
100 is enough to show them a page+ worth and you can then invoke a separate query to page through the whole set?
We had to do the same thing in the Google API for _revinclude (if I recall correctly we don't limit _include) because there's just no conformant way to make the pagination make sense. There has to be a limit somewhere or _revinclude can produce pages of unbounded size, and there is no principled way to pick a limit that will work for everyone. We need to educate clients on what is or is not feasible in a single query. The real problem is that it's not easy for the client to figure out 1) that they hit the limit on a particular revinclude clause, and 2) exactly what separate query they should follow up with (especially with a mix of included and revincluded results).
I believe that the problem is that the _count setting only applies to the main target resource, not the inclusions, and the Server returns a 404 error if the included resources exceed the 100 max.
Ah. An error kind of sucks. A 200 with an embedded OperationOutcome with a warning that you're missing some would be better, though I guess we'd need to standardize the code to make that computable...
Oh, do they 404 it? We just truncate that specific set of revinclude results.
Standardizing an OperationOutcome (which could be in the search results as search.mode="outcome" although it looks like we deprecated that value?) would be useful so clients could have an interoperable way to detect this situation.
We could go beyond that to have the OO indicate a "next" link where more revinclude pages can be found but I don't think there's an obvious place to put that in an OO.
A custom (everything type) operation that returned a Bundle of all medication resources relating to a patient would be ideal. At present the $everything operations require a Patient Resource to be included in the Bundle and we don't hold those in our Meds CDR.
If we are hitting issues with a CDR with just 2 years worth of MR and MD resources, I'll almost guarantee that others will hit this issue with Medication CDRs that hold more data relating to individual patients.
Lloyd McKenzie said:
Ah. An error kind of sucks. A 200 with an embedded OperationOutcome with a warning that you're missing some would be better, though I guess we'd need to standardize the code to make that computable...
That's the solution that our MDR provider is currently looking at implementing.
In the long term I would like to push the standard towards something like $graph to generalize the concept of $everything and get the pagination right.
My opinion: if you are trying to make a consumer that works with "any" FHIR server, don't use _include at all. 1) You get duplicate includes for each page of primary resources. 2) You might not get the includes at all (due to the limit).
Just retrieve the primary resources and call back for the distinct list of "includes" that you need. You have to be able to call back anyway (due to 2). Just make that your principal path.
If you do what @Daniel Venton suggests, then please do aggregated reference resolution. I.e., download all your initial content, then gather up all the reference URLs, de-duplicate them, and only then hit the server to get reference content. This takes a significant load off of the FHIR server as it prevents a large number of duplicate reference resolutions.
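A sketch of that aggregation step in Python (the resource shapes are illustrative; a real client would then fetch each distinct URL via a batch Bundle or individual GETs):

```python
def collect_references(resources):
    """Gather distinct literal reference URLs from a list of resources,
    preserving first-seen order and skipping contained (#) references."""
    seen, refs = set(), []

    def walk(node):
        if isinstance(node, dict):
            ref = node.get("reference")
            if isinstance(ref, str) and not ref.startswith("#") and ref not in seen:
                seen.add(ref)
                refs.append(ref)
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    for resource in resources:
        walk(resource)
    return refs

# Illustrative primary results: two MedicationRequests sharing references.
primary_results = [
    {"resourceType": "MedicationRequest",
     "medicationReference": {"reference": "Medication/m1"},
     "subject": {"reference": "Patient/p1"}},
    {"resourceType": "MedicationRequest",
     "medicationReference": {"reference": "Medication/m1"},  # duplicate, fetched once
     "subject": {"reference": "Patient/p1"}},
]

print(collect_references(primary_results))  # → ['Medication/m1', 'Patient/p1']
```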
_include should not really be an issue? There cannot be too many of those.
Quite a few resources have 0..* references, but I don't have an idea of how many have more than 1. The biggest might be DiagnosticReport:result, depending on how many results are in the average DR.
Wearing my FHIR hat (not my MS hat =), I think that either erroring or not including any of the related content is probably safer than including partial and hoping that the client can detect and reconcile them.
In the case of partial results:
What about not including any of the _revIncluded resources in the bundle and removing _revInclude from the returned self link if you can't include all the results?
If you don't support _include/_revinclude (at all) then you wouldn't mention them in self. If you do support them, but in a limited way, a warning-level OperationOutcome in the response bundle would be called for: "watch it - not a full set of includes".
Removing from the self link is very interesting but I would also be concerned that the client can't tell the difference between "the server doesn't support this particular revinclude ever" and "the server had to ignore this particular revinclude on this particular query". I guess they could cross-reference with the capability statement?
That's not a conclusive source to determine _include and _revinclude support, certainly not at the level of 'supporting a particular kind of _revinclude in a particular search'. OperationOutcome in the response Bundle is likely the best approach.
René Spronk said:
OperationOutcome in the response Bundle is likely the best approach.
But completely useless in an automated way because there is no standard code/message that means "1 or more _include/_revinclude was not fully populated."
Only if you are a user, and as a power user have some control over the queries being executed, does the OO have meaning.
Nothing keeps you from proposing standardized error codes for these situations..
Yes, but that doesn't help with "current situation" -- and even if something were to be proposed and included in the FHIR standard it would probably take a long time to roll out to production FHIR servers (and even longer for EHR FHIR facades)
I'm experiencing inconsistent behavior with the Microsoft FHIR server when querying data for different lines of business (LOBs) and cities. The API works fine for one LOB and city but fails for another, and I'm trying to understand the root cause of this issue. Here are the details:
URL used: chmmd.xxx.com/provider/Location?address-city=Annapolis&address-state=MD&_revinclude=PractitionerRole:location&_count=100.
The _include and _revinclude parameters are limited to 100 results. Querying for MAPD works as expected, but CHPMD only works ok with a smaller _count value, or with smaller cities (e.g. we see the problem with Annapolis, MD). @Brendan Kowitz @Mikael Weaver
My suggestion is don't use _include and _revinclude unless you absolutely know that the included resource count is very small.
Instead get the primary resources you want and call back for the distinct include resources you need JIT.
Even if the query works, you are not guaranteed to get all the include resources. If you do get all the include resources, you might get multiple copies as they'll be on every page of primary resources.
Hello :).
Right now we are using HAPI for parsing our FHIR files. Unfortunately HAPI has a lot of (changing) dependencies and is updated very frequently. Updating our HAPI version is always very annoying, as we are not allowed to use some of its dependencies for security reasons and we sometimes have version conflicts with other libs we are using. On top of that, we are only using a very small part of HAPI's functionality.
So, does anybody know a lightweight API for just parsing FHIR data in XML format, other than HAPI? We are not validating, nor writing FHIR data; we just have to read it.
Many thanks in advance :)
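For context on how small that job can be: if you genuinely only read a handful of elements, a plain XML parser is enough, since FHIR XML is regular XML in the http://hl7.org/fhir namespace with primitive values carried in value attributes. A sketch (shown with Python's stdlib for brevity; the JDK's built-in javax.xml parsers would follow the same shape):

```python
import xml.etree.ElementTree as ET

# FHIR XML puts every element in this default namespace.
FHIR_NS = "{http://hl7.org/fhir}"

def read_patient_family(xml_text):
    """Read Patient.name[0].family; FHIR XML keeps primitives in @value."""
    root = ET.fromstring(xml_text)
    family = root.find(FHIR_NS + "name/" + FHIR_NS + "family")
    return None if family is None else family.get("value")
```

This obviously trades HAPI's model classes and validation for zero dependencies; whether that trade is worth it depends on how many element paths you actually need.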
Maybe https://confluence.hl7.org/display/FHIR/Open+Source+Implementations provides you with an alternative?
(a) are you just using HAPI FHIR core?
(b) why are you changing it frequently
(c) which dependencies are a problem
(d) maybe you should move this discussion to #hapi
Thanks for your responses :smile: :
Maybe https://confluence.hl7.org/display/FHIR/Open+Source+Implementations provides you with an alternative?
I don't see any alternative to HAPI on this page for a Java-Backend-Solution
(a) are you just using HAPI FHIR core?
Yes
(b) why are you changing it frequently
In the first place because of vulnerabilities (e.g. https://mvnrepository.com/artifact/ca.uhn.hapi.fhir/hapi-fhir-base/6.10.5)
(c) which dependencies are a problem
to name just a few: sqlite, telemetry libs, plantuml... In addition, HAPI with its dependencies is > 500 MB, which is quite a lot for just parsing an XML file. We develop a huge software product with lots of components and lots of third-party dependencies, so we try to add as few new dependencies as possible, as we are already deep inside dependency hell :).
(d) maybe you should move this discussion to #hapi
That was my first thought but as I'm looking for an alternative to HAPI I have chosen this channel ^^.
I don't know why hapi-core depends on plantuml. Nor where telemetry-libs fit into the picture. But sqlite.. that's my usage
So why not create your own import tool based on some XML parser / XPATH Library ?
We ran into a very similar problem last year when we tried to replace our IBM/LinuxForHealth based R4-only Java implementation with the hapi-fhir one to support R5. Including the model libraries (org.hl7.fhir.r4/r4b/r5) alone was mostly okay (except for a few version differences), but when you include the convertors library to support conversion between the various formats, even more unwanted dependencies come with it (earlier dstu versions, httpclient5, plantuml, sqlite, saxon-he, nimbus-jwt, etc.). While it is possible to exclude these and get a smaller dependency set, I agree with @Gordon that it can cause dependency hell for a project.
Our requirement was to support only the Terminology module, so in the end, we ended up generating LinuxForHealth R5 classes for only that module and added our custom converters.
You could give LinuxForHealth a try, as that is Java-based and has only a few dependencies but it is very likely that the project is dead, so eventually a replacement is needed (it is one of the hottest topics these days at my company). The last release was in Dec 2022 and there weren't that many changes after it, plus there is no R4B and R5 support at all.
@Jose Costa Teixeira do we need to depend on plantuml?
@David Otasek do we need to depend on the telemetry stuff?
I will think about the sqlite dependency, whether there's a better way to package that
I'll check. This might be something we inherit from HAPI.
Grahame Grieve said:
Jose Costa Teixeira do we need to depend on plantuml?
I had no idea that hapi depended on plantuml. It is used for diagrams, so I don't see why
But I can look at the code to see if I find any use for it
I don't see any reference to telemetry dependencies in the core libs. Are we looking for a particular groupId/artifactId?
@Lloyd McKenzie appears on the git blame for the plantuml dependency.
It's for narrative generation for ExampleScenarios.
Not something that would typically be relevant in production environments, but not sure how to split that out.
sorry, i thought we were talking of hapi server. I forgot this was also used for our rendering
HAPI server also needs the ability to generate narratives. There just won't be many production systems that use ExampleScenario. (That said, I could see the same library eventually being used for PlanDefinition and maybe some other resources that would be more common.)
but what does the plantUML library actually do in that case?
Generates an SVG of the flow of the interactions in the scenario for inclusion in the narrative.
I think we’ll have to figure out how to make that and SQLite pluggable
Could the models and serializers just be in a separate package that doesn't have dependencies?
Different kind of overhead. we’ll talk about it
Just in case anyone is interested we have created a single Maven module that aggregates the necessary FHIR R4, R4B and R5 model libraries (+ convertors) into a single dependency (mainly for OSGi use but hopefully it can be used in other Maven modules as well). It excludes anything that is not relevant in terms of parsing and composing resources and converting between the supported versions. Plus we have added type support for some of the typical resource operations as well. Take a look here: https://github.com/b2ihealthcare/fhir-core
Hey everyone,
As listed under SMART 2.0 (https://hl7.org/fhir/smart-app-launch/STU2/scopes-and-launch-context.html#finer-grained-resource-constraints-using-search-parameters), we can provide fine grained access to limit the data access (not just based on category/concept/FHIR resource type but also certain type of FHIR records).
Example: We can limit access to not just Observation but Laboratory Observation using a scope in this form - patient/Observation.rs?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory
My understanding is that the Apps that need to access the FHIR API would be required to register for these scopes (with search parameters) during the app/client registration. Is that correct?
So I cannot have an app registered with scope just as "patient/Observation.rs" and later ask for "patient/Observation.rs?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory" during Authorization flow at the time of user authorizing the app to access only laboratory observation data.
Can someone throw some more light on this?
We do not specify anything about apps needing to specify scopes at registration time. That might be a server specific behavior but there is no requirement around it.
In any case, if a server does require scopes to be registered at registration time, the important thing it might want to enforce is that the scopes requested at authorization time are a subset of those.
In your example, if the app registered for generic access to observations, it would presumably be fine for the app to request a subset of observations at authorization time (say, only vital signs).
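That subset rule can be sketched as a small check (a hypothetical helper, not part of any SMART reference implementation; the simplification that a registered filter must appear verbatim in the request is my assumption, and real servers may also accept narrower value lists):

```python
from urllib.parse import parse_qs

def parse_scope(scope):
    """Split 'patient/Observation.rs?category=...' into (base, filter dict)."""
    base, sep, query = scope.partition("?")
    return base, (parse_qs(query) if sep else {})

def is_subset_scope(requested, registered):
    """True when the requested scope grants no more than the registered one:
    same base, and every filter the registered scope imposes is also
    imposed by the request."""
    req_base, req_params = parse_scope(requested)
    reg_base, reg_params = parse_scope(registered)
    if req_base != reg_base:
        return False
    return all(req_params.get(name) == values
               for name, values in reg_params.items())
```

Under this check, an app registered for the generic `patient/Observation.rs` may request the laboratory-filtered form at authorization time, but not the other way around.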
Thank you Josh for providing some clarity here.
I understand that the scopes at the time of authorization need to be a subset, but I'm trying to understand the use case for the app registering for the generic Observation scope and then using only the vital-signs subset during authorization, while it can get all of the observations anyway based on the generic scope it's registered with.
I see it as a valid scenario, when the client is registered for (example) 2 scopes, patient/Observation.rs and patient/Encounter.rs, and during authorization if it needs to access encounter information then it may only pass patient/Encounter.rs scope in the authorization request (subset of scopes that the client is registered for).
Further, in OAuth 2 terminology, I would think that the scope "patient/Observation.rs?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory" is different from the generic scope "patient/Observation.rs" and not a subset (even though in usage we consider it a subset, so as to have limited access to the Observation resource). I would be curious to know if anyone has implemented this with any OAuth server already?
Any OAuth server would reject the authorization request if it includes the scope "patient/Observation.rs?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory" while the client generating that request is registered for the superset scope "patient/Observation.rs". Also, it would not make any sense for the client to request a subset while it has access to the superset, so I'm trying to understand the rationale behind it.
The use case for requesting fewer than "the maximum scopes you're able to request" at runtime is incremental authorization. Start with the least access you need to do something useful, and if you build trust with a user they may grant more access over time. Think of this like a "give location permission?" dialog in a mobile app that appears just at the time it is needed.
@Josh Mandel
One follow up question here on search parameter format.
Suppose an app needs to access the Observation resource (patient level) for vital signs and laboratory (in SMART v2); how do these scopes need to be assigned to the app?
In the first option, we assign the scope twice but with different search parameter values for category. In the second option, we concatenate the categories with & (just like in HTTP requests). I did not find a link describing how to support multiple search parameters on the same resource type/scope. Having said that, my understanding is that option 1 is the correct one, but I would like that to be confirmed.
Appreciate your thoughts.
If you include two categories with an & the scope would only apply to data that had both of those categories at the same time
Josh Mandel said:
If you include two categories with an & the scope would only apply to data that had both of those categories at the same time
Thanks for prompt response. so option 1 seems to be the correct format for multiple search parameters on same resource type?
If you want to express a logical OR, like "laboratory or vital-signs", you can use the FHIR , syntax.
See "search for unions of results" on https://www.hl7.org/fhir/search.html
Sorry for confusion. My question is not on what/how parameters are to be passed on FHIR API in this case, but what "scope" (as per SMART v2) an app should have in order to retrieve Observation with multiple search parameters values (like Vital Signs as well as Laboratory)
Since we use the fhir query syntax in our Scopes, my message above is applicable to scope design.
Ok, so in that case scope (that client would need to have) would be "patient/Observation.rs?category=http://terminology.hl7.org/CodeSystem/observation-category|vital-signs,http://terminology.hl7.org/CodeSystem/observation-category|laboratory"?
Appreciate that @Josh Mandel
Josh Mandel said:
If you want to express a logical OR, like "laboratory or vital-signs", you can use the FHIR , syntax. See "search for unions of results" on https://www.hl7.org/fhir/search.html
The Clinical Scope Syntax railroad diagram seems to show that the , character would not be allowed given a strict interpretation. "Finer-grained resource constraints using search parameters" also indicates that the syntax allows "a series of param=value items separated by &". Do you know if the intention was to restrict the syntax to only allow AND conditions? Or is it an error in the spec, and OR conditions were meant to be supported as well, through the use of both & and ,?
This is the same convention as FHIR search: parameters can only be AND'd together, but when a , occurs within a value, this is a disjunction over the values for that parameter. See https://build.fhir.org/search.html#combining
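That convention (& is AND across parameters, , is OR within one value) can be illustrated with a small matcher; this is a sketch, and `filter_matches` plus the bare `sys|code` tokens below are made up for illustration:

```python
from urllib.parse import parse_qs

def filter_matches(query, resource_values):
    """'&'-separated params are ANDed; ','-separated tokens inside one
    parameter's value are ORed, mirroring FHIR search semantics.
    resource_values maps a parameter name to the set of 'system|code'
    tokens present on the resource."""
    params = parse_qs(query)
    for name, values in params.items():
        # A repeated parameter appears as multiple list entries: each is ANDed.
        for value in values:
            alternatives = set(value.split(","))  # ',' = OR within one value
            if not alternatives & resource_values.get(name, set()):
                return False
    return True
```

So `category=sys|laboratory,sys|vital-signs` matches a resource carrying either category, while `category=sys|laboratory&status=final` requires both conditions to hold.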
Got it, the , occurs within the value string in that case.
Another question regarding search parameters. Are there any restrictions on what param values are in scope for the feature? For instance, are all the search parameters for Observation in scope for restricting permissions?
All of the search parameters are fair game as far as the syntax (the scope "language") is concerned. But in real life, you should not expect that servers will be able to handle arbitrary scopes that you write in this language; they are likely to support a limited set. For example, in the US context there are specific granular scopes that ONC requires certified products to support (for observations, this basically means permissions by category, for the categories defined in US Core profiles).
I think you're referring to https://www.federalregister.gov/d/2023-28857/p-1245 in that case, correct?
Yes, or more practically https://build.fhir.org/ig/HL7/US-Core/scopes.html#the-following-granular-scopes-shall-be-supported is the kind of guidance that gets distilled from the onc requirements
Hello @Josh Mandel. Maybe I'll repeat @Sagar Shah, but we get continuous requests from users of our FHIR server. Is it assumed that the parameters available for finer-grained scopes can be used by the client dynamically? For example, the authorization server and FHIR server allow the use of patient/Observation.rs?category=http://terminology.hl7.org/CodeSystem/observation-category|**laboratory**, but the client will request patient/Observation.rs?category=http://terminology.hl7.org/CodeSystem/observation-category|**imaging**.
Is it assumed that systems supporting the SMART protocol, such as authorization servers, accept such parameters for scopes in the same way that the FHIR specification handles search parameters? I ask this because, according to the specification, it is expected that the client can request any search parameters as scope parameters. However, the typical behavior for authorization servers does not include processing requests for scopes that are unknown to them. This is especially relevant when using chained parameters or modifiers. Since the entire string, including the suffix, is considered a scope, and likely varies with each request when using modifiers, how is an authorization service expected to handle such requests when it simply does not recognize the parameter values? I would like to hear your opinion. Thank you
As a server you can publish a list of the scopes you support (https://build.fhir.org/ig/HL7/smart-app-launch/conformance.html#response, see "scopes_supported"). In general it is useful if you are able to parse scopes, but that isn't required by the SMART specification; and even if you can parse scopes, the expectation is not that you can provide granular access to any scope that a client wants to construct.
If you are looking to pass US Health IT certification, there is a relatively short list of specific granular scopes you need to support:
SMART v2 scope syntax for patient-level and user-level scopes to support the “permission-v2” “SMART on FHIR® Capability”. If using US Core 6.1.0, this includes support for finer-grained resource constraints using search parameters according to section 3.0.2.3 of the implementation specification at § 170.215(c)(2) for the “category” parameter for the following resources: (1) Condition resource with Condition sub-resources Encounter Diagnosis, Problem List, and Health Concern; and (2) Observation resource with Observation sub-resources Clinical Test, Laboratory, Social History, SDOH, Survey, and Vital Signs
From https://www.healthit.gov/test-method/standardized-api-patient-and-population-services ; more details at https://build.fhir.org/ig/HL7/US-Core//scopes.html for US Core
Thought I'd chime in on this quick since this has been something I've been watching closely as well. Over time my concern around this has waned quite a bit.
The concern I've had around this over the last few years is that, theoretically, these query-based scopes could introduce an arbitrary set of scopes that breaks with the traditional OAuth2 scope paradigm. Even the simple "birthdate=1990" example in the IG shows this: supporting that example could result in at least 120+ scope values all on its own, and that's a very simple example.
That being said- I'm seeing just as many requirements around authorization servers being transparent about what list of scopes it supports ("scopes_supported" for example). This leads me to believe that there was never really an intention for these scopes to be used in a completely arbitrary, infinite fashion, and instead are really just a useful tool to provide a targeted, but still well defined, level of granularity. From my perspective as someone coming from the general identity industry, these requirements are still within generally accepted identity industry practices.
At least for auth code apps, that is especially true, since the large majority of end users won't be able to comprehend SMART scopes directly. So auth servers would need to create a human-readable description, with icons, etc. to represent each scope they display. Which means finite, pre-defined scopes.
What is the latest on the story of generating Java data models for profiles?
(I checked fhir-codegen but I see it has this issue)
( blast from the past https://github.com/jkiddo/hapi-fhir-profile-converter )
@Vadim Peretokin I might have a colleague that would like to pitch in some effort
To the MS codegen project
works, but there's some open issues with it
What are those?
don't remember :-(
@Grahame Grieve and this class here https://github.com/hapifhir/org.hl7.fhir.core/blob/master/org.hl7.fhir.r5/src/test/java/org/hl7/fhir/r5/profiles/PETests.java illustrates how it can be used, correct? It isn't wrapped in any executable or something like that already, right?
that tests out the underlying engine.
I don't think it tests out the generated code itself
This is a great start!
I've played around with the code generation and found the following issues, sorted in priority:
Would you like me to file them so we can keep track? Both me and @Jens Villadsen agree this is something worth developing further, perhaps we can get some community traction on this :)
This is gonna be a fun ride!
ca.uhn.fhir.model.api.annotation.* are used in the generated results.
6 missed a 'not'. But you explained it in 8, so nvm
@Vadim Peretokin how to reproduce #2?
I'm struggling to find a definitive list of "magic" LOINC codes for the Vital Signs Profile. There is of course the table on the main page. But I'm finding in testing that there are other "magic" codes that trigger during validation that I can't find documented anywhere.
2710-2 (O2 Saturation) triggers the oxygensat profile, requiring 2708-6 to be present.
3141-9 (Body weight) triggers the bodyweight profile, requiring 29463-7 to be present.
8306-3 (Body height) triggers the bodyheight profile, requiring 8302-2 to be present.
Where can I find this complete list of codes? I tried downloading the various packages associated with these profiles, and I'm not finding this "magic" list (in a full exhaustive form) anywhere.
The list of codes is ever-changing. The rule is that if you're sharing a vital sign (regardless of which code you use to represent it internally), you MUST comply with the vital sign profile. The validator does its best to detect LOINC or SNOMED codes for vital signs and enforce the profiles. However, the rule applies even if you're using the "Dr. Bob's Favorite Observations" code system. The validator may not recognize the codes and thus not know that you're transmitting a vital sign and so it won't yell, but you'd be equally non-conformant.
Even if it's just a snapshot in time and will be expanded later, is there somewhere I can see the current list of magic numbers?
I think that validation is invalid.
8306-3 is body height while lying down; 8302-2 is just body height.
To say you cannot use the code for the measurement you took, and have to use this other related but not necessarily identical code, seems like you are smearing the data.
Essentially saying to LOINC, "You might as well delete all those more granular measurement codes as we aren't going to allow FHIR resources to be valid if they are used."
No one is saying you can't send the code you wish. But you must ALSO send the standard generic code as well. That ensures there's a base level of interoperability across all systems on vital signs.
@Justin Ware The list of codes used by the validator is in the HAPI code. I'm not aware of a separate place it's published. What is your reason for wanting to know the list of codes?
An observation only has a single code, so I could put both codes in CodeableConcept.coding, but am I not asserting that they are translations if I do so?
You are indeed asserting that they are translations - and they are. More specifically, both are proper codings of the real-world concept in their particular code system. If you measure someone's height while lying, then you have a measurement that is a 8306-3, but it is still also an 8302-2.
Lloyd McKenzie said:
Justin Ware The list of codes used by the validator is in the HAPI code. I'm not aware of a separate place it's published. What is your reason for wanting to know the list of codes?
I'm essentially needing to "clean" up data that would otherwise fail validation. So if the validator says, "this looks like an [xyz] vital sign observation, but you're missing [123] required LOINC code", I want to be able to proactively append the required LOINC code to the Observation.code.coding for every known instance that the profile "expects".
And to further clarify -- I'm needing to "clean" the data because this is part of a transformation to FHIR from non-FHIR data. So I can't expect the source data to include the FHIR magic codes all the time.
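For illustration, that clean-up step might look like the sketch below. The granular-to-generic pairs are only the three mentioned in this thread, not the validator's authoritative list (which lives in the Java validator source and changes over time), and the helper name is made up:

```python
# Only the pairs mentioned in this thread; the full list is maintained
# inside the Java validator and is subject to change.
GENERIC_FOR = {
    "2710-2": ("2708-6", "Oxygen saturation in Arterial blood"),
    "3141-9": ("29463-7", "Body weight"),
    "8306-3": ("8302-2", "Body height"),
}

LOINC = "http://loinc.org"

def add_generic_vital_sign_coding(observation):
    """If Observation.code.coding carries a known granular LOINC code,
    append the matching generic 'magic' code unless already present."""
    codings = observation.setdefault("code", {}).setdefault("coding", [])
    present = {c.get("code") for c in codings if c.get("system") == LOINC}
    for code in list(present):
        generic = GENERIC_FOR.get(code)
        if generic and generic[0] not in present:
            codings.append({"system": LOINC,
                            "code": generic[0],
                            "display": generic[1]})
    return observation
```

This follows the "both codes are translations of the same real-world concept" reasoning above: the granular code stays, and the generic code is added alongside it.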
I'd recommend you plan to mark anything that's a blood pressure, weight, body temperature, etc. with the appropriate profile codes then - irrespective of their code system, provided you have a clue what the codes mean. (Sometimes you might not.) However, if you only care about the ones the Java validator is currently checking, you can look here: https://github.com/hapifhir/org.hl7.fhir.core/blob/master/org.hl7.fhir.validation/src/main/java/org/hl7/fhir/validation/instance/advisor/BasePolicyAdvisorForFullValidation.java
So one of the use cases (that started my journey trying to solve this) is transforming C-CDA Vital Sign Observations to FHIR. The LOINC code 2710-2 used to be part of the C-CDA ValueSet a long time ago, and was still being used in a particular set of C-CDAs. But I can find zero reference to this LOINC code anywhere in any version of the original FHIR vitalsigns profile, nor in the related US Core Vital Signs profiles. And it's a deprecated LOINC code that also can't be found in the C-CDA documentation unless you use the NIH tools to inspect really old versions of the ValueSet definition. So it was tricky to even determine "is this correctly an oxygen saturation measurement in C-CDA that should be profiled as such in FHIR", and then from there to try to determine "what other codes are going to cause this same issue later, so I can get ahead of the problem?"
I’ll mention that this “hidden” list of vital sign codes is something that we’re hoping to clean up in R6 by documenting it. The current list is actually maintained inside the Java validator.
that's where it is, but it's subject to the authority of OO, so create Jira issues if you think there's codes that shouldn't be, or there's codes missing
Justin Ware has marked this topic as resolved.
I set the domain - and now http://sql-on-fhir.org/ looks at GitHub pages.
Let's decide how we want to publish releases. Do we want to have the latest release on http://sql-on-fhir.org/?
If we stay backward compatible, do we need different published versions? Or can we live with only the current one?
@John Grimes 🐙 @Arjun Sanyal @Ryan Brush
Potentially we can use paths like http://sql-on-fhir.org/2.0.0, like the official IG publisher does. It will require some GH Pages engineering to get all versions on the same site (probably an intermediate bucket).
https is not working for me, maybe it's not enforced in the GH Pages setting?
Thanks for setting up the domain @Nikolai Ryzhikov 🐬!
I agree that we could make the current the most prominent, but we need the published versions for the IG publication process (and for stable URLs for users to refer to).
Here is my proposal for how to organise the content:
Playground: https://sql-on-fhir.org/playground
CI build: https://sql-on-fhir.org/ig/latest
v2: https://sql-on-fhir.org/ig/2.0.0
https://sql-on-fhir.org -> 302 Redirect -> https://sql-on-fhir.org/ig/latest
Content structure within GitHub Pages:
fhir.github.io/sql-on-fhir-v2/
├─ index.html [meta redirect to ig/latest]
├─ playground/
│ ├─ [built sof-js site]
├─ ig/
│ ├─ [IG output from release build]
│ ├─ latest/
│ │ ├─ [IG output from CI build]
│ ├─ 2.0.0/
│ │ ├─ [v2.0.0 release build]
│ ├─ [other release directories]
Jesse Cooke said:
https is not working for me, maybe it's not enforced in the GH Pages setting?
@Nikolai Ryzhikov 🐬 I don't think the domain is verified, did you get some instructions to add a TXT record?
Ok, I think we are getting closer!
I have mostly implemented this (I used "extra" instead of "playground" when I saw that you guys had used this).
It looks pretty good, except for the TLS thing and also some formatting problems with the header of the "extra" sections.
The way I did it was to set up a "releases" branch which has the historical release content (and nothing else).
This content serves as a base, and then I layer in the current build and the "extra" content, then package the whole lot up and send it to GitHub Pages.
I've fixed that problem with the incorrect URL for the CI build on the history page.
@Nikolai Ryzhikov 🐬 for the broken image links on the "extras" pages: you need to prefix the path with /ig/
See
image.png
@Martijn Harthoorn @Ward Weistra, @Nikolai Ryzhikov 🐬 and team have pointed out that there's plenty of published packages that have genuine errors in them - invalid resources with issues such as wrong data types, missing required elements, wrong values
Would it be useful if we defined a second kind of RSS feed that allowed the validators to publish their findings about the published packages, and then package registries could crawl those feeds and adorn the packages in the registry with that information?
To my knowledge, there are 3 validators - mine, yours, and Nikolai's. We could all do this?
As someone generally working with packages, I would be equally interested in those errata packages.
This feed would be independent of the publisher's (i.e. the organization publishing the package) package feed? It would be something "validator services" could publish containing their evaluation of published packages? And which registries could consume and display alongside their package lists? This isn't the publisher saying, "yeah, all my packages are perfect", right? And the feed could be updated at any time as validators find new things to check for?
Gino Canessa said:
As someone generally working with packages, I would be equally interested in those errata packages.
Is this errata? I see it more like a Flesch-Kincaid reading score on your package.
Elliot: I would hope it is more detailed than just the score =).
Knowing that things are validator-confirmed invalid would allow me to take a different code path in my tooling.
@Grahame Grieve Good idea! We've been having that on the wishlist for longer, to run all packages through our validator.
Or better still have updates to the packages to resolve the issues?
Is that something a technical correction could do to just the package? (yes with a new version number on the package - last number in the semver?)
(along with an errata page that highlights the corrections that are made in the package)
updates to the packages to resolve the issues
that requires a new version. And probably editorial changes, or even changes of substance
Yes, but would a single page describing the errata changes linked to the new version number/package be ok?
(and not a complete website - hence a very low bar for the changes - especially since everyone implementing them has to make the changes locally anyway: individually in the SDKs for the spec, and by hand-tweaking for IGs)
so here's a proposal for a feed like that:
<?xml version="1.0" encoding="UTF-8"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:fhir="http://hl7.org/fhir/feed" version="2.0">
<channel>
<title>Java Validator Package Validation Feed</title>
<description>Validation record of FHIR Packages validated by the Java Validator</description>
<link>http://fhir.org/guides/validation/validation-feed.xml</link>
<generator>HL7, Inc FHIR Publication tooling</generator>
<lastBuildDate>Tue, 27 Aug 2024 08:49:02 +0000</lastBuildDate>
<atom:link href="http://fhir.org/guides/validation/validation-feed.xml" rel="self" type="application/rss+xml"/>
<pubDate>Tue, 27 Aug 2024 08:49:02 +0000</pubDate>
<language>en</language>
<ttl>600</ttl>
<item>
<title>validation for hl7.fhir.us.davinci-pas#2.1.0-preview</title>
<description>Validation for Davinci PAS: no errors found. (25 warnings, 150 hints)</description>
<link>http://fhir.org/guides/validation/2024-10-09/hl7.fhir.us.davinci-pas#2.1.0-preview.json</link>
<guid isPermaLink="true">http://fhir.org/guides/validation/2024-10-09/hl7.fhir.us.davinci-pas#2.1.0-preview.json</guid>
<fhir:packageId>hl7.fhir.us.davinci-pas#2.1.0-preview</fhir:packageId>
<pubDate>Wed, 09 Oct 2024 17:29:43 +0500</pubDate>
<atom:link href="http://hl7.org/fhir/us/davinci-pas/STU2.1-preview/package.tgz" rel="about" type="application/gzip"/>
</item>
</channel>
</rss>
documentation for item:
<item>
<title>{text}</title>
<description>{text}</description>
<link>{link to bundle containing operation outcomes}</link>
<guid isPermaLink="true">{same link}</guid>
<fhir:packageId>{package id being validated}</fhir:packageId>
<pubDate>Wed, 09 Oct 2024 17:29:43 +0500</pubDate>
<atom:link href="{link as found in source feed}" rel="about" type="application/gzip"/>
</item>
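As a sketch of how a registry or tool might consume such a feed (a hypothetical consumer-side helper; the feed format itself is still a proposal, so the element names are provisional):

```python
import xml.etree.ElementTree as ET

# Namespace used by the proposed feed (taken from the sample above).
NS = {"fhir": "http://hl7.org/fhir/feed"}

def parse_validation_feed(xml_text):
    """Return (packageId, outcome-bundle link) pairs from a validation feed.

    Hypothetical helper: treat the element names as provisional until the
    feed format is finalized.
    """
    root = ET.fromstring(xml_text)
    return [
        (item.findtext("fhir:packageId", namespaces=NS), item.findtext("link"))
        for item in root.iter("item")
    ]
```

A registry could poll the feed at the advertised `ttl` and display the linked OperationOutcome bundle next to each package version.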
I have a QuestionnaireResponse with a contained Questionnaire that fails validation when answering with a valueString against the contained Questionnaire's item.type being set as open-choice. When validating against validator.fhir.org R4 4.0.1, I get back "Option list has no option values of type string".
Below is a trimmed-down example that fails validation. From reading other discussions here, it sounds like this is supported in R4 and open-choice is the correct type on the question item; I just haven't found an example of how to properly build this. I assume the Questionnaire being a contained resource on the QuestionnaireResponse shouldn't matter.
Hopefully unrelated - the itemControl code we are using does not belong in the ValueSet (known issue with a third party).
{
"id":"9bb747d7-2666-47f2-9c79-20bc05198448",
"meta":{
"versionId":"5"
},
"contained":[
{
"id":"ed364266b937bb3bd73082b1",
"item":[
{
"extension":[
{
"url":"http://hl7.org/fhir/StructureDefinition/questionnaire-itemControl",
"valueCodeableConcept":{
"coding":[
{
"code":"editableDropdown"
}
]
}
}
],
"id":"specimen-source",
"answerOption":[
{
"valueCoding":{
"code":"U",
"display":"Urine"
}
},
{
"valueCoding":{
"code":"B",
"display":"Blood"
}
},
{
"valueCoding":{
"code":"S",
"display":"Saliva"
}
}
],
"code":[
{
"code":"specimen-source"
}
],
"linkId":"specimen-source",
"text":"Source of specimen",
"type":"open-choice"
}
],
"name":"Test Open Choice question",
"status":"active",
"subjectType":[
"Patient"
],
"resourceType":"Questionnaire"
}
],
"item":[
{
"answer":[
{
"valueString":"spinal tap"
}
],
"linkId":"specimen-source",
"text":"Source of specimen"
}
],
"questionnaire":"#ed364266b937bb3bd73082b1",
"status":"in-progress",
"subject":{
"display":"Test Patient",
"identifier":{
"value":"4"
},
"reference":"Patient/4",
"type":"Patient"
},
"resourceType":"QuestionnaireResponse"
}
That looks valid to me.
why? The definition says it has a list of values that are of type Coding, but the answer has a type of string
open-choice is a set of either codes from the referenced set, or a string if no value is appropriate from the set.
This was changed in R5, but is valid in R4/R4B
https://hl7.org/fhir/r4/codesystem-item-type.html#item-type-open-choice
Answer is a Coding drawn from a list of possible answers (as with the choice type) or a free-text entry in a string (valueCoding or valueString).
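The R4 distinction quoted above can be sketched as a tiny local pre-check (a hypothetical helper for illustration, not actual validator code):

```python
def answer_type_allowed(item_type, answer):
    """Sketch of the R4 rule under discussion: 'choice' answers must be
    Codings drawn from the option list, while 'open-choice' additionally
    permits a free-text valueString. Hypothetical helper, not validator code.
    """
    if "valueCoding" in answer:
        return item_type in ("choice", "open-choice")
    if "valueString" in answer:
        return item_type == "open-choice"
    return False
```

Under that reading, the `valueString` answer of "spinal tap" in the example above should pass against an `open-choice` item in R4/R4B.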
The choice type matches the behavior you've described.
The 2 control types that often use this type are a combo-box that has an edit control, or an auto-complete style search control.
ok. I missed that. fixed next release
Where's that code so I can do a review on it to see which parts are missing (and compare to my validation)
Any ideas for workarounds to save this? We're on HAPI FHIR 6.8.0, and I assume this fix will take a while to make it all the way through the pipeline for HAPI to consume. I was thinking of just adding the provided valueString answer into the Questionnaire's answerOption list to pass validation, but that feels very wrong. I'm not sure if HAPI allows overriding base HL7 rules.
Interestingly, HAPI has support for open-choice to override validation, but all the code is commented out. I can post to the GitHub repo to find out why.
@Grahame Grieve I just saw the fix and release, we'll test it out today
Dave Hill created a new channel #PACIO Personal Functioning and Engagement.
Nikolai Ryzhikov created a new channel #Babylon (Aggregate FHIR terminology).
Jean Duteau created a new channel #Da Vinci PR.
Sanja Berger created a new channel #german/dguv.
Alejandro Benavides created a new channel #HL7 CAM.
Biswaranjan Mohanty created a new channel #Enhancing Oncology.
Abbie Watson created a new channel #fsh-tooling.
deadwall created a new channel #google-cql-engine.
Artur Novek created a new channel #FHIRest.
Aaron Nusstein created a new channel #US Behavioral Health Profiles.
Nagesh Bashyam created a new channel #UDS-Plus.
Grahame Grieve created a new channel #FHIR Foundation.
Grahame Grieve created a new channel #FHIR for Pets.
https://zoom.us/j/94516094652?pwd=Sk01NzBiMDdSTjRoSVFYemFYWlFiUT09
Meeting ID: 945 1609 4652
Passcode: 215929
Agenda:
C'thon recap
Milestone #1 Review sheet: https://docs.google.com/spreadsheets/d/1Jg2ypM6QNUfyMTgnkQ_jvI0x5lvKxb03D2CHfMAxxDk/edit?usp=sharing
Dev Days: who's coming? what needs prep?
Agenda for today's meeting:
I had a chance to review the FHIR Community Process Requirements v1 document which looks like the most current official source and agree with @John Grimes 🐙 from our last call that the requirements would not be difficult for us to meet.
The main non-tactical question I have is around the concept of "FCP Participant".
The reqs state that any entity including "individual" can become a participant (FCP101) and also states that "any registration information e.g. business/company registration details" (FCP102) shall be provided.
Since we are currently organized as a loose group of volunteers (some of whom work for companies with commercial interests) what should be our form of organization?
Is it recommended or required our group "register" in some sense?
Let's discuss on the call today. Thx!
cc: @Josh Mandel @Grahame Grieve
Zoom link:
https://zoom.us/j/94516094652?pwd=Sk01NzBiMDdSTjRoSVFYemFYWlFiUT09
Meeting ID: 945 1609 4652
Passcode: 215929
Sorry guys, I will skip today's meeting
Possible agenda for today's meeting:
We will also have @Kiran Ayyagari dropping in to tell us about Safhire.
Zoom link:
https://zoom.us/j/94516094652?pwd=Sk01NzBiMDdSTjRoSVFYemFYWlFiUT09
Meeting ID: 945 1609 4652
Passcode: 215929
During a review I noted a couple of typos in the casing of resource types.
https://github.com/FHIR/sql-on-fhir-v2/pull/262
Zoom link:
https://zoom.us/j/94516094652?pwd=Sk01NzBiMDdSTjRoSVFYemFYWlFiUT09
Meeting ID: 945 1609 4652
Passcode: 215929
A standard implies unification, but since the introduction of $operations there are two levels of operation naming: the HTTP verb, i.e. PUT/DELETE/etc., and the URL part, i.e. $validate/$submit/etc., which is actually an operation on an $operation, like POST $submit.
A unified solution could include $put/$delete/etc. as part of the FHIR API, with HTTP verbs like GET and POST needed only at the transport level.
Since a standard implies unification, future versions of resource-oriented FHIR might even use resources for the API itself, e.g. a Request with operationName and operationParameter properties and a Response, while HTTP (or other transport protocols) would be used only to transport the requests and responses.
hi Alexander, interesting ideas but I am not seeing the advantages.
Part of what makes FHIR successful is that people (and software) already know HTTP and REST. Why replace PUT with $put when HTTP (and all the tools etc) support PUT already?
What do you mean by "only necessary on transport level"? Is GET transport level but $put not somehow?
Operations are intended for the things that HTTP doesn't support out of the box.
What would the advantage of replacing the basic REST verbs with these other things?
Are you trying to do SOA, where everything is a named operation?
REST seems like a race to the bottom in terms of sophistication (a few dumb verbs), but it was successful.
Some other paradigm will come along, but it would need to be a popular one if FHIR was to get leverage from it as it has from REST.
While not prohibited, hopefully people are not defining operations to do things exactly equivalent to what can be done via a simple RESTful interaction. I generally delineate 'simple' interactions for REST and more complicated requests for operations. For example, when creating a Bulk Data Export request we could have designated a resource that is POSTed with a request and used to track progress, but the community chose to generally model those things as operations instead of as discrete request data objects.
-
Arguments could be made over which is better, purer, etc., but I think we have a pretty good balance between the simplicity of RESTful calls for 'general' interactions and good frameworks for the more complicated stuff.
-
Note that none of this prevents you from defining a different paradigm / API surface / etc., and even proposing it for inclusion in the spec. But given the normative state of those areas of the specs and the adoption they have seen, I would discourage something that is mostly a modification to the existing REST API (I doubt it would get enough support to be core, and thus would only hurt interop instead of expanding it).
I am not sure exactly what you are driving towards here. We already have other transport paradigms in FHIR (e.g., Messaging). Since RESTful calls are ambiguous without the verb, it is included in the resources used (Bundle.entry.request.method).
-
I agree that it could have been done differently (e.g., by using URL segments as you describe, headers, other elements, etc.), but it would be a very breaking change to a lot of implementations to modify that today. If there is a compelling reason (e.g., as Rik talks about), it does not hurt to describe it. There is text around a few different approaches on the FHIR Services page (see Implementation Approaches). I will also note the Orchestration, Services, and Architecture Work Group (formerly Service-Oriented Architecture) which (I think =) covers some of these types of interests.
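For illustration, the way the verb rides inside the resource for non-HTTP exchange can be sketched like this (minimal, hypothetical Patient content):

```python
# Sketch: in a FHIR transaction/batch Bundle the HTTP verb is carried in
# Bundle.entry.request.method, so the intended interaction is unambiguous
# even when the bundle is moved over a non-HTTP transport.
entry = {
    "resource": {"resourceType": "Patient", "id": "example"},
    "request": {"method": "PUT", "url": "Patient/example"},
}
bundle = {"resourceType": "Bundle", "type": "transaction", "entry": [entry]}
print(bundle["entry"][0]["request"]["method"])
```

The same shape works for messaging-style delivery: the receiver reads the method from the entry rather than from the wire.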
The one area where I see operations as being superior to REST is if you want to avoid having to orchestrate multiple REST APIs for something that needs to be done atomically. So if you need several resources and other information like an event code that must be handled as a unit, operations are a good alternative to REST (alongside batches, transactions, and messages).
Hi Rik, Gino, Cooper, thank you for being willing to answer my question. I see I must clarify myself.
1. Unifying where operations are placed may simplify FHIR server implementation, usage, and support. As far as I know, out of the box HTTP PUT is rarely used and can only write a file to the filesystem, so special software for FHIR-specific things like validation and database manipulation must be developed anyway. So I don't see simplicity; on the contrary, I see server configuration for PUT, more complex operation resolution, and a more complex PUT-based client instead of a simple GET/POST-based web browser. The same goes for the other verbs. I can imagine a REST API with the operation as the verb and the parameters in the URL, like POST CoverageEligibilityRequest + GET CoverageEligibilityResponse. I can imagine an SOA API with the verb as the transport method (with or without a body) and the URL as operation+parameters, like POST Patient/$put or GET Patient/$delete. But I cannot understand the idea of a combined REST+SOA API with both DELETE Patient and POST CoverageEligibilityRequest/$submit. Even backward compatibility seems irrelevant for FHIR here.
2. Keeping all the data about a request/response together in a single JSON may simplify manipulating that data. Using Bundle with .timestamp, .entry.request.method, .entry.request.url, etc. is a really good solution.
hi Alexander
I don't know why you would say that PUT is rarely used. Do you mean because a single resource update is not useful? I agree that other orchestrations may be needed, and that is where operations do come in. But PUT is useful, it seems.
PUT writes to the FHIR server. PUT is not limited to some sort of first-level commit, as you might be suggesting. However you choose to let clients update data, the server will need to do the same work (be it filesystem, database, etc.), so I don't yet see why PUT would not be sufficient.
Also the server can already do whatever validation it chooses to, on a PUT.
So I don't yet see any rationale for change based on these factors.
We use the base HTTP verbs without operations when the semantics fit - create, update and delete. We use operations for things that don't fit the semantics of the HTTP verbs. $submit is not the same thing as 'create'. No resource is created. No resource id is returned. No existing instance is revised. There are a lot of situations that don't fit into the limited CRUD semantics of the HTTP verbs. But that doesn't mean we should avoid using those verbs when we can. I'd say that 75%-plus of FHIR interoperability is over the base HTTP verbs. Custom operations are 15-20%, and the rest is messaging exchanges that don't involve HTTP at all. As soon as you get into the operation space, there's the challenge of standardizing the input and output arguments. The benefit of the HTTP verbs is that there's no possibility for customization. What goes in and what comes out is quite nailed down. That may feel limiting, but it's great in terms of robust interoperability (which is our primary objective).
As a side note, FHIR's approach to REST has been in use for over 10 years and is pretty widespread, so an alternative would have to have a huge upside to have a hope of justifying the transition effort to the market. At the moment, I'm not seeing it in what you've proposed.
Setup:
given the Questionnaire resource stored as shown at the bottom,
I send a POST request to http://localhost:4004/fhir/Questionnaire/11
with the following payload and get the error message shown after the payload. Thanks in advance for the help!
{
"resourceType": "Parameters",
"id": "example",
"parameter": [
{
"name": "subject",
"valueString": "07e2c163-71f6-46f1-99d5-d43c1a002cf2"
},
{
"name": "local",
"valueBoolean": true
},
{
"name": "context",
"part": [
{"name": "name",
"valueString": "patient"},
{"name": "content",
"valueReference": {
"reference": "Patient/07e2c163-71f6-46f1-99d5-d43c1a002cf2"
}}
]
}
]
}
=====
{"issue": [
{
"severity": "error",
"code": "exception",
"diagnostics": "Error encountered evaluating expression (%patient.id) for item (patient.id): library expression loaded, but had errors: Could not resolve identifier %patient in the current library., Member id not found for type null."
},
{
"severity": "error",
"code": "exception",
"diagnostics": "Error encountered evaluating expression (%patient.birthDate) for item (patient.birthDate): library expression loaded, but had errors: Could not resolve identifier %patient in the current library., Member birthDate not found for type null."
}
]}
appendix:
{
"resourceType": "Questionnaire",
"id": "11",
"meta": {
"versionId": "1",
"lastUpdated": "2024-10-03T19:31:08.959+00:00",
"source": "#Gqo4bXgfgBTXHlxJ",
"profile": [
"http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-extr-defn"
]
},
"extension": [
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-launchContext",
"extension": [
{
"url": "name",
"valueCoding": {
"system": "http://hl7.org/fhir/uv/sdc/CodeSystem/launchContext",
"code": "patient"
}
},
{
"url": "type",
"valueCode": "Patient"
}
]
},
{
"url": "http://hl7.org/fhir/StructureDefinition/structuredefinition-wg",
"valueCode": "fhir"
}
],
"url": "http://hl7.org/fhir/uv/sdc/Questionnaire/demographics",
"version": "3.0.0",
"name": "DemographicExample",
"title": "Questionnaire - Demographics Example",
"status": "draft",
"experimental": true,
"subjectType": [
"Patient"
],
"date": "2023-12-07T23:07:45+00:00",
"publisher": "HL7 International / FHIR Infrastructure",
"contact": [
{
"name": "HL7 International / FHIR Infrastructure",
"telecom": [
{
"system": "url",
"value": "http://www.hl7.org/Special/committees/fiwg"
}
]
},
{
"telecom": [
{
"system": "url",
"value": "http://www.hl7.org/Special/committees/fiwg"
}
]
}
],
"description": "A sample questionnaire using context-based population and extraction",
"jurisdiction": [
{
"coding": [
{
"system": "http://unstats.un.org/unsd/methods/m49/m49.htm",
"code": "001",
"display": "World"
}
]
}
],
"item": [
{
"extension": [
{
"url": "http://hl7.org/fhir/StructureDefinition/questionnaire-hidden",
"valueBoolean": true
},
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-initialExpression",
"valueExpression": {
"language": "text/fhirpath",
"expression": "%patient.id"
}
}
],
"linkId": "patient.id",
"definition": "Patient.id",
"text": "(internal use)",
"type": "string",
"readOnly": true
},
{
"extension": [
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-initialExpression",
"valueExpression": {
"language": "text/fhirpath",
"expression": "%patient.birthDate"
}
}
],
"linkId": "patient.birthDate",
"definition": "Patient.birthDate",
"text": "Date of birth",
"type": "date",
"required": true
},
{
"extension": [
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-initialExpression",
"valueExpression": {
"language": "text/fhirpath",
"expression": "today()"
}
}
],
"linkId": "today",
"definition": "today",
"text": "Date of today",
"type": "date",
"required": true
}
]
}
This may be a bug that @Brenin Rhodes fixed at the connectathon.
Yes, this will be fixed in the coming release of CR.
@Brenin Rhodes does CR mean "Clinical Reasoning module"? Which version of HAPI will the fix be in? Thanks!
Yes: https://github.com/cqframework/clinical-reasoning
We're hoping to get it in the Nov release of HAPI.
when changing the $populate payload to the following, I got a different error message:
Error encountered evaluating expression (%patient.id) for item (patient.id): missing case statement for: org.hl7.fhir.r4.model.Reference
is the format of the payload correct? I would really appreciate a working example of the $populate payload. thanks!
{
"resourceType": "Parameters",
"id": "example",
"parameter": [
{
"name": "subject",
"valueString": "07e2c163-71f6-46f1-99d5-d43c1a002cf2"
},
{
"name": "useServerData",
"valueBoolean": true
},
{
"name": "parameters",
"resource": {
"resourceType": "Parameters",
"id": "example1",
"parameter": [
{
"name": "patient",
"valueReference": {
"reference": "Patient/07e2c163-71f6-46f1-99d5-d43c1a002cf2"
}
}]
}}
]
}
It is not. You are attempting to use the CQL parameters parameter to force a launchContext variable. The CQL parameters are not used in the same way launchContext variables are. To get that parameter to be "correct" it would need to have a resource value of the Patient rather than a reference to it.
Even that is not needed, though. To get your Questionnaire working against the current version, you can change %patient in your expressions to %subject. A parameter named %subject, with a Resource value corresponding to the subject parameter, is passed into the evaluation of each expression. Then the subject parameter is all you will need in your request payload.
Once the new version of HAPI is released with our latest Clinical Reasoning module, launch contexts will be fully supported and what you have should work.
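Under that suggestion, a minimal $populate payload could be reduced to just the subject parameter (a sketch using the IDs from this thread; the exact parameter set depends on your Clinical Reasoning version):

```python
import json

# Hypothetical minimal $populate Parameters payload, once the Questionnaire
# expressions use %subject instead of %patient. The patient ID is the one
# from the thread; treat this as illustrative, not canonical.
payload = {
    "resourceType": "Parameters",
    "parameter": [
        {"name": "subject", "valueString": "07e2c163-71f6-46f1-99d5-d43c1a002cf2"},
        {"name": "useServerData", "valueBoolean": True},
    ],
}
print(json.dumps(payload, indent=2))
```

Note that, per the follow-up below, a newer CR release makes subject a Reference rather than a String, so this shape may need adjusting there.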
works like a charm! thanks!
One other thing to note. In the new release of CR the subject parameter will be a Reference rather than a String. Other than that, I can confirm your original request payload and Questionnaire successfully return a populated QuestionnaireResponse.