A weekly summary of chat.fhir.org to help you stay up to date on important discussions in the FHIR world.
@Grahame Grieve, @Michael Lawley: If I can believe this: https://confluence.hl7australia.com/display/COOP/2023-03+Sydney+Connectathon , the Australian connectathon was on March 23rd... Do you already have any news about the Publisher/Terminology server interface?
nothing final yet. I will update when there's news
ok there's news. There's test cases here: https://github.com/FHIR/fhir-test-cases/tree/master/tx
From the next release of the validator, you can run them like this:
java -jar validator.jar -txTests -source https://github.com/FHIR/fhir-test-cases -output /Users/grahamegrieve/temp/txTests -tx http://tx-dev.fhir.org -version 4.0
Where -source is the test case repository, -output is a local directory for the test output, -tx is the terminology server being tested, and -version is the FHIR version.
there's a fair bit of work to go here, but this is the shape of where things are going
@Grahame Grieve What's the preferred way to provide feedback on these tests - questions, apparent bugs, etc?
Currently I have issues with the REGEX test, a bunch of the language tests, and the big-echo-no-limit test which seems to require a system to refuse to return an expansion with more than 1000 codes?
Wrt the language tests, language-echo-en-en, language-echo-de-de seem to suggest that the expansion should set ValueSet.language based on the displayLanguage parameter to $expand. But, that would then imply that the entire result ValueSet is in that language rather than just the ValueSet.expansion.contains.display values (which is all that parameter is really requesting).
For the translated CodeSystems in the language tests, none of the translations have a use value, so I (Ontoserver) can't know that they should be used as the preferredForLanguage display value.
Last question: is there a branch available with the -txTests option
What's the preferred way to provide feedback on these tests - questions, apparent bugs, etc?
discussion here first, I think.
Currently I have issues with the REGEX test
what?
the big-echo-no-limit test which seems to require a system to refuse to return an expansion with more than 1000 codes?
well, this is something we'll have to figure out. It's in my tests because that's how my servers work. It's not necessarily how other systems have to work, so we'll have to figure out how to say that in the tests
Wrt the language tests, language-echo-en-en, language-echo-de-de seem to suggest that the expansion should set ValueSet.language based on the displayLanguage parameter to $expand. But, that would then imply that the entire result ValueSet is in that language rather than just the ValueSet.expansion.contains.display values (which is all that parameter is really requesting).
I sure expected some discussion on this. There's two different things that you might want - languages on display, and languages on the response. The way the tests work, if you specify one or more display languages, you get displays defined for those languages
But the language of the response - the ValueSet.language - is based on the language parameter of the Parameters resource, which controls how the available displays are represented in the response
with regard to the use parameter, I don't believe that the spec says anywhere that there is a preferredForLanguage code, so how can that be in the tests?
is there a branch available with the -txTests option
the master has that now
I now have the validator test runner going, but I think it is being really overzealous in the level of alignment it's looking for between the expected response and the actual response.
First two issues: .meta and .id -- I don't think either of these should be included in the comparison.
Next one: ValueSet.expansion.id -- that's purely a server-specific value
.meta and .id... I'm not producing them, right?
.expansion.id? or expansion.identifier?
Regarding the regex issue, we're limited to Lucene's flavour which does not include character classes like '\S' or '\d'.
.id is in simple/simple-expand-all-response-valueSet.json for example. I produce .meta but not .id
ouch. would you like to propose an alternative regex?
".{4}[0-9]"
would work for me in this example, but it's not quite the same. The more accurate "[^ \t\r\n\f]{4}[0-9]"
would also work.
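(For context, a regex filter of this kind sits in the value set compose; this is a minimal sketch only, assuming a hypothetical value set URL and that the filter property is "code" - it is not the actual test fixture:)
{
  "resourceType" : "ValueSet",
  "url" : "http://example.org/fhir/ValueSet/regex-example",
  "status" : "active",
  "compose" : {
    "include" : [{
      "system" : "http://hl7.org/fhir/test/CodeSystem/simple",
      "filter" : [{
        "property" : "code",
        "op" : "regex",
        "value" : "[^ \\t\\r\\n\\f]{4}[0-9]"
      }]
    }]
  }
}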
And yes, I did mean expansion.identifier, but I think this was a false negative -- me misreading the output
I will commit some changes when I can
btw, what are you putting in meta?
Require? no id or just don't care? I think a bunch of things should be don't care
Meta was including a version (doesn't really make sense) but also a lastUpdated
I think for id, it shouldn't have an id? I just stopped regurgitating the id, which was basically an oversight
It would also potentially propagate tags
What if it's using a stored expansion?
in this context?
Well, no, but I'm thinking that these tests should really only be looking for things that are known to be wrong
perhaps. They're also my own internal qa tests. that might be too much, I guess, but I'm hoping not
I was thinking that the expected response in the test would set the scope of required elements, and other things would just be ignored
you assume that I'm sure what the answer is there
I'm guessing there's a way to require an element but ignore the value
I'm not even sure that it can have a known answer
there is, yes
I've got a bunch of time later today to dig into this in detail, so I can hopefully provide coherent feedback rather than piecemeal reactions
ok great
Back quickly to .expansion.identifier, this is what I'm seeing:
Group simple-cases
Test simple-expand-all: Fail
string property values differ at .expansion.identifier
Expected :$uuid$
Actual :4aa6f81f-ab79-41b2-96e2-6faa0aadc38c
well, that's not a valid value
oh! it needs the urn:uuid: prefix?
yes
But the type is uri? which can be absolute or relative
well... a URI can be, but in this case:
uniquely identifies this expansion of the valueset
I think it should be absolute
There's several places in the spec where we missed this when we allowed relative URIs
uniquely in what scope though? wrt that specific tx server endpoint, or globally, or in some deployment environment?
I don't think you can legitimately enforce it to be a UUID (it might be something like [base]/expansion/[UUID], which would be "unique" and absolute)
This one is perhaps tricky:
Several tests expect an expansion parameter for excludeNested but Ontoserver always behaves as if this was true, and so omits it because its value does not affect Ontoserver's behaviour.
That's less than ideal from my pov, and probably excludes Ontoserver from serving for HL7 IGs. Maybe. I'll think about the testing ramifications. Is that fact visible in the terminology capabilities statement?
You have it as a uuid anyway, so prefixing isn’t going to be a problem? And the intent is global since expansions are sometimes cached and reused. Sometimes at scale
Globally unique is fine, but then I'd be tempted to adopt a URI based on the template [base]/expansion/[UUID], e.g., https://tx.ontoserver.csiro.au/expansion/4aa6f81f-ab79-41b2-96e2-6faa0aadc38c.
But in principle, if the spec says URI, unique identifier, then I don't think it's good form to impose additional constraints.
Ontoserver does return TerminologyCapabilities.expansion.hierarchical = false
But the meaning of excludeNested is only about the result representation (true => MUST return a flat expansion); it does not affect the logical content of the expansion.
Is there a reason you think that parameter should be included?
Conversely, Ontoserver redundantly includes offset and count values in the expansion parameters even if they haven't had any impact on paging.
I fully expect we'll need to do some adjustments in this space
Is there a reason you think that parameter should be included?
IG Authors have raised issues before when the expansion in the IG loses the hierarchy
@Michael Lawley I've been thinking about this one:
omits it because its value does not affect Ontoserver's behaviour.
That's wrong - the parameters are to inform a consumer how the value set was expanded. Whether Ontoserver can or can't is not the point, it's how it acted when doing the expansion
Ontoserver redundantly includes offset and count values in the expansion parameters even if they haven't had any impact on paging
offset = 0, presumably, but what's count in that case?
But the presence/absence/value of excludeNested doesn't affect "expansion" (i.e., which codes are present); it only potentially affects how those codes are returned in the ValueSet.expansion.contains.
Grahame Grieve said:
Ontoserver redundantly includes offset and count values in the expansion parameters even if they haven't had any impact on paging
offset = 0, presumably, but what's count in that case?
MAXINT
it still affects the expansion even if it doesn't affect which codes are present
If a consumer is looking through a set of expansions, instead of just generating a new one, then it's going to be input into their choice
I had been approaching it from the perspective of judging whether or not a persisted expansion is re-usable for a different expansion request.
(Which is something that Ontoserver does when it has a ValueSet with a stored expansion.)
indeed, but you're only thinking of it in your context; it could/would also be done by expansion users that can't make the assumption you're making
I'm trying to think about this from the perspective of a client / consumer of ValueSet.expansion -- under what circumstances do they need to know excludeNested = true? What is it actually telling them?
One answer might be "this value was provided for this expansion parameter in the original request"?
that this expansion will not contain nested contains even if that might be relevant for this value set
Also, what should Ontoserver do if the request was $expand?excludeNested=false? Should it state that in the parameters even though the actual expansion may have (if it was present) flattened any nesting? Or, should it change it to true because flattening might have happened?
Perhaps the message is just "as a client, you do not have to look for nested codes when processing this expansion"?
well I think that the server should return an exception if the client asked it to do something it can't do
But that's not what excludeNested=false means. It's not the same as saying "include nested"
no that's true
and you don't know whether flattening is a thing that happened or not, I presume
correct
Now looking at all the validation test cases, the system parameter has the wrong type (valueString not valueUri) and, in the responses, code also has the wrong type (valueString instead of valueCode), and similarly for system in the responses
wow, that's bad on my part. Fixed
nearly - still problems with the system parameter
diff --git a/tx/validation/simple-code-bad-code-request-parameters.json b/tx/validation/simple-code-bad-code-request-parameters.json
index 077c424..59d292a 100644
--- a/tx/validation/simple-code-bad-code-request-parameters.json
+++ b/tx/validation/simple-code-bad-code-request-parameters.json
@@ -8,6 +8,6 @@
"valueCode" : "code1x"
},{
"name" : "system",
- "valueString" : "http://hl7.org/fhir/test/CodeSystem/simple"
+ "valueUri" : "http://hl7.org/fhir/test/CodeSystem/simple"
}]
}
In validation/simple-code-implied-good-request-parameters.json, there is a non-standard parameter implySystem:
{
"resourceType" : "Parameters",
"parameter" : [{
"name" : "url",
"valueUri" : "http://hl7.org/fhir/test/ValueSet/simple-all"
},{
"name" : "code",
"valueCode" : "code1"
},{
"name" : "implySystem",
"valueBoolean" : true
}]
}
indeed there is, and there should be, right?
it indicates that it is intentional that there's no system and the server should infer what the system is
But that is an invented non-standard parameter?
The use-case here seems to be that the system isn't knowable by the calling client, but in the context of validation, why wouldn't the system be known; there should be bindings available?
it's a code type, so there's only a code, and the server is asked to imply the system from the code and the value set
agree I haven't proposed that parameter, but it's still needed
Yes, it's a code type, but that must exist in some context, right? The context should provide the system?
the value set itself is the context
What are the boundaries here? Can the ValueSet contain codes from > 1 code system? Can the code be non-unique in the valueset expansion?
The value set can contain codes from more than one code system, yes. A number of them do. The code must be unique in the value set else it's an error
Presumably the system parameter does also need to be provided (from the documentation of $validate-code.code: "the code that is to be validated. If a code is provided, a system or a context must be provided"). Does the client just pass a dummy system that is ignored?
no the system is not provided in this case
since there isn't one
and yes, that violates the documentation on that parameter
And is it only ever used when supplying the code parameter?
yes. it must be accompanied by a code and a value set
I fixed the remaining system parameters
for examples like validation/simple-code-bad-display-response-parameters.json, why is the result true when the display is invalid? The specification for the result output parameter is:
True if the concept details supplied are valid
Another test case issue: mis-named input parameter. See, for example, validation/simple-code-bad-version1-request-parameters.json which includes a parameter version that should instead be systemVersion.
why is the result true when the display is invalid?
Because that's just a warning - the code has been judged to be in the value set
a parameter version that should instead be systemVersion
ouch
why is the result true when the display is invalid?
Because that's just a warning - the code has been judged to be in the value set
But if a display is provided it should be validated, and if it's not any of the displays listed by the CodeSystem, then it is invalid -- the definition of result is not "True if the code is a member of the ValueSet/CodeSystem", but rather "if the concept details supplied are valid"; display is one of these details.
I am very uncomfortable about relaxing display validation so that it doesn't affect the outcome, given the prevalence of EHRs that allow the display to be edited arbitrarily.
well, I'm very sure that if I changed to an error instead of a warning, the IG authoring community would completely rebel, but I guess TI might want to have an opinion. So what do other people think?
There are lots of reasons for display not being valid. (E.g. If someone has a code system supplement the validator doesn't know about.)
Why is the IG authoring community using non-valid displays?
there's 4 reasons that I've seen:
Note, I am more concerned about the clinical community than the IG community.
If this is an impasse, perhaps the mode flag should be used to relax things?
Either way, I think we need an explicitly agreed mechanism to use the "issues" to flag the invalid display text.
Also, I think the test extensions-echo-all is wrong at least in assuming supplements will be automagically included
ValueSet display should succeed
TI decided otherwise; that's no longer allowed
I expect that TI will choose to decide this in NOLA. You going to be there?
Either way, I think we need an explicitly agreed mechanism to use the "issues" to flag the invalid display text.
the tests are doing that now
This has been discussed, at some length, with regard to SNOMED CT descriptions, and I recall that @Dion McMurtrie produced a table with various permutations in the early days of SNOMED on FHIR.
Unless the edition and version of SCT is provided, it's not possible to determine the validity of an unrecognized description. Otherwise, the best a server can do is return the preferred term from its default edition & version and a warning.
well, this discussion is not just about SCT that's for sure
Also, I think the test extensions-echo-all is wrong at least in assuming supplements will be automagically included
why?
That's precisely the intent of this test - make sure that supplements such as this are automagically included
language supplements
Grahame Grieve said:
well, this discussion is not just about SCT that's for sure
Sure - but things are a lot more straightforward for single edition, single language Code Systems.
that's not much of a hill to climb given how complex SCT is
It's far more complex with things like LOINC where the same complexity (different national editions and local extensions) exists, but where everyone does it differently and often poorly.
Re extensions-echo-all, the supplement contains extensions (some I think are technically not valid where they're being used), and the test then expects corresponding property values in the output (eg weight)
which ones are not valid?
ItemWeight - only goes on Coding and in a Questionnaire
I think I created a task about that one
I used a property where I could, and an extension where I had to
We can force the overhead of a CodeSystem supplement, but we can't count on the supplement being available when performing production-time validation. And that means that non-matching display names shouldn't be treated as an error.
If you're doing prod time validation without all the base info, then you're only going to get half answers - do you tolerate missing profiles? But, if your use case is tolerant of bad displays, just omit them from the validate-code calls, or let's have an explicit parameter that the client passes telling the server to only treat as warnings
@Michael Lawley to increase your happiness, I'm just adding tests for supporting these 3 parameters from $expand for $validate-code: system-version, check-system-version, force-system-version, and as I'm doing that, I'm checking that they apply to Coding.version as well
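(As a sketch of what such a request might look like - the value set, code, and version here are placeholders, and exactly how these parameters behave for $validate-code is what the new tests are meant to pin down:)
{
  "resourceType" : "Parameters",
  "parameter" : [{
    "name" : "url",
    "valueUri" : "http://hl7.org/fhir/test/ValueSet/simple-all"
  },{
    "name" : "code",
    "valueCode" : "code1"
  },{
    "name" : "system",
    "valueUri" : "http://hl7.org/fhir/test/CodeSystem/simple"
  },{
    "name" : "force-system-version",
    "valueCanonical" : "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
  }]
}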
One of the other challenges with the txTests is that Ontoserver returns additional expansion.parameters and this causes the test to report a false failure
Most implementations don't care about the display values - and will be sloppy with them. So the default behavior should be warnings - errors should require the explicit parameter.
One of the other challenges with the txTests is that Ontoserver returns additional expansion.parameters and this causes the test to report a false failure
I'm assuming that this is something we'll sort out, so I'm not worrying about that today
but it's a test problem, not an implementation problem
Given that displays are what clinicians see and interpret, being sloppy is bad -- we've seen real clinical risks here.
And just because (a group of) ppl are sloppy doesn't mean we should enable that by default.
but it's a test problem, not an implementation problem
It's a test problem yes, but it's making it very hard for me to work through the cases because it bails out early and hides potential actual problems in the rest of the response.
fair.
Do you have a list of the extra parameters? In general, some extra parameters would be fine but others might not be, and I don't want to simply let anything go by
The reality is that the displays in many code systems are not appropriate for clinician display. By 'sloppy' I mean that systems make the displays what they need to be for appropriate user interface, not worrying too much about diverging from the 'official' display names if the 'official' names aren't useful for the purpose. I'm not saying that the display names chosen are typically inappropriate/wrong.
Do you have a list of the extra parameters?
version is the main one, and it seems strange that it's not expected in the result
Also, I'm getting a missing error for includeDesignations. Again, this seems like our interpretations of "parameters that affected expansion" are mis-aligned. I interpret this as being the calculation of the matching codes, not the specific representation that gets returned (noting that displayLanguage is counted since it affects the computed display value)
Coding.display A representation of the meaning of the code in the system, following the rules of the system.
"following the rules of the system", not "following the rules of some system implementer".
Also, if a display is not appropriate, then get it fixed -- either at source (in HL7 / THO) or with the external party. If the external party won't play ball, then fix it in a shared supplement so everyone can benefit rather than lots of (potentially incompatible) fixes spread over many different IGs.
it sounds so easy when you say it like that
Sure. Except that's not what systems do today. They just load the codes into their databases and make the display names say what they want them to say. And they're not going to change that just because we might like them to.
If that's all they did I'd be less concerned. What they REALLY DO is allow people to change the display text on-the-fly to absolutely anything (and people do this), and the results sometimes bear zero resemblance to the code's meaning. This is why I say we're concerned about the clinical use case over the IG use case, and why I want the caller to explicitly request that an invalid display not return an error; then the onus is on the caller.
version is the main one, and it seems strange that it's not expected in the result
where is it missing? I just spent a while hunting for it, and yes, it was missing from the validate-code results, but I can't see where it's missing from the $expand results
Let's start with simple/simple-expand-all-response-valueSet.json -- it only has:
"parameter" : [{
"name" : "excludeNested",
"valueBoolean" : true
}],
.. and ..?
Where is the version of the CodeSystem that was used in the expansion?
that code system doesn't have a version, so there's no parameter saying what it is
These days I guess that should be called system-version? But it's a canonical, so I would expect http://hl7.org/fhir/test/CodeSystem/simple| as the value
really? I would not expect that
That says "I use a version-less instance of this code system", rather than just not saying anything.
so firstly, it's not system-version - that's something else, an instruction about the default version to use. version is the actual version used. Though I just spent 15min verifying that for myself, and it could actually be documented
At least it's "not wrong"
+1 for documenting these :)
That says "I use a version-less instance of this code system", rather than just not saying anything
I'm not sure that it does. I just read the section on canonicals again, and at least we can say that this is not clear
I don't see another way to say it -- the trailing | might be optional, but is, I think, in the spirit of things?
I think that the IG publisher would blow up on this:
"parameter" : [{
"name" : "version",
"valueCanonical" : "http://hl7.org/fhir/test/CodeSystem/simple|"
}]
If you want a versionless canonical, you omit the '|'. I would expect (and have only ever seen) the '|' there if there's a trailing version.
no wouldn't blow up, just wouldn't make sense in the page, because the code makes the same assumption as Lloyd
Hmm, that looks like it might be HAPI behaviour -- I'm guessing if you set the version to "" rather than null.
Investigating...
Yep, that is the issue.
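(The underlying difference is just in how the canonical string gets assembled; a minimal sketch in plain Java, not the actual HAPI code:)
static String canonical(String url, String version) {
  // Treating an empty version the same as null avoids emitting "url|"
  // for a version-less code system.
  return (version == null || version.isEmpty()) ? url : url + "|" + version;
}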
Would IG publisher cope sensibly without the trailing |
"parameter" : [{
"name" : "version",
"valueCanonical" : "http://hl7.org/fhir/test/CodeSystem/simple"
}]
it'll ignore that one. As it will ignore http://hl7.org/fhir/test/CodeSystem/simple| from the next release
if there's no version, there's nothing to say
And I'll work around HAPI to leave off the |
but you won't leave the parameter out?
what about in the response to $validate-code when there's no version on the code system?
that's why you should leave it out
I'll have to look & think deeper - if the ValueSet has two code systems but one has no version, then it could be misleading / confusing to have only one "version" reported? I think leaving it out means clients may have to work harder.
why would clients have to work harder?
Just looking now at extensions/expand-echo-bad-supplement-parameters.json -- we've used PROCESSING as the code rather than BUSINESS-RULE; seems a somewhat arbitrary distinction
It is but I don't mind changing
clients (that care) have to know that a missing version means a code-system didn't have a version. And, they have to scan the expansion to find all the code systems in scope (and this may not be a complete set if paging).
Additionally, what if the valueset references two "versions" of the same code system, and one is empty...hmm, not sure if that is possible with Ontoserver.
Re PROCESSING vs BUSINESS-RULE, ideally the test would allow either
what if the valueset references two "versions" of the same code system, and one is empty
You should go bang on that case
clients (that care) have to know that a missing version means a code-system didn't have a version
But they have to scan to decide that either way
Not if all the code systems are listed directly in expansion.parameters."version".
Another edge case - a code system is referenced in the compose, but no codes actually match - you'd never know it was in scope
Regarding setting the ValueSet.language to the value of $expand's displayLanguage parameter, will this not be misleading if only some of the codes have translations in the requested language?
sticking to version for now... you're really using it as more than a version - you're using it as a dependency list
I'm thinking that clients might be doing that, yes
well, if we're going to use it to report things that don't contain versions, then we should change its name. Or would you not consider that?
Regarding setting the ValueSet.language to the value of $expand's displayLanguage parameter, will this not be misleading if only some of the codes have translations in the requested language?
possibly, if that's what was going on, but it's not
well, the tests now have version as optional
though I think we should consider renaming it
did you want to talk about other parameters before we talk about language?
and going back, I sure don't understand this:
Also, I'm getting a missing error for includeDesignations. Again, this seems like our interpretations of "parameters that affected expansion" are mis-aligned. I interpret this as being the calculation of the matching codes, not the specific representation that gets returned (noting that displayLanguage is counted since it affects the computed display value)
what's it got to do with the calculation of matching codes?
https://github.com/hapifhir/org.hl7.fhir.core/pull/1246 - work to date if you don't want to wait for some weird testing thing to be resolved
Ignore the includeDesignations thing - I'm just including it if a value was supplied.
Back on display validation, the example in the spec suggests that the appropriate response is to fail:
http://www.hl7.org/fhir/valueset-operation-validate-code.html#examples
Is there appetite for adding another mode, e.g. ALLOW_INVALID_DISPLAY ?
the example certainly does suggest failure is appropriate
As a status update, I think we're very close to passing except for the errors relating to unexpected "version" values, which manifest like:
Group simple-cases
Test simple-expand-all: Fail
array properties count differs at .expansion.parameter
Expected :1
Actual :2
and also some spurious validation of the actual error message strings:
Test validation-simple-codeableconcept-bad-system: Fail
string property values differ at .parameter[0].resource.issue[0].details.text
and
Test validation-simple-codeableconcept-bad-version1: Fail
string property values differ at .parameter[0].resource.issue[0].details.text
I figured the question of the actual error messages would come up at some point
but good to hear, thanks
Is there appetite for adding another mode, e.g. ALLOW_INVALID_DISPLAY ?
I don't think I'd like to add another mode for this. Or at least, not this alone. I'm considering the ramifications of just saying that's an error, and then picking through the issues in the IG publisher and downgrading it to a warning if the issues are only about displays.
Either way, I'll be putting this question to the two communities (TI and IG editors) in New Orleans
I think we're very close to passing
Well, too soon :-)
Seems the test harness complains about Ontoserver including extensions.
It also doesn't account for the expansion.contains being flat when excludeNested is not true.
But I believe these are txTests issues, not Ontoserver issues
A new spec issue -- expansion.parameter.value[x] doesn't support canonical, only uri.
Which means the test responses that have an expansion.parameter like:
{
"name" : "version",
"valueCanonical" : "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
}],
are invalid.
yeah I discovered that last night. I'm midway through revising them for other reasons and then I'll make another commit
@Michael Lawley I committed fixed up tests.
with regard to error messages, can you share a copy of the different error messages with me? I'm going to set the tests up so that the messages have to contain particular words. (I think)
I'm going to set the tests up so that the messages have to contain particular words. (I think)
Um, ok.
The specified code 'code1x' is not known to belong to the specified code system 'http://hl7.org/fhir/test/CodeSystem/simple'
A version for a code system with URL http://hl7.org/fhir/test/CodeSystem/simple was not supplied and the system could not find its latest version.
A version for a code system with URL http://hl7.org/fhir/test/CodeSystem/simplex was not supplied and the system could not find its latest version.
None of the codes in the codeable concept were valid.
The provided code "#code1x" was not found in value set http://hl7.org/fhir/test/ValueSet/simple-all
The provided code "http://hl7.org/fhir/test/CodeSystem/en-multi#code1" exists in the ValueSet, but the display "Anzeige 1" is incorrect
The provided code "http://hl7.org/fhir/test/CodeSystem/simple#code2a" was not found in value set http://hl7.org/fhir/test/ValueSet/simple-filter-regex
Another test case error:
validation-simple-code-good-display: The ValueSet specifies version 1.0.0 for the code system, but the display value supplied in the request, "good-display", is that from version 1.2.0, AND the response says that version 1.2.0 was used in the validation.
I think that's fixed up now?
No - https://github.com/FHIR/fhir-test-cases/blob/master/tx/validation/simple-code-good-display-response-parameters.json still shows version 1.2.0, last updated 20 hrs ago
but what's the request?
duh. I forgot to push :sad:
and now the request has valueString not valueUri for the system :man_facepalming:
ah, that's an ongoing issue -- I just have local changes to work around :-)
I'll fix
ok pushed
Thanks! At least with my test harness the main outstanding issue is the display validation issue.
Now looking at extensions-echo-enumerated:
Why are the top-level ValueSet.extension in the output expansion ValueSet? (Not just for this expansion, but all) - ValueSet.compose, ValueSet.date, and ValueSet.publisher should all be optional.
the display validation issue?
whether an invalid display causes result to be false
oh right. yes
Why are the top-level ValueSet.extension in the output expansion ValueSet?
Because they might matter, so the server should echo them
Not just for this expansion, but all) - ValueSet.compose, ValueSet.date, and ValueSet.publisher should all be optional.
I guess. I don't think it matters to me? I'll check if I care
Why are the top-level ValueSet.extension in the output expansion ValueSet?
Because they might matter, so the server should echo them
That suggests a stronger link between specification and the expansion than I expect. This appears to be the key statement from 4.9.8 Value Set Expansion
A resource that represents a value set expansion includes the same identification details as the definition of the value set
What is the scope of "identification details"?
regarding ValueSet.compose: I have a parameter includeCompose for whether it should be returned or not, but I don't ever use it, and I wouldn't currently miss the compose
Is that not what includeDefinition is for?
Also, looking at the OperationOutcomes, why use .details.text rather than .diagnostics (given that there's no .details.coding values)
dear me it is
diagnostics is for things like stack dumps etc. The details of the issue go in details.text
That suggests a stronger link between specification and the expansion than I expect. This appears to be the key statement from 4.9.8 Value Set Expansion
I didn't understand that
What is the scope of "identification details"?
url + version + identifiers, I think
OperationOutcome.issue.diagnostics
Comment: This may be a description of how a value is erroneous [...]
But happy to update - it's all new
Stronger link...
Why would an extension on a ValueSet definition be relevant to its expansion (as a general rule)?
it shouldn't be but it might be relevant to the usage of the expansion
hence why I echo it
Hmm, ok
Should that be a requirement here?
no, in fact, they are only included if includeDefinition is true.
pushed new tests. code for running the tests is in the gg-202305-more-tx-work2 branch of core
my local copy of tx-fhir-org still fails one of the tests... might have more work to do on the tester
open issues - text details, + the display validation question which is going to committee in New Orleans
So, turns out that it is HAPI's code that's populating the OperationOutcome and putting the text into diagnostics and not details.text
This is only in the case of things like code system (supplement) or value set not found/resolvable since that's a 404 response
this one definitely matters.
Yep, I'll have to take over from the default interceptor behaviour
Thanks @Grahame Grieve I have the new tests and the gg-202305-more-tx-work2 branch running locally.
A bunch of tests are failing because the expected expansion is hierarchical, but Ontoserver returns a flat expansion so there are errors like:
Group parameters
Test parameters-expand-all-hierarchy: Fail
array properties count differs at .expansion.contains
Expected :3
Actual :7
so why is Ontoserver returning a flat expansion? does it need a parameter?
Because it's allowed to, and unless you're returning "all codes", it's a hard problem to cut nodes out of a tree/graph
Let alone order them
but that one is all codes
All codes is very low on our priority list (infrequent use case) so we haven't done special-case work to preserve hierarchy.
It's also something that we've rarely been asked about.
it's certainly come up from the IG developers
and I'm surprised... structured expansions are a real thing for UI work
What we have heard is that some people want to have an explicit hierarchy on expansion that doesn't match the code system's hierarchy (eg where things are grouped differently from the normal isa hierarchy). In these cases the simplest approach we've found is to have them express the desired hierarchy in the stored expansion.
that might be, but as you see, there's reasons people want a hierarchy
But for IG developers, why do they care about the (on the wire) expansion; if the IG tooling needs to render the hierarchy, then it's in the CodeSystem already, or can be recovered from the ValueSet with $expand?property=parent.
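(For what it's worth, a client that wanted to rebuild the hierarchy from a flat expansion could do so from parent properties along these lines - a sketch only, assuming each contains entry carries a single parent code, which is not something the spec guarantees:)
import java.util.*;

class ContainsEntry {
  String code;
  String display;
  String parent; // parent concept's code, null for roots (assumed to be available as a property)
  List<ContainsEntry> children = new ArrayList<>();
}

class HierarchyBuilder {
  // Rebuild a tree from a flat expansion using each entry's parent code.
  static List<ContainsEntry> rebuild(List<ContainsEntry> flat) {
    Map<String, ContainsEntry> byCode = new HashMap<>();
    for (ContainsEntry e : flat) {
      byCode.put(e.code, e);
    }
    List<ContainsEntry> roots = new ArrayList<>();
    for (ContainsEntry e : flat) {
      ContainsEntry parent = (e.parent == null) ? null : byCode.get(e.parent);
      if (parent == null) {
        roots.add(e); // no parent, or parent not included in the expansion
      } else {
        parent.children.add(e);
      }
    }
    return roots;
  }
}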
the IG tooling defers to the tx service on this matter. It doesn't try to impose hierarchy on what the tx server chooses to return
@Michael Lawley we're going to do triage here on our open issues tomorrow. What I have in my mind:
have I missed anything?
Wrt "How a server reports that it doesn't do heirarchical expansions", a server may do this in some circumstances but not others. For example, Ontoserver (currently) does not do them when calculating the expansion itself, but may return them if its (re-)using a stored expansion.
Other examples like this would be where, for example, a ValueSet includes the implicit all-codes ValueSet and the result is flat, but if it instead includes the CodeSystem directly then it is hierarchical
the IG tooling defers to the tx service on this matter. It doesn't try to impose hierarchy on what the tx server chooses to return
Then it's effectively choosing to be happy with what the tx server returns, and in that case anything that is in-spec with the general FHIR tx services spec should be acceptable.
Then it's effectively choosing to be happy with what the tx server returns, and in that case anything that is in-spec with the general FHIR tx services spec should be acceptable.
It is acceptable from the infrastructure's pov, but not acceptable from the consumer's pov
it might be acceptable to some consumers, the ones who choose to use Ontoserver, but I think that would mean many editors would not be ok with HL7 using Ontoserver
Other examples like this would be where, for example, a ValueSet includes the implicit all-codes ValueSet and the result is flat, but if it instead includes the CodeSystem directly then it is hierarchical
but that's how the test case we're talking about works
Other examples like this would be where, for example, a ValueSet includes the implicit all-codes ValueSet and the result is flat, but if it instead includes the CodeSystem directly then it is hierarchical
If $expand?url=vs1 returns a hierarchical expansion, then I define vs2 as "include vs1", should it not also return a hierarchical expansion?
It is acceptable from the infrastructure's pov, but not acceptable from the consumer's pov
From my perspective, the consumer here is using a tool that could provide this behaviour itself by using the CodeSystem directly (or by reconstructing the hierarchy from parent relationships), but the tool chooses to hand it off to the tx server. Since this is a context-specific behaviour, why not have the tool that wants it, implement it?
Of course, if Ontoserver users call for this behaviour, then that's something we would strongly consider, but otherwise it seems like there's an undocumented set of use-cases where a specific behaviour is desired that we have to discover in a trial-and-error manner.
well, here you are, discovering it :grinning:
returning an hierarchical expansion when the value set includes all of a hierarchical code system is a required feature for HL7 IG publication
Probably because I'm grounded in HL7 culture, but for me that's totally obvious and hardly needs to be stated as a requirement, so there you go. However, Ontoserver doesn't need to do that to be used by the eco-system as an additional terminology server
I'm thinking about how to handle that in the tests - that's why I asked whether this is a feature that surfaces in the metadata anywhere. But it doesn't :sad:
other than parameters-expand-all-hierarchy, parameters-expand-enum-hierarchy, and parameters-expand-isa-hierarchy, does this affect any other tests?
on the subject of display error/warning, I'll be advocating for a parameter that defaults to leaving the tx server returning an error.
is it another mode flag? or something else?
I think another mode flag works. With the default being return error, and the flag saying don't error on displays, just warn.
I've just updated https://r4.ontoserver.csiro.au/fhir with the work-in-progress changes to align better with the requirements as expressed in the txTests
I believe that many of the reported failures are false negatives, and some are very hard to understand what's going on, e.g.:
Test validation-simple-code-good-version: ... Exception: Error from server: Error:org.hl7.fhir.r4.model.CodeableConcept@11b455e5
org.hl7.fhir.r4.utils.client.EFhirClientException: Error from server: Error:org.hl7.fhir.r4.model.CodeableConcept@11b455e5
at org.hl7.fhir.r4.utils.client.network.FhirRequestBuilder.unmarshalReference(FhirRequestBuilder.java:263)
at org.hl7.fhir.r4.utils.client.network.FhirRequestBuilder.execute(FhirRequestBuilder.java:230)
at org.hl7.fhir.r4.utils.client.network.Client.executeFhirRequest(Client.java:194)
at org.hl7.fhir.r4.utils.client.network.Client.issuePostRequest(Client.java:119)
at org.hl7.fhir.r4.utils.client.FHIRToolingClient.operateType(FHIRToolingClient.java:279)
at org.hl7.fhir.convertors.txClient.TerminologyClientR4.validateVS(TerminologyClientR4.java:137)
at org.hl7.fhir.validation.special.TxTester.validate(TxTester.java:252)
at org.hl7.fhir.validation.special.TxTester.runTest(TxTester.java:191)
at org.hl7.fhir.validation.special.TxTester.runSuite(TxTester.java:163)
at org.hl7.fhir.validation.special.TxTester.execute(TxTester.java:95)
at org.hl7.fhir.validation.ValidatorCli.parseTestParamsAndExecute(ValidatorCli.java:227)
at org.hl7.fhir.validation.ValidatorCli.main(ValidatorCli.java:148)
I'll investigate
it's sure not a useful error message
I noticed also that the test fixtures are not automatically created?
Also language/codesystem-de-multi.json has elements like title:en which fails when I tried to load it in (using the 5->4 converter in HAPI)
oh. right
you can't use those directly, no
I forgot - I was playing around with that format and left it in
in the case of that test, the error should be
Error from server: Error:[0a8c6743-42a8-43fe-bca5-1138aa91595d]: Could not find value set http://hl7.org/fhir/test/ValueSet/version-all-1 and version null. If this is an implicit value set please make sure the url is correct. Implicit values sets for different code systems are specified in https://www.hl7.org/fhir/terminologies-systems.html.
I noticed also that the test fixtures are not automatically created?
I'm not sure what that means
All the test code systems and valuesets identified in test-cases.json are not automatically loaded into Ontoserver when I run the txTests thing. Instead, I needed to run my own loader
no they're passed in a tx-resource parameter with each request
I didn't notice this until just now, running against the new r4.ontoserver deployment since previously I was testing against a local server that I'd already loaded things onto
Aha! Another magic parameter -- is support for that part of the test?
this is already known. You and I discussed it in the past. see FHIR-33944. It's very definitely required
The test cases do it this way since support is required to support the IG publisher
https://github.com/hapifhir/org.hl7.fhir.core/pull/1255 for the execution problem
Yes, I recall the proposal.
The test cases do it this way since support is required to support the IG publisher
that's effectively what I was asking.
Does this also extend to FHIR-33946 and the cache-id parameter?
that one is optional - the client looks in the capability statement to see if cache-id is stated to be supported before deciding that the server is capable of doing that
though the test cases don't try that
I'm going to have to put some considered thought into how we support tx-resource.
Non-exhaustive list of considerations:
None of these are a problem for us with ValueSet resources (we already support contained ValueSets), but they are for CodeSystems.
for me, those are not a thing - they are never written. You probably can't avoid that. But what's 'name clashes' about?
What happens when the resource passed via tx-resource has the same URL as one that is already on the server? Does it shadow it? It may have an older version than the one on the server and the reference from the request may not be version-specific; should the older version supplied via tx-resource be preferred over the newer one?
here's what I drafted about that:
One or more additional resources that are referred to from the value set provided with the $expand or $validate-code invocation. These may be additional value sets or code systems that the client believes will or may be necessary to perform the operation. Resources provided in this fashion are used preferentially to those known to the system, though servers may return an error if these resources are already known to the server (by URL and version) but differ from that information on the server
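(To make that concrete, an invocation supplying a dependency that way might look roughly like this - a sketch only; the code system content is illustrative, not one of the real test fixtures:)
{
  "resourceType" : "Parameters",
  "parameter" : [{
    "name" : "url",
    "valueUri" : "http://hl7.org/fhir/test/ValueSet/simple-all"
  },{
    "name" : "tx-resource",
    "resource" : {
      "resourceType" : "CodeSystem",
      "url" : "http://hl7.org/fhir/test/CodeSystem/simple",
      "status" : "active",
      "content" : "complete",
      "concept" : [{
        "code" : "code1",
        "display" : "Display 1"
      }]
    }
  }]
}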
@Michael Lawley I updated the test cases for the new mode parameter
Thanks. I note that it is still complaining about extension content (Ontoserver includes some of its own extensions). I would have expected additional extension content to be generally ignored?
which extensions?
Michael Lawley said:
Coding.display A representation of the meaning of the code in the system, following the rules of the system.
"following the rules of the system", not "following the rules of some system implementer".
Also, if a display is not appropriate, then get it fixed -- either at source (in HL7 / THO) or with the external party. If the external party won't play ball, then fix it in a shared supplement so everyone can benefit rather than lots of (potentially incompatible) fixes spread over many different IGs.
Remembering that things get done according to the path of the least resistance, I see very little instruction and zero examples of using supplements in http://hl7.org/fhir/valueset.html - so chances of them being used for this purpose are very slim. Any changes in this area must offer a path of less or at most equal resistance compared to trimming the display text to what you mean.
well, we can provide examples, that's for sure.
Yep, at the same time, there is dragon text on the supplements:
The impact of Code System supplements on value set expansion - and therefore value set validation - is subject to ongoing experimentation and implementation testing, and further clarification and additional rules might be proposed in future versions of this specification.
That would need to go away as well to get confidence in using them
Otherwise hard to say 'this is what you shall use' when it's an experimental thing.
we're coming out of the experimentation phase :grinning:
and talking about the additional rules
Michael Lawley said:
If that's all they did I'd be less concerned. What they REALLY DO is allow people to change the display text on-the-fly to absolutely anything (and people do this), and the results sometimes bear zero resemblance to the code's meaning. This is why I say we're concerned about the clinical use case over the IG use case, and why I want the caller to explicitly request that an invalid display not return an error; then the onus is on the caller.
I don't see how this will improve the situation. It would just become an almost mandatory thing you do "just because the spec requires it" and it wouldn't carry the intended meaning.
Good use of supplements would, that way the IG can be explicit about the display codes it is tweaking to better fit the purpose. I'd be happy to do that in my IGs!
@Michael Lawley I finally got to a previously reported issue:
However, I'm trying to use tx.fhir.org/r4 as a reference point but I can't get it to behave.
For example http://tx.fhir.org/r4/ValueSet/$validate-code?system=http://snomed.info/sct&code=22298006&url=http://snomed.info/sct?fhir_vs=isa/118672003 gives a result=true even though the code is not in the valueset. In fact the url parameter seems to be totally ignored?
Indeed. It's an issue in the parser because there's 2 = in the parameter - it's splitting on the second not the first
it works as expected if you escape the second =
I believe the correct strategy is to take the query part (everything from the 1st ?) and split on &, then split each of these on the first = only
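(A minimal sketch of that splitting rule in plain Java - illustration only, not the actual server code, and percent-decoding is omitted:)
import java.util.LinkedHashMap;
import java.util.Map;

class QueryParser {
  // Take everything after the first '?', split on '&', then split each pair
  // on the FIRST '=' only, so values that themselves contain '=' survive intact.
  static Map<String, String> parse(String url) {
    Map<String, String> params = new LinkedHashMap<>();
    int q = url.indexOf('?');
    if (q < 0) return params;
    for (String pair : url.substring(q + 1).split("&")) {
      int eq = pair.indexOf('=');
      if (eq < 0) {
        params.put(pair, "");
      } else {
        params.put(pair.substring(0, eq), pair.substring(eq + 1));
      }
    }
    return params;
  }
}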
I didn't say I was happy with what it's doing
ah, not your parser code then?
it is. it's the oldest code I have. I think I haven't touched it since 1997 or so
PR time?
maybe. The URL itself is invalid so the behaviour isn't wrong, but I don't like it much
Why is that URL invalid?
an unescaped = in it. I think that's not valid according to the http spec. But I upgraded the server anyway, and it should be OK now
according to https://www.rfc-editor.org/info/rfc3986 it is valid, and '=' is considered to be a sub-delimiter.
that doesn't really relate to its use in key/value pairs
I don't see where an unescaped = is illegal?
@Michael Lawley a new issue has raised its ugly head.
consider the situation where a value set refers to an unknown code system, and just includes all of it, and a client asks to validate the code
e.g.
{
"resourceType" : "ValueSet",
"id" : "unknown-system",
"url" : "http://hl7.org/fhir/test/ValueSet/unknown-system",
"version" : "5.0.0",
"name" : "UnknownSystem",
"title" : "Unknown System",
"status" : "active",
"experimental" : true,
"date" : "2023-04-01",
"publisher" : "FHIR Project",
"compose" : {
"include" : [{
"system" : "http://hl7.org/fhir/test/CodeSystem/simpleX"
}]
}
}
and
{
"resourceType" : "Parameters",
"parameter" : [{
"name" : "url",
"valueUri" : "http://hl7.org/fhir/test/ValueSet/unknown-system"
},{
"name" : "code",
"valueCode" : "code1"
},{
"name" : "system",
"valueUri" : "http://hl7.org/fhir/test/CodeSystem/simpleX"
}]
}
This is a pretty common situation in the IG world, and the IG publisher considers this a warning not an error.
but it's very clearly an error validating
{
"resourceType" : "Parameters",
"parameter" : [{
"name" : "issues",
"resource" : {
"resourceType" : "OperationOutcome",
"issue" : [{
"severity" : "error",
"code" : "not-found",
"details" : {
"text" : "The CodeSystem http://hl7.org/fhir/test/CodeSystem/simpleX is unknown"
},
"location" : ["code.system"]
}]
}
},
{
"name" : "message",
"valueString" : "The CodeSystem http://hl7.org/fhir/test/CodeSystem/simpleX is unknown"
},
{
"name" : "result",
"valueBoolean" : false
}]
}
... only... the validator decides that this is one of those cases because there's a parameter
"cause" : "not-found"
where cause is taken from OperationOutcome.issue.type.
but I removed cause from the returned parameters, and now I have no way to know that the valueset validation failed because of an unknown code system
the case above says that there is an unknown code system, but it doesn't explicitly say that the result is false because of the unknown code system.
This is a "fail to validate" rather than a "validate = false" situation -- I'd expect a 4XX series error from the Tx and an OperationOutcome about the CodeSystem not found.
Will that work?
I'm pretty sure Ontoserver does something like this
I don't think that's right - other issues can still be detected and returned
So I don't follow why you have removed cause?
it wasn't a standard parameter. And it was pretty loose anyway
it's kind of weird to just put 'cause : not found' and assume everyone knows that means validation failed because the code system needed to determine value set membership wasn't found
I need a better way to say it...
you also have location: ["code.system"] and the details.text
I do have that, but I'm going to be second guessing the server to decide whether that's the cause, or an incidental finding
Does this come down to identifying which one (or more?) of the issues was the trigger for result = false?
yes that's one way to look at it
Can it be as simple as "all the issues with severity = error"?
no I don't think it can. There's plenty of scope for issues with severity = error whether or not the code is in the value set
Doesn't that depend on how you interpret things? For example, if validating a codeableConcept, then you validate each contained Coding. If they all fail, then each contributes an issue with severity of error, but if any passes, then the issues from the others would just be warning?
This seems to be in line with
Indicates how relevant the issue is to the overall success of the action
I certainly don't think levels work like that. If a system is wrong, or a code is invalid, then that's an error
at the local level, but not at the level of the overall operation
issue.code has this comment:
For example, code-invalid might be a warning or error, depending on the context
really?
really
Comments:
Code values should align with the severity. For example, a code of forbidden generally wouldn't make sense with a severity of information or warning. Similarly, a code of informational would generally not make sense with a severity of fatal or error. However, there are no strict rules about what severities must be used with which codes. For example, code-invalid might be a warning or error, depending on the context
(my emphasis)
oh I believed you. And I probably did write that. But I've noodled on it for a couple of hours, and in the context of the validator, invalid codes are invalid codes, whether they're in the scope of the value set or not.
and on further noodling, I think this is OK to be an extension for tx.fhir.org - the notion of 'it's not an error because the code system is unknown' is kind of centric to the base tx service, and not to additional ones. So I'm going with a parameter name of x-caused-by-unknown-system for the link, and the tests won't require that
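(Under that approach, the response shown above would presumably gain an extra parameter along these lines - a sketch of the intent only; whether the value is a canonical or a plain uri is an assumption here:)
{
  "name" : "x-caused-by-unknown-system",
  "valueCanonical" : "http://hl7.org/fhir/test/CodeSystem/simpleX"
}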
also @Jonathan Payne
@Grahame Grieve Looks nice... :+1:
other/codesystem-dual-filter.json is invalid -- it has a duplicate code: AA
Also, HAPI is complaining about language/codesystem-de-multi.json:
HAPI-0450: Failed to parse request body as JSON resource. Error was: HAPI-1825: Unknown element 'title:en' found during parse
hmm
hapi probably doesn't support JSON 5 either. can you try commenting that line out?
So, the testing/comparison aspect is complaining about / rejecting extensions that Ontoserver includes that are not part of the expected result.
e.g.,
Group simple-cases
Test simple-expand-all: Fail
properties differ at .expansion.contains[1]: missing property extension
Test simple-expand-enum: Fail
properties differ at .expansion.contains[1]: missing property extension
Test simple-expand-isa: Fail
properties differ at .expansion.contains[0]: missing property extension
Test simple-expand-prop: Fail
properties differ at .expansion.contains[0]: missing property extension
Test simple-expand-regex: Fail
properties differ at .expansion.contains[1]: missing property extension
what extensions are you including?
One is http://ontoserver.csiro.au/profiles/expansion
what is it?
Why does that matter? It's an extension, if you don't understand it you can (should) ignore it.
(It's actually legacy from DSTU2_1 to indicate inactive status)
it doesn't matter for the tests, no, but I'm just interested for the sake of being nosy
:laughing:
I'll think about the testing part
@Michael Lawley https://github.com/hapifhir/org.hl7.fhir.core/pull/1303
I have rewritten these two pages:
I have removed the section on registration - I'm rewriting that after talking to @Michael Lawley, more on that soon
I reconciled the two pages and changed the way the web source reference works
@Grahame Grieve Hi, I am running the fhir tx testsuite against Snowstorm. For some tests, there are complaints about a missing "id" property, and the test fails. Turns out that the resource that is returned contains an "id" whereas the "reference" resource does not contain an "id". Is this a real "fail", or is the "id" property supposed to be optional?
Expected:
{
"$optional-properties$" : ["date", "publisher", "compose"],
"resourceType" : "ValueSet",
"url" : "http://hl7.org/fhir/test/ValueSet/simple-all",
"version" : "5.0.0",
"name" : "SimpleValueSetAll",
"title" : "Simple ValueSet All",
"status" : "active",
"experimental" : false,
"date" : "2023-04-01",
"publisher" : "FHIR Project",
"compose" : {
"include" : [{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple"
}]
},
"expansion" : {
"identifier" : "$uuid$",
"timestamp" : "$instant$",
"total" : 7,
"parameter" : [{
"name" : "excludeNested",
"valueBoolean" : true
},
{
"name" : "used-codesystem",
"valueUri" : "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
},
{
"$optional$" : true,
"name" : "version",
"valueUri" : "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
}],
"contains" : [{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code1",
"display" : "Display 1"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"abstract" : true,
"inactive" : true,
"code" : "code2",
"display" : "Display 2"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code2a",
"display" : "Display 2a"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code2aI",
"display" : "Display 2aI"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code2aII",
"display" : "Display 2aII"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code2b",
"display" : "Display 2b"
},
{
"system" : "http://hl7.org/fhir/test/CodeSystem/simple",
"code" : "code3",
"display" : "Display 3"
}]
}
}
Actual:
{
"resourceType": "ValueSet",
"id": "simple-all",
"url": "http://hl7.org/fhir/test/ValueSet/simple-all",
"version": "5.0.0",
"name": "SimpleValueSetAll",
"title": "Simple ValueSet All",
"status": "active",
"experimental": false,
"publisher": "FHIR Project",
"expansion": {
"id": "f4b71bf6-3ef4-4c30-a4ea-ab3f4ae3dad6",
"timestamp": "2024-10-09T15:08:23+02:00",
"total": 7,
"offset": 0,
"parameter": [
{
"name": "version",
"valueUri": "http://hl7.org/fhir/test/CodeSystem/simple|0.1.0"
},
{
"name": "displayLanguage",
"valueString": "en"
}
],
"contains": [
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code1",
"display": "Display 1"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2",
"display": "Display 2"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2a",
"display": "Display 2a"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2aI",
"display": "Display 2aI"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2aII",
"display": "Display 2aII"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code2b",
"display": "Display 2b"
},
{
"system": "http://hl7.org/fhir/test/CodeSystem/simple",
"code": "code3",
"display": "Display 3"
}
]
}
}
it's not an error to return a populated id element. It doesn't even have to be the same id. Probably it shouldn't be populated, but that's a style question
which means that the test is wrong, really
I updated the tests to allow id, but you'll have to wait for the release of a new validator to use them, unfortunately
about 24 hours
As you may know from other messages, I am investigating the options to make Snowstorm compliant with the fhir tx testsuite. As our reference server for terminology in Belgium is an Ontoserver (now 6.20.1 since yesterday), and I want the Snowstorm behaviour to be as similar as possible to the Ontoserver behaviour, I also ran the fhir tx testsuite against Ontoserver. I got a 16% failure rate.
I know from #Announcements > Using Ontoserver with Validator / IG Publisher that Ontoserver is considered compatible. How should I interpret the 16% failed tests? Is any software allowed to fail 16% of the tests? Any 16%, or only that specific 16%? What is also strange is that the highest number of failures is in the "simple-cases" test group. Is the "simple-cases" test group the test of _basic_ behaviour, and are these tests of a greater weight? What does this say about the interplay between the IG Publisher and the tested terminology server?
I don't know about 16% failure. What version are you running? I test the public ontoserver everyday and get 100% pass rate
What is also strange is that the highest number of failures is in the "simple-cases" test group
hmm. maybe you need to set a parameter for flat rather than nested? Ontoserver doesn't do nested expansions, and that's a setting you pass to the test cases
try -mode flat
Ah yes, I had forgotten about that option
@Michael Lawley @Grahame Grieve Errors have gone down to 10% with -mode flat. But that is still a lot... Any other suggestions? Since there are 'only' 21 failed testcases now, I'll post a list of their names here.
{
"name" : "simple-expand-isa-o2",
"status" : "fail",
"message" : "properties differ at .expansion.contains[0]: missing property abstract"
},
{
"name" : "simple-expand-isa-c2",
"status" : "fail",
"message" : "properties differ at .expansion: missing property offset"
},
{
"name" : "simple-expand-isa-o2c2",
"status" : "fail",
"message" : "string property values differ at .expansion.contains[0].code\nExpected:\"code2aI\" for simple-expand-isa-o2c2\nActual :\"code2a\""
},
{
"name" : "simple-lookup-1",
"status" : "fail",
"message" : "string property values differ at .parameter[6].part[2].valueCode\nExpected:\"code2aI\" for simple-lookup-1\nActual :\"code2aII\""
},
{
"name" : "simple-lookup-2",
"status" : "fail",
"message" : "array item count differs at .parameter[9].part\nExpected:\"2\" for simple-lookup-2\nActual :\"3\""
},
{
"name" : "validation-simple-code-bad-valueSet",
"status" : "fail",
"message" : "array item count differs at .issue\nExpected:\"1\" for validation-simple-code-bad-valueSet\nActual :\"2\""
},
{
"name" : "validation-simple-coding-bad-valueSet",
"status" : "fail",
"message" : "array item count differs at .issue\nExpected:\"1\" for validation-simple-coding-bad-valueSet\nActual :\"2\""
},
{
"name" : "validation-simple-codeableconcept-bad-valueSet",
"status" : "fail",
"message" : "array item count differs at .issue\nExpected:\"1\" for validation-simple-codeableconcept-bad-valueSet\nActual :\"2\""
},
{
"name" : "validation-simple-codeableconcept-bad-version2",
"status" : "fail",
"message" : "string property values differ at .parameter[1].resource.issue[1].details.text\nExpected:\"A definition for CodeSystem 'http://hl7.org/fhir/test/CodeSystem/simpleXX' version '1.0.4234' could not be found, so the code cannot be validated. Valid versions: []\" for validation-simple-codeableconcept-bad-version2\nActual :\"A definition for CodeSystem 'http://hl7.org/fhir/test/CodeSystem/simpleXX|1.0.4234' could not be found, so the code cannot be validated\""
},
{
"name" : "validation-simple-code-bad-language",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"6\" for validation-simple-code-bad-language\nActual :\"4\""
},
{
"name" : "validation-simple-coding-bad-language",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"6\" for validation-simple-coding-bad-language\nActual :\"4\""
},
{
"name" : "validation-simple-coding-bad-language-header",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"6\" for validation-simple-coding-bad-language-header\nActual :\"4\""
},
{
"name" : "validation-simple-coding-bad-language-vs",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"6\" for validation-simple-coding-bad-language-vs\nActual :\"4\""
},
{
"name" : "validation-simple-coding-bad-language-vslang",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"6\" for validation-simple-coding-bad-language-vslang\nActual :\"4\""
},
{
"name" : "validation-simple-codeableconcept-bad-language",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"7\" for validation-simple-codeableconcept-bad-language\nActual :\"5\""
},
{
"name" : "big-echo-no-limit",
"status" : "fail",
"message" : "string property values differ at .resourceType\nExpected:\"OperationOutcome\" for big-echo-no-limit\nActual :\"ValueSet\""
},
{
"name" : "notSelectable-reprop-true",
"status" : "fail",
"message" : "number property values differ at .expansion.total\nExpected:\"1\" for notSelectable-reprop-true\nActual :\"0\""
},
{
"name" : "notSelectable-reprop-false",
"status" : "fail",
"message" : "number property values differ at .expansion.total\nExpected:\"1\" for notSelectable-reprop-false\nActual :\"0\""
},
{
"name" : "notSelectable-reprop-true-true",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"5\" for notSelectable-reprop-true-true\nActual :\"6\""
},
{
"name" : "notSelectable-reprop-false-false",
"status" : "fail",
"message" : "array item count differs at .parameter\nExpected:\"5\" for notSelectable-reprop-false-false\nActual :\"6\""
},
{
"name" : "act-class",
"status" : "fail",
"message" : "properties differ at .expansion.contains[10]: missing property property"
}
well, that's weird. like I said, 100% on the public ontoserver. is that what you get testing that one?
No, that is what I get testing the Belgian one. Sadly I cannot give you the URL, because it is not publicly accessible for the moment. But it's newly setup, so its setup might differ from the Australian setup.
well, how about you test the public Australian one. If that passes, then you have a baseline. Btw, the output will point you at a temp directory where you can use a diff program to look at the difference between expected and actual
@Michael Lawley I know you are in contact with our terminology man David Op de Beeck and his team. Could you suggest any possible modifications in the Belgian setup to get the test cases working?
it'd be a lot easier if you'd look at the differences and tell us why... a language thing?
that seems most likely to me
And is there a tad of documentation available on that topic?
which topic?
I mean from the Ontoserver side, how to pass the tests...
@Grahame Grieve
These are the cli options I am using now:
-txTests -source ./tx -output ./output -tx https://belgian.tx.server -version 4.0 -mode flat
Do I find the temp directory in test-results.json? Or in the stdout/stderr of the validator_cli.jar?
your output should start something like this:
Run terminology service Tests
Source for tests: /Users/grahamegrieve/work/test-cases/tx
Output Directory: /Users/grahamegrieve/temp/local.fhir.org
Term Service Url: http://local.fhir.org
External Strings: false
Test Exec Modes: []
Tx FHIR Version: true
look in the output directory
Just catching up with this thread. We too get an error for:
{
"name" : "simple-expand-isa-o2",
"status" : "fail",
"message" : "properties differ at .expansion.contains[0]: missing property abstract"
},
for example, because Ontoserver includes "abstract = true" for "code2" (implied because it has the property notSelectable = true"). The expected response in the test doesn't include this, but I think should be updated (in line with a number of the other expected responses in the "simple" set).
@Grahame Grieve do you perhaps have a "quarantine" list that whitelists a bunch of reported (but wrong/trivial) failures?
not to my knowledge. I'll investigate
hmm so this is where I eat humble pie. it turns out that I don't check the outcomes at all. Here's the code in my JUnit test cases:
String err = tester.executeTest(setup.suite, setup.test, modes);
Assertions.assertTrue(true); // we don't care what the result is, only that we didn't crash
and when I looked at that in surprise, I remembered what I was thinking. You (@Michael Lawley) might recall that occasionally, the tester crashed testing ontoserver. so I added the ontoserver tests to ensure that it didn't crash on you (which it hasn't since I added the tests)
But I erroneously got it in my mind that it was testing ontoserver, as is evident earlier in the thread. Now that I've changed it, I'm getting 100 failures
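Presumably the change amounts to asserting on what executeTest returns instead of asserting true; the return convention assumed below (a failure message that is null when the test passes) is a guess, not something stated in the thread:
// Hypothetical replacement for the assertion above - assumes executeTest
// returns a failure message, or null when the test passed.
String err = tester.executeTest(setup.suite, setup.test, modes);
Assertions.assertNull(err, "terminology test failed: " + err);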
I'll dig into them over the weekend. @Bart Decuypere my apologies for giving you the run around on this
100?!? Are you setting mode flat?
yep
some of them are tx.fhir.org-only tests, don't know why they're running, but that's only maybe 20. I haven't looked at the others
how often do you run them?
Every build runs the tests, but we have a quarantine list so that certain failures are tolerated.
I can share that - the intention had been to work through that list and get the corner cases sorted out (some have crept in as new tests were added), but with Jim on paternity leave we've not had the bandwidth
well, we better work through them then
and congratulations @Jim Steel btw
I'm not pointing at the Ontoserver message file. I better do that
what is this?
"extension" : [{
"extension" : [{
"url" : "inactive",
"valueBoolean" : true
}],
"url" : "http://ontoserver.csiro.au/profiles/expansion"
}],
lots of the fails are because of this
Yes, it's a "private" extension, kept for backward compatibility, that predates the R5 property stuff.
But it should be ignored for test purposes; unknown extensions that are not must-understand are safe to ignore
it didn't used to be present
and this one is weird given that there's also inactive = true directly
Grahame Grieve said:
@Bart Decuypere my apologies for giving you the run around on this
No offense taken, I've seen worse in my life...
I am still eager to see the actual differences:
Java: 18 from C:\openjdk-18\jdk-18 on amd64 (64bit). 8148MB available
Paths: Current = C:\Temp\toy\fhir-test-cases, Package Cache = C:\Users\eh089\.fhir\packages
Params: -txTests -source ./tx -output ./output -tx https://belgian.tx.server -version 4.0 -mode flat
Run terminology service Tests
Source for tests: ./tx
Output Directory: ./output
Term Service Url: https://belgian.tx.server
External Strings: false
Test Exec Modes: [flat]
Tx FHIR Version: 4.0
Load Tests from ./tx
I can't find any files to diff in the output directory, only test-results.json
really? weird.
BTW: I forgot to paste the version:
FHIR Validation tool Version 6.3.32 (Git# 54bf319161d4). Built 2024-10-14T06:04:19.383Z (3 days old)
if you run it with these parameters:
-txTests -source /Users/grahamegrieve/work/test-cases/tx -tx https://tx.ontoserver.csiro.au/fhir -mode flat
you should get this in your output directory:
OK, I'll try...
The -output option will need an overhaul, I presume... Without it, it works as you described. The percentage of failures, however, is identical (10%).
@Michael Lawley So the Australian and the Belgian Ontoserver seem to have the same setup with regard to the FHIR tx testcases.
I'm not understanding that bit about the -output option.
If I specify the -output option, the "actual" files do not get logged in the output directory, but in another directory (which is not visible in the stdout/stderr). Only the test-results.json file is written to the directory specified in the -output option.
it works for me? Weird. I don't know how to investigate that
no I happened to have it set to the value it's hardcoded to use. Fixed next release
@Bart Decuypere so @Michael Lawley and I have been iterating on this in our dev versions, and we've resolved things and Ontoserver is back to 100% pass rate, but now we have to go through our various release processes, so it won't be immediately there for everyone, sorry
(and thanks @Michael Lawley!)
@Grahame Grieve and @Michael Lawley , a sincere thanks for your follow up. I'll expect a ping with version number if there are compatible versions available for both softwares.
the next release of the validator, which will be 6.4.0
though I don't know when that'll be.. sometime later this week?
I can't speak for Ontoserver
It will be after the validator is released, hopefully not long
@Bart Decuypere the pre-release version of Ontoserver deployed at https://r4.ontoserver.csiro.au/fhir passes all tests - you need to call it with: -mode flat -mode ontoserver
thanks @Michael Lawley and I'm glad we sorted that.
@Bart Decuypere so that things are clear, the tests present somewhat of a challenge that we're still thinking about, because in order to test the API, the tests require the server to support features that are not required of a server
in particular, the tests work by including specific code system(s) that the operations use, in order to ensure that the outputs from the operations are predictable and testable. But this requires that a server accept ad-hoc code systems provided as a parameter to the request, and that is not a mandatory feature for code-system-specific servers.
But I don't have the budget to cover writing (and constantly rewriting) tests against specific code systems whose outputs change as new versions of those code systems are released.
@Michael Lawley @Grahame Grieve Thanks for your efforts. I'll thoroughly go through them after my holidays. It's a pity about the temporary nature of the results / the unofficial feature used, as it somewhat defeats the purpose of the tests, viz. defining an official API.
Ontoserver® v6.21.0 passed all HL7 terminology service tests (modes flat;ontoserver, tests v1.6.0, runner v6.4.0)
I don't believe this is a temporary result; we would not expect later versions of Ontoserver to fail the v1.6.0 test suite unless a spec-violating bug is found in those tests.
all good until the next issue ;-)
The Agence du Numérique en Santé (ANS) Terminology Server is now part of the terminology eco-system, authoritative for https://smt.esante.gouv.fr* and https://mos.esante.gouv.fr*
@Michael Lawley @Grahame Grieve I ran the tests with the new validator_cli.jar (6.4.0) and the latest commit on master of the fhir-test-cases (269cf3a5), and I still get one error:
{
"name" : "big-echo-no-limit",
"status" : "fail",
"message" : "Response Code fail: should be '4xx' but is '200'"
}
against what server?
I ran with these options:
Params: -txTests -source ./tx -tx https://tx.ontoserver.csiro.au/fhir -mode flat -mode snowstorm
@Michael Lawley this is for you, I think
@Bart Decuypere you need -mode ontoserver not -mode snowstorm
of course... I definitely need a new pair of glasses! Thanks a lot for the effort!
When is the Ontoserver release scheduled?
It happened on the weekend - 6.21.0 is now out
I'll ask our team to upgrade! Thanks again!
New Publication: STU 1 of the FHIR Shorthand Implementation Guide: http://hl7.org/fhir/uv/shorthand/STU1
New Publication: STU 1 of the FHIR Da Vinci Unsolicited Notifications Implementation Guide: http://hl7.org/fhir/us/davinci-alerts/STU1
New Publication: STU 1.1 of the C-CDA on FHIR Implementation Guide: http://hl7.org/fhir/us/ccda/STU1.1
New Publication: STU 1 of the Vital Records Mortality and Morbidity Reporting FHIR Implementation Guide: http://hl7.org/fhir/us/vrdr/STU1/index.html
New Publication: STU1 of the CARIN Consumer Directed Payer Data Exchange (CARIN IG for Blue Button®) FHIR Implementation Guide: http://hl7.org/fhir/us/carin-bb/STU1
New Publication: STU1 of the HL7 Payer Data Exchange (PDex) Payer Network, Release 1 - US Realm Implementation Guide: hl7.org/fhir/us/davinci-pdex-plan-net/STU1
New Publication: STU1 of the HL7 Prior-Authorization Support (PAS), Release 1- US Realm FHIR® Implementation Guide: http://hl7.org/fhir/us/davinci-pas/STU1
New Publication: STU1 of the HL7 Payer Data Exchange (PDex), Release 1 - US Realm FHIR® Implementation Guide: http://hl7.org/fhir/us/davinci-pdex/STU1
New Publication: STU1 of the HL7 Da Vinci - Coverage Requirements Discovery (CRD), Release 1- US Realm FHIR® Implementation Guide: http://hl7.org/fhir/us/davinci-crd/STU1
New Publication: STU1 of the HL7 FHIR® Implementation Guide: Payer Coverage Decision Exchange, R1 - US Realm: http://hl7.org/fhir/us/davinci-pcde/STU1
New Publication: STU1 of the FHIR® Implementation Guide: Documentation Templates and Payer Rules (DTR), Release 1- US Realm: http://hl7.org/fhir/us/davinci-dtr/STU1/index.html
New Publication: STU1 of the HL7 FHIR® Implementation Guide: Risk Based Contract Member Identification, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-atr/STU1
New Publication: STU1 of the HL7 FHIR® Implementation Guide: Pharmacist Care Plan Document, Release 1 - US Realm: http://hl7.org/fhir/us/phcp/STU1
New Publication: STU1 of the HL7 FHIR® Implementation Guide: Clinical Guidelines, Release 1: http://hl7.org/fhir/uv/cpg/STU1
Newly Posted: FHIR R4B Ballot #1: http://hl7.org/fhir/2021Mar
New Publication: Normative Release 1 of the HL7 Cross-Paradigm Specification: Clinical Quality Language (CQL), Release 1: http://cql.hl7.org/N1
New Publication: STU Release 1 of the HL7/NCPDP FHIR® Implementation Guide: Specialty Medication Enrollment, Release 1 - US Realm: http://hl7.org/fhir/us/specialty-rx/STU1.
New Publication: STU Release 3 of HL7 FHIR® Implementation Guide: Data Exchange for Quality Measures STU3 for FHIR R4: http://hl7.org/fhir/us/davinci-deqm/STU3
Lynn Laakso said:
New Publication: STU Release 3 of HL7 FHIR® Implementation Guide: Data Exchange for Quality Measures STU3 for FHIR R4: http://hl7.org/fhir/us/cqfmeasures/STU3
New Publication: STU Release 1 of the HL7 Immunization Decision Support Forecast (ImmDS) Implementation Guide: http://hl7.org/fhir/us/immds/STU1
New Publication: STU Release 4 of the HL7 FHIR® US Core Implementation Guide STU 4 Release 4.0.0: http://hl7.org/fhir/us/core/STU4
File not found ;-)
well that's not supposed to happen
it'll work now
The change log appears to be empty? http://hl7.org/fhir/us/core/history.html
Grahame has to fix that, it'll be 12 hours
fixed
New Publication: STU Update Release 1.1 of HL7 FHIR® Implementation Guide: Consumer Directed Payer Data Exchange (CARIN IG for Blue Button®), Release 1 - US Realm: http://www.hl7.org/fhir/us/carin-bb/STU1.1
I don't know as it matters but the directory of published versions doesn't show this version. http://hl7.org/fhir/us/carin-bb/history.html
it does for me. You might have a caching problem
New Publication: STU Update Release 1.1 of HL7 FHIR® Profile: Occupational Data for Health (ODH), Release 1 - US Realm: http://hl7.org/fhir/us/odh/STU1.1
New Publication: STU Release 1 of HL7 FHIR® Implementation Guide: Vital Records Common FHIR Profile Library, Release 1: http://hl7.org/fhir/us/vr-common-library/STU1
New publication: STU Release 1 of HL7 FHIR® Implementation Guide: NHSN Inpatient Medication COVID-19 Administration Reports, Release 1- US Realm: http://hl7.org/fhir/us/nhsn-med-admin/STU1
New Publication: STU Release 1 of HL7 FHIR® Implementation Guide: NHSN Adverse Drug Event - Hypoglycemia Report, Release 1- US Realm: http://hl7.org/fhir/us/nhsn-ade/STU1
New Publication: STU Update (STU1.1) of HL7 FHIR® Implementation Guide: DaVinci Payer Data Exchange US Drug Formulary, Release 1 - US Realm: http://hl7.org/fhir/us/Davinci-drug-formulary/STU1.1
New Publication: STU Publication of HL7 FHIR® Implementation Guide: Vital Records Birth and Fetal Death Reporting, Release 1 - US Realm: http://hl7.org/fhir/us/bfdr/STU1
New Publication: STU Publication of HL7 FHIR® Implementation Guide: Dental Data Exchange, Release 1 - US Realm: http://hl7.org/fhir/us/dental-data-exchange/STU1
New Publication: STU Publication of HL7 FHIR® Implementation Guide: Post-Acute Care Cognitive Status, Release 1- US Realm: http://hl7.org/fhir/us/pacio-cs/STU1
New Publication: STU Publication of HL7 FHIR® Implementation Guide: Post-Acute Care Functional Status, Release 1- US Realm: http://hl7.org/fhir/us/pacio-fs/STU1
New Publication: Release 4.0.1 of the CQF FHIR® Implementation Guide: Clinical Quality Framework Common FHIR Assets: http://fhir.org/guides/cqf/common/4.0.1/. (note: this is not a guide published through the HL7 consensus process, but according to the FHIR Community Process, so it's posted on fhir.org)
STU Update Publication of HL7 FHIR® Implementation Guide: Prior-Authorization Support (PAS), Release 1- US Realm: http://hl7.org/fhir/us/davinci-pas/STU1.1
STU Publication of HL7 FHIR Implementation Guide: minimal Common Oncology Data Elements (mCODE) Release 1 STU 2 – US Realm: http://hl7.org/fhir/us/mcode/STU2
STU Publication of HL7 FHIR® Implementation Guide: Electronic Case Reporting (eCR), Release 2: http://hl7.org/fhir/us/ecr/STU2
STU Update Publication of HL7 FHIR® Profile: Quality, Release 1 STU 4.1- US Realm: http://hl7.org/fhir/us/qicore/STU4.1
STU Publication of HL7 FHIR Implementation Guide: Profiles for ICSR Transfusion and Vaccination Adverse Event Detection and Reporting, Release 1 - US Realm: www.hl7.org/fhir/us/icsr-ae-reporting/STU1
Normative Publication of HL7 FHIR® Implementation Guide: FHIR Shorthand, Release 2: http://hl7.org/fhir/uv/shorthand/N1
STU Publication of HL7 FHIR® Structured Data Capture (SDC) Implementation Guide, Release 3: http://hl7.org/fhir/uv/sdc/STU3
STU Publication of HL7 FHIR® Implementation Guide: Clinical Data Exchange (CDex), Release 1- US Realm: http://hl7.org/fhir/us/davinci-cdex/STU1
STU Publication of HL7 FHIR® Implementation Guide: Health Record Exchange (HRex) Framework, Release 1- US Realm: http://hl7.org/fhir/us/davinci-hrex/STU1
STU Errata Publication of HL7 FHIR® Profile: Quality, Release 1 - US Realm STU 4.1.1: http://hl7.org/fhir/us/qicore/STU4.1.1
@David Pyke and @John Moehrke are pleased to announce the release of HotBeverage #FHIR Implementation Guide release April 1st - Based on IETF RFC 2324 allows for the fulfillment of a device request for an artfully brewed caffeinated beverage. http://fhir.org/guides/acme/HotBeverage/1.4.2022
STU Update Publication for HL7 FHIR® Implementation Guide: Payer Data Exchange (PDex) Payer Network, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-pdex-plan-net/STU1.1
STU Publication of HL7 FHIR® Implementation Guide: Quality Measures, Release 1 STU 3 - US Realm: http://hl7.org/fhir/us/cqf-measures/STU3
Informative Publication of HL7 EHRS-FM Release 2.1 – Pediatric Care Health IT Functional Profile Release 1 – US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=593
STU Publication of HL7 FHIR® IG: SMART Web Messaging Implementation Guide, Release 1: http://hl7.org/fhir/uv/smart-web-messaging/STU1
STU Publication of HL7 FHIR® Implementation Guide: Clinical Genomics, STU 2: http://hl7.org/fhir/uv/genomics-reporting/STU2
STU Publication of HL7 Domain Analysis Model: Vital Records, Release 5- US Realm: see http://www.hl7.org/implement/standards/product_brief.cfm?product_id=466
STU Publication of HL7 FHIR® Implementation Guide: Personal Health Device (PHD), Release 1: http://hl7.org/fhir/uv/phd/STU1
STU Publication of HL7 CDA® R2 IG: C-CDA Templates for Clinical Notes STU Companion Guide, Release 3 – US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=447
STU Publication of HL7 FHIR® US Core Implementation Guide STU5 Release 5.0.0: http://hl7.org/fhir/us/core/STU5
STU Update Publication of HL7 CDA® R2 Implementation Guide: National Health Care Surveys (NHCS), Release 1, STU Release 2.1 and STU Release 3.1 – US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=385
STU Publication of HL7 FHIR® Implementation Guide: Risk Adjustment, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-ra/STU1
Informative Guidance Publication of HL7 Short Term Solution - V2: SOGI Data Exchange Profile: http://www.hl7.org/permalink/?SOGIGuidance
Errata Publication of CDA® R2.1 (HL7 Clinical Document Architecture, Release 2.1): https://www.hl7.org/documentcenter/private/standards/cda/2019CDAR2_1_2022JUNerrata.zip
Errata Publication of US Core STU5 Release 5.0.1: http://hl7.org/fhir/us/core/STU5.0.1
STU Publication of HL7 FHIR® Implementation Guide: Digital Insurance Card, Release 1 - US Realm: http://hl7.org/fhir/us/insurance-card/STU1
STU Publication of HL7 FHIR® Implementation Guide: Subscription R5 Backport, Release 1: http://hl7.org/fhir/uv/subscriptions-backport/STU1
STU Update Publication of HL7 CDA® R2 Implementation Guide: Reportability Response, Release 1 STU Release 1.1- US Realm: http://www.hl7.org/implement/standards/product_brief.cfm?product_id=470
STU Update Publication Request of HL7 CDA® R2 Implementation Guide: Public Health Case Report - the Electronic Initial Case Report (eICR) Release 2, STU Release 3.1 - US Realm: http://www.hl7.org/implement/standards/product_brief.cfm?product_id=436
Informative Publication of HL7 FHIR® Implementation Guide: COVID-19 FHIR Clinical Profile Library, Release 1 - US Realm: http://hl7.org/fhir/us/covid19library/informative1
STU Publication of HL7 CDA® R2 Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports, Release 4, STU 2 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=570
STU Publication of HL7 FHIR® Implementation Guide: Clinical Data Exchange (CDex), Release 1 STU1.1.0 - US Realm: http://hl7.org/fhir/us/davinci-cdex/STU1.1
STU Update Publication of HL7 FHIR Profile: Occupational Data for Health (ODH), Release 1.2: http://hl7.org/fhir/us/odh/STU1.2
STU Publication of HL7 FHIR® Implementation Guide: Payer Data Exchange (PDex) Drug Formulary, Release 1 STU2 - US Realm: http://hl7.org/fhir/us/davinci-drug-formulary/STU2
STU Publication of HL7 FHIR® Implementation Guide: Vital Records Death Reporting (VRDR), Release 1 STU2 - US Realm: http://hl7.org/fhir/us/vrdr/STU2
STU Update Publication of HL7 FHIR® Implementation Guide: Electronic Case Reporting (eCR), Release 2.1 - US Realm: http://hl7.org/fhir/us/ecr/STU2.1
R5 Ballot is published. http://hl7.org/fhir/2022Sep/
STU Publication of HL7 FHIR® Implementation Guide: Vital Signs, Release 1- US Realm: http://hl7.org/fhir/us/vitals/STU1/
STU Publication of HL7 Cross Paradigm Specification: CDS Hooks, Release 1: https://cds-hooks.hl7.org/2.0/
New release of HL7 Terminology (THO) v4.0.0: https://terminology.hl7.org/4.0.0. (Thanks @Marc Duteau)
STU Publication of HL7 FHIR® Implementation Guide: Hybrid/Intermediary Exchange, Release 1- US Realm: http://www.hl7.org/fhir/us/exchange-routing/STU1
Errata publication of C-CDA (HL7 CDA® R2 Implementation Guide: Consolidated CDA Templates for Clinical Notes - US Realm): https://www.hl7.org/implement/standards/product_brief.cfm?product_id=492
STU Publication of HL7 FHIR® Implementation Guide: Security for Registration, Authentication, and Authorization, Release 1- US Realm: http://hl7.org/fhir/us/udap-security/STU1/
STU Publication of HL7 FHIR® Implementation Guide: FHIR for FAIR, Release 1: http://hl7.org/fhir/uv/fhir-for-fair/STU1
STU Publication of HL7 FHIR® Implementation Guide: PACIO Re-assessment Timepoints, Release 1 - US Realm: http://hl7.org/fhir/us/pacio-rt/STU1
STU Publication of HL7 FHIR® Implementation Guide: Medicolegal Death Investigation (MDI), Release 1 - US Realm: http://hl7.org/fhir/us/mdi/STU1
STU Publication of HL7 CDA® R2 Implementation Guide: ePOLST: Portable Medical Orders About Resuscitation and Initial Treatment, Release 1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=600
STU Publication of HL7 Version 2.5.1 Implementation Guide: Laboratory Results Interface, Release 1 STU Release 4 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=279
STU Publication of HL7 Version 2.5.1 Implementation Guide: Laboratory Orders (LOI) from EHR, Release 1, STU Release 4 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=152
STU Publication of HL7 Version 2 Implementation Guide: Laboratory Value Set Companion Guide, Release 2- US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=413
New release of HL7 Terminology (THO) v5.0.0: https://terminology.hl7.org/5.0.0
This also means that the THO freeze has been lifted.
You can view the UTG tickets that were implemented in this release using the following dashboard and selecting 5.0.0 in the first pie chart. https://jira.hl7.org/secure/Dashboard.jspa?selectPageId=16115
Informative Publication of HL7 V2 Implementation Guide Quality Criteria, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=608
STU Publication of HL7 FHIR® Implementation Guide: SDOH Clinical Care, Release 2.0 - US Realm: http://hl7.org/fhir/us/sdoh-clinicalcare/STU2
STU Update Publication of HL7 FHIR® Implementation Guide: Data Exchange for Quality Measures, STU3.1 for FHIR R4 - US Realm: http://hl7.org/fhir/us/davinci-deqm/STU3.1/
STU Update Publication of HL7 FHIR® Implementation Guide: International Patient Summary, Release 1.1: http://hl7.org/fhir/uv/ips/STU1.1
STU Publication of HL7 FHIR® Implementation Guide: Consumer-Directed Payer Exchange (CARIN IG for Blue Button®), Release 1 STU2: http://hl7.org/fhir/us/carin-bb/STU2
STU Publication Request for HL7 Domain Analysis Model: Nutrition Care, Release 3 STU 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=609
Errata Publication of HL7 CDA® R2 Implementation Guide: Quality Reporting Document Architecture - Category I (QRDA I) - US Realm, STU 5.3: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=35
Snapshot3 of FHIR Core spec: http://hl7.org/fhir/5.0.0-snapshot3. This is published to support the Jan 2023 connectathon, and help prepare for the final publication of R5, which is still scheduled for March 2023
Informative Publication of HL7 EHRS-FM R2.0.1: Usability Functional Profile, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=611
STU Publication of NHSN Healthcare Associated Infection (HAI) Reports Long Term Care Facilities (HAI-LTCF-FHIR), Release 1 - US Realm: http://hl7.org/fhir/us/hai-ltcf/STU1
STU Update Publication of HL7 FHIR® Implementation Guide: Subscription R5 Backport, Release 1, STU 1.1: http://hl7.org/fhir/uv/subscriptions-backport/STU1.1/
New release of HL7 Terminology (THO) v5.1.0: https://terminology.hl7.org/5.1.0
The Final Draft version of FHIR R5 is now published for QA : http://hl7.org/fhir/5.0.0-draft-final. There's a two week period to do QA on it. In particular, we'd like to focus on the invariants - there'll be another announcement about that shortly
STU Update Publication of minimal Common Oncology Data Elements (mCODE) Implementation Guide 2.1.0 - STU 2.1: http://hl7.org/fhir/us/mcode/STU2.1/
STU Update Publication of HL7 FHIR Profile: Occupational Data for Health (ODH), Release 1.3: https://hl7.org/fhir/us/odh/STU1.3/
STU Publication of HL7 FHIR® Implementation Guide: Clinical Data Exchange (CDex), Release 1 STU 2 - US Realm: http://hl7.org/fhir/us/davinci-cdex/STU2/
STU Update Publication of HL7 FHIR® Implementation Guide: Vital Records Death Reporting (VRDR), Release 1 STU2.1 - US Realm: https://hl7.org/fhir/us/vrdr/STU2.1/
I have started publishing R5. Unlike the IGs, R5 is rather a big upload - it will take me a couple of days. In the meantime, you might find discontinuities and broken links on the site, and confusion between R4 and R5 as bits are copied up. Also you may find missing and broken redirects too. I will make another announcement once it's all uploaded
STU Publication of HL7 FHIR® Implementation Guide: International Patient Access (IPA), Release 1: http://hl7.org/fhir/uv/ipa/STU1
STU Publication of HL7 FHIR® Implementation Guide: Longitudinal Maternal & Infant Health Information for Research, Release 1 - US Realm: http://hl7.org/fhir/us/mihr/STU1/
STU Publication of HL7 FHIR® Implementation Guide: Patient Cost Transparency, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-pct/STU1
STU Publication of HL7 FHIR® Profile: Quality, Release 1 - US Realm (qicore) STU Release 5: http://hl7.org/fhir/us/qicore/STU5
Normative Publication of HL7 CDA® R2 Implementation Guide: Emergency Medical Services; Patient Care Report Release 3 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=438
STU Publication of HL7 Consumer Mobile Health Application Functional Framework (cMHAFF), Release 1, STU 2: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=476
STU Publication of HL7 FHIR® Implementation Guide: Data Segmentation for Privacy (DS4P), Release 1: http://hl7.org/fhir/uv/security-label-ds4p/STU1
STU Publication of HL7 FHIR® IG: SMART Application Launch Framework, Release 2.1: http://hl7.org/fhir/smart-app-launch/STU2.1
STU Publication of HL7 Version 2 Implementation Guide: Diagnostic Audiology Reporting, Release 1- US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=620
STU Publication of HL7 FHIR® R4 Implementation Guide: Clinical Study Schedule of Activities, Edition 1: http://hl7.org/fhir/uv/vulcan-schedule/STU1/
STU Update Publication of HL7 FHIR® Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports for Long Term Care Facilities (HAI-LTCF-FHIR), Release 1 STU 1.1 - US Realm: http://hl7.org/fhir/us/hai-ltcf/STU1.1
STU Publication of HL7 CDA® R2 Implementation Guide: Personal Advance Care Plan (PACP) Document, Edition 1, STU3 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=434
STU Publication of HL7 CDA® R2 Implementation Guide: Pharmacy Templates, Edition 1 STU Release 2: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=514
STU Publication of HL7 FHIR® R4 Implementation Guide: Single Institutional Review Board Project (sIRB), Edition 1- US Realm: http://hl7.org/fhir/us/sirb/STU1
STU Publication of HL7 CDA® R2 Implementation Guide: C-CDA Templates for Clinical Notes STU Companion Guide Release 4 - US Realm Standard for Trial Use May 2023: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=447
STU Publication of HL7 FHIR® US Core Implementation Guide STU6 Release 6.0.0: http://hl7.org/fhir/us/core/STU6
STU Publication of HL7/NCPDP FHIR® Implementation Guide: Specialty Medication Enrollment, Release 1 STU 2 - US Realm: http://hl7.org/fhir/us/specialty-rx/STU2/
STU Publication of Vulcan's HL7 FHIR® Implementation Guide: Retrieval of Real World Data for Clinical Research STU 1 - UV Realm: http://hl7.org/fhir/uv/vulcan-rwd/STU1
Version 6.1.0-snapshot1 of US Core, for public review of the forthcoming STU update to STU6 - US Realm: http://hl7.org/fhir/us/core/STU6.1-snapshot1
STU Publication of HL7 FHIR® Implementation Guide: Military Service History and Status, Release 1 - US Realm: http://hl7.org/fhir/us/military-service/STU1
STU Publication of HL7 FHIR® Implementation Guide: Identity Matching, Release 1 - US Realm: http://hl7.org/fhir/us/identity-matching/STU1
STU Publication of HL7 FHIR® Implementation Guide: Making Electronic Data More Available for Research and Public Health (MedMorph) Reference Architecture, Release 1- US Realm: http://hl7.org/fhir/us/medmorph/STU1/
STU Update Publication of HL7 CDA® R2 Implementation Guide: National Healthcare Safety Network (NHSN) Healthcare Associated Infection (HAI) Reports for Long Term Care Facilities (HAI-LTCF-CDA), Release 1, STU 1.1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=546
STU Update Publication of HL7 CDA® R2 Implementation Guide: C-CDA Templates for Clinical Notes Companion Guide, Release 4.1 STU - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=447
STU Update Publication of HL7 FHIR® US Core Implementation Guide STU6 Release 6.1.0: http://hl7.org/fhir/us/core/STU6.1
STU Publication of HL7 FHIR® Implementation Guide: Cancer Electronic Pathology Reporting, Release 1 - US Realm: https://hl7.org/fhir/us/cancer-reporting/STU1
STU Publication of HL7 FHIR Implementation Guide: Electronic Medicinal Product Information, Release 1: http://hl7.org/fhir/uv/emedicinal-product-info/STU1
Unballoted STU Update Publication of HL7 FHIR® Implementation Guide: SDOH Clinical Care, Release 2.1 - US Realm: http://hl7.org/fhir/us/sdoh-clinicalcare/STU2.1
STU Publication of HL7 FHIR® Implementation Guide: CodeX™ Radiation Therapy, Release 1- US Realm: http://hl7.org/fhir/us/codex-radiation-therapy/STU1
STU Publication of HL7 FHIR® Implementation Guide: US Public Health Profiles Library, Release 1 - US Realm: http://hl7.org/fhir/us/ph-library/STU1
STU Publication of HL7 FHIR® Implementation Guide: ICHOM Patient Centered Outcomes Measure Set for Breast Cancer, Edition 1: http://hl7.org/fhir/uv/ichom-breast-cancer/STU1
STU Publication of HL7 FHIR® Implementation Guide: Health Care Surveys Content, Release 1 - US Realm: http://hl7.org/fhir/us/health-care-surveys-reporting/STU1
STU Publication of HL7 FHIR® Implementation Guide: Physical Activity, Release 1 - US Realm: http://hl7.org/fhir/us/physical-activity/STU1
STU Publication of HL7 FHIR® Implementation Guide: Quality Measures, Release 1 STU4 - US Realm: http://hl7.org/fhir/us/cqfmeasures/STU4
Unballoted STU Update Publication of HL7 FHIR® Implementation Guide: Healthcare Associated Infection Reports, Release 1, STU 2.1 —US Realm: http://hl7.org/fhir/us/hai/STU2.1
STU Publication of HL7 Cross Paradigm Specification: Health Services Reference Architecture (HL7-HSRA), Edition 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=632
Errata publication of HL7 CDA® R2 Attachment Implementation Guide: Exchange of C-CDA Based Documents, Release 2 US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=464
Informative Publication of HL7 EHR-S FM R2.1 Functional Profile: Problem-Oriented Health Record (POHR) for Problem List Management (PLM), Edition 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=630
STU Publication of HL7 CDA R2 Implementation Guide: Gender Harmony - Sex and Gender representation, Edition 1 - Component of: HL7 Cross-Paradigm Implementation Guide: Gender Harmony - Sex and Gender representation, Edition 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=633
Informative Publication of HL7 Cross-paradigm Implementation Guide: Gender Harmony - Sex and Gender Representation, Edition 1: http://hl7.org/xprod/ig/uv/gender-harmony/informative1
STU Publication of HL7 FHIR® Implementation Guide: Data Exchange for Quality Measures, Edition 1 STU4 - US Realm: http://hl7.org/fhir/us/davinci-deqm/STU4
STU Publication of HL7 FHIR® Implementation Guide: Human Services Directory, Release 1 - US Realm: http://hl7.org/fhir/us/hsds/STU1
STU Update Publication of HL7 CDA® R2 Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports, Release 4, STU 2.1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=570
STU Update Publication of HL7 FHIR® Implementation Guide: Vital Records Common FHIR Profile Library R1.1: http://hl7.org/fhir/us/vr-common-library/STU1.1
Errata:
I wrongly wrote:
STU Publication of HL7 Cross-Product Implementation Guide: HL7 Cross Paradigm Implementation Guide: Gender Harmony - Sex and Gender Representation, Edition 1: http://hl7.org/xprod/ig/uv/gender-harmony/
This was a copy paste error on my part, sorry. This is an informative publication, not a trial-use publication
STU Update Publication of HL7 FHIR® Implementation Guide: Vital Records Birth and Fetal Death Reporting, Release 1.1: http://hl7.org/fhir/us/bfdr/STU1.1
STU Update Publication of Vital Records Death Reporting FHIR Implementation Guide, STU2.2 - US Realm: http://hl7.org/fhir/us/vrdr/STU2.2
STU Publication of HL7 FHIR® Implementation Guide: Coverage Requirements Discovery, Edition 2 - US Realm: http://hl7.org/fhir/us/davinci-crd/STU2
STU Publication of HL7 FHIR Implementation Guide: minimal Common Oncology Data Elements (mCODE) Release 1 STU 3 - US Realm: http://hl7.org/fhir/us/mcode/STU3
STU Publication of HL7 FHIR® Implementation Guide: Documentation Templates and Rules, Edition 2 - US Realm: http://hl7.org/fhir/us/davinci-dtr/STU2
STU Update Publication of HL7 CDA R2 Implementation Guide: Personal Advance Care Plan (PACP), Edition 1 STU 3.1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=434
STU Publication of HL7 FHIR® Implementation Guide: Protocols for Clinical Registry Extraction and Data Submission (CREDS), Release 1 - US Realm: http://hl7.org/fhir/us/registry-protocols/STU1
Informative Publication of HL7 Informative Document: Patient Contributed Data, Edition 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=638
STU Update Publication of HL7 FHIR® Implementation Guide: Medicolegal Death Investigation (MDI), Release 1.1 - US Realm: http://hl7.org/fhir/us/mdi/STU1.1
STU Publication of HL7 FHIR® Implementation Guide: Prior-Authorization Support (PAS), Edition 2 - US Realm: http://hl7.org/fhir/us/davinci-pas/STU2
FHIR Foundation Publication: HRSA 2023 Uniform Data System (UDS) Patient Level Submission (PLS) (UDS+) FHIR IG, Release 1- see http://fhir.org/guides/hrsa/uds-plus/
HL7 DK Publication: DK Core version 3.0 is now published at https://hl7.dk/fhir/core/index.html
STU Publication of Health Level Seven Arden Syntax for Medical Logic Systems, Edition 3.0: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=639
STU Publication of HL7 FHIR® Implementation Guide: Integrating the Healthcare Enterprise (IHE) Structured Data Capture/electronic Cancer Protocols on FHIR, Release 1- US Realm: http://hl7.org/fhir/uv/ihe-sdc-ecc/STU1
1st Draft Ballot of HL7 FHIR® R6: http://hl7.org/fhir/6.0.0-ballot1
Release of HL7 FHIR® Tooling IG (International): http://hl7.org/fhir/tools/0.1.0
Ballot for the next versions of the FHIR Extensions Pack (5.1.0-ballot1): http://hl7.org/fhir/extensions/5.1.0-ballot/
Ballot for CCDA 3.0.0: http://hl7.org/cda/us/ccda/2024Jan/
This is a particularly important milestone for the publishing process. Quoting from the specification itself:
Within HL7, since 2020, an initiative to develop the same underlying publication process tech stack across all HL7 standards has been underway. The intent is to provide the same look and feel, to leverage inherent validation and versioning, to ease annual updates, and to avoid the unwieldy word and pdf publication process. This publication of C-CDA R3.0 is the realization of that intent for the CDA product family.
Many people have contributed to this over a number of years, and while I'm hesitant to call attention to any particular individuals because of the certainty of missing some others who also deserve it, it would not have got across the line without a significant contribution from @Benjamin Flessner
Informative Publication of HL7 FHIR® Implementation Guide: Record Lifecycle Events (RLE), Edition 1: http://hl7.org/fhir/uv/ehrs-rle/Informative1
STU Update Publication of HL7 FHIR® Implementation Guide: Patient Cost Transparency, Release 1 - US Realm: http://hl7.org/fhir/us/davinci-pct/STU1.1
STU Publication of HL7 FHIR® Implementation Guide: PACIO Personal Functioning and Engagement, Release 1 - US Realm: http://hl7.org/fhir/us/pacio-pfe/STU1
STU Publication of HL7 FHIR® Implementation Guide: Payer Data Exchange (PDex), Release 2 - US Realm: http://hl7.org/fhir/us/davinci-pdex/STU2
STU Publication of HL7 FHIR® Implementation Guide: Member Attribution List, Edition 2- US Realm: http://hl7.org/fhir/us/davinci-atr/STU2
STU Publication of HL7 FHIR® Implementation Guide: PACIO Advance Directive Interoperability, Edition 1 - US Realm: http://hl7.org/fhir/us/pacio-adi/STU1
STU Publication of HL7 FHIR® R4 Implementation Guide: QI-Core, Edition 1.6 - US Realm: http://hl7.org/fhir/us/qicore/STU6
STU Update Publication of HL7 CDA® R2 Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports, Release 4, STU 2.2 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=570
Interim Snapshot 5.1.0-snapshot1 of the Extensions package (hl7.fhir.uv.extensions#5.1.0-snapshot1) has been published to support publication requests waiting for a new release of the extensions package @ http://hl7.org/fhir/extensions/5.1.0-snapshot1/
STU Publication of HL7 FHIR® Implementation Guide: C-CDA on FHIR, STU 1.2.0 - US Realm: http://hl7.org/fhir/us/ccda/STU1.2
STU Update Publication of HL7 CDA® R2 Implementation Guide: National Healthcare Safety Network (NHSN) Healthcare Associated Infection (HAI) Reports for Long Term Care Facilities (HAI-LTCF-CDA), Release 1, STU 1.2 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=546
STU Publication of HL7 CDS Hooks: Hook Library, Edition 1: https://cds-hooks.hl7.org/
STU Publication of HL7 FHIR® R5 Implementation Guide: Adverse Event Clinical Research, Edition 1: http://hl7.org/fhir/uv/ae-research-ig/STU1
STU Update Publication of HL7 FHIR® Implementation Guide: Digital Insurance Card, Release 1 - US Realm: http://hl7.org/fhir/us/insurance-card/STU1.1/
STU Publication of HL7 FHIR® R4 Implementation Guide: Adverse Event Clinical Research R4 Backport, Edition 1: http://hl7.org/fhir/uv/ae-research-backport-ig/STU1
STU Update Publication of HL7 FHIR® Implementation Guide: Central Cancer Registry Reporting Content IG, Edition 1- US Realm: https://hl7.org/fhir/us/cancer-reporting/STU1.0.1
STU Publication of HL7 FHIR® Implementation Guide: SMART Application Launch Framework, Release 2.2: http://hl7.org/fhir/smart-app-launch/STU2.2
STU Publication of HL7 FHIR® Implementation Guide: Pharmaceutical Quality (Industry), Edition 1: http://hl7.org/fhir/uv/pharm-quality/STU1
STU Publication of HL7 FHIR® US Core Implementation Guide STU 7 Release 7.0.0 - US Realm: http://hl7.org/fhir/us/core/STU7
STU Publication of HL7 Version 2.5.1 Implementation Guide: Laboratory Orders from EHR (LOI) Edition 5 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=152
STU Publication of HL7 Version 2.5.1 Implementation Guide: Laboratory Results Interface (LRI), Edition 5 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=279
Ok, a significant milestone has been reached with two new publications:
STU Publication of the HL7 FHIR® R4 Implementation Guide: Electronic Long-Term Services and Supports (eLTSS) Edition 1 STU2 - US Realm: http://hl7.org/fhir/us/eltss/STU2
STU Publication of HL7 CDA® R2 Implementation Guide: NHSN Healthcare Associated Infection (HAI) Reports for Antimicrobial Use in Long Term Care Facilities (AULTC), Edition 1.0, STU1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=646
STU Publication of HL7 FHIR® Implementation Guide: Central Cancer Registry Reporting Content IG, Edition 1- US Realm: http://hl7.org/fhir/us/central-cancer-registry-reporting/STU1
STU Publication of HL7 FHIR® Implementation Guide: Using CQL With FHIR, Edition 1: http://hl7.org/fhir/uv/cql/STU1
STU Publication of HL7 FHIR® Implementation Guide: Canonical Resource Management Infrastructure (CRMI), Edition 1: http://hl7.org/fhir/uv/crmi/STU1
STU Publication of HL7 FHIR® Implementation Guide: Value Based Performance Reporting (VBPR), Edition 1 - US Realm: http://hl7.org/fhir/us/davinci-vbpr/STU1
STU Update Publication of HL7 FHIR® R4 Implementation Guide: At-Home In-Vitro Test Report, Edition 1.1: http://hl7.org/fhir/us/home-lab-report/STU1.1
STU Publication of MCC eCare Plan Implementation Guide, Edition 1 - US Realm: http://hl7.org/fhir/us/mcc/STU1
Normative Reaffirmation Publication of HL7 Version 3 Standard: Event Publish & Subscribe Service Interface, Release 1 - US Realm and HL7 Version 3 Standard: Unified Communication Service Interface, Release 1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=390 or https://www.hl7.org/implement/standards/product_brief.cfm?product_id=388
Normative Reaffirmation Publication of HL7 Version 3 Standard: Regulated Studies - Annotated ECG, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=70
Normative Reaffirmation Publication of Health Level Seven Arden Syntax for Medical Logic Systems, Version 2.10: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=372
Normative Reaffirmation Publication of HL7 Healthcare Privacy and Security Classification System, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=345
Normative Reaffirmation Publication of HL7 EHR Clinical Research Functional Profile, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=16
Normative Reaffirmation Publication of HL7 EHR Child Health Functional Profile, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=15
Normative Reaffirmation Publication of HL7 Version 3 Standard: XML Implementation Technology Specification - Wire Format Compatible Release 1 Data Types, Release 1 and HL7 Version 3 Standard: XML Implementation Technology Specification - V3 Structures for Wire Format Compatible Release 1 Data Types, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=357 and https://www.hl7.org/implement/standards/product_brief.cfm?product_id=358
Normative Reaffirmation Publication of HL7 Version 3 Standard: Privacy, Access and Security Services; Security Labeling Service, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=360
Reaffirmation Publication of HL7 Version 3 Implementation Guide: Context-Aware Knowledge Retrieval Application (Infobutton), Release 4: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=22
Normative Publication of HL7 FHIR® Implementation Guide: FHIR Shorthand, Edition 3.0.0: http://hl7.org/fhir/uv/shorthand/N2
STU Publication Request for HL7 FHIR® Implementation Guide: Medication Risk Evaluation and Mitigation Strategies (REMS), Edition 1- US Realm: http://hl7.org/fhir/us/medication-rems/STU1
Normative Reaffirmation Publication of HL7 Cross-Paradigm Specification: FHIRPath, Release 1: http://hl7.org/FHIRPath/N2
STU Update Publication of HL7 FHIR® Implementation Guide: Security for Registration, Authentication, and Authorization (FAST), Edition 1 - US Realm: http://hl7.org/fhir/us/udap-security/STU1.1
Informative Publication of HL7 Guidance: AI/ML Data Lifecycle, Edition 1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=658
Unballoted STU Update of HL7 FHIR® Implementation Guide: SDOH Clinical Care, Release 2.2 - US Realm: http://hl7.org/fhir/us/sdoh-clinicalcare/STU2.2
Normative Publication of HL7 Clinical Document Architecture R2.0 Specification Online Navigation, Edition 2024: https://hl7.org/cda/stds/online-navigation/index.html
Normative Publication of Health Level Seven Standard Version 2.9.1 - An Application Protocol for Electronic Data Exchange in Healthcare Environments: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=649
STU Publication of HL7 FHIR® Implementation Guide: Vital Records Common Library, Edition 2 - US Realm: http://hl7.org/fhir/us/vr-common-library/STU2
Normative Retirement Publication of HL7 V3 Patient Registry R1, Person Registry R1, Personnel Management R1 and Scheduling R2: Patient Registry, Person Registry, Personnel Management and Scheduling.
STU Publication of HL7 FHIR® Implementation Guide: Vital Records Birth and Fetal Death Reporting, Edition 2 - US Realm: http://hl7.org/fhir/us/bfdr/STU2
STU Publication of HL7 FHIR® Implementation Guide: Prescription Drug Monitoring Program (PDMP), Edition 1 - US Realm: http://hl7.org/fhir/us/pdmp/STU1
Normative Retirement Publication of HL7 Version 3 Standard: Security and Privacy Ontology, Release 1: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=348
STU Publication of HL7 FHIR® Implementation Guide: Vital Records Death Reporting (VRDR), Edition 3 - US Realm: http://hl7.org/fhir/us/vrdr/STU3
STU Update Publication of HL7 FHIR® Implementation Guide: Personal Health Device (PHD), Release 1.1: http://hl7.org/fhir/uv/phd/STU1.1
STU Publication of HL7 CDA® R2 Implementation Guide: Healthcare Associated Infection Reports, Release 4, STU 3 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=570
STU Errata Publication of HL7 CDA® R2 Implementation Guide: Public Health Case Report - the Electronic Initial Case Report (eICR), Release 3.1.1 - US Realm: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=436
STU Errata Publication of HL7 FHIR® Implementation Guide: Electronic Case Reporting (eCR), Release 2.1.2 - US Realm: http://hl7.org/fhir/us/ecr/STU2.1
STU Publication of HL7 FHIR® Implementation Guide: Quality Measures, Edition 1 STU 5 - US Realm: http://hl7.org/fhir/us/cqfmeasures/STU5
Normative Retirement Publication of HL7 Service-Aware Interoperability Framework: Canonical Definition Specification, Release 2: https://www.hl7.org/implement/standards/product_brief.cfm?product_id=3
What is the latest on the story of generating Java data models for profiles?
(I checked fhir-codegen but I see it has this issue)
( blast from the past https://github.com/jkiddo/hapi-fhir-profile-converter )
@Vadim Peretokin I might have a colleague that would like to pitch in some effort
To the MS codegen project
works, but there's some open issues with it
What are those?
don't remember :-(
@Grahame Grieve and this class here https://github.com/hapifhir/org.hl7.fhir.core/blob/master/org.hl7.fhir.r5/src/test/java/org/hl7/fhir/r5/profiles/PETests.java illustrates how it can be used, correct? It isn't wrapped in any executable or something like that already, right?
that tests out the underlying engine.
I don't think it tests out the generated code itself
This is a great start!
I've played around with the code generation and found the following issues, sorted in priority:
Would you like me to file them so we can keep track? Both me and @Jens Villadsen agree this is something worth developing further, perhaps we can get some community traction on this :)
This is gonna be a fun ride!
ca.uhn.fhir.model.api.annotation.* are used in the generated results.
6 missed a 'not'. But you explained it in 8, so nvm
@Vadim Peretokin how to reproduce #2?
I'll have a look again. On the road atm, so it'll be in a few days. Thanks for checking it out
@Grahame Grieve try generate something for e.g. https://hl7.dk/fhir/core/StructureDefinition-dk-core-gln-identifier.html
also these slices: https://hl7.dk/fhir/core/StructureDefinition-dk-core-patient-definitions.html#diff_Patient.identifier
hmm ... wait ... I'll share some code that can reproduce it ...
I thought I had set up the validator to do the code generation, but I can't see that now
where ?
I didn't do it
but what does this have to do with the validator?
it has all the knowledge etc, so it can do the code generation
java -jar validator.jar -codegen -ig x -ig y -profiles a,b,c -output {dir}
mmmkay ...
never tried that
it doesn't work now. Cause I never did it
lol
I'll most likely do some wrapping of it as well and put it somewhere public
but it will be a few days
why not put it in the validator where everyone can use it?
separation of concerns
also ... I'd like to be using whatever libraries as I see fit
also ... I do not know the release cycle of the validator
rarely more than a week
but if the code produced fits into the validator then I'll gladly make a PR
it's the generation that goes in the validator, not the generated code
yes
("the code produced" -> the wrapping code that I'll be producing - not the generated code )
I also consider building it as a maven plugin
@Vadim Peretokin
polymorphic types not supported
is that:
Attempt to get children for an element that doesn't have a single type
?
the next version of the validator will generate code on request:
--
The easiest way to generate code is to use the FHIR Validator, which can generate java classes for profiles. Parameters:
-codegen -version r4 -ig hl7.fhir.dk.core#3.2.0 -profiles http://hl7.dk/fhir/core/StructureDefinition/dk-core-gln-identifier,http://hl7.dk/fhir/core/StructureDefinition/dk-core-patient -output /Users/grahamegrieve/temp/codegen -package-name org.hl7.fhir.test
Parameter Documentation:
Options
-option {name}: a code generation option, one of:
narrative: generate code for the resource narrative (recommended: don't - leave that for the native resource level)
and it fixes a couple of those problems, though I have no doubt there's plenty more work to do
that looks funky
nvm .... didn't use the version property before. It's kinda odd though. Why isn't that property automatically set, since the PECodeGenerator is already package specific?
it is now
its getting there now -> https://github.com/jkiddo/espresso
So far it supports R4 and R5 (it automatically detects the version), by default selects all profiles, and it works as a Maven plugin
It supports IGs from the registries as well as any IG that has a publicly available package.tgz file
and local files as well ofc
let me know if you find other generation issues
you can't have - in the naming. ENTERED-IN-ERROR, // "Entered in Error" = http://hl7.org/fhir/observation-status#entered-in-error
- all those annotations are not available in R4.
@Grahame Grieve I'm working from your 2024-10-gg-tx-server-auth branch so you can make your modifications there and I'll try it out right away
updated for the easy changes. Will try and get to the others tomorrow, but it will help if you say what you generated to get the errors
I'm taking all structuredefs from https://hl7.dk/fhir/core/3.2.0/
I can make the plugin generate an equivalent validator syntax
if that helps :see_no_evil:
You may wanna change the .'s in the enums to _ as well:
There also seems to be some funkiness when profiled datatypes are generated (such as https://hl7.dk/fhir/core/StructureDefinition-dk-core-gln-identifier.html) and their use in the parent resources
try again. what's 'funkiness'? I kind of like https://www.youtube.com/watch?v=uE-itlGNap4
This is the equivalent: java -jar validator_cli.jar -codegen -version 4.0.1 -ig hl7.fhir.dk.core#3.2.0 -output target/generated-sources/java -package-name org.hl7.fhir.example.generated -profiles http://hl7.dk/fhir/core/StructureDefinition-dk-core-patient,http://hl7.dk/fhir/core/StructureDefinition-NotFollowedAnymore,http://hl7.dk/fhir/core/StructureDefinition-dk-core-gln-identifier,http://hl7.dk/fhir/core/StructureDefinition-ConditionLastAssertedDate,http://hl7.dk/fhir/core/StructureDefinition-dk-core-basic-observation,http://hl7.dk/fhir/core/StructureDefinition-dk-core-observation,http://hl7.dk/fhir/core/StructureDefinition-dk-core-cpr-identifier,http://hl7.dk/fhir/core/StructureDefinition-dk-core-sor-identifier,http://hl7.dk/fhir/core/StructureDefinition-dk-core-cvr-identifier,http://hl7.dk/fhir/core/StructureDefinition-dk-core-condition,http://hl7.dk/fhir/core/StructureDefinition-dk-core-producent-id,http://hl7.dk/fhir/core/StructureDefinition-dk-core-related-person,http://hl7.dk/fhir/core/StructureDefinition-dk-core-authorization-identifier,http://hl7.dk/fhir/core/StructureDefinition-dk-core-kombit-org-identifier,http://hl7.dk/fhir/core/StructureDefinition-dk-core-practitioner,http://hl7.dk/fhir/core/StructureDefinition-dk-core-x-ecpr-identifier,http://hl7.dk/fhir/core/StructureDefinition-dk-core-d-ecpr-identifier,http://hl7.dk/fhir/core/StructureDefinition-dk-core-organization,http://hl7.dk/fhir/core/StructureDefinition-dk-core-municipalityCodes,http://hl7.dk/fhir/core/StructureDefinition-dk-core-RegionalSubDivisionCodes
when those generated classes are compilable then I'll stop bugging you ... for a while ...
:big_smile:
you can use -profile http://hl7.dk/fhir/core/* in the parameters now.
they should be compilable now
I don't see that you have made any commits that fix the reported issues - did you forget to push?
everything up to date.
yea ok ... let me be specific. The generated code does not compile due to the issues raised.
which issues - it compiles for me using the code in that branch
Yea ... It must be some local caching/deps resolution that doesn't work locally. I just tried out the validation.cli and it looks good so far.
Okay ... I found the issue ... https://github.com/hapifhir/org.hl7.fhir.core/blob/8bc1f493c01db4e819344de2267d34e29a073f69/org.hl7.fhir.validation/src/main/java/org/hl7/fhir/validation/cli/services/ValidationService.java#L856 is using the PECodeGenerator from the r5 package even though it's generating for R4. What's that about?
What's the PECodeGenerator in the r4 package good for then? Because the r4 instance is still generating classes that do not compile.
the purpose of it is to create dual work for me, and the opportunity to forget to keep it in sync.
Why not just delete it if it serves no genuine purpose?
Grahame Grieve said:
the purpose of it is to create dual work for me, and the opportunity to forget to keep it in sync.
Which falls short of the intention, which is combinatorial work for Grahame ;-)
no, that wasn't a serious answer. The real reason is that I think there'll be people who are just using R4, and would find using R5 to do the generation an unnecessary impost,
so I decided that I'd maintain all the Profile model code in both R4 and R5.
I got the sarcasm part (I'm a Dane, you know) but why would anyone care which package the code is in, if it does what it's supposed to?
... and a new finding:
var workerContext = SimpleWorkerContext.fromPackage(new FilesystemPackageCacheManager.Builder().build().loadPackage("hl7.fhir.dk.core","3.2.0"));
var dkCoreOrganization = new DkCoreOrganization().setEANID(new GLNIdentifier().setValue(UUID.randomUUID().toString()));
dkCoreOrganization.build(workerContext);
Try to run that piece of code from the generated samples ... while it compiles, it fails at runtime.
because of what you have to do to feed the code. It works for me to use the R5 code because the validator is entirely R5 internally, but that comes with a fair bit of work internally both in terms of having R4 <-> R5 conversions (which are part of the code) and also in terms of the work to set up the context.
org.hl7.fhir.exceptions.FHIRException: No children with the name 'EANID'
at org.hl7.fhir.r4.profilemodel.PEInstance.byName(PEInstance.java:149)
at org.hl7.fhir.r4.profilemodel.PEInstance.children(PEInstance.java:129)
at org.hl7.fhir.r4.profilemodel.PEInstance.clear(PEInstance.java:185)
at org.hl7.fhir.example.generated.DkCoreOrganization.save(DkCoreOrganization.java:277)
at org.hl7.fhir.example.generated.DkCoreOrganization.build(DkCoreOrganization.java:232)
at PluginTest.testDoStuff(PluginTest.java:41)
... and the example becomes a bit weird as there you actually have to use a SimpleWorkerContext from r4 ... :face_with_raised_eyebrow:
ok. I will investigate
But the question remains: Should I use the PEGenerator in the r4 package for R4 IGs or should I use the one from r5?
r4.
You are fully aware that the corrections you did on R5 you also need to do on R4 - in terms of enums and '-' in enums?
If you would like to see a new bunch of errors for R5 packages then you can run: java -jar validator_cli.jar -codegen -version R5 -ig hl7.fhir.uv.emedicinal-product-info#1.0.0 -output target/generated-sources/java -package-name org.hl7.fhir.example.generated -profiles http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-MedicinalProductDefinition-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-SubstanceDefinition-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-PackagedProductDefinition-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-ClinicalUseDefinition-indication-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-RegulatedAuthorization-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-ClinicalUseDefinition-interaction-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-Composition-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-Ingredient-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-ClinicalUseDefinition-warning-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-AdministrableProductDefinition-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-Bundle-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-ClinicalUseDefinition-contraindication-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-ManufacturedItemDefinition-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-Organization-uv-epi,http://hl7.org/fhir/uv/emedicinal-product-info/StructureDefinition-ClinicalUseDefinition-undesirableEffect-uv-epi
Also - I'm moving https://github.com/jkiddo/espresso to an HL7 repo if that's something the community would like to have :man_shrugging:
@Jens Villadsen compiles ok for me:
I'm moving https://github.com/jkiddo/espresso to an HL7 repo if thats something the community would like to have
no, it shouldn't be an HL7 repo, it should be in http://github.com/FHIR - I would welcome it there.
Okay - wasn't aware you made a new branch. With that, the r4 stuff works now. The R5 stuff however does not. Just tried again from the new branch and double-tried it with the validation CLI as well. Same result: end result does not compile.
that's all in the R5 base master - what I committed against.
I do think it compiles, but if you want to progress this, please be specific about what doesn't compile
which class is that in?
RegulatedAuthorizationUvEpi
BundleUvEpi
you don't see those issues locally?
weird. I can't generate that
definition not found
I don't know what that means
you can reproduce it by running https://github.com/jkiddo/espresso/blob/79537a53027cef10e12e87a5eac643851c7b3faa/src/test/java/PluginTest.java#L46
@Jens Villadsen https://github.com/hapifhir/org.hl7.fhir.core/pull/1797
you may wanna look into this error:
[ERROR] Failures:
[ERROR] DateTimeUtilTests.testToHumanDisplayLocalTimezone:76 expected: <04-Feb-2002 00:00:00> but was: <4 Feb 2002, 00:00:00>
[ERROR] DateTimeUtilTests.testToHumanDisplayLocalTimezone:76 expected: <04-Feb-2002 00:00:00> but was: <4 Feb 2002, 00:00:00>
[ERROR] DateTimeUtilTests.testToHumanDisplayLocalTimezone:76 expected: <04-Feb-2002 00:00:00> but was: <4 Feb 2002, 00:00:00>
[INFO]
[ERROR] Tests run: 724, Failures: 3, Errors: 0, Skipped: 5
@Grahame Grieve this is however worse:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.13.0:compile (java-compile) on project org.hl7.fhir.r4: Compilation failure
[ERROR] /Users/jkiddo/work/org.hl7.fhir.core/org.hl7.fhir.r4/src/main/java/org/hl7/fhir/r4/profilemodel/PEInstance.java:[301,36] error: cannot find symbol
[ERROR] symbol: class DataType
[ERROR] location: class PEInstance
[ERROR]
look into this error:
that's a JVM issue. It passes on the ci-build JVM. We don't know how to test that without becoming dependent on different responses from different JVMs
I can ignore the tests - I can't ignore compilation errors
Try again then
I'm sorry if my opinion won't please anyone. However, discussions in critical infrastructure areas are part of a continuous improvement process. I think code generation is a serious cybersecurity threat. Software literature, especially from Bertrand Meyer, should be considered regarding extension and modification practices.
I really can't understand why software development isn't based on science. Software is present in all technologies and yet it is developed in-house. There is no interoperability without considering Tenembaum's definitions.
@Grahame Grieve it sort of compiles now (gonna check the rest of the generation now) - after I removed some occurrences of DebugUtilities and some test code. I just realized none of the GitHub actions run on branches - is that intentional? This set of compilation issues could have been identified earlier if there were actions on branches.
I have no idea what you're talking about there? What actions?
but in this case, I committed it without checking and went kayaking
Github actions
they all run, I just didn't look at the outcome
None of https://github.com/hapifhir/org.hl7.fhir.core/actions seems to highlight the compilation issues I encountered
sorry ... that's me being blind
they did encounter the exact same issues
indeed. they only work when I get around to looking at them though
I was confused by the naming of the actions though ...
I would expect OWASP and License check to identify compilation issues
would != wouldn't
they do. I don't know why, I've never cared :grinning:
we are back to naming and cache invalidation then .... the two most important parts of software engineering ... :wink:
anyways ... R5 packages from a first glance seem to work fine ...
R4 has some issues ...
java.lang.AbstractMethodError: Receiver class org.hl7.fhir.r4.hapi.ctx.HapiWorkerContext does not define or inherit an implementation of the resolved method 'abstract java.util.List fetchResourcesByType(java.lang.Class)' of interface org.hl7.fhir.r4.context.IWorkerContext.
at org.hl7.fhir.r4.fhirpath.FHIRPathEngine.<init>(FHIRPathEngine.java:273)
at org.hl7.fhir.r4.fhirpath.FHIRPathEngine.<init>(FHIRPathEngine.java:266)
at org.hl7.fhir.r4.hapi.fluentpath.FhirPathR4.<init>(FhirPathR4.java:31)
at org.hl7.fhir.r4.hapi.ctx.FhirR4.createFhirPathExecutor(FhirR4.java:56)
at ca.uhn.fhir.context.FhirContext.newFhirPath(FhirContext.java:859)
at org.hl7.fhir.contrib.CodeGeneratorFactory.<init>(CodeGeneratorFactory.java:53)
this time not in the generated code though ... :thinking:
no, HAPI will need some changes in R4 to deal with the upgrades to FHIRPath that make the PE code work
I'm not the one responsible for that
Should I run a git blame on that?
on what? No, I added some routines to IWorkerContext, but someone else has to add them to the HAPI implementation
no blame there
No blame, no one responsible - okay
I didn't say no one. It's the HAPI team - they do it when they upgrade the version of core that underlies HAPI. If you want to accelerate that, maybe it's you
after some seriously janky class shadowing I got it working - so no compile-time issues. Now it's on to runtime issues - and signatures
Doing e.g. :
var patient = new DkCorePatient(context).setDEcpr(new DkCoreDeCprIdentifier().setSystem(DkCoreDeCprIdentifier.DkCoreDeCPRValueSet.URNOID122081761613).setValue(UUID.randomUUID().toString()));
var pat = patient.build(context);
results in
org.hl7.fhir.exceptions.DefinitionException: The discriminator path 'system' has no fixed value - this is not supported by the PEBuilder
at org.hl7.fhir.r4.profilemodel.PEBuilder.makeSliceExpression(PEBuilder.java:585)
at org.hl7.fhir.r4.profilemodel.PEDefinitionElement.fhirpath(PEDefinitionElement.java:74)
at org.hl7.fhir.r4.profilemodel.PEInstance.children(PEInstance.java:131)
at org.hl7.fhir.example.generated.DkCorePatient.load(DkCorePatient.java:165)
at org.hl7.fhir.example.generated.DkCorePatient.<init>(DkCorePatient.java:128)
and the syntax is odd. If you fed the DkCorePatient constructor the context, why would I need to pass it again in build?
if you fed the DkCorePatient constructor the context, why would I need to pass it again in build?
I don't know. There is a parameter-less constructor, but it's still an odd thing. I'm not sure I did that?
try my updated code, anyway
hmmm ... didn't seem to change a thing
you should have:
/**
* Build a instance of the underlying object based on this wrapping object
*
*/
public Patient build(IWorkerContext context) {
workerContext = context;
return build();
}
/**
* Build a instance of the underlying object based on this wrapping object
*
*/
public Patient build() {
Patient theThing = new Patient();
PEBuilder builder = new PEBuilder(workerContext, PEElementPropertiesPolicy.EXTENSION, true);
PEInstance tgt = builder.buildPEInstance(CANONICAL_URL, theThing);
save(tgt, false);
return theThing;
}
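(A small sketch of how the two build variants above are intended to relate - purely illustrative, reusing only names that appear earlier in this thread, i.e. the generated DkCorePatient wrapper and a worker context set up from the dk-core package:)
// assumes the generated DkCorePatient wrapper and an IWorkerContext as set up earlier in the thread
var wrapper = new DkCorePatient(workerContext);             // the constructor stores the context
Patient viaStoredContext = wrapper.build();                 // builds using the stored context
Patient viaExplicitContext = wrapper.build(workerContext);  // replaces the stored context, then calls build()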
yes ... thats what I also saw at https://github.com/hapifhir/org.hl7.fhir.core/pull/1797/commits/adaa3fa708b9ba2c387c54f07b2fd743f57d5859#diff-eba85889502469fe87a2c54ba10c17918c7a345345aa42c17e01a7f4494a7148 -
wait ... those classes are namespaced to R5 ...
Grahame Grieve said:
the purpose of it is to create dual work for me, and the opportunity to forget to keep it in sync.
are we back to this then?
(that quote is gonna haunt you like a cat chasing a mouse)
oh yes ... code signature is updated for R5 classes with the new build() method. No update on R4 ... :(
are we back to this then?
indeed. Updated
The update fixed the signature - all good. Still have the problem though:
org.hl7.fhir.exceptions.DefinitionException: The discriminator path 'system' has no fixed value - this is not supported by the PEBuilder
at org.hl7.fhir.r4.profilemodel.PEBuilder.makeSliceExpression(PEBuilder.java:592)
at org.hl7.fhir.r4.profilemodel.PEDefinitionElement.fhirpath(PEDefinitionElement.java:74)
at org.hl7.fhir.r4.profilemodel.PEInstance.children(PEInstance.java:131)
at org.hl7.fhir.example.generated.DkCorePatient.load(DkCorePatient.java:165)
at org.hl7.fhir.example.generated.DkCorePatient.<init>(DkCorePatient.java:128)
at PluginTest.testDefaultR4MojoGoal(PluginTest.java:44)
...
what's the code to reproduce that?
Jens Villadsen said:
var patient = new DkCorePatient(context).setDEcpr(new DkCoreDeCprIdentifier().setSystem(DkCoreDeCprIdentifier.DkCoreDeCPRValueSet.URNOID122081761613).setValue(UUID.randomUUID().toString())); var pat = patient.build(context);
results in
org.hl7.fhir.exceptions.DefinitionException: The discriminator path 'system' has no fixed value - this is not supported by the PEBuilder at org.hl7.fhir.r4.profilemodel.PEBuilder.makeSliceExpression(PEBuilder.java:585) at org.hl7.fhir.r4.profilemodel.PEDefinitionElement.fhirpath(PEDefinitionElement.java:74) at org.hl7.fhir.r4.profilemodel.PEInstance.children(PEInstance.java:131) at org.hl7.fhir.example.generated.DkCorePatient.load(DkCorePatient.java:165) at org.hl7.fhir.example.generated.DkCorePatient.<init>(DkCorePatient.java:128)
ok. how does that discriminator work?
From https://hl7.dk/fhir/core/3.2.0/StructureDefinition-dk-core-patient.html - I assume it's the identifier slicing here that's causing issues.
(I'm not near a computer)
Getting back into this - what is the best way to test the latest version of codegen?
best way... hmm. It's in this branch: https://github.com/hapifhir/org.hl7.fhir.core/tree/2024-11-gg-pe-code-gen-2
no fix for the runtime exception yet, correct?
no. why is it happening?
I'll see if I can find time this evening to have a look - but the stacktrace remains the same:
org.hl7.fhir.exceptions.DefinitionException: The discriminator path 'system' has no fixed value - this is not supported by the PEBuilder
at org.hl7.fhir.r4.profilemodel.PEBuilder.makeSliceExpression(PEBuilder.java:585)
at org.hl7.fhir.r4.profilemodel.PEDefinitionElement.fhirpath(PEDefinitionElement.java:74)
at org.hl7.fhir.r4.profilemodel.PEInstance.children(PEInstance.java:131)
Grahame Grieve said:
ok. how does that discriminator work?
by system ...
@Grahame Grieve So to me it seems like the FHIRPath construction goes wrong, as there's some cardinality check missing.
If your tests show that it's working then please point me to one
oh no it fails for me
I just haven't spent the time debugging it yet. Maybe you can?
well, I spent the time and :
The discriminator path 'system' has no fixed value - this is not supported by the PEBuilder
turns out that's exactly a statement of truth. It has no fixed value, because it has a binding. I added support for that (in r5)
Arh ... Now I understand!
Will it be backported?
sure
Seems like it works now ...
now time to throw it at the lions - aka. client side consumers :grimacing:
when's the 6.4.1 version of core expected to hit the rails?
new release sometime this week
Is there a specific reason why AuditEvent and Provenance don't have business identifiers?
I noticed their absence as I was working with a FHIR server that only supports conditional creates with identifiers. I.e., an identifier is the only thing it allows me to specify as the value for the ifNoneExist rule.
And yes, I'd like to only create a Provenance resource in the case it does not already exist.
There was not a compelling equivalent business artifact for which we needed identifier linkage. The guidance from modeling is that we include elements only where there is an 80% need, expecting that extensions are easy. Can you provide examples of business identifiers to non-FHIR artifacts? I don't think there is a compelling reason to reject. If we are still unclear, we could at least create a core extension so that everyone does it the same way.
An extension would not help in my case. My problem is that I cannot do a conditional create of a Provenance resource on a server (Google Healthcare API), as it only supports conditional creates based on an identifier.
then help us understand the "existing system" that has a business identifier that you need to place into AuditEvent.identifier or Provenance.identifier.
Use agent for “who”, activity or outcome for “what”.
The agent has a “who” that refers to a resource with an Identifier (Patient, Device, ...).
It’s like centralizing identifiers in the actors and not in the actions.
@Felipe Soriano is this a new topic? It does not seem to relate to the existing topic; for references, Provenance and AuditEvent do use the Reference datatype, which does have that support. This topic is about a root-level .identifier in Provenance and AuditEvent.
John Moehrke said:
Felipe Soriano is this a new topic? It does not seem to relate to the existing topic; for references, Provenance and AuditEvent do use the Reference datatype, which does have that support. This topic is about a root-level .identifier in Provenance and AuditEvent.
Hey @John Moehrke,
I think that both Resources contain reference properties to Resources with identifiers.
Whether access is from a user or from root is an implementation rule.
In my opinion, always start with row-level security. Root access doesn’t change the implementation.
Root access is an analytical action; user access is an operational action.
System operation can never go down.
What do you think about it?
tks
That is not a problem. That is not what this stream is discussing. This stream is about a business identifier for an AuditEvent instance, or Provenance instance.
BasedOn?
It's an action based on an event and actors, ok? For the event there is basedOn, for the actor there is agent.
For the 'what' resource there is entity
@John Moehrke I was thinking of something like this:
{
"resource": {
"resourceType": "Bundle",
"type": "transaction",
"entry": [
{
"request": {
"ifNoneExist": "identifier=bundle-test-observation",
"method": "POST",
"url": "Observation"
},
"fullUrl": "urn:uuid:eb09e61a-e0c9-41b7-a412-d36daa873665",
"resource": {
"resourceType": "Observation",
"identifier": [
{
"assigner": {
"display": "Sensotrend Oy",
"identifier": {
"system": "urn:ietf:rfc:3986",
"value": "https://www.sensotrend.com/"
}
},
"use": "official",
"system": "urn:ietf:rfc:3986",
"value": "urn:uuid:4e20b340-6477-592f-9d8e-bfb7395e61b9"
}
],
"status": "final",
"code": {
"coding": [
{
"code": "2344-0",
"display": "Glucose [Mass/volume] in Body fluid",
"system": "http://loinc.org"
}
],
"text": "Interstitial glucose"
}
}
},
{
"request": {
"ifNoneExist": "identifier=provenance-for-4e20b340-6477-592f-9d8e-bfb7395e61b9",
"method": "POST",
"url": "Provenance"
},
"fullUrl": "urn:uuid:9aa87028-6b8f-421f-9524-2e0ffac8f002",
"resource": {
"identifier": [
{
"assigner": {
"display": "Sensotrend Oy",
"identifier": {
"system": "urn:ietf:rfc:3986",
"value": "https://www.sensotrend.com/"
}
},
"use": "usual",
"value": "identifier=provenance-for-4e20b340-6477-592f-9d8e-bfb7395e61b9"
}
],
"agent": [
{
"type": {
"coding": [
{
"code": "assembler",
"display": "Assembler",
"system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type"
}
],
"text": "Assembler"
},
"who": {
"display": "My FHIR App"
}
}
],
"recorded": "2024-09-25T23:00:44.044+03:00",
"resourceType": "Provenance",
"target": [
{
"type": "Observation",
"reference": "Observation/urn:uuid:eb09e61a-e0c9-41b7-a412-d36daa873665"
}
]
}
}
]
}
}
The "existing system" being my FHIR app.
Mikael Rinnetmäki said:
John Moehrke I was thinking of something like this: (the transaction Bundle example above)
Hi @Mikael Rinnetmäki ,
In my opinion Provenance is a stronger resource. I think it's there to ensure digital signing, so the 'who' property is a physical person and not a simple log system.
IMHO!
Tks
Provenance.agent is any kind of agent, not limited to a physical person. This can definitely be a system or a Device. An example is when content is created by an AI algorithm running on a specific model. That can be defined in a Device and recorded as Provenance.agent.
@Mikael Rinnetmäki it sounds like a bizarre FHIR server
I don't think I understand the business need for identifiers on AuditEvents or Provenance
I mean - there is always the case where you use FHIR as a proxy in front of multiple existing systems (e.g. multiple existing legacy audit logging systems) and you need to represent those in a lossless, coherent setup, in which case identifiers could be used to label where the events originate from (given you don't use meta.source)
Then you need identifiers
But that is a general argument for having Identifiers on all resources
( which would feel wrong since there actually is a meta.source)
I think it is a perfectly valid optimization choice. See https://cloud.google.com/healthcare-api/docs/how-tos/fhir-resources#conditionally_create_a_fhir_resource
In the Cloud Healthcare API v1, conditional operations exclusively use the identifier search parameter, if it exists for the FHIR resource type, to determine which FHIR resources match a conditional search query.
We use uuid5 identifiers in conditional creates for observations, with all FHIR servers we work with. Just plain data deduplication.
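(A minimal sketch of how such a deterministic "uuid5" identifier could be derived from a stable source-record key, so that re-submitting the same record always yields the same Identifier.value for ifNoneExist to match on. The namespace and key below are illustrative only.)
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.UUID;

class NameBasedUuid {
    // RFC 4122 name-based (version 5) UUID: SHA-1 over the namespace UUID bytes followed by the name bytes
    static final UUID URL_NAMESPACE = UUID.fromString("6ba7b811-9dad-11d1-80b4-00c04fd430c8");

    static UUID uuid5(UUID namespace, String name) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(toBytes(namespace));
        sha1.update(name.getBytes(StandardCharsets.UTF_8));
        byte[] hash = sha1.digest();
        hash[6] = (byte) ((hash[6] & 0x0f) | 0x50); // version 5
        hash[8] = (byte) ((hash[8] & 0x3f) | 0x80); // RFC 4122 variant
        long msb = 0, lsb = 0;
        for (int i = 0; i < 8; i++) msb = (msb << 8) | (hash[i] & 0xff);
        for (int i = 8; i < 16; i++) lsb = (lsb << 8) | (hash[i] & 0xff);
        return new UUID(msb, lsb);
    }

    static byte[] toBytes(UUID u) {
        byte[] b = new byte[16];
        for (int i = 0; i < 8; i++) b[i] = (byte) (u.getMostSignificantBits() >>> (8 * (7 - i)));
        for (int i = 0; i < 8; i++) b[8 + i] = (byte) (u.getLeastSignificantBits() >>> (8 * (7 - i)));
        return b;
    }
}
// e.g. Identifier.value = "urn:uuid:" + NameBasedUuid.uuid5(NameBasedUuid.URL_NAMESPACE, "https://example.org/records/12345")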
Jens Villadsen said:
Mikael Rinnetmäki it sounds like a bizarre FHIR server
Perhaps @Paul Church cares to comment?
This is just one of the compromises that were necessary to reach the scale that we're operating at. The search index is not transactional with the primary storage (to keep the tail latencies down) so conditional ops on arbitrary search criteria aren't atomic, which makes them almost useless. We looked at what customers actually needed and this satisfied the most common use cases, so we built a transactional index just for identifiers.
Unfortunately this doesn't cover the resource types that don't have identifiers - we would certainly be in favour of having a repeated identifier field on all resource types (it would be really useful for this and also for Healthcare Data Engine's entity reconciliation) but that's a big change.
This is the one I was also thinking about. It seems like a very different reason than the one for which the .identifier element exists elsewhere. I would have preferred that the .src had been defined better, as that exists everywhere.
Using meta has its own kinds of problems: "The metadata about a resource. This is content in the resource that is maintained by the infrastructure. Changes to the content might not always be associated with version changes to the resource." - aka meta on historic versions may or may not reflect the actual historical metadata
maybe not so bizarre after all :wink: How many resource types do not have identifiers after all? ~5%?
Paul Church said:
This is just one of the compromises that were necessary to reach the scale that we're operating at. The search index is not transactional with the primary storage (to keep the tail latencies down) so conditional ops on arbitrary search criteria aren't atomic, which makes them almost useless. We looked at what customers actually needed and this satisfied the most common use cases, so we built a transactional index just for identifiers.
Unfortunately this doesn't cover the resource types that don't have identifiers - we would certainly be in favour of having a repeated identifier field on all resource types (it would be really useful for this and also for Healthcare Data Engine's entity reconciliation) but that's a big change.
I agree, the ID is agnostic data, so it should be controlled by the data store and not by the specification. The specification allows many methods for this.
@Paul Church just out of curiosity, would you perhaps be willing to consider supporting indices for resources without an identifier if a standard extension existed for them, for this purpose?
That is, if there were an identifierExtension specified to support deduplication of resource types that do not have an identifier.
My immediate use case is conditional create for the Provenance resource. Ideally in a batch Bundle. See also discussion in topic Bundles all the way down.
We have given this some thought - perhaps a Google-specific extension on resources that don't have an identifier field, and an "identifier" search parameter that uses this extension and also participates in the index for conditional operations. The details would be a bit tricky. So far this has not gone past the stage of idle speculation.
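(Purely illustrative: a minimal sketch of what carrying a business identifier in such an extension could look like, built with the core R4 model classes already used in this thread. The extension URL is a made-up placeholder - nothing like it has been defined by Google or HL7.)
import org.hl7.fhir.r4.model.Extension;
import org.hl7.fhir.r4.model.Identifier;
import org.hl7.fhir.r4.model.Provenance;

// hypothetical: attach an Identifier-valued extension to a resource type that has no identifier element
Provenance prov = new Provenance();
Extension ext = new Extension();
ext.setUrl("http://example.org/fhir/StructureDefinition/resource-identifier"); // placeholder URL
ext.setValue(new Identifier().setValue("provenance-for-4e20b340-6477-592f-9d8e-bfb7395e61b9"));
prov.addExtension(ext);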
Paul Church said:
We have given this some thought - perhaps a Google-specific extension on resources that don't have an identifier field, and an "identifier" search parameter that uses this extension and also participates in the index for conditional operations. The details would be a bit tricky. So far this has not gone past the stage of idle speculation.
Are the searches done directly against the element or using the SearchParameter specification? If it's the latter, you could create an identifier extension, then use an identifier SearchParameter to search by it, making it look like searching for AuditEvents is the same as with any other resource.
Actual searches are done using the SearchParameter spec (with some preprocessing but basically our built-in search and custom search are doing the same thing) but the transactional index used for conditional operations is completely different and doesn't work with custom search parameters.
Conceptually you are right but in this particular case, with our particular implementation details, defining that search parameter won't accomplish what Mikael wants.
Guys, will you allow me to propose an argument for this discussion?
How can a resource like this depend on an extension?
In other words, is it an unusable resource without the extension?
@Grahame Grieve ?
@Lloyd McKenzie ?
We are hitting a challenge related to the CGM use case where we want to allow clients to submit a bundle to the EHR with a whole set of linked data (e.g., parent observations together with their members; diagnostic reports; document references), while allowing the EHR to pick and choose which resources it wants to persist.
A batch submission to POST / seems to prohibit the inter-resource links, and a transaction submission seems to prohibit the EHR's picking and choosing.
Have others faced this issue? Of course we could define a custom operation but it seems like these semantics ("here is a set of data, keep what you want") would be broadly reusable.
That's called "Messaging"... don't reinvent that, it already exists. Server side orchestration.
If you don't want that, then it'd be an operation. There's no way to do this using out of the box REST.
Not out-of-the-box FHIR REST today, that's fair. But this gap seems arbitrary to me.
Well, 'client side orchestration' is a core concept of REST. FHIR introduced a workaround by adding a hack to support operations.
Note that a receiver doesn't have to accept all content of a FHIR resource, but deciding not to do anything with a resource posted to a server? That'd probably go a step too far (from a pure REST perspective).
You mean as transaction having behavior? That's also fair...
I think it is similar to messaging, but not quite the same either.
My first reaction was that we should add a generic operation on Bundle ($process?) that can be defined to take an arbitrary bundle for processing according to either a parameter (canonical) or key (profile).
But thinking on it, I think that is overall less good than just defining a specific operation for this use case. I like the simplicity of a single 'here is a bundle, do something with it' operation that could serve as a gateway, but I think it would cause implementation issues down the road.
So, my two cents would be to define a $process-cgm operation that takes in a collection bundle and does what you want with it.
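(Illustrative only: a rough sketch of what invoking such an operation might look like from a HAPI FHIR generic client. The operation name, input parameter name, and endpoint are assumptions taken from the suggestion above, not anything that has been defined.)
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.Parameters;

// a collection bundle holding the linked CGM data (observations, reports, document references, provenance, ...)
Bundle submission = new Bundle().setType(Bundle.BundleType.COLLECTION);
// ... add entries ...

FhirContext ctx = FhirContext.forR4();
IGenericClient client = ctx.newRestfulGenericClient("https://ehr.example.org/fhir"); // placeholder endpoint
Parameters outcome = client.operation()
    .onServer()
    .named("$process-cgm")                                  // hypothetical operation
    .withParameter(Parameters.class, "bundle", submission)  // hypothetical input parameter name
    .execute();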
Agreed, a custom operation for CGM is the least disruptive, most clearly FHIR compliant approach. Just want to make sure we're not missing the opportunity for reusable components.
And I realize I should have been more precise about the semantics here. It's not just "here's a set of resources, do what you want with them". Rather, it's "here's a set of resources; please persist them all... just like a transaction, but it's okay to skip things that you can't keep"
Yeah, it feels like we could also carve out something in transaction for 'successfully reviewed and not keeping this'.
Yeah, you can almost get there today with 202 responses to all the creates (but that implies async processing will occur, rather than "I already rejected")
Yeah, I thought about that too ;-). I think we would want something more specific though, so that the caller can understand what was actually persisted.
Is this a bit like a conditional create, but the condition depends on the server capabilities? e.g. an "if-supported" header
I think so. But more something of a union of "if-supported" and "if-wanted", along with a status code that would be successful for the transaction but indicate to the caller that it was not persisted.
No way to do that in a transaction though, without adding to Bundle.request
Yep - trying to sort out if there is something workable before filing a ticket to discuss further.
"If-you-feel-like-it" header :-)
Lol - I am going back and forth between a feature-capability header and an extension on Bundle.type to indicate it can be processed that way.
In either case, we need to figure out if there is a sensible response that can thread the needle on the existing behavior so that it is safe in all situations (e.g., client thinks it will + server does not, and vice versa).
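(Sketch of the extension flavour of that idea, with a made-up URL and made-up semantics - nothing like this exists in the spec today:)
import org.hl7.fhir.r4.model.BooleanType;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.Extension;

// hypothetical marker on Bundle.type: "persist what you can; it's okay to skip entries you won't keep"
Bundle txn = new Bundle().setType(Bundle.BundleType.TRANSACTION);
Extension okToSkip = new Extension();
okToSkip.setUrl("http://example.org/fhir/StructureDefinition/ok-to-skip-entries"); // placeholder URL
okToSkip.setValue(new BooleanType(true));
txn.getTypeElement().addExtension(okToSkip);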
you guys sure are afraid of "FHIR Messaging"...
an alternative is a History Bundle
I am not afraid of messaging, I just do not think this use case aligns with it more than it does with batch/transaction.
history bundles do not have processing semantics, so it would require a new operation anyway, and it could just use a collection at that point.
Could you elaborate on why this isn't messaging? When we get CGM on incoming v2 interfaces today, they come in the form of messages.
I don't think there are hard boundaries, but generally "write these data to your endpoint" is well covered by FHIR REST API. The rest semantics of identifying resources, learning about locations written to, being able to issue follow-up queries that read your own writes...all this is consistent with FHIR REST semantics.
a transaction submission seems to prohibit the EHR's picking and choosing.
what language?
For a transaction, servers SHALL either accept all actions (i.e. process each entry resulting in a 2xx or 3xx response code) and return an overall 200 OK, along with a response bundle (see below), or reject all resources and return an HTTP 400 or 500 type response.
I read this as "you can't decide to persist some and reject others" but if that's an over-read, this would be great to understand.
Perhaps we can interpret this as "servers SHALL either accept all actions permitted by business rules and return an overall 200 OK, or...."
I read this as "you can't decide to persist some and reject others" but if that's an over-read, this would be great to understand.
yes. that is what it means, and the definitions of update and create in the spec are clear too. The language does preclude your use case. And it does so deliberately. @René Spronk is right about the intention to be client driven. A transaction isn't something a server can opt out of getting right
As far as an operation, I'm inclined to agree with @Gino Canessa - trying to specify generically and reusably how a server would act if it declined to accept a resource that was referred to elsewhere in the set of submitted resources sounds like it would get super complicated long before it got sufficiently robust to be reliable, and so it's better to have a custom operation.
The server couldn't ignore the patient resource and patient matching issues, right?
As for "this is messaging, why are you scared of that" - it could be done with messaging, but you'd have to define an event with the same kind of details as if you defined an operation
It seems like messaging and operations are basically equivalent. So my vote would be messaging, because there are many thousands of humans who have a solid understanding of "messages". And thus I won't have to explain to all of them what a custom operation is. And then explain to them why we picked that instead of messaging.
It seems like messaging and operations are basically equivalent.
well, they're not when it comes to the implementation level. Unless you call $process-message, in which case they run into each other, but messaging gives you more layers of flex. (and mistakes)
So my vote would be messaging, because there are many thousands of humans who have a solid understanding of "messages".
Really? like, as in v2 messages? Because that might suggest to me that messaging isn't such a good idea. Because there are precisely 0 users who have a sense of this message type right now
And thus I won't have to explain to all of them what a custom operation is. And then explain to them why we picked that instead of messaging.
this seems like a non-argument since you'll have to explain to them (a) what a FHIR message is and (b) why we picked that instead of an operation
Unless deployment of FHIR Messaging is way more common in USA than I expected
Today, we have CGM data flowing into our EHR using HL7v2 messages (ORUs). So I think this message type is understood. We are just translating it into FHIR.
FHIR Messaging is somewhat common outside the USA. And CGM spec is UV realm.
But I guess flipping it the other way: what are the advantages an operation has over messaging? For this type of data, I don't see many.
I'm not sure what you are referring to with the flex/mistakes part.
those are better arguments, but an ORU message type? interesting.
@Josh Mandel the degree of specification of an R01 event is way short of the kind of spec you're talking about
And I will admit I am being selfish when I advocate for messaging. We have a set of existing tools, documentation, and training for our admins and analysts to work with FHIR Messaging. We don't have that for operations. But I don't expect this reason to overrule any conceptual arguments.
But I guess flipping it the other way: what are the advantages an operation has over messaging? For this type of data, I don't see many.
well, it's about the complexity of the transfer. You're right that operations and messages have similar outcomes. But an operation is a simpler interaction: send the server a request to perform some operation, and get a result back. Whereas messaging is 'send a message somewhere, where it might get changed/re-sent/replayed, and then wait to get a response back at some time'
I'm not sure what you are referring to with the flex/mistakes part.
Routing, messaging agents, loosely specified events. These are good for custom implementations, but they make scaling seamlessly hard
We have a set of existing tools, documentation, and training for our admins and analysts to work with FHIR Messaging
for FHIR messaging? Interesting. What are you currently using it for?
Yup - FHIR Messaging. I don't have the full list handy, but at least Norway and Denmark have some national specs that we have implemented.
ah Europe, ok.
Yeah, in US everyone still uses HL7v2, NCPDP, etc.
For what it's worth, we had some cases of having to import bundles using some special logic and we used $process-message - that was somewhat simpler for both clients and server than adding custom operations
Can you say more about "simpler" @Michele Mottini ? I haven't used messaging but at a glance it seems to introduce a lot more machinery (e.g., headers with sources/responses, event definitions, conformance challenges of describing which specific profile of bundle must be supplied for a given event code) compared with a dedicated operation (which can directly profile what type of bundle is to be submitted, with no additional machinery around it).
Simpler to implement - 'if the event is a, do x; if the event is b, do y' instead of having to create different operation endpoints
The profiling is not that complicated either? See for example https://build.fhir.org/ig/HL7/davinci-alerts/StructureDefinition-notifications-bundle.html
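(For comparison, a minimal sketch of the messaging shape being discussed; the event code and endpoints are placeholders, not anything the CGM project has defined:)
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.Coding;
import org.hl7.fhir.r4.model.MessageHeader;
import org.hl7.fhir.r4.model.Observation;
import org.hl7.fhir.r4.model.Reference;

MessageHeader header = new MessageHeader();
header.setEvent(new Coding()
    .setSystem("http://example.org/fhir/message-events")    // placeholder event system
    .setCode("cgm-data-submission"));                        // hypothetical event code
header.getSource().setEndpoint("https://cgm-app.example.org/fhir"); // placeholder sender endpoint
header.addFocus(new Reference("urn:uuid:0f1e2d3c-4b5a-6978-8796-a5b4c3d2e1f0"));

Observation cgmObservation = new Observation(); // the actual CGM data would go here
Bundle message = new Bundle().setType(Bundle.BundleType.MESSAGE);
message.addEntry().setFullUrl("urn:uuid:1a2b3c4d-5e6f-7081-92a3-b4c5d6e7f809").setResource(header);
message.addEntry().setFullUrl("urn:uuid:0f1e2d3c-4b5a-6978-8796-a5b4c3d2e1f0").setResource(cgmObservation);
// the message bundle would then be POSTed to [base]/$process-message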
Operations involve less overhead. You don't necessarily need a Bundle (or even a payload) if url filters are sufficient. Operations don't require specifying sender or receiver or worrying about reliable messaging or any of that stuff. To me, messaging is something that you only use if:
a) there's a need for routing (sender doesn't know the endpoint of the receiver, or there's a fan-out expectation)
b) the systems involved already prefer a messaging approach to REST because of existing infrastructure. (You can have a 'dumb' front-door endpoint that then routes a request to a specific handler, which is harder to do with operations.)
Generally, to meet real-world business rules, we need senders, receivers, and especially reliable delivery, so messaging ends up being easier than operations for this type of exchange. If you start with operations, then folks assume you don't need that stuff, then you end up doing extra work later to tell everyone that you actually do need that stuff and here is how you hack it into the operation.
It is almost more about expectations than anything else. With messaging, there is generally an expectation that if the data is generally fine, but something goes wrong (e.g., the chart is locked and data can't file), the server can manage a resequencing queue to eventually file the data when the lock is released. But with operations (or RESTful exchanges in general), the expectation is that the client has to do that. And often (in my experience) the clients are less well equipped to manage retries. And for things like locks, the best they can really do is retry regularly, since I don't know that many systems send subscription notifications based on lock releases.
Locks are just one example; in general, which system do we want to maintain the "workqueue" of messages that are basically fine but couldn't file due to some state issue? The default "expectations" seem to be Messaging = server maintains the workqueue and "operations" = client maintains the workqueue. This assignment isn't required, but it feels... natural?
Sender and receiver for operations would be handled by the auth layer, the same as any other REST action.
Not sure what you mean by “work queue”. Are you talking asynchronous? If so, then yes
If Epic says that A is better than B I think it would be a good idea to go with A
The work queue is needed when data is exchanged synchronously, but the persistence of that data is handled asynchronously. For example, a Message is sent from the CGM vendor to an EHR. The EHR checks the data and it all looks good, but it can't store the data to the chart right now, because the chart is locked (or various other reasons, but locking is a good example). The EHR synchronously responds to the CGM vendor saying that the data is accepted. At this point, the EHR is "promising" that the data will eventually be persisted to the chart. That message is then put on a queue. There are some automated processes that may take the data from the queue and persist it (for example, automatically when locks are released). But in some cases, there may be manual work involved to resolve the issue. For example, the patient may be locked because it is being unmerged, after which a human must decide which patient to file the message to. In this case, the message is sent to what we call a "work queue", that human analysts regularly review to ensure data makes it into the correct chart.
The alternative is that the EHR just returns an error to the CGM app, even if the data is good. But in my experience, many apps are not equipped or interested in taking on the responsibility for ensuring data eventually makes it into the EHR's chart. They don't know (and probably shouldn't need to know) what system state means the data can be persisted. And they probably don't know how to resolve the state issue. Some state issues like locks will usually resolve themselves eventually. But other issues may require human intervention in the EHR.
These are important considerations, but they didn't surface in our design discussion for the Argo CGM project until just now. Let's be sure to take these into account after the connectathon when we're evaluating how to proceed. On the flip side, we do want to give clients a clear way to tell whether (and at what location) their submitted data has been persisted.
I did get carried away on the general principles. For CGM data specifically, some of these issues aren't as critical, since the use case here doesn't involve doing critical alerting based on the data, and the data use expectations are more trending-focused, or for provider/patient discussion (where if there is missing data, they can just resolve it then and there).
How do you handle this for simple REST? Because the expectation is that if you return a 201 or 200 you could perform a ‘read’ a couple of seconds later and see the new data (as could anyone else who’s authorized)
This is one of the main reasons why we have a limited number of "write" REST APIs. But for the write APIs we do have, we'll return a 4xx code, and then it becomes the app's (or user's) problem. And those are the situations I'm referencing when I say "in my experience, many apps are not equipped or interested in taking on the responsibility".
We still often recommend folks use HL7v2 messaging instead of FHIR REST APIs for exactly this reason.
Somewhat related, but given that the HTTP part of the FHIR spec that covers batches and transactions is normative, is there documentation anywhere for how each part of that page got past the FMM4 "tested across scope" and FMM5 "5 production systems" criteria? Did those FMM criteria get evaluated for each section of the HTTP page?
I don't think we formally evaluated it at that level. At least, I don't remember it. We did look at the support for transactions; I recall asking how many systems had implemented them
Some experimental code that I did a long time ago (originated in DSTU2) to demonstrate implementing this
https://github.com/brianpos/fhir-net-api/blob/be638e6359d710e8bd949fce3c5d121af150f84c/src/Hl7.Fhir.Questionnaire/QuestionnaireProcessing.cs#L23
https://github.com/brianpos/fhir-net-api/blob/develop-r4-sqlonfhir-1.3/src/Hl7.Fhir.Questionnaire/StructureItem.cs
https://github.com/brianpos/fhir-net-api/blob/develop-r4-sqlonfhir-1.3/src/Hl7.Fhir.Questionnaire/StructureItemTree.cs
Was never merged back into the FirelySDK
Hello All,
I am new to SDC implementation and got stuck at the following point: how can we pass an existing resource reference to a newly extracted resource using Definition based extraction? It would be really helpful if there is a sample example for the same.
Thank you
To clarify, you're generating a new resource with extraction and then trying to create or update a resource that points to that new resource? I don't think we have a solution for that. Can you submit a Jira issue for us to try to tackle it as a new capability?
For instance, Patient is an existing resource. I created a new resource - Encounter - using Definition Based Extraction. Now, I need to pass the Patient reference in the Encounter (Encounter.subject)
Thank you so much @Lloyd McKenzie & @Brian Postlethwaite for the clarification. That really helps.
Sample:
{
"extension": [
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-initialExpression",
"valueExpression": {
"language": "text/fhirpath",
"expression": "'Patient/'+%LaunchPatient.id",
"name": "enc-subject"
}
},
{
"url": "http://hl7.org/fhir/StructureDefinition/questionnaire-hidden",
"valueBoolean": true
}
],
"linkId": "enc-subject",
"type": "string",
"text": "Subject",
"readOnly": true,
"definition": "http://hl7.org/fhir/us/core/StructureDefinition/us-core-encounter#Encounter.subject.reference"
}
Hi All, I am working on Pregnancy Questionnaire where I need to capture "Delivery Time" which can repeat in case of twins, triplet etc. For each Delivery Time, a new Observation should be extracted. Can someone review the below Questionnaire for the same (if that is the correct approach):
{
"resourceType": "Questionnaire",
"id": "ANCDELIVERY",
"extension": [
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-launchContext",
"extension": [
{
"url": "name",
"valueId": "LaunchPatient"
},
{
"url": "type",
"valueCode": "Patient"
},
{
"url": "description",
"valueString": "The patient that is to be used to pre-populate the form"
}
]
},
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-sourceQueries",
"valueReference": {
"reference": "#PrePopQuery"
}
}
],
"url": "http://sample.org/Questionnaire/ANCDELIVERY",
"version": "0.1.0",
"name": "ANCOYO Labour & Delivery",
"title": "ANCOYO Labour & Delivery",
"status": "active",
"experimental": false,
"date": "2022-08-25T09:00:00+05:30",
"description": "ANCOYO Labour & Delivery workflow.",
"subjectType": [
"Patient"
],
"item": [
{
"extension": [
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-itemExtractionContext",
"valueCode": "Observation"
}
],
"linkId": "11.2.2",
"type": "group",
"text": "Delivery Time",
"item": [
{
"linkId": "11.2.2.1",
"type": "time",
"text": "Delivery Time",
"definition": "http://hl7.org/fhir/StructureDefinition/Observation#Observation.valueTime",
"repeats": true
},
{
"extension": [
{
"url": "http://hl7.org/fhir/StructureDefinition/questionnaire-hidden",
"valueBoolean": true
}
],
"enableWhen": [
{
"question": "11.2.2.1",
"operator": "exists",
"answerBoolean": true
}
],
"linkId": "11.2.2.2",
"type": "group",
"text": "Delivery Time Code",
"definition": "http://hl7.org/fhir/StructureDefinition/Observation#Observation.code",
"item": [
{
"linkId": "11.2.2.2.1",
"type": "choice",
"text": "Delivery Time Code",
"definition": "http://hl7.org/fhir/StructureDefinition/Observation#Observation.code.coding",
"initial": [
{
"valueCoding": {
"code": "ANC.End.12",
"display": "Delivery time",
"system": "http://fhir.org/guides/who/anc-cds/CodeSystem/anc-custom-codes"
}
}
]
}
]
},
{
"extension": [
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-initialExpression",
"valueExpression": {
"language": "text/fhirpath",
"expression": "'Patient/'+%LaunchPatient.id",
"name": "11.2.2.3"
}
},
{
"url": "http://hl7.org/fhir/StructureDefinition/questionnaire-hidden",
"valueBoolean": true
}
],
"enableWhen": [
{
"question": "11.2.2.1",
"operator": "exists",
"answerBoolean": true
}
],
"linkId": "11.2.2.3",
"type": "string",
"text": "Subject",
"definition": "http://hl7.org/fhir/StructureDefinition/Observation#Observation.subject.reference"
},
{
"extension": [
{
"url": "http://hl7.org/fhir/StructureDefinition/questionnaire-hidden",
"valueBoolean": true
}
],
"enableWhen": [
{
"question": "11.2.2.1",
"operator": "exists",
"answerBoolean": true
}
],
"linkId": "11.2.2.4",
"type": "choice",
"text": "Status",
"definition": "http://hl7.org/fhir/StructureDefinition/Observation#Observation.status",
"initial": [
{
"valueCoding": {
"code": "final",
"display": "Final",
"system": "http://hl7.org/fhir/observation-status"
}
}
]
},
{
"extension": [
{
"url": "http://hl7.org/fhir/StructureDefinition/questionnaire-hidden",
"valueBoolean": true
}
],
"enableWhen": [
{
"question": "11.2.2.1",
"operator": "exists",
"answerBoolean": true
}
],
"linkId": "11.2.2.5",
"type": "choice",
"text": "Category",
"definition": "http://hl7.org/fhir/StructureDefinition/Observation#Observation.category",
"initial": [
{
"valueCoding": {
"code": "survey",
"display": "Survey",
"system": "http://terminology.hl7.org/CodeSystem/observation-category"
}
}
]
},
{
"extension": [
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-initialExpression",
"valueExpression": {
"language": "text/fhirpath",
"expression": "today()",
"name": "11.2.2.6"
}
},
{
"url": "http://hl7.org/fhir/StructureDefinition/questionnaire-hidden",
"valueBoolean": true
}
],
"enableWhen": [
{
"question": "11.2.2.1",
"operator": "exists",
"answerBoolean": true
}
],
"linkId": "11.2.2.6",
"type": "dateTime",
"text": "Effective DateTime",
"definition": "http://hl7.org/fhir/StructureDefinition/Observation#Observation.effectiveDateTime"
}
]
}
]
}
Is there a standard way to define the profile of the resource that is being extracted from a questionnaire using definition based extraction? I know that we can use a hidden item with an initial answer mapped to the Resource.meta.profile field , is this inline with the SDC standard?
cc: @Pallavi Ganorkar
Instead of specifying the observation profile canonical url, you could specify the canonical url of your custom profile.
Do you have one already for your case?
(I could try it out)
Currently, we didn't create our profile url.. that's why using the base url.
How can you have a profile without a profile URL?
Are you referring to this query?
Ankita Srivastava said:
Hi All, I am working on Pregnancy Questionnaire where I need to capture "Delivery Time" which can repeat in case of twins, triplet etc. For each Delivery Time, a new Observation should be extracted. Can someone review the below Questionnaire for the same (if that is the correct approach):
(same Questionnaire JSON as posted above)
@Ankita Srivastava, you had said "Currently, we didn't create our profile url.. that's why using the base url.". I'm not sure how you can have a profile without assigning it a url - I'm confused as to what you mean.
I am not using any profile for now.. just created a sample Questionnaire for Pregnancy using the base url. My query is related to multiple Observation extraction in case of repeated values for Delivery Time.
There are two different queries - one from my side and the other one from Kashyap. And somehow, it looks like they got mixed up.
Ah, sorry. Your thread got hijacked a bit by @Kashyap Jois.
You're referencing the contained resource #PrePopQueries, but you don't actually have a Bundle for that, and as far as I can tell, aren't doing any pre-population.
Other than that it looks right on initial scan, though I'm wondering why you're using expression-based extraction rather than Observation-based extraction. (The latter would avoid a whole lot of complexity.)
Ahh.. Thank you so much @Lloyd McKenzie for spotting the #PrePopQueries issue.. it's a copy-paste error :D.. will rectify it.
With regards to your second point, I agree that Observation-based extraction is much easier. However, the stakeholders have requested to use definition-based rather than Observation-based, hence I am following this approach.
My bad, I just thought it might be unnecessary to create a new topic. But I can just move my question to another topic
@Ankita Srivastava thanks for your example above, that's exactly what I needed, at least for part of what I'm trying to do. My other question has to do with defining a resource that may be generated within the Questionnaire.
For example, if I'm working with the SDOHCC Hunger Vital Signs Questionnaire, and someone is "at risk" of food insecurity, I'd like to generate a new Condition for this person for food insecurity. I'd like to have this predefined Condition, and then use two hidden questions (as per the example above) to extract the subject reference and the recordedDate. What I'm not sure is the best way to predefine the Condition. Could I put this in the contained field of the Questionnaire? Or would I have to create an empty Condition and then extract all of the fields I want to be completed from the Questionnaire (even the ones that are static and wouldn't change)?
SDOH generally recommends using StructureMap. SDC hasn't talked about the notion of using a 'contained' resource as a 'template' with extensions indicating certain "fill-in-the-blank" fields to be taken from the Questionnaire, but that's a really cool idea. It's certainly a lot more elegant than the current FHIRPath-based approach which requires using either hidden questions with fixed values, or a profile that defines everything. It'd also be a whole lot easier for people to wrap their heads around than StructureMap.
There's no SDC call this week, but if you wanted to submit a change request proposing this, we could possibly take it up at next week's call.
I'd be happy to. Because I'd like to use this for more than just SDOH (for instance, taking a history from a patient and creating new conditions).
@Ankita Srivastava has found a nice big gap in the definition part of the spec
https://jira.hl7.org/browse/FHIR-41508
The gist is that we're looking to define how to do a form using definition-based extraction that supports either new or existing content, and works for new patients and other new resources too.
e.g. Admission form that:
Hello All,
I am looking for some examples of "open-choice" items with Definition based extraction. Do we have some?
Also, do we have any reference implementation where I can check the resource extraction from a definition-based Questionnaire?
Looking for some recommendations. Kindly help.
Thank you.
@Brian Postlethwaite?
I don't have an example for that specific case, but I do know of a server that has at least partial support for definition-based extraction.
It does have some limitations, though. I believe that implementation has been shelved, and I haven't had a chance to get in and do another implementation based on the open code that I wrote over 6 years ago that is in the Firely SDK on a really old branch that never merged in.
https://sqlonfhir-r4.azurewebsites.net/fhir - Need to have both SD and Questionnaire loaded in order to $extract from a QuestionnaireResponse.
Thank you @Brian Postlethwaite for the reference link. I will check that out.
We have a use-case where we want to use Questionnaire grouping to semantically specify related question groups in a nested way:
* group A
* group B
* group C
Each group in this example is a set of questionnaire items that we want to extract -- e.g. extracting Obs A, Obs B, and Obs C.
The question is the scope of the itemExtractionContext: does it only regard direct child items in the group as part of that context, or does it nest? In other words, in my example, would adding itemExtractionContext to each of the A, B, and C groups make sense to specify 3 resources to extract?
In the SDC IG, we can read:
Note that only one context can be in play at the same time. When a new context is declared, it takes the place of the old context.
I understand that declaring a new itemExtractionContext in group B will replace the one declared in group A.
I don't know how it will manage this kind of nesting:
* group A (extract condition)
* item A.1 (condition.code)
* group B (extract observation)
* item B.1 (observation.code)
* item B.2 (observation.effectiveDate)
* item A.2 (condition.status)
Hope you are not in this case.
Nope, the nesting would be more like this:
* group A (extract condition1)
* item A.1 (condition1.code)
* item A.2 (condition1.status)
* group B (extract observation1)
* item B.1 (observation1.code)
* item B.2 (observation1.valueQuantity)
* item B.3 (observation1.effectiveDate)
* group C (extract observation2)
* item C.1 (observation2.code)
* item C.2 (observation2.valueBoolean)
* item C.3 (observation2.effectiveDate)
Thanks, I missed that bit about taking the place of the old context!
nicolas griffon said:
I don't know how it will manage this kind of nesting :
* group A (extract condition)
* item A.1 (condition.code)
* group B (extract observation)
* item B.1 (observation.code)
* item B.2 (observation.effectiveDate)
* item A.2 (condition.status)
Does anybody have an answer for this scenario ?
I guess the best idea is to avoid this kind of nesting but...
If you set the extraction context on Group B, that should theoretically work. (Whether it does in practice would involve testing the tools :>)
Thank you for your answer !
To summarize, the itemExtractionContext extension applies to the item it is declared on and to that item's descendants
(so the following example is not working:
* item A (extraction context : condition)
* item A.1 (definition: condition.code)
* item B (definition: condition.date)
)
Another way to say it is that:
For a definition attribute, the corresponding itemExtractionContext extension is necessarily on an ancestor item, on the item itself, or on the questionnaire root.
Am I right ?
That sounds correct to me
Do we always need itemExtractionContext? As I understand from the documentation, itemExtractionContext can be empty or missing. And with this structure I'll get two conditions:
* item A (extraction context : condition)
* item A.1 (definition: condition.code)
* item B (definition: condition.date)
One condition with the code and the other with the date
If you want a single Condition, you need to move your context further up the tree to the Questionnaire root or some other common parent of item A and B.
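e.g. a rough sketch of that (linkIds and element choices are just illustrative) - a single parent group carries the itemExtractionContext and both child items define elements of the same Condition:
{
  "extension": [
    {
      "url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-itemExtractionContext",
      "valueCode": "Condition"
    }
  ],
  "linkId": "condition-group",
  "type": "group",
  "item": [
    {
      "linkId": "item-A",
      "type": "choice",
      "text": "Condition code",
      "definition": "http://hl7.org/fhir/StructureDefinition/Condition#Condition.code"
    },
    {
      "linkId": "item-B",
      "type": "dateTime",
      "text": "Condition date",
      "definition": "http://hl7.org/fhir/StructureDefinition/Condition#Condition.recordedDate"
    }
  ]
}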
I understand that. Just needed to make sure I got this correct. I have a question about items without itemExtractionContext. So, I have this structure:
* group A
* item A.1 (definition: Patient.name.given)
* group B
* item B.1 (definition: Patient.name.given)
Groups do not have itemExtractionContext. Should we create two patients, or are we in the same context here? It's not well documented what constitutes a context change.
I think at the moment the behavior is undefined, but it should be. Can you submit a change request for us to clarify?
While implementing/writing a tutorial for this definition-based extraction I've found that, obviously, there isn't a 1..1 mapping from item type to data type in resources.
Hence I've created a table to try and define/describe the possible mappings between them and to indicate what could/should be supported.
https://1drv.ms/x/s!ApkGK_oT9urNvqJBhEwSPjNDb64txw?e=ASPisT
For the cells with a Y* or ?, there's some question or issue that might arise. Green cells are expected direct maps.
Happy to take some feedback on if this is valid/useful then see if it's something we should have in the IG.
Not a horrible idea, though it's a bit large. Can we discuss on our Thursday call?
Anyone tried extracting a contained resource in a definition model?
Outside using a profile to declare a slice for it.
That is a comprehensive spreadsheet. It seems like we could create recommendations based on the direction of mapping
Yes, this one is only looking at export, but we could also develop the same for prepop.
I have code for that too which I'd be able to put into the table. At least in the prepop direction you have more direct control via the fhirpath expression.
I have a blog post almost ready that we can preview on today's call if you're interested that walks right into this functionality.
The context selection step for update selection is the last part I need to understand/document/implement.
(and make suggestions for improvement...)
Part 1 in a blog series on SDC extraction!
https://brianpos.com/2024/10/31/extracting-fhir-data-from-completed-sdc-forms/
ok folks, these are the URLs for the extract extensions:
Are we ok with using these? (even though it's changing the name of the existing itemExtractionContext one)
Should we consider other URLs?
(I don't want to change obs or SM)
And could/should I put all those examples that I threw together into the spec too?
Definitely include the examples in the spec. They're quite useful
extractTemplate/extractTemplateValue or templateExtract/templateExtractValue
extractDefinition or definitionExtractValue etc...
(which would be consistent with observationExtract)
I prefer the extractXXX style, but a shame the others were already the other way around
I'm going with what is now above. Do argue for alternatives if you want something different (looks good to me)
A new thread to discuss the template based extraction
+1 on this. Just a question before reading the proposal: won't this be limited by the fact that contained resources can't contain other resources? Can I create a medicationrequest with a contained medication inside of it with this approach?
You can, as long as you include a reference to them in the resource. Which we will, to indicate what they are for.
That other contained resource would need to be pulled out too, but I'll update the docs to indicate to watch out for that while using the template.
I like the template approach! Do I understand correctly that the template resources need to be valid FHIR resources? Does this also mean we need to provide placeholder data for everything that will be replaced during the extraction process?
If you have an extension, you'll satisfy cardinality and invariants requiring that an element be 'present'.
Hi all,
Thank you @Brian Postlethwaite for a very nice proposal.
We definitely need a better/simpler way for extraction instead of FHIR Mapping language.
BTW how do you envision conditional logic and loops implementation?
I got similar ideas working on the extraction engine: https://github.com/beda-software/FHIRPathMappingLanguage/issues/4
FHIRPath expressions placed into the template at the appropriate spots for the parts that would be repeated.
I've done a comparison between the approaches including the template approach for a single Questionnaire.
https://hackmd.io/@brianpos/template-extract-examples (you do need to be signed in to view it)
I think it shows the differences between the approaches pretty well.
I'm now going to do another sample with a more complicated extraction.
@Grey Faulkenberry @Lloyd McKenzie @Ilya Beda @Jose Costa Teixeira @Axel Vanraes @Bas van den Heuvel @Paul Lynch @Halina Labikova
And here's a sample form that uses a quick POC I threw together.
https://dev.fhirpath-lab.com/Questionnaire/tester?tab=extract,csiro%20renderer&id=https%3A%2F%2Ffhir.forms-lab.com%2FQuestionnaire%2Fsigmoidoscopy-complication-casefeature-definition5&subject=Patient%2F45086382
Feel free to mess with it and try some others, there are highly likely issues, but at least this uses the 3 core extensions proposed in the way they are intended.
Also noting that I slightly changed the names compared to that in the proposal so it was clearer that they apply to the template technique.
Hmm, I found an issue with the template approach that needs some thoughts from others...
To be in the questionnaire it needs to have an id so that the item that "instantiates" it can reference it.
Assuming that we can't update a resource with this approach, I can just remove the value.
If we want to be able to do updates, we'll need another extension to cater for how to set the ids, etc.
We can't use the same approach as for all the other values, since the id property can not have an extension.
Put an extension on the base resource I guess?
Alternatively, the extension that says "Instantiate this template" could be complex and point to the template and (optionally) also specify the id for it.
I think I like that second option.
I'd rather have the extension in the contained resource to keep things together there.
(and I suspect most cases are just going to want to create new instances - so cleanest in that case)
And could even use the same extension for controlling fullURL stuff in definition base - which we haven't agreed on yet
One of the reasons for a difference is that the id doesn't just appear in the contained resource - it also appears in the URL of the 'PUT' and also drives whether we're doing a PUT or POST.
I think having an extension on the Questionnaire that says "extract from here to this resource location using this template" is reasonable.
Correct, hence why we have the issue in the template approach.
The templated resource MUST have an id so it can be referenced, so we need an extension to tell us what to do with it.
If we indicate that we can't do updates with the template approach, then this isn't a problem - always remove it and always POST - my leaning.
What I'm saying is that the id of the template is something like "templateFoo". When we instantiate the template, we grab the extension from where it's being used as a template.
We can definitely do updates with the template approach. We just need to specify the "target id" (if there is one) in the extension that invokes the template.
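Roughly what I'm picturing for that (the extension URL and the sub-extension names here are completely made up, just to show the shape): the invoking extension is complex, pointing at the contained template and optionally carrying the target id that would drive PUT vs POST.
{
  "url": "http://example.org/StructureDefinition/templateExtract",
  "extension": [
    { "url": "template", "valueReference": { "reference": "#conditionTemplate" } },
    { "url": "resourceId", "valueString": "existing-condition-id" }
  ]
}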
I'd rather have another extension in the contained resource than turn the "use this resource as the template" extension into a complex extension.
If we're going to add the template approach, we'll need to also solve the same issue as the definition approach has with referencing other resources in the output.
(and I think this is the same problem)
But we're not just deciding the id (which could make sense to be in the template), we're also deciding the URL and the REST action (which doesn't make sense to be in the template).
In theory, the same template could even be used from more than one place in the same Questionnaire with different behavior.
Which is the same problem we need to solve for definition - hence why I think it can be the same.
What's the issue with the invocation of the template being complex?
What are you doing for definition again?
We don't have a great story for that specific part yet - was likely another extension.
Setting the id was simple: just use the definition location in the resource, with a calculated expression etc. to populate it. But we have no story for the fullUrl, ifMatch, etc. - the other bundle.entry.request props.
@Lloyd McKenzie @Brian Postlethwaite
From my experience the extraction template is always a bundle.
So, my suggestion here is to restrict the template to be a bundle only.
In this case we have more granular control over urls and action verbs (PUT, POST, PATCH, etc.) and the issue with the template id is fixed.
There were 2 ways to go: one was to have a single template that is the bundle, and everything drives from there.
The other was a reference to each template for each resource in the items.
I took the latter approach.
This was consistent with the definition approach.
(and to be honest, I didn't try out the first)
I'm doing a complex example which has repeating sections and multiple resources.
Then I'll do one that has cross resource references - where the real fun starts.
I've also pushed a variation on those in my comparison to my github project SDC by example with minimal cases...
https://github.com/brianpos/sdc-by-example/tree/master/4.Extraction/comparison-simple-obs
Here you can find lots of SDC mappers: https://github.com/beda-software/fhir-emr/tree/master/resources/seeds/Mapping
They are implemented with JUTE, but I would like to migrate them to this new extraction engine.
Some of them are pretty complex. For example, this mapper https://github.com/beda-software/fhir-emr/blob/master/resources/seeds/Mapping/gad-7-extract.yaml creates an Observation and a Provenance and also does create-or-update.
If conditionId is provided, it updates the condition instead of creating a new one: https://github.com/beda-software/fhir-emr/blob/master/resources/seeds/Mapping/gad-7-extract.yaml#L51-L57
Here is another example, that updates a practitioner resource and creates a list of practitioner roles.
For each specialty selected in the form, a practitioner role is created: https://github.com/beda-software/fhir-emr/blob/021728163c10331f4c1506c5b8af9c5fa76ac702/resources/seeds/Mapping/practitioner-edit.yaml#L33C16-L71
@Brian Postlethwaite @Lloyd McKenzie
How do you envision handling these cases?
I'll take a look at your condition one; that should be easy enough with the revised definition approach.
There seems to be a typo in the FHIR SDC spec, specifically in the following statements (https://hl7.org/fhir/uv/sdc/modular.html):
"This portion of the SDC specification describes two mechanisms for enabling re-use:
In the first case, every single 'item' in the questionnaire must be specified, including all 'display' items, groups, etc. Re-use is limited to question text, value set, data type and other information that can be determined from the referenced definition element. On the other hand, with modular questionnaires, multiple items can be defined along with display text, enableWhen logic and other questionnaire characteristics. The first approach is best suited for "data-element"- based questionnaires and the latter for defining collections of questions. (While defining separate modules for every single question is possible, it would be quite a bit of overhead).
The two mechanisms are not mutually exclusive. It is possible to have a form that relies on sub-questionnaires and that also has some elements that rely on externally defined element definitions."
Based on the context, it appears that the first case and the first approach actually pertain to the Data Element-based Questionnaires mechanism, while the second case is related to the Modular Questionnaires mechanism. Thoughts?
It looks to me like the bullet points are reversed.
FHIR SDC Questionnaire.url states that the .url SHALL be globally unique. Currently our implementation is using a url of the format "[base]/ca-on-iar-ocan-questionnaire-template"
As more forms become available, it'll be challenging to establish uniqueness.
One possible solution/recommendation could be to append to the base url a guid (the same as the resource.id) that would be unique for each questionnaire, such as "[base]/fe16095b-9412-430d-8633-2b4e034be37c"
Does this approach sound correct? Thoughts?
Resource id on the 'source server' is an option, but note that the URL should remain the same as the Questionnaire moves from server to server, and the ids on other servers will almost certainly not be the same. (That's ok, just something to be aware of.)
@Radhika Verma can you submit a change request with respect to the bullet points?
Our Forms builder doesn't have a FHIR server as of now, so we are auto-generating a GUID when creating a template. In the future, if we import the template into a FHIR server and deploy it from one server to another, can the resource.id be preserved, or will the destination server override the resource.id?
Yes, will submit the change request for the bullet points
I did a writeup on canonical urls and versioning.
https://brianpos.com/2022/12/13/canonical-versioning-in-your-fhir-server/
The Questionnaire URL property and resource id are not the same. And the resource id is most likely different on each server the form is distributed to, and that's fine/normal, and why the canonical url exists - responses refer to the canonical url, not a reference to the resource id.
Reminder that I have a set of questionnaire definitions showing how to do $extract with each of the techniques here (comparing the diffs):
https://github.com/brianpos/sdc-by-example/blob/master/4.Extraction/comparison-simple-obs/readme.md
And another set here with some complex definitions! (no write-up on the diffs, but you can run all of these in the lab)
https://github.com/brianpos/sdc-by-example/tree/master/4.Extraction/comparison-complex
Keen to get more review on these to determine if we should be putting the template mode into the spec or not.
And also start to consider how we support update and setting the other bundle.entry.request properties.
I've also realized that if you want to include the meta.profile property, then the TEMPLATE also needs to include that, which I think in most cases it won't - so you will need to use the template fhirpath expression to populate it, which is not a big deal, but a bit of a gotcha.
I like the core idea of templates. Will try to review with some use-cases this weekend
Thanks. The approach has in principle been approved for inclusion in the SDC IG on today's SDC call. I'll be doing the spec write-up over the coming week with the proposed wording.
@Axel Vanraes do reach out with any questions here if you need to. I'll be watching closely this week.
Is "http://hl7.org/fhir/sid/ndc" the correct .coding.system to use for a ExplanationOfBenefit.item.productOrService.coding.code value that contains a CMS NDC identifier formatted as 11 digits with no spaces, hyphens or other characters as described here under "Other useful Information"? For example, is the following example a valid way to convey an NDC in a Pharmacy ExplanationOfBenefit.item.productOrService element?
<productOrService>
<coding>
<system value="http://hl7.org/fhir/sid/ndc"/>
<code value="55111078927"></code>
<display value="SEVELAMER CARBONATE"></display>
</coding>
</productOrService>
cc @Rob Hausam , @Grahame Grieve
That's not quite what https://terminology.hl7.org/NDC.html says. @Reuben Daniels can you follow up with the relevant committee to get those aligned?
Grahame Grieve said:
That's not quite what https://terminology.hl7.org/NDC.html says. Reuben Daniels can you follow up with the relevant committee to get those aligned?
So @Grahame Grieve , do you know if there is a different code system (coding.system value) that can be used if the source system only has the NDC formatted as 11 digits with no hyphens (e.g. 55111078927)?
I don't think that there is, or at least, if there is, I don't know about it
I believe that many payers only maintain/persist the 11 digits with no hyphens format in their source systems.
@MaryKay McDaniel and @Linda ,
I am curious what your thoughts are on expecting the hyphenated NDC format vs. the 11-digits-with-no-hyphens NDC format in ExplanationOfBenefit.item.productOrService.coding.code on retail pharmacy EOBs? Most of the retail pharmacy EOB data I have seen from payers only provides the 11-digit no-hyphens NDC format.
@David Riddle Some of this is duplicative of what is in the HL7 confluence page, but to highlight: use of the 11-digit NDC (5-4-2) is a regulatory requirement when NDC is utilized for HIPAA-covered transactions, which is why payer functions deal with 11-digit NDCs. The three sections denote the labeler-product-package segments. To do the conversion, one starts with the 10-digit code and creates the 11-digit form; going from 11 to 10 digits there is ambiguity. The 11-digit code always follows a 5-4-2 construction, so to go from an FDA NDC, which can be 4-4-2, 5-3-2 or 5-4-1, a leading zero is added to the short segment to construct an 11-digit 5-4-2 code.
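To make that padding rule concrete (just my own sketch, not normative guidance):
# Pad the short segment of a hyphenated 10-digit NDC to get the 11-digit 5-4-2 form.
def ndc10_to_ndc11(ndc10: str) -> str:
    """Convert a hyphenated 10-digit NDC (4-4-2, 5-3-2, or 5-4-1) to the
    11-digit 5-4-2 form with no hyphens, by left-padding the short segment."""
    labeler, product, package = ndc10.split("-")
    return labeler.zfill(5) + product.zfill(4) + package.zfill(2)

# e.g. ndc10_to_ndc11("0409-3718-01") returns "00409371801"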
It's also useful to know that individual healthcare organizations, primarily ones with inpatient pharmacies, mint custom NDC codes to manage in-house compounding and other Rx workflows, which are not FDA-valid NDCs. So if you get records with NDCs from a provider organization with an inpatient facility, it's quite possible you will see these custom facility NDCs...
FDA published a proposed rule in 2022 to move to 12 digits and resolve a lot of the problems that exist with the current 10 digit code. Final rule has been delayed. FDA will run out of 5 digit labeler codes in 10-15 years so eventually things will change.
Steven Kassakian said:
go from 11 to 10 digit there is ambiguity.
@Steven Kassakian ,
I think I am following everything you have described, and I am familiar with the ambiguity associated with attempting to move from the 11-digit (no hyphens) to the 10-digit format. That said, I believe you are saying that the payer must find a way to convert their source 11-digit (no hyphens) NDC values into the correct 10-digit NDC code in order for the ExplanationOfBenefit.item.productOrService.coding to conform to the C4BB required binding to the FDANDCOrCompound value set, correct? So even if there is a valid code system for an 11-digit (no hyphens) NDC code value, using an 11-digit NDC does not meet the requirements of the C4BB ExplanationOfBenefit-Pharmacy profile.
Do I have that right?
Hi @David Riddle
You may want to have a look at this thread which relates to NDC 10 and 11 codes:
#terminology > NDC 11 Codes
@Carol Macumber
@Grahame Grieve , @Rob Hausam and @Steven Kassakian ,
I apologize if I am being obtuse, but the fundamental question I am trying to get answered is whether or not 11-digit (no hyphens, spaces or other characters) NDC codes are included in the FDANDCOrCompound value set required by the C4BB-ExplanationOfBenefit-Pharmacy profile? Or put another way, since that value set includes all codes defined in http://hl7.org/fhir/sid/ndc, does the http://hl7.org/fhir/sid/ndc code system include 11-digit (no hyphens, spaces or other characters) NDC code values?
The '10 digit NDC code, with "-" included.' statement found here would seem to indicate that use of NDCs in HL7 is strictly limited to 10-digit codes with hyphens; however, that confuses me given the 'You should now be able to create value sets that contain only 11-digit or 10-digit codes, using the code-type property.' statement from @Rob Hausam here. I am not sure that I know/understand where to look for the 'code-type' property, and I can't seem to find any mention of it in the FDANDCOrCompound value set. So does that mean that value set includes both 11-digit and 10-digit codes?
cc @Reuben Daniels
I think we're trying to answer that question, but I at least don't know the answer
Grahame Grieve said:
I think we're trying to answer that question, but I at least don't know the answer
Thanks, @Grahame Grieve !
I wasn't sure if my question had been answered and I was just missing it.
Hi All
Interesting question to ponder first thing on a Friday :)
I walked through this with @Carmela Couderc and logically, to me (i.e., other TI Co-chairs should chime in if they feel differently @Reuben Daniels , @Jessica Bota , @Marc Duteau ), if the code system returns a valid response via $lookup and $validate-code, then a value set defined as "All codes in ____" would contain all valid code representations. In this case, one for each code-type.
lookup and validate-code against tx.fhir.org for 11-digit
http://tx.fhir.org/r4/CodeSystem/$lookup?system=http://hl7.org/fhir/sid/ndc&code=00409371801
https://tx.fhir.org/r4/CodeSystem/$validate-code?system=http://hl7.org/fhir/sid/ndc&code=00409371801
lookup and validate-code against tx.fhir.org for 10-digit
https://tx.fhir.org/r4/CodeSystem/$lookup?system=http://hl7.org/fhir/sid/ndc&code=0409-3718-01
https://tx.fhir.org/r4/CodeSystem/$validate-code?system=http://hl7.org/fhir/sid/ndc&code=0409-3718-01
But...this makes me think that further guidance/constraints on the use of code-type may be required. For example, if a code system supports code-type, should terminology servers be required to return that property in expansion details? Otherwise, it's not apparent that these two codes are, in fact, the same concept.
With regards to the expectations of the Carin IG, I'd say that if the intent is only to support one, or the other, the value set definition should be updated to filter on the code-type property to be either "10-digit" or "11-digit"
Note: There is an open ticket https://jira.hl7.org/browse/FHIR-44627 regarding alt identifiers and synonyms (as you'll note, the examples above include a synonym for the alternate), on the agenda for the Sept WGM Thursday morning joint FHIR-I and TI session
@Carol Macumber I think that in this case it would be a mistake to take tx.fhir.org as authoritative, since I only guessed when I did that implementation. If TI decided differently, I'd have to update my implementation
Grahame Grieve said:
Carol Macumber I think that in this case it would be a mistake to take tx.fhir.org as authoritative, since I only guessed when I did that implementation. If TI decided differently, I'd have to update my implementation
Fair enough, but it's all i've got to go off right now as far as expected behavior :)
@Grahame Grieve @Carol Macumber @Reuben Daniels Any chance that instead of deprecating property "Synonym" we could change the name to "Alternate-code"? If that is what it has always been used for, it might make the alignment of what you have in tx fit with what TI wants.
umm whoops. That part of the tx.fhir.org response hasn't been updated since we decided to dump 'synonym' as a committee. I think. I'll have to investigate
FWIW I added a hand-crafted example to the ticket FHIR-44627 to try to initiate some progress on unpicking the details and to ensure I'm understanding the proposal correctly.
Specific questions I have about the current proposal:
{
"resourceType": "CodeSystem",
"id": "alternate-codes",
"meta": {
"versionId": "1",
"lastUpdated": "2024-10-31T15:53:19.832+10:00"
},
"url": "https://example.com/CodeSystem/alternate-codes",
"version": "1.0",
"name": "Code_System_with_alternate_codes__synonym_codes_",
"title": "Code System with alternate codes (synonym codes)",
"status": "draft",
"experimental": true,
"caseSensitive": true,
"valueSet": "https://example.com/ValueSet/alternate-codes",
"compositional": false,
"content": "complete",
"count": 5,
"property": [
{
"code": "alternatePrimaryCode",
"uri": "http://hl7.org/fhir/concept-properties#alternatePrimaryCode",
"description": "This property contains an alternative code that may be used to identify this concept instead of the primary code",
"type": "code"
}
],
"concept": [
{
"code": "A",
"display": "The concept A",
"property": [
{
"code": "alternatePrimaryCode",
"valueCode": "B"
}
]
},
{
"code": "B",
"property": [
{
"code": "alternatePrimaryCode",
"valueCode": "A"
}
]
},
{
"code": "X",
"display": "The concept X",
"property": [
{
"code": "alternatePrimaryCode",
"valueCode": "Y"
},
{
"code": "alternatePrimaryCode",
"valueCode": "Z"
}
]
},
{
"code": "Y",
"property": [
{
"code": "alternatePrimaryCode",
"valueCode": "X"
},
{
"code": "alternatePrimaryCode",
"valueCode": "Z"
}
]
},
{
"code": "Z",
"property": [
{
"code": "alternatePrimaryCode",
"valueCode": "X"
},
{
"code": "alternatePrimaryCode",
"valueCode": "Y"
}
]
}
]
}
- How do I know which code is primary?
If there's a code that's the primary, that's an (additional) property of the primary code
- Is there always a single primary code?
no. unless otherwise stated they are siblings
- What if the concept entries have different display text?
So?
- Can they all include designations & properties?
yes
- If so, do they just get merged (aggregated, duplicates removed?)
no
- What are the rules for $expand, $lookup, $validate-code
the assertion of alternatePrimaryCode is a property that is reported in $lookup, but it does not change the behavior of $expand or $validate-code
The TI tracker call on November 18 will focus on FHIR-44627 - a proposal for dealing with alternate codes. The call starts at 5pm Eastern Standard Time (clocks change from daylight savings time this weekend). It would be great if folks on this thread can attend - but if not - please provide your thoughts in the ticket comments. Thanks.
there's an event in Australia that day - will have to see
Based on @Grahame Grieve's reply, I think my mental model for this is wrong.
My understanding of this was that we were providing a mechanism for multiple codes to be able to identify the same concept, so if we had codes "A" and "B" where "B" is an alternateCode for "A", then I would have expected a $lookup for "B" to return the same set of properties and designations as a $lookup for "A", because they identify the same concept?
Furthermore, if this concept has a property "kind" equal to "special", and I have a ValueSet with a filter <"kind" "=" "special">, then do I get "A" or "B" or both "A" and "B" in the expansion?
However, it seems like this is closer to just a convention for being able to indicate that "A" and "B" are "the same", but they may still have different displays, parents, properties, etc., although in most cases they would be identical (except for "alternateCode" itself) and it would be a mistake if they were not?
yes. we are not trying to indicate that they are the same concept, because that might not quite be true.
they are codes that are considered synonymous in some ways
e.g. NDC 10 digit and 11 digit codes are quite synonymous
but the 10 and 11 digit codes have different properties (whether they are 10 or 11 digit codes)
In that case the wording proposed in FHIR-44627 is VERY misleading:
"This property allow CodeSystem resource instances to represent multiple codes for a concept."
Would this also be appropriate for cases where codes are sometimes padded and sometimes not?
I agree that the definition could be improved.
I'm not sure about the padding. Is the padding binary? e.g. either padded one way, or not padded? If it's more complicated than that, then it's time for a grammar?
This property allow CodeSystem resource instances to represent multiple codes for a concept
I think this was what the proposal was originally intended to address - the case where there is a single concept, with multiple identifiers that need to be used to reference it (and that will be fully functional in FHIR terminology service operations). If it's something else where the two (or more) identifiers potentially refer to different concepts that aren't necessarily entirely equivalent, then probably that should be handled in ConceptMap? I'll plan to join the call on Nov 18.
the trick is what 'concept' means. Does it mean something kind of abstract, the same thing, or does it mean the definition identified by the code, and that the codes identify the same definition in the code system
I think the real question is not what a concept is, but rather is there only one or two (or more).
Re NDC 10 and 11, the length of code is a property of the code, not the concept
Re NDC 10 and 11, the length of code is a property of the code, not the concept
that's true, but we've always done NDC as two concepts with different codes, with different properties on the codes so value sets can choose which codes they want to describe
Yes, that's exactly what we did. But the idea of the two CodeSystem.concept instances (in the case of the NDC 10 and 11 digit codes) was that they were intended to represent the exact same logical "concept", but represented using the two different identifying codes (with both of the CodeSystem.concept instances being considered as exactly equivalent and totally interchangeable in regard to their meaning when used in terminology service operations or for any purpose). That was the intent for NDC (as a prototypic example). And I would tend to argue that it should also be true in any case where this is done, regardless of what code system it is done with.
@Rob Hausam I think that @Michael Lawley has put his finger on something that matters here:
with both of the CodeSystem.concept instances being considered as exactly equivalent and totally interchangeable in regard to their meaning
because they are not exactly equivalent at the code system level - they have different codes and properties.
they are the same logical concept, but that's not quite the same thing
I mostly get what you are saying, but I think we may be splitting hairs here somewhat - or describing the same thing from a different perspective using some different wording. Something like that. As what I was saying is that the two concepts (in the FHIR CodeSystem representation) are "exactly equivalent ... in regard to their meaning". Because having the different values for the code and the property that specifies why the codes are different doesn't alter the meaning of the concepts in the slightest degree. So I think that both of our statements are actually compatible.
:rolling_on_the_floor_laughing: "the same" but "not _exactly_ equivalent"
This is somewhat reminiscent of the distinction between "the thing" and "a representation of the thing" (eg JSON vs XML vs RDF representations of a FHIR Resource), but also "the SNOMED Concept identified by 1234567809" and "the FHIR representation of that concept".
So, there is an NDC concept, and there are the NDC10 and NDC11 representations of that concept, and then there is/are the FHIR representation(s) of the NDC concept. I think it's fine to have two representations, but we MUST be clear on things like:
[base]/CodeSystem/$subsumes?system=NDC&codeA=10digit&codeB=11digit
Yes - the return from $subsumes in that case should be equivalent
(assuming that 10digit and 11digit in this case are placeholders for the "real" codes).
And the system NDC is actually http://hl7.org/fhir/sid/ndc.
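i.e. something like this from $subsumes (a sketch using the standard output parameter; whether "equivalent" is the right answer is exactly the point being discussed):
{
  "resourceType": "Parameters",
  "parameter": [
    {
      "name": "outcome",
      "valueCode": "equivalent"
    }
  ]
}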
we may be splitting hairs here somewhat
that's true, but we have to when it comes to handling code systems correctly
Agree. As Michael said, we just need to be clear.
Items retrieved with _include and _revinclude are limited to 100. This is proving to be a serious limitation on requests for MedicationRequest with _revinclude=MedicationDispense in cases where a patient has multiple medications and/or daily dispensing regimes (e.g. Methadone scripts)...and the client wants to see it all (GOK why, but they do).
100 is enough to show them a page+ worth and you can then invoke a separate query to page through the whole set?
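e.g. (the search parameter name here is the standard MedicationDispense one, and the id is made up) the dispenses for a given request could then be paged separately:
GET [base]/MedicationDispense?prescription=MedicationRequest/example-id&_count=100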
We had to do the same thing in the Google API for _revinclude (if I recall correctly we don't limit _include) because there's just no conformant way to make the pagination make sense. There has to be a limit somewhere or _revinclude can produce pages of unbounded size, and there is no principled way to pick a limit that will work for everyone. We need to educate clients on what is or is not feasible in a single query. The real problem is that it's not easy for the client to figure out 1) that they hit the limit on a particular revinclude clause, and 2) exactly what separate query they should follow up with (especially with a mix of included and revincluded results).
I believe that the problem is that the _count setting only applies to the main target resource, not the inclusions, and the Server returns a 404 error if the included resources exceed the 100 max.
Ah. An error kind of sucks. A 200 with an embedded OperationOutcome with a warning that you're missing some would be better, though I guess we'd need to standardize the code to make that computable...
Oh, do they 404 it? We just truncate that specific set of revinclude results.
Standardizing an OperationOutcome (which could be in the search results as search.mode="outcome" although it looks like we deprecated that value?) would be useful so clients could have an interoperable way to detect this situation.
We could go beyond that to have the OO indicate a "next" link where more revinclude pages can be found but I don't think there's an obvious place to put that in an OO.
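For example, something like this as an entry in the search Bundle (the issue code and the use of search.mode "outcome" are just one possible shape, not an agreed convention):
{
  "resource": {
    "resourceType": "OperationOutcome",
    "issue": [
      {
        "severity": "warning",
        "code": "incomplete",
        "diagnostics": "_revinclude:MedicationDispense:prescription results truncated at 100 entries"
      }
    ]
  },
  "search": {
    "mode": "outcome"
  }
}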
A custom (everything type) operation that returned a Bundle of all medication resources relating to a patient would be ideal. At present the $everything operations require a Patient Resource to be included in the Bundle and we don't hold those in our Meds CDR.
If we are hitting issues with a CDR with just 2 years worth of MR and MD resources, I'll almost guarantee that others will hit this issue with Medication CDRs that hold more data relating to individual patients.
Lloyd McKenzie said:
Ah. An error kind of sucks. A 200 with an embedded OperationOutcome with a warning that you're missing some would be better, though I guess we'd need to standardize the code to make that computable...
That's the solution that our MDR provider is currently looking at implementing.
In the long term I would like to push the standard towards something like $graph to generalize the concept of $everything and get the pagination right.
My opinion: if you are trying to make a consumer that works with "any" FHIR server, don't use _include at all. 1) You get duplicate includes for each page of primary resources. 2) You might not get the includes at all (due to the limit).
Just retrieve the primary resources and call back for the distinct list of "includes" that you need. You have to be able to call back anyway (due to 2). Just make that your principal path.
If you do what @Daniel Venton suggests, then please do aggregated reference resolution. I.e., download all your initial content, then gather up all the reference URLs, de-duplicate them, and only then hit the server to get reference content. This takes a significant load off of the FHIR server as it prevents a large number of duplicate reference resolutions.
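A rough sketch of that pattern (assumptions: a generic base URL, plain fetch, forward references only, and no error handling or paging):
// Fetch the primary resources, then resolve each distinct reference exactly once.
const base = "https://example.org/fhir"; // placeholder server

async function getJson(url: string): Promise<any> {
  const res = await fetch(url, { headers: { Accept: "application/fhir+json" } });
  return res.json();
}

async function loadRequestsWithMedications(patientId: string) {
  const bundle = await getJson(`${base}/MedicationRequest?patient=${patientId}`);
  const requests = (bundle.entry ?? []).map((e: any) => e.resource);

  // Gather all reference URLs we care about, de-duplicate, then fetch each once.
  const refs = new Set<string>();
  for (const mr of requests) {
    const ref = mr.medicationReference?.reference;
    if (ref) refs.add(ref);
  }
  const referenced = await Promise.all([...refs].map((ref) => getJson(`${base}/${ref}`)));
  return { requests, referenced };
}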
_include should not really be an issue? There cannot be too many of those.
Quite a few resources have 0..* references but I don't have an idea of how many have more than 1. Biggest might be DiagnosticReport:result, depending on how many results are in the average DR.
Wearing my FHIR hat (not my MS hat =), I think that either erroring or not including any of the related content is probably safer than including partial and hoping that the client can detect and reconcile them.
In the case of partial results:
What about not including any of the _revIncluded resources in the bundle and removing _revInclude from the returned self link if you can't include all the results?
If you don't support _include/_revinclude (at all) then you wouldn't mention them in self. If you do support them, but in a limited way, a warning-level OperationOutcome in the response bundle would be called for: "watch it - not a full set of includes".
Removing from the self link is very interesting but I would also be concerned that the client can't tell the difference between "the server doesn't support this particular revinclude ever" and "the server had to ignore this particular revinclude on this particular query". I guess they could cross-reference with the capability statement?
That's not a conclusive source to determine _include and _revinclude support, certainly not at the level of 'supporting a particular kind of _revinclude in a particular search'. OperationOutcome in the response Bundle is likely the best approach.
René Spronk said:
OperationOutcome in the response Bundle is likely the best approach.
But completely useless in an automated way because there is no standard code/message that means "1 or more _include/_revinclude was not fully populated."
The OO only has meaning if you are a user - a power user who has some control over the queries being executed.
Nothing keeps you from proposing standardized error codes for these situations.
Yes, but that doesn't help with "current situation" -- and even if something were to be proposed and included in the FHIR standard it would probably take a long time to roll out to production FHIR servers (and even longer for EHR FHIR facades)
I'm experiencing inconsistent behavior with the Microsoft FHIR server when querying data for different lines of business (LOBs) and cities. The API works fine for one LOB and city but fails for another, and I'm trying to understand the root cause of this issue. Here are the details:
URL used: chmmd.xxx.com/provider/Location?address-city=Annapolis&address-state=MD&_revinclude=PractitionerRole:location&_count=100.
_include and _revinclude parameters: the _include and _revinclude parameters are limited to 100 results. While querying for MAPD everything works as expected, but CHPMD only works ok with a smaller _count value, or with smaller cities (e.g. we see the problem with Annapolis MD). @Brendan Kowitz @Mikael Weaver
My suggestion is don't use _include and _revinclude unless you absolutely know that the included resource count is very small.
Instead get the primary resources you want and call back for the distinct include resources you need JIT.
Even if the query works, you are not guaranteed to get all the include resources. If you do get all the include resources, you might get multiple copies as they'll be on every page of primary resources.
Hi @Daniel Venton @Mikael Weaver @Brendan Kowitz
"I believe that if we don't use _include
and _revinclude
in our query, we encounter 429 errors for the consumer. Previously, the consumer wasn't using _include
and _revinclude
, which resulted in 429 errors. To avoid these errors and optimize the query, we considered using _include
and _revinclude
.
Do you have any suggestions for optimizing the query while keeping these points in mind?"
Thank you!
So if you execute a bunch of simple requests the server rate limits you.
If you try to consolidate queries (so you issue one, but the server acts like many) then you don't actually get the data.
Sounds like you need to convince the server to raise your limit OR put a choke on your code so you don't cross your limit.
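On the "choke" side, something as crude as the sketch below can keep a batch client under a rate limit (the 10-per-second budget is an arbitrary example - the real threshold is whatever your server or contract says):
// Naive client-side throttle: never start more than `perSecond` requests in any one-second window.
function makeThrottledFetch(perSecond: number) {
  let windowStart = Date.now();
  let used = 0;
  return async (url: string, init?: RequestInit): Promise<Response> => {
    if (Date.now() - windowStart >= 1000) { windowStart = Date.now(); used = 0; }
    if (used >= perSecond) {
      // Wait out the remainder of the current window before starting another request.
      await new Promise((r) => setTimeout(r, 1000 - (Date.now() - windowStart)));
      windowStart = Date.now();
      used = 0;
    }
    used++;
    return fetch(url, init);
  };
}

const throttledFetch = makeThrottledFetch(10); // stay well under the server's 429 threshold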
_includes are an area where Azure API for FHIR and Health Data Services differ a lot. In API for FHIR which is Cosmos based, the server internally will do the GET operations to fill the included items in the Bundle. Health Data Services will use SQL to join them in a single query.
If you are getting 429s in API for FHIR you can always increase the RUs which will give you more throughput, this is self-service configuration.
HDS might be better to open a support ticket so we can see what might be going on perf wise, these databases run in big elastic-pools so you get some amount of scaling built in.
Thank you @Brendan Kowitz @Daniel Venton - yes, we raised the RUs on the fhir server last time when we saw 429 errors, and considered using _revinclude to optimize the resources. But it looks like we have to live with more RUs and not use _revinclude.
In API for FHIR, _revinclude does do an additional query to find the reference indexes, then proceeds with using GETs to pull them into the bundle, this could contribute to some additional RU usage over _include
This is the standard extension for linking a fhirpath expression back to a source library value
http://hl7.org/fhir/extension-cqf-library.html
(and yes I plan to use the versioned canonical to be able to manage updates to the definition and prompt for propogation/updates)
@Paul Lynch
Does anyone know about this error? Could not resolve context name Patient in model System.
vivek kacham said:
Does anyone know about this error? Could not resolve context name Patient in model System.
I suggest you start a new thread/topic. Also, when you do that, could you provide some context about when you are seeing that error?
Brian Postlethwaite said:
This is the standard extension for linking a fhirpath expression back to a source library value
http://hl7.org/fhir/extension-cqf-library.html
(and yes I plan to use the versioned canonical to be able to manage updates to the definition and prompt for propogation/updates)
Paul Lynch
So, you can put that extension on an item (or maybe in a particular expression extension?), and it points to a "Library" resource, which has a list of Attachments, which I suppose would contain the FHIRPath in the "data" field and maybe indicate that the data is FHIRPath via the Attachment's content type. However, I don't see a way to say which attachment's FHIRPath should be run.
Each library instance is one expression.
I'm using it to be a reference source for where the original expression came from, but I clone it into the Questionnaire definition so no need to look anything up at runtime.
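For anyone following along, a single-expression Library in that style would look roughly like the sketch below. Hedged: the id, canonical, and the text/fhirpath content type are assumptions for illustration, not Brian's actual resource (his live examples are on fhirpath-lab.com).
// One FHIRPath expression, wrapped in a Library with the expression base64-encoded
// in Library.content (Buffer is the Node.js way to do the encoding).
const patientCityLibrary = {
  resourceType: "Library",
  id: "extract-expr-patient-address-city", // hypothetical id
  url: "http://example.org/fhir/Library/extract-expr-patient-address-city",
  status: "active",
  type: { coding: [{ system: "http://terminology.hl7.org/CodeSystem/library-type", code: "logic-library" }] },
  description: "Extracts the city from the patient's first address.",
  content: [{
    contentType: "text/fhirpath", // assumed mime type for a raw FHIRPath expression
    data: Buffer.from("Patient.address.first().city").toString("base64"),
  }],
};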
I need clarification on the following points.
1) what's the difference between answerExpression and candidateExpression.
2) With the maturity level being 0, is it worth implementing answerExpression?
It'll be great if anyone can provide some clarity.
answerExpression defines the allowed options. You can't choose anything else. candidateExpression provides an initial starter list of data from the record that represents a candidate answer that could be chosen, but doesn't constrain you from choosing/specifying other things.
Same answer as with all maturities - if it's useful to you, use it. However, expect that low maturity artifacts may be rough around the edges and are more likely to evolve in future releases.
@Lloyd McKenzie Thanks for this.
We are trying to figure out the best way to handle the following scenario where you have a set of dynamic dropdown lists. For example: The first is: Select Country - Based on the response given, the next dropdown of list of States (for that country) would be populated and then based on the states a Region list would be populated - https://github.com/google/android-fhir/issues/979
The values for each of the "lists" would be different ValueSets with answerOptions that can be referenced via uri
We were thinking initially that this would be an example of answerExpression. But now we are wondering whether this could be accomplished using candidateExpression?
How would you approach this?
Thanks.
answerExpression would be most appropriate. CandidateExpression is more of a set of guidance that appears beside the control where you capture possible answers and gives you data from the EHR to select from.
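A rough sketch of the country/state case, since it keeps coming up (hedged: the linkIds, the per-country ValueSet URL pattern, and the separate country item are invented; the extension URL and the {{...}} FHIRPath substitution inside an x-fhir-query come from SDC):
// Questionnaire item whose answer list is re-expanded based on the selected country.
const stateItem = {
  linkId: "state",
  text: "State / province",
  type: "choice",
  extension: [{
    url: "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-answerExpression",
    valueExpression: {
      language: "application/x-fhir-query",
      // {{...}} embeds a FHIRPath that pulls the chosen country code out of the
      // in-progress QuestionnaireResponse; the ValueSet naming scheme is hypothetical.
      expression: "ValueSet/$expand?url=http://example.org/fhir/ValueSet/states-" +
        "{{%resource.item.where(linkId='country').answer.value.code}}",
    },
  }],
};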
This specific issue comes up fairly regularly, we should have some examples to cover this.
@Paul Lynch got any?
(would be nice for my DevDays content too)
Brian Postlethwaite said:
This specific issue comes up fairly regularly, we should have some examples to cover this.
Paul Lynch got any?
The RxTerms demo (https://rxterms.nlm.nih.gov/) uses an answerExpression for the strength list. (See https://lforms-fhir.nlm.nih.gov/baseR4/Questionnaire/rxterms for the definition.)
That would be using the fhir query yeah? Not terminology fhirpath calls?
Brian Postlethwaite said:
Each library instance is one expression.
Getting back to an old discussion... Shouldn't there be a way to put more than one FHIRPath expression into a library, so that a set of expressions could be shared between Questionnaires with a single (or smaller number of) Library resources?
We don't have a syntax for 'naming' FHIRPath expressions in a single file.
Maybe an extension should be added? It seems very awkward to create a Library resource that contains an Attachment that contains a (base-64 encoded!) FHIRPath expression, just for a single expression.
It's not really an extension, it's the format of the 'data' in the Library
We'd have to define a new mime type for some sort of text file that contains 'named' FHIRPath expressions
The 'data' is in the Attachment, not the Library. I was thinking each FHIRPath expression could be its own Attachment, which could have an extension to name it.
Personally not a fan. Then tweaks to a single expression cause more rippled change consequences. And searching becomes less pleasant too.
My "library" can currently let you find expressions that use a function via the search.
Happy to demo it next week if you like.
The only stuff in it is the fhirpath unit test content though.
Maybe the term library is the problem.
What I want IS a single expression per item and locate them correctly. Having more in it wouldn't serve my needs.
In my use case the "library" is the entire collection, and the fhir Library resource is an item in the collection.
Different cases?
It just seems like a lot wrappers around an expression. I was trying to remove one of those layers.
The expression is the granularity I need. With a container resource for its descriptive metadata. And be able to search for it individually.
This is what is linked via the cqlExpressionLibrary extension which could then help managing updates? Though I've done nothing about that.
I expect a more common pattern will be to maintain population and extraction logic in data element libraries, in which case you'll be using StructureDefinition instead of Library, and that'll be less overhead. Tend to agree with Brian that if you want a bunch of FHIRPath expressions in a single Library, may as well just call it CQL.
Lloyd McKenzie said:
I expect a more common pattern will be to maintain population and extraction logic in data element libraries, in which case you'll be using StructureDefinition instead of Library, and that'll be less overhead. Tend to agree with Brian that if you want a bunch of FHIRPath expressions in a single Library, may as well just call it CQL.
I don't think I've seen an example or documentation for storing FHIRPath expressions in a StructureDefinition. What would that look like?
You'd put the extensions on an ElementDefinition. Most of the extension's scopes have expanded to include that.
Take a look here: https://build.fhir.org/ig/HL7/sdc/StructureDefinition-sdc-question-library.profile.xml.html
Topic for a call I think.
I came up with this sort of thing
https://dev.fhirpath-lab.com/Library/extract-expr-PatientAddressCity
Where I use the library resources related artifact to specify the inputs the expression needs, and use context to say where the expression is expected to be used.
UI doesn't show it, check the json expander.
The logical model thing looks like it's more about the questions than the expressions.
That is not as bulky as I thought it would be.
I'm just working on my presentation for the SQL on FHIR webinar, and I've found an aspect of my implementation that could be improved if there's a new feature in ViewDefinition. At present, the ViewDefinition nominates the resource type on which it's based. And my expression checker checks the expressions against the base resource definition.
That check includes a particular check that's quite useful: checking whether the expression for a column value can return more than one value, and comparing that to the definition of the column
But that check would be more useful if the checker was informed by a profile on the resource. E.g. the base resource says that the base category element can repeat, but a profile says that it can only be 0..1 not 0..*.
The author knows that the profile applies, so the warning is not correct. But the machinery could get this right if, as well as the resource type, the view definition could also nominate a profile for the resources it applies to.
I don't propose that the applicable profile acts as a filter like in a .where clause - that's computationally difficult, even if an implementer can link in validation like that. Just that the profile acts as a context for semantic checking of the definition, based on the assumption that the implementation will ensure that the profile is met somehow
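To make that concrete, here's a sketch of what such a ViewDefinition could look like - with the caveat that the element name ("profile" here) and whether it repeats are exactly the open questions below, and the profile URL is invented:
// ViewDefinition declaring the profile its expressions were statically checked against.
// With a profile that constrains Observation.category to 0..1, a checker could accept
// the category.text column as single-valued without a "this path may repeat" warning.
const observationView = {
  resourceType: "ViewDefinition",
  name: "observation_flat",
  status: "draft",
  resource: "Observation",
  profile: ["http://example.org/fhir/StructureDefinition/single-category-observation"], // proposed element
  select: [{
    column: [
      { name: "id", path: "getResourceKey()" },
      { name: "category_text", path: "category.text", collection: false },
    ],
  }],
};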
Good idea. I think we discussed that an author can attach a list of profiles to a view definition to communicate intention, but we didn't agree on extra semantics for it, like filtering or a hint context for fhirpath.
I imagine that IG authors can complement profiles with flat views and eventually "queries" and publish them as part of an IG ;)
interesting. I'll think about that
I've added a GH ticket for tracking: https://github.com/FHIR/sql-on-fhir-v2/issues/265
+1 to this idea -- adding an optional field (resourceProfile?) to ViewDefinition for this is simple enough and adds clarity to the definition as well as improved validation, as Grahame points out.
(singular I hope)
Why singular? If my view works with (say) 3 different profiles, I'm thinking listing multiple profiles (with "any-of" semantics) would be straightforward.
well, multiple is beyond my skill for FHIRPath evaluation right now
I'm missing where/how this is involved in fhirpath evaluation for you. I was assuming you have pre-calculated a set of profiles for each of your resources and you'd run these views only on resources that match the profile(s) specified in this new slot.
If there were 3 specified profiles, you could just loop over them, similar to how you loop over all the view definitions.
the driver for me is using the profile to do static evaluation of the path expressions. I do that now, and so does @Brian Postlethwaite. I don't know how to do that for multiple profiles, and I presume that was Brian's point.
It doesn't make any difference for the actual execution
You can still treat 3 profiles in one ViewDefinition the same as 1 profile in each of three ViewDefinitions, no?
That check includes a particular check that's quite useful: checking whether the expression for a column value can return more than one value, and comparing that to the definition of the column
You'd check if under any of the 3 named profiles the expression can return more than one value. It's still just a loop over the profiles.
sounds conceptually simple, but sure, I could triplicate all the errors I find. I didn't say it's impossible, just that I don't know how to do it nicely right now. Nor did I say your thought isn't valid
Would the assumption be that ALL profiles apply, or ANY profiles apply.
The difference being that the static analysis would need to check for each property that at least 1 profile satisfies it, or that ALL pass.
with "any-of" semantics
i.e. you provide a list of profiles such that "a resource that matches any of these profiles is a candidate for this view"
I'm not sure that would be so simple either. Will think about it more.
I like the idea of multiple profiles with any-of semantic - "profile": [ a, b, c]
Whether it should be treated as a "where" filter is a separate question!
Brian Postlethwaite said:
I'm not sure that would be so simple either. Will think about it more.
Claude notes
Basic semantic could be: this view was designed with these profiles in mind!
Grahame Grieve said:
But that check would be more useful if the checker was informed by a profile on the resource. E.g. the base resource says that the base category element can repeat, but a profile says that it can only be 0..1 not 0..*.
Wouldn't this be achievable by simply adding a first() to the FHIRPath expression? I am trying to understand cases where adding a profile field to ViewDefinition is required and cannot be replaced by more constrained FHIRPath expressions.
first() semantics are potentially lossy. The goal here is to use an expression that is obviously not lossy, but also to justify the fact that it would only ever return one result by static analysis based on profiles.
Bashir, you are correct that the .first() can resolve the issue; however, the purpose of the profile for GG and me is to be able to check that these have adequately been covered.
E.g. Only real properties have been selected, and that end up with appropriate cardinalities via static analysis.
Thanks @Josh Mandel and @Brian Postlethwaite for the clarifications. So is it fair to say that the proposed profile field has no impact on the evaluation/application of a ViewDefinition on an actual resource and is ONLY used for static analysis? If yes, in the above cardinality/collection example, if there is no first() and the collection is not set to true, I suppose a view-runner should fail (or return no value for that column) when the VD is applied on a resource that does not adhere to the target profile.
My reading of this is that this proposal is not imposing any additional requirements on a "runner", it is simply providing additional information so that a "validator" can do its job more precisely. It also serves to improve documentation and communication of intent.
A runner can be completely unaware of the resource definition or any declared profiles.
We can get both if authors add "first" as a safeguard, but smart runners would be able to prove that the transformation is not lossy for these specific expressions ;)
This is a good feature for smart runners - informing you that a transformation can be lossy! $Validate?lossy=false
Right; actually I think I should have been more clear in my last message: my point was that with first(), applying the VD on a resource that does not comply with the target profile would be lossy but does _not fail_. Maybe this is preferred over the option without first() (which will/should fail). I agree that adding the VD profile field for documentation/validation is a good idea.
I've opened a pull request to add this: https://github.com/FHIR/sql-on-fhir-v2/pull/267
John Grimes 🐙 said:
I've opened a pull request to add this: https://github.com/FHIR/sql-on-fhir-v2/pull/267
Maybe VD.profile? Or does it conflict with something?
(I have been told to ask in this channel after I posted in the implementer channel.)
Hi, I have a question regarding the use of Auth0 for SMART on FHIR EHR launches.
The client app uses the fhirclient.js library, but it seems that fhirclient.js cannot retrieve the correct Bearer-type access token from Auth0. In Postman, I can access FHIR data using the token retrieved by fhirclient.js if I select the ‘JWT Bearer’ option rather than the ‘Bearer Token’ option. As a result, the fhirclient.js process cannot access the FHIR server due to the incorrect token.
Can anyone confirm if this is a known issue? Are there any specific configurations or parameters required for this setup?
From my experience, when using the OAuth2 authentication type in Postman, I need to include an “audience” parameter in the Auth Request to retrieve a valid Bearer token. However, I haven’t found a place in fhirclient.js where this “audience” parameter can be set. I even tried modifying the library to include this parameter for testing, but without success.
Hi! Let's break this down step by step. First, it would be helpful if you could provide some additional context about your setup:
SMART EHR Launch specifically uses an authorization code flow that requires "Bearer" tokens. The SMART App Launch specification requires the authorization server to issue "Bearer" tokens from its token endpoint. If Auth0 is configured to issue a different token format, this could be causing your issues.
Note: I notice some potential confusion in your description - "JWT Bearer" in my experience refers to a method for authenticating or authorizing with the token endpoint (RFC 7523), not a token format used for accessing the FHIR server itself.
The audience parameter (aud): the aud parameter is indeed required in the authorization request, but you shouldn't need to configure it separately in fhirclient.js. Here's why: fhirclient.js automatically passes the aud value to the authorize endpoint, with aud = FHIR endpoint URL.
Hi Josh thanks for the response!
For JWT Bearer, it's something in Postman, please check the screenshot.
Screenshot 2024-10-25 at 09.06.06.png
The incorrect token I got from the fhirclient.js process can be used with the JWT Bearer option in Postman to access data from the fhir endpoint.
I can also get such a token if I don't set up the audience in Postman.
Screenshot 2024-10-25 at 10.03.11.png
However, if I tick this audience setup, I can get the correct Bearer token and access the fhir data.
In terms of audience in fhirclient.js, I understand it's the fhir endpoint, but I don't think aud is used in the token request unless the private key is provided, check this code screenshot
Screenshot 2024-10-25 at 09.58.33.png
SMART EHR Launch which is open source.
Can you share a link to the specific EHR you're trying to launch against? I'm not sure precisely what you mean.
The screenshot you are showing above relating to JWT bearer has to do with authentication for a token API endpoint request, not for a resource API interaction. I think you may be making a wrong assumption somewhere but I am still not sure what exactly you are doing.
https://github.com/aehrc/smart-ehr-launcher is the open source app I use for testing
"The screenshot you are showing above relating to JWT bearer has to do with authentication for a token API endpoint request, not for a resource API interaction. I think you may be making a wrong assumption somewhere but I am still not sure what exactly you are doing."
Yes it is, I just used Postman to test the token retrieved via fhirclient.
Ah. I'm not familiar with this tool but if https://github.com/aehrc/smart-ehr-launcher is using auth0 under the hood, that configuration could be part of the issue.
@Sean Fong is listed as the developer -- Sean, does any of this ring a bell?
This EHR launcher is from EHR app side, I think the problem is fhirclient.js (client app) and the auth server side.
I am pretty sure some other auth servers work with fhirclient, since it's been out there for a while; my question is about Auth0 in particular. Not sure if anyone else has experienced something similar.
I'm sorry, I still can't understand the setup you're using. Can you provide a minimal reproducible example? (https://stackoverflow.com/help/minimal-reproducible-example has some background.)
Sorry I am still pretty new to SMART on FHIR, using some examples to learn how to make it happen and debug through fhirclient to check out the exact steps. The client app I have been testing is https://github.com/aehrc/smart-forms.
Currently the problem is that the Auth0 access token retrieved via fhirclient is incorrect, so the process cannot continue. If I manually put the correct token in the middle of the process, Auth0 can access the fhir endpoint, but it doesn't return the patient id in the tokenResponse, and the process in fhirclient requires this information (the interesting thing is - in fhirclient the patient id is in the tokenResponse earlier in the code, but once it requests the token from Auth0, the tokenResponse gets overwritten completely). However, it seems there is a way to create customised claims in its response via Actions, which I am going to test. Again, if I manually put the patient id in the tokenResponse, I can retrieve the fhir data for this patient. But I don't believe I should change any of the code in fhirclient.js - it was just for testing/debugging. So the current blockers are the incorrect access token retrieved and the tokenResponse customisation on the Auth0 side.
My steps are:
Is there anything more I should provide?
This is not reproducible.
What fhir endpoint are you connecting to, and how is it related to auth0?
Sure. It's configured in the Auth0 API; the user set up in user management has permission to access this API fhir endpoint. In this test, when the ehr app tries to launch the client app, a logon screen pops up to allow entering this configured username and password.
One more point sorry didn't mention, the client id for the client app is registered in this fhir server as well.
Hi all,
https://github.com/aehrc/smart-ehr-launcher is a fork of the https://github.com/smart-on-fhir/smart-launcher-v2. It doesn't use Auth0.
From what I can understand, it looks like it's a case of Auth0 not passing the patientId claim to the tokenResponse.
It might be due to Auth0 not being configured to do so. Might need to configure Auth0 so it knows how to pass the patientId claim to the tokenResponse.
If I remember correctly, there is somewhere in the SMILE CDR settings to write JS code for SMART App Launch processing. Perhaps there is something similar within the Auth0 config.
Hi Sean, thanks for your input! Yes, the tokenResponse from Auth0 doesn't have the patient id, but it seems it can possibly be done via an Action in Auth0, which I am going to test. But the token retrieved from Auth0 is not correct; I guess it's because fhirclient doesn't really pass the audience to the token request if the private key is not provided. Do you know if anyone has tested this point or has this experience? (It seems Auth0 itself does have its own SDK libraries for smart on fhir)
My understanding is that this smart ehr launcher on github is a web application; it requires an authorization server to be configured in order to make it work with client apps.
It sounds like you're trying to use auth0 for something it doesn't do. I'm not sure what got you started down this path, but your authorization server needs to be aware of your fhir server and needs to support SMART App Launch. You can't just... use an unrelated server.
If you're just looking for a test server, any reason why you don't use https://launch.smarthealthit.org ?
Auth0 does claim they support SMART on FHIR, and it's free for me to test - that's where I started :)
I am pretty sure Auth0 can work if the client app uses its Auth0 SMART library.
So your feeling is that Auth0 is not compatible with fhirclient.js?
That's also a point I would like to learn: where do client apps stand? Do they need to be developed for all the EHR auth servers, or do auth servers need to be configured for whatever SMART libraries client apps use?
"supporting" SMART on FHIR isn't the same as "works with any fhir server you choose, with no configuration."
You'll need to follow the auth0 configuration process; it sounds like you're hitting issues with auth0, and that the fhirclient.js is behaving correctly according to the spec. Sorry I can't be more helpful, but I don't know the ins/outs of auth0's product.
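For reference, the normal fhirclient.js EHR-launch flow is roughly the sketch below (clientId and scopes are placeholders). The library reads iss from the launch URL and sends it as the aud on the authorize request itself, which is why there's no separate "audience" option to configure:
import FHIR from "fhirclient";

// launch page: the EHR/launcher redirects here with ?iss=<fhir-base>&launch=<token>;
// fhirclient.js forwards that iss as the aud parameter on the authorization request.
FHIR.oauth2.authorize({
  clientId: "my-client-id", // placeholder
  scope: "launch openid fhirUser patient/*.read",
});

// redirect page: after the authorization server sends the browser back.
FHIR.oauth2.ready().then(async (client) => {
  const patient = await client.patient.read(); // relies on `patient` arriving in the token response
  console.log(patient.id);
});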
Do they need to developed for all the EHR auth servers or auth servers
No, clients that follow the spec should interoperate with servers that follow the spec
I think fhirclient.js would work with any authorization server with SMART App Launch set up properly.
You can try using the Auth0 SMART library for initial testing, but AFAIK a lot of SMART apps are using fhirclient.js. So would be best if you can configure Auth0's SMART App Launch to play well with fhirclient.js.
Ideally client apps can be launched from any server supporting SMART App Launch.
Thanks both for your input! If Auth0 is not compatible with fhirclient.js, not a problem - it's still a good journey to dive deep, discover the issues and figure out why. :smile:
Hi, continuing the topic a bit more.
I've just revisited the spec and found this.
image.png
It seems the patient id is not mandatory for the tokenResponse in the spec. However, fhirclient doesn't seem to allow the process to continue without the patient id in the tokenResponse. (Just a note: this isn't about Auth0 in particular not returning the patient id - it's a general question.)
In terms of the token issue, it's either my config issue or an issue with Auth0 itself; "audience" doesn't seem to be a required token request parameter in the spec.
If you're requesting patient/ scopes, then a patient launch context parameter is required (see https://build.fhir.org/ig/HL7/smart-app-launch/scopes-and-launch-context.html#launch-context-arrives-with-your-access_token)
aud is a required authorization request parameter, not a token request parameter (see https://build.fhir.org/ig/HL7/smart-app-launch/app-launch.html#request-4)
"aud is a required authorization request parameter, not a token request parameter" - Yes I found this. Thanks for confirming!
Also thanks for pointing me to the right spot for "If you're requesting patient/ scopes, then a patient launch context parameter is required"!
Hi @Nicole Sun - Auth0 is part of Okta, and I have done quite a bit of work around supporting SMART launch framework with it. There are a few challenges to overcome- but with the new forms capability, it's now able to natively host things like the patient picker- so that's super exciting: https://github.com/dancinnamon-okta/auth0-smartfhir-demo
I think part of the issue here is that Auth0 expects the "audience" to be sent in a querystring parameter called "audience", and the SMART launch framework uses the querystring parameter called "aud". Therefore when you make that initial call- auth0 is actually not associating it properly. There are 2 solves:
1- In my github repo there- I am using a WAF in front of Auth0 - many customers have WAFs in front for various reasons-- but in my example, I simply update the aud->audience querystring variable. The purpose of the two are identical- just the names are different.
2- In Auth0- there is a setting, under "Settings -> Tenant Settings -> API Authorization Settings -> Default Audience". In there, you can put your FHIR base URL there (what you would have passed in via the aud parameter). This effectively hard-codes the audience from a logic/access control perspective, and would make it such that your Auth0 tenant could only be used for 1 FHIR service. Might be OK for now.
FYI- Auth0 always mints JWTs for accessing APIs- there should be no concern there- I'm using a custom action within Auth0 to transfer the selected patient from the picker form (also hosted by Auth0), and putting it within the JWT. Auth0 currently has no way of actually putting the patient in the actual /token response though- so that's the other part that the WAF handles- it reads the selected patient ID in the JWT, and it copies the value into the /token response.
Hi @Dan Cinnamon - Thanks, that's very helpful for me to understand how Auth0 works for smart! I will test out your suggestions and let you know. I did find your github demo and discovered that you programatically use action to set the patient in the claim. I can see Action is a component in Auth0 web UI as well.
I have tried Default Audience, it works! yes I understand it's a hardcoded solution, but it's a good workaround if there is no multiple audiences required.
:high_voltage:danger!:high_voltage:
Note that even with a single intended fhir server, aud plays an important security role by letting you recognize (and reject requests) when an app has been tricked by a malicious fhir endpoint that lies (via its .well-known/smart-configuration) by saying that your authorization server provides its tokens. If you (as an authorization server) don't recognize this case, you'll allow the app to be tricked into revealing your tokens to the malicious endpoint operator, which can then turn around and use the tokens to access your resource server.
If you're just playing around / testing, no big deal, but it's essential to process aud correctly in a production environment.
Oh! yes thank you @Josh Mandel - good callout there. Auth0 does validate the audience (when it sees one), but at one point I did have extra code in to additionally explicitly validate it in the event someone used this workaround and that would definitely be required here. I'll put that back in the repo.
@Nicole Sun - Here is example code for the recommended Auth0 action that would validate the aud field against what we're expecting. In the example I'm not explicitly rejecting the request, since it's possible there could be other, non-FHIR related APIs that the tenant is authorizing for- but it's a starting point for validation on aud.
https://github.com/dancinnamon-okta/auth0-smartfhir-demo/blob/6c10a722938ed17f101a5d3ea50b5d0b92f6c2da/deploy/auth0/actions/Initial%20FHIR%20Authorize.js#L17
@Josh Mandel Yes totally agree! Not for any productions, just for testing and learning purpose. Thanks for pointing out the importance!!
@Dan Cinnamon thanks! it’s good to learn action is where we can do different customizations in the process.
Sorry for forgetting about today's call. Was deep into something and somehow didn't hear/process my phone's little chime. Will try again next week.
Hello @Lloyd McKenzie, I am working on SDC implementation these days and would like to join the weekly calls. Request you to please share the bridge details for joining the same. Thank you
@Ankita Srivastava all our call details are listed here: https://confluence.hl7.org/pages/viewpage.action?pageId=40743450
Thank you so much @Lloyd McKenzie
Hi @Lloyd McKenzie , I am back from the dead :wink: and was wondering if there is a call today. Maybe missed an announcement somewhere? Are the calls still happening ?
Hi @Stoyan Halkaliev ! I think there is no call today because of the WGM, but it should be back next week.
Thank you @Paul Lynch for the quick reply. Looking forward to next week.
Glad you're among the living once again @Stoyan Halkaliev :)
Hi @Lloyd McKenzie , Just wanted to check if we have a call today?
It was canceled. (See the "No call today" thread.)
Thanks @Paul Lynch for the quick reply. I missed the notification.
Will there be a call today? It appears that Lloyd McKenzie is on vacation.
I guess we get a week off then? :smile:
I don't see it on https://www.hl7.org/concalls/index.cfm, but then I don't see it listed for previous weeks either.
Will there be a SDC call today?
Yes
NOTE: We'll be taking a break on calls through the end of the year. Next call will be Thursday Jan. 12.
There will be an SDC call today (at long last). Details here: https://confluence.hl7.org/pages/viewpage.action?pageId=40743450
is cancelled. Brian's not able to attend and our remaining tracker items require his input.
I'm on leave tomorrow so won't be able to make it this week.
@Bryn Rhodes, are you still able to come?
Apologies, TSMG meets today at that time so I won't be able to make it
Ok, then let’s cancel the call
Did we cancel this weeks SDC call?
(or was it for next week?)
We'd said we would cancel today, but then I forgot to do that. Now recorded properly.
I've updated the definition based extraction proposal:
https://hackmd.io/@brianpos/definition-based-extract
I've included the list of issues that I experienced, then the list of recommendations/proposals to cover them off.
We can work through it all on the call, then I can update the proposed wording in the spec based on the content for voting. I believe that I've covered off everything that I wanted to, and it's simpler than where I was trying to take it while at the connectathon.
@David Hay @Lloyd McKenzie @Paul Lynch @Bryn Rhodes
I'm really excited with how clean/capable it has ended up.
I'm now up to working on the template based approach.
I am trying to use the ODH codes - http://terminology.hl7.org/CodeSystem/PHOccupationalDataForHealthODH
The codeSystem is defined in THO, but it has no codes. Where are the codes?
I do find on CDC - https://phinvads.cdc.gov/baseStu3/CodeSystem/2.16.840.1.114222.4.5.327
"date": "2020-12-07"
- that one's pretty old
The one on cdc.gov site is newer (20201030) than THO (20190320)
irrelevant since the THO one doesn't include codes
right
I would rather take a codeSystem that is a millennium old vs one that is empty
also... Authority... cdc.gov seems to be the right authority...
vs HL7 "publisher" : "TBD - External Body",
should the phinvads package have included codeSystems? This is what I was expecting, but it seems it only has valueSets in it.
because systems like vsac and phinvads republish 100s of code systems that are published elsewhere
and they used to do so badly, though things are improving a little lately
so I ignore code systems from those servers unless analysis shows that they are the only and correct source
so far I process 4 code systems from vsac, and one is a constant source of grief for us all
so, what about these ODH codes?
I see on the CDC website that they give an email to ask questions, so I asked PHINVS (CDC) <PHINVS@cdc.gov>
The answer I got back recommended the use of the baseStu3 uri
Note that Eric has indicated that us-core just used ignorewarnings.
CDC is the source of truth and since that url provides a FHIR resource, I'd say that it is a likely source for the concept content. @Jenny Couse is that correct?
Unfortunately, that site has a different defining url, which means that it cannot be the authoritative source for HL7 approved representation of the code system. @Reuben Daniels @Carol Macumber Perhaps we can add this to our list of what needs to be reconciled.
the THO resident codeSystem is grossly unsatisfactory. It can't even figure out what to put in the publisher, indicating "TBD".
NIOSH has hired a team from Dogwood including myself to update their Occupation and Industry resources, including the resources in THO. So with that hat on I'll be coordinating with HTA and with myself with my UTG content curator hat on to deprecate, retire, or update the various resources around ODH in THO. Because yeah, currently that metadata record is very bad.
For the current state of things, from NIOSH's perspective PHIN VADS is the current source of truth.
phinvads only contains valueSets
but it would be good to add the few codeSystems that CDC also owns and manages. That would likely satisfy my ODH need.
I hope you can work with yourself. tough boss
I hope the source of truth won't remain R3
Why do I have to deal with captchas while trying to view FHIR specs? They're just static pages. This is the fourth one I've had to do in the last five minutes while trying to investigate changes across US Core versions and I'm starting to lose my mind.
there's a security layer in front of the HL7 website to perform surface protection - it was getting a serious amount of assault.
Usually that means that something about your web browser is set to suppress the User-Agent header, and the security layer doesn't recognise your requests as coming from a normal browser
It can also mean that you're coming from an IP address that (perhaps because there's a bunch of shared traffic routed through it) is hitting the HL7 website a lot more than a 'normal' IP address does and the site is trying to make sure you're not part of a DoS attack.
My understanding is that all Mitre traffic will be coming from a few IPs, so that is certainly the case.
You can reach out to webmaster@hl7.org and they can potentially put those IPs on a whitelist. (We commonly do that for WGMs and connectathons when there's suddenly a huge spike in traffic from a normally quiet IP :smile: )
I was teaching yesterday at the university and also had to solve the captcha 4 times, to the laughter of the students when I failed even once...
Don't you even know what a crosswalk is?!
I haven't faced this issue during a training course as of yet - if it were to happen too often I'd probably set up a mirror to avoid annoying my training attendees.
I've got one Mac user in my training course today: new captchas appear before he can even enter something in a previous captcha. That's unworkable.
The System clearly recognizes that Mac users are a superior race of benevolent AI overlords ?
@Eric Schmitt
Strong possibility it is related to using iCloud Private Relay, where it's likely that other not-so-nice or abnormal traffic is coming from that 3rd party IP -- which is how it gets put on the flagged list. There are some alternatives to IP-based security rules in this case of iCloud Private Relay that @Eric Schmitt and the HL7 tech team can investigate, but this is part of an ongoing, dynamic, and resource-intensive challenge of providing a reliable global platform.
I use a Mac too, but iCloud Private Relay is not activated in my case...
Hi All,
What you are experiencing is our Web Application Firewall rule for Rate-limiting by IP address. With your feedback I have made an adjustment to that rule.
The rule is based off unique IP address requests.
A unique IP would get CAPTCHA if they had more than 100 requests in 2 minutes.
I have increased the rate limit to 200 requests in 2 minutes.
If we need to go higher we can, but it is best practice to step slowly and evaluate impact on legitimate users.
Raising the rate limit to 200 requests allows legitimate users to make more requests in a short period of time without triggering CAPTCHA, but it is still restrictive enough to prevent high volumes of requests typical of automated bots.... which were attacking us daily. We want to find a good balance to not affect legitimate requests.
Feel free to reach out any time here or Eric@hl7.org
Best,
Eric Schmitt
Director of Technical Services
Is this request limit per HTML page or per HTTP request? Because the FHIR homepage itself makes 41 HTTP requests alone; switching from R5 to R4 and then selecting some pages brought me over the 100 requests pretty fast, and that could be what I experienced...
Thank you for your feedback Oliver.
To clarify, our AWS WAF rate-limiting rule applies to all HTTP requests made by a single IP address, rather than individual HTML pages. This means that each request—for example, requests for images, JavaScript files, or stylesheets—counts toward the rate limit for a given IP.
We can increase the threshold as we feel needed to try to find the ideal balance.
Note: we are also looking into the capabilities of CDN(Content Delivery Network). If properly setup, only essential requests (like the initial HTML page) will reach our WAF rules as requests.
I'm hoping to find someone who's active on laboratory/pathology front, and could propose a good source for histochemical staining method codes. A reference to a code system or a good example value set would be great. Also, if you happen to understand the content of the value set, I'd have some extra questions. :)
We have identified about half of what we need in SNOMED, but for some reason, they are often classified under hematological staining, and I'm not sure if it's correct to use them for histochemical staining.
We also browsed LOINC, and got lost relatively quickly. For SNOMED, we can add local codes, for LOINC, we cannot.
If you or someone you know has had recent encounter with this kind of content, please let me know. :)
Examples of what I'm talking about:
@Andrea Pitkus, PhD, MLS(ASCP)CM?
@Rutt Lindström I'm by no means an expert on all of the available staining techniques, but I do know a little about this in general. I would think that most (if not all) of what you are looking for would be available in LOINC (in the LOINC part codes used for METHOD_TYP). If it's not, I am a little surprised, but even if that's the case I think that the LOINC team would likely be quite amenable to adding whatever is missing. Have you checked with them about any of it?
Thank you, @Rob Hausam
We are struggling with LOINC. We don't maintain LOINC ourselves, only the observation concepts are translated, so we don't have a national browser/owner/user/maintainer for the rest of LOINC (the methods are in the Parts section as I understand).
We could not find everything in LOINC browser either, but our searching capabilities were below bar and there might be confusing synonyms, so we should probably try again.
So, if I would like to find all the staining methods they have, then what is the intelligent way of doing that? There must be a smarter way than this:
image.png
@Rutt Lindström The LOINC hierarchy browser may also be of help. It doesn't quite provide the flexibility and views that I might have liked to see, but if you search the "Method" hierarchy for "stain", in the hierarchy of the results you will find the 'LP' codes for the stain methods (I think likely all of them) - which includes most of the examples that you gave (at least at some level), plus a lot of others. It doesn't quite provide a nice view of only the methods and their codes for the stain methods (unless I've missed something in the settings), but it may be a good start. To do more than that, then probably you would load the LOINC data into a database and write SQL queries (or the equivalent) against that.
Ha! I tried the hierarchy browser, but did not realise that I had to pick "Method" from the menu, so that's why it did not work for me.
Thank you for being patient and holding my hand here, @Rob Hausam . I'm a little smarter now.
I will export what I found, and see if my colleagues can sort it out and get some experts speak along to verify the content.
:hug:
LOINC would not care about every detail regarding observation methods as its axis Method_Type implies. Its inclusion of a kind of method type depends on clinical relevance/significance. So in fact LOINC would not include every variant histochemical method but clinically important method types.
Literature would help, such as:
Histochemistry: historical development and current use in pathology
https://pubmed.ncbi.nlm.nih.gov/23957702/
IMO, there is no exhaustive and up-to-date list for these methods.
Thank you, @Lin Zhang
The original request we got actually proposed a local code system, but we try not to create local codesystems when an international codesystem already contains requested content.
However, this looked borderline, and I think your last sentence confirmed it. :smile:
(@Grete Ojavere )
@Rutt Lindström We are also starting to dig deeper how to implement this, how were your experiences? Did you find the additional staining techniques in LOINC?
I think we did not find everything we needed, but I hope @Grete Ojavere can share more :)
For various methods, not limited to lab ones, LOINC focuses on method types of clinical significance only, and not down to every detailed level of granularity that method developers care about.
@Lin Zhang I suppose that depends on the scope. We are currently not trying to model the inner workings of a lab system, but to appropriately report IHC staining results as Observations as part of a report for research purposes only
In the end, we still made local codes because we couldn't find everything we needed, and we also got information from our pathologists that they don't use LOINC codes in their work
I just wanna give a big-axx salute to the synthea team for making an awesome tool for generating synthetic data that is easily extendable and without a sxxt-load of magic. You guys nailed it!
Thanks @Jens Villadsen , we appreciate the praise, and we're glad you find it useful.
Especially the flexporter feature is a nice addon
@Jason Walonoski - btw - one of my colleagues asked if adding/loading js libs could be an option so that one could use js util functions for different cases upon generation in the flexporter
@Jens Villadsen thanks for the kind words!
Re: flexporter support for loading JS libraries, this is something we've explored a little but it wasn't super user friendly and I wasn't sure how far down the rabbit hole to go at that point. Limited support does exist if things are in the right place but it's not currently exposed via the mapping file. https://github.com/synthetichealth/synthea/blob/master/src/main/java/org/mitre/synthea/export/flexporter/FlexporterJavascriptContext.java#L21
I just opened https://github.com/synthetichealth/synthea/issues/1505 for this feature request, please chime in with any thoughts or specifics you might want to see
Thats good enough for us to play around with. Perfect!
Does anyone have a Docker image for synthea that I could use to spin up quickly? I'm interested in learning how to use Synthea, but want to isolate its use to a container, if possible. I'm new here, thanks for the help.
There are a few on GitHub that you could try
FWIW - the main branch should build w.o. any problems
David Pyke said:
There are a few on GitHub that you could try
Thank you for your prompt response. Would you be able to point me to one? I tried looking earlier, just not sure what to search for on GitHub.
Chatgpt can help you wrap it in Docker if need be
But you will need to deal with volume mounts or alike
Here is one that has what you need to build one: https://github.com/IndustrialDataops/Synthea-Docker
@WorldOnFHIR - I'm not aware of what you're trying to do, but you can also just download and run the Java jar file directly. It doesn't have any outside dependencies except Java, so there's really no need to isolate it unless you are trying to avoid installing Java on your machine (in which case I'd use a VM rather than Docker in this particular case). There are more hoops to jump through to change the config and get the output files out of a Docker container, I suspect.
https://github.com/synthetichealth/synthea/wiki/Basic-Setup-and-Running
@Justin sorry I missed this, I don't usually follow zulip closely. In case you're still working on this, we also have a "customizer" tool that can help you construct a dockerfile to run synthea in: https://synthetichealth.github.io/spt/#/customizer
Dave Hill created a new channel #PACIO Personal Functioning and Engagement.
Nikolai Ryzhikov created a new channel #Babylon (Aggregate FHIR terminology).
Jean Duteau created a new channel #Da Vinci PR.
Sanja Berger created a new channel #german/dguv.
Alejandro Benavides created a new channel #HL7 CAM.
Biswaranjan Mohanty created a new channel #Enhancing Oncology.
Abbie Watson created a new channel #fsh-tooling.
deadwall created a new channel #google-cql-engine.
Artur Novek created a new channel #FHIRest.
Aaron Nusstein created a new channel #US Behavioral Health Profiles.
Nagesh Bashyam created a new channel #UDS-Plus.
Grahame Grieve created a new channel #FHIR Foundation.
Grahame Grieve created a new channel #FHIR for Pets.
Koray Atalag created a new channel #Digital Twins on FHIR.
Preston Lee created a new channel #Meld.
Jean Duteau created a new channel #Vulcan/OMOP.
Dave Hill created a new channel #PACIO Standardized Medication Profile.
Dave Hill created a new channel #PACIO Transitions of Care.
Dave Hill created a new channel #PACIO Integration Track.
I would like to configure a HAPI starter with IPS. Can anyone help me by providing either boilerplate or an explanation how to hook the config? There is the IPS Reference Server, but I cannot find the source and the server itself is down.
which IPS version do you want?
ballot version? Or the current one, older?
We have an R4 server. And I guess as a basis I would like the current released version. Ultimately, we will have to modify the IPS for Australian standard. So if there is a recipe as to how to do that i.e. right modifications for the IPS processing that would also be welcome.
i meant which version of the IPS.
ah sorry
current one
Yes, I am aware of that documentation. I was looking to see a reference to the actual source files changed and implemented. This is an architecture description, not an implementation guide.
@Jörn Guy Süß I don't think there is an implementation guide (not that I can recall, anyway). But it's pretty straightforward. This is what I actually have in my application.yaml file (which I last updated in May):
hapi:
  fhir:
    ### This enables the swagger-ui at /fhir/swagger-ui/index.html as well as the /fhir/api-docs (see https://hapifhir.io/hapi-fhir/docs/server_plain/openapi.html)
    openapi_enabled: true
    ### This is the FHIR version. Choose between, DSTU2, DSTU3, R4 or R5
    fhir_version: R4
    ### enable to use the ApacheProxyAddressStrategy which uses X-Forwarded-* headers
    ### to determine the FHIR server address
    # use_apache_address_strategy: true
    ### forces the use of the https:// protocol for the returned server address.
    ### alternatively, it may be set using the X-Forwarded-Proto header.
    # use_apache_address_strategy_https: false
    ### enables the server to host content like HTML, css, etc. under the url pattern of /static/**
    ### the deepest folder level will be used. E.g. - if you put file:/foo/bar/bazz as value then the files are resolved under /static/bazz/**
    #staticLocation: file:/foo/bar/bazz
    ### enable to set the Server URL
    server_address: https://fhir.hausamconsulting.com/r4
    # defer_indexing_for_codesystems_of_size: 101
    # install_transitive_ig_dependencies: true
    #implementationguides:
    ### example from registry (packages.fhir.org)
    #  swiss:
    #    name: swiss.mednet.fhir
    #    version: 0.8.0
    #  example not from registry
    #  ips_1_1_0:
    #    url: https://build.fhir.org/ig/HL7/fhir-ips/package.tgz
    #    name: hl7.fhir.uv.ips
    #    version: 1.1.0
    # supported_resource_types:
    #   - Patient
    #   - Observation
    ##################################################
    # Allowed Bundle Types for persistence (defaults are: COLLECTION,DOCUMENT,MESSAGE)
    ##################################################
    # allowed_bundle_types: COLLECTION,DOCUMENT,MESSAGE,TRANSACTION,TRANSACTIONRESPONSE,BATCH,BATCHRESPONSE,HISTORY,SEARCHSET
    # allow_cascading_deletes: true
    # allow_contains_searches: true
    # allow_external_references: true
    # allow_multiple_delete: true
    # allow_override_default_search_params: true
    # auto_create_placeholder_reference_targets: false
    cr_enabled: true
    ips_enabled: true
    # default_encoding: JSON
    # default_pretty_print: true
    # default_page_size: 20
    # delete_expunge_enabled: true
    # enable_repository_validating_interceptor: true
    # enable_index_missing_fields: false
    # enable_index_of_type: true
    # enable_index_contained_resource: false
    ### !!Extended Lucene/Elasticsearch Indexing is still a experimental feature, expect some features (e.g. _total=accurate) to not work as expected!!
    ### more information here: https://hapifhir.io/hapi-fhir/docs/server_jpa/elastic.html
    advanced_lucene_indexing: true
    bulk_export_enabled: false
    bulk_import_enabled: false
    # enforce_referential_integrity_on_delete: false
    # This is an experimental feature, and does not fully support _total and other FHIR features.
    # enforce_referential_integrity_on_delete: false
    # enforce_referential_integrity_on_write: false
    # etag_support_enabled: true
    # expunge_enabled: true
    # client_id_strategy: ALPHANUMERIC
    # fhirpath_interceptor_enabled: false
    # filter_search_enabled: true
    # graphql_enabled: true
    narrative_enabled: false
    # mdm_enabled: true
    # local_base_urls:
    #   - https://hapi.fhir.org/baseR4
    mdm_enabled: false
    # partitioning:
    #   allow_references_across_partitions: false
    #   partitioning_include_in_search_hashes: false
    cors:
      allow_Credentials: true
      # These are allowed_origin patterns, see: https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/cors/CorsConfiguration.html#setAllowedOriginPatterns-java.util.List-
      allowed_origin:
        - "*"
As it appears (and as far as I recall at the moment), the ips_enabled: true was all that I needed. It is time for me to do an update and refresh on my server, though, so when I do I can see if there is anything else that is needed or anything that has changed.
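In case it helps anyone else: my understanding is that with ips_enabled: true the starter exposes the IPS $summary operation on Patient instances. A minimal sketch of calling it with the HAPI generic client - the base URL and patient id below are placeholders, not real values from my server:

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.IdType;
import org.hl7.fhir.r4.model.Parameters;

public class IpsSummaryExample {
  public static void main(String[] args) {
    // Placeholder base URL and patient id for illustration only
    FhirContext ctx = FhirContext.forR4();
    IGenericClient client = ctx.newRestfulGenericClient("https://fhir.example.org/r4");

    // Invoke the IPS $summary operation on a Patient instance via HTTP GET;
    // the server assembles and returns the IPS document Bundle
    Bundle ipsDocument = client
        .operation()
        .onInstance(new IdType("Patient", "123"))
        .named("$summary")
        .withNoParameters(Parameters.class)
        .returnResourceType(Bundle.class)
        .useHttpGet()
        .execute();

    System.out.println("IPS document entries: " + ipsDocument.getEntry().size());
  }
}

(If your version of the starter doesn't wire the operation up that way, a GET on Patient/[id]/$summary against the server base is the thing to check for.)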
Thanks, I have found that switch. I was trying to work out where I would best inject the beans to configure the rendering toolchain shown in the architecture. If you come across that information, that would be greatly appreciated.
Not sure if this is what you are looking for, but from here you can chase down the classes/beans that are registered and find the source code in the parent HAPI project:
https://github.com/hapifhir/hapi-fhir-jpaserver-starter/blob/master/src/main/java/ca/uhn/fhir/jpa/starter/ips/StarterIpsConfig.java
@Jörn Guy Süß If what Craig posted doesn't fully answer it, maybe you can clarify further what you are needing/wanting to do with configuring the rendering toolchain?
I want to replace the narrative generator so that it renders the sections that are specific to Australian patients. This is towards implementing https://build.fhir.org/ig/HealthIntersections/au-ips/ on HAPI.
If I create a class that implements these, where do I (best) hook the bean in the HAPI starter overlay?
Belay that, this is autowired(!). I am on my way. Thank you. This was fantastic help.
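For anyone who lands on this later, here is the rough shape of the override I had in mind. This is a sketch only: the config class name and the template properties path are made up, and I'm assuming the starter will prefer a @Primary narrative generator bean over its default (the IPS module may instead pull templates through its generation strategy, so check where your version actually reads them from):

import ca.uhn.fhir.narrative.CustomThymeleafNarrativeGenerator;
import ca.uhn.fhir.narrative.INarrativeGenerator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class AuIpsNarrativeConfig {

  // Replace the default narrative generator with one driven by AU-specific
  // Thymeleaf templates; the properties path below is hypothetical
  @Bean
  @Primary
  public INarrativeGenerator narrativeGenerator() {
    return new CustomThymeleafNarrativeGenerator(
        "classpath:au/ips/narrative/au-ips-narratives.properties");
  }
}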
https://zoom.us/j/94516094652?pwd=Sk01NzBiMDdSTjRoSVFYemFYWlFiUT09
Meeting ID: 945 1609 4652
Passcode: 215929
Agenda:
C'thon recap
Milestone #1 Review sheet: https://docs.google.com/spreadsheets/d/1Jg2ypM6QNUfyMTgnkQ_jvI0x5lvKxb03D2CHfMAxxDk/edit?usp=sharing
Dev Days: who's coming? what needs prep?
Agenda for today's meeting:
I had a chance to review the FHIR Community Process Requirements v1 document, which looks like the most current official source, and I agree with @John Grimes 🐙 from our last call that the requirements would not be difficult for us to meet.
The main non-tactical question I have is around the concept of "FCP Participant".
The reqs state that any entity, including an "individual", can become a participant (FCP101), and also that "any registration information e.g. business/company registration details" (FCP102) shall be provided.
Since we are currently organized as a loose group of volunteers (some of whom work for companies with commercial interests), what should our form of organization be?
Is it recommended or required that our group "register" in some sense?
Let's discuss on the call today. Thx!
cc: @Josh Mandel @Grahame Grieve
Zoom link:
https://zoom.us/j/94516094652?pwd=Sk01NzBiMDdSTjRoSVFYemFYWlFiUT09
Meeting ID: 945 1609 4652
Passcode: 215929
Sorry guys, I will skip today's meeting.
Possible agenda for today's meeting:
We will also have @Kiran Ayyagari dropping in to tell us about Safhire.
Zoom link:
https://zoom.us/j/94516094652?pwd=Sk01NzBiMDdSTjRoSVFYemFYWlFiUT09
Meeting ID: 945 1609 4652
Passcode: 215929
During a review I noted a couple of typos in the casing of resource type names.
https://github.com/FHIR/sql-on-fhir-v2/pull/262
Zoom link:
https://zoom.us/j/94516094652?pwd=Sk01NzBiMDdSTjRoSVFYemFYWlFiUT09
Meeting ID: 945 1609 4652
Passcode: 215929
Trying to connect to the meeting - having some technical difficulties.
Hi all! Agenda for today's meeting:
Zoom link: https://zoom.us/j/94516094652?pwd=Sk01NzBiMDdSTjRoSVFYemFYWlFiUT09
Meeting ID: 945 1609 4652
Passcode: 215929
And Eastern Daylight Time has started, so it's not for another hour - I got up too early!