A weekly summary of chat.fhir.org to help you stay up to date on important discussions in the FHIR world.
This is a discussion thread regarding the requirements for tools using the shared .fhir/packages registry.
Many tools are installing packages in this directory and relying on packages installed by other tools, but some behaviours (the content of packages.ini, the addition of indexing files) can lead to conflicts, particularly if multiple processes are trying to utilize this directory at the same time.
A .lock file is used by the Java core libraries (Validator, IG Publisher), but as @Josh Mandel pointed out, this could be needlessly complex and avoided altogether if package installs were guaranteed to be an atomic action (install into a temp directory in .fhir/packages, and then rename) and idempotent (I don't care if another process installed a package over the one I just installed, because they are guaranteed to be exactly the same).
What if we maintained that all tools should NOT be able to alter the content of installed packages, and should limit their own data to a subdirectory (for example .fhir/packages/.firely)?
Aside from package installation, this would also neatly prevent file name collisions (.fhir/packages/.hl7/my.package#1.2.3/some-new-file doesn't interfere with .fhir/packages/.firely/my.package#1.2.3/some-new-file) and means each tool just needs to implement concurrency within its own directory.
can we live without packages.ini?
as for indexing... I thought that we did that before install? can you check?
I'm imagining a scenario where a pre-existing cache contains packages from previous implementations. How would we know that the packages are still compatible with the current way of doing things? packages.ini with a simple version is a good way of explicitly stating what particular interactions are supported in the cache.
that's true. There wouldn't really be contention around a packages.ini that only contained the cache version?
I'll have to dig into the indexing as it was changed in recent merges, but my memory is that it generated missing indexes on package read, which was what necessitated re-arranging it.
If it only contains a package cache version, maybe it needs to not be 'packages.ini' at all.
but my memory is that it generated missing indexes on package read, which was what necessitated re-arranging it.
we can change it to only do it before the package is renamed.
If it only contains a package cache version, maybe it needs to not be 'packages.ini' at all.
gonna be something though
cache.version?
could be. I don't mind, but the name and format aren't germane to the locking discussion?
we can change it to only do it before the package is renamed.
This assumes then, that all other tools will also index before renaming. It would require it, in fact, and that the indexes produced are always the same.
Which I think will guarantee headaches.
And yeah, I don't care about the name. All I'd care about is the existence of something to indicate cache version.
Well, I do care about the name, a lot actually, but that discussion doesn't need to be here
that the indexes produced are always the same
there is a spec for the index, but I do deviate from it :-(
@Martijn Harthoorn maybe we should talk about that
the documentation for the index file says:
The files array contains an object for each resource in the package with the following properties:
* filename - the filename in the package directory that is being described
* resourceType
* id
* url
* version
* kind
* type
* supplements
* content
but i had to add:
So all of the above needs to be satisfied, and:
The .index.json file can be rebuilt at any time
I can imagine the pain of sushi generating a bad index (not to imply that sushi would be prone to this) and then some other component having to deal with that. Or doing something (like above with valueSet) that gets overwritten by some other tool.
Tagging @Chris Moesel as requested
And @Marten Smits is currently maintaining the .NET packaging library, so I am adding him too.
SUSHI does not generate its own packages. We only cache packages that we have pulled from a registry or from the build server (for #current builds). When we do so, we follow the approach that Josh suggested, unzipping the tgz into a temporary folder and then moving it to the correct location in the cache. We don't modify anything in the package, except fixing up the folder structure in a few old packages (as described here). SUSHI doesn't generate, modify, or even read .index.json files.
thx
:+1: for the temp folder approach. That indexing (or lack of indexing) behaviour is probably why Validator / IG Publisher builds its own, right Grahame?
builds its own what?
Indexes. If it finds a package installed by sushi, without indexes, that's when it would build them.
@David Otasek - Generally speaking, I think that packages have a .index.json file in their distribution package, so when we cache the unzipped package, it usually already has a .index.json in it. That said, I don't know when that started happening, so I guess any packages prior to that wouldn't have one.
that's the case - it was about 5 years ago
@Chris Moesel yes, that's my understanding as well. Sushi is doing a very literal install of the package, while the core libs are 'enhancing' it for optimization purposes. My proposition above was to do what Sushi does, and if .index.json or whatever is generated from another tool, it should go in .fhir/packages/.tool-id/
to keep the package install idempotent.
why need to do that if the index is generated prior to renaming?
That would have to be a requirement for every tool, then.
or we could delete and reinstall the package. Though that's not really the idea
I'm not excited about that idea. Though not exactly the same, that has the same feel as the package clearing that was causing us grief.
With the .fhir/packages/.tool-id approach, I'm mostly attracted to the idea that the package will always be the same, and that different tools will have their own sandbox to do whatever they want. If I recall correctly someone mentioned they were maintaining some additional data of their own, which is exactly what we don't want.
that's firely - @Ewout Kramer
That sounds right.
Yes, we at Firely also generate a .firely.index.json file which contains a bunch of stuff we add for optimizing our tooling. We also still generate the .index.json as described by the spec. We used to add our own stuff there too a couple of years ago, but I decided to keep that file just as described by the spec so that other tools still work.
@Marten Smits what do you add? Can we converge?
Let me check.
First of all, we put our index file at the root (so next to the package, example, and 'other' folder). So we have one index file for the entire package.
We add the following extra fields:
Most of these we use to make resolving files from the packages faster.
sounds like we can converge on that
You've added
valueSet is CodeSystem.valueSet - again, for resolving things
I can't see that I make use of derivation. So I don't know why I added it
oh no, I do. I use it to find/explore the type hierarchy without having to load all the structure definitions
Ok, that's easy enough indeed to align.
Do you want to keep having an index file per folder? Or no problem with moving it to the root?
I think I put it in each folder because we didn't say, and it seemed the most conservative option. But it does matter to me - I don't load examples unless examples are being looked for, for example
Sure, we can work around our root scope I guess.
Do we need to file a Jira ticket to change the index.json spec?
yes we do need one
For clarity, the full proposal here is that every tool installing a package to the package cache will generate the .index.json file, as defined by the spec mentioned (which will be updated).

I'll go on record as saying I wanted tools to keep their dirty work to their own directories (.firely, .hl7-core, .sushi), but I can be on board with the above if that's the consensus.
@David Otasek I'm not sure, but if you're proposing adding requirements on how packages are created too, we'll need some FHIR-I tickets to update the requirements on https://hl7.org/fhir/packages.html
For now .index.json and packages.ini are not mandatory (or described, in the latter case). But perhaps you are not proposing to make them mandatory in publishing; in that case maybe you could update the proposal wording above to reflect that :thank_you:
Yes, this is restricted to tools writing to the package cache, and I updated the proposal above. I think that falls out of FHIR spec territory. I believe .index.json will still remain optional, and Grahame has stated that we will need a Jira ticket to make changes to that spec.
even though they are not mandatory, I always populate them when publishing
Well, actually the language in the package spec is intentional in saying tools can modify these indices at any time. Fine, as long as they aren't in the package cache. If they're in the package cache, there will be a VERY specific point at which they can be changed, and then they must remain static unless completely deleted.
I don't like that the package spec says something which is immediately contradicted in the package cache docs, but doesn't otherwise mention it. Maybe a note saying: "there are stricter rules regarding regenerating indices when utilizing a package cache".
At least a breadcrumb.
We don't support packages.ini, I think. Can someone explain what its purpose is?
it has two purposes - one to mark the version of the overall repository. that's currently 1
and second to track the last use date of packages, so a user can see which packages haven't been used.
but that's not really doing anything at the moment
Ah ok, we don't use it, or do anything with it. Is this a problem for anyone currently?
We don't use it in SUSHI either (we neither read nor write it).
@Ward Weistra My colleague tried to grab the us.core#7.0.0 package from https://packages.fhir.org/hl7.fhir.us.core The package listing goes up to 6.1.0 but does not have 7.0.0. I tried using the direct url: https://packages.simplifier.net/hl7.fhir.us.core/7.0.0 which also shows that this version does not exist. How should I get the US Core v7 package into Simplifier?
Thanks, and safe travels back home.
I had the same issue, looking at the package on https://build.fhir.org/ig/HL7/US-Core/downloads.html, in the package.json file we have:
```
{
  "name" : "hl7.fhir.us.core",
  "version" : "7.0.0",
  "tools-version" : 3,
  "type" : "IG",
  "date" : "20240627054756",
  "license" : "CC0-1.0",
  "canonical" : "http://hl7.org/fhir/us/core",
  "notForPublication" : true,
```
with the bottom line ensuring it isn't published on the package registry. I guess the only option is to download it manually
@Eric Haas is this a publication bug?
I don't have control over package creation. The link is the same as in the publication history page. Simplifier only lists 6.1.0 and 3.1.1. Where are you downloading it manually?
@Ryan May @Eric Haas I think that may just be a result of looking at build.fhir.org. The package registry gets released versions from https://github.com/FHIR/ig-registry/blob/master/package-feeds.json -> https://hl7.org/fhir/package-feed.xml and the one there looks fine.
@Yunwei Wang The real issue is here: US Core 7 builds on VSAC 0.18.0, and that package is huge, so it has been refused by the package registry infrastructure for now. US Core will in turn be refused because of missing dependencies.
The consensus is now that VSAC should indeed no longer be distributed as a FHIR package, but we're investigating if an exception can be made for the existing VSAC packages, perhaps one or more future ones and only those. And agree at FHIR-I on a package size limit value.
But this will take a moment...
The versioned US Core links all point to the correct package version, and the current version points to the current package, so I think this will all be sorted out in the next versions. :fingers_crossed:
In case it helps: for the short term, any VSAC value set that is used in US Core (or C-CDA) has HL7 US Realm Program Management Author and HL7 US Realm Program Management Steward (Role: Steward). This should be part of the VS metadata.
Also - minimally - IMHO the package should include only value sets that have status "active"
OK for the bigger issue though - I think my points stand. Also we could probably find out all IGs that use VSAC for VS build and source of truth and find out who the authors/stewards are and limit the VSAC package to that
@Grahame Grieve I'll continue to explore whether we can still load and serve VSAC 0.18.0+ for now.
But Gay has a suggestion above for a logical filtering of VSAC. Would this work for upcoming releases at least until a VSAC FHIR server is set up? For new VSAC packages it would be clear for users if they are missing a VS/CS when validating their IG.
(Or I'd welcome doing that retroactively for VSAC 0.18.0 and up too. Potentially we could run checks for all packages you know to depend on VSAC 0.18.0+ if they miss anything)
I'm going to investigate
here's my data:
we use value sets from the following stewards:
so @Gay Dolin's suggestion does not work
indeed, but does that make any difference?
I guess these could still be 19 separate packages. If need be, the next iteration of the VSAC package could depend on all of those.
I have no idea how many VS/CS those Stewards have in VSAC in total, but if that's a manageable amount you could include all for a Steward in such a subpackage so you don't need a new edition when someone needs one more.
hl7.fhir.us.vsac.hl7-usrpm, for example.
or simplifier could simply allow bigger packages, which would be way easier for everyone
How big are the 19 together? I think that could be "The" package.
There is no need to pull in all the other sets - so many are really poor value sets
US Realm Program mgt (US Core/C-CDA) sets could REALLY take advantage of a separate package. We possibly could then do away with depending on VSAC for the "Annual Releases". It would save HL7 about $40,000 a year, and ONC possibly even more
if people have used those 19, why would they not be allowed to use others?
Generally it's IG authors who are building the sets
There is no need to pull in all the other sets - so many are really poor value sets
if you can get formal agreement from #terminology that no one has a valid reason to use any other stewards, I can remove them, sure
US Realm Program mgt (US Core/C-CDA) sets could REALLY take advantage of a separate package. We possibly could then do away with depending on VSAC for the "Annual Releases". It would save HL7 about $40,000 a year, and ONC possibly even more
I don't know anything about annual releases, but US Realm could just use THO or define its own package. I'm told that VSAC is used because it's a better authoring environment
It is a better authoring environment
Heading into SD but will love to chat more about why this WOULD work
Grahame Grieve said:
if you can get formal agreement from #terminology that no one has a valid reason to use any other stewards, I can remove them, sure
Terminology would never agree to that, as they should not. But perhaps the short term solution is using those current stewards, so publishing can get going again
Maybe the long-term solution is making Simplifier bigger, but maybe each package update could still be based on: 1) IGs that have VSAC value sets 2) who the stewards are - and then it will always be manageable
If not the above: if we only pulled in value sets with status "Active", and some already-published IGs have sets that have flipped to "Not Maintained", will that be problematic wrt validation in implementations?
If not, we could just pull in "Active" sets. But that would still be pretty large, since VSAC now forces maintenance (https://www.nlm.nih.gov/vsac/support/authorguidelines/valuesetstatus.html) (though I'm guessing the stewards/authors of most of the "crap" value sets don't pay attention to those emails that warn you your set is getting flagged as not maintained)
WRT the C-CDA annual release (which includes the shared US Core sets whose source of truth is VSAC): since 2016 HL7/ONC has provided an annual release, basically to make up for the fact that all of C-CDA had not been balloted since 2015: https://vsac.nlm.nih.gov/download/ccda
At a cost to HL7 and ONC
We were hoping to get VSAC to create an ability for small releases to "push a button" and create a release, but they said "maybe we could start working on that in 2026"
So, @Brett Marquard and I are exploring if vendors (or vendor customers) even need a "release" and/or if we could offer it another way, hl7.fhir.us.vsac.hl7-usrpm for example. :-)
WRT using THO - 1) not until the authoring space gets better 2) US Core and C-CDA increasingly share sets, so we prefer the sets are either in THO or VSAC rather than in one IG or another
Lastly, We are working with OCL folks (OCL is all (or mostly all, and goal is to be all) FHIR based) so that maybe someday in the future, their tooling would rival VSAC authoring and it could be used in the THO space and we would not have to use VSAC at all.
@Grahame Grieve - I know you are familiar with OCL, but in case others are not: https://openconceptlab.org/
But perhaps the short term solution is using those current stewards, so publishing can get going again
this is something that has already happened - the packages are already published. There is only one short term solution, which is for simplifier to remove the size limit for at least the vsac package
even if I can restrict to active only, that's for future publications
here's the value sets that are used that are not actively maintained:
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1032.115 (Not Maintained) @ MITRE used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1099.46 (Not Maintained) @ BSeR used by [hl7.fhir.us.bser#2.0.0-ballot]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1111.95 (Not Maintained) @ The Joint Commission used by [hl7.fhir.us.bser#2.0.0-ballot]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1142.10 (Not Maintained) @ SAMHSA Steward used by [ihe.iti.pcf#1.1.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1142.41 (Not Maintained) @ SAMHSA Steward used by [ihe.iti.pcf#1.1.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1142.50 (Not Maintained) @ SAMHSA Steward used by [ihe.iti.pcf#1.1.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1142.57 (Not Maintained) @ SAMHSA Steward used by [ihe.iti.pcf#1.1.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1146.1144 (Not Maintained) @ CSTE Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1146.1152 (Not Maintained) @ CSTE Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1146.1154 (Not Maintained) @ CSTE Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1146.1157 (Not Maintained) @ CSTE Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1146.1223 (Not Maintained) @ CSTE Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1146.1270 (Not Maintained) @ CSTE Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1196.309 (Not Maintained) @ IMPAQ used by [hl7.fhir.us.nhsn-ade#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1196.310 (Not Maintained) @ IMPAQ used by [hl7.fhir.us.nhsn-ade#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1223.9 (Not Maintained) @ CareEvolution Steward used by [hl7.fhir.us.nhsn-med-admin#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113762.1.4.1240.1 (Active) @ HL7 USRPM used by [hl7.cda.us.ccda#3.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113883.3.464.1003.111.11.1021 (Not Maintained) @ NCQA used by [hl7.fhir.us.mihr#1.0.0]
* http://cts.nlm.nih.gov/fhir/ValueSet/2.16.840.1.113883.3.464.1003.111.12.1015 (Not Maintained) @ NCQA used by [hl7.fhir.us.mihr#1.0.0]
so active only doesn't seem to be a goer either
Grahame Grieve said:
so active only doesn't seem to be a goer either
OK - I thought that might be the case
@Yunwei Wang so we're stuck here. packages.fhir.org won't handle the package. But packages2.fhir.org/packages handles it with no problems, so people should always use the alternative server
Did you forget this part or did you think I abandoned it?
Ward Weistra said:
I'll continue to explore whether we can still load and serve VSAC 0.18.0+ for now.
However, I would still like to keep any size exception, if at all feasible, as small as possible. So if we could switch, for any next VSAC publication, from one supersize VSAC package to a more tailored hl7.fhir.us.vsac.hl7-usrpm per publisher, that would be a great way to not make the future impact any bigger than necessary.
As part of the consumer and regulator-facing IPA website, we're creating a logo. We've created the following. We need to decide between them. They're not all finalized, and will be used as a design direction for further refinement. Please vote in the below poll to indicate your preference.
Option A: Globe and flame
smaller_variation_3_background_removed.png
Option B: Pixelated heart-shape with flame
own_identity_3.png
Option C: Torch as I in IPA
torch_2.webp
Option D: Hands cupping flame
hands_2.png
(Shoutout to @Andrew Fagan for creating these logos!)
cc/ @Mikael Rinnetmäki , @Sheridan Cook , @John D'Amore , @Rob Hausam , @Brett Marquard , @Ricky Bloomfield , @Andrew Fagan , @Rashid Kolaghassi , @Vassil Peytchev , @Jason Vogt
/poll Which logo should IPA adopt?
Option A: Globe and flame
Option B: Pixelated heart-shape with flame
Option C: Torch as I in IPA
Option D: Hands cupping flame
A looks like a cannonball :thinking:
ChatGPT images aren't as good, but I was playing around with the idea of something other than fire, since "burning earth" and "burning hands" can both be problematic. So I asked for an earth with blue/orange rings around it, implying connecting the world. The "FHIR" reference is in the color of the rings, not actual burning fire. A real artist could make something much nicer, but maybe this is a better middle ground?
DALL·E 2024-10-02 13.12.19 - A logo featuring a stylized Earth at the center, surrounded by orbiting rings similar to those of a planet. The Earth is modern and sleek in design, w.webp
Maybe this also has less copyright risk if that's a concern.
Jens Villadsen said:
A looks like a cannonball :thinking:
Because if it's not love,
then it's the bomb, the bomb, the bomb...
that will bring us together..
by The Smiths: Ask, 1986
Personally not keen on any of the four logos under vote
I'm not deeply involved in this, but I agree with @Kari Heinonen. I see no indication of "patient" or "access" in any of these. And I wonder how much FHIR is the background mechanics of patient access, rather than the headline.
That said I lean towards the torch, but worry that the "IPA" of the logo is a little anglo-centric for a spec that is supposedly "international".
The torch icon makes me think of the Olympic torch.
Logos are hard.
I love the enthusiasm and brainstorming in this thread! Please do post alternative ideas here, weigh in on your favorites, and flag issues with others.
We've got until end of day Sunday to make a decision. I'll take the best ideas to our web design firm then -- at which point we'll be locked into logo and color scheme.
Ok - tried to incorporate comments so far in a design that blends what folks like about Option A, C, and D. I know we won't make everyone happy but I like the symbolism of the torch and hands ("I'm an advocate for myself in bringing my data wherever I go, and healthcare systems are there to support my journey"). Better?
First: too "busy", i.e. too many elements in a small package; for example, I couldn't make out the hands at first glance. Second: I don't like the torch to begin with, that's for the Olympics :smile: , and I believe "hands holding earth" is not a particularly original idea, right?
Maybe modify previous "rings around world" to include a smallish flame "in orbit" leaving multicolored "trace" behind ? So sorry, can't do actual graphical design to save my life ...
maybe focus more on patient access more and international less
this stuff is hard
Richard, good idea! What would that look like?
༼ つ ◕_◕ ༽つ :fire:
(that's a logo that everybody understands :big_smile: - a person requesting fhir )
A colleague of mine suggested a theme of the patient "bringing data with them", and generated this with copilot --
image.png
Thank you, Grahame! The "galactic FHIR badge" is visible now, yes?
I don't think this is what you actually wanted for the second image:
but wow, that's perfect for IPA...
How about something like this?
Did not want to put too much effort into it yet, but the heart is supposed to pixellate a bit...
For what it's worth, in our call we discussed the torch and the prometheus aspect - which I kind of liked. Most gods think that ordinary people should not get access to FHIR, but not everyone agrees...
I'd like to perhaps explore the torch idea a bit further too. But the torch does not need to be the I of the IPA. Just a standalone symbol. And perhaps in a hand of the patient.
Grahame Grieve said:
but wow, that's perfect for IPA...
I do love @Grahame Grieve's accurate and concise illustration of the current state of affairs, but I don't feel it accurately captures the full ambition and the intent of IPA...
ipa-logo-hand.svg
SVG version, if anyone wants to utilize some of that.
Hi all, we are reviewing our use of a HAPI FHIR validator built into our Git pipeline, because keeping it up to date requires the time of a skilled developer, and as a busy team we may not always have this resource. We were considering buying an off-the-shelf cloud-based solution like AWS HealthLake, but we're not sure it would meet our needs; at a quick glance it looks more focused on analytics than validation. Does anyone use any FHIR validators they can recommend that would fit easily within our Git pipeline, require little customization, or have a good user interface, produce human-readable validation reports, and work with our terminology server? The Hammer FHIR validator looks promising, utilizing the best of both worlds by running the .NET and Java validators side by side, but I wasn't sure if it was still experimental. I was also looking for a website that would help me make a decision by independently testing the FHIR validators and listing the advantages and disadvantages of their features. Does such a site exist?
A few things to keep in mind:
Taken together, it means that any tool you run locally is going to have an associated maintenance (and likely configuration) requirement. Sometimes that maintenance will have to happen in very short order.
You don’t say why the hapi validator takes a skilled developer to maintain, nor what terminology server you’re using. Nor whether you’re using the command line or you wrapped it in something
PRs are welcome, but my expectation is that any validator will require skilled maintenance.
Note that the Java FHIR validator is the most thorough validator by a long shot, and I can't recommend the others because they don't pass the test cases. The test cases are unfriendly, so the mere fact they don't pass isn't necessarily a problem, but I do know the others are less thorough
Hi @Grahame Grieve and @Lloyd McKenzie, thanks for replying. It's more that we are a small team of public sector employees, busy working on FHIR interoperability solutions, and updating our existing customized HAPI FHIR validator that is built into our Git pipeline takes a certain amount of effort (in future we were hoping to tie this to our NHS Terminology Server, a customised version of CSIRO Ontoserver technology). That's why we wondered if there was an off-the-shelf solution that would pass the test cases mentioned and make validation easier. It doesn't sound like there is, as the validator would need to be customised to our needs: being valid against the latest or appropriate version of the UK Core, and valid against the SNOMED CT terminology we use.
I was listening to @Vadim Peretokin's excellent presentation on FHIR validation at a past Dev Days event, and I think he says pretty much the same as you: that the Java validator is the most battle-tested validator in the FHIR ecosystem. @Vadim Peretokin, can we use the Hammer FHIR validator in a git pipeline? Currently, we have a FHIR validator that runs at least once a day and will check for changes when pull requests are made.
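For what it's worth, a pipeline step invoking the Java validator CLI can be fairly small. The fragment below is a hypothetical GitHub Actions step: the jar download URL and flags are the standard validator_cli ones, but the package id, terminology server URL, example paths, and FHIR version are assumptions to adapt to your environment, not a tested configuration.

```yaml
# Hypothetical CI step; adjust paths, package id, and server URL to your setup.
- name: Validate FHIR examples
  run: |
    curl -L -o validator_cli.jar \
      https://github.com/hapifhir/org.hl7.fhir.core/releases/latest/download/validator_cli.jar
    java -jar validator_cli.jar examples/*.json \
      -version 4.0.1 \
      -ig fhir.r4.ukcore.stu2 \
      -tx https://ontology.nhs.uk/production1/fhir \
      -output validation-report.json
```

A step like this fails the build on validation errors, and the JSON (or HTML) output can be published as a pipeline artifact for human review.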
Grahame Grieve said:
You don’t say why the hapi validator takes a skilled developer to maintain, nor what terminology server you’re using. Nor whether you’re using the command line or you wrapped it in something
PRs are welcome, but my expectation is that any validator will require skilled maintenance.
Note that the Java FHIR validator is the most thorough validator by a long shot, and I can't recommend the others because they don't pass the test cases. The test cases are unfriendly, so the mere fact they don't pass isn't necessarily a problem, but I do know the others are less thorough
as the validator would need to be customised to our needs, of being valid against the latest or appropriate version of the UK Core, and valid against the SNOMED CT terminology we use.
neither of those things should require a customised validator, unless UK core isn't conformant with FHIR itself, in which case you've got a huge problem irrespective of which validator you use
since hammer is a wrapper around the validator, I don't understand what it gets you in a pipeline like that?
@Grahame Grieve the UK Core is conformant with the FHIR standard, but as an example: where we have constrained Patient.identifier to use an extension, NHSNumberVerificationStatus, we would expect a validator to check that, in an instance example where the NHS Number is present and verified, the extension is present with the code/display value of "01" / "Number present and verified". That is the kind of custom validation I am referring to that goes beyond the capabilities of an "out of the box" FHIR validator: for example, catching that someone had made a mistake in the instance example and used "02" / "NHS Number present and verified" instead of "01" / "Number present and verified".
Regarding Hammer, I only heard about it this week, and I need to do some background reading to understand its full potential. If anyone can point to any documentation or sites that give advice on FHIR validators, feel free to comment.
Grahame Grieve said:
neither of those things should require a customised validator, unless UK core isn't conformant with FHIR itself, in which case you've got a huge problem irrespective of which validator you use
the out of the box validator will validate the proper extension if the profiles you are using declare them and have constraints like you mentioned.
ok sure, a validator isn't going to enforce business logic like that if it's not expressed anywhere, though such logic can usually be expressed using FHIRPath, and then it will be enforced
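A rule like that can ride along in the profile itself as a FHIRPath invariant, which the standard validators then enforce with no customization. The fragment below is purely illustrative: the key, human text, and severity are invented, the extension URL is my best guess at the UK Core one, and a real rule would likely also check the coded value rather than just the extension's presence.

```json
{
  "constraint": [
    {
      "key": "ukcore-pat-hypothetical-1",
      "severity": "error",
      "human": "A patient with an NHS number must carry the NHSNumberVerificationStatus extension on that identifier",
      "expression": "identifier.where(system = 'https://fhir.nhs.uk/Id/nhs-number').exists() implies identifier.where(system = 'https://fhir.nhs.uk/Id/nhs-number').extension('https://fhir.hl7.org.uk/StructureDefinition/Extension-UKCore-NHSNumberVerificationStatus').exists()"
    }
  ]
}
```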
Hi @Jean Duteau, where you mention profiles declaring an extension, is that the same as defining an extension: https://www.hl7.org/fhir/defining-extensions.html ? For example: use a canonical URL that uniquely identifies the extension, specify its context, set its cardinality, publish, and reference the extension's canonical URL in the profile. We do that; it's the business rules, alongside the standard FHIR rules, that we need a validator to work on.
Jean Duteau said:
the out of the box validator will validate the proper extension if the profiles you are using declare them and have constraints like you mentioned.
There are two steps:
@John George would you be using the validator during IG development in the ci-build? Because as far as I can see you have already described the required rules in https://simplifier.net/guide/uk-core-implementation-guide-stu2/Home/ProfilesandExtensions/Profile-UKCore-Patient?version=2.0.1 and could use that package to validate any examples by specifying the needed profile. If you could share your git pipeline requirements, that would be helpful.
The original post mentioned a desire to have a validator that "has a good user interface, produces human readable validation reports" - apart from a few web frontends for $validate, and Simplifier and Hammer, there are not a lot of options. "Human readable validation reports" - I don't think any of the tools support that (on the assumption that one could create human readable reports for deep validation).
John George said:
in future we were hoping to tie this to our NHS Terminology Server (Customised version of CSIRO Ontoserver technology)
Let me know what help you need and I'll make sure you get it (there isn't anything customised about the Ontoserver being used for the NHS Terminology Server so anything you can do with Ontoserver, you'll be able to do with the NHS Terminology Server)
"Human readable validation reports"
what's a human readable validation report? The java validator has various output formats, one of which is html, which everyone is used to looking at in the IG publisher wrapper
But otherwise, what? It seems like not a big lift to add something to the validator for an output of what is desired, and there's already a framework for multiple outputs, so that exists
the other challenge which I'm always sweating on is how to make the messages more comprehensible
I'd estimate that "99% of implementers don't use IG Publisher". Authoring an IG is only done by a very small % of the community, and one can't assume FHIR implementers to be familiar with it.
When it comes to validation, you'd want both a very precise indication where in a resource the error occurs, with a comprehensible error message for someone not terribly well versed in FHIR and strucDefs.
ok, not everyone, that's true
with a comprehensible error message for someone not terribly well versed in FHIR and strucDefs.
I've given up on that. I don't know how to explain stuff. I mean, I try, but the language is rooted in the FHIR definitions, and I don't expect much from non-FHIR developers
Thanks @René Spronk for your input. Yes, we want a validator whose error messages can be understood not only by my team at NHS England but, possibly in future, by implementers of our FHIR specification throughout the NHS, who may not be so well versed in FHIR, so they can easily understand an error message and pinpoint where exactly the problem is. Having worked many years ago in a hospital on pathology messaging, I can relate to this; it would have been useful to have a validator whose messages were easy to understand, rather than escalating to our IT supplier. I want to find out if the HAPI FHIR validator that we use alongside the .NET FHIR validator in Simplifier.net is sufficient, or if there have been recent developments that mean there are better FHIR validation solutions out there.
René Spronk said:
When it comes to validation, you'd want both a very precise indication where in a resource the error occurs, with a comprehensible error message for someone not terribly well versed in FHIR and strucDefs.
suggestions to improve the error messages are always welcome
To help understand validation issues we are supporting a student project at the Bern University of Applied Sciences, which is trying to determine, with the help of an LLM, what the underlying problem is based on the error messages from the Java validator/matchbox and how it could be rectified. The background is that there can be a lot of follow-up warnings/errors with specific FHIR documents due to slicing, and we want to find out whether an LLM can make a direct recommendation on what needs to be corrected.
slicing is particularly difficult, yes
Despite best efforts, messages will never be perfectly easy to understand, by everyone.
As well as making them as simple as is practical, perhaps a link to an online FAQ could be given, which could spell out some common explanations (e.g. what a slice is). Also the FAQ can give a link to Zulip, as a last recourse. We don't want to replace an automated tool with humans, but, otoh, it is always good to get people into the community.
well, Rik, there's only 1152 messages that validator can produce :grinning:
Grahame Grieve said:
suggestions to improve the error messages are always welcome
For grouping of bulk FHIR validation results, as we do in https://git.uni-greifswald.de/CURDM/Bulk-FHIR-Validation/src/branch/main/README.md, IMHO it would help to have (the main part of) error messages additionally available in structured form (maybe as an extension), without the specific code, at the code system level only. During bulk validation you can get many different but roughly identical error messages (one per code of a code system, e.g. ICD codes, across many validated resources), which I sometimes group into a single aggregated error per FHIR element for the whole code system. At the moment I do this by removing the code from the message with a regex, which could fail if the validator's message/output format changes.
that actually sounds like a suggestion to not change the messages!
maybe giving you the message id will make it easier? But why would I remove the code from the message? That doesn't sound like a useful thing to do
Changing messages to improve them / make them easier to understand is very good. Removing the code generally would be very bad (it helps, and I want to see it very often). :)
Just wanted to mention, it would be good to be able to not use it/remove it for some (not all) further analysis.
So if no additional/separate $validate OperationOutcome element for the message without the code is available, it would help to have output patterns that are as stable as possible, so that the code system and code parts of the output string can be detected/separated/extracted.
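To illustrate the kind of grouping being described, here is a small sketch; the message text and the regex are illustrative only - the real validator output format may differ and, as noted above, could change:

```javascript
// Group roughly-identical validation messages by masking the concrete code,
// so one aggregated entry (with a count) remains per message pattern.
// The "code '...'" pattern is an assumption about the message format.
function groupMessages(messages) {
  const groups = new Map();
  for (const msg of messages) {
    // Mask the quoted code, e.g. "Unknown code 'I10' in system '...'"
    const key = msg.replace(/code '[^']+'/, "code '<code>'");
    groups.set(key, (groups.get(key) || 0) + 1);
  }
  return groups; // pattern -> number of occurrences
}
```

A stable, documented message pattern (or a message id in the OperationOutcome) would make this far less fragile than regex scraping.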
Hi there,
We use multiple choice questions in Questionnaires and want to specify whether an answer is right or wrong. Is it possible to use an extension for this on the answerOption element? Or do I capture if an answer is right or wrong outside of the Questionnaire resource? Thank you for your time!
It seems like a reasonable requirement for a new standard extension. Would you be willing to submit a change request?
This sure is an interesting one. Including the "correct" answer in the definition could make "cheating" easier if one had access to the definition; that's something that might be better placed in a derived definition.
@Brian Postlethwaite I think the idea is that the correct answers are separate from the Questionnaire, but someone who has the answers is marking the user's answers right or wrong.
The correct answers could be a reference to a specific QR...
Then security can take care of the privacy part easier, and also works for all question types. Otherwise extension could be any of the answer types. Lots to consider here.
That might actually be a better option - it would work for more than just multiple-choice.
For long text answers, you could even list the key points that should be covered and the marks to be awarded for each.
I also agree that a QR is a better option. The QR will contain points for a specific answer (option of an answer)
I can experiment with the PHQ2PHQ9 questionnaire and implement scoring calculations based on this new approach.
@Nina Haffer
Is it possible to provide a more specific example? How would you like to use this feature?
{
"resourceType": "Questionnaire",
"id": "Questionnaireexample",
"title": "Acromioclavicular joint",
"status": "draft",
"item": [
{
"text": "The articulatio acromioclavicularis (acromioclavicular joint, ACG) is",
"type": "choice",
"linkId": "2131623898461",
"answerOption": [
{
"valueString": "morphologically and functionally a ball and socket joint"
},
{
"valueString": "morphologically and functionally a planar joint"
}
]
}
]
}
@Ilya Beda Hi, here is a minimal example of what I am trying to achieve. The right answer would be "morphologically and functionally a planar joint".
Brian Postlethwaite said:
The correct answers could be a reference to a specific QR...
Thanks @Brian Postlethwaite I like that idea. Will check it out! How would I define the QR instance with the right answer set as "the perfect one"?
For now just create an extension of type reference to the QR in the questionnaire.
And submit a change request to get us to define a 'standard' extension :)
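For illustration, such an extension might look like this on the Questionnaire. The extension URL is hypothetical, pending the change request for a standard one:

```json
{
  "resourceType": "Questionnaire",
  "id": "Questionnaireexample",
  "extension": [
    {
      "url": "https://example.org/StructureDefinition/answer-key",
      "valueReference": {
        "reference": "QuestionnaireResponse/Questionnaireexample-answer-key",
        "display": "Answer key: the 'perfect' response for this questionnaire"
      }
    }
  ]
}
```

The referenced QuestionnaireResponse then simply holds the correct answer for each linkId, and access control on that QR keeps the answer key private.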
While defining this extension we considered some other issues that may arise while doing this type of response evaluation for an "exam" or "academic" test. Anyone got better wording for describing this "knowledge based" type of scoring vs normal scoring?
Also, would we want to have guidance (via some other extension) on other options that are considered incorrect.
So that you could provide feedback to the test taker why their result was right/wrong.
e.g. you selected A, but this was incorrect because blah.
e.g.2 you selected B, this is not correct because abc
e.g.3 you selected C, this is correct for reasons x,yy and Z.
Alternately you might just have a generic response.
e.g. A is incorrect due to blah, B is incorrect due to blah, C is correct for reasons x/y/z
The former style allows providing different text depending on what responses they gave.
Also interested if there is any interest in providing scoring (optionally with weights) for these tests over and above the actual "correct" answer?
Should there be some style for text based questions apart from exact answers?
(something that an AI might be able to check against the response?)
On the SDC call today we also considered some of these thoughts and if that style of information would live alongside the AnswerKey QR, or in a derived questionnaire that adds the extra metadata for test assessment.
And if using fhirpath expression(s) would be appropriate for some of this:
I've created samples to try this out using chatGPT to help draft the content!
https://fhir.forms-lab.com/Questionnaire/trivia-questionnaire
https://fhir.forms-lab.com/QuestionnaireResponse/trivia-response
Here's the conversation if that's of interest to others...
https://github.com/brianpos/fhirpath-lab/blob/develop/fhirpath-ai/sample-conversations/create%20sample%20general%20knowledge%20questionnaire.md
(I used the chatgpt hosted in the fhirpath-lab to be able to iterate more quickly too while testing - shameless plug)
And this other one that shows the annotations for the answers - note that I've used markdown rather than string or annotation as the datatype to permit formatted content - which seemed to make more sense in providing structured detail/highlighting.
https://fhir.forms-lab.com/QuestionnaireResponse/trivia-response-answer-key
Having thought about this some more, I think that adding some complex extensions into the QR would be better than breaking the Q/QR validation set.
something along the lines of:
alternateAnswerValue: 0..1 valueX // a possible answer that could be used (doesn't imply it is correct - may help with partial scoring)
alternateAnswerExpression: 0..1 expression // a FHIRPath expression that can be used to evaluate whether this "answer rule" is the appropriate one to select - an alternative to using the alternateAnswerValue, which is exact
correct: 0..1 boolean // if true, this answer could also be considered a correct alternative
score: 0..1 decimal // if the answer has a specific weighted score
calculatedScore: expression // if the answer has a specific weighted score that needs to be calculated - this can refer to other variables in the questionnaire - though not sure how this could work - would need to be able to embed variables for other scores?
additionalFeedback: markdown // specific feedback on this answer - also equivalent to the `http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-item-answerAssessment` that we've already proposed
Note: the alternateAnswerValue/Expression could be optional if the item.answer is the only applicable one.
@Josh Mandel pointed out that we haven't noted where an actual test assessment result would go, along with their score.
Brian Postlethwaite said:
Josh Mandel pointed out that we haven't noted where an actual test assessment result would go, along with their score.
True, though that was not a part of the original request. I think we would need to hear whether that is needed before defining something.
What is the latest on the story of generating Java data models for profiles?
(I checked fhir-codegen but I see it has this issue)
( blast from the past https://github.com/jkiddo/hapi-fhir-profile-converter )
@Vadim Peretokin I might have a colleague that would like to pitch in some effort
To the MS codegen project
it works, but there are some open issues with it
What are those?
don't remember :-(
@Grahame Grieve and this class here https://github.com/hapifhir/org.hl7.fhir.core/blob/master/org.hl7.fhir.r5/src/test/java/org/hl7/fhir/r5/profiles/PETests.java illustrates how it can be used, correct? It isn't wrapped in any executable or something like that already, right?
that tests out the underlying engine.
I don't think it tests out the generated code itself
This is a great start!
I've played around with the code generation and found the following issues, sorted in priority:
Would you like me to file them so we can keep track? Both me and @Jens Villadsen agree this is something worth developing further, perhaps we can get some community traction on this :)
This is gonna be a fun ride!
ca.uhn.fhir.model.api.annotation.* are used in the generated results.
6 missed a 'not'. But you explained it in 8, so nvm
Hello - when is it appropriate to use contained resources vs. creating a full resource? Is there any guidance on when to use contained resources. I see this in the description comments for the 'contained' data element on each resource: "This should never be done when the content can be identified properly, as once identification is lost, it is extremely difficult (and context dependent) to restore it again"
Can anyone share example scenarios of when they used contained resources vs. creating a full resource?
Thanks!
a typical scenario for using a contained resource is when converting from v2 or CDA to FHIR, and the only information you have for the Practitioner details is 'Mr John Smith' - you don't know which john smith. This is unidentified, and therefore you have to use contained resources
if you don't have to, you shouldn't
when you don't have sufficient detail to fill out that resource usefully, or doing so would cause noise. In my case I am decomposing a lab result: I have the various lab measurement types and values, but I don't have any business identifier to keep me from generating a lot of duplicate Observation data. Thus it is saved as a contained Observation within the DiagnosticReport, as I do have a business identifier for the lab report.
I don't wish to confuse things, but if all you know is the single data point "Mr John Smith" (as a string) isn't that suitable for putting in reference.display, and not need a contained? I think that is a legitimate use. If you know another data point (e.g. phone), then that would require contained.
I always use the maxim of "does the resource have a lifecycle of its own? then don't put it as a contained resource."
Many implementers and IG authors use contained as a way to get around the FHIR reference mechanism. If you manage a resource's state and have an id for it, it should not be contained.
One thing I've heard on here is using a contained Questionnaire in a QuestionnaireResponse resource to indicate the questions as they were when answered.
The QuestionnaireResponse.item.text already does that. The only use case I’m aware of for contained Questionnaires is for adaptive forms where the Questionnaire is built on the fly for that specific response
There are some resources - like Location - that require only 'name', which could be anything. Seems like this would definitely be a candidate for a contained resource. I have not seen these in the wild yet, so not sure what will be populated in them. But I question how useful they are as stand-alone resources.
Similar discussion with some other use cases here: #IPS > Contained vs Referenced Resources in a document Bundle
Do folks have suggestions on how to communicate the number of entries in a Bundle of type collection? For some reason Bundle.total is not allowed for type=collection, and since Bundle is based on Resource instead of DomainResource, top-level extensions are not allowed.
Why do you need a field to hold the number of resources in a bundle, when you can do a count(entries)?
Software can easily count. Having a total is very useful for humans.
But let me flip it around on you: why must we prevent the use of Bundle.total, given that it exists in the resource?
FHIR-48485 for the future. But still interested in suggestions for R4 era.
I see the usage of total in a search. It provides a data point not otherwise discoverable via the data in the bundle. (total number of matches of the query.)
I don't know why it was prohibited in non-search bundles. Maybe because the definition of the value becomes fluid? Would you say the total in a collection is just the primary (clinical) resources or primary + directory (practitioner) resources? In a composition does the composition resource itself participate in the total?
I'm not so sure that, outside of software developers and FHIR technologists, there are many humans who read raw FHIR.
Bundle.total doesn't say "how many are in the Bundle", it says "how many results are in the search set". If software has the Bundle, it can count how many entries are in the Bundle itself.
When you have a collection Bundle, you always have all of the entries. There's no mechanism to return a 'part' of a collection.
I'm with you @Cooper Thompson .
We have other things meant to help humans like Resource narratives - which actually make it harder for this human to read FHIR, in addition to throwing me constant unnecessary validator warnings ;)
Bundle.total seems potentially useful in contexts outside of search - I fail to see the need for a constraint preventing its usage.
As an aside, in my experience most servers only fill in Bundle.total for searches where the bundle contains the complete result anyway. It's expensive for most implementations to calculate the total count for searches that require paging, and most that I've worked with don't do so.
Bundle.total does not indicate how many records are in the Bundle. We don't want the meaning of it to change for different types of Bundles. Also, this is a normative resource.
It's a bit unfortunate, though.
We can't use it to know the total search-set size when they don't fit inside the bundle, because most servers don't calculate it.
We can't use it to tell you how many resources are in the actual Bundle.
It's kind of rendered effectively useless.
But it is, as you say, normative - so not much to be done I suppose.
.. hindsight and all
It should never be necessary to say how many resources are in the Bundle - you've got the Bundle and you can count. Letting you send a count separately is just an opportunity for inconsistency.
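The counting itself is trivial for a consumer. As a sketch in JavaScript (Bundle.entry is optional in the standard Bundle structure, hence the default):

```javascript
// Count the entries actually present in a Bundle, rather than relying on
// Bundle.total (which is only defined for search/history Bundles).
function entryCount(bundle) {
  return (bundle.entry || []).length; // entry may be absent, so default to []
}
```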
In our project, we want to use this expression in our questionnaires but our intended implementation is slightly different. Instead of enabling/disabling the options as per the documentation, it would be preferable to show/hide the options instead. We feel that if a user shouldn't select an option (e.g. pregnancy options for a male patient) then they shouldn't have to see it. If we can't use this expression, what's the best way to go about this?
You could use answerExpression to change the list options based on an expression.
The spec isn't clear whether "disabled" means hidden or merely greyed out. It might be cleanest to have an extension that indicates which of those two behaviors you want for disabled items and answers.
I thought we already had an extension for this, but I can't see it.
So yes I'd support that.
I thought we did too but couldn't find it either...
I thought I saw that a week or two ago (but only for items), but I'm having trouble finding it now.
Thank you for your quick responses. I'm not sure I'm aware of this extension either but if a new extension is needed, what is the process to get that done?
Log a ticket in jira, there's a link at the bottom of each page in the spec.
SDC is the guide to log it against.
I think I was thinking of Questionnaire.item.disabledDisplay (new in R5), but that doesn't help with answerOptions.
Ah. I couldn't find it because I thought it was an extension. Yes, that's it. We could just update the various 'enable' extensions (enableOneExpression, enableOption) to say that they're governed by that same element. I don't know that there'd be a need to have behavior that's different on the appearance of the element vs. appearance of answer choices within the element?
have we added the extension about whether to hide/show or enable/disable the options?
I’m having an issue with the FHIR JS client when interpreting decimals: it omits trailing zeros. However the clinicians want to see values with trailing zeros in the UI.
The issue happens with the valueQuantity type in Observation. Any trailing zeros returned by the FHIR API are removed by the JavaScript front-end when displaying
"valueQuantity": {
"value": 0.10,
"comparator": "<",
"unit": "IU/mL"
},
Will be displayed as 0.1
Just wondering anyone has a workaround for this.
have you looked at the note in the json page about javascript?
And assumedly using the Precision Extension?
Thanks @Grahame Grieve @John Silva , while I don't have the ability to change the backend API to add the extension, is there a way the fhir.js client can implicitly handle this scenario, or any example of using the https://github.com/jtobey/javascript-bignum library in this case, as noted in the page?
If it's just the clinicians wanting to see a value presented with a certain number of decimals, the front-end can just use value.toFixed(2), right?
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/toFixed
It's not about always showing 2 positions.
.1 means .1 +/- .05
.10 means .10 +/- .005
.100 means .100 +/- .0005
yes .1 == .10 == .100 but the "how accurate" the measurement is, is different.
if the original value was .100000000 then .1 +/- .05 is a lot different than .100000000 +/- .0000000005
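One workaround along the lines Daniel describes is to capture the number of decimal places before JavaScript collapses the number - either from the raw JSON text or from the precision extension, if the server supplies it. A sketch, assuming you have access to the value as it appeared on the wire:

```javascript
// Render a decimal with the precision it was transmitted with.
// rawText is the value as it appeared in the JSON (e.g. "0.10"); a plain
// JSON.parse would collapse it to the number 0.1 and lose the precision.
function renderWithPrecision(rawText) {
  const decimals = (rawText.split('.')[1] || '').length; // digits after the point
  return Number(rawText).toFixed(decimals);
}
```

Getting at the raw text may require intercepting the response body before parsing (or using a big-number JSON parser), which is the harder part in practice.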
@Daniel Venton is correct, the precision is not always going to be 2. However I ended up suggesting the following to the team.
https://zoom.us/j/94516094652?pwd=Sk01NzBiMDdSTjRoSVFYemFYWlFiUT09
Meeting ID: 945 1609 4652
Passcode: 215929
Agenda:
C'thon recap
Milestone #1 Review sheet: https://docs.google.com/spreadsheets/d/1Jg2ypM6QNUfyMTgnkQ_jvI0x5lvKxb03D2CHfMAxxDk/edit?usp=sharing
Dev Days: who's coming? what needs prep?
Agenda for today's meeting:
I had a chance to review the FHIR Community Process Requirements v1 document which looks like the most current official source and agree with @John Grimes 🐙 from our last call that the requirements would not be difficult for us to meet.
The main non-tactical question I have is around the concept of "FCP Participant".
The reqs state that any entity, including an "individual", can become a participant (FCP101), and also that "any registration information e.g. business/company registration details" (FCP102) shall be provided.
Since we are currently organized as a loose group of volunteers (some of whom work for companies with commercial interests) what should be our form of organization?
Is it recommended or required our group "register" in some sense?
Let's discuss on the call today. Thx!
cc: @Josh Mandel @Grahame Grieve
Zoom link:
https://zoom.us/j/94516094652?pwd=Sk01NzBiMDdSTjRoSVFYemFYWlFiUT09
Meeting ID: 945 1609 4652
Passcode: 215929
Sorry guys, I will skip today's meeting
Possible agenda for today's meeting:
We will also have @Kiran Ayyagari dropping in to tell us about Safhire.
Zoom link:
https://zoom.us/j/94516094652?pwd=Sk01NzBiMDdSTjRoSVFYemFYWlFiUT09
Meeting ID: 945 1609 4652
Passcode: 215929
During a review I noted a couple of typos in the casing of resource types.
https://github.com/FHIR/sql-on-fhir-v2/pull/262
When I have the following questionnaire
{
"item": [{
"answerOption": [
{
"valueString": "a"
},
{
"valueString": "b"
},
{
"valueString": "c"
}
],
"extension" : [
{
"url": "http://hl7.org/fhir/StructureDefinition/questionnaire-maxOccurs",
"valueInteger": 2
}
],
"id": "checklistquestion",
"linkId": "checklistquestion",
"repeats": true,
"text": "Checklist question",
"type": "string"
}],
"name": "Questionnaire",
"resourceType": "Questionnaire",
"status": "active",
"title": "Questionnaire",
"version": "1.0"
}
And run the FHIR validator
java -jar validator_cli.jar -output-style compact maxoccurs.json
I get the following error
[2, 6] Questionnaire.item[0]: Error - The extension http://hl7.org/fhir/StructureDefinition/questionnaire-maxOccurs is not allowed to be used at this point (based on context invariant 'type!='display' and (required=true or (required.value.empty() and required.extension.exists()))')
This can be resolved by setting required to true, however I want to make this question optional and restrict the max number of answers to 2.
How can I fix this?
That invariant is on that extension, I'd say there is something else going on.
It validates fine in my system that checks SDC.
https://fhirpath-lab.com/Questionnaire/tester
Paste in your json and click the validate button. (document icon with tick on it in the corner)
@Brian Postlethwaite thank you for your reply.
Which library/config do you use for validation inside fhirpath-lab.com?
I used the downloaded validator from https://confluence.hl7.org/display/FHIR/Using+the+FHIR+Validator#UsingtheFHIRValidator-Downloadingthevalidator and with https://validator.fhir.org/ I also get the error
image.png
My own validator.
I also manually checked that extension and couldn't see that rule in it.
I'll recheck that again.
Wow, I must have missed that when I looked the other day. It's only visible in the json/xml and not on the other pages.
Can you log that and we'll get it fixed.
Thank you Brian. I created a ticket: https://jira.hl7.org/browse/FHIR-48468
In my workplace, we are facing disagreements on how to represent laboratory test results. One view suggests it is more coherent to use only Observations, and for complex tests (panels), group these singular Observations into a larger Observation. The other perspective, observed in several well-known implementation guides like US Core and HL7 Europe Laboratory, always uses DiagnosticReport to represent the results, referencing Observations for the laboratory data. What would be the best interpretation of the data in this domain?
This is documented here: https://build.fhir.org/observation.html#obsgrouping - but the short answer is, follow the implementation guides and use DiagnosticReport to represent the lab report, Observations for the individual results.
Craig McClendon said:
This is documented here: https://build.fhir.org/observation.html#obsgrouping - but the short answer is, follow the implementation guides and use DiagnosticReport to represent the lab report, Observations for the individual results.
Will an Observation (laboratory report) always be referenced by a DiagnosticReport?
Always is a very strong word. I'll go out on a limb and say no, not always, under every condition, from every lab vendor, passed through who knows how many translation layers before you retrieved the observation resource.
The idea of the DiagnosticReport resource is to represent and convey a laboratory or other diagnostic report, with the report results themselves represented by one or more Observation resources (which can be "nested" using hasMember, when that is needed). It's not quite as simple as that sounds, though, and there are different approaches and ideas about how best to handle some of the use cases - particularly when we are dealing with observation panels of various types. Some of the things to consider, particularly for the "complex" results: what code(s) are appropriate at the DiagnosticReport vs. the Observation resource(s) level, can/should you have nested DiagnosticReport resources (currently the answer is no), and is it possible/sensible to include multiple DiagnosticReport resources in a single report "bundle"? Those are just some of the prime examples. There are a number of questions that we are working through in the OO WG, particularly in the context of the universal FHIR Laboratory Report IG ballot reconciliation.
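For reference, the basic shape being described is a DiagnosticReport whose result elements reference the individual Observations. A minimal sketch, with illustrative codes and reference ids:

```json
{
  "resourceType": "DiagnosticReport",
  "status": "final",
  "code": { "text": "Full blood count" },
  "result": [
    { "reference": "Observation/haemoglobin" },
    { "reference": "Observation/white-cell-count" }
  ]
}
```

Each referenced Observation carries one result value, and a panel-within-a-panel can be modelled with Observation.hasMember rather than nesting DiagnosticReports.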
Thank you for all the answers, they were very helpful for a better understanding of the context.
Would it be possible to have item.readOnly also be calculable using a FHIRPath expression?
A field can become readOnly if certain fields are answered / not answered, or if a condition from a FHIRPath calculation (which can have several variable extensions interacting with each other) is met (https://jira.hl7.org/browse/FHIR-48466).
@Benjamin Mwalimu @Jing Tang
Yes. readOnly affects what a user can do. It doesn't affect what the form filler software can do. (If you think that needs clarification, feel free to submit a change request.) There's already a standard extension (cqf-expression) you can use on readOnly to make it dynamic - look at the SDC profile here
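As a sketch of what Lloyd describes, the cqf-expression extension goes on the readOnly primitive (via _readOnly in JSON); the linkIds and the expression here are illustrative:

```json
{
  "linkId": "allergy-details",
  "type": "string",
  "readOnly": false,
  "_readOnly": {
    "extension": [
      {
        "url": "http://hl7.org/fhir/StructureDefinition/cqf-expression",
        "valueExpression": {
          "language": "text/fhirpath",
          "expression": "%resource.item.where(linkId = 'has-allergies').answer.valueBoolean = false"
        }
      }
    ]
  }
}
```

The static readOnly value acts as the fallback; a renderer that supports the extension re-evaluates the expression as answers change.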
@Aurangzaib Umer Check this out
The renderer needs to support it too.
@Lloyd McKenzie You're right, we'll use cqf-expression then, thank you!
Screen Shot 2024-10-02 at 05.27.16.png
The joint IPA-IPS breakout is going on in Dogwood B. We will have a recorded Zoom session: https://us06web.zoom.us/j/83516290904?pwd=xuEb5qxwJVxUZ3LxD3fKMOpYvYIEvu.1
The recording from the joint IPA-IPS session is available here: https://us06web.zoom.us/rec/share/zjRXBPX5XiUs1L4qvGahiOQwrRHfjYZK7M30CMFehWLRxtwZ0m63RGg1SreNi31X.uYt5uVleV3VkoLj6
I tried watching the recording and was able to see it, but can't seem to get the audio to play. Is it just me?
Mikael Rinnetmäki said:
I tried watching the recording and was able to see it, but can't seem to get the audio to play. Is it just me?
You are not alone :smile: https://chat.fhir.org/#narrow/stream/207835-IPS/topic/FHIR.20Connectathon.2037.20-.20Running.20topics/near/471999456
I apologize. My zoom was having issues (I figured out that audio was dual-connected to the computer and bluetooth, which was causing a conflict). I haven't taken down the link since you can read some of the subtitles, but I agree it's very hard to follow.
OK, thanks!
I would like to create a datatype profile on the CodeableConcept datatype that includes a binding on the CodeableConcept level itself, not on .coding. This would prevent me from having to add the binding in the profile that will use this CodeableConcept profile.
Is this (or should this be) possible?
I think so
I can't add the binding within Forge on the root of the CodeableConcept , also don't seem to be able to do this with FSH.
You can do it in FSH, but you need to drop into the caret syntax to do it:
Profile: MyBoundCodeableConcept
Parent: CodeableConcept
* . ^binding.strength = #required
* . ^binding.valueSet = Canonical(MyValueSet)
Ideally, it would be nice if you could say * . from MyValueSet inside the CodeableConcept profile, but it seems SUSHI does not like that, probably because the root element does not have a type.
@Chris Moesel thanks. That seems to work.
@Ward Weistra we should perhaps deep dive into why this is not supported by Forge.
Setup:
given the Questionnaire resource stored as shown in the appendix at the bottom, I send a POST request to http://localhost:4004/fhir/Questionnaire/11
with the following payload and get the error message shown after the payload. Thanks in advance for the help!
{
"resourceType": "Parameters",
"id": "example",
"parameter": [
{
"name": "subject",
"valueString": "07e2c163-71f6-46f1-99d5-d43c1a002cf2"
},
{
"name": "local",
"valueBoolean": true
},
{
"name": "context",
"part": [
{
"name": "name",
"valueString": "patient"
},
{
"name": "content",
"valueReference": {
"reference": "Patient/07e2c163-71f6-46f1-99d5-d43c1a002cf2"
}
}
]
}
]
}
=====
{"issue": [
{
"severity": "error",
"code": "exception",
"diagnostics": "Error encountered evaluating expression (%patient.id) for item (patient.id): library expression loaded, but had errors: Could not resolve identifier %patient in the current library., Member id not found for type null."
},
{
"severity": "error",
"code": "exception",
"diagnostics": "Error encountered evaluating expression (%patient.birthDate) for item (patient.birthDate): library expression loaded, but had errors: Could not resolve identifier %patient in the current library., Member birthDate not found for type null."
}
]}
appendix:
{
"resourceType": "Questionnaire",
"id": "11",
"meta": {
"versionId": "1",
"lastUpdated": "2024-10-03T19:31:08.959+00:00",
"source": "#Gqo4bXgfgBTXHlxJ",
"profile": [
"http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-extr-defn"
]
},
"extension": [
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-launchContext",
"extension": [
{
"url": "name",
"valueCoding": {
"system": "http://hl7.org/fhir/uv/sdc/CodeSystem/launchContext",
"code": "patient"
}
},
{
"url": "type",
"valueCode": "Patient"
}
]
},
{
"url": "http://hl7.org/fhir/StructureDefinition/structuredefinition-wg",
"valueCode": "fhir"
}
],
"url": "http://hl7.org/fhir/uv/sdc/Questionnaire/demographics",
"version": "3.0.0",
"name": "DemographicExample",
"title": "Questionnaire - Demographics Example",
"status": "draft",
"experimental": true,
"subjectType": [
"Patient"
],
"date": "2023-12-07T23:07:45+00:00",
"publisher": "HL7 International / FHIR Infrastructure",
"contact": [
{
"name": "HL7 International / FHIR Infrastructure",
"telecom": [
{
"system": "url",
"value": "http://www.hl7.org/Special/committees/fiwg"
}
]
},
{
"telecom": [
{
"system": "url",
"value": "http://www.hl7.org/Special/committees/fiwg"
}
]
}
],
"description": "A sample questionnaire using context-based population and extraction",
"jurisdiction": [
{
"coding": [
{
"system": "http://unstats.un.org/unsd/methods/m49/m49.htm",
"code": "001",
"display": "World"
}
]
}
],
"item": [
{
"extension": [
{
"url": "http://hl7.org/fhir/StructureDefinition/questionnaire-hidden",
"valueBoolean": true
},
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-initialExpression",
"valueExpression": {
"language": "text/fhirpath",
"expression": "%patient.id"
}
}
],
"linkId": "patient.id",
"definition": "Patient.id",
"text": "(internal use)",
"type": "string",
"readOnly": true
},
{
"extension": [
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-initialExpression",
"valueExpression": {
"language": "text/fhirpath",
"expression": "%patient.birthDate"
}
}
],
"linkId": "patient.birthDate",
"definition": "Patient.birthDate",
"text": "Date of birth",
"type": "date",
"required": true
},
{
"extension": [
{
"url": "http://hl7.org/fhir/uv/sdc/StructureDefinition/sdc-questionnaire-initialExpression",
"valueExpression": {
"language": "text/fhirpath",
"expression": "today()"
}
}
],
"linkId": "today",
"definition": "today",
"text": "Date of today",
"type": "date",
"required": true
}
]
}
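For reproducibility, the request described above can be sketched in a short, self-contained Python helper. The payload structure and patient id are copied from the message; the target URL is whatever endpoint the server expects (this helper is purely illustrative, and `urllib` stands in for any HTTP client):

```python
import json
import urllib.request


def build_parameters(patient_id: str) -> dict:
    """Build the Parameters payload shown above for the given patient id."""
    return {
        "resourceType": "Parameters",
        "parameter": [
            {"name": "subject", "valueString": patient_id},
            {"name": "local", "valueBoolean": True},
            {
                "name": "context",
                "part": [
                    {"name": "name", "valueString": "patient"},
                    {
                        "name": "content",
                        "valueReference": {
                            "reference": f"Patient/{patient_id}"
                        },
                    },
                ],
            },
        ],
    }


def post_fhir(url: str, resource: dict) -> bytes:
    """POST a FHIR resource as JSON and return the raw response body."""
    req = urllib.request.Request(
        url,
        data=json.dumps(resource).encode("utf-8"),
        headers={"Content-Type": "application/fhir+json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


# Example (requires a running server):
# post_fhir("http://localhost:4004/fhir/Questionnaire/11",
#           build_parameters("07e2c163-71f6-46f1-99d5-d43c1a002cf2"))
```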
This may be a bug that @Brenin Rhodes fixed at the connectathon.
Yes, this will be fixed in the coming release of CR.
@Brenin Rhodes Does CR mean "Clinical Reasoning module"? Which version of HAPI will the fix be in? Thanks!
Yes: https://github.com/cqframework/clinical-reasoning
We're hoping to get it in the Nov release of HAPI.
I set the domain, and now http://sql-on-fhir.org/ points at GitHub Pages.
Let's decide how we want to publish releases. Do we want to have the latest release at http://sql-on-fhir.org/?
If we stay backward compatible, do we need different published versions, or can we live with only the current one?
@John Grimes 🐙 @Arjun Sanyal @Ryan Brush
Potentially we can use paths like http://sql-on-fhir.org/2.0.0, as the official IG publisher does. It will require some GH Pages engineering to get all versions on the same site (probably an intermediate bucket).
https is not working for me; maybe it's not enforced in the GH Pages settings?
I'm exploring with this now. My approach is the same as @Elliot Silver had, a batch bundle of transaction bundles. More specifically, I'm sending a big batch of observations from a measurement device, and would like to include provenance resources with them. Each transaction bundle would include an Observation and the related Provenance, and I'd like to send many of these in a batch bundle.
As mentioned in this thread, I also wondered whether the url for the transaction bundles in the batch bundle should be "" or "/". It would be nice to get this specified.
I've tried with a few available FHIR servers and haven't seen this work.
A simple example with just one transaction bundle within a batch bundle:
{
"resourceType": "Bundle",
"type": "batch",
"entry": [
{
"request": {
"method": "POST",
"url": ""
},
"fullUrl": "urn:uuid:e04a96eb-5c06-4a44-9a41-5defff50ac20",
"resource": {
"resourceType": "Bundle",
"type": "transaction",
"entry": [
{
"request": {
"ifNoneExist": "identifier=bundle-test-observation",
"method": "POST",
"url": "Observation"
},
"fullUrl": "urn:uuid:eb09e61a-e0c9-41b7-a412-d36daa873665",
"resource": {
"code": {
"coding": [
{
"code": "2344-0",
"display": "Glucose [Mass/volume] in Body fluid",
"system": "http://loinc.org"
}
],
"text": "Interstitial glucose"
},
"identifier": [
{
"assigner": {
"display": "Sensotrend Oy",
"identifier": {
"system": "urn:ietf:rfc:3986",
"value": "https://www.sensotrend.com/"
}
},
"use": "official",
"value": "bundle-test-observation"
}
],
"resourceType": "Observation",
"status": "final"
}
},
{
"request": {
"method": "POST",
"url": "Provenance"
},
"fullUrl": "urn:uuid:9aa87028-6b8f-421f-9524-2e0ffac8f002",
"resource": {
"agent": [
{
"type": {
"coding": [
{
"code": "assembler",
"display": "Assembler",
"system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type"
}
],
"text": "Assembler"
},
"who": {
"display": "My FHIR App"
}
}
],
"recorded": "2024-09-25T23:00:44.044+03:00",
"resourceType": "Provenance",
"target": [
{
"type": "Observation",
"reference": "urn:uuid:eb09e61a-e0c9-41b7-a412-d36daa873665"
}
]
}
}
]
}
}
]
}
@Elliot Silver, interested in learning where you ended up.
Mikael Rinnetmäki said:
More specifically, I'm sending a big batch of observations from a measurement device, and would like to include provenance resources with them. Each transaction bundle would include an Observation and the related Provenance, and I'd like to send many of these in a batch bundle.
This was essentially our case too.
Mikael Rinnetmäki said:
Elliot Silver, interested in learning where you ended up.
We headed in a different direction before proving or disproving this was possible with the server we were dealing with. Sorry.
My inclination is that the POST url for the transactions should be either "." or "/"; using "" doesn't make sense to me.
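To make the shape concrete, here is a hypothetical Python helper that assembles a batch-of-transactions bundle like the one above. The `request.url` for the inner transaction entries defaults to "." purely as a placeholder, since the thread hasn't settled whether it should be "", "." or "/":

```python
import uuid


def make_transaction(entries: list) -> dict:
    """Wrap resource entries in a transaction Bundle."""
    return {"resourceType": "Bundle", "type": "transaction", "entry": entries}


def make_batch_of_transactions(transactions: list, request_url: str = ".") -> dict:
    """Wrap transaction Bundles in a batch Bundle.

    request_url is the open question from the thread: "" vs "." vs "/".
    """
    return {
        "resourceType": "Bundle",
        "type": "batch",
        "entry": [
            {
                # fullUrl is a fresh urn:uuid per entry
                "fullUrl": f"urn:uuid:{uuid.uuid4()}",
                "request": {"method": "POST", "url": request_url},
                "resource": tx,
            }
            for tx in transactions
        ],
    }
```

Each Observation-plus-Provenance pair from the message above would become one `make_transaction(...)` result passed into `make_batch_of_transactions`.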
Hello,
I need to use the Communication.payload.content[x]:contentCodeableConcept element, which is part of the pre-adopted R5 specification, in our R4 implementation. According to http://hl7.org/fhir/R5/versions.html#extensions I'd need to add the package hl7.fhir.extensions.r4:4.0.1, but I'm not able to find it (the link to this package on the aforementioned page does not resolve).
Is there an alternative package or solution that would allow me to handle this scenario?
Thanks!
not right now - work in progress
@Grahame Grieve For now we decided to create a custom extension, mimicking the extension url as much as possible (we use nictiz.nl instead of hl7.org in the url), so that it can be replaced easily by the core extension as soon as the package becomes available. Since the element path contains brackets, is it correct to assume that the url of the core extension will be http://hl7.org/fhir/5.0/StructureDefinition/extension-Communication.payload.content%5Bx%5D:contentCodeableConcept (i.e. with the brackets URL-escaped)?
Moreover, we would like to mimic the id as much as possible. How will the corresponding id be constructed, given that characters such as [, ], % and : are not allowed in an id? Currently we have omitted the content[x]: part altogether and use extension-Communication.payload.contentCodeableConcept as the id.
Thanks in advance!
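The final form of the cross-version extension URL is not confirmed yet (per the "work in progress" reply above), but if the brackets are simply percent-encoded per RFC 3986, standard URL escaping of the element path produces exactly the URL guessed in the question:

```python
from urllib.parse import quote

path = "Communication.payload.content[x]:contentCodeableConcept"
# Keep "." and ":" literal; "[" and "]" fall outside the unreserved set
# and get percent-encoded.
escaped = quote(path, safe=".:")
print(escaped)
# → Communication.payload.content%5Bx%5D:contentCodeableConcept
```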
@Gino Canessa