Yeah, they seem to have been quite sloppy with these vignettes.
Though note that in the situation of the mislabeled real case, the formally correct solution could be a retraction of the entire highlight article, since having a real case study is against the (poorly implemented) policy.
Don't know how patient consent for being used in a case study works, did this author get a perpetual license, did they just copy something from another article they wrote, or from an article someone else wrote?
It looks like it has a short intro paragraph that talks about a specific case with no identifying details (beyond "a previously healthy 4-month-old boy"), citing this report by other doctors: https://pubmed.ncbi.nlm.nih.gov/27503268/ followed by further discussions of physician reports and survey data.
It looks like they labelled all of them fiction based on a single instance of one of the authors fabricating their case, a gross overcorrection. I wonder if they flinched at the prospect of actually assessing the validity of all of them and decided it was safer to just disclaim them.
Sure, if you emphasize selectively you can make it sound like it says that. Here are some other quotes from the article that clearly refute your interpretation:
> The journal decided when it first started publishing the article type “that the cases should be fictional to protect patient confidentiality,”
> While the instructions for authors for Paediatrics & Child Health has at times indicated the case reports are fictional, that disclosure has never appeared on the journal articles themselves.
> “The editor acknowledged that the editorial team is at fault for overlooking the fact that our case was real during the review process,”
It's pretty clear that the journal always thought of these as fictional vignettes, and either didn't realize or didn't care that this had not been made sufficiently clear to the readers. The New Yorker article clued them into the fact that it was a problem, so they added the correction to all of their case studies to clarify that they were intended to be fictional. In (at least) one case, the author also didn't realize they should be fictional, and submitted a real case study which has now been incorrectly corrected.
> While the instructions for authors for Paediatrics & Child Health has at times indicated the case reports are fictional, that disclosure has never appeared on the journal articles themselves.
Sounds like they were asking authors for fiction, so probably plenty of them are.
They asked the authors for fiction “at times”. Meaning that some are fiction, and some very well might not be. The best they can do is try to contact the authors and see if the case report they wrote is fictional or not. The second best is to admit that they made a mess and say “the case reports might or might not be fictional, we have no way of knowing”.
Speaking as the spouse of a medical doctor -- case reports are sometimes a good way to increase the bullet point count in your CV if you are a medical resident. A lot of residents do that just for the sake of beefing up their CVs (to apply for fellowship, for example).
I don't see anything wrong with that by itself; with the amount of patients doctors see there should be one once in a while that is worth reporting. Or are such cases so rare that the doctor is incentivized to lie?
I think you may have missed the original commenter's point. Residents (and medical students) are highly incentivized to publish unrealistic numbers of papers and case reports. One case report doesn't cut it—you need literally dozens of publications to match into some of the most competitive residency and fellowship programs. The NRMP (match organizer) publishes a document every 2 years that summarizes all of these stats. The 2024 version is in the link below, and page 12 supports what I'm saying.
This is another example of Goodhart's law in action, right?
Weirdly, Pediatrics (chart 7) skews the other way (fewer publications tended to correlate with getting into residency programs)? Are those doctors/administrators/programs somehow seeing through the nonsense?
In vet med, case studies are still pretty important, but that's because vet med is in its infancy compared to human medicine. At least one case study, usually two, are required to be eligible to take boards. Future board renewals, I think for most boards, are "published one original piece of research or two case studies" among a slew of other requirements.
> The articles usually start with a case description followed by “learning points” that include statistics, clinical observations and data from CPSP.
I can see how fictional cases could be used here as a teaching aid -- based on real cases/illnesses but simplified to make the learning points succinct -- but surely if the cases are being cited elsewhere someone should have raised the issue earlier?
Since it was for teaching I expect the case studies were always showing typical features of real cases, so there's nothing in the case vignette itself to give it away unless the author picks a funny name or something like that.
Rather it would be the entire form of these short highlight articles that would make you keep searching for a proper citation, unless you're lazy or pressed for time.
Wouldn't citing actual cases be a HIPAA violation? I can see why they would invent example cases, based on real ones, especially if they are fairly pedestrian cases.
I mean. Except if your pedestrian example does not reflect reality, then that is bad.
It's a privacy violation to reveal information that identifies the patient. It is not a violation (and is extremely common) to recount details without noting names, places, or even dates. Unless you already have access to a database of records you won't be able to track it down.
It's even common during talks to display diagnostic images that have had any identifying marks redacted.
I think this is mainly a case of the common "didn't notice when crucial literature for own published content was retracted, get caught with pants down when the replication police come knocking".
Obviously the poor labelling is bad, but 9 bad citations per year isn't the end of science, and better labelling wouldn't discourage all the lazy authors who chose to cite these highlight articles; it'll just shift who is to blame.
The real problem is hosting a review article about research that was retracted, and it sounds like they aren't moving very quickly on taking that piece down.
This is fine, though somewhat belated. But it does nothing to deal with the public's growing distrust of science in general, and medical science in particular.
The "growing distrust" is due to a concerted disinformation campaign which is independent of the facts.
There was indeed much negative information that the public was not aware of, and they should perhaps have held more skepticism than they did. But the gleeful acceptance of outright anti-science lies implies that they were never really in a position to make a sound judgment one way or the other.
In those circumstances I'll settle for people reaching the correct action: that practically all accepted medicine is correct and they should follow their doctor's advice. If they choose to over-inflate the importance of things that do indeed go wrong, then they are the ones failing to reach valid conclusions.
Like I said: every word out of your mouth there is a lie. Yes, I know the links you're about to hand me, to right-wing disinformation sites and actual news articles that don't say what you're pretending they say.
These are straight out falsehoods, collected for you deliberately, which you are repeating because you didn't even pretend to examine them critically. There is no way to discuss the actual mistakes made during the pandemic when it takes me ten times as long to refute the lies you're spreading.
You're very quick to throw around unsubstantiated accusations of spreading misinformation while providing nothing of substance to back it up. Pounding on the keys forcefully doesn't carry an argument. Come back when your temper tantrum subsides.
Serious question: Why do doctors change their practice so much based on one case study? Surely, even if there isn't any malice, a doctor can make a mistake?
I wondered this too after reading the original New Yorker article a few weeks back and was quite surprised.
However, the article also made me think that once a practice is adopted it's hard for it to change even if the evidence supports changing. (Which is how I expected it to be from the outside.)
I figured there was some context that I was missing as to why some things are quicker to adopt and others less so. Maybe because adopting this change was seen to be “saving” lives by being more cautious about the how medicines and feeding interact - and reverting the change is “risky” in case there is truth to it.
Case studies are used in medical decision making only when there is no better form of evidence available, or when there is a gap in current evidence. They are not the first place to look.
Too late, it's already in the bloodstream, LLMs will be recommending things to pediatric doctors and families from fabricated archives for years, probably.
Obviously just sending it via email to the reviewers works just fine in practice anyway, the problem is really that they published a summary piece about research that was later retracted, but didn't take down their own article.
Would it be easier, though? Medical records (in the US) are covered by HIPAA and, to my knowledge, there is no anonymized canonical record, similar to what we have for legal decision. Without that, how difficult would it be to just "make shit up"?
Out of context that makes sense... but in the context of a case report, how do you implement that? The patients have privacy rights and the authors/doctors have a responsibility to protect them. That doesn't justify this, but it does force a conversation about what 'every single data point' means. Does it mean the patient's real name and social security number? Their complete medical chart?
Case reports are descriptive not determinative and should be treated as such by other scholars. They are 'I saw this' not 'this is generalizably true'. They can (and often are) replicated or countered but they are not per se research as you are thinking about it. Whether it is fictitious or not, other scholars should be cautious in citing them as proof/evidence in papers that fit into the 'research' mold.
From a legal perspective, journal article authors can implement this by following the official HHS guidance for de-identification. This applies to any use of protected health information (PHI), not just case reports.
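For what it's worth, the Safe Harbor method in that HHS guidance boils down to removing 18 categories of identifiers (names, geographic subdivisions smaller than a state, most dates, record numbers, etc.) and collapsing ages over 89. A minimal sketch of that idea, with invented field names purely for illustration, might look like:

```python
# Hypothetical sketch of Safe Harbor-style de-identification for a case-report
# record. Field names are invented; the real HHS guidance enumerates 18
# identifier categories and additional conditions.

SAFE_HARBOR_FIELDS = {
    "name", "address", "phone", "email", "ssn", "mrn",   # direct identifiers
    "date_of_birth", "admission_date",                   # dates more specific than year
}

def deidentify(record: dict) -> dict:
    """Return a copy with identifier fields dropped and ages over 89
    collapsed into a single '90+' bucket, per the Safe Harbor rule."""
    clean = {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}
    if isinstance(clean.get("age"), int) and clean["age"] > 89:
        clean["age"] = "90+"
    return clean

case = {
    "name": "Jane Doe",
    "mrn": "123456",
    "age": 92,
    "presenting_complaint": "lethargy and poor feeding",
}
print(deidentify(case))
```

The point is that the clinical substance (presenting complaint, findings, course) survives while everything that could link back to the patient is stripped, which is why real, de-identified case reports are routinely publishable.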
I don't mind the fact that the case reports were fictional -- actual cases can be problematic in terms of privacy as it may be easy to ascertain the patient's identity from the details -- but not putting a notice that it was fictional (or altered from a real case for privacy), for teaching purposes, is pretty bad.
The detail that makes this more than a labeling error: the fictional nature appeared in the journal's author guidelines, not in the published articles. Researchers who cited these 61 papers had no way to distinguish them from genuine case reports. 218 citations later, the fiction is embedded in secondary analyses and literature reviews written by people who had no idea.
The "Baby Boy Blue" (2010) case is the clearest example of the harm. An infant allegedly exposed to opioids through breast milk. That case influenced clinical guidance on codeine safety in nursing for years. The CARE guidelines (Consensus-based Clinical Case Reporting Guidelines) exist specifically to create transparency in case reporting. They're voluntary, which is how a journal can run a 25-year undisclosed fiction program and technically say the authors knew.
There is not good evidence that peer review improves quality and there is perhaps some to the contrary (many predatory journals are peer reviewed).
arXiv (unreviewed) is among the most reliable sources available.
Yeah, it's almost like science is better when the scientific method is applied to everything, instead of delegating validation to some third party based on credentials or authority or social status.
Peer review is a sniff test. It cannot guarantee that the results are correct and the conclusions are right. It is just designed to limit some kinds of errors. Replication is important.
> One author of a case report was surprised to learn of the correction — because the case described in her article is true.
So they managed to mess up even the correction of their giant mess.
> correcting the correction "would be difficult."
I bet. That's why they should have got it right in the first place. I would be absolutely ballistic if they libelled my work like that.