Federal Court awards $139,147 in Attorneys’ Fees Against HHS and NIH in IOM FOIA Case

The U.S. District Court for the Northern District of California today awarded me–having won my FOIA lawsuit–my entire attorneys’ fees in the amount of $139,147. Judge Vince Chhabria ordered the defendants, HHS and NIH, to pay me these fees. Please see below for a copy of the order.

In the Court’s order, the Judge noted:

Ms. Burmeister is clearly the prevailing party in the litigation. Moreover, as outlined in the order granting Ms. Burmeister’s motion for summary judgment, the government’s conduct throughout its dispute with Ms. Burmeister was unreasonable.  Ms. Burmeister stood to gain nothing financially from her attempt to obtain documents at issue from the government, and she conferred a benefit on the public through her successful effort to obtain a ruling against the government. [emphasis added]

The defendants’ conduct in this matter has been absolutely deplorable. They have fought tooth and nail trying to avoid compliance with federal law and to delay production of relevant documents relating to the IOM project as long as possible. Throughout the entire proceedings, the defendants have acted unreasonably and shamefully, really, in their relentless attempts to circumvent their obligations under FOIA. Their inexcusable conduct has put me through the wringer, which has had a direct and dramatic impact on my health. I will share with the community, at a later time, details of the many instances of the defendants’ appalling actions in this matter. But here is a high-level list:

The defendants failed to make a determination in response to my FOIA request (from more than a year ago, seeking documents relating to the IOM contract with HHS) within the 20 business days required by FOIA. I waited several weeks and sent them one last communication notifying them that legal action was imminent. When they still did not respond, I brought my suit pro se, i.e., I represented myself in an attempt to avoid attorneys’ fees. After I filed the lawsuit, the defendants produced a mere 88 pages, only 22 relating specifically to the IOM (one of them blank), for a very high-priority and extraordinarily controversial $1 million project. It was clear that their search and production of documents was woefully inadequate (as the Court later agreed when it granted my motion for summary judgment). The defendants’ subsequent response to my complaint was, once again, late. It was then that I realized that they had no intention of complying with the law in response to my entirely reasonable and very straightforward FOIA request, even when faced with a lawsuit. Therefore, I hired the law firm of Baker & McKenzie LLP.

Every taxpayer dollar spent by HHS and NIH in this lawsuit–every single one–was caused by the government’s appalling tactics. Instead of remedying the inadequate search and production, they went into full-blown attack mode, filing a meritless and unwarranted motion, making frivolous legal arguments, making false statements under penalty of perjury, misrepresenting my statements and actions, misrepresenting legal authority, etc. They went so far as to accuse me of lying under penalty of perjury, which shows their mindset very clearly: Since they had no qualms about blatantly misrepresenting the facts, they thought accusing me of the same might work. It didn’t.

A few days ago, in response to the Judge’s September order to produce all documents I sought, counsel for the defendants delivered about 4,300 pages of supposedly responsive documents, demonstrating very clearly the laughable number of documents originally produced. A cursory review of those documents shows that their misrepresentations–again made under penalty of perjury and in opposing Counsel’s motions–were far worse than they initially appeared. It is also obvious that this new production is again inadequate and does not comply with FOIA in many respects.

The community will be extremely interested in seeing the documents that they produced recently. I will make every effort to publish those of interest as quickly as possible (I will spare myself the energy of publishing the NICE guidelines or the IOM Gulf War Report from earlier this year), but my health has been very poor as a result of this litigation, so I ask for some patience.

I want to thank Patricia Carter, the owner of MECFS Forums, for providing a helpful declaration in support of my attorneys’ fees motion. I also want to especially thank Eileen Holderman, the former and most effective patient representative on CFSAC in its history, for her invaluable assistance, including providing a declaration in support of my motion. Finally, my sincere thanks go to my attorneys, Bruce Jackson, Edward Burmeister and Christina Wong, as well as paralegal, Nada Hitti, and assistant, Chris von Seeburg, for their unflagging efforts and excellent representation in this case.

PS: I owe the entire amount to Baker & McKenzie and I would have had to pay it regardless of whether the Court had awarded me the fees. I guess what I am trying to say is that I don’t get to keep the money, just to avoid a misunderstanding on that front.



The NIH Intramural ME Study: “Lies, Damn Lies, and Statistics” (Part 4)

This is Part 4 of a four-part article on NIH’s Effort Preference claim.

Part 1 can be found here.

Part 2 can be found here.

Part 3 can be found here.

Readers who are not intricately familiar with ME history and politics might ask themselves how we got here. How is it possible that investigators with a glaring bias were allowed to be in a position to abuse this study to confirm their prejudices, set ME research back, and further damage the reputation of ME patients, leading to great harm?

A Short History of the Study

HHS hijacked the CFSAC recommendation. The groundwork for the study was laid when the Department of Health and Human Services (HHS) hijacked the October 2012 recommendation by the federal Chronic Fatigue Syndrome Advisory Committee (CFSAC) to the HHS Secretary. CFSAC recommended that the Secretary “promptly convene . . . at least one stakeholders’ ([ME] experts, patients, advocates) workshop in consultation with CFSAC members to reach a consensus for a case definition useful for research, diagnosis and treatment of [ME] beginning with the 2003 Canadian Consensus Criteria [CCC] for discussion purposes.”

The CCC, published in a peer-reviewed journal, were developed by an international group of scientists and clinicians with substantial expertise in ME research and treatment. Obviously, the CFSAC voting members had intended for the workshop to endorse or build upon the CCC as clinical and research criteria, which would have been a watershed moment for ME because it would have put an end to the use of the non-specific Fukuda definition in research and for clinical diagnosis, ensuring quality research and accurate diagnoses in clinical settings.

The Secretary basically never followed CFSAC recommendations, and she did not do so here. However, this recommendation was immensely threatening to her and others at HHS. Unlike with government definitions, such as the Reeves and the Fukuda definitions, HHS had had no control over or even input in the CCC, which is why that definition is so much more accurate and specific to ME than any HHS-sponsored criteria. The universal adoption and endorsement of the CCC would have also lent substantial legitimacy to ME. The fear of HHS officials at the time was palpable. HHS had for decades waged a prodigious disinformation campaign against ME, leading to unrelenting suffering of ME patients. Having to course-correct HHS’s position on ME by accepting the experts’ definition would have been publicly humiliating. Careers were on the line, including careers of high-ranking officials. There was a discernible sense of aggressive urgency at HHS to quash the CCC momentum that had been building due to the CFSAC recommendation and other CFSAC work, an urgency that was highly unusual at the bureaucratic mammoth that is HHS.

As a result, HHS began tampering with the CCC recommendation. The concern at HHS was so grave that three CFSAC members—Ms. Eileen Holderman, the late Dr. Mary Ann Fletcher, and a third individual—were threatened by the CFSAC Designated Federal Officer (DFO), HHS’s Dr. Nancy Lee, with eviction from the committee for voicing their opinions. Ms. Holderman, who was the advocate on CFSAC at the time, chaired two CFSAC sub-committees, and was a member of the CFSAC Leadership committee, was removed from the CFSAC Leadership Committee in retaliation for her championing the experts’ workshop and the CCC, leaving that committee without patient input. Below is a clip of the CFSAC meeting during which Ms. Holderman and Dr. Fletcher went public with the intimidation and threats.

The shameful conduct by Dr. Gailen Marshall at the meeting was off the charts. Marshall was a med-school buddy of Lee who was willing to do HHS’s bidding, which landed him the gig as CFSAC chair. (HHS nepotism is legendary. Take a look at the Acknowledgements paragraph in the intramural paper.) When Ms. Holderman pointed out that the important case-definition issue deserved appropriate time for discussion, a position that cannot possibly be controversial, Marshall punished her by cutting her time to comment from three minutes to two minutes while he gave NIH’s Dr. Susan Maier, who was not a voting member, all the time she wanted for a surreal patient-hostile rant. Any dissent by HHS outsiders, no matter how valid or justified, will have consequences at HHS as confirmed recently by Nath, principal investigator of the intramural study; more on that below. It was stunning to watch Marshall’s, Lee’s, and Maier’s complete indifference and stone-cold faces when Ms. Holderman and Dr. Fletcher reported the DFO’s threats and intimidation to the committee. All Marshall had in him was an emotionally dead, “ok.” Being an HHS loyalist, Marshall’s only concern was to shut down the case-definition discussion, potentially the most important discussion CFSAC ever had, as quickly as possible. This constituted an enormous abuse of power, which Marshall seemed to relish.

Also pay close attention to the completely unprofessional, deeply resentful, and manipulative moaning by Maier about having had to work “every single freakin’ weekend.” There is a toxic victim-complex culture at NIH, where NIH staffers clearly loathe the patients on whose behalf they claim to work while being compensated with taxpayer money. NIH staffers seem to view their jobs as doing patients a favor. Never mind that, without patients, there would be no mortgage-covering paycheck for them; yet patients better not expect any results from NIH, and they certainly better not voice any criticism whatsoever, even after decades in which the agency completely failed to advance ME science and even set it back. I do not know a single ME patient who would not trade their living death for working on weekends. Nath, too, has for years whined to patients about his workload, which I will address further below.

The advocacy community stood firmly behind Ms. Holderman after she disclosed the government threats. Advocates filed a complaint with HHS’s General Counsel. The ensuing HHS investigation was a farce culminating in a perfunctory and dismissive response by the Assistant HHS Secretary. Nothing to see here. HHS business as usual.

Instead of following the CFSAC recommendation, the HHS Secretary and HHS component agencies misappropriated the recommendation when they used it as cover to urgently enter into a contract with the IOM (now NAM), funded by NIH, for yet another non-specific ME definition. Even though the CFSAC recommendation covered both clinical and research criteria, the IOM contract was limited to:

“evidence-based clinical diagnostic criteria for [ME] for use by clinicians.” [emphasis added]

HHS spent a million dollars on procuring the IOM criteria, which is another indicator of how concerned the Secretary and others at HHS were about the CCC, a definition that was readily available and free. This purchase was the polar opposite of what CFSAC had recommended. Instead of involving experts in the field, the IOM was hired by HHS, and instead of covering both clinical and research criteria, the IOM contract addressed only the clinical diagnosis. CFSAC was completely shut out of this process. The alleged reason for proceeding with the IOM was that HHS does not endorse or create disease definitions, a blatant lie, as the agency had done so numerous times for ME.

By doing this, the Secretary disregarded a protest letter addressed to her by a total of 50 national and international ME researchers and clinicians endorsing the CCC for research and clinical diagnosis and urging HHS to exclusively use the CCC throughout HHS and its component agencies and programs:

“As leading researchers and clinicians in the field, however, we are in agreement that there is sufficient evidence and experience to adopt the CCC now for research and clinical purposes, and that failure to do so will significantly impede research and harm patient care. This step will facilitate our efforts to define the biomarkers, which will be used to further refine the case definition in the future.

“We strongly urge the Department of Health and Human Services (HHS) to follow our lead by using the CCC as the sole case definition for ME/CFS in all of the Department’s activities related to this disease.”

The experts also strongly opposed the engagement of the IOM and warned that engaging IOM non-experts would set science back, which is exactly what happened:

“In addition, we strongly urge you to abandon efforts to reach out to groups such as the Institute of Medicine (IOM) that lack the needed expertise to develop “clinical diagnostic criteria” for [ME]. … [T]his effort threatens to move [ME] science backward by engaging non-experts in the development of a case definition for a complex disease about which they are not knowledgeable.”

The advocacy community staunchly supported the ME experts with their own letter to the HHS Secretary, signed by 171 ME advocates. The community protest of the IOM definition was fierce and unprecedented.

The community has continued its protest of the IOM definition to this day, and for very good reason. About half of the IOM committee consisted of individuals who were not experts in the field, which had been a major concern of CFSAC. The criteria that the panel authored are extremely broad, as the committee intentionally erred on the side of over-inclusiveness. Since the existence of PEM is rarely objectively confirmed by clinicians, for a number of reasons, this results in a large number of misdiagnosed patients who suffer from a disease other than ME but are captured by the IOM criteria. Those patients could be receiving effective treatment for their disease; instead, they are languishing without any help. It is also harmful to actual ME patients when misdiagnosed patients benefit from, for example, exercise, psychotherapy, or antidepressants, reinforcing the prejudices that many practitioners harbor against ME patients as the result of HHS’s false characterization of ME. It is easy to falsely answer in the affirmative when questioned about PEM, as most people who do not experience post-exertional fall-out have an incorrect understanding of it. Feeling worse after exertion without rising to the level of PEM, which is common in deconditioned individuals and certain diseases, is easily mistaken for PEM. In addition, the IOM criteria do not have any exclusions whatsoever, making their use in research fatal.

The IOM criteria are expressly and importantly limited to clinical diagnosis, i.e., they are not to be used for research. HHS’s purported reason for limiting the IOM contract to clinical criteria, contrary to the CFSAC recommendation, was that a new government research definition would follow, which, of course, never happened. This bait-and-switch was an evil stroke of genius because the IOM committee predictably delivered very broad criteria, making them unfit for research. Also predictably, researchers have nevertheless proceeded to conduct research on ME patient cohorts selected via the IOM definition, with devastating consequences in the form of tainted research, which advocates had warned about from the very beginning. It is almost as though this was HHS’s plan from the start. The best way to prevent progress in a disease is to ensure the use of over-inclusive criteria in research. Of course, HHS knows this, so draw your own conclusions. It is noteworthy that none of the IOM definition authors have protested the improper use of their criteria in research. At least some extramural researchers likely felt that they had no choice but to use the IOM criteria if they wanted to have a chance of obtaining NIH funding.

Of course, the NIH investigators were not faced with such an ethical dilemma. They were free to use appropriate research criteria without risking funding, but they nevertheless chose the IOM definition as one of three definitions. As discussed in Part 3, the IOM definition is so broad that all 17 ME patients in the NIH study satisfied it while only about half of the patients satisfied the stricter CCC, the definition ME experts and advocates had endorsed. The IOM definition is even broader than the Fukuda definition, which was satisfied by 82% of the patients in the NIH study.

Once the IOM published its redefinition of ME in 2015, it did not take long for CFSAC to be disbanded as it no longer had any usefulness for HHS. The table was set for the NIH study, which was announced shortly thereafter and, indeed, selected patients using the IOM definition (in addition to the not much better Fukuda definition and the CCC).

Collins roped in Nath. In September 2015, then NIH Director, Dr. Francis Collins, under increasing pressure from Congress and the ME advocacy community to cease stalling and obstructing ME research, gave an unsuspecting and unenthusiastic Dr. Avindra Nath, NIH senior investigator and NINDS intramural clinical director, no choice when Collins put the screws on him to spearhead the first NIH intramural study of ME in decades. Nath incessantly tells everybody who will listen that his plate was already full at the time and that he had no room in his busy schedule to take on this study on top of his other research.

Walitt volunteered. Nath was eager to dump this study into somebody else’s lap. Enter Dr. Brian Walitt, who was all too happy to come to Nath’s rescue by volunteering to run the study. Having recently joined the scientific big leagues of NIH, Walitt seized the chance to characterize ME in accordance with his prejudicial beliefs with the weight of the most powerful platform in the global bio-medical research world behind him, which he knew would have a significantly more potent impact than his previous abhorrent research missives about ME and Fibromyalgia.

Walitt is on record with his alleged conviction that ME and Fibromyalgia are identical. “The complaint that predominates your existence is how you end up being named, which has nothing to do with your physiology,” he said while he smirked and gesticulated in an effort to cultivate trust and familiarity with the audience to sell his defamatory and harmful views on ME and Fibromyalgia. Nath was not bothered by Walitt’s highly prejudicial and long-since disproved view that ME is merely a normal life experience instead of a medical entity (see Part 1) and appointed Walitt as lead associate investigator, who would end up designing the study and running its day-to-day activities, freeing up Nath to devote his time to research he obviously deemed more deserving. Nath became senior author of the paper, and Walitt became first author. Walitt’s role in the study gave him staggering power over a group of patients by whom he is so clearly repelled and whose disease he has dismissed and labeled in derogatory ways.

For Walitt, volunteering for the study was a double win: he would have a major impact on the direction of ME research at NIH and, therefore, worldwide (steering it toward the biopsychosocial narrative), and this study would open the door for him to build inter-agency connections due to the large number of scientists, clinicians, and institutes involved in the study, elevating his profile at NIH. It cannot be overstated how much this study boosted Walitt’s NIH career almost immediately after he joined the agency. Nath’s and Walitt’s relationship has been a symbiotic one.

Community protest. Once advocates learned of NIH’s planned intramural ME study eight years ago, they fiercely opposed a number of highly problematic aspects of it, including the involvement of Walitt and other like-minded NIH researchers, the planned use of the Reeves Criteria, a ridiculously broad Centers for Disease Control (CDC) definition of ME, the use of the IOM criteria, the inclusion of Chronic Lyme Disease patients and of patients with Functional Movement Disorder as control groups, etc. Advocates were able to mitigate some of the issues with the study design and staff, but mostly NIH scoffed at the protests and proceeded as planned.

Effort Preference. When NIH signaled indifference regarding the advocates’ protests in 2016, the writing was on the wall that the intramural ME study would emerge with a harmful characterization for ME; the only question was which one. That characterization turned out to be “altered Effort Preference.”

Nath’s Dereliction of Duty and Unprofessional Conduct

Nath presents himself as an aggrieved, innocent, hard-working civil-servant researcher while he has allowed, enabled, perpetuated, and even expanded a systemic pattern of anti-ME bias. His furious defensiveness is likely a personalized outgrowth of the weaponization of the “unworthy-sick” construct that Straus introduced on this side of the Atlantic with respect to ME patients. Nath feels entitled to dismiss and attack advocates precisely because that is what NIH has always done. Straus, for example, was verbally violent and threatening toward ME patients. Nath seems to resort to bitter fury because he has absorbed the NIH culture that patients better be grateful or else.

Nath seems entirely unconcerned with the fact that the alleged EEfRT findings, including the various misrepresentations and misinterpretations, do not support the claims made in the paper with respect to the effort discounting of ME patients, claims for which he, as principal investigator, is fully responsible. One of two things must be true: either Walitt and/or Madian manipulated Nath and made him look like a fool with respect to the unsupported EEfRT claims, or there is no daylight between Nath and Walitt. Personally, my money is on the latter. In fact, it appears as if Nath himself is a disciple of the biopsychosocial school. This is supported by Nath’s attendance of a symposium on the biopsychosocial aspects of Long COVID in Finland last year, as reported by David Tuller. There is no chance that Nath did not realize that his attendance, as an NIH representative and principal investigator of an NIH study on ME, would lend enormous credibility to the biopsychosocial theory of ME. That indicates that he is fully on board with the Walitt agenda, and it would explain the NIH paper’s redefinition of ME in accordance with the ME propaganda of Wessely, who has been propped up by HHS for decades. All of this makes it exceedingly unlikely that Nath is unaware of what was done to the EEfRT data and of the serious issues I have pointed out regarding the EEfRT testing in Part 2 and elsewhere in this four-part article.

Nath’s attitude toward the community and the study. Nath has been hostile, condescending, disdainful, and manipulative toward the community from the very beginning of this study. For example, he shamefully used a slide listing the study team members that he derisively entitled “Team Tired.”

Watch Nath’s irritated reaction to a petition started by @meadvocacy_org and signed by hundreds of medical professionals and ME advocates and patients asking NIH to cancel the then-planned intramural study due to serious concerns by the community. The petition deserved genuine consideration, but all Nath could muster was annoyance. What was important to him was his own inconvenience and embarrassment, not the impact a botched NIH study would have on the lives of millions worldwide.

By his own account, Nath has been perturbed about being expected to take on the ME study without compensation, as he frequently puts it. He frames this as having been forced to donate his time. Just like Maier, Nath seems unfamiliar with the concept of salaried employees. More importantly, Nath’s relentless raising of this issue is a tell that he did not want to be part of this study, has felt resentful about it for almost nine years, and feels that he has been magnanimous by being involved at all, which again eerily mirrors Maier’s attitude. As far as a preference for effort goes, Nath clearly prefers to put no effort into even pretending that this study has ever been a priority for him. To the contrary, he keeps stressing that hardly any of the involved investigators had bandwidth for this study. He somehow feels that this makes him and his colleagues look good.

This has, of course, directly impacted the outcome of this study. Nobody does their best work when they resent the work, especially when they also do not have any time for it. As the adage goes, the fish rots from the head, and an attitude such as Nath’s would inevitably have negatively affected the quality of the work by other NIH staffers on this study. Nevertheless, Nath recently called it “probably the best study that’s ever been done.” That self-aggrandizing statement is categorically untrue, as I have shown. Aside from illusory statements about this study, which patients are supposed to just accept despite enormous issues with the study design and implementation, Nath has little to show for himself when it comes to ME. This tiny study, using badly selected cohorts, failed to find many well-established abnormalities in ME patients, such as orthostatic intolerance, low Natural Killer Cell function, reactivated viruses, and cognitive dysfunction, to name just a few, and the completely bastardized EEfRT analysis is certainly an ugly stain on Nath’s ME record. I would be remiss not to mention Nath’s recent Time100 Health Most Influential People mention for his involvement in ME, a disease whose true nature he and the investigators he supervised obfuscated; the Time100 Health profile of Nath describes ME as a “condition characterized by extreme fatigue.” Q.E.D.

NIH’s timing of the May 2, 2024 Symposium on the study to coincide with the announcement of Nath’s recognition felt like a desperate and transparent attempt to prop up the study. One thing is for sure: this study has been career-advancing for several people at NIH, while advocates have had to sacrifice their health in their effort to get the study retracted. With few exceptions, it is advocates who work without compensation, not Nath, who is well compensated.

Nath’s temper and attempts to intimidate advocates. NIH’s hiring of Walitt is likely indicative of an agency-wide agenda and not an accident, but Walitt’s capture of the intramural ME study happened on Nath’s watch. Advocates and patients have shown super-human restraint under the circumstances, not just with respect to this study but over decades, given the malfeasance of federal health agencies with respect to ME. Nath, on the other hand, seems to be on a hair trigger when it comes to ME advocacy. During the recent NIH Symposium on the intramural study, Dr. Team Tired had the audacity to accuse ME advocates of causing NIH researchers “pain and suffering” by criticizing their research (at 22:18). Without a glimmer of self-awareness or a smidgen of consideration for what ME patients have endured at the hands of HHS and its component agencies for decades, Nath went full-blown DARVO. This is somebody who is obviously not used to having to answer for his actions. This adage comes to mind:

“When you’re accustomed to privilege, equality feels like oppression.”

We all have watched the suffering of ME patients, including children, and many of us have lost numerous friends to this disease over the years, but we had to listen to a senior NIH bureaucrat whine about his own pain and suffering and that of Walitt?! Nath, a high-ranking federal-health official, quite literally feels persecuted by extremely sick advocates while patients are suffering and even dying as the result of decades of government mistreatment and neglect and while he advances his career. It is obscene.

Nath’s anger is entirely misdirected; it is not the patients’ fault that NIH has held their bodies hostage for decades by not undertaking or funding ME research and that Collins put Nath and his colleagues in the position of having had to squeeze their ME work into their lunch breaks, which, by the way, explains why this study took more than eight years and why the resulting paper, “Deep phenotyping of post-infectious myalgic encephalomyelitis/chronic fatigue syndrome,” is as ghastly as it is. The embarrassingly large number of mistakes with respect to the EEfRT findings (see part 2) illustrates that the authors did not even proofread their paper.

It is important to see Nath’s accusation of pain and suffering allegedly inflicted on him and his colleagues in another context. Wessely & Co. have accused ME patients of threats that were never substantiated but caused serious reputational harm. Nath has now taken this very page out of Wessely’s playbook.

Listen as Nath lost his cool when powerhouse advocate Ms. Holderman politely asked him to make sure that the researchers involved in his study understand the difference between the ubiquitous and non-specific symptom/condition of chronic fatigue and the distinct disease ME; NIH staffers frequently use the two interchangeably. Surely, it would be unacceptable at NIH if its researchers studying lung cancer kept referring to the disease as chronic cough, but Nath snapped at Ms. Holderman’s entirely reasonable request, saying that he “can’t police anybody.” Ms. Holderman replied that she wasn’t asking for policing but for educating.

Ms. Holderman also asked for assurance that the intramural study will be using the experts’ criteria, the CCC and the ME:ICC. Nath, audibly annoyed at Ms. Holderman’s valid concern about patient recruitment, claimed emphatically that he has made “absolutely certain, beyond any element of doubt, that [NIH was] recruiting the right patients for the study.”

It is a bit rich, but typical for Nath, to curtly dismiss advocates’ cohort concerns. He allowed the inclusion in the patient cohort of a high percentage (about a quarter) of individuals who spontaneously recovered, i.e., whose ME diagnosis is highly questionable. He also allowed the inclusion in the control group of individuals with health issues that have substantial symptom overlap with ME, such as orthostatic intolerance (Supplementary Results, page 9, see screenshot below); Psoriasis (Supplementary Results, page 21, see screenshot below), a chronic systemic immune-mediated (autoimmune) disease without a cure (notwithstanding the authors’ attempt to make it look as though the control with Psoriasis recovered); and Chronic Lyme Disease (Supplementary Results, page 21, see screenshot below); as well as blood relatives of patients with ME, a disease with at least an infectious component.

Nath’s ill-tempered petulance is particularly hypocritical in light of the bait-and-switch “commitments” that were made by NIH but not kept with respect to the study. For example, after promising the community in 2016 that every patient participant would satisfy the CCC, only about half of the patients in the intramural study did.

Nath’s study also did not objectively confirm that ME patients in the study suffered from PEM (which would have required a two-day CPET), again contrary to what NIH had committed to in 2016. Nevertheless, Nath claims that “the patient population was very clean” with “the best patients” when, in truth, the ME cohort was messy. It is almost as if words have no meaning at NIH, both linguistically and ethically.

Ms. Holderman put it well when she recently tweeted the following:

During the May 28, 2024 NIH ME/CFS Advocacy call, Nath again escalated his anti-advocacy rhetoric. He started out by saying, “We made significant progress so far studying ME/CFS, but I think further progress is only possible if we have a true partnership with the patients.” He continued saying that he sees this as key for further NIH research on ME. He then made it very clear that ongoing criticism of the NIH study by patients and advocates will have consequences for future ME research:

“But if you’re hypercritical of the very people and the researchers who are trying to help you, then we do get demoralized, and you tear us apart, and then it becomes very difficult to achieve the goals that we set out to achieve.”

(NIH assured us on the day of the event that a recording and transcript of it would be posted to the NIH website “soon,” but the agency has still not released them or issued any statement as to what is causing the delay.)

Although Nath used euphemisms (“very difficult to achieve the goals that we set out to achieve”), this was an attempt, by a high-ranking NIH scientist and bureaucrat, to intimidate and silence a patient group that has been belittled, mocked, neglected and harmed for decades. It is abusive and unbecoming of a civil servant of Nath’s stature in the agency’s hierarchy to denigrate and guilt-trip ME advocates and promise consequences in case of sustained criticism. Nath is exploiting the tremendous power differential between NIH and patients.

One has to ask: A scientist who was involved in early AIDS research and, therefore, witnessed the AIDS community’s ACT UP advocacy is trying to vilify ME advocacy as too much?! It is absurd. What is different about ME patients that makes Nath deem them not entitled to fight for their health, even though the severe limitations ME imposes force them to use far less aggressive advocacy methods than AIDS patients did? The fact that they are not mostly men? This is not the first time that attempts have been made to shame ME advocates and put them on the defensive with demands for “inside voices,” but it is time that the appropriate authorities take a look at Nath’s tactics.

Advocates had laid out the case against Walitt in 2016 in no uncertain terms, but Nath has been so disgruntled by being forced to be involved with ME research in the first place that he could hardly have cared less. In fact, Nath clearly feels that he owes Walitt for getting him out from under the weight of the ME study. Nath frequently goes out of his way with over-the-top praise for Walitt, alleging that the intramural ME study would not have been possible without Walitt, a preposterous assertion. If Walitt were indeed holding NIH together, then it really would be no wonder that the reputation of the agency is crumbling.

Nobody can accuse advocates of not trying, every way physically possible, to get Nath and NIH to take seriously their manifestly valid concerns with respect to this study, but Nath has been brushing them off time and time again. He has no plausible deniability. Nath had a number of off-ramps to keep harm from coming to ME patients, but he chose none of them. Instead, he proceeded to let Walitt run the intramural study. The fact that Walitt volunteered should have been a red flag for Nath, and certainly when Walitt included the EEfRT testing in the study design, Nath should have stepped in. Not even Walitt’s invitation of Dr. Edward Shorter, who has been openly expressing his loathing for ME patients, in order to indoctrinate other NIH researchers with biopsychosocial brainwashing before the study even started gave Nath pause. Nath approved the inclusion of the EEfRT testing, the misinterpreted and misrepresented EEfRT results, and the framing of ME exactly as Wessely has done: as a condition characterized by a false sense of effort followed by deconditioning.

The community has been subject to major-league gaslighting as a result of NIH’s decision to ride out the huge protest against Walitt’s involvement with the study instead of acting on it and replacing him. It is breathtaking that Nath has allowed somebody with Walitt’s odious views to capture ME at NIH by designing and running the first intramural ME study in decades and becoming principal investigator of other NIH intramural ME studies. Instead of protecting ME patients against that bias, Nath has enabled Walitt.

Short of giving a press conference announcing that ME patients are malingerers, NIH could not have done more harm than it did with this study, but when advocates protest the outcomes that they had been trying to prevent, Nath farcically takes the criticism personally and centers himself without addressing or even considering the issues raised in earnest. How dare advocates point out the dangerous consequences of his paper, which he either never considered or found desirable. The level of hubris is astounding.

Any alleged ignorance of the false EEfRT findings on the part of Nath is betrayed by his defensiveness and unequivocal commending of Walitt. Nath feels boxed in, and lashing out—not against Collins or Walitt but against advocates—is all he can think of as a response. Taking responsibility and changing course has apparently not crossed his mind.

Aren’t scientists supposed to be curious? The firestorms relating to Walitt and Effort Preference would have given any inquisitive, reasonable, non-defensive, and open-minded researcher pause and a chance to realize that he or she has missed something important, but not Nath; he went straight to recycling the harmful patients-are-difficult trope.

Nath’s decrying of, and discomfort with, advocates’ “pick[ing] apart every single word” (NIH Symposium at 22:18) is another shameless push-back against the accountability of taxpayer-funded civil servants. There would be no need to be afraid of their words being scrutinized if Nath and some of his colleagues did not harbor the ugly prejudices that they do and if their research were not fairly obviously inserting contrived non-medical terms and concepts into the literature without validation and on an extremely unstable experimental foundation. It is telling that, at the NIH Symposium, real-time questions from the virtual audience (something that is technologically easy to do in 2024) were not allowed. Instead, online attendees had to submit their questions in advance, allowing the NIH investigators to spin their answers in advance or to choose not to answer them at all.

Nath’s demands for accolades. Nath has more than once urged patients to thank the NIH researchers involved in the intramural study, including during the recent NIH Symposium on the intramural study (at 22:18). He did this despite being acutely aware that there has been a tremendous amount of compelling criticism of the study. Nath seems to feel deeply offended by any scrutiny of the research he is responsible for and cannot fathom what could possibly give rise to any condemnation of his study or the way that NIH has handled ME over the decades.

Nath, as a loyal company man, is trying to deny, all evidence to the contrary, the deeply entrenched and well-established institutional bias against ME patients that pervades NIH and that was clearly a major factor in the outcome of the intramural study. He does so by acting as though the decades-long neglect by, and harm from, NIH has not happened, is not ongoing, and does not matter. Promoting that revisionist history is harmful in itself, something that Nath is intellectually perfectly capable of seeing but refuses to acknowledge.

Astonishingly, Nath admonished Ms. Holderman on the May 28, 2024 NIH Advocacy Call and demanded:

“You should be more appreciative instead of critical.”

Ms. Holderman’s offense? She questioned the use of three vastly different diagnostic criteria for patient-selection purposes in the intramural study, in particular the strongly opposed IOM definition.

As I explained throughout this four-part article, there were serious cohort issues, which Nath continues to deny. He rationalized the investigators’ choice of diagnostic criteria in response to Ms. Holderman by asserting that there are no good objective diagnostic criteria and that all ME symptoms are subjective. So, the principal investigator of the intramural ME study who just received an award for his involvement with this very study is unfamiliar with the well-established objective symptoms in ME and the ME:ICC, the gold standard for use in research among all the ME definitions? Nath did not even try to justify the use of three different criteria. Surely, NIH would have been able to find 17 patients who satisfied the same diagnostic criteria.

Nath’s concealing of the effort testing from the ME community. Whenever Nath has presented the study details to the patient community over the years, he has scrupulously avoided any mention of Effort Preference or the EEfRT, which indicates that he was acutely aware of the harmful nature of the effort “inquiry,” knew how it would be perceived by the community, and was hiding that prejudicial ball from patients and advocates as long as possible. At the same time, he has not hesitated to now broadcast the Effort Preference claim to the media, and, of course, it is front and center in the paper. This selective and strategic concealing of the effort testing and claim in presentations to ME advocates and patients shows that Nath has, of course, not been some obtuse bystander who did not realize what Walitt was up to.

To this day, when Nath and/or Walitt present on the intramural ME study to the ME and Long COVID communities, they carefully conceal the fact that they tested for effort in the ME study. Neither ever mentions their Effort Preference claim in those presentations. Below is the slide that Walitt used on the May 6, 2024 CDC ME/CFS Stakeholder Engagement and Communication call, where he and Nath were guest speakers and which turned out to be mostly about NIH’s Long COVID research. As you can see, the effort testing is not mentioned. Nevertheless, Walitt will use the same testing NIH used in the ME study in ongoing Long COVID research. In fact, Nath said during the CDC call that the Long COVID research will be “based on what we did with ME. We are doing very similar kinds of studies in Long COVID so we can compare the two.” Long COVID patients are getting a preview of what is in store for them at NIH.

Nath’s cognitive dissonance. It is hard to imagine that anything ME advocates have to say—no matter how morally, scientifically, or logically compelling—could get through Nath’s defenses. He is committed to refusing to see himself and this study in the context of the river of malfeasance, neglect, and abuse by NIH et al., or what Nath calls patient “conspiracy theories.” At his level of cognitive dissonance, he is unreachable, unwilling to acknowledge his part in this particularly destructive era of ME history, and bound to dismiss as unwarranted and invalid any patient input other than unqualified praise. His sense of self-preservation and need to protect his immaculate self-image are obviously stronger than his sense of duty and his responsibility vis-a-vis patients who have waited half a lifetime for any help from NIH.

The only path forward between the ME community and NIH. There can be no healing of the NIH-patient relationship without removing all adherents of the biopsychosocial school from future ME studies at NIH, a retraction of the intramural ME study, a Congressional investigation into the institutional and decades-long NIH malfeasance (advocates want names!), and major restorative funding commensurate with the burden of ME—at the very least at the same level as Multiple Sclerosis is funded, which for 2025 is estimated to be $116 million per year—and retroactive for 40 years. ME patients have every right and compelling reasons to demand accountability.

At the moment, NIH is moving in the opposite direction: both Koroshetz and Nath continue—after 40 years of this very neglect by NIH—to be unequivocal and unabashed about their expectation that ME patients be satisfied with any crumbs that may or may not materialize as the result of Long COVID research. Just one example of ME’s bottom-of-the-list status at NIH is the recent intramural deep phenotyping Long COVID study (principal investigator and senior author: Nath) for which recruitment began in 2020. The paper was published in May of 2023, less than three years later. That is in stark contrast to the more than eight years it took for NIH’s intramural ME study. If it is left up to NIH, it will never be ME’s turn.

Nath’s legacy. I predict that his fateful decision to put Walitt in charge of the intramural ME study will become Nath’s legacy. The vocal protests of the study are here to stay until the paper is retracted and those responsible held to account. Advocates will keep fighting this perversion of science and decency as long as it takes and remind everybody of what has happened under Nath’s supervision. They will not be bullied into silence by Nath or anybody else at NIH, and any further such attempts will only strengthen their resolve to expose this study and get it retracted. NIH pushed it too far with its unsupported and destructive Effort Preference claim, and somebody will have to atone for it. Nath chose harm for ME patients. There is room for him to redeem himself by blowing the whistle on this study. The decision is his, but I am not holding my breath.

Conclusion

By NIH’s own admission, the intramural ME study has been first and foremost a delivery vehicle for the brand-new Effort Preference term and concept in ME, which, in essence, asserts that ME patients are not objectively physically limited but instead falsely believe, as the result of a dysfunctional effort perception, that they are unable to exert themselves past their limits, resulting in deconditioning and functional disability. NIH had obviously decided that the fatigue label, which it has used to falsely describe ME for decades, was no longer damaging enough.

NIH invented Effort Preference as defining of ME, a claim that is not supported by the data and that is deeply and glaringly derogatory and severely harmful. The agency then contaminated everything else in the intramural paper by relating it all to the new Effort Preference concept. By doing so, NIH formalized Wessely’s theory of ME as an issue of false perception of effort and fatigue. This is about as hostile to the ME community as one can imagine. If NIH had had even just neutral intentions, the authors would have presented the immunological findings as the primary findings instead of altered Effort Preference.

This constitutes a defamatory assault on ME patients. Since the publication of the paper, NIH has been flailing, trying to put a benign spin on the effort claim with a disingenuous, contrived re-framing that does not stand up to scrutiny. Because the alleged Effort Preference findings are tied in with the other findings of the study, the entire paper is tainted and has to be retracted. We all know that the peer-review process is broken, and this paper is an effective illustration of that.

The issues laid out in this four-part series are extremely serious. Whether or not they rise to the level of research misconduct is yet to be determined. The sheer number and the nature and severity of the issues combined with the well-documented ME agenda on the part of the agency and several of the investigators in this study suggest at the very least recklessness—that much incompetence at NIH would be shocking but not surprising in light of the absence of accountability at the agency—if not knowledge or, more likely, intent.

Fortuitously, Congress is primed to take this on because NIH has been under tremendous and well-deserved scrutiny lately as the result of its habitual and systematic FOIA violations, its failure to advance Long COVID research, etc. NIH is certainly not as untouchable anymore as it had been for decades as Congress is breathing down its neck, and that will work in favor of the ME community as it continues to take on this study. Maybe what NIH has done to the ME community will be the last straw that leads to meaningful NIH reform.

Further effort research in ME is ongoing at NIH. The agency is determined to reduce ME to a condition of misunderstood capacity resulting in deconditioning. The harm done as a result of reframing ME as chronic fatigue in 1988 will pale in comparison to what lies ahead in the wake of Effort Preference.

Call to Action

I urge readers to file complaints by sharing my four-part analysis with the following authorities:

  1. your U.S. Senators and Representatives
  2. the NIH Director, Dr. Monica Bertagnolli:
    • monica.bertagnolli@nih.gov
  3. the Director of Research Integrity and the Agency Intramural Research Integrity Officer (AIRIO), Dr. Kathy Partin (https://oir.nih.gov/sourcebook/ethical-conduct/research-misconduct)
  4. the HHS Office of the Inspector General (OIG). OIG complaints can be filed in a variety of ways as follows:
    • by U.S. mail:
      • U.S. Department of Health and Human Services
        Office of Inspector General
        ATTN: OIG HOTLINE OPERATIONS
        P.O. Box 23489
        Washington, DC 20026
    • by fax: (800) 223-8164

with requests to:

  1. investigate the study
    1. with respect to potential gross misconduct and potential research misconduct, including, but not limited to, falsification of data, i.e., manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record
    2. constituting a significant departure from accepted practices of the relevant research community and committed intentionally, knowingly, or recklessly,
  2. discipline any individuals involved in the foregoing improprieties, if any, and
  3. retract the NIH intramural study.

As we cannot know all the individuals who were involved with the EEfRT testing, the complaints should be made against:

  1. Dr. Avindra Nath as principal investigator of the study and
  2. any other individuals at NIH involved in the EEfRT testing in any way (including, but not limited to, the design, administration, and interpretation of the EEfRT testing), including, but not limited to, Dr. Brian Walitt and Dr. Nicholas Madian.

Please feel free to use the following summary of some of the potential misconduct, the inclusion of which in your correspondence may expedite the authorities’ determination to investigate the matter. Please make it clear that this is a non-exhaustive list of potential misconduct:

  1. The inclusion in the paper of a graph, Figure 3a, in support of the Effort Preference claim, that was the result of statistical manipulation and presents the data in a completely misleading way; graphed directly, the data present a picture that contradicts Figure 3a
  2. The claim that ME patients chose fewer hard tasks than controls “at the start of and through the [EEfRT]” with Figure 3a claiming that patients chose fewer hard tasks in every single trial
    1. During the May 2, 2024 NIH Symposium on the study, Dr. Madian stated, “We did again see a difference at baseline, which persisted throughout the task, indicating differences in effort discounting.”
    2. Out of the first four trials, ME patients and controls chose the exact same number of hard tasks per participant. For the very first trial, arguably “the start” of the EEfRT, patients chose twice as many hard tasks as controls, even though the patient cohort consisted of one fewer individual than the control cohort.
    3. For 34% of the trials, ME patients chose hard tasks at a higher rate than controls. For another 2% of trials, both groups chose the same percentage of hard tasks. During an additional 14% of tasks, both groups’ hard-task choices were nearly identical, and the difference was, therefore, not statistically significant.
  3. The inclusion of randomly assigned tasks (hard versus easy) in the analysis of hard-task choices despite no choice having been made in those cases, which occurred substantially more often in the case of patients than controls and with a substantially higher percentage of the randomly assigned tasks being easy tasks in the case of patients compared to controls
  4. The omission of an analysis of the obvious impact of patients using a game optimization strategy and the conclusory claim (without discussion) that there was no resulting group difference in probability sensitivity despite the fact that there was a significant difference between groups for 12% and 50% probability trials but not for 88% probability trials, negating any basis for the Effort Preference claim
  5. The use of an improper metric, number/ratio/probability of hard-task-trial choices, in support of the Effort Preference claim, as opposed to the correct metric, the average rewards earned by both groups, for which there was no significant difference between the two groups (less than 1%), negating any basis for the Effort Preference claim
  6. The failure to address or even acknowledge significant confounding factors and to attempt to control for them or at least minimize their impact, contrary to other EEfRT studies, for example:
    1. the failure to exclude from the EEfRT patients taking benzodiazepines
    2. the failure to control for patients’ motoric or other physical impairment to complete hard tasks by calibrating the maximum required button-press rates to individual physical ability despite numerous prior EEfRT studies emphasizing the need to do so
    3. the failure, contrary to what prior EEfRT studies have done, to exclude five patients who were physically unable to complete hard tasks at a reasonable rate or at all (the combined hard-task completion for those five patients was less than 16%) leading to a significant group difference in the ability to complete hard tasks (96.43% for controls versus 67.07% for patients), invalidating the EEfRT data and analysis
      • During the May 2, 2024 NIH Symposium on the study, Dr. Madian stated, “What the [original EEfRT] paper describes is that the EEfRT was designed so that the sample of patients used within that original study could consistently complete the task. This does not mean that everyone who takes the task must be able to complete the task without issue for the administration or data to be valid or interpretable. It seems that the creators wanted to ensure that in general as many people as possible would be able to complete the task but without compromising the task’s ability to challenge participants. Furthermore, I think, it bears mentioning that although our ME participants did not complete the task at the same 96-100% rate as the participants in the original study or at the same rate as our healthy controls, they still completed the task a large majority of the time. To wrap things up, to answer the question, consistently completing the task is not a requirement for a valid EEfRT test administration, and by all accounts we believe our data is valid and is, thus, interpretable as a measure of impaired effort discounting.” This is a misrepresentation of what the original EEfRT study found (required task completion by “all subjects”) and of what subsequent EEfRT studies have stressed. Furthermore, it is untrue that patients “completed the task a large majority of the time.”
  7. The inappropriate use of a test (the EEfRT) that was designed for and has been exclusively used for mental-health issues (or in healthy individuals) in order to support a novel and newly introduced term and concept, Effort Preference, in a physical disease
  8. The failure to discuss the validity of the use of the EEfRT in an unprecedented way, i.e., to measure alleged disrupted effort discounting as opposed to the established use of EEfRT results as an assessment of effort-based, reward-based motivation
  9. The failure to identify any limitations of the EEfRT testing contrary to what other EEfRT studies have invariably done
  10. The failure to exclude the data of four “spontaneously recovered” ME patients (about a quarter of the patient cohort), a recovery rate well above what has been found by credible researchers, indicating that at least some of those patients were misdiagnosed
  11. The over-generalization of the unsupported Effort Preference claim beyond the expending of effort for small gambling rewards, i.e., for any effort exertion by ME patients
  12. The over-generalization of the unsupported Effort Preference claim to millions of ME patients worldwide based on the one-time EEfRT performance of 15 ME patients, some of whom seem to have been misdiagnosed
  13. The inclusion of data from healthy controls with diseases that have substantial symptom overlap with ME (orthostatic issues in high numbers, Chronic Lyme Disease, and Psoriasis) as well as the inclusion of two blood relatives (siblings) of ME patients in the study despite the fact that there seems to be at least an infectious component to ME
  14. The choice of a new and exceedingly prejudicial label for a patient community that has suffered grave harm from decades of misrepresentation of the disease nature and from sustained and relentless defamation, including by NIH
  15. The use of three vastly different criteria (two of which are overly broad) for patient selection, including one set of criteria, the IOM definition, that is not a research definition, which likely resulted in including individuals in the patient group who were not ME patients
  16. The claim to have established the ME phenotype based on an exploratory, hypothesis-generating study of a cohort of only 17 patients, with many tests run only on even smaller sub-subsets of patients
  17. The misrepresentation of the nature of ME by reducing it to mere fatigue, exercise intolerance, malaise, and cognitive complaints, which is a non-specific description that does not capture ME, a multi-system disease with a variety of other disabling symptoms
  18. The assigning of a researcher to design the study and run its day-to-day activities, Dr. Brian Walitt, who is on record with his unscientific views of ME, for example, that it is merely a normal way of experiencing life and not a medical entity
  19. The hostility, derision, and unprofessional conduct by the principal investigator Dr. Avindra Nath, a high-level civil servant, toward the ME community:
    1. his persistent demands of unqualified praise from the ME community,
    2. his veiled threats as well as his overt intimidation with respect to future ME research in an attempt to silence criticism by advocates of the intramural study and NIH’s research and conduct, causing pain and suffering to ME patients,
    3. his presenting himself as a victim of ME advocates, gravely sick patients, causing reputational harm to the ME and ME advocacy community, and
    4. his relentless stressing that he and the other researchers in the study have allegedly been forced to donate their time and work without compensation for ME patients
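
The quantitative points in items 3 and 5 above can be made concrete with a small sketch. The numbers below are entirely invented for illustration and are not the study’s data; the sketch only shows that the hard-task-choice rate and the average reward earned can diverge sharply on the same trial log, and that counting randomly assigned tasks as choices skews the choice rate.

```python
# Hypothetical sketch (invented toy numbers, NOT the study's data): why the
# hard-task-choice rate and the average reward earned are not interchangeable
# metrics on an EEfRT-style task, and why including randomly assigned
# (non-chosen) trials distorts the choice rate.

def make_trial(hard, reward, random_assigned=False):
    """One trial record: task difficulty, reward earned, and whether the
    task was randomly assigned instead of chosen by the participant."""
    return {"hard": hard, "reward": reward, "random": random_assigned}

def average_reward(trials):
    """Mean reward earned per trial (the metric item 5 calls correct)."""
    return sum(t["reward"] for t in trials) / len(trials)

def hard_choice_rate(trials, exclude_random=True):
    """Fraction of hard-task selections; by default drops trials where no
    choice was made because the task was randomly assigned (item 3)."""
    considered = [t for t in trials if not (exclude_random and t["random"])]
    return sum(1 for t in considered if t["hard"]) / len(considered)

# Toy "controls": often pick hard tasks but fail to complete some of them.
controls = (
    [make_trial(True, 2.0)] * 3     # hard, completed and won
    + [make_trial(True, 0.0)] * 2   # hard, attempted but not completed
    + [make_trial(False, 1.0)] * 5  # easy, won
)

# Toy "patients": rarely pick hard tasks, win their easy tasks, and have two
# randomly assigned easy tasks in which no choice was made at all.
patients = (
    [make_trial(True, 2.0)] * 2
    + [make_trial(False, 1.0)] * 8
    + [make_trial(False, 1.0, random_assigned=True)] * 2
)

# The choice-rate metric shows a large gap (0.5 vs. 0.2), and counting the
# randomly assigned trials as choices widens it further; yet the average
# rewards earned by the two toy groups are nearly identical.
```

Any serious re-analysis would, of course, have to be run on the actual trial-level data released with the paper; this sketch only demonstrates that the two metrics are not interchangeable.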

Many thanks to all advocates, patients, caregivers, and concerned stakeholders who are taking part in this Call to Action. I realize that what I have described is outrageous and might invoke justifiably strong emotions, but keeping our correspondence and meetings (some of which are already underway) professional and polite is most likely to produce results.

***

I would like to thank Eileen Holderman, Ella Peregrine, Carrie Patten, and Ed Burmeister for their invaluable feedback, contributions, and support. I am solely responsible for any inaccuracies or mistakes in the analysis.

I have no conflicts of interest and, in particular, am not compensated by any government agencies, ME non-profits or experts, or other patients.

Open Access: In my four-part series on the Effort Preference claim, I shared quotes, data, images from the paper “Deep phenotyping of post-infectious myalgic encephalomyelitis/chronic fatigue syndrome” and its attachments under the Creative Commons license, a copy of which can be found here. I indicated when and how I re-analyzed the data.


The NIH Intramural ME Study: “Lies, Damn Lies, and Statistics” (Part 3)

This is Part 3 of a 4-part article on NIH’s Effort Preference claim.

Part 1 can be found here.

Part 2 can be found here.

In this Part 3, I will discuss the EEfRT as a psychological measure, NIH’s frantic attempts at damage control in response to the firestorm reaction to the intramural paper, the agency’s decades-long obfuscating characterization of ME as merely fatiguing, its reframing of fatigue in ME as being purely subjective, the investigators’ fear of using a second-day exercise test, and NIH’s ongoing research into an allegedly dysfunctional Effort Preference in ME.

The EEfRT as a Psychological Measure

The EEfRT is a psychological measure. The study of motivation and effort-based, reward-based decision-making belongs to the field of psychology, and the EEfRT itself is a behavioral measure that—outside of studying motivation in autism and the impact of factors such as caffeine, tobacco abstinence, age, the indirect dopamine agonist d-amphetamine, and social conformity on the motivation of healthy individuals—has been used exclusively in psychological conditions, such as depression, schizophrenia, bipolar disorder, and binge eating.

The EEfRT was developed by Dr. Michael Treadway, a clinical psychologist with a focus on studying psychiatric symptoms related to mood, anxiety, and decision-making. His TRead Lab—TRead stands for “Translational Research in Affective Disorders”—researches decision-making in affective disorders, e.g., depression and bipolar disorder. The EEfRT was developed specifically for use in psychiatric populations.

Deficits and dysregulation of motivation have been identified as key contributors to psychopathologies, such as anhedonia and addictive behaviors, and effort valuation, as measured by the EEfRT, has been established both as a psychological construct used to understand psychopathology and as a significant predictor of it. See, for example, the paper “Effort valuation and psychopathology in children and adults” (screenshots below).

According to the paper, “Previous research regarding the role of effort valuation in psychopathology has consistently found differences in effort-based decision-making between patients with mental disorders and healthy comparison subjects.”

Another paper states, “Research using the EEfRT across psychiatric populations can identify similarities and differences between [eating disorder] and other psychiatric conditions on effort expenditure….”

Claiming that the EEfRT is not a psych measure is simply untrue.

NIH’s Attempt to Justify the Effort Testing

Since the paper was published, the Effort Preference claim and other aspects of the study (such as the pitiful cohort sizes, which undermine the significance of any findings) have been met with enormous condemnation by researchers, ME advocates, and patients. As a result, NIH has gone into damage-control mode, unsuccessfully arguing that Effort Preference is not a psychologizing or otherwise harmful concept. The agency has also activated surrogates, such as Komaroff—Anthony-of-all-trades and NIH study case adjudicator, manuscript commenter, and paper reviewer—to deliver to the media the seemingly immortal, destructive trope that ME is real, as if there were any reasonable doubt about it, thereby creating and/or perpetuating such doubt.

NIH-Symposium—Koroshetz
NIH has circled the wagons going as far as involving Dr. Walter Koroshetz in the recent NIH Symposium on the study (at 0:46). Koroshetz spent more than half of his remarks on highlighting the issues of fatigue and effort in ME, trying to make a silk purse from the sow’s ear of Effort Preference. That is extraordinary when you consider that Koroshetz is the Director of NINDS, reporting directly to the NIH Director. Koroshetz promoted none of the other study findings; his main goals of presenting at the Symposium were to sell, and lend a scientific air of legitimacy to, Effort Preference, the primary finding of the intramural ME study, and to tell ME patients to stay tuned for Long COVID research to possibly benefit them. He also heaped high praise on Walitt and Nath.

Koroshetz labored to persuade the audience that effort means something different to neurologists than it does to the lay person. He claimed that the study has shown that there is a computational problem in the brains of ME patients that leads them to unconsciously overestimate effort and/or underestimate rewards through no fault of their own. He was emphatic when he said that NIH is not claiming that patients do not want to make an effort, except that that is exactly how the term Effort Preference will be interpreted; it is the only reasonable interpretation. No intellectually honest person would argue otherwise unless completely clueless.

NIH-Symposium—Madian
Madian’s Symposium presentation followed in Koroshetz’s footsteps (at 2:21:16). Madian is first author of the paper “Repetitive Negative Thought and Executive Dysfunction: An Interactive Pathway to Emotional Distress” on anxiety and depression and senior author of the paper “Non-pharmacological Treatment of Pain: Grand Challenge and Future Opportunities” on mind-body therapeutic modalities for pain.

Like Koroshetz, Madian distinguished the layperson’s understanding of effort from effort exertion as the result of the brain’s valuation network, which unconsciously computes the cost-benefit ratio of exertion by weighing energy costs against potential rewards (effort discounting). According to Madian, the inter-individual variability of that computation is the definition of Effort Preference—as though Effort Preference were a medically accepted characterization. To be very clear, there is no established meaning of the term Effort Preference, and there is certainly no accepted meaning of the term or concept as a defining feature of any disease.
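The cost-benefit computation Madian invoked is conventionally modeled in the effort-discounting literature with a simple subjective-value equation: the expected reward is weighted by its probability and discounted by a cost that grows with the required effort. The sketch below is only a minimal illustration of that general idea; the linear cost form, the function names, and all parameter values are my assumptions, not NIH’s or Madian’s model:

```python
# Minimal illustration of effort discounting: the subjective value of an
# option is its probability-weighted reward minus a cost that grows with
# the effort required. All numbers here are illustrative only.

def subjective_value(reward, win_probability, effort, k=1.0):
    """Linear effort-discounting model: value = p * reward - k * effort."""
    return win_probability * reward - k * effort

# A decision-maker choosing between a low-effort and a high-effort option
# picks whichever carries the higher subjective value.
easy = subjective_value(reward=1.00, win_probability=0.5, effort=0.2)
hard = subjective_value(reward=3.00, win_probability=0.5, effort=1.0)
choice = "hard" if hard > easy else "easy"
```

Under this framing, the discounting parameter k captures how steeply a given person devalues effortful options; treating between-person variability in that computation as a trait is what the Symposium presented as Effort Preference.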

The paper, too, attempts to lend Effort Preference an air of legitimacy by pretending that it is well established, which it is not:

“[Effort preference] is often seen as a trade-off between the energy needed to do a task versus the reward for having tried to do it successfully.”

“[O]ften seen” by whom? By precisely nobody other than NIH, and only when it comes to ME.

Effort Preference as a Choice
Koroshetz and Madian talked a good game during the Symposium in their shameless attempt to reframe Effort Preference and pacify the community, but the truth is that there is no discussion of the unconscious nature of Effort Preference in the paper. Instead, the paper focuses exclusively on the behavior of patients during the EEfRT with respect to choosing hard versus easy tasks, clearly referring to conscious decision-making and implying blame.

Nobody is arguing with neurology or neurobiology. However, invoking unconscious decision-making is outlandish with respect to the EEfRT, a task that quite literally involves a deliberate, i.e., conscious, choice between hard and easy tasks. The trial-by-trial variation in reward magnitude and in the probability of a trial being a win trial is intended to influence the participants’ conscious decision-making. In fact, those aspects of the EEfRT regularly lead participants to make strategic choices. NIH’s argument that decision-making on the EEfRT is unconscious is inconsistent with the design of the task.
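To make concrete how those trial parameters invite deliberate strategy, here is a toy calculation of the expected payoff of each option on a single trial. The structure (a fixed easy reward, a larger variable hard reward, and a displayed win probability applying to both) follows the published EEfRT design in broad strokes, but the specific figures, the flat effort cost, and the function names are my illustrative assumptions, not NIH’s analysis:

```python
# Toy expected-value comparison for a single EEfRT-style trial.
# The easy task pays a small fixed reward; the hard task pays a larger,
# variable reward but demands far more button presses. A displayed win
# probability applies to both options. All figures are illustrative.

EASY_REWARD = 1.00   # fixed easy-task payoff (illustrative)
EFFORT_COST = 0.50   # flat subjective cost of the hard task (assumed)

def strategic_choice(hard_reward, win_probability):
    """A reward-maximizing participant compares expected values per trial."""
    ev_easy = win_probability * EASY_REWARD
    ev_hard = win_probability * hard_reward - EFFORT_COST
    return "hard" if ev_hard > ev_easy else "easy"

# High-probability, high-reward trial: the extra effort pays off.
best_case = strategic_choice(hard_reward=4.30, win_probability=0.88)

# Low-probability, low-reward trial: the extra effort is not worth it.
worst_case = strategic_choice(hard_reward=1.24, win_probability=0.12)
```

A participant who conditions hard-task choices on the displayed reward and probability in this way is, by construction, making deliberate trial-by-trial decisions, which is precisely why the task’s design is at odds with an "unconscious" framing.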

Linguistically, the term preference strongly suggests a choice—in this case the choice of ME patients, due to some alleged miscalculation of how much they can exert, to invest less effort than the authors claim patients safely could. The American Psychological Association Dictionary of Psychology (remember that effort research is typically done in the field of psychology, see more under “The EEfRT as a Psych Measure”) confirms this interpretation.

In addition, the authors admit that the EEfRT involves conscious decision-making when they claim that patients were pacing by reducing their button-pressing speed on the easy tasks. Pacing is, of course, unquestionably a conscious choice. The authors even quote an ME study participant expressly describing pacing as “a conscious choice” as allegedly supportive of their framing of disrupted effort discounting and an “unfavorable” Effort Preference as unconscious phenomena, thereby directly contradicting their own claim:

“The finding of a difference in effort preference is consistent with how participants describe pacing. One participant describes: ‘You have to make a conscious choice of how much energy [to use and] whether or not something is worth crashing for. It’s hard because no sane person would ever participant [sic] to suffer and … that’s what you’re doing [by choosing] an activity that will make you crash. You are going to suffer… You have to decide what gives you meaning and what is worth it to you.’” [emphasis added]

On the topic of the authors contradicting their own findings, it is worth pointing out that they never explained why they did not entertain the possibility that patients were pacing when they chose slightly fewer hard tasks than controls did. The authors were in a pickle and had to settle for an inconsistent analysis of their EEfRT findings and hope nobody would notice. They had no choice but to invoke pacing as the reason for patients’ declining button-pressing speed on easy tasks, because the only alternative explanation would have been fatigue, which the authors could not concede: a group difference in fatigue sensitivity on hard-task choices would have ruled out an altered Effort Preference and thereby refuted their central claim. Even so, the pacing interpretation is questionable; it is more likely that patients simply realized they did not need to press the button on the easy task as quickly as they did initially, since they continued completing easy tasks even after they slowed down. At the same time, the authors had to steer clear of pacing as the explanation for patients selecting slightly fewer hard tasks than controls, because pacing is conscious, and a conscious explanation would have contradicted the effort-discounting and Effort Preference claims, which are built on the unconscious narrative.

“Oh, what a tangled web [they] weave, when first [they] practice to deceive.” —Sir Walter Scott

Effort Preference is an Overwhelmingly Prejudicial Label
Even when the term preference is viewed as a liking or favoring as opposed to a choice, it is a tremendously stigmatizing label when it comes to preferring something undesirable, such as preferring to perform easy instead of hard tasks, and the authors would have been keenly aware of that. In fact, they characterized the Effort Preference of ME patients as “unfavorable.” As mentioned, the label Effort Preference has never been used in connection with any disease. Not even the creators of the EEfRT have used the term Effort Preference, and neither did any of the subsequent EEfRT studies, but the allegedly unbiased NIH investigators (Walitt claimed to have no bias, no bias at all, about anything) felt it was just fine to debut a stigmatizing label for a disease that has been as disparaged as ME has. They might as well have used the term Malingerers’ Preference. The label Effort Preference makes the term Chronic Fatigue Syndrome seem downright scientific, an extremely high bar. NIH managed to take the defaming of ME patients to a new level, and doing so was obviously intentional.

NIH claims that their effort finding is grounded in neurobiology. Then why not name it something that sounds like neurobiology instead of laziness? For example, the authors point to a possible connection between the hard-task choices that 15 patients made on one occasion and a dysfunction of brain regions that drive the motor cortex, such as the temporo-parietal junction (TPJ), as if correlation equaled causation. Given that NIH made up a new term out of whole cloth, it would have been easy to choose a respectful, scientific designation, such as TPJ Dysfunction, but that would have defeated the purpose of the EEfRT testing. Obviously, the term Effort Preference was chosen for maximum detrimental impact on the reputation of ME patients. The agenda of the NIH authors here is glaring, their contrived rationalization is unabashed, and their gaslighting skills are advanced. Defaming a vulnerable patient population is not science, and it should never be part of government-generated, taxpayer-funded “science,” and yet that is exactly what NIH did.

Prior Use of the Term
Again, Effort Preference is not an established term. It has been very sparingly used in a few different, inconsistent ways and never as an established term or concept or in connection with a disease, let alone as the defining feature of a disease.

For example, one study used effort-preference terminology to describe the preference to receive effort information before receiving reward information in choosing to complete a task.

Another study used the term in connection with the finding that participants preferred to exert effort for others rather than themselves.

NIH’s Reframing of the EEfRT
A virtual-audience member asked during the NIH Symposium (at 2:52:19) why NIH used the EEfRT in an unprecedented (non-validated?) way. There was no hint in the paper that NIH was completely reframing the EEfRT as a measure of effort discounting, i.e., as a measure of whether ME patients falsely perceive effort and their ability to exert, instead of following the established interpretation of the EEfRT as a measure of reward motivation, effort allocation, and reward-based decision-making. The reader of the paper would have no idea that NIH was introducing an entirely new interpretation of the EEfRT.

Walitt fielded the question. He referred to Madian’s comments (see above) in terms of “what Effort Preference is for us.” That is an apparent admission of the fact that NIH went rogue with the interpretation of the EEfRT. It is highly unusual for researchers to interpret an established test in a brand-new way with potentially far-reaching consequences without discussing and justifying doing so and without showing that their new interpretation has any validity. Scientists are typically eager to receive credit for novel and unprecedented research findings or interpretations. Instead, the NIH investigators carefully attempted to cloak their bastardization of the EEfRT by pretending that its use to assess effort discounting and identify an altered so-called Effort Preference in a disease is established science. It is not. Walitt continued by saying that the authors chose to frame the EEfRT results as a measure of Effort Preference in order “to reflect the conscious and unconscious aspects that guide the moment-to-moment choices that are made during the effort task.” If that is so, then why choose a term that indicates and even stresses a conscious choice, and why choose the EEfRT?

Both Walitt and Madian were clearly petrified of ad-libbing when they spoke about NIH’s EEfRT conclusions. They had written out in advance the EEfRT presentation and answers to questions from the virtual audience about the EEfRT and were reading both. This indicates an awareness of just how thin the EEfRT findings are and how carefully they have to be framed. Constructing such a vulnerable house of cards as Effort Preference in ME requires extreme discipline when presenting the alleged results. One slip-up, and the authors are exposed. No wonder scrutiny is considered hostile by Nath.

The sparse EEfRT sections of the paper are written in the same vein: the authors chose to discuss only the absolute minimum of their effort testing and alleged results, leaving out key items such as limitations, validity issues, potential confounding factors, relevant analyses and graphs, and explanations of decisions regarding inclusions and exclusions. Unlike with exclusions in other parts of the study, the data-exclusions spreadsheet (supplementary data 23) does not explain why control F was excluded from the EEfRT analysis; all the spreadsheet says is that the data was invalid, not why.

What the authors did include in the paper contains significant misrepresentations and misinterpretations of the results. It is obvious that the authors invested substantial effort in obscuring the improprieties in the EEfRT analysis.

Other Neurological Diseases
Madian claimed that the EEfRT has been used in several neurological disorders. I have reviewed dozens of EEfRT studies and have found none that involved neurological diseases or any medical context other than mental-health conditions. In fact, numerous EEfRT studies have excluded potential participants because of a history of a major/lifetime/significant medical disease, which makes sense given the physically intense nature of the EEfRT.

Parkinson’s Disease Study
Madian equated NIH’s use of the EEfRT in ME patients to a study that measured the impact of dopaminergic medication on the exertion of effort for rewards by patients with Parkinson’s Disease. That study, although it examined effort, did not use the EEfRT. It found that Parkinson’s patients on dopamine medication chose to invest more effort for a given reward than Parkinson’s patients off dopamine. Madian argued that the NIH study’s findings and the Parkinson’s study’s findings are analogous because NIH allegedly found a positive correlation between norepinephrine levels in the cerebrospinal fluid of ME patients and their Effort Preference. Both dopamine and norepinephrine are neurotransmitters, of course.

It appears that NIH had been frantically searching, after the fact, for a justification for its use of the EEfRT in ME patients. The Parkinson’s study, which the NIH authors seem to have located only after publishing their ME paper, was apparently the inspiration for working alleged neurobiological aspects of the Effort Preference claim into Madian’s and even Koroshetz’s NIH Symposium presentations.

However, the Parkinson’s study is distinguishable in a number of ways. First, it designed the experiment in a way that controlled for confounding factors. For example, it calibrated the test requirements to each participant’s maximum performance, so that the outcomes actually reflected the choice to invest effort based on an action’s expected value rather than motoric ability. The NIH investigators, on the other hand, chose not to control for differences in motoric or other ability, and as a result, there was a dramatic difference in the groups’ physical ability to complete hard tasks, invalidating NIH’s effort findings (see more on NIH’s failure to do so in Part 2 under “Confounding Factors and Validity Issues of the EEfRT—Physical Inability of ME Patients to Complete Hard Tasks”).

Further, there was no choice to be made by the Parkinson’s study participants between easy and hard tasks (an invalid metric for the willingness to expend effort within the confines of the EEfRT, as explained in detail in Part 2 under “Game Optimization Strategy”), and there was no element of luck; instead, the Parkinson’s investigators instructed the participants to accumulate as many virtual stakes as possible, and participants who performed the required task won the stakes in all trials, as opposed to in win trials only, as was the case in the NIH study. The Parkinson’s study was, therefore, not confounded by participants’ optimization strategies. If the NIH investigators had indeed been influenced by the Parkinson’s study to test effort in ME, why did they not use its study design, which is clearly superior to the EEfRT because it cuts down on validity concerns? To avoid a misunderstanding: I believe that most effort testing is fraught with severe issues, but I want to give the Parkinson’s study investigators credit for at least trying to address some of them, unlike the NIH investigators.

Of course, the Parkinson’s researchers also did not call what they described Effort Preference, nor did they allege that the study results spoke to anything other than effort exertion for rewards. The NIH researchers, however, over-generalized the significance of their findings by claiming that they apply to any and all decision-making by ME patients involving effort (see details in Part 2 under “Confounding Factors and Validity Issues of the EEfRT—Misrepresentation of EEfRT Scope”).

Moreover, the controls in the NIH study showed an inverse correlation between cerebrospinal-fluid norepinephrine levels and alleged Effort Preference (ME patients had shown a positive correlation), so it is entirely unclear what the norepinephrine results mean, if anything, especially since the NIH paper also claims that norepinephrine levels in cerebrospinal fluid did not differ between the two groups.

The reference to the Parkinson’s study by Madian felt desperate. Surely, if that study had been the NIH authors’ motivation to study effort in ME patients—if they had hypothesized that neurotransmitters affect the willingness of ME patients to exert themselves—they would have mentioned it in their paper; they did not. Instead, by the authors’ own admission, the “primary objective” of the intramural study was to show “the existence of EEfRT performance difference” between ME patients and controls (see Part 2 under “Effort Preferences as a Defining Feature of ME”).

Moreover, the NIH paper itself tells us that the Parkinson’s study was assuredly not why NIH investigators chose the EEfRT. The real reason was that they were hoping that it would give them an opening to claim that ME patients only believe they cannot exert themselves when they actually can. We know this because the NIH paper’s Supplementary Results (page 9) expressly tell us the investigators’ reason for including the EEfRT testing in the study:

“Alterations in the ‘sense of effort’ have been reported in the literature.”

The citations for that statement are three papers asserting that ME patients suffer from a disturbed or elevated sense of effort:

• “Is the chronic fatigue syndrome best understood as a primary disturbance of the sense of effort?” (citing several papers by Wessely and cognitive-behavioral-therapy and graded-exercise-therapy cheerleader and Wessely soulmate, Sharpe)

• “Perception of cognitive performance in patients with chronic fatigue syndrome”

• “Elevated Perceived Exertion in People with Myalgic Encephalomyelitis/Chronic Fatigue Syndrome and Fibromyalgia: A Meta-analysis”

Here are some claims and suggestions from those papers:

• There is no muscle weakness in ME patients.

• “It has been empirically established that the clinical course of CFS can be modified by cognitive-behavioral therapy … to correct the reduced physical fitness resulting from excessive rest and ‘dysfunctional attitude to exercise’” (citing Sharpe). The authors celebrate that the alleged “benefits [of cognitive-behavioral therapy] continue to accrue after stopping formal treatment.” (Remember that the NIH authors claim that what allegedly defines ME, Effort Preference, is what leads to deconditioning and functional disability in ME patients.)

• Re-education of ME patients in the area of motor function was suggested. (This gives definite Lightning Process vibes.)

• ME patients suffer from “cognitive distortions” by having an “impossibly high standard of … performance”—both their own performance and what normal performance should look like.

• “[A] nagging sense of insufficiency over what patients feel they should be accomplishing may indeed contribute to an overestimation of exertion and fatigue.”

• “People with ME/CFS and [Fibromyalgia] perceive aerobic exercise as more effortful [than] healthy adults.”

• “People with ME/CFS and [Fibromyalgia] exhibit elevated [perceived exertion] during exercise.”

These psycho-babble papers, which align perfectly with the documented beliefs of NIH’s biopsychosocial disciples, are what inspired the NIH investigators, by their own admission in the paper, to test ME patients’ effort exertion. As you can see, there is no mention of a correlation between neurotransmitters and effort. Madian seems to have used the Parkinson’s study as an after-the-fact fig leaf for the utterly indefensible decision to use the EEfRT in the intramural study and to declare patients’ allegedly dysfunctional perception of their ability to exert a defining feature of ME.

NIH’s Obsession with Fatigue in ME, Reframing of Fatigue, and Denial of Established CPET Science

The authors used their new Effort Preference concept to reframe fatigue in ME:

NIH’s Historical Obsession with Fatigue
There is a well-documented history of NIH’s obfuscation of the true nature of ME by falsely reducing ME to fatigue. One of countless examples is the following image in a slide used by Nath; look at his emphasis of the word fatigue.

(Also note the preposterous stock photo used in this slide supposedly illustrating the severity of ME: a sad boy resting his head on his forearm.)

NIH’s culture of misrepresenting ME goes back decades, all the way to the late Dr. Stephen Straus, an NIH virologist who was involved in renaming ME Chronic Fatigue Syndrome and redefining it with the Fukuda criteria, an overly broad definition centered on fatigue that thereby captures many who do not have ME. The late ME advocate and author of The CFS Report, Craig Maupin, obtained, through a Freedom of Information Act request, a letter written by Straus to the lead author of the Fukuda criteria. In that letter, Straus went on record with his plan to reframe ME as “Chronic Fatigue,” with the stated goal of causing ME to cease to exist as a recognized disease, which Straus called a “desired outcome.”

Straus’s agenda is as strong as ever at NIH. In fact, there is a push at NIH and other federal health agencies to treat all diseases they view as post-infectious as a monolithic entity—thereby misdirecting and hampering research and treatment for these complex diseases—by identifying them through a single shared symptom, fatigue, which is common in a prodigious number of health issues. Nath asserted, during the May 28, 2024 NIH Advocacy call, that it would be more appropriate to use the term Post Acute Infection Syndromes for ME, Long COVID, and Post-Ebola Syndrome instead of allowing the continued use of their distinct names. Nath proposed putting “a whole slew of diseases” under that umbrella term. He wants to include Gulf War Illness and Sick Building Syndrome; never mind that they are not necessarily triggered by an infection.

Of course, Walitt has been promoting the lumping of ME and Fibromyalgia for many years (see Part 1).

There is a pronounced difference between studying effort in ME, which NIH and other federal health agencies have maligned for decades as mere fatigue, and studying it in diseases such as Cancer, Multiple Sclerosis, or Rheumatoid Arthritis, which are not at risk of being distilled down to fatigue.

NIH’s reductionist focus on fatigue is tantamount to grasping the tail of an elephant and insisting one is holding a snake. There is a vast body of ME research on which NIH could have built in lieu of testing for effort, had it not been for NIH’s pathological and deceptive fixation on fatigue. ME is a complex disease that presents with many disabling symptoms. It is not mainly characterized by fatigue but rather by the persistent, profound, and often severe dysregulation of multiple bodily systems, including, but not limited to, the immune, neurological, cardiovascular, and endocrine systems; energy metabolism, production, and transport; and the Hypothalamic-Pituitary-Adrenal Axis. Fatigue is a symptom, not the symptom.

As fatigue is not the driving factor or core symptom of ME, curing or treating fatigue in ME would leave patients still disabled, just as curing or treating fatigue in cancer would still leave most cancer patients to die absent a treatment that stops the abnormal growth of cells. The following signs and symptoms are well established in ME. This non-exhaustive list is based mainly on the ME:ICC and the Myalgic Encephalomyelitis International Consensus Primer for Medical Practitioners, authored by expert clinicians and scientists in the field of ME.

With respect to issues with muscle strength after exertion: they are much more likely a protective downstream response designed to keep the body from being damaged by further exertion than the result of a miscalculation of effort. If that is the case, then treatment with medication that affects neurotransmitters, such as dopamine and norepinephrine, in an attempt to act on the brain’s valuation center for effort-based decision-making—the unconvincingly purported impetus of NIH’s effort inquiry in this study (see above under “NIH’s Attempt to Justify the Effort Testing—Parkinson’s Disease Study”)—is likely to be harmful, as it will cause patients to override their limits. That is why treatments with stimulants such as Ritalin (affecting dopamine and norepinephrine levels) or caffeine tablets (impacting dopamine, glutamate, and GABA) tend to have horrid long-term outcomes in the ME population, not unlike modalities such as the Lightning Process that indoctrinate patients into ignoring their limits.

NIH’s Escalating Reframing of the Fatigue Narrative
In this paper, NIH has reframed and thereby escalated its ME fatigue narrative. Although greatly diminished muscular strength and work capacity, i.e., fatigue/exhaustion and fatigability, are obviously part of the ME presentation, NIH claims to have found them only in controls, not in ME patients. Apparently, the agency went from insisting that ME is nothing but fatigue to the even worse narrative that there is no objective fatigue or issue with muscle strength in ME; rather, any fatigue patients experience is subjective, i.e., the result of a false perception of effort and fatigue (an altered Effort Preference) that leads to deconditioning and functional disability. That is what NIH purports to believe, and falsely claims, defines ME.

The paper has laid the groundwork for reframing ME accordingly:

“[E]ffort preference, not fatigue, is the defining motor behavior of this illness.”

and

“Fatigue is defined by effort preferences….”

In NIH’s press release for the study, Walitt claims:

“We may have identified a physiological focal point for fatigue in this population. Rather than physical exhaustion or a lack of motivation, fatigue may arise from a mismatch between what someone thinks they can achieve and what their bodies perform.”

Nath describes the allegedly altered Effort Preference in patients as relating only to pacing, whereas according to Koroshetz, Walitt, Madian, the paper itself, and the NIH press release, the alleged dysfunctional effort discounting in ME patients results in a general false perception of effort and fatigue, without any limitation of the Effort Preference claim to pacing. The more inconsistent and confusing the narrative, the less subject to scrutiny and criticism it will be, or so NIH hopes.

How does an altered Effort Preference explain objective signs and symptoms such as the objectively measurable exacerbation of ME symptoms as the result of exertion (PENE, often referred to by the less specific and less scientifically accurate term and concept Post-Exertional Malaise or PEM), susceptibility to viral infections with prolonged recovery periods, reactivated viruses, low-grade fevers, enlarged lymph nodes, orthostatic intolerance (POTS and Neurally Mediated Hypotension), elevated resting heart rate, elevated heart-rate during and after eating, low heart-rate variability, abnormally low body temperature, etc.? It cannot. NIH is, of course, fully aware of that, but it has sacrificed ME science on the biopsychosocial altar.

The decision to study effort in ME is clear evidence that the investigators refuse to accept the findings of ME expert researchers, who have amassed over 10,000 published, peer-reviewed papers documenting the biomedical abnormalities of the disease, as well as the reported experience of ME patients. There was an overwhelming confirmation bias at play here: the investigators were motivated to claim that ME is defined by a particularly prejudicial form of fatigue that they made up, i.e., Effort Preference, in direct contradiction to the ME expert clinicians and researchers.

NIH’s new focus on Effort Preference constitutes a continuation and intensification of the agency’s misrepresentation of the nature of ME and a misdirection of its efforts and the efforts of other researchers by gatekeeping extramural research, continuing the indefensible decades-long practice of rejecting nearly all grants for biomedical research into ME with contrived reasons. How many more “Fatigue Self-Management” or “Biofeedback and Hydrogen Water as Treatments for Chronic Fatigue Syndrome” studies by Dr. Fred Friedberg will NIH fund? Will NIH now approve grants only for those extramural researchers who toe the Effort Preference line? The NIH researchers and bureaucrats are nowhere near as subtle in their institutional bias as they seem to think they are.

NIH’s Relentless Misnaming of ME
NIH broadcasts its persistent bias by repetitively and nearly exclusively using the trivializing and inaccurate terms chronic fatigue (a non-specific symptom in a large number of health issues) or Chronic Fatigue Syndrome when referring to ME as well as labels such as condition or disorder. They also like to mix and match medical entities, such as ME and Chronic Fatigue Syndrome (ME/CFS). Koroshetz added a new twist to this during the NIH Symposium when he repeatedly referred to ME as “this problem” or “this general problem.” Ongoing institutional refusal to use appropriate nomenclature perpetuates the misperception and discounting of this severely disabling neuro-immune disease.

NIH’s Denial of Well-Established Two-Day CPET Science
A major symptom of ME is PENE. Without PENE, an ME diagnosis is a misdiagnosis. The only way to objectively confirm PENE is with a cardiopulmonary exercise test (CPET) performed on two consecutive days. The dramatic drop-off on day two of this test, during which maximum effort is objectively confirmed, ruling out cheating, is incontrovertible evidence of the severe limitations of capacity in ME. That being the case, both things cannot be true: (a) that patients are objectively unable to repeat on day two (or whenever they are in PENE) their day-one CPET results (or their baseline performance) and (b) that patients merely think they cannot exert on day two (or whenever they are in PENE) as much as they did on day one (or at their baseline) due to an altered perception of effort and/or rewards resulting in a misperception of their capacity to exert. The authors agree with that dichotomy:

“as fatigue develops, failure can occur because of depletion of capacity or an unfavorable preference.” [emphasis added]

The two-day CPET results of ME patients, without fail, demonstrate that there is a depletion of capacity, ruling out the “unfavorable”-preference alternative. The way the authors get around this fatal contradiction in their claim is by ignoring, if not denying, the extensive body of well-established CPET science in ME. Without even trying to look into the capacity issue in the patient group by including a second-day CPET, the authors allege that ME patients have “an unfavorable preference.” Their proof? The debunked results of the EEfRT, an unreliable behavioral measure.

The investigators seem to have been terrified of including a two-day CPET—the gold standard for confirming PEM or PENE—in their protocol and, at the direction of Walitt, did not. The purported rationale Walitt gave during the Symposium (at 3:59:08) for not using a two-day CPET is that the physiological measurements of the CPET (“further collapse of metabolic activity”) do “not measure the experience of [PEM]” but rather cardio-respiratory performance. Moreover, Walitt claims that a second-day CPET was not necessary in light of a brand-new instrument, Qualitative Interviews, that was developed by NIH for use in the phenotyping study as the result of a focus-group PEM study. Walitt asserts that the investigators were able to induce PEM with only a one-day CPET, as allegedly shown by the Qualitative Interviews. Never mind that only eight ME patients participated in the CPET (although the Supplementary Information of the paper, on page 14, inconsistently states that it was nine patients). Finally, Walitt said the researchers did not want to risk harming patients by asking them to do a second CPET.

Whether one believes that last reason likely depends on one’s knowledge of ME history, including NIH’s history with the disease, and on one’s understanding of what NIH did in this study. The fact remains that PEM was not objectively established by the NIH investigators led by Walitt. The second-day CPET shows, without a doubt, whether somebody is in PEM (or PENE, which NIH did not consider). CPETs are an objective measure that would detect cheating. They provide specific and objective data on reproducibility, metabolic responses, workload, etc., at the anaerobic threshold as well as at peak. Qualitative Interviews and Visual Analog Scales, which were also used in the feeble attempt to confirm PEM, are merely subjective tools that cannot do that, and the claim that patients were in PEM after the CPET is a conclusory assertion that was not objectively confirmed. Exercise intolerance occurs in diseases other than ME, but it does not constitute PEM in most of those other diseases, and it is exceedingly easy to misunderstand the concept of PEM when one has never experienced it and to mistake feeling worse after exercise for actual PEM.

Given that a large percentage of the individuals in the patient group were selected with non-specific criteria (IOM and Fukuda criteria), objectively confirming PEM was critical. Both the IOM and Fukuda criteria were paid for by U.S. federal health agencies. The Fukuda definition was co-authored by UNUM-sponsored Sharpe. Wessely was on the International Chronic Fatigue Syndrome Study Group that was involved in the Fukuda definition’s creation. Both definitions capture many individuals who do not have ME and are, therefore, much too broad for research purposes (despite this lack of specificity, Fukuda has been used extensively, and inappropriately, in research, certainly once more specific definitions, such as the ME:ICC, became available). Any scientist interested in solid findings would have insisted on using the strictest criteria available, the ME:ICC, or at the very least the CCC.

The IOM definition was satisfied by 100% of patient participants; the Fukuda definition was satisfied by 82% of the patient participants, and the narrower Canadian Consensus Criteria (CCC) were satisfied by only 53% of the patient participants. So, nearly half of the patients did not satisfy the CCC, the only definition of the three that is appropriate for use in research. Consequently, there is a real chance that almost 50% of the ME study participants did not actually have ME.

When NIH purchased a new definition from the IOM in 2013, advocates protested fiercely and warned of, among other issues, its use in research. They predicted that careless researchers, such as the NIH investigators, would disregard the danger stemming from the fact that the IOM definition is a clinical definition only, not a research definition (see the screenshot of the IOM Report below), and that it is deliberately broad because the IOM authors aimed to be overinclusive.

NIH’s recalcitrant refusal to exclusively use strict criteria for this study speaks for itself. But in any event, if the investigators had been serious about rigorous science, they had no choice but to objectively determine the presence of PEM in every patient after failing to choose narrowly focused diagnostic criteria appropriate for research. They did not, despite having promised to do so in 2016:

“All patients will be objectively tested for post-exertional malaise (PEM).”

Diagnosing PEM through subjective assessments after the one-day CPET performed in only eight patients is exactly why a quarter of the patient cohort “spontaneously recovered.” Anybody who knows just the basics of ME understands that a quarter is a recovery rate well beyond what credible expert clinicians or researchers in the field have found. The generally accepted recovery rate is 5%. There is no explanation for the outlandish “recovery” rate in this study other than that at least some, if not all, of those “recovered” patients were misidentified as having ME, which raises the question of whether any of the other patients also did not actually have ME.

We do not know how many individuals in the patient group were misdiagnosed because of the lack of the second-day CPET. The fact that adjudicators agreed on the diagnosis is a self-serving NIH talking point, but consensus does not equal objective confirmation. The existence of PEM is not a popularity contest subject to a vote. The authors did not explain why they did not feel it necessary to exclude the data of the “recovered” individuals from their analysis, thereby tainting the entire study.

The results of the second-day CPET would not only have ensured that NIH would be studying the correct cohort—something that would have resulted in very different outcomes as it would have weeded out the misdiagnosed individuals in the patient group—but they also would have dominated the paper as the objectively measured drop-off on day two would have been dramatic. This was something that some of the investigators wanted to prevent from happening at all costs because it would have made it impossible to claim that there is a problem with how patients perceive effort and rewards and with their perception of their abilities.

The authors imply that their Effort Preference claim is confirmed by the fact that patients needed “strong encouragement during CPET” in order to reach a respiratory exchange rate of 1.1. That is, frankly, embarrassing. Strong encouragement is typical with CPETs, even with healthy individuals. There is nothing unusual about it. It is not a sign that ME patients can perform at higher levels than they think. The authors were shamelessly grasping.

NIH’s Inability to Explain Outbreaks and the Efficacy of Serious Medication
The investigators owe the ME community an explanation of how Effort Preference could possibly explain the numerous outbreaks of ME over decades. Is disrupted effort discounting contagious? It would also be interesting to hear the NIH authors explain how serious medications such as Ampligen (an immune-modulator and antiviral), IVIG (a blood product), Mestinon (a Myasthenia Gravis drug), Low Dose Naltrexone (an anti-inflammatory agent improving sleep, pain, inflammation, and autoimmunity), antivirals such as Vistide and Valtrex, Rituxan (a monoclonal antibody used to treat cancer and autoimmunity), anti-Tumor Necrosis Factor biologics such as Enbrel (used to treat autoimmunity), or even just B12 shots or normal saline, etc., could possibly improve the patients’ perception of effort and of their capacity for exertion to the point of providing relief, often significant relief, for patients.

Ongoing Intramural ME Research

NIH is not done “studying” “ME.” The phenotyping study was only the first of three phases of intramural ME research.

What is particularly concerning is that NIH will continue studying what the agency calls, in the case of ME only, Effort Preference. During the NIH Symposium, Madian said:

“Further research on valuation-network damage or dysfunction as a possible contributor to these symptoms is also warranted. A closer investigation of valuation-network functioning in people with ME/CFS is already underway.”

NIH’s obvious goal is to establish Effort Preference as an accepted medical concept, which it currently is not. A few years from now, that will likely have changed, unless NIH is stopped. NIH’s ultimate objective seems to be to blame all ME symptoms on deconditioning secondary to an alleged disorder of perception.

In Part 2, I have shown, without a doubt, that the intramural study does not provide a basis for the further study of effort or of any “valuation-network damage or dysfunction” in ME patients. In fact, the EEfRT testing demonstrated clearly that there is no such damage or dysfunction. It is crucial that the community shut down this line of inquiry by NIH; otherwise, harm is guaranteed.

***

Open Access: I shared quotes, data, images from the paper “Deep phenotyping of post-infectious myalgic encephalomyelitis/chronic fatigue syndrome” under the Creative Commons license, a copy of which can be found here. I indicated how I re-analyzed the data.


The NIH Intramural ME Study: “Lies, Damn Lies, and Statistics” (Part 2)

This is Part 2 of a 4-part article on NIH’s Effort Preference claim. Part 1 can be found here: https://thoughtsaboutme.com/2024/06/10/the-nih-intramural-me-study-lies-damn-lies-and-statistics-part-1/

In this Part 2 of my 4-part series, I am analyzing the EEfRT data to show that they do not support the claim that ME patients’ symptoms are caused by dysfunctional effort discounting (overestimating effort and underestimating rewards and capacity), which is what NIH calls an altered Effort Preference. The authors included a graph, Figure 3a, the main illustration of the false Effort Preference claim, that completely misrepresents the EEfRT data and, in short, presents an entirely false picture of the EEfRT results. In addition, they failed to exclude patients who were physically unable to complete hard tasks at anywhere near acceptable levels for the EEfRT data to be valid. Moreover, the authors failed to report—other than their false conclusion—their analysis of a metric that is typically at the heart of the EEfRT analysis: the assessment of whether a group difference in probability sensitivity (typically due to game-optimization strategies) is responsible for the lower proportion or number of hard-task choices by patients. Furthermore, based on the data reported by the authors, patients performed better on the EEfRT than controls did, which the authors concealed by not sharing the relevant analysis (virtual rewards obtained). I will also show that the recorded EEfRT data are unreliable, as at least some of them are false. Finally, I will identify a large number of careless mistakes in the paper with respect to the EEfRT, demonstrating that NIH’s work on ME was phoned in.

This post is the longest in the series and requires a fair amount of stick-to-it-iveness, both in terms of its length and the complexity of the issues and details discussed. I realize that this will, unfortunately, be beyond the limits of many ME patients, but I decided not to divide it into smaller parts because of the interconnectedness of the issues and to allow for easy sharing with, and reporting to, the appropriate authorities and other interested parties of the main reasons why this study should be urgently investigated and retracted.

In order to follow along, it is important to understand the EEfRT game rules as well as the alleged findings, so I will begin by explaining those.

Modified EEfRT Game Rules

The modified EEfRT as used by the investigators is a multi-game test in which participants complete a series of repeated button-pressing trials with the goal of winning as much virtual money as possible. On each trial, participants were asked to choose between a hard and an easy task. A hard task involved pressing a button 98 times in 21 seconds using the non-dominant pinky finger; an easy task required pressing a button 30 times in seven seconds with the dominant index finger. During the trials, each button press gradually filled a white bar with red color, indicating progress toward completion of the task.

Participants were told that they would win the virtual money allocated to each trial if, by pressing the button quickly enough, they raised the bar to the top within the time allowed and if the trial was a win trial. Participants were not guaranteed to win the reward for completing the task: if a trial was a no-win trial, participants did not win the allocated amount even if they successfully, i.e., timely, completed the task. Before choosing between a hard and an easy task, participants were informed of the probability of the trial being a win trial and of its potential reward value, the winnable virtual dollar amount.

Based on that information, participants decided whether to choose the hard or the easy task for each trial. There were three levels of reward probability: 12% probability, 50% probability and 88% probability. The specified probability level for each trial was the same for hard or easy tasks, and there were equal proportions of each probability level across the test. Each easy task was eligible to win $1; hard tasks were eligible to win between $1.24 and $4.12. The levels of probability of reward attainment and the reward magnitude for hard-task choices were presented in the same order for each participant.

Participants were told at the beginning of the EEfRT that they would get to take home the actual dollar amount from two of their winning trials, which would be randomly chosen by the computer program, at the end of the test. The minimum amount a trial participant could take home was $2, and the maximum amount was $8.24.

Each trial started with a one-second blank computer screen, which was followed by a choice period of five seconds during which participants were informed of the probability of receiving a reward and the reward value assigned to that trial. If participants did not select an easy or hard task during the five-second choice period, a task was randomly assigned to them by the computer. After choosing a hard or easy task, there was another one-second blank screen before the task began. After the time for a task was up, participants received on-screen feedback regarding their successful completion of the task and whether and how much money they won. Participants continued to choose and attempt to complete hard or easy tasks for a total of 15 minutes. The button pressing for the hard task took three times as long as the button pressing for the easy task, but with the pre-task items (blank screens and choice time) and the post-task items (feedback on completion and amount of virtual money won, if any), hard tasks ultimately took about twice as long as easy tasks.

(I use the terms hard tasks and hard trials interchangeably because once a participant chose a hard task for a given trial, that trial became a hard trial.)
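For readers who want a feel for the incentive structure, here is a minimal sketch of the expected virtual payoffs implied by the rules above. The probability levels and reward ranges come from the paper; the pairing of specific rewards with probabilities on any given trial is illustrative only.

```python
# Sketch of the modified EEfRT incentive structure as described above.
# Probability levels and reward ranges are taken from the paper; how a
# specific reward is paired with a probability on a trial is illustrative.

PROBABILITIES = [0.12, 0.50, 0.88]   # chance a trial is a "win" trial
EASY_REWARD = 1.00                   # every easy task is worth $1
HARD_REWARDS = (1.24, 4.12)          # hard tasks range from $1.24 to $4.12

def expected_value(reward, p_win):
    """Expected virtual payoff of successfully completing a task."""
    return reward * p_win

for p in PROBABILITIES:
    easy_ev = expected_value(EASY_REWARD, p)
    lo_ev = expected_value(HARD_REWARDS[0], p)
    hi_ev = expected_value(HARD_REWARDS[1], p)
    print(f"p={p:.2f}: easy EV=${easy_ev:.2f}, hard EV=${lo_ev:.2f}-${hi_ev:.2f}")
```

Note that at the 88% probability level even the lowest hard reward beats the easy task in expectation (about $1.09 versus $0.88), but a hard task also took roughly twice the wall-clock time, so a rational player has to weigh payoff per minute of test time, not just payoff per trial.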

The Study’s Alleged Findings

Below is a summary of the authors’ Effort Preference claims:

1. Hard-Task Choices. This is the main Effort Preference claim made in the paper. According to the authors, ME patients chose significantly fewer hard tasks than controls (p=0.04). After applying some statistical legerdemain to the raw data—including eliminating the data of a poorly performing control who chose the lowest number of hard tasks among controls by a large margin, on par with the lowest-performing ME patient, who was not excluded—the authors claim that the probability of choosing hard tasks was significantly higher in controls compared with ME patients at the beginning of and throughout the test (p=0.04). The proportion of hard-task selections by ME patients was used as a correlate for what NIH calls Effort Preference, the decision to avoid the harder task, which the authors claim indicates an altered so-called Effort Preference in ME patients (Figure 3a).

The authors further claim that this result cannot be explained by fatigue sensitivity because there was no group difference in the decline over time regarding the ratio of the hard-task selections (p=0.53), by reward sensitivity because both groups increased their ratio of hard-task choices at the same rate with increasing reward value (p=0.07), or by probability sensitivity because there was no group difference in participants’ ratio of hard-task choices based on the probability of a trial being a win trial (p=0.43).

2. Button-Pressing Rate. ME patients demonstrated a significant decline in button-pressing rate over time while performing easy tasks (Figure 3b, p=0.003). Because such decline was not observed during hard tasks (Figure 3b), the authors concluded that the decline was not due to fatigue.

3. Completion Rate. ME patients were less likely to complete hard tasks than controls “by an immense magnitude” (p=0.0001) but not less likely than controls to complete easy tasks (p>0.05).

4. Pacing During Easy Tasks. Because the decline over time in the button-press speed of ME patients for easy tasks (see 2 above) did not result in a group difference with respect to the probability of ME patients’ completion rate for easy tasks (see 3 above), the authors concluded that patients “reduced their mechanical effort while maintaining performance on the easy tasks,” i.e., that ME patients were pacing during the easy tasks.

Hard-Task Choices

The metric underlying the authors’ Effort Preference claim is the proportion of hard-task choices the two groups made. The investigators used the ratio of hard-task choices as a correlate for an alleged misperception by patients as to their abilities or what the authors call an altered Effort Preference.

Exclusion of Control F and Improper Inclusion of Patients Too Sick for the EEfRT
According to the Figure 3a spreadsheet (attached to the paper in the Source Data file), the investigators determined that the EEfRT data of control F was invalid and excluded that data from their analysis. It is possible that I overlooked it—this is a long paper with multiple sizable attachments that do not cross-reference each other well or at all—but I was unable to determine what led to the alleged invalidity of that individual’s data. As far as I could tell, the fact that this data was excluded is not mentioned in the paper, let alone explained. Other EEfRT studies discuss why certain participants’ data were excluded, if any, as is customary in scientific papers. Moreover, the NIH paper itself explains why individuals were excluded with respect to other testing (see Supplementary Results, page 18, “Sex-based differences in Gene Expression were Validated in other Data Sets”).

Excluding the data of control F was certainly convenient for the authors since that individual chose by far the fewest total hard tasks in the control group, on par with the ME patient who chose the fewest total hard tasks, whose data were not excluded. Was the exclusion of this particular control what the authors needed to get over the statistical-significance hurdle, which they barely did with a p value of 0.04? Controls chose an average of 19.25 hard trials per control, but when you include control F, that number goes down to 18.65.

In addition, five of the fifteen ME patients who participated in the EEfRT—one third of the patient group—were physically unable to complete hard tasks at an acceptable rate, as evidenced by an extremely low completion rate for their hard tasks (each far less than 50%). They completed hard tasks at a combined rate of less than 16%, whereas controls completed hard tasks at a rate of more than 96%. The authors themselves did not mention those percentages but found, based on the groups’ actual performance, that patients were less likely to complete hard tasks compared to controls “by an immense magnitude” (p<0.0001). Had the authors excluded the data of patients who were unable to complete at least 50% of the hard trials, as required for EEfRT validity (discussed in detail below under “Confounding Factors and Validity Issues of EEfRT—Physical Inability of ME Patients to Complete Hard Tasks”), the average number of hard tasks chosen per patient would have gone up from 16.6 to 17.3.

Four of the six patients who had the most physical difficulty completing hard tasks at a rate acceptable for EEfRT-validity purposes (the sixth one completed barely more than 50% of hard tasks) chose the fewest hard tasks in the patient group. This is an indicator that their physical struggle to complete the hard tasks directly impacted their hard-task choices. Therefore, their decisions whether to choose hard or easy tasks reflected their physical limitations, not a misperception of what they are capable of performing, nor disrupted effort discounting.

Consequently, the question is whether there would still be any statistical significance with respect to the ratio of hard tasks chosen by the two groups if control F had not been excluded and if the five patients who struggled to complete hard tasks had been excluded. Had the authors done this, the average number of hard tasks chosen by patients would have been 17.3, and the average number of hard tasks chosen by controls would have been 18.65—a tiny difference.
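To illustrate how sensitive a small-sample group mean is to a single inclusion or exclusion decision, here is a sketch with hypothetical per-participant hard-task counts. These numbers are illustrative only, not the study’s raw data (which are in the paper’s Source Data file); the point is the mechanics, not the exact values.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical per-participant hard-task counts (illustrative only).
controls = [10, 18, 19, 19, 19, 19, 19, 20, 20, 20, 20, 20, 20, 20, 20, 20]
patients = [12, 15, 16, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 19, 19]

# Effect of excluding the lowest-scoring control (the role "control F"
# plays in the paper): one data point moves the whole group mean.
with_low = mean(controls)
without_low = mean(sorted(controls)[1:])
print(f"control mean with outlier: {with_low:.2f}, without: {without_low:.2f}")

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

print(f"Welch t on the hypothetical data: {welch_t(controls, patients):.2f}")
```

With group sizes of 15 and 16, dropping or keeping one low scorer can move a mean by half a point or more, which is exactly why the undisclosed exclusion of control F matters to a result that only barely cleared p=0.04.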

Example Graphs
The investigators created example graphs for their interpretation of what various outcomes—effort sensitivity, fatigue sensitivity, and reward sensitivity—would look like (see Supplementary Figures S5b-d below).

Fatigue sensitivity. They theorized that if a difference in fatigue sensitivity between the groups had been the reason for patients choosing fewer hard tasks than controls (i.e., if the ratio of hard-task choices by patients had decreased over time at a rate greater than that of controls), then abnormal fatigue sensitivity, rather than a false perception of effort, would explain the fewer hard-task choices by patients. The authors showed a simplified version of what fatigue sensitivity would look like in Supplementary Figure S5c below, where the ratio of hard-task choices declines over successive trials in one group, the one with increased fatigue sensitivity.

Reward sensitivity. If, on the other hand, patients did not value rewards properly, i.e., did not increase their hard-task choices as reward values increased at a rate comparable to controls, then patients would demonstrate diminished reward sensitivity, i.e., an issue with effort discounting (or an altered Effort Preference). That is, as shown in Supplementary Figure S5d below, in the absence of appropriate reward sensitivity, the proportion of hard-task choices would not rise as reward values increase.

Effort sensitivity. Finally, if patients had had an “aversion” to effort, there would have been a group difference in effort sensitivity or Effort Preference, i.e., patients would have selected a lower percentage of hard-task choices at the beginning of and throughout the entire task, illustrated by two parallel lines with the control group sitting higher on the y-axis than the patient group (see Supplementary Figure S5b below). This, according to the authors, would demonstrate that disrupted effort discounting (or an altered Effort Preference) explains the lower rate of hard-task choices by patients.

Supplemental Figures S5b-d:

The Alleged Outcomes
The authors compared the ratio of hard-task choices made by the two groups and claim that (1) controls chose more hard tasks than patients (p=0.04) and (2) the probability of choosing hard tasks is significantly higher in controls than in patients “at the start of and throughout” the EEfRT (p=0.04). They refer to Figure 3a for both claims.

Figure 3a (The same outcome is depicted in Supplementary Figure S5e.):

However, Figure 3a (or the basically identical Supplementary Figure S5e) tells us nothing about the first finding (actual number of hard-task choices made by groups per trial) because it only depicts the probability of choosing the hard task. The actual hard-task choices and the probability of choosing the hard task are not interchangeable, and the same graph obviously cannot illustrate two separate findings with different parameters, but that is exactly how that graph is used in the NIH paper. There is no graph in the paper depicting the actual hard-task choices or proportion of hard-task choices. (The authors seem to refer to the number of hard tasks chosen and the proportion of hard-task choices interchangeably.)

With respect to the second finding (probability of choosing the hard task) depicted by Figure 3a, that approach grossly distorts the data. The use of estimating techniques is not appropriate when one has the actual data regarding the proportion or number of hard-task choices that were made for each trial, which was the case here. In other words, one cannot convert the proportion or number of actual hard-task choices into a probability of hard-task choices with respect to EEfRT choices that have already been made.

To the degree that the authors’ claim regarding the probability of choosing hard tasks might have been forward-looking, that would be preposterous. The probability of what? The probability of the same 15 ME patients choosing the same number of hard tasks on re-testing, the probability of a different group of 15 ME patients choosing the same number of hard tasks, or the probability of the entire ME patient population choosing the same number of hard tasks? Surely, the data of 15 patients (some of whom likely did not have ME) cannot tell us anything about what the probability of other ME patients choosing the hard tasks on the EEfRT would be.

A scenario in which the use of estimating techniques might be appropriate is when the researchers have only a few data points and need to fill in the likelihood of the hard-task choices on the non-measured data points. That is not the case here as the investigators collected data for each trial. The authors did not explain why they converted the actual proportion or number of hard-task choices into the probability of choosing the hard task.
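To make the distinction concrete, here is a minimal sketch, using hypothetical binary-choice data and a hand-rolled logistic fit (this is not the paper’s statistical model, only an illustration of the general technique), contrasting the observed per-trial proportion of hard-task choices with the smooth probability curve a fitted model produces:

```python
import math
import random

random.seed(0)

# Hypothetical data: for each of 50 trials, how many of 15 participants
# chose the hard task (illustrative only).
n_participants = 15
hard_counts = [random.randint(5, 12) for _ in range(50)]

# Observed proportion per trial: the raw data, trial by trial.
observed = [c / n_participants for c in hard_counts]

def fit_logistic(counts, n, steps=20000, lr=0.001):
    """Fit P(hard | trial) = sigmoid(b0 + b1 * trial) by gradient ascent."""
    b0, b1 = 0.0, 0.0
    trials = [(t + 1) / 50 for t in range(len(counts))]  # scaled trial index
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, c in zip(trials, counts):
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))
            err = c / n - p          # gradient of the Bernoulli log-likelihood
            g0 += err
            g1 += err * x
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

b0, b1 = fit_logistic(hard_counts, n_participants)
fitted = [1 / (1 + math.exp(-(b0 + b1 * (t + 1) / 50))) for t in range(50)]

# The fitted curve is smooth and monotone; the observed proportions jump
# around it from trial to trial.
print("observed range:", min(observed), max(observed))
print("fitted range:  ", min(fitted), max(fitted))
```

The observed proportions are the data; the fitted curve is a model’s summary of them. Presenting only the smooth model output, as Figure 3a does, replaces the trial-by-trial facts with an estimate, which is exactly the substitution criticized here.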

No group difference in fatigue sensitivity. Because Supplementary Figure S5e (see below) does not resemble Supplementary Figure S5c (see above), the investigators concluded that there was no group difference in fatigue sensitivity:

“Two-way interactions showed no group differences in response to task-related fatigue….”

and

“Lack of interaction indicates similar fatigue sensitivity.”

(Remember that Supplementary Figure S5e (below) is basically the same graph as Figure 3a (above).)

In other words, the percentage of hard-task choices decreased at a similar rate by group as the number of trials increased, indicating that a difference in fatigue sensitivity does not explain the fact that, overall, patients made hard-task choices at a lower rate than controls.

What the authors omitted from their fatigue-sensitivity analysis is that patients were likely in a so-called adrenaline surge during the 15 minutes of the EEfRT. Adrenaline surges allow ME patients to temporarily display higher functionality due to bursts of false, unsustainable energy (possibly driven by adrenaline) when patients are unable to pace, such as for medical appointments, emergencies, important tasks, participating in the first NIH intramural ME study in decades, etc. Those adrenaline spikes often result in so-called crashes when the patients’ systems do not correctly down-regulate and are, therefore, not a reflection of patients’ true or safe capacity. Despite describing this symptom of ME in one of the nested NIH studies on post-exertional malaise (PEM) done as part of the phenotyping study and published years before the phenotyping paper, the phenotyping investigators proceeded with the EEfRT inquiry without taking the impact of adrenaline surges into account.

No group difference in reward sensitivity. Because Supplementary Figure S5f (see below) does not look like Supplementary Figure S5d (see above), the authors concluded that there was no group difference in reward sensitivity:

“Two-way interactions showed no group differences in response to … reward value….”

and

“Lack of interaction indicates similar reward sensitivity.”

In other words, the percentage of hard-task choices by the two groups increased at a similar rate as reward values increased, indicating that a difference in reward sensitivity does not explain the fact that, overall, patients made hard-task choices at a lower rate than controls. Consequently, there is nothing wrong with how patients valued rewards or with their effort discounting with respect to rewards.

Supplementary Figures S5e-f:

Because they found no group difference in fatigue or reward sensitivities, the authors concluded that only an “unfavorable” effort sensitivity, i.e., an altered Effort Preference, in ME patients can explain the difference between groups in the proportion of hard tasks chosen. That interpretation is incomplete—having left out probability sensitivity—and, therefore, incorrect.

Group Difference with Respect to Probability Sensitivity
The authors decided not to include a graph analyzing whether a group difference in probability sensitivity, choices made based on the probability of a trial being a winning trial, was the reason for the difference in the proportion of hard-task choices between groups. I will discuss this in detail below (under “Game Optimization Strategy”), but in essence, there was a group difference with respect to low and medium-probability trials only but not with respect to high-probability trials, which is evidence of patients having made strategic, i.e., smart, choices in line with the EEfRT instructions to win as much virtual money as possible. Therefore, there is nothing wrong with how patients incorporated into their hard-task decisions the probability of trials being win trials.

(Please be careful not to confuse the probability of choosing the hard task (graphed in Figure 3a and Supplementary Figure S5e) with the probability of a trial being a win trial in accordance with the EEfRT game instructions; those are completely different aspects of the EEfRT testing.)

Statistical Legerdemain
Before I address the failure of NIH to include the probability-sensitivity analysis (under “Game Optimization Strategy”), let’s focus on Figure 3a (identical to Supplementary Figure S5e), which is at the heart of the Effort Preference claim.

Figure 3a:

This graph certainly makes it look as though patients chose fewer hard tasks than controls from the outset and throughout the test, doesn’t it? However, Figure 3a is the result of the investigators’ application of some statistical legerdemain to the actual data. The actual raw data, i.e., the percentage of hard tasks chosen by each group per trial when no statistical measures are applied, paint a completely different picture.

I created the graph below depicting the percentage of hard-task choices made by the groups per trial, which is based on the data underlying Figure 3a, provided by NIH in the corresponding spreadsheet attached to the paper. (I excluded the data of control F, whose EEfRT data the investigators had determined to be invalid and excluded.) What actually happened during the EEfRT looks nothing like Figure 3a or what the researchers want us to believe in terms of hard-task choices patients made when compared to controls.

(The above graph reflects a minor correction to the one I originally published.)

To give the reader a sense of the actual numbers of hard tasks chosen by the groups, I also created the following graph depicting the total number of hard tasks chosen by each group per trial. Because there were 16 controls and 15 patients, I excluded (in addition to control F, whose data was allegedly invalid) one additional control to arrive at equal numbers of ME patients and controls (15 each). For that, I chose control Q, who had selected 19 hard-task trials. As mentioned, the average number of hard-task choices per control was 19.25, so this graph is slanted slightly in the authors’ favor. There is little difference between the two graphs I generated, which is to be expected because control Q performed in the middle of the pack in the control group; in any event, the two groups performed almost identically in both graphs.

The following area graph shows a bit more clearly just how slim the group-difference margins are. Basically, the area to which the black arrow points—a total of four out of 50 trials—is the main difference in terms of number of hard-task choices between the groups and, therefore, the basis for NIH’s claim that a dysfunctional Effort Preference, i.e., an alleged misunderstanding by patients as to their true capacity, defines ME.

It is easy to see why the authors chose not to generate a visual of what actually happened during the EEfRT and instead resorted to manipulating the data with statistical tools until they arrived at a figure that fit their desired outcome (Figure 3a and Supplemental Figure S5e). The latter allowed them to make it look as though patients chose significantly fewer hard tasks for every single trial throughout the EEfRT while the former shows clearly that their Effort Preference claim has no legs.

Let’s look more closely at the data in relation to the paper’s claim that the difference in hard-task choices persisted from the beginning of the EEfRT and throughout it. Dr. Nicholas Madian, a psychologist and NIH postdoctoral fellow who was apparently responsible for the implementation of the EEfRT, repeated the paper’s false claim during the NIH Symposium (at 2:29:22). When discussing fatigue sensitivity, he said, “We did again see a difference at baseline, which persisted throughout the task, indicating differences in effort discounting.” That is categorically false.

At the start. Contrary to NIH’s assertion, controls did not choose more hard tasks at the start of the EEfRT. The exact breakdown of the total number of hard tasks actually chosen by each group during the first four trials (including all patients and all controls except control F, whose data were excluded by the authors) is as follows:

Out of the first four trials, ME patients and controls chose the exact same number of hard tasks per participant. For the very first trial, arguably “the start” of the EEfRT, patients chose twice as many hard trials as controls. Contrary to what the paper and Madian claimed, controls did not choose more hard tasks than ME patients at the start of the EEfRT.

Throughout. NIH’s claim that controls chose more hard tasks throughout the entire EEfRT, with Figure 3a giving the impression that they chose more hard tasks in every trial, is false, too. For 34% of the trials, ME patients chose hard tasks at a higher rate than controls. For another 2% of trials, both groups chose the same percentage of hard tasks. For an additional 14% of trials, both groups’ hard-task choices were nearly identical. This is entirely contrary to the impression the authors give, i.e., that controls chose more hard tasks in every single trial.
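These percentage shares fall directly out of the per-trial percentages. As a hedged sketch of the classification (the 2.0-point cutoff for “nearly identical” is my own illustrative threshold, not a figure from the study or from my analysis):

```python
def trial_shares(patient_pct, control_pct, near_tol=2.0):
    """Share of trials where patients chose more hard tasks, the groups tied,
    or controls were ahead only by a hair. Inputs are per-trial hard-task
    percentages; near_tol is an assumed illustrative cutoff."""
    n = len(patient_pct)
    patients_higher = sum(p > c for p, c in zip(patient_pct, control_pct))
    equal = sum(p == c for p, c in zip(patient_pct, control_pct))
    near = sum(c > p and c - p <= near_tol
               for p, c in zip(patient_pct, control_pct))
    return {
        "patients_higher_pct": 100.0 * patients_higher / n,
        "equal_pct": 100.0 * equal / n,
        "nearly_identical_pct": 100.0 * near / n,
    }

# Four made-up trials, purely to show the mechanics.
shares = trial_shares([60.0, 50.0, 40.0, 49.0], [50.0, 50.0, 50.0, 50.0])
```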

What happened? If the authors had used bar graphs to depict the group difference in hard-task choices, as other EEfRT studies typically do, they could have shown that the patient group undeniably chose slightly fewer hard tasks and a slightly lower ratio of hard-task trials than the control group. However, the authors also needed to demonstrate that fatigue was not a factor in that group difference. That was clearly a priority for them given NIH’s pathological, habitual framing of ME as fatigue, which ironically threatened to get in the way of the altered-Effort Preference claim. Ruling out a difference in fatigue sensitivity required charting the actual hard-task choices by trial to show that both groups fatigued at a similar rate. The resulting graph would have looked like my graphs above, clearly undermining their claim. Graphing the raw data by trial would also have made it impossible to claim that patients chose fewer hard tasks at the beginning of and throughout the EEfRT because it would have been obvious that this was not the case. The only path to alleging that patients underestimate what their bodies can perform was statistical manipulation resulting in a smooth graph (Figure 3a) that allowed for the false claim that patients chose fewer hard tasks from the start and on every single trial, neatly supporting the Effort Preference claim. The only problem is that Figure 3a is contradicted by the data.

Random Assignment of Tasks
The data is even weaker than my graphs demonstrate because the random assignment of tasks (hard versus easy) when no choice was made within the five-second choice period skewed the number of hard-task choices in favor of controls: more than half (57%) of the tasks randomly assigned to controls were hard tasks, whereas only one third (33%) of the tasks randomly assigned to ME patients were. This is further amplified by the fact that ME patients had tasks randomly assigned to them more than twice as often (15 times) as controls (seven times).

Such uncontrolled differences undermine the conclusions, especially given the tiny cohort sizes. Of course, it is preposterous to include trials where no choice was made in the measure of hard-task choices. Other EEfRT studies (for example, this one) excluded trials for which participants failed to make a choice within the choice period. That does not entirely cure the problem, as participants might have chosen the hard task for those trials, and it becomes particularly problematic when the number of such trials differs meaningfully by group, as it did here, but it is preferable to including the data as the NIH authors did. The number of these instances is not dramatic, but when the margins of group difference are as slim as they were here with respect to hard-task choices, the impact of even just a few cases can easily change the outcome.

Misrepresentation of Hard-Task Choices as the Relevant EEfRT Measure
It is indisputable that controls chose slightly more total hard tasks than ME patients on the EEfRT. However, that is irrelevant for two reasons: (1) the correct measure of who performed better on the EEfRT is the amount of virtual rewards won, which is not determined by the proportion or number of hard tasks chosen (discussed below under “Game Optimization Strategy”), and (2) the reason for the difference in the proportion of hard-task choices was a group difference in probability sensitivity, which means that patients made strategic choices in order to win the game in accordance with the instructions (discussed below under “Game Optimization Strategy—Probability Sensitivity”), leading them to choose fewer hard tasks. In fact, selecting fewer hard tasks resulted in ME patients winning more virtual money than controls based on the reported data, demonstrating that, to the extent that the EEfRT tells us anything about motivation or effort discounting, patients exhibited superior motivation and effort discounting compared to controls, contrary to NIH’s claim (discussed below under “ME Patients Performed Better on EEfRT”). Both points completely gut NIH’s claim of disrupted effort discounting or an altered Effort Preference in ME patients.

Let me begin by addressing the impact of a game optimization strategy and the group difference in probability sensitivity.

Game Optimization Strategy

A major validity issue of the EEfRT is the confounding factor of game optimization strategies used by participants. After all, the EEfRT is a game with varying reward and probability levels. In its discussion of the EEfRT, the NIH authors assert the following:

“The primary measure of the EEfRT task is Proportion of Hard Task Choices (effort preference). This behavioral measure is the ratio of the number of times the hard task was selected compared to the number of times the easy task was selected. This metric is used to estimate effort preference, the decision to avoid the harder task when decision-making is unsupervised and reward values and probabilities of receiving a reward are standardized.”

The assertion that the proportion of hard-task choices is the primary measure of the EEfRT is demonstrably false. Based on the EEfRT instructions, it is improper to use the EEfRT as a measure of motivation—or an alleged false perception of effort as NIH has done. In the structure of the EEfRT, always choosing the hard task over the easy task (even choosing the hard task a large majority of the time) is not optimal if one is trying to receive the maximum reward. The use of rewards is designed to be the motivating factor, and winning as much money as possible is the goal of the test. For example, one EEfRT paper clearly and simply states the following (other EEfRT papers contain similar language):

“The goal of the EEfRT is to win as much money as possible by completing easy or hard tasks.”

Hence, merely looking at the relative proportion of hard versus easy tasks is not the correct way to assess results of the EEfRT because that approach would most definitely not lead to a maximization of rewards. If the instructions had been to choose as many hard tasks as possible, then the proportion of hard tasks chosen would be the primary outcome measure, but that is not the case in EEfRT studies and was not the case in the NIH study.

The optimal approach to increase one’s chances of receiving the maximum virtual rewards is more complex. It involves reviewing the parameters of the test to determine the best reward strategy. The key elements for each trial are the hard versus easy choice, the probability of the trial being a win trial (one where a reward is earned upon successful completion of the task), and the amount of the potential reward. It is also critical to note that the easy-task choice requires only about half the time of the hard-task choice. Therefore, making the easy-task choice allows for more trials to be completed within the 15 minutes allocated for the EEfRT and also increases the ability to choose potentially higher-reward/higher-probability hard tasks later in the test. Hence, the optimal strategy would generally dictate making the easy-task choice for low-probability/low-reward trials and the hard-task choice for high-probability/high-reward trials. With respect to 50% probability trials, the hard task would be optimal only in the case of very high reward levels and possibly not even then.

For example, if a trial has a 12% probability of being a win trial and a hard-task-reward magnitude is $1.24 (remember, an easy-task choice always has a reward value of $1), it is clear that a participant should make the easy-task choice. In such a trial, she has the same chance (12%) of obtaining about the same reward ($1 versus $1.24) and has the time to do another easy-task trial to win another dollar in the same amount of time it would take to complete the hard-task trial. That would result in winning $2 by choosing two easy trials as opposed to $1.24 by choosing one hard trial, which would take about the same amount of time as the two easy trials. Moreover, the second easy trial might have a higher probability of being a winning trial. That same logic holds for most low-probability trials, particularly when the potential hard-task reward is less than $2 or $3, which is the case for five or 11, respectively, out of the 18 different potential hard-task reward values. Of course, any trial with a 12% probability is not likely to get any reward at all, so choosing the hard task is not at all a compelling choice. In other words, why choose the hard task if there is only a 1 in 8 chance of winning?
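The expected-value arithmetic behind this example can be made explicit. The sketch below compares one hard trial against two easy trials completed in roughly the same amount of time (the two-for-one time assumption follows the description above that an easy task takes about half as long as a hard one; for simplicity, the second easy trial is assumed to also have a 12% win probability, although, as noted, it might well have a higher one).

```python
def expected_value(win_probability, reward):
    """Expected reward from one trial: win probability times payout."""
    return win_probability * reward

# The 12%-probability example from the text: hard reward $1.24, easy reward $1.
one_hard = expected_value(0.12, 1.24)
# Two easy trials fit in roughly the time of one hard trial; assume (for
# simplicity) the second also has a 12% win probability.
two_easy = 2 * expected_value(0.12, 1.00)
```

Even under this conservative assumption, the two easy trials are worth about $0.24 in expectation versus roughly $0.15 for the single hard trial, so the easy choice dominates.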

If a participant chose only hard-task trials, she would complete fewer than 32 trials in the 15 minutes allowed for the modified EEfRT test (we do not know how long the feedback period is, so it is not possible to pin this down exactly), a third of them with only a 12% probability of winning, versus 64 trials if she chose only easy tasks. Only choosing easy tasks is also not the correct strategy as that would leave significantly higher rewards from hard tasks on the table.

As a result, various EEfRT studies have excluded “inflexible” participants, i.e., those who chose only easy or only hard tasks. For example, one EEfRT study “removed from analyses participants who made only hard selections across all reward levels. This removed from analysis subjects who had no room to demonstrate increases in effort allocation.”

Numerous prior EEfRT studies have cautioned about the confounding nature of game optimization strategies. For example, a 2022 study examining the reliability and validity of the EEfRT (EEfRT reliability and validity study) concluded that:

“the original EEfRT comes with a major downside: At least some participants understand that choosing the hard task is often lowering the possible overall monetary gain as the hard task takes almost 3 times as long as the easy task and the overall duration of the task is fixed. Hence, at least some participants’ choices are partly based on a strategic decision and less on approach motivation per se.”

The same study found that:

“the percentage of hard-task-choices within the original EEfRT did not correlate with participants’ motivation to win money in any trial category”

and that

“[t]he original EEfRT has been shown to be partly related to individual strategic behavior, which is not related to participants’ actual approach motivation.”

According to another schizophrenia EEfRT study, the failure to “put forth the mental effort required to develop a systematic allocation strategy with regard to reward and probability information” is a sign of lack of motivation itself. Conversely, employing a strategy on EEfRT testing, as patients very effectively did, is evidence of strong motivation.

Prior EEfRT studies urged future EEfRT researchers to inquire with participants about their potential use of strategies in an attempt to limit their confounding impact as well as to modify the EEfRT to remove optimization strategies from the equation:

“Therefore, future studies should counteract these limitations by (1) systematically asking participants about their strategies while playing the EEfRT and/or (2) optimizing the EEfRT, such that the only valid strategy for participants to increase their rewards is to increase their effort allocation.”

Nevertheless, the NIH investigators chose not to do so.

Obviously, the optimal strategy involves a mix of hard- and easy-task choices, depending mostly on the probability of winning. In fact, as noted above, a prior EEfRT study excluded “inflexible” participants from analysis because they obviously did not take the instructions for, and the goal of, the EEfRT into account in their choices.

A computer could, no doubt, calculate the precise optimization strategy in each case, but this is not so easy for EEfRT participants during the brief time they are given to choose a task (five seconds). The bottom line is that, to the extent that the EEfRT has any validity, the key result measure is the amount of rewards earned during the trials and definitely not the percentage of hard-task choices.

(In some EEfRT studies, as was the case with the NIH version of the EEfRT, the reward actually paid out to participants is based on two win trials randomly selected at the end of the EEfRT. Since the rewards in the hard-task trials are higher than in the easy-task trials, this payout mechanism makes choosing a hard task slightly more optimal on the margin. However, those randomly selected trials are unlikely to influence participants’ choices, both because the selection among the win trials at the end of the test is random and, therefore, unpredictable, and because there is no immediate feedback for them due to the reward delay (temporal discounting), whereas the immediate reward feedback for each trial will influence participants’ strategy as they go through the trials and make the choice for each of them.)

Probability Sensitivity
As explained, utilizing a game optimization strategy can impact the choices made with respect to hard versus easy tasks based on the probability of a trial being a winning trial. The NIH authors claim that the EEfRT measures only effort (number or proportion of hard-task choices), the potential impact of fatigue, and the potential impact of reward sensitivity.

That is an incomplete and, therefore, false statement to the extent that the implication is that the EEfRT does not assess probability sensitivity. The EEfRT does measure probability sensitivity, which is typically analyzed in EEfRT studies in addition to effort, fatigue, and reward sensitivities. For example, an EEfRT paper on anhedonia in Major Depressive Disorder expressly states that “[p]robability is manipulated in the EEfRT.”

Not only that, but probability sensitivity is far and away the most important factor in choosing between a hard and an easy task because the probability of getting any reward at the 12% probability level (1 out of 8) is 1/7th of the probability of winning a reward at the 88% probability level (7 out of 8). On the other hand, the reward values vary only by a factor of about four (between $1 and $4.12). The relevant inquiry with respect to probability sensitivity is whether, as part of a game optimization strategy, patients chose a lower percentage of hard tasks than controls when the probability of winning was only 12% or 50% as opposed to 88%. A group difference with respect to probability sensitivity would explain the difference in the proportion of hard-task choices or effort between the groups, and there would be no basis for any conclusion that ME manifests as a disrupted perception of effort or a misjudging of ability (i.e., Effort Preference).

Sharing the analysis of the data not only with respect to reward levels, which the authors did, but also with respect to probability levels, which the authors did not include in their paper, is typically at the core of EEfRT studies. In line with their false assertion that the EEfRT does not assess probability sensitivity, the authors shared graphs depicting their findings only with respect to fatigue sensitivity and reward sensitivity (see above under “Hard-Task Choices”), and, unlike other EEfRT studies, the NIH investigators did not include any graphs for their analysis of probability sensitivity. Instead, they merely claimed that there was no group difference with respect to probability sensitivity:

 “Two-way interactions showed no group differences in responses to … reward probability (ROR = 0.50 [0.09, 2.77], p = 0.43)….”

That is false.

The heavily redacted Peer Review File, which includes the comments of some but not all peer reviewers (and the corresponding answers by NIH), shows that one of the reviewers raised the issue of proper analysis of the EEfRT data. (We do not know the names of the reviewers other than Dr. Anthony Komaroff’s, and we do not know who made this particular comment.) The reviewer’s question was redacted in its entirety. Based on NIH’s answer, the reviewer seems to have suggested the use of a different statistical tool, the Cooper 2019 approach, rather than the tool the NIH investigators chose. In their response to the reviewer comment, NIH rejected the Cooper 2019 approach, which, according to NIH, would apparently have shown any group difference regarding probability sensitivity. Below is NIH’s explanation for their choice:

“[The Cooper 2019 approach] is designed to dissect out how participants are making their decisions (i.e. which aspects of the task are being weighed in making decisions about hard/easy task selection). Use of the Cooper 2019 approach would help determine the contribution of individual aspects of the task to the performance outcome, such as how subjects integrate reward, effort, and probability to guide decision-making. As our data did not show differences in reward sensitivity and probability sensitivity by group, this approach seems unlikely to provide information regarding the primary outcome.” [emphasis added]

In other words, the NIH investigators’ rationale for rejecting the Cooper 2019 approach was that there was allegedly no difference in probability sensitivity between groups, as the paper itself also asserts. Again, that is incorrect. There was a group difference with respect to probability sensitivity.

I generated the following graph based on the raw data attached to the paper. The graph shows the relative percentage of hard-task choices between the two groups at the three probability levels. As you can see, there is a substantial group difference in probability sensitivity at the lower probability levels (12% and 50%) and essentially the same probability sensitivity between groups at the 88% probability level.

(I again excluded the data from control F, which was deemed to be invalid by the investigators.)

It is true that patients overall chose hard tasks at a lower rate than controls, but almost all of the difference in hard-task choices occurred in the 12% and 50% probability trials where an optimal strategy is most impactful and dictates significantly fewer hard-task choices if one is looking to maximize the total virtual rewards in accordance with the EEfRT instructions. In contrast, at the 88% probability level, a hard-task choice is optimal except for trials with very low potential reward values.

At the 12% probability level, controls chose about 46% more hard tasks than patients; at the 50% probability level, controls chose about 40% more hard tasks than patients; and at the 88% probability level, controls chose less than four percent (4%) more hard tasks than patients. There is clearly a statistically significant group difference with respect to choosing hard tasks between the lower probability levels (12% and 50%) and the high probability level (88%).
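The “percent more” figures above are relative differences between the groups’ hard-task choice rates. The sketch below shows the calculation with hypothetical rates chosen only to reproduce the magnitudes in question; they are not the study’s actual per-level rates.

```python
def pct_more(control_rate, patient_rate):
    """How many percent more hard tasks controls chose than patients,
    relative to the patients' rate."""
    return 100.0 * (control_rate - patient_rate) / patient_rate

# Hypothetical hard-task choice rates (fractions of trials), illustrative only.
low = pct_more(0.292, 0.200)   # 12%-probability trials
mid = pct_more(0.420, 0.300)   # 50%-probability trials
high = pct_more(0.520, 0.500)  # 88%-probability trials
```

Note that a large relative difference at the low-probability levels combined with a tiny one at the high-probability level is exactly the pattern a game optimization strategy would produce.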

In essence, ME patients seem to have made strategic decisions about when to choose hard trials in order to win as much virtual money as possible in accordance with the EEfRT instructions. That was the right decision as confirmed by the fact that patients indeed won more virtual money than controls in accordance with the reported data, which I will discuss in the next section. This difference in probability sensitivity—and not disrupted effort discounting or an altered Effort Preference—explains the group difference with respect to choosing hard tasks.

Based on NIH’s Own Data, ME Patients Performed Better on EEfRT

With the understanding that the goal of the EEfRT is to maximize virtual winnings and that the participants were instructed accordingly, let’s look at which group actually performed better as measured by the virtual rewards won by both groups.

ME patients received, on average, more virtual rewards during their win trials ($58.13 per patient) than controls ($56.71 per control) and, therefore, outperformed controls on the EEfRT despite choosing fewer hard-task trials. ME patients’ selection of fewer hard tasks was the better strategy, as demonstrated by the ultimate results. Maybe this difference is not statistically significant (the margins are slim), but the fact remains that ME patients did better on EEfRT testing according to the data the authors reported, so if there was no statistically significant group difference, the authors should have reported that their effort inquiry did not yield anything. Instead, they ignored the most relevant EEfRT outcome altogether, and what they did report is entirely contrary to the actual result of the EEfRT, allowing them to make their spurious Effort Preference claim.
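The comparison itself is a simple per-group average of each participant’s total winnings. A minimal sketch, using made-up per-participant totals (the real figures come from the Source Data File attached to the paper):

```python
def mean_winnings(per_participant_totals):
    """Average total virtual rewards won per participant in a group."""
    return sum(per_participant_totals) / len(per_participant_totals)

# Illustrative totals only, NOT the study's per-participant data.
patients = [60.00, 55.00, 59.50]
controls = [58.00, 54.00, 57.50]
```

Summing each participant’s rewards from their win trials and averaging by group is all that is needed to reproduce the $58.13-versus-$56.71 comparison from the spreadsheet.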

I created the following graph to illustrate those results.

(I again excluded the data from control F, which was deemed to be invalid by the investigators.)

The investigators’ use of the EEfRT to try to show that bodies of ME patients are able to perform at a higher level than patients think, due to dysfunctional effort discounting, was a complete failure. The data actually shows the opposite: that patients, despite being gravely ill and physically and cognitively limited, performed better than controls on the EEfRT by winning more virtual rewards than controls and that there is nothing wrong with the conscious or unconscious motivation of patients, with their effort discounting, or with their Effort Preference.

False Recording of EEfRT Data

The spreadsheet with the raw EEfRT data (Figure 3a spreadsheet in the Source Data File, attached to the paper) shows that there was a serious data recording or data entry issue with respect to the granting of rewards. In the case of 79 trials (including practice trials, excluding control F), the spreadsheet recorded the granting of rewards despite those trials not having been completed. This is contrary to the EEfRT game rules and what the participants had been told at the beginning of the task. Essentially, rewards were granted that had not been earned.
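This kind of inconsistency is easy to detect programmatically: filter for rows where a reward was recorded but the trial was not completed. A minimal sketch on toy rows (the key names are my assumptions about the spreadsheet layout, not NIH’s actual column headers):

```python
def unearned_rewards(rows):
    """Rows where a reward was recorded despite the trial not being completed,
    contrary to the EEfRT game rules."""
    return [r for r in rows if r["reward_granted"] and not r["completed"]]

# Toy rows: the third entry is the kind of record described in the text.
rows = [
    {"participant": "A", "trial": 1, "completed": True,  "reward_granted": True},
    {"participant": "A", "trial": 2, "completed": False, "reward_granted": False},
    {"participant": "B", "trial": 1, "completed": False, "reward_granted": True},
]

flagged = unearned_rewards(rows)
```

Running this filter over the Figure 3a spreadsheet (excluding control F) is how one arrives at the count of improperly granted rewards.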

This happened more frequently in the case of the patient group, but with the proper exclusions (see details below under “Confounding Factors and Validity Issues of the EEfRT—Physical Inability of ME Patients to Complete Hard Tasks”), the two groups performed essentially the same (with less than a 1% difference), so these results are basically a tie despite patients’ severe limitations. This demonstrates that patients did not overestimate effort, underestimate rewards, or underestimate their capacity and do not have an altered Effort Preference, even when one deducts for both groups the rewards that were granted improperly, i.e., granted despite being unearned.

Some examples of falsely recorded reward data are depicted below. Note the highlighted entries in the last two columns depicted below.

This is what Dr. Walter Koroshetz, Director of the National Institute of Neurological Disorders and Stroke (NINDS), called “data as pristine as you can get” (at 02:16) during the May 2, 2024 NIH Symposium on the study. There is a tell in how this study is being presented by NIH. Whenever the authors or NIH bureaucrats or surrogates use superlative statements—for example, that the patient cohort is exceptionally “clean,” that the data is “pristine,” that this is the “best study ever done,” that this is a groundbreaking study that, for the first time, has found any biomedical abnormalities in ME, etc.—they are trying, by sheer repetitive brute force and appeals to NIH’s authority, to convince the public that there is no reason to scrutinize the study in any way. If only they repeat these self-aggrandizing assertions often enough, they think, people might just believe that they are true, and, unfortunately, that sometimes works with new patients or patients who have not followed ME politics or the history and science of the disease. Every time we let NIH or CDC or FDA placate us with obvious falsehoods, another decade is lost for patients.

Given the significant number of instances where reward data was falsely recorded (56.03% of uncompleted trials) and the extremely large number of data points entered into the Figure 3a spreadsheet (1621 rows and 15 columns for a total of 24,315 entries), it is highly unlikely that these were the only mistakes made in the process of recording or entering the data in the spreadsheet. It is also possible that the computer administered the EEfRT incorrectly. In any event, in light of these mistakes, none of NIH’s EEfRT data can reasonably be relied upon.
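These counts are easy to sanity-check. The sketch below verifies the spreadsheet-size arithmetic and works backward from the 79 flagged trials and the 56.03% share to the implied number of uncompleted trials (the figure of roughly 141 is my derived estimate, not a number stated in the paper or above).

```python
# Spreadsheet dimensions as stated: 1621 rows of 15 columns.
total_entries = 1621 * 15

# 79 flagged trials at 56.03% of uncompleted trials implies roughly 141
# uncompleted trials (derived estimate, not a stated figure).
implied_uncompleted = round(79 / 0.5603)
share = round(100 * 79 / implied_uncompleted, 2)
```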

(Added June 13, 2024: A different interpretation of the data in column K (“Reward Granted”) has been suggested. Under that theory, NIH did not falsely enter the data but instead mislabeled column K, recording in it whether or not a particular trial was a win trial, i.e., a trial for which rewards were earned by every participant who successfully completed it, as opposed to whether a reward was granted to the participant identified in column A only.

If NIH, indeed, mislabeled that column, that would be a significant issue. Words have meaning, and if NIH gets to, after the fact, just say, “Oh, no, we meant something entirely different than what we said,” then none of their findings have any significance. Maybe Neurally Mediated Hypotension really means brain damage?

Moreover, interpreting the title NIH used for column K (“Reward Granted”) to mean Win Trial is entirely unreasonable and utterly indefensible in the universe on this side of the Looking Glass. The data entered in the spreadsheet for Figure 3a is clearly data specific to each participant: age, sex, valid data, trial number, trial difficulty, choice time, recorded presses, successful completion, completion time, button press rate. The only exceptions are entries that clearly relate to a specific trial and are the same for every participant: required presses, value of reward, and probability of reward. That is not the case with “Reward Granted,” which clearly relates to the participant, not the trial. Therefore, the only possible interpretation of the words “Reward Granted” is that a reward was, indeed, granted to that individual for that trial.

Moreover, if the entry in column K applied to more than one individual, the column would be titled “Rewards Granted” (plural), as it would apply to all trials with the same number, not the singular “Reward Granted” as it is. Of course, a more accurate title, in any event, would be “Win Trial” if NIH indeed meant to capture the win-trial status of a trial.

Either way (entry error or labeling error), NIH made a significant mistake, raising the question of how many other mistakes were made in recording and/or labeling the EEfRT data.

In any event, this alternative interpretation of column K does not change the fact that the groups essentially performed the same regarding the virtual rewards won (with the proper exclusion of those patients who were unable to complete hard tasks at a valid rate or at all). There was no significant group difference (less than 1%) in virtual rewards earned and, therefore, no basis for the claim that there is disrupted effort discounting or an altered Effort Preference in ME patients.)

Button-Pressing Speed over Time

With respect to button-pressing speed, ME patients started stronger on the easy tasks than controls, which does not jibe with the authors’ claim that patients lacked motivation or that their effort discounting is dysfunctional. Similarly, it is worth noting that the button-press rate for the hard tasks actually increased for ME patients throughout the EEfRT, which again contradicts the claim of an impaired preference with respect to exertion.

Figure 3b:

Confounding Factors and Validity Issues of the EEfRT

There are significant limitations, potential confounding factors, and validity concerns discussed by many prior EEfRT study authors, including the creators of the EEfRT. The EEfRT is a highly problematic measure that is vulnerable to distortion even in the hands of the most unbiased and benevolent researchers, which is not the situation we find ourselves in.

Impact of Task Properties and Administration
The authors of the EEfRT reliability and validity paper point out that “[s]eemingly small differences in task properties and administration could have a great impact on task behavior.” This is highly relevant given the obvious bias of at least some of the NIH investigators, who set out to “confirm,” one way or another, their pre-conceived notions about ME.

As an example of the potential impact of task administration, numerous EEfRT studies excluded participants who were taking benzodiazepines, a medication that can confound EEfRT outcomes. Not so the NIH investigators, who included ME patients who were, at the time of the NIH study, taking benzodiazepines. At least one EEfRT study found that the proportion of hard-task choices can be affected by poor sleep quality; ME patients, of course, suffer from non-restorative sleep and other sleep dysfunction, making the EEfRT an inappropriate test for them. These are just two examples of how easily EEfRT results are skewed.

Measuring Motivation?
EEfRT results are susceptible to a wide range of confounding factors, so the task can end up measuring metrics other than motivation, the willingness to exert effort, or the valuing of rewards. For example, the EEfRT is not capable of distinguishing choices made based on motivation from choices driven by personality.

As the authors of one EEfRT study state, “there is currently a lack of clarity about the specific drivers of motivated action during this task.” Another EEfRT paper on Bipolar I Disorder acknowledges that “it was not possible to adequately disentangle whether willingness to expend high levels of effort for reward can be considered a mechanism driving ambition.”

The authors of the EEfRT reliability and validity study state:

“[P]revious studies indicate that effort allocation within the EEfRT can be manipulated by a wide range of factors, ranging from mood inductions over neurophysiological manipulations to the influence of reduced motivation, or the intake of caffeine. So how does a person decide whether to increase effort to potentially gain a greater monetary reward within the EEfRT? Our mixed pattern of results shows that there is no simple answer to this question. Especially the impact of reward attributes hints at a complex pattern behind participants’ decisions and at the importance of individual reward evaluation.”

I discussed the impact of the mentioned reward attributes in more detail under “Game Optimization Strategy” above.

Misrepresentation of EEfRT Scope
The authors state, “Motivation was assessed using the Effort-Expenditure for Rewards Task….” In other words, the authors purport that the EEfRT is capable of measuring any and all motivation or general motivation. That is incorrect. There are various types of motivation, and the EEfRT does not, for example, purport to measure intrinsic motivation, which is driven by interests, passions, personal values, instincts, etc. and is difficult, if not impossible, to assess. Instead, the EEfRT is limited to measuring reward-based motivation, i.e., a specific form of extrinsic motivation (another being recognition, for example, which, too, the EEfRT is incapable of assessing). It is right there in the name: Effort Expenditure for *Rewards* Task. Moreover, the rewards for which the EEfRT can possibly claim any validity at all are immediate, not delayed, rewards. In addition, the only type of reward that the NIH investigators have any data on is very small monetary gambling rewards.

Consequently, any EEfRT claims by the authors should have been limited to a small subset of motivation. In essence, the NIH investigators misrepresented what the EEfRT is designed to assess and capable of assessing. There is no discussion of NIH’s results-oriented, overreaching broadening of the interpretation of the EEfRT outcomes. Overgeneralizing findings is an absolute no-no for any ethical scientist.

Moreover, it is not reasonable to draw any conclusions about the general motivation (or the effort discounting) of extremely sick patients, living on the margins and finding themselves in a daily existential struggle, merely based on their choices relating to a relatively meaningless social reward, such as to win a button-pressing game, while going through a grueling protocol that was guaranteed to cause significant physical fall-out. Aside from the special circumstances of ME patients, people value money differently, and it is not supportable to conclude that somebody has a problem with motivation in general (or, in the case of the NIH study, suffers from a false perception of effort and their own abilities) if that person assigns less importance to monetary gain.

The authors of the EEfRT reliability and validity study agree:

“It is reasonable to assume that the evaluation of potential benefits and costs can differ greatly between participants. … A potentially important factor is the type of reward and how much a person values this reward.”

Low Statistical Power and Risk of False Positives
The differences between groups in the proportion of hard tasks chosen in the NIH study are mostly minor, which is reflected in the weak p-value of 0.04 (with a generally accepted p-value threshold of <0.05 for statistical significance). Many prior EEfRT studies, almost all of which had larger cohorts than the NIH study’s (16 controls and 15 ME patients), some dramatically so, noted their small sample sizes as a limitation and cautioned about low statistical power and the attendant risk of false positives.

The more far-reaching the implications of the interpretation of the EEfRT data are, the more problematic an exceedingly small cohort size is. One can hardly imagine a smaller cohort than the one in the NIH study short of a single case study. To extrapolate from the choice of hard tasks made by 15 ME patients on one occasion to tens of millions of ME patients worldwide by declaring that the ME phenotype is defined by an altered Effort Preference would be embarrassing to any serious scientist.

It is as though one used the result of 15 coin tosses to demonstrate the probability of the result of any coin toss being heads. I tried that one time only and got 6 heads and 9 tails. Does that mean that the probability of a random coin toss landing heads is 6 out of 15 or 40%? Let’s be sure to tell the NFL captain calling the coin toss at the Super Bowl to call tails.
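The instability of such a small sample is easy to make concrete. The following minimal Python sketch (using only the exact binomial distribution, nothing from the study) computes how often a fair coin produces a split at least as lopsided as the 6 heads and 9 tails mentioned above:

```python
from math import comb

def prob_at_most(k, n=15):
    """Exact probability of k or fewer heads in n tosses of a fair coin."""
    return sum(comb(n, i) for i in range(k + 1)) / 2 ** n

# A 6-9 split (in either direction) or worse is the norm, not the exception:
lopsided = 2 * prob_at_most(6)  # <= 6 heads, or >= 9 heads (symmetric)
print(f"P(at least as lopsided as 6-9): {lopsided:.2f}")  # ≈ 0.61
```

In other words, with only 15 observations, a deviation that far from 50/50 is the expected outcome roughly three times out of five, not evidence of anything.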

With respect to the NIH study, there has been a clear pattern of the investigators trying to defend their inability to replicate well established abnormalities in ME, raising important questions of cohort selection (both the patients and the control cohorts). Whenever the authors are confronted with uncomfortable questions—for example, their failure to find POTS in ME patients—they are quick to point out, as an excuse, that this was an exploratory, hypothesis-generating study with a small cohort. And yet, the small cohort size was no deterrent for the authors to claim that ME is defined by patients underestimating their capacity. Heads they win; tails we lose.

Physical Inability of ME Patients to Complete Hard Tasks
Based on the actual number of hard tasks completed by the two groups, controls were more likely to complete hard tasks than ME patients “by an immense magnitude.” This translated into a respectable p-value of 0.0001.

I determined, based on the Figure 3a raw data, the percentage of completion of hard-task trials by group. Controls completed hard tasks at a rate of 96.43% (in line with the original EEfRT study) while ME patients completed hard tasks at a rate of 67.07%. That is a dramatic difference. This inability of ME patients to complete hard trials at anywhere near the same rate as controls is a strong indicator that the patient group struggled physically to complete hard tasks.

The authors carefully avoided including an illustration of this highly relevant group difference, so I created the following graph:

(I excluded the data from control F, which was deemed to be invalid by the investigators.)

For starters, the authors misstate how one determines if trial completion was an issue:

“The three-way interaction of participant diagnosis, trial number, and task difficulty was evaluated in order to determine whether participants’ abilities to complete the easy and hard tasks differed between diagnostic group.” [emphasis added]

That is false. The number of trials comes into play for this analysis only if all one is looking at is fatigue (NIH did not find a group difference with respect to fatigue sensitivity; see above under “Hard-Task Choices”). However, the physical ability to complete the trials can be impacted by issues other than fatigue, and the trial number is entirely irrelevant for assessing whether participants were unable to complete trials at a reasonable rate, and at a rate reasonably equal to the control group’s, due to health issues other than fatigue, for example, due to individual motoric ability. All that is required is comparing the percentages of trials that were completed by each group. NIH’s pathological insistence that ME is basically nothing more than fatigue quite literally allowed the investigators to misreport the EEfRT results.

Prior EEfRT studies. The creators of the EEfRT cautioned other researchers with respect to the importance, for the validity of the outcomes, of the ability to complete the EEfRT tasks:

“An important requirement for the EEfRT is that it measure individual differences in motivation for rewards, rather than individual differences in ability or fatigue. The task was specifically designed to require a meaningful difference in effort between hard and easy-task choices while still being simple enough to ensure that all subjects were capable of completing either task, and that subjects would not reach a point of exhaustion. Two manipulation checks were used to ensure that neither ability nor fatigue shaped our results. First, we examined the completion rate across all trials for each subject, and found that all subjects completed between 96%-100% of trials. This suggests that all subjects were readily able to complete both the hard and easy tasks throughout the experiment. …”

Completion rates between 96% and 100%, as in that first EEfRT study, have been typical in subsequent EEfRT trials. One study found a completion rate of only 88% because it included a large number of older adults (mean age 73 years) in order to study age-related differences; the mean age of ME patients in the NIH study, however, was less than 38 years, and in any event, the two groups were matched for age.

Various subsequent EEfRT studies agreed with the need to control for motoric ability as it has been “shown to strongly impact performance” on the EEfRT. For example, an EEfRT paper on depression states that unequal completion rates between groups indicate issues of “psychomotor retardation.”

The authors of an EEfRT paper on schizophrenia determined the maximum number of button presses for both easy and hard tasks for each individual by instructing participants to press the button as many times as possible and setting the button-pressing requirement for hard trials at 90% of the maximum rate in order “to control for nonspecific differences in motoric ability between groups, [sic] and to assure that each individual had the capacity to complete the trials.” Those authors stressed the importance of controlling for motor function:

“This control is of critical importance as most of our inferences on incentive motivational systems depend on instrumental responses (Salamone et al., 2007). Hence any individual differences in motor ability will bias the caliber to which the instrumental response is executed, which may in turn be incorrectly interpreted as decreases in motivated behaviour.”

Another study also calibrated the button-pressing requirement for hard-task trials to 90% of the participants’ maximum keypress speed and excluded participants who completed less than 50% of their trials, which resulted in a completion rate of 97.5% for the hard trials and 98.6% for the easy trials.

In another EEfRT study on schizophrenia, the investigators adjusted the required button-pressing number for hard trials to 85% of the individually calibrated number of button presses in order to control for motor-speed and dexterity differences.

An EEfRT paper on binge eating excluded those participants who did not complete at least 50% of their trials.

Yet another study recognized the strong impact of motoric ability on EEfRT outcomes and performed “a pre-analysis” to probe “whether higher motoric ability is associated with greater average number of clicks throughout the actual task and should therefore be statistically controlled for in the analysis.” Their preliminary analysis “revealed a large impact of participants’ individual motoric abilities on the number of clicks they exerted” and that the groups differed in their motoric abilities as measured via the motoric trials. Since those differences do not reflect actual motivation, the authors concluded that “not including this factor could have distorted the results.” Even after considering the issue of differences in motoric ability and attempting to control for it by including motoric trials, the authors were concerned that “the large impact of motoric abilities may still be considered a possible downside of this task. Future studies should address this limitation.”

One study “individually calibrated” the number of presses required for the hard and easy tasks for each participant, setting the number of button presses for the hard task at 85% of the participant’s maximum and for the easy task at one third of that.

Another study also determined the maximum number of button presses and required participants to execute 70% of their maximum button-press number determined during calibration for the easy task and 90% for the hard task.

These are merely examples and not meant to be a complete list of how the prior EEfRT studies addressed the issue of difficulties with trial completion. It is obvious that other EEfRT researchers were acutely aware of the attendant validity issue and were trying to control for it.

It is important to remember that none of the prior EEfRT studies were done on individuals with organic diseases. They were either done on healthy participants or participants with primary mental-health issues. Those individuals were much less likely to be as physically limited as ME patients; nevertheless, the authors of those studies recognized the need to protect the validity of their data.

NIH investigators. Despite having had the benefit of prior EEfRT studies, which educated them on the importance of the issue and on the design of a modified version of the EEfRT for the special needs of ME patients, the NIH investigators did not calibrate participants’ maximum button-press rates, even though they could easily have done so. Nor did they exclude the data of the patients who were unable to complete hard tasks at a reasonable rate, even after having found an “immense” (their own word) group difference with respect to the ability to complete hard tasks. In other words, the NIH investigators knew that they had to control for motoric ability but did not do so. They also knew that, in light of the dramatic group difference in the ability to complete hard tasks, their results were not a valid measure of decision-making or effort discounting, but they chose to publish the EEfRT outcomes regardless and to claim that there is something wrong with ME patients’ perception of their physical abilities. There is no discussion of this validity issue in the paper.

All NIH did was purport to control for fatigue and call it a day. ME, of course, frequently manifests with altered motor coordination and speed as well as with issues with dexterity and reaction time. The fact that patients performed noticeably worse than controls on pegboard testing, particularly with the dominant hand (Supplementary Data 13), which was part of the cognitive testing, corroborates that. That test requires motor function and physical exertion and is, therefore, another indication that patients’ physical ability to complete tasks involving motoric ability was impaired. Of course, the ability to participate in the EEfRT can be impaired for reasons other than impaired motor function, and the investigators completely ignored ME symptoms that likely impacted patients’ button pressing, for example: non-restorative sleep (as mentioned above), benzodiazepine use (as mentioned above), various types of pain, POTS, Neurally Mediated Hypotension, nausea, dizziness, sensitivity to light and noise, etc.

The completion data for hard trials leaves no doubt that the EEfRT data of some patients were invalid. This is a textbook case of res ipsa loquitur: the thing speaks for itself. The EEfRT is designed to assess the investment of physical effort for monetary rewards. It is beyond the pale for researchers to administer effort testing that requires physical exertion, such as the EEfRT, to a patient population that has been established to be physically unable to function as healthy individuals do, and then, when patients indeed do not perform at the level of controls, to conclude that there is something wrong with how patients perceive effort and their own ability to exert.

Based on prior EEfRT studies, patients who completed less than 50% of their hard trials should have been excluded by NIH. That applies to five patients, i.e., a third of the ME group. When one excludes those five patients, the hard-task completion rate of patients jumps from 67.07% to 89.60%, which is much closer to the hard-trial completion rate of controls (96.43%), although possibly still lower than what one would expect from participants who are able to complete EEfRT tasks reliably and validly. However, since NIH did not control for patients’ inability to complete tasks, the entire EEfRT findings are invalid.
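To make the mechanics of the 50% exclusion rule concrete, here is a minimal Python sketch. The per-patient counts are hypothetical, made up purely for illustration (only the 0-of-18 and 2-of-21 figures are reported in the study); the point is how dropping sub-threshold participants moves the group rate, not the study’s raw data:

```python
# Hypothetical (completed, attempted) hard-trial counts per patient.
# Only 0/18 and 2/21 are reported figures; the rest are invented.
patients = [(0, 18), (2, 21), (3, 20), (14, 15), (16, 17), (18, 19)]

def group_completion_rate(data):
    """Pooled completion rate: total completed over total attempted."""
    completed = sum(c for c, _ in data)
    attempted = sum(a for _, a in data)
    return completed / attempted

# Prior EEfRT studies excluded anyone completing fewer than 50% of trials:
kept = [(c, a) for c, a in patients if c / a >= 0.5]

print(f"all patients:    {group_completion_rate(patients):.1%}")
print(f"after exclusion: {group_completion_rate(kept):.1%}")
```

With these invented numbers, the group rate jumps from 48.2% to 94.1% once the three sub-threshold patients are dropped, the same mechanism by which the real patient rate moves from 67.07% to 89.60%.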

NIH Symposium—Madian. A virtual-audience member asked about this issue during the recent NIH Symposium. Madian had been alerted to this question in advance and responded (at 2:50:16) as follows:

“What the [original EEfRT] paper describes is that the EEfRT was designed so that the sample of patients used within that original study could consistently complete the task. This does not mean that everyone who takes the task must be able to complete the task without issue for the administration or data to be valid or interpretable. It seems that the creators wanted to ensure that in general as many people as possible would be able to complete the task but without compromising the task’s ability to challenge participants. Furthermore, I think, it bears mentioning that although our ME participants did not complete the task at the same 96-100% rate as the participants in the original study or at the same rate as our healthy controls, they still completed the task a large majority of the time. To wrap things up, to answer the question, consistently completing the task is not a requirement for a valid EEfRT test administration, and by all accounts we believe our data is valid and is, thus, interpretable as a measure of impaired effort discounting.”

The part of the original EEfRT paper that Madian referred to is reproduced via a screenshot below:

Madian’s reply is severely misleading. First of all, referring to the initial EEfRT study fails since that study did not use the EEfRT data to completely re-define an organic disease. Instead, those authors were looking for a correlation between the level of anhedonia and decreased motivation for rewards in patients with Major Depressive Disorder. Surely, the threshold requirement for the completion rate must be stricter when the results of the EEfRT are used in as consequential a way as they were by NIH: to draw sweeping, definitive conclusions about ME.

Furthermore, the original EEfRT authors conclude “that all subjects were readily able to complete both the hard and easy tasks throughout the experiment” based on a consistently high completion rate. [emphasis added] There is no language in the original EEfRT paper to the effect that it is sufficient that “as many people as possible would be able to complete the task” contrary to what Madian claimed. In fact, it states the opposite:

“The task was specifically designed to require a meaningful difference in effort between hard and easy-task choices while still being simple enough to ensure that all subjects were capable of completing either task.” [emphasis added]

Madian himself initially conceded that consistent task completion is required but later inexplicably contradicted himself, the EEfRT creators, and many other EEfRT studies by claiming the opposite. All EEfRT studies that have since addressed the issue have been unequivocal: a physical inability to complete tasks renders the EEfRT results invalid. At least two studies set that threshold at a 50% completion rate.

In the NIH study, a third of the patients were unable to complete hard trials consistently or at all. One patient was unable to complete a single hard trial out of 18 attempts, and another completed only two out of 21 attempted hard trials. The following table illustrates the extremely low completion rates of the five patients who were not able to complete at least 50% of their hard trials.

An additional patient barely got over the 50% threshold with only seven out of 13 hard tasks completed.

Madian admitted that patients “did not complete tasks at the 96-100% rate as the participants in the original [EEfRT] study” did but claimed that this does not impact the validity of the NIH data because patients completed tasks “a large majority of the time.” At a combined hard-task completion rate of 67.1% for patients, that is demonstrably false. To illustrate how dramatic the issue is: the combined hard-task completion rate of the five patients who struggled to complete hard tasks was less than 16%. That is not even close to a simple majority, let alone the “large majority of the time” that Madian falsely claimed patients achieved.

It is difficult to believe that Madian could have gotten it so wrong by accident given that (by his own admission) he agonized over his answer in preparation for the Symposium and spent a significant amount of time on it in an alleged attempt to ensure accuracy of his response, wrote out his answer in advance, and apparently read it word for word to avoid making a mistake. Again, according to the NIH authors themselves, the groups differed with respect to the hard-task completion “by an immense magnitude” and with a stunning resulting p-value of 0.0001.

It is no coincidence that four out of the six patients who completed hard tasks at an abysmal rate chose the fewest hard tasks out of all patients. Their difficulty in completing hard tasks obviously had an impact on their number of hard-task choices.

Moreover, Madian acknowledged only the very first EEfRT study, although there are dozens of EEfRT studies by now, many addressing this very issue, and none of them agree with Madian. He also did not at all address the issues of not having calibrated the button-press requirement to patients’ individual ability or the requirement to exclude the data from those patients who clearly struggled with completing the hard trials.

The data of those patients should have been excluded because the proportion of hard tasks they chose reflects their physical inability to complete hard tasks, not their motivation or an alleged false perception of exertion capacity; it is highly likely that their inability to complete hard tasks drove down the number of hard tasks they chose. As a result, the EEfRT testing did not actually measure patients’ motivation or effort discounting (i.e., Effort Preference), and it is unacceptable to re-define an entire disease based on this invalid data. NIH’s failure to exclude the data from the patients who were too impaired to complete hard tasks consistently or at all means that the entire EEfRT findings are invalid.

Sloppy Paper

The intramural paper, with its apparent absence of proof-reading, is a mess. There is so much wrong with it that one has to wonder if the authors accidentally published a draft instead of a final, carefully proofed version. Below are examples that illustrate the point.

Supplementary Figure S5
The authors generated graphical examples (Supplementary Figures S5b-d) for how the selection ratio of easy versus hard tasks by the two groups would allegedly look in three different scenarios and explained their graphs in the corresponding analysis below Supplementary Figure S5. (I discussed how to read those graphs under “Hard-Task Choices” above.)

Supplementary Figures S5b-d:

Figure S5 analysis:

I am listing issues in the Supplementary Figure S5 analysis in the order in which they appear:

There is a duplicative sentence under B.

The authors claim that the reward value is charted on the y-axis of graph d; however, the reward value is obviously represented by the x-axis.

The analysis for graph b (under (B)) is wrong. The authors claim that graph b illustrates the following scenario: “[a] difference in effort sensitivity is represented by a constant reduction in hard task choices through the entire task, with the blue group having lower effort sensitivity than the gray.” Obviously, there is no reduction at all in graph b; the lines are parallel to the x-axis throughout.

The interpretation of the graphs refers to blue and gray groups, but the arrows in the graphs are actually purple and teal.

The analysis under Supplementary Figures S5e-f falsely specifies the sample size of the control group as 17. This is likely because the data for control F was excluded from the EEfRT analysis after the fact without adjusting the sample-size number under Supplementary Figures S5e-f accordingly.

Supplementary Figures S5e-f:

There is no legend for Supplementary Figures S5e-f indicating that the control group is graphed in blue and the patient group in red.

A careful scientist would have included a unit for the reward value for Supplementary Figure S5f.

Supplementary Figure S5e shows data for a trial 0. There can be, and was, no such thing as a trial 0. The same issue exists for the nearly identical Figure 3a.

Figure 3a:

When you take a look at the graphical examples in Supplementary Figures S5b-d above, you will notice that the y-axes are labeled “Hard/Easy Task Choice Ratio.” Those graphical examples are meant to relate to the actual EEfRT data allegedly depicted in Supplementary Figures S5e-f; however, the y-axes of Supplementary Figures S5e-f are labeled “Probability of Choosing Hard Task,” even though the y-axes of all five figures should be identical. Madian corrected this inconsistency on his Symposium presentation slide by changing the designation of the y-axes for Supplementary Figures S5b-d (see below), but the inconsistently labeled y-axes of Supplementary Figures S5b-d remain in the actual paper.

Throughout the paper, the authors play fast and loose with the concepts of number of hard-task choices, proportion of hard-task choices, and the probability of choosing the hard tasks, which they seem to use interchangeably as though they are the same. That obfuscation goes a long way in the authors’ attempt to make the EEfRT findings close to impenetrable.

Supplementary Figure S5a:
Supplementary Figure S5a shows the sequence of steps for the EEfRT. The steps are represented by computer screens. The blank screens before and after the choice screen are not depicted. Moreover, the first screen shows that if the participant were to choose the hard task, he or she might win $2, which was not one of the options in this study. Finally, the last screen, which shows the participant’s actual reward, indicates that he or she won $2.37, which was not the option for that particular trial according to the first screen and was not an option for any of the trials in the study. It is not possible to win more than the potential reward value indicated at the time a participant makes his or her choice for a trial.

That’s an astonishing number of mistakes to make in a single paragraph and the corresponding graphs/illustrations.

Figure 3b
For another example of the sloppy drafting, take a look at the two graphs below (Figure 3b). The button-press rate for the easy tasks is illustrated in the graph on the left, and the button-press rate for the hard tasks is captured in the graph on the right. Nevertheless, the authors claim the reverse. This error is a powerful demonstration of NIH’s up-is-downism or left-is-rightism in this case.

Range of total winnings
The paper incorrectly states that the maximum total amount a participant could potentially take home is $8.42. The correct amount is $8.24, as the highest reward level is $4.12 and participants take home the winnings from two randomly selected trials.
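The arithmetic is trivial to verify; a one-line Python check, using the $4.12 top reward level and the two randomly paid-out trials reported in the paper:

```python
# Highest single-trial reward level and number of randomly paid-out trials,
# both as reported in the paper:
top_reward = 4.12
paid_trials = 2

max_take_home = top_reward * paid_trials
print(f"${max_take_home:.2f}")  # $8.24, not the $8.42 the paper states
```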

It is highly likely that there are many more mistakes in this paper given the large number I found by focusing merely on the EEfRT analysis.

In Part 3 of this 4-part series, I will address the EEfRT as a psychological measure, NIH’s desperate attempts to justify their EEfRT conclusions, the agency’s history of falsely reducing ME to fatigue and its pivot to Effort Preference, the investigators’ denial of established CPET science, and the ongoing inquiry by NIH into Effort Preference.

***

Open Access: I shared quotes, data, images from the paper “Deep phenotyping of post-infectious myalgic encephalomyelitis/chronic fatigue syndrome” under the Creative Commons license, a copy of which can be found here. I indicated how I re-analyzed the data.


The NIH Intramural ME Study: “Lies, Damn Lies, and Statistics” (Part 1)

The infamous intramural National Institutes of Health (NIH) paper on post-infectious Myalgic Encephalomyelitis (ME), a disease affecting many millions worldwide, purports to define the ME phenotype based on a cohort of 17 ME patients. With this study, NIH continues its obstinate false portrayal of ME as a disease characterized mainly by fatigue. However, the agency put a new spin on its decades-old fatigue narrative. Using the Effort Expenditure for Rewards Task (EEfRT) in a 15-patient sub-set, the investigators reframed fatigue as an “unfavorable preference” to exert effort, i.e., an “unfavorable” “Effort Preference” (which they say is the decision to avoid the harder task), and declared it a “defining feature” of ME. According to NIH, this Effort Preference outcome was the study’s “primary objective.” The agency, in essence, pathologized pacing and branded ME with a new and highly prejudicial malingerers’ label.

The Effort Preference claim is an endorsement and expansion of the work of Dr. Simon Wessely, the knighted potentate of the biopsychosocial brigade, which disparages the disease and its patients. According to Wessely, ME is a disorder of the perception of effort, which is identical with NIH’s characterization of Effort Preference. NIH used Wessely’s body of work as a blueprint for the NIH intramural study.

I have analyzed the EEfRT data and have found serious issues. I will show that the investigators arrived at their false Effort Preference claim by failing to control for a number of confounding factors (for example, by not excluding those patients who were demonstrably too sick to validly participate in the EEfRT) as well as by misinterpreting and/or misrepresenting the effort data in a number of ways. Furthermore, based on the reported data, the effort testing actually demonstrated that ME patients performed better on the EEfRT than the control group. The authors obscured this by failing to include the relevant analyses and by disregarding the fact that patients employed a more effective optimization strategy on the EEfRT than controls did, which disproves the Effort Preference assertion. The study is a textbook case of the breathtaking power of statistics in the hands of researchers inclined to reverse-engineer their desired outcome. There is also a serious issue with the integrity of the data, some of which has clearly been falsely recorded, rendering the entire EEfRT data set unreliable.

These and the many other issues I will discuss are a manifestation of the deeply entrenched institutional NIH bias against ME that pervades the agency.

I would like to ask for your indulgence in reading this article because it is long and quite technical in parts, which is why I have split it into four parts. The first part explains how exactly the Effort Preference claim mirrors Wessely's claims about ME. In the other parts, I will share my analysis of the EEfRT data, discuss the various confounding factors invalidating the EEfRT results, explain the historically exclusive use of the EEfRT in the field of psychology and the unprecedented nature of NIH's coining of the gravely prejudicial term Effort Preference to redefine a disease, refute NIH's attempts to justify their Effort Preference claim, demonstrate the low quality of the intramural paper, and discuss how NIH's institutional bias has gotten us here through the agency's unconscionable staffing choices.

I promise that staying with the article is worth it. It is not every day that NIH researchers are shown, without a doubt, to have repeatedly misrepresented their findings in blatant fashion.

Effort Preference as a Defining Feature of ME

The investigators used, as the basis for their Effort Preference claim, the results of the button-pressing test of the modified (from its original design) EEfRT, a behavioral measure of reward-based motivation and effort-based decision-making that analyzes choices between hard and easy tasks.
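To make concrete what the EEfRT actually measures, here is a minimal, purely illustrative sketch of the task's standard outcome metric, the proportion of hard-task choices, computed on hypothetical data. None of the numbers below come from the NIH study; the task parameters in the comments are approximations of the EEfRT's published design, and the trial data are invented for demonstration.

```python
# Illustrative sketch (not the NIH dataset): the EEfRT's core outcome is the
# proportion of trials on which a participant chooses the hard task.
# Task parameters approximate the published EEfRT design (hard: ~98 presses
# in 21 s; easy: ~30 presses in 7 s) and are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Trial:
    reward: float        # potential reward in dollars for the hard task
    probability: float   # stated win probability (e.g., 0.12, 0.50, 0.88)
    chose_hard: bool     # participant's choice on this trial

def hard_choice_rate(trials):
    """Overall proportion of hard-task choices, the usual EEfRT outcome."""
    if not trials:
        return 0.0
    return sum(t.chose_hard for t in trials) / len(trials)

def rate_by_probability(trials):
    """Hard-task choice rate split by win probability, exposing strategy."""
    buckets = {}
    for t in trials:
        buckets.setdefault(t.probability, []).append(t.chose_hard)
    return {p: sum(v) / len(v) for p, v in buckets.items()}

# Hypothetical participant who picks hard tasks mostly when the odds are
# good, i.e., an optimization strategy rather than effort avoidance.
trials = [
    Trial(4.00, 0.88, True), Trial(1.50, 0.88, True),
    Trial(3.00, 0.50, True), Trial(1.24, 0.50, False),
    Trial(2.00, 0.12, False), Trial(4.12, 0.12, False),
]
print(hard_choice_rate(trials))      # 0.5
print(rate_by_probability(trials))   # {0.88: 1.0, 0.5: 0.5, 0.12: 0.0}
```

The split by win probability matters because a raw hard-choice rate alone cannot distinguish "avoiding effort" from "allocating effort where it pays off," which is the distinction at issue in the analysis above.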

According to the NIH study authors, Effort Preference is

“how much effort a person subjectively wants to exert”

and

“the decision to avoid the harder task.”

The alleged Effort Preference finding is emphasized in the paper’s abstract and conclusion over any other findings by being mentioned first and by being described as a “defining feature.”

In contrast, other findings (for example, alterations of gene expression profiles and sex differences) were mentioned later in the abstract and/or conclusion and presented as "consistent with," not as "defining." The study's outcome also pointed to chronic antigen stimulation by an infectious pathogen, but those findings were downplayed as merely "suggested." In other words, findings other than Effort Preference were given less weight. In total, Effort Preference was mentioned 26 times in the paper. By highlighting and amplifying the Effort Preference claim, the authors are signaling that it is the most important result of their study.

Here is how the authors explained their Effort Preference claim during the May 2, 2024 NIH Symposium on the intramural study: There is a valuation network in the brain that computes the cost-to-benefit ratio of effort and rewards, which impacts how effort feels. The process by which the valuation network determines the effort-reward ratio is called effort discounting. The authors claim that the EEfRT results are a behavioral sign for impaired effort discounting in ME patients. NIH named that impairment an “altered Effort Preference.” It manifests, according to NIH, in a discrepancy of how much ME patients think they can exert and how much they can, in fact, exert.
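The valuation-network explanation above can be illustrated with a standard effort-discounting model from the literature. This is a hedged sketch under the common assumption of a parabolic effort cost; it is not NIH's model (the explanation quoted above does not specify one), and the parameter values are invented for illustration.

```python
# Hedged sketch of effort discounting as commonly modeled in the literature:
# the subjective value of an offer is expected reward minus an effort cost,
# here parabolic. This is NOT NIH's actual model, just an illustration of
# the cost-to-benefit computation described above. The parameter k captures
# how steeply an individual discounts reward by required effort.

def subjective_value(reward, probability, effort, k):
    """Expected reward minus a parabolic effort cost (one common model)."""
    return probability * reward - k * effort ** 2

# The same hard offer is worth less to a high-k individual, who would
# therefore decline hard tasks that a low-k individual accepts.
offer = dict(reward=4.0, probability=0.88, effort=1.0)
print(round(subjective_value(**offer, k=0.5), 2))  # 3.02
print(round(subjective_value(**offer, k=3.0), 2))  # 0.52
```

On this kind of model, "impaired effort discounting" would amount to a claim about the shape or steepness of the cost term, which is why the behavioral choice data carry so much interpretive weight.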

In other words, the allegedly altered Effort Preference in patients manifests in a dysfunctional perception of effort and/or rewards resulting in a misperception of patients’ capacity to exert. This, NIH claims, leads to deconditioning and functional disability. That is the main outcome of the intramural paper.

In addition, the investigators claim that only controls—but not ME patients—showed signs of peripheral muscular fatigue and neuromuscular fatigue.

Note that the abstract refers to ME as a "disorder" in blunt contravention of the 2011 expert Myalgic Encephalomyelitis: International Consensus Criteria (ME:ICC) and the 2015 NIH-funded clinical definition by the Institute of Medicine (IOM) (now National Academy of Medicine). It is also untrue that there are no disease-modifying treatments. There are a number of treatments that improve ME symptoms, including the immune modulator and antiviral Ampligen, which is extremely effective in a sizable group of properly identified ME patients. What would have been accurate to say is that the FDA has been derelict in its duty to approve Ampligen, which has been in FDA-approved trials for decades, and other clearly effective therapeutics for ME. Of course, the claim that the patient-selection process was rigorous is self-serving and false, as has been discussed many times; it is addressed further in Parts 3 and 4 of this article. Finally, NIH just had to throw in the trope that ME is poorly understood, conflating the agency's refusal to accept the findings of many thousands of peer-reviewed, published papers by ME researchers documenting the biomedical abnormalities of the disease with a lack of understanding of the disease. The abstract set the tone for the rest of the paper.

The Staffing of this Study Guaranteed Harmful Results
Given the staffing of this study, its alleged findings were guaranteed to be extremely prejudicial and harmful for ME patients.

Saligan's Promotion of GET to Cure Catastrophizing in ME. Dr. Leorey Saligan, a co-author of the intramural paper, co-authored, while at NIH, a literature review on Chronic Fatigue Syndrome (or what he calls "chronic fatigue") and catastrophizing, "Association of catastrophizing and fatigue: A systematic review," which promotes cognitive-behavioral therapy for ME patients. Advocates had protested Saligan's involvement with the intramural study to no avail. See this excerpt from the catastrophizing paper:

“Catastrophizing was defined in these three studies as a belief that fatigue can cause negative outcomes such as dying. … Individuals with CFS grouped as high catastrophizers reported significantly greater fatigue severity than the non-catastrophizers. Although the high catastrophizers and non-catastrophizers experienced the same number of CFS-related symptoms, the high catastrophizers reported significantly greater disruption of fatigue with their activities of daily living than the non-catastrophizers. … One study investigated the effect of [mindfulness-based cognitive therapy] on fatigue by specifically targeting catastrophizing. Both fatigue and catastrophic thinking of CFS patients decreased immediately, at two and six months after [mindfulness-based cognitive therapy]. This result indicated that catastrophizing may serve as a behavioral marker that can be a target for fatigue reduction intervention like [mindfulness-based cognitive therapy]. Two of the three articles reviewed in this section showed small to moderate associations of catastrophizing on fatigue severity, and one showed a large association of catastrophizing on momentary fatigue and fatigue recall discrepancy.”

Walitt does not believe that ME is a medical entity. Dr. Brian Walitt, who designed and ran the study, is an acolyte of the biopsychosocial school. If you are not familiar with Walitt’s work, I recommend reading his Fibromyalgia papers. His views on Fibromyalgia mirror his views on ME because, in his opinion, the only difference between the two is which symptoms a patient predominantly complains about. He claims that if that is pain, the patient has Fibromyalgia, and if that is fatigue, the patient has ME.

This is perfectly aligned with Wessely’s view:

“The distinction between fibromyalgia and CFS is largely arbitrary and both overlap with affective disorder.”

That is, of course, false. Yes, there is symptom overlap between ME and Fibromyalgia, but the two also differ in meaningful ways and are separate medical entities. For example, Fibromyalgia patients experience symptom relief with exercise whereas ME patients are harmed by exercise. The paper "Culture, science and the changing nature of fibromyalgia" is a good place to start familiarizing yourself with Walitt's beliefs. In it, Walitt opined that Fibromyalgia—and, therefore, ME—is a psychocultural construct (i.e., "shaped primarily by psychological factors and societal influences") and a somatic symptom disorder that is associated with psychological illness (including major psychopathologies). Walitt asserts that patients gravitate toward Fibromyalgia or ME diagnoses because they are more socially acceptable, and therefore more desirable, than a psych diagnosis.

Below are two Walitt clips. In the first one, he talks about the relief of physicians once he educates them—”say[s] his message”—about no longer having to pretend that Fibromyalgia patients suffer from any abnormalities. In the second clip, Walitt claims that any and all life experience is psychosomatic.


The two clips above are from an interview Walitt gave in 2015. If you have not seen it yet, I encourage you to watch the ten-minute interview in its entirety; it will leave no doubt where the NIH study would inevitably lead under Walitt’s direction. In Walitt’s opinion, Fibromyalgia and ME are created by the mind and are normal ways of experiencing life as opposed to medical entities, so the fact that NIH saddled patients with the assertion that they overestimate effort and/or underestimate rewards should have come as no surprise to anybody. Walitt was unequivocal in this interview with respect to his opinion that Fibromyalgia, and, therefore, ME, is a disorder of subjective perception. NIH’s formalizing of Walitt’s views in the study that he ran by blaming a dysfunctional perception of effort and/or rewards was the predetermined outcome of the NIH study.

Since the 2015 interview, Walitt has come a long way in learning to be less obvious with his propaganda and more sophisticated with euphemisms in order to conceal his long-held psych beliefs behind more palatable and scientific-sounding verbiage, but Walitt’s convictions obviously have not changed. It is hard to fathom that somebody like Walitt has a place at NIH, but he is far from the only one there with such extremist views.

You can read more about Walitt here (my analysis of his 2015 interview) and here (my initial analysis of the intramural study).

Right out of Wessely’s Playbook
It is crucial to understand that the authors of the NIH study, with its Effort Preference claim, have been emphatically endorsing, amplifying, and building on the narrative of the biopsychosocial origins of ME propagated by Wessely, his comrades Dr. Michael Sharpe and Dr. Peter White, and other followers of the biopsychosocial school of ME, which falsely claims that patients can recover by adjusting dysfunctional beliefs and behaviors and reversing deconditioning. This has manifested in various references by NIH over time, directly and indirectly citing, and accepting as factually correct, the work of Wessely et al.

For example, take a look at the following slide frequently used by Dr. Avindra Nath, principal investigator of the intramural study:

The slide cites two studies. The first one cites Wessely and White. The second one is co-authored by Wessely; it is cited in the intramural NIH paper.

Post-infective and chronic fatigue syndromes precipitated by viral and non-viral pathogens: prospective cohort study

and

Chronic fatigue and minor psychiatric morbidity after viral meningitis: a controlled study

Most importantly, the Effort Preference claim clearly and unabashedly continues and expands on the work of Wessely et al., according to whom ME is a "general disorder of perception" and of "misperception" of "the sense of effort." Below is a quote by Wessely from the "Chronic Fatigue Syndrome" chapter of the Encyclopedia of Stress (Wessely, Simon; Cleare, Anthony J. (2000). "Chronic fatigue syndrome". In Fink, George (ed.). Encyclopedia of Stress. Academic Press. pp. 460–467. ISBN 9780080569772).

“One theme that emerges from the literature of all the fatigue syndromes is the possibility of a general disorder of perception, perhaps of both symptoms and disability. At the heart of this misperception lies the sense of effort. Chronic fatigue syndrome patients clearly experience increased effort in everyday physical and mental tasks, reflected in a sense of painful muscle exertion and painful cognitive processing. This increased effort is the [sic] not the result of increased neuromuscular or metabolic demands (a Victorian concept), nor does it result in any substantial decline in actual muscle or cognitive performance. The result is a mismatch between patients’ evaluation of their physical and mental functioning and the external evidence of any consistent deficits. The basis of this disorder of effort must remain speculative, particularly since the perception of effort is a complex topic. It is possible that it is because the sufferer needs to devote more attention, even energy, to processes that the rest of us find automatic, be it muscular exertion or mental concentration.

“From this fundamental problem flow other problems identified in CFS, such as increased symptom monitoring, decreased tolerance, increased anxiety, and so on. These are not unique to CFS and have been described in fibromyalgia and irritable bowel syndrome. Thus some centrally mediated disorder of perception of information may underlie the experience of fatigue syndromes and explain the widespread discrepancies between the intensity of symptoms and disability and objective testing of a number of different parameters, both physiological and neuropsychological.” [emphasis added]

Below is the corresponding screenshot from the book:

This Wessely quote is practically a summary of the NIH study’s main claim, an altered Effort Preference in ME. Both Wessely and NIH agree in their assertion that ME is a centrally mediated disorder; see the screenshot of NIH’s conclusion above as well as the following quote from the NIH paper:

“Considering all the data together, PI-ME/CFS appears to be a centrally mediated disorder.”

Moreover, NIH's absurd failure to find POTS, Neurally Mediated Hypotension, decreased Natural Killer Cell function, neurocognitive dysfunction, muscle fatigue, sleep abnormalities, lymph node enlargement, ventilatory function abnormalities, abnormalities on brain imaging, brain injury, and other very common findings—does that sound like NIH studied properly diagnosed ME patients?—jibes with Wessely's false claim that there is no "substantial decline in actual muscle or cognitive performance" and no "external evidence of any consistent deficits," allegedly resulting in "widespread discrepancies between the intensity of symptoms and disability and objective testing."

Walitt’s and Saligan’s clairvoyant prediction. In NIH’s press release accompanying the publication of the paper, Walitt is quoted as follows:

“… fatigue may arise from a mismatch between what someone thinks they can achieve and what their bodies perform.”

That is NIH’s characterization of Effort Preference. Walitt’s message is unmistakable: patients only think that they are impaired. This statement mirrors, to a T, Wessely’s claim:

“The result is a mismatch between patients’ evaluation of their physical and mental functioning and the external evidence of any consistent deficits”

It also tracks with how Walitt and Saligan characterized ME in a 2016 NIH chemobrain paper they co-authored:

“The discordance between the severity of subjective experience and that of objective impairment is the hallmark of somatoform illnesses, such as fibromyalgia and chronic fatigue syndrome.”

Given that Walitt and Saligan are a rheumatologist and a nurse scientist, respectively, who likely had not heard of the EEfRT at the time of the chemobrain paper, it is quite extraordinary that they predicted in 2016 precisely what the 2024 NIH study that Walitt would end up designing and running, and particularly its EEfRT testing, would find. The very thing that the NIH authors claim defines ME—Effort Preference, or what Walitt calls "a mismatch between what someone thinks they can achieve and what their bodies perform"—is what Walitt and Saligan characterized as "the hallmark of somatoform illnesses, such as … chronic fatigue syndrome" in 2016 and what Wessely claims is at the heart of ME.

The table below illustrates just how closely the NIH study tracks Wessely’s effort claim; NIH is 100% aligned with Wessely:

Effort Preference claim was the main goal of the intramural study. Showing “the existence of EEfRT performance difference” between ME patients and controls was the “primary objective” of the intramural study, according to a comment by NIH in the Peer Review File. In other words, confirming Wessely’s claim was what the NIH investigators had set out to do.

In Part 2, I will share my analysis of the EEfRT data and of NIH’s misrepresentation of the data that was required to make their Effort Preference claim.

Posted in Uncategorized | Comments Off on The NIH Intramural ME Study: “Lies, Damn Lies, and Statistics” (Part 1)

NIH Study: Walitt Strikes Again

It has taken NIH eight full years to complete their intramural study on Myalgic Encephalomyelitis (or what NIH calls "ME/CFS") and publish their paper "Deep phenotyping of post-infectious myalgic encephalomyelitis/chronic fatigue syndrome." I take no pleasure in the fact that some of us accurately predicted that the outcome of this study would be a delivery vehicle for the introduction of a new psych label for ME—which turned out to be the concept of effort preference—re-branding post-exertional malaise (PEM) and pacing as psychological.

The burying of ME has been going on for a long time at both NIH and CDC. Patients need to learn from this instead of giving NIH the benefit of the doubt because the agency has proven, over decades, that it would be foolish and dangerous to do that. However, even without awareness of that history, it would be difficult not to see the serious flaws in this paper. What follows is a truncated analysis of some of the issues.

Concerns Materialized

Advocates' and patients' main concern from the outset was NIH's historic refusal to accept that ME is an organic disease process rather than a vague syndrome of fatigue and malaise. ME patients experience a dramatic loss of health and function that is both severe and persistent, yet NIH has never taken ME seriously by investing in research that would look in earnest for the cause of this loss. In the case of this study, the key concerns were that NIH would 1) use overly inclusive criteria to select a heterogeneous patient group, resulting in meaningless data, 2) use a small patient cohort that would produce statistically irrelevant results, 3) put researchers with an agenda in charge of the study, who would disregard or disrespect the scientific method, the large number of abnormal biological findings by renowned researchers in the field, and the experience and knowledge of patients, and 4) perpetuate or even expand the institutional bias against ME as an organic disease in favor of a psychological disorder. The latter two threats loomed particularly large in light of NIH's involvement of Dr. Brian Walitt, an extremist ME denier, who has been vocal for many years with his unsupported view that ME is somatoform. The actual outcome is an exponential step in the wrong direction because NIH confirmed, and even exceeded, all of our concerns.

The ME community extensively protested numerous aspects of the NIH intramural study in 2016, with some limited success. For example, the community's protest prevented NIH's inclusion of a control group of patients with Functional Movement Disorder, a form of Functional Neurological Disorder. That initial plan immediately tipped the community off as to NIH's agenda of classifying ME as functional or otherwise psychological. Another win was the removal from the study of Dr. Fred Gill, a proponent of Graded Exercise Therapy and Cognitive Behavioral Therapy for ME patients.

Unspecific Criteria, Mixing and Matching of Criteria
We were also able to head off the use for patient selection of the 2005 Reeves Criteria, a preposterously over-inclusive definition of ME that would have guaranteed the participation of patients who do not actually have ME, e.g., patients with idiopathic fatigue, primary depression, etc. However, NIH used two other overly broad and unspecific definitions, one being the 2015 criteria of the former Institute of Medicine (IOM). The IOM redefinition of ME, a government-sponsored project, relies on self-reporting and lacks exclusions, so that many patients with, for example, an untreated sleep disorder would satisfy it. It was the subject of one of the largest ME-community protests, which included an open letter to the Secretary of HHS, signed by more than fifty experts in the field, objecting to the redefinition of ME by an IOM panel stacked with a considerable number (about half) of non-experts. Moreover, the IOM committee members were emphatic in their instructions that the IOM Criteria were not to be used in research precisely because they do not guard against selecting the wrong cohort, a fatal blow to any research.

NIH’s use of the IOM definition in their intramural study constituted exactly the kind of bait and switch that independent advocates had been warning about with respect to the IOM definition. NIH, as the world’s premier medical research institution, which had bankrolled the new IOM definition, was, of course, acutely aware of the impropriety of using the IOM criteria. Similarly, the Fukuda Criteria, also used for patient selection in this study, tend to capture a large number of patients who do not have ME, not unlike the Reeves Criteria.

Why would NIH attempt to use the most unspecific U.S. criteria, Reeves, for patient selection and not even consider—despite changing the study criteria at least five times in the process of designing this study—the strictest criteria available, the 2011 International Consensus Criteria written by ME experts? The strictest criteria would seem desirable in any ME research, but especially at an institution of NIH's reputation researching a disease that the agency had abandoned for decades. Why was the agency afraid to study the most robust cohort?

And why did NIH use three different definitions in this study, the third one being the experts' Canadian Consensus Criteria of 2003 (CCC)? I have seen ME studies that require each participating patient to meet each of several different criteria, although I have often wondered why those researchers would not just use the strictest of those criteria. If I had to guess, the inclusion of the IOM Criteria in particular, but also the use of Fukuda, is probably a nod to the CDC and NIH in hopes of securing future NIH funding by including government-sponsored definitions. However, the situation here is different: patients had to meet just one of the three definitions, not all of them. So, basically, NIH was studying three different, non-comparable cohorts without differentiating them, potentially treating unlike patient groups as if they were the same and, therefore, making the entire study suspect.

It is true that ME experts acted as adjudicators in the patient-selection process, but as far as I know, all of those adjudicators were involved in ME research at the time and were likely vying for NIH grants and, therefore, not likely to risk antagonizing NIH. More importantly, Walitt was the one pre-screening patients, providing an opportunity to skew patient selection in a way that favors the inclusion of patients with a less classic ME profile, or worse, which apparently is exactly what happened. The study authors' self-serving claim that they "used rigorous criteria to recruit" ME patients, clearly untrue, casts doubt on their other claims.

When NIH felt the heat from independent advocates and patients in 2016, they promised that all participating patients would have to meet the CCC and be objectively tested for PEM in an effort to quash the criticism.

This was once again a bait and switch because NIH kept neither of these commitments. For example, only nine patients met the CCC, the strictest criteria used. In case there was any doubt as to the broad nature of the IOM Criteria, all seventeen ME patients met that definition. Fukuda was not far behind with fourteen patients.

Moreover, patients were not objectively tested for PEM before enrollment to rule out misdiagnoses. Self-reports of PEM are often undependable; therefore, relying on them is an unscientific approach when an objective test is available. Two-day CPETs are the gold standard for identifying post-exertional exacerbation, and NIH's refusal to require them as part of the cohort-selection process is illuminating. Research by questionnaire will not result in science deserving of the label.
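For readers unfamiliar with the two-day CPET, the comparison works by repeating a maximal exercise test roughly 24 hours later and measuring the decline in objective performance, for example workload at the ventilatory threshold. The sketch below is purely illustrative: the decline threshold and all numbers are assumptions for demonstration, not clinical standards asserted by this article.

```python
# Illustrative sketch only: two-day CPET protocols flag PEM by comparing
# day-2 to day-1 performance (e.g., workload in watts at the ventilatory
# threshold). The 7% decline threshold below is an assumption chosen for
# illustration, not a clinical standard endorsed by the text.

def percent_decline(day1, day2):
    """Percent drop from day 1 to day 2 (positive = worse on day 2)."""
    return (day1 - day2) / day1 * 100.0

def flags_pem(day1_watts, day2_watts, threshold_pct=7.0):
    """True if day-2 workload fell by more than the chosen threshold."""
    return percent_decline(day1_watts, day2_watts) > threshold_pct

print(percent_decline(100.0, 85.0))   # 15.0
print(flags_pem(100.0, 85.0))         # True
print(flags_pem(100.0, 98.0))         # False
```

The point of the repeated test is that day-1 results alone look normal in many ME patients; it is the objective day-2 decline, not a questionnaire answer, that distinguishes PEM.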

Statistically Irrelevant Cohort Size
The extremely small cohort size was another concern for many independent advocates. The study was originally designed for forty ME patients, a disproportionately small number for a disease that the CDC has been claiming for nearly two decades afflicts millions in this country. Because this study was on NIH's back burner, it was completed years after the original projected completion date of 2018 and ultimately included only seventeen ME patients, a farcical cohort size that basically constitutes statistical gaslighting. The agency claims that this delay and downsizing were due to the COVID pandemic, a claim that does not hold up on examination, as the study was supposed to be completed well before the beginning of the pandemic. In any event, it would be naive to underestimate the impact of an NIH paper, no matter how underpowered it is.

Importantly, the paper claims that four of the seventeen selected patients, or nearly a quarter, spontaneously recovered. Because the usual recovery rate in ME (if there are any true recoveries, as opposed to misdiagnoses or patients adjusting their expectations of what healthy means) is much smaller, at around five percent, some or all of those four patients might have been improperly selected. Frankly, such a high number of purported recoveries raises questions about the ME diagnoses of the other participants. Moreover, the paper's inclusion of the four allegedly recovered subjects' data is dubious and calls into question the validity of the entire paper.
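A rough plausibility check of those recovery numbers: if the background recovery rate really were around five percent, the chance of seeing four or more recoveries among seventeen patients follows from the binomial distribution. This is a back-of-the-envelope sketch, not a formal reanalysis; the five-percent rate is the assumption discussed above.

```python
# Back-of-the-envelope check: how likely are 4+ spontaneous recoveries in a
# cohort of 17 if the true recovery rate is 5%? (The 5% figure is an
# assumption taken from the discussion above, not a measured quantity.)
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = prob_at_least(4, 17, 0.05)
print(round(p, 3))  # 0.009, i.e., under one percent
```

Under that assumption, the reported recovery count would be a well-under-one-percent event, which is the quantitative core of the suspicion that some of the four were misselected.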

Ridiculing of Patients
There were a number of other indicators of NIH’s antagonistic attitude toward ME, including the fact that the researchers involved, other than Walitt, obviously wanted nothing to do with ME and apparently viewed their involvement as a hardship assignment. The research team expressed their resentment by dubbing themselves “Team Tired,” a deeply ableist and demeaning name that foreshadowed the effort-preference outcome and confirmed that the institutional bias vis-a-vis ME that has been entrenched at NIH for decades is alive and well.


Dr. Brian Walitt

NIH's Dr. Brian Walitt was the Lead Associate Investigator and Lead Author of the paper. He designed the study protocol, pre-screened patients for participation, and made the day-to-day decisions in this study.

In a 2016 NIH ME advocacy call, Walitt famously claimed that he has no bias—not just no bias vis-a-vis ME, but no bias at all—which is rich given that Walitt is an acolyte of British psychiatrist Dr. Simon Wessely and of medical historian and professor of psychiatry Edward Shorter, leading, obstinate, and prolific proponents of the disgraced and discarded theory of a biopsychosocial origin of ME, as well as of others in the biopsychosocial school.

The similarities between Wessely and Walitt are striking and plentiful, for example:

Wessely in 1994:

“I will argue that M.E. is simply a belief, the belief that one has an illness called M.E.”

Walitt in 2015:

“The experience of fibromyalgia is very much real to the people who have it.”


There is a similar resemblance between the views of Walitt and Shorter, who devoted an entire book, From Paralysis to Fatigue, to the history of psychosomatic illness, which in his view includes ME. Naturally, it was Walitt who invited Shorter for a talk at NIH in 2016. This heavily criticized event was staunchly defended by Dr. Walter Koroshetz, Director of the National Institute of Neurological Disorders and Stroke (NINDS). Koroshetz argued that the invitation did not constitute an endorsement, as though providing Shorter with an NIH pulpit were harmless. Would NIH have invited Andrew Wakefield? Koroshetz's feigned obtuseness fooled nobody, given the unsubtle similarity of Shorter's and Walitt's views.

As is the case with Wessely, there is an endless supply of execrable Shorter quotes regarding ME. Here is just one:

Patients’ groups and physician-enthusiasts of CFS have seized with glee a trickle of inchoate immunological findings.… Quite naturally, psychosomatic patients who want their symptoms to keep abreast of scientific progress wish to see the underlying source of their problems as immunological in nature.

Long-time independent advocate Liz Willow aptly characterized Walitt in a recent tweet:


Dr. Stephen Straus was the architect of the concept of ME as mere "Chronic Fatigue," a concept that has since permeated all federal health agencies, most notably the CDC and NIH. As reported by the late ME advocate Craig Maupin, author of the blog The CFS Report, as the result of a Freedom of Information Act request, Straus is on record in a memo to the CDC's Dr. Fukuda (of the 1994 Fukuda Criteria) as to his hope and plan of reducing ME—a complex multi-system disease with many severe symptoms, fatigue not being the defining one—to mere fatigue as the result of the then-new Fukuda Criteria. Straus predicted that "the notion of a discrete form of fatiguing illnesses will evaporate. We would then be left with Chronic Fatigue that can be distinguished as Idiopathic or Secondary to an identifiable medical or psychiatric disorder." Straus sounded elated when he concluded this fantasy with, "I consider this a desirable outcome." In other words, the goal of the most prominent ME researcher at NIH in the 1980s and 1990s was the abandonment of ME, and nothing seems to have changed at NIH more than three decades later as Walitt carefully traces Straus's footsteps.

Moreover, Straus was dismissive of findings of immunological abnormalities in ME. He also considered ME to be on a "continuum of illnesses in which fatigue is either the most dominant symptom or the most clearly articulated." That is exactly Walitt's position when he considers ME and Fibromyalgia to be all but identical, their diagnosis merely hinging on "what [patients] complain about":

The complaint that predominates your existence is how you end up being named, which has nothing to do with your physiology.


That is who pre-screened ME patients for the intramural NIH study: somebody who might as well have thrown darts for patient selection. And once again, Walitt finds himself supported by Wessely:

The distinction between fibromyalgia and CFS is largely arbitrary and both overlap with affective disorder.

According to ME expert Dr. Byron Hyde, Straus was even verbally violent and threatening toward ME patients in person at a CDC meeting in Atlanta (start listening at 4:20). Even this was not enough for NIH to remove him from ME research.


As a rheumatologist, Walitt infiltrated and embedded himself in the world of ME (and now also Long COVID and Gulf War Illness) via Fibromyalgia. Although Walitt seemed to be doing a reasonable, though ultimately unconvincing, job feigning compassion toward his Fibromyalgia patients, whom he paraded around in his presentations like circus attractions, his unhinged views are aggressively hostile toward ME and Fibromyalgia patients; he has been vocal with his conclusory view that both ME and Fibromyalgia are somatoform.

The discordance between the severity of subjective experience and that of objective impairment is the hallmark of somatoform illnesses, such as fibromyalgia and chronic fatigue syndrome.

For well over a decade now, Walitt has been establishing himself as an ME and Fibromyalgia enemy who considers the symptoms of ME and Fibromyalgia as within the “range of normal” and not worthy of validation or treatment because, in his opinion, they do not constitute medical entities. What is abnormal, in his view, is the patients’ beliefs that they are suffering from a disease. As if it is not bad enough that an NIH researcher has been allowed to build a career on such propaganda, Walitt works hard to convert his colleagues whom he claims are relieved that they no longer have to pretend that Fibromyalgia is pathological after he “say[s] [his] message.” This disturbing ten-minute Walitt interview about Fibromyalgia will leave little to the imagination in terms of Walitt’s predisposition. I analyzed the ghastly views he expressed in this interview when independent advocates protested his involvement with the study.

Another good example of Walitt’s disturbed views on Fibromyalgia and ME is an opinion paper that he co-authored with his biopsychosocial soulmate, the late Dr. Frederick Wolfe: “Culture, science and the changing nature of fibromyalgia.” Somebody saved a copy of this paper on the Wayback Machine.

In this paper, in which he quotes Wessely and Shorter, Walitt equates Fibromyalgia with Neurasthenia—i.e., “the vapors,” “depression of spirit,” “hypochondriac affections,” “effort syndrome,” etc. Neurasthenia is a psychologized fatigue concept that had started out as a central-nervous-system disorder and was the predecessor of Holmes’s and Fukuda’s “CFS.” Walitt referred to Wessely’s framing of ME as Neurasthenia in the slick and insidious “Old wine in new bottles: neurasthenia and M.E.” In his paper, Walitt expresses his belief (no science required) that Fibromyalgia is psychocultural, i.e., “shaped primarily by psychological factors and societal influences” and is associated/comorbid with psychological illness. Throughout the paper, Walitt labels Fibromyalgia—in addition to psychocultural—psychological, psychogenic, psychosomatic, a Somatic Symptom Disorder (i.e., somatoform), a social construct, etc. He further claims that Fibromyalgia is related to psychological disorders (including major psychopathologies), psychosomatic symptoms, and personality disorders and that it is a convenient, because socially acceptable, diagnosis for mentally ill patients to hide behind. According to Walitt, Fibromyalgia patients are not to be trusted because they have too many symptoms that are too severe and too unusual while appearing too healthy, resulting in physicians’ shunning of them. Walitt laments the failure of Fibromyalgia as a psychological concept and strongly disapproves of what he calls the “success” of Fibromyalgia.
He sounds practically paranoid when he blames “powerful societal forces,” which he claims have been “marshalled,” for propping up the “‘real disease’ message.” Walitt frames Fibromyalgia as a con job by patients and patient organizations whom he claims were enabled by other malevolent actors and forces, such as the American College of Rheumatology (guilty of naming and defining it), governments, disability and pension systems, physicians, the legal and academic communities, scientific organizations, pharmaceutical companies, the Internet, and ICD codes. That’s an impressive list. Imagine if patients indeed had the allyship of those stakeholders and systems! There is no other way to characterize Walitt’s Fibromyalgia views than as deranged. And, of course, because ME and Fibromyalgia are basically the same to him, all of this applies to ME according to Walitt’s twisted views.

As sordid as the government’s record regarding ME has been, the intramural NIH study has opened a new, even darker chapter for patients. Putting Walitt in charge of this study despite his unmistakable bias against ME patients is just one item on a long list evidencing an atrocious track record on the part of the federal health agencies, including NIH, when it comes to ME. It warrants a reminder that there have been calls by federal health officials to silence critical patient voices as well as actual threats against members of CFSAC—the since-dissolved federal Chronic Fatigue Syndrome Advisory Committee—who refused to toe the party line, in addition to many actions by federal officials designed to thwart patient advocacy.

Walitt’s unyielding belief that ME, Fibromyalgia, and other diseases are reflections of an incorrect inner understanding of patients’ bodies’ capabilities seems to have grown only stronger over the years. His extremism will likely be weaponized against patients for decades to come unless NIH stops involving him in these studies. So far, NIH has circled the wagons to defend him and even promoted him by moving him from the National Institute of Nursing Research (NINR) to the more prestigious NINDS and making him head of the Interoceptive Disorders Unit.

NIH’s unconscionable insistence on putting a researcher in charge who propagates at NIH and elsewhere that ME does not even exist sends a chilling and unambiguous signal that the agency is committed to going down the biopsychosocial rabbit hole with the unmistakable goal of discrediting ME and ultimately discarding it as a medical entity altogether, which is just what Walitt has been lobbying for, inspired by his role model Straus.

Effort Preference

Functional and Somatoform
The community’s worst concern materialized. All protestations of Nath and NIH surrogates in the media to the contrary, this study marks a renewed effort meant to lay the groundwork for NIH’s and CDC’s official categorizing of ME as psychological after decades of working toward that goal.

Just as the advocates predicted, NIH attached a new psychological label to ME: effort preference. According to NIH, “[T]hese findings suggest that effort preference, not fatigue, is the defining motor behavior of this illness.” (Advocates, of course, agree that fatigue is not the defining symptom of ME, but what NIH was really getting at is that there is no muscle fatigue in ME.) The paper also asserts, somewhat inconsistently, “Fatigue is defined by effort preferences and central autonomic dysfunction.” NIH defined effort preference as “the decision to avoid the harder task” or “how much effort a person subjectively wants to exert.” The term effort preference—featured prominently in the Abstract and used twenty-six times throughout the paper, i.e., a major aspect of the study’s outcome—indicates a choice on the part of patients in how active they are. That part of the paper is an abomination and has already caused and will continue to cause great harm to patients who face prejudice, neglect, ridicule, and abuse on a regular basis.

From the start, NIH was committed to finding a functional angle in ME as evidenced by their attempt to include patients with Functional Movement Disorder, a common sub-type of Functional Neurological Disorder, as a control group, seemingly an effort at reverse-engineering their desired outcome. This quote from the paper seems to be saying functional as well: “Considering all the data together, PI-ME/CFS appears to be a centrally mediated disorder.” This is a conclusion at which NIH arrived after allegedly finding reduced brain activity in the right temporal-parietal area (TPJ). The paper claims:

This decreased brain activity is experienced as physical and psychological symptoms and impacts effort preferences.

And then there was the NIH media blitz following the publication of the paper with the agency hammering home the functional-disorder message. Dr. Nath said:

We did not find any structural abnormalities but multiple functional abnormalities.

and

It’s a functional suppression; it’s not a structural damage.

So, the brain of ME patients is not working properly, but NIH found no reason for that malfunction. Ergo, the functional classification, which in light of NIH’s limited, flawed, and biased inquiry is entirely unsupported. How can the study simultaneously find that patients prefer to exert themselves less than they safely can but also that their brain is limiting their ability to exert themselves? After all, effort preference and effort intolerance are mutually exclusive. Is it a choice or not? Were they lying then, or are they lying now?

If you are thinking “no structural damage” cannot be a good thing for ME patients, you would be correct and not just from a scientific-progress perspective. Classifying ME as functional is psychologizing it. Just think of how patients are faring who have a Functional Neurological Disorder (formerly Conversion Disorder) diagnosis, which is listed as a mental illness in the DSM-5. To assess ME patients’ effort-based decision-making, NIH used the Effort-Expenditure for Rewards Task (EEfRT), an alleged measurement of motivation and anhedonia, which is the core symptom of Major Depressive Disorder. Why would NIH even investigate a psychological model of ME while refusing to perform a second-day CPET if it were indeed committed to science?

In the context of NIH’s claim of having found only functional abnormalities and no structural abnormalities, consider Walitt’s historical emphasis on the dual function of the brain: biological and psychological. According to him, the symptoms of patients with fibromyalgia and ME are the result of normal brain function, i.e., not of structural abnormalities. Although Walitt acknowledges that patients believe that their symptoms indicate that they are sick, he claims that those symptoms are not a sign of abnormal biology indicative of disease.

But what NIH has done with the effort-preference model of ME goes further than stigmatizing patients with a functional label. NIH claims that they did not find any muscle fatigue as the underlying cause for the decline in performance of ME patients. In other words, according to NIH, ME patients limit their effort due to unwarranted concerns about PEM despite their bodies, in fact, not being prevented by an organic cause from performing harder tasks. That is a textbook case of a somatoform disorder (technically now captured as Somatic Symptom Disorder in the DSM-5), a disorder characterized by physical symptoms accompanied by an excessive amount of time, energy, emotion, and/or behavior related to the symptom that results in significant distress and/or dysfunction. If patients’ bodies are not actually dictating that they stay within strict limits, then their pacing by staying within safer limits and thereby severely curtailing their lives surely would constitute an excessive and dysfunctional focus on their symptoms. Of course, Functional Neurological Disorder has historically also been considered a sub-category of somatoform disorders. NIH’s fancy footwork in the paper itself and particularly in the media cannot cover up the new psych label that they pinned on ME.

Crucially, the presence of many biological abnormalities in ME patients does not rule out a somatoform disorder because symptoms do not have to be medically unexplained to fit the DSM-5 diagnosis of Somatic Symptom Disorder. Once caught in the somatoform searchlights, patients cannot escape them no matter how well adjusted they are. The more that patients point to their numerous abnormal test results, the more they are faulted for their alleged obsession with their symptoms, as if they are pulled down by quicksand. What a carefully constructed, treacherous catch-22!

It is self-evident that Walitt is responsible for the effort-preference hit job. When you compare the psych language historically used by him—“discordance between the severity of subjective experience and that of objective impairment is the hallmark of somatoform illnesses, such as … chronic fatigue syndrome”—with the soundbite he gave to the NIH press office regarding the NIH paper, it tracks one hundred percent:

Rather than physical exhaustion or a lack of motivation, fatigue may arise from a mismatch between what someone thinks they can achieve and what their bodies perform.

In other words, the psych part of the study was pre-determined by NIH’s putting Walitt in charge.

It would be interesting to hear NIH explain how an effort preference leads to enlarged lymph nodes, elevated viral titers, abnormal two-day CPETs, abnormal SPECT scans, etc. Moreover, given their effort-preference rubbish, should patients not be getting better instead of crashing when they push themselves past their safer limits as they often have to because life demands it?

NIH’s solidifying of its stance that ME is a psychological disorder will send an unambiguous signal to researchers who seek funding for biomedical studies not to submit grant proposals to NIH that are inconsistent with a functional, somatoform, or other psychological angle on ME. It is already nearly impossible for independent researchers to obtain NIH funding for ME studies focused on physiological causes and mitigations. This study might well slam the door on the experts in the field who, unlike NIH, have advanced the science of ME dramatically, because extramural researchers may now be prevented from continuing that work and from replicating ME research.

No Muscle Fatigue
Participating in this study was obviously grueling for patients and likely involved travel for many of them and more than one trip to the NIH facility for some. In other words, patients who were properly diagnosed with ME were probably experiencing PEM by the time they arrived at NIH and certainly once testing began. Nevertheless, the paper rules out the possibility that the subjects’ decline in performance was caused by muscle fatigue, even though the data does not validly support that claim. Several tests were performed that relate to the effort-preference claim. None of them involved the entire minute ME-patient cohort, and the only test that even attempted, through the use of electromyography, to determine whether the declining performance of ME patients was due to muscle fatigue as opposed to avoidance behavior involved only eight ME patients. Add to that the issues with patient selection (What are the odds that those electromyography patients met CCC?), and the effort-preference claim is not only unrelated to science; it is hostile to science.

This raises the question: why would NIH even consider publishing this paper, which brands ME patients with a sweeping and obviously manufactured psych label based on a perversion of science? I understand the motive of Walitt: unfounded psych claims are the fuel on which his career advancement feeds. Unfortunately, there is no other way to interpret NIH’s decision to publish this train wreck than that NIH shares Walitt’s agenda.

The effort-preference concept revived the pernicious notions of “fear of exercise” and “avoidance behavior” and pathologized pacing, without which patients continue to deteriorate. The resemblance between those discredited concepts and effort preference tracks with Walitt’s favored research area: aversive symptoms that develop after certain triggers, such as infections.

For years, patients were urging the government to study PEM. The federal health agencies refused to even acknowledge PEM until that became untenable in light of the strong science around two-day CPETs. They then pivoted to blaming patients for not pacing properly when they crashed. Now, they found a way to weaponize PEM by claiming that patients just think that they are crashed, but their brain is mistaken. NIH has taken one of the major ME symptoms—the exacerbation of symptoms after exertion or what NIH calls effort sensitivity—and redefined and psychologized it by bastardizing science.

Of note, even the one-day CPET was performed on only eight ME patients. NIH’s decision not to adopt the gold-standard two-day CPET allowed them to claim that patients are deconditioned. They knew that a second CPET would clearly show a significant drop-off and, thus, refute the implication that deconditioning is involved in patients’ disability. A second-day test would also have dropped a grenade into their effort-preference game plan, and that is likely why NIH shied away from it, not financial constraints at half a million dollars per patient and not feigned concern for patients’ wellbeing as they claimed.

Implications for Long COVID

There is an obvious effort at NIH to increasingly platform Walitt prominently with respect to ME, Long COVID, and Gulf War Illness despite deafening advocate protests. Although the science is not there yet to show that PEM-like Long COVID is identical to ME—and many in the ME community, including eminent researchers in the field, certainly have their doubts about that—NIH seems to consider them basically the same. So, with that backdrop, it is important to note that Walitt is the principal investigator for ongoing intramural Long COVID and Gulf War Illness studies at NIH, and I believe that those communities should brace themselves for an outcome of their NIH studies that will be similar to the ME paper in terms of a new psychological label.

Walitt has a habit of giving previews of his future studies, which apparently is how NIH does science these days. In his paper “A clinical primer for the expected and potential post-COVID-19 syndromes,” he is predicting that he will find sociocultural stressors (I sense an effort to be more refined with his euphemisms since the days when he called Fibromyalgia psychocultural) as well as neuropsychiatric and psychiatric issues. In addition, he is prophesying issues with the functional architecture of the brain, just like NIH claims to have uncovered with respect to ME. Whatever Walitt’s Long COVID studies will cost the taxpayer will be merely to buy the appearance of science and validity, not actually to do legitimate science. Here is a screenshot of part of the paper’s conclusions:

It is untenable for NIH to continue involving Walitt in any ME, Fibromyalgia, Long COVID, and Gulf War Illness studies. If the agency nevertheless does so, they are sending a clear signal. Empowering obviously extremist researchers is not an accident.

I realize that this requires a difficult adjustment reaction, but I would strongly caution Long COVID patients against believing that what happened to ME patients over many decades could or would not happen to Long COVID patients. That would be a dangerously rose-colored view of the state of affairs that underestimates both the agenda and the power of the biopsychosocial school. Wessely disciples, such as Walitt, have mercilessly built successful careers on the backs of ME and other patients, and they will not hesitate to bring Long COVID under their psych umbrella. The only chance Long COVID patients have is getting out in front of this and starting to protest Walitt’s involvement now. Once the Walitt Long COVID paper is published, it will be too late, and his appointment as principal investigator of an ongoing NIH Long COVID study, for which he is currently recruiting patients, means that Long COVID advocates will come from behind on this.

I, therefore, urge Long COVID patients to rally as many patients as possible (using appropriate caution to protect their health) as well as healthy allies to prevent NIH’s leveraging of the intramural ME study in their intramural Long COVID study. This will be an existential fight for the Long COVID community, and all stops have to be pulled out to prevent Long COVID from being buried. Long COVID has strength in numbers, and the sooner patients realize the enormity of the fight they are in for, the higher the chances that Long COVID will fare better than ME has so far. Patients and advocates have a voice and the power to affect this situation if they take it seriously and act vigorously. Their future medical care and, for some, even their lives hang in the balance.

I also encourage Long COVID patients to study the sordid ME history, no small feat. A good start would be reading Osler’s Web, the acclaimed book about the history and politics of ME written by award-winning investigative journalist and brilliant writer Hillary Johnson.

Existential Risk to ME Patients’ Income

In 2016, I warned about the potential fall-out of Walitt’s involvement in this study with respect to patients’ benefits. Psychologizing ME as the NIH intramural study has done poses dangers on many fronts (medical treatment, support from family and friends, presentation of ME in the media, contamination of research, etc.), but I cannot overemphasize the danger to patients’ long-term disability benefits, which generally are limited to twenty-four months for disabilities caused or contributed to by mental/nervous disorders, including psychological disorders; this includes any health issues allegedly presenting with the psychological concept of effort preference. Contrast that with disability benefits not involving mental-health aspects, which extend until retirement age or recovery. Already approved patients who are past the twenty-four-months mark are no less at risk. Due to the frequent abuse by insurers of the standard mental-health limitation in long-term disability policies, it has always been exceedingly difficult for ME patients to get approved for long-term disability benefits beyond twenty-four months, but the intramural NIH study is all but guaranteed to make that exponentially harder, if not impossible, by providing insurers with ammunition against patients of which they will undoubtedly make good use.

Conclusion

This NIH study is a damning case study on flawed methodology. Worse than the methodological flaws of the study is NIH’s obvious biopsychosocial ME agenda confirming its decades-old institutional bias of ME as a psychological disorder. Letting Walitt run the study made this outcome a foregone conclusion. Had this been private-sector research in any field, the research team would have known better than to publish this paper or else have reason for serious concern about reputational damage for being affiliated with a study that manifestly worked hard to deliver a biased and predetermined outcome. NIH lifers, obviously, do not have such concerns because of the lack of repercussions for scientific misconduct at NIH, but if any of the 75 researchers involved with this study have private-sector aspirations, having their name on this paper might turn out to be a career-limiting move unless they distance themselves from it.

In 2013, NIH bought a new ME definition from the IOM while remaining steadfast in their decades-long refusal to conduct ME research. NIH has now taken that new definition and slapped a psych label on ME patients through its intramural study—a manufactured outcome at which NIH arrived by letting somebody whose life’s mission has been to psychologize diseases such as ME look at only eight questionably selected patients. This study cost eight million dollars of taxpayer money and delivered exactly the outcome Walitt—and by association the NIH—desired: a reinvigorated campaign to dismiss ME as an organic disease. Independent advocates predicted this.

The effort-preference part of the study does not merely constitute scientific malpractice; given its implications, it amounts to scientific battery. It has the potential of becoming the PACE trial* on steroids and should be retracted immediately, but until it is, it presents a clear and present danger to ME patients, and chances are that NIH will build on their false effort-preference claim for ME and other diseases it considers related.

We must continue to protest the effort-preference outcome of this study; it cannot stand. Moreover, NIH has left no doubt that intramural ME research is harmful to patients. We need NIH to commit to substantial funding of extramural research of the numerous areas of biological abnormalities in ME by experts in the field who are committed to science. Other than funding, however, NIH needs to stay away from ME as long as it is enabling researchers who harbor Walitt-type biases. We also must call for an immediate investigation of NIH’s institutional bias against diseases such as ME, Long COVID, Gulf War Illness, and Fibromyalgia.

___________________________________________________________

*PACE is the debunked (but not yet retracted) U.K. study recommending Graded Exercise Therapy and Cognitive Behavioral Therapy for ME, which has caused immeasurable harm to patients worldwide but especially in the U.K.

Posted in Uncategorized | 9 Comments

Keep an Eye on Your Walitt: NIH Study Poses Dramatic Risk to Long-Term Disability Benefits

Many ME/CFS* sufferers are covered by employer-sponsored long-term disability (“LTD”) policies. These policies almost universally limit LTD benefits to 24 months for disability caused—or even just contributed to—by a mental/nervous disorder. The following language is taken from a current policy issued by a major LTD insurer:

“Once 24 monthly disability benefits have been paid, no further benefits will be payable for any of the following conditions:

  • Anxiety disorders
  • Delusional (paranoid) disorders
  • Depressive disorders
  • Mental illness
  • Somatoform disorders (psychosomatic illness)” [emphasis added]

Another leading disability insurance company defines mental illness as:

“a mental, nervous or emotional disease or disorder of any type.” [emphasis added]

There are variations in the language, but the gist of the mental-health limitation in most LTD policies is the same: a termination of coverage for mental-health conditions after 24 months. Somatic Symptom Disorder as well as other somatoform disorders are listed in the DSM-5, and, regardless of whether they are expressly mentioned in a policy, any diagnosis of a somatoform disorder will, without a doubt, be classified as falling under the mental/nervous clause.

Disability insurance companies routinely claim that ME/CFS patients are suffering from a mental/nervous disorder despite the fact that the patient’s physician did not diagnose such disorder. Nevertheless, LTD insurers are often successful in their effort to terminate benefits at 24 months by requiring that claimants undergo an “independent” medical exam (“IME”) performed by doctors who are paid by the insurance companies and, nearly without fail—in the case of a CFS diagnosis—find a mental/nervous disorder as a primary cause or at least contributing factor for the disability.

Disabled ME/CFS patients typically suffer disability for their lifetimes, in many cases for decades. Any NIH study, finding or official reference that supports, in any way, the characterization of ME/CFS as a somatoform disorder would be a dramatic boon to disability insurance companies enabling them to limit their payments to disabled ME/CFS patients to 24 months as opposed to the age of 65 (which is the typical age at which LTD benefits terminate for disabilities not caused, or contributed to, by mental/nervous disorders).

The risks regarding disability coverage extend well beyond new claims; current recipients of LTD benefits would not be grandfathered in. Disability policies universally provide for ongoing reviews as to continued eligibility as well as the ability to require an IME or otherwise to review each ongoing claim on a regular basis. Therefore, every ME/CFS patient who has been receiving disability payments beyond 24 months should expect this type of review and likely termination of their benefits should the findings or positions of any HHS agency, such as NIH, suggest a classification of ME/CFS as a somatoform (or other mental/nervous) disorder.

Enter Dr. Brian Walitt, lead clinical investigator for NIH’s intramural study of post-infectious ME/CFS. Walitt is positioned to have a key role—probably the key role—in the study. According to the study’s principal investigator, Dr. Nath, Walitt has been instrumental in the study design. As a member of the small NIH team responsible for the “final assessment of diagnostic validity” (see screen shot below taken from this link to the NIH study website), Walitt will also be involved in the ultimate selection of the 40 ME/CFS patients, one of the most critical aspects of any study. Walitt is a member of that team because he is considered by NIH a “clinical expert” on ME/CFS. His influence will undoubtedly extend to the final conclusions of the study.

(Added 3/30/16: Please see my comment in the comment section below further clarifying Walitt’s central role in the study.)

I discussed at length, in my recent blog post (“Brian Walitt’s Radical Bias: Disorders of Subjective Perception, ME/CFS as Normal Life Experience?”), Walitt’s views (stated only a few months ago) of fibromyalgia not being a medical entity, but merely a normal life experience. Fibromyalgia is, of course, considered to have substantial overlap with ME/CFS, and clinicians and researchers who believe fibromyalgia is a somatoform disorder typically believe the same about ME/CFS. Indeed, should there be any doubt, Walitt has been unequivocal in his opinion that chronic fatigue syndrome is a somatoform illness. This is set forth expressly in the 2015 paper, “Chemobrain: A critical review and causal hypothesis of link between cytokines and epigenetic reprogramming associated with chemotherapy,” which he co-authored and which contains the following statement:

“The discordance between the severity of subjective experience and that of objective impairment is the hallmark of somatoform illnesses, such as fibromyalgia and chronic fatigue syndrome.” [emphasis added]

Many patients were incredulous when Walitt flippantly revealed his obvious disdain during NIH’s March 8, 2016 invite-only “ME Advocacy Call” about the study in response to concerns about his bias:

“If chronic fatigue syndrome/myalgic encephalomyelitis is all in your head, it’s only because your head is part of your body.”

Here is Walitt’s quote in full:

“First let me affirm by saying that chronic fatigue syndrome/myalgic encephalomyelitis are a biological disorder. Research has shown that in every system of the body that has been investigated that there have been abnormalities when compared to healthy volunteers. If chronic fatigue syndrome/myalgic encephalomyelitis is all in your head, it’s only because your head is part of your body.” [emphasis added]

As you can see, it is true that Walitt acknowledged that every bodily system of patients has shown abnormalities. But that is entirely consistent with his view that ME/CFS is somatoform, as patients with somatoform disorders are not required to lack physical abnormalities; instead, patients are said to generate thoughts, feelings or behaviors in response to their somatic symptoms that are disproportionate or excessive. Further, Walitt seems to believe that the abnormalities are being created by the patients’ own thoughts and emotions due to some kind of biochemical mechanism. The symptoms themselves do not need to be medically unexplained; they can, in fact, be associated with a biological condition. Saying that one doesn’t believe that symptoms are all in a patient’s head is not irreconcilable with believing that the patient suffers from a somatoform disorder. Therefore, Walitt’s acknowledgement of abnormalities in ME/CFS does not, in any way, negate his apparently strongly held belief that ME/CFS is a somatoform disorder.

Walitt had to know—in light of the overwhelming criticism directed at him—that it was crucial to pull off a flawless performance during the call and yet, he could not resist making that astonishing remark, which effectively betrayed his assurances of the absence of any bias. Walitt had the perfect opportunity to let us know if he had changed his mind by 180 degrees and no longer considers ME/CFS to be a somatoform disorder, unlikely as it would have been in only a few months. He did not do so.

Given Walitt’s well-documented opinion on CFS as a somatoform illness, there is a high likelihood that the study design, the patient cohort selected for the NIH study, the day-to-day decisions made by the lead clinical investigator and the ultimate conclusions of the study will be affected by Walitt’s clear bias. The very fact that NIH appointed Walitt in the first place, as well as the agency’s brazen failure—right in patients’ faces—to remove him from the study after an unprecedented and ongoing outcry from the patient community, is ever so revealing in terms of NIH’s objectives for the study and its recently oft-repeated assertion of a suddenly found desire to work with patients. It is hard to imagine that NIH could have managed a more perfect middle-finger salute to ME/CFS patients than appointing Walitt as the lead clinical investigator.

As I have said before, this study—in its impact—has the potential of becoming PACE on steroids. In addition to the other dramatic risks posed by the design of the study (which are beyond the scope of this post), if Walitt remains on the study, thousands of disabled ME/CFS patients could face the sudden loss of most, if not all, of their already modest lifetime income and, as a result, life-threatening poverty that would be impossible for many to navigate in the face of the debilitation caused by their disease.

*I am using the term “ME/CFS” because that is what NIH says it is studying. It is beyond the scope of this blog post to discuss the futility of combining the disease, ME, and the social construct and wastebasket diagnosis, CFS.

Standing Up to Coyne and Against Unfair Treatment of ME Advocates

[Update 5/5/16: For context and background on Ed’s post below, please read my blog post, “Has the “Coyne of the Realm” been devalued?” It describes in detail some of Coyne’s abuse of patients in the ME community, including myself. The mistreatment seems to be ongoing, as he continues to call patients “assholes” and “nut cases” and who knows what else.]

By Edward Burmeister

I am taking the liberty of posting this entry on Jeannette’s blog.

Many of you know that I seldom become involved in ME advocacy. My wife, Jeannette, is typically capable of holding her own. She has been, health permitting, a relentless advocate for ME for several years and has been effective in holding government agencies and officials accountable when their actions or inaction have damaged the ME patient community, and in particular when they have not lived up to their legal responsibilities. It is true that she has strong opinions on how to conduct effective advocacy and states her positions assertively, but I can assure you that she makes it a priority to focus on the issues and to stay away from ad hominem attacks on other individuals advocating for ME. On the rare occasion when she has made a mistake, she has been quick to apologize and set the record straight. Far from being self-promoting, Jeannette has gone out of her way to support and give credit to other patient advocates for their efforts. When her health allows, Jeannette collaborates with other advocates, typically behind the scenes. Among other things, Jeannette’s advocacy efforts seem to have derailed the massive price increase for Ampligen that was scheduled for next month, which, according to the Ampligen study coordinator, is now not going into effect for the time being.

Jeannette has made sacrifices for the community that go above and beyond. Aside from traveling to Washington, DC on numerous occasions on her own dime, she has paid the travel expenses of other advocates for DC trips. She anonymously paid the Incline Village rent for another patient whom she barely knew, so that that patient could get Ampligen. Not only did she risk having to cover the $139,000 in attorneys’ fees from her FOIA lawsuit herself, she ended up paying about $60,000 in taxes on those fees. In addition, as the plaintiff, she was not able to bill for the time she herself worked on the lawsuit, which would have totaled additional tens of thousands of dollars. There are many other ways Jeannette has helped patients and the community that few know about. She gives back as much as she can without bragging about her generosity. And while we are financially comfortable, for which we are very grateful (and because of which we would never have thought of asking the community to chip in, for the attorneys’ fees, for example), her financial investment in her own and others’ advocacy has been significant and has affected us financially quite dramatically. (After all, Jeannette is disabled like so many ME patients.) Yet, she has not once talked about that.

The recent incident, starting with the Facebook posts of Dr. James Coyne on February 27, 2016, is so far outside the realm of reasonable and civil behavior, and has affected Jeannette’s health so directly and adversely that she is physically unable to defend herself at the moment. I simply cannot stand by and witness this without comment.

Her blog entry, “Has the “Coyne of the Realm” been devalued?” sets forth the facts. I will summarize these briefly, however. It is important to know that Jeannette had been a supporter of Coyne’s efforts on behalf of the ME community and has never stated, publicly or privately, anything that could possibly be viewed as attacking him in any way. She did thoroughly analyze the public interview of Brian Walitt of September 2015 concerning fibromyalgia when she found out that he was to be the lead clinical investigator of the proposed NIH intramural study. She posted a blog entry, “Brian Walitt’s Radical Bias: Disorders of Subjective Perception, ME/CFS as Normal Life Experience?” based on the Walitt interview, expressing her concerns regarding his participation in the NIH study given his stated views concerning fibromyalgia.

Then she tweeted the following to the head of NIH, Francis Collins, with a link to her blog post (22 retweets and 30 likes as of now):

“Not one bit embarrassed that Walitt works for you, hurts very sick patients? What an agency you run!”

Notice that neither Jeannette’s tweet nor her blog post contains any direct or indirect comments on, or criticisms of, Coyne; nor would anyone reading them perceive any conceivable attack on Coyne. Coyne would not enter your mind at all. Her tweet was assertive, but completely within the acceptable and reasonable range for advocates who take issue with government actions. One might even consider the tweet reserved given the history of the neglect of ME by NIH. Also bear in mind that Jeannette had filed a FOIA request with NIH to obtain documents concerning the genesis of the IOM study on ME/CFS. She was met with unconscionable obstacles and obfuscation by NIH, including misrepresentations under penalty of perjury by NIH officials, which required her to file a lawsuit. Only after an arduous legal battle did she obtain the documents to which she and the public were legally entitled. NIH’s behavior was so inexcusable that the Judge in the case awarded all her attorneys’ fees (highly unprecedented), totaling $139,000, and specifically called out NIH for its unreasonable conduct in the case.

It is simply not tenable to maintain that her tweet was “abusive.” You may not agree with it or with her tactic of criticizing Walitt or the handling of the proposed study by NIH, but the tweet was absolutely fair game and within acceptable standards of reasonable advocacy. With this background, it was shocking to her, to me and to most of the community that Coyne stepped in and demanded, in effect, that the patient community condemn Jeannette’s tweet, apologize for it on her behalf and ostracize advocates like Jeannette, calling her a “sick crazy lawyer” with a “history of being abusive towards reasonable informed Americans” who posted a “nasty and abusive tweet.” He demanded that the patient community step up and “stop the abusive crazies” or he would stop helping with attempting to obtain documents relating to the PACE trial. When many questioned him on this, he called them names (e.g., “delusional”) and told them to “fuck off.” All of this because Coyne disapproved of Jeannette’s reasonable approach to advocacy.

Please keep in mind that this is coming from a renowned Ph.D. in psychology with a large public following and a recent position of prominence in support of patients with ME.

Coyne is a newcomer to this community, yet he has apparently developed a pattern of labeling long-term advocates with whom he disagrees as divisive and destructive, and of asking others to ostracize them simply for expressing views that are inconsistent with his own views or approach to advocacy.

I sincerely appreciate, as does Jeannette, the large outpouring of supportive comments from patients in the ME community, public and private. Unfortunately, she has been too sick to thank everybody individually, as she wanted to. As ugly as this has been, Jeannette tells me that a number of new alliances have been formed over this and for that, she’s grateful. We are both especially heartened by the fact that the majority in the community has a functioning moral compass.

Unfortunately, there are a few who have apparently harbored resentment against Jeannette and have taken this opportunity to add to the abuse from Coyne, calling her names such as “textbook narcissist” with her following of “flying monkeys.” To those few people who saw fit to pile on in the aftermath of Coyne’s revolting mistreatment of Jeannette, egregiously defaming her, spreading vile lies about her (some of which are urban legends revived from years ago) and engaging in other character assassination: maybe you want to check in with your conscience, because your behavior shines a bright light on your value system and, quite frankly, it’s not flattering. The same goes for those hosting or “liking” such comments.

Jeannette is currently undergoing an intense five-day-per-week treatment regimen for a medical issue secondary to ME, on top of her infusions. Continuity is crucial for the efficacy of that treatment. That continuity is now in jeopardy due to her decline as a result of Coyne’s verbal and emotional assault. No sick person should be forced to choose between protecting her health and defending her reputation.

It is alarming that some have supported, or at least condoned or downplayed, Coyne’s behavior. Some have suggested that Jeannette somehow brought this on herself, which is not only untrue, but also constitutes shameless and cruel victim blaming. Some have justified it under the greater-good theory: in other words, we need Coyne’s advocacy for obtaining the PACE data, so we have to sacrifice a sick patient advocate who has labored for years at great personal cost on behalf of this community and tolerate Coyne’s abuse of her. Others are saying that they disagree with his language, but basically agree with his sentiment about quieting advocates who don’t toe the party line.

I want to be very clear that the foul language, as unacceptable and revealing as it is, is not the main issue here. Had Coyne done what he did—calling sick patients “crazies,” resorting to defamation, trying to coerce a vulnerable patient population into an apology (when he is the one who should apologize) and into shunning advocates for no reason whatsoever—but done it in a more polite manner, it would still have been reprehensible. It’s the substance of what he did, much more so than the style in which he did it, that is so objectionable. The style does reflect, however, a disturbing aspect of his approach to advocacy. And let’s not forget that this was not just one comment. It was a sustained attack that encompassed many comments, and no apology has been made.

To those who have suggested that Jeannette back off and, in effect, suffer in silence so that the ME community can keep its focus, i.e., taking down the PACE trial, I respond with two points. First, it was Coyne who launched the unjustified attacks on the community and key advocates. It is up to him to deescalate the situation by apologizing and toning down his rhetoric.
Second, silence has facilitated Coyne’s attempt to shut down and ostracize other advocates—Suzy Chapman and Angela Kennedy and presumably others—without any repercussions for Coyne. Someone has to stand up to his bullying tactics. Otherwise, who will be next on the chopping block? Further, is it not possible that his extreme actions in this arena could backfire and give those who oppose his position on the PACE data ammunition to reject his demands?

We are assessing what steps we should take to bring accountability for the described actions against Jeannette, but mainly I wanted to let the community know how strongly I feel about this and how directly it has affected our family. I should add that I have served as the Managing Partner of the Bay Area offices of Baker & McKenzie LLP, the world’s largest law firm, and have never witnessed this type of behavior towards a professional in comparable circumstances, particularly by an individual who, by education and profession, should be compassionate and caring.

By now, Coyne has had plenty of time to issue an apology to the community and the advocates he specifically targeted, including Jeannette. He has not done so. Instead, he stood idly by as further attacks by others, clearly incited by his initial smear campaign against Jeannette, took place. It is clear that he is unrepentant.


Has the “Coyne of the Realm” been devalued?

[Please also see the follow-up post by my husband, Ed Burmeister, “Standing Up to Coyne and Against Unfair Treatment of ME Advocates.”]

A recent addition to the ME advocacy community, Dr. James Coyne, has been celebrated as a savior of ME patients almost from the moment he joined the conversation just a few months ago by attacking the PACE trial. He has been a very welcome ally. So, it was rather bizarre when Coyne gave the ME community an aggressive ultimatum yesterday, in an apparent attempt to silence patients with whom he disagrees. In this particular case, he took exception to my opinion regarding Walitt’s appointment as lead clinical investigator for the NIH study. According to the ultimatum, patients can either “do something about [me]” or he “is out of here.” He posted the following about a tweet of mine addressed to NIH Director Dr. Francis Collins in two Facebook groups, “Invest in ME” and “The ME Alliance:”

He then proceeded to call me a “sick crazy lawyer,” tell another patient to “fuck off,” label another as “delusional,” tell the community that they can “fuck themselves” and throw around f-bombs like they are the new “lol.” He also painted himself as the victim of “abusive crazies,” i.e., dissenting ME patients.

Yet, according to Coyne, I am the trolling and harassing one!

I quote from the above:

fuck yourself

I will demonize a crazy sick lawyer

Apology, my ass

you are delusional

be my fucking guest

Coyne is threatening an “all out war” against the community:

Here is a person who, by his own account, is healthy and obviously extremely high-functioning, judging by his prolific writing, tweeting, posting, etc.

He, the healthy one, threatens a marginalized, abused, neglected and victimized patient population that has been desperate for his help. Yet, according to Coyne, dissenting patients are the crazies!

I have never seen a member of the ME community that enraged and out of control. Things got so surreal that a number of people wondered whether Coyne’s Facebook account had been hacked. But sadly, his Facebook behavior of yesterday is consistent with his recent Twitter conduct.

The irony of the following statement by Coyne is likely lost on nobody.

Illness certainly does not justify bad behavior in public. Nothing does, really. And yet, we witnessed Coyne’s behavior yesterday.

Once again, the community is facing somebody who is asking us to use our inside voices, never mind that that approach hasn’t worked in 30 years. All the while, Coyne is shouting profanities at the top of his lungs, not at the perpetrators, the US government, but at the victims, vulnerable patients who have every right to voice their opinions. Coyne is free to disagree with them, in a civil manner. It’s called discourse. What he exhibited yesterday, however, is called verbal abuse.

Let me assure you that there has not been any bad blood between Coyne and me in the past. Like many other patients, I have been immensely grateful for Coyne’s commitment to fighting PACE and the work he has done on that front. He and I have never had an interaction of any kind other than my liking and sharing a few of his blog and Facebook posts as well as some of his tweets, typically related to PACE. For all I know, Coyne does not know me, or of me or my past advocacy work, although, bizarrely, he Facebook-friended my husband (who is active in ME advocacy only very sporadically) a few months ago.

So, despite the lack of any history here, Coyne proceeded the way he did without the courtesy or the courage to first engage me privately or to at least tag me in his public attacks. This was clearly a calculated hit on a reputable advocate that he felt he could get away with.

According to Coyne, my tweets are not only “quite off the wall” (seriously?), but I am also “quite closed to being corrected.”

How could he possibly know that? As I said, we haven’t had a single exchange, no direct contact whatsoever.

Also note that he posted in two UK groups, possibly because the UK community is not as familiar with my work as US advocates are, making me likely an easier target with less support there. It hasn’t exactly panned out that way for Coyne, though. In fact, most patients who commented, regardless of where they are based, are bewildered by Coyne’s tantrum and don’t appreciate his harsh and irrational ultimatum. It seems Coyne miscalculated and the community called his bluff. But still, if one wanted to pit the UK community against the US community, this would be a pretty good way to go about it.

The more patients pushed back, the more Coyne kept repeating that he will just abandon us, unless an apology from the community is forthcoming:

As a matter of background, NIH recently announced an intramural study on post-infectious ME/CFS (their phrase, not mine) as well as the agency’s appointment of Dr. Brian Walitt as the lead clinical investigator. I wrote in detail about a disturbing interview that Walitt gave about fibromyalgia. If you haven’t seen the interview, it’s a must-watch. It will make the hair on the back of your neck stand up. According to Walitt, fibromyalgia is not a disease or illness; instead, it is merely a normal way that some people are meant to experience life. Walitt considers a lack of treatment not harmful to patients. Furthermore, in his prior work, Walitt labeled CFS, together with fibromyalgia, as somatoform and both as disorders of subjective perception, much like the Wessely School. One thing that I fear might get lost in all this Coyne profanity is that he seems to deem Walitt an unproblematic choice for the NIH study. It really does make one wonder where his allegiance truly lies.

Coyne seems to have a huge blind spot when it comes to US ME politics. UK bad, US good:

It’s outside the scope of this blog post to discuss why Coyne’s view of the situations in the UK vs. the US is less than fully informed and much too simplistic. But one only has to look at the CDC website and its continuing medical education material to see that the agency is recommending graded-exercise therapy and cognitive-behavioral therapy, based at least in part on the PACE trial.

I stand unequivocally behind my tweet to Collins. Claptrap like Walitt’s has no place at NIH. Taxpayer money is wasted on Walitt’s salary while quality research goes unfunded. My opinion, and that of the vast majority of US patients (and those UK and other patients) who have been closely following the details around the NIH study, happens to be that Walitt is an appalling choice for the NIH study. It is an entirely reasonable position—and I’d argue the only reasonable position—that Walitt should not be anywhere near the study. Coyne feels differently. In fact, he supports Cort Johnson, who supports Walitt.

Do I think Cort’s pro-Walitt piece is indefensible because it downplays the dangers to the community of Walitt’s involvement? Sure. Have I called Cort names because of it? Of course not, because of that whole civilized grown-up thing adults typically do. My tweet was more than appropriate in light of the history of the US health agencies with this disease and in light of NIH’s choosing of Walitt. How can Coyne possibly be the arbiter of what is acceptable in Twitterland given his generous use of profanities?

It’s almost like Coyne is being territorial, the territory being forcefully-stated opinions. They are ok for him—definitely for him—but most certainly not for others, unless they support him; then they are fine, of course.

Can we take a step back and have a little reality check here? Twitter is a place for conversation, a venue to express opinions. Coyne, more than most people you will find on Twitter, expresses strong opinions on a regular basis, which makes this all even more surreal. There is hardly an attack he doesn’t like to unleash. He is not one to pull many punches. A few months ago, he called a female journalist’s tweet a “bitch comment,” something I naively made excuses for at the time. Let’s face it, we all placed a lot of hope in him and were willing to give him the benefit of the doubt. We were eager to overlook this lapse in judgment that offered a first glimpse of what was to come.

Some of you know that I grew up in East Germany. I am all too familiar with intimidation tactics designed to silence people. Once you’ve experienced the Stasi, you don’t scare easily and you also recognize bullying the moment you see it. Coyne is asking the community to apologize for a tweet—a reasonable one at that—by one person under threat of withdrawing his support for the entire community.

That is outlandish. It is aimed at stifling discourse about issues that are crucial to patients just because certain opinions don’t fit his narrative. Obviously, the entire community is not responsible for what a single advocate says. So, what Coyne is really trying to do is not to get an apology—from me or the community—but to get the community to rein me in, just like Dr. Nancy Lee, the Designated Federal Officer (DFO) of CFSAC, has unsuccessfully tried before.

Aside from David Tuller, we haven’t had anybody from outside the community stand up for us as forcefully as Coyne. We desperately need the help of outsiders. Many believed he would be the one freeing us from our shackles. Coyne, of course, knows that and is playing on the fears and hopes of patients, many of whom have been distraught about his behavior and are begging him to stay. He is not budging. Extreme events like this are physically harmful to ME patients. They can cause major crashes and long-term decline. Either Coyne has not learned even that much about our disease in the last few months or patients are just collateral damage in his quest for complete control.

Coyne seems to think that Collins’ position somehow warrants that he be treated with kid gloves. The opposite is true. Collins is a public persona, a top US health official who needs to be held to the highest standard for the sake of taxpayers and gravely ill patients. His agency together with CDC and other HHS component agencies has been responsible for the abuse and neglect of ME patients. NIH has recently designed a study on ME that reveals the intent to rebrand our disease as a psychosomatic one. And yet, Coyne demands that patients censor and shun other patients for expressing their objection to the study in ways that are not Coyne-approved.

Collins is the head of the world’s largest and most powerful government agency sponsoring biomedical research, but Coyne is acting as though Collins is a delicate flower.

Really? One tweet defines the community’s relationship with NIH? Preposterous. But let’s just say, for argument’s sake, Coyne is right. Why is that a problem? Almost all patients and advocates agree on Walitt. Does Coyne want to gag all of them? Maybe he is hoping that making an example of me will teach others not to have an opinion that differs from his. After all, his attacking the disabled has a devastating effect on the targeted patients’ health and who would want to risk being next? Where will he stop? Or will he? When is the earth scorched enough for him?

But seriously, if agency heads cannot take a bit of heat—especially when they and their predecessors are responsible for a tremendous amount of abuse and neglect of the vulnerable—maybe their appointment was not the best choice. I wonder how Collins feels about being painted by Coyne as somebody who cannot stomach a justified tweet.

I see no need to defend my record. It’s a credibility problem for Coyne, not for me, that he didn’t do his homework. But let me say just this. Sure, I am not a feel-good, kumbaya advocate. My philosophy as an advocate is to hold the Feds’ feet to the fire and exert pressure because playing nice hasn’t worked and people are running out of time. That is how I have successfully called out numerous HHS & Co. legal violations. And that is how I won a federal FOIA lawsuit against NIH and HHS, recovering all my attorneys’ fees, something practically unheard of in a FOIA case, especially given the amount of legal fees—over $139,000—that NIH and HHS had caused me to incur with their recalcitrant and obfuscating behavior. The fact that the Court awarded attorneys’ fees in full is a clear indication of just how unreasonably NIH and HHS acted in that litigation. Their conduct included misrepresenting the facts under penalty of perjury, misstating the law, filing a frivolous summary-judgment motion, disobeying the Court’s order and wrongfully accusing me of lying: everything to avoid complying with the law, at the expense of taxpayer money. HHS and NIH directly caused my health to deteriorate dramatically as a result of their deplorable conduct. They acted like bullies towards a disabled ME patient. I have not yet talked very much about the details of the lawsuit, but I am working on it and I can guarantee that patients will be appalled. The Feds used every dirty trick in the book and, because of their arrogance, they didn’t in their wildest dreams imagine I would win. Yet, win I did. But hey, according to Coyne, I am just a crazy lawyer.

Quoting from the Court’s attorneys’ fees order:

… the government’s conduct throughout its dispute with Ms. Burmeister was unreasonable. [emphasis added]

Why is Coyne not outraged by how HHS & Co. have violated the law and treated me in that lawsuit? Why does he not take issue with NIH’s decades-long neglect of the community or its upcoming study, whose design is beyond redemption? And please spare me the it’s-in-the-past speech. Not only is it unrealistic to think that an agency changes overnight, but my lawsuit came to a close just a few months before Collins’ promise of a new era. But then again, Coyne doesn’t “give a fuck” and has his own grievances with NIH:

Is the following possibly why Coyne blew a gasket yesterday?

I have no doubt that he spent many hours on the PACE project, with the help of many patients. And yet, there is just no way he is as invested as any of us patients are. For Coyne, this is intellectual stimulation that provides adoration from many as well as, possibly, academic glory. For us, this is nothing less than our lives on the line: vastly different stakes.

I am by no means the first member of the community to be on the receiving end of the rage that seems to have overtaken Coyne recently:

Earlier this week, I noticed a Twitter exchange between Coyne and Rosie Cox, whose intelligent and spot-on commentary is a valuable asset to the community:

Patronize and micromanage much?

Angela Kennedy has been another Coyne victim. She was entirely civil in her Twitter exchange with Coyne a few days ago, asking him about his tweets linking to Cort’s blog post about the NIH study (reproduced above):

Instead of answering her question, he kept asking her if she had read Cort’s blog post. And then he sent her the following direct messages:

Wow! The Twitter police in action. In the world of ME advocacy where social media is almost everything, that kind of abusive overreach—seeking to interfere with the online presence of an advocate—can completely sideline somebody. That kind of disproportionate reaction is just vicious. It’s revealing and it’s inexcusable.

I seem to have missed the incident that led to Suzy Chapman—an impeccably accurate and prolific advocate—being blocked by Coyne on Twitter, but I am sensing a trend. I keep thinking how bizarre all of this is. I do feel a bit like I am back in high school and one of the cool (translation: mean) kids is telling others they can’t sit at the cool-kids’ table.

I wonder what Rosie, Angela, Suzy and I—all us *women*—have in common.

Coyne claims that Angela and I have a history of being abusive towards more reasonable people like Julie Rehmeyer.

Angela can speak, and has spoken, for herself and Julie has backed her up:

Coyne’s assertion is, plain and simple, an untruth as to me as well. I recently pleaded with Julie to reconsider her plan of interviewing Walitt because I believe that it would give Walitt an opportunity to spin the absurd statements he made in the interview and that it’s better to let the interview speak for itself. Julie and I disagreed on strategy. It’s advocacy, for Pete’s sake; there is going to be disagreement. We had a polite Twitter exchange about it. Julie tweeted to me twice that she understood where I was coming from. I am unable to find her second tweet, in which she simply said, “I hear you,” but here is the other one:

As you can see, Coyne’s description of that exchange as my abusing or attacking Julie is a blatant mischaracterization of what happened.

It has been reported to me that Coyne had a private-message exchange on Facebook on the evening of February 27th with a patient who had been providing him, for months, with a substantial amount of quite rare documentation about UK ME-related matters, including the Lightning Process and the SMILE Trial. The patient relayed to me that she had, on a number of occasions, urgently messaged Coyne to correct his factually incorrect online statements.

The patient notes that all the exchanges with Coyne up until that time had been perfectly pleasant and straightforward, concentrating on the documents. But on that evening, the messages from Coyne became suddenly abusive.

The patient has graciously given me permission to reproduce Coyne’s verbally abusive private Facebook messages. The first Facebook PM from Coyne to the patient in that thread appeared to be some kind of ultimatum relating to my tweet to Francis Collins. The patient feels that Coyne took his anger out on her because he didn’t have the courage to confront me directly, as both my husband and I, as well as many of our friends, are attorneys. Here are some of the Facebook PM exchanges:

Coyne: “This is absolutely unacceptable trolling and harassing of the head of NIH. If something is not done about it, I am withdrawing form the struggle.” [Reproduced Jeanette’s Tweet to Francis Collins]

Patient: “Err? I don’t control what Jeanette Burmeister Tweets, or who to. As far as I can see there was one tweet. She is a lawyer who won an FOI case (below)”

[The patient gave links to my blog posts on the NIH and HHS FOIA case.]

Coyne: “I do, when it is to the Head of NIH and I am prepared to tell the patient community collectively to fuck off.”

Patient: “Charming. I have always been civil to you James. As you can see there is definite history between the NIH and Jeanette.”

Coyne: “I don’t give a fuck. If the community cannot do better, they can fuck themselves.”

Patient: “I want you to stop sending me such messages, and to stop swearing in private messages to me. Go take your anger out elsewhere. I don’t tweet. I am too sick to take on another online system. Too many of us are very lucky to be still alive. If my doctors and fellow citizens had had their way in the years of very severe ME, I would be six feet under twice over.”

Patient: “Err – have you sent such messages with swearing to Jeanette Burmeister? Or just to me?

After all – its not as though Jeanette has told Francis Collins what you have just told me – ie if he can’t do better he can fuck himself.

She has been more civil to Francis Collins than you are being to me.”

Coyne: “fuck off, you are wasting my time.”

Patient: “Apologise James. Your communications to me tonight are are appalling and frankly abusive. Have you messaged anyone else with such abusive statements? Or is it just me, who has never been rude or offensive to you.”

Coyne: “I really don’t care what the community thinks, they have totally undermine all my hard work.”

Patient: “Don’t message me again. You have been rude and abusive in these messages beyond the call of anything.

I don’t care how angry you are. You don’t speak to me that way. You are out of order taking out your anger on me about a tweet by Jeanette Burmeister.

I ask you again, have you sent similar messages to Jeanette as well, or anyone else in the ME community tonight – or is it just me you feel free to be verbally abusive to tonight ?”

Coyne: “Let’s not talk to each other no, I didn’t write to her. But you don’t get what I’m saying”

Patient: “So, you decided to take your anger out on me. But you didn’t have the guts to write the same messages to Jeanette, who has a high profile blog and is a lawyer.

I got exactly what you were saying James. you were repeating fuck off in private messages to me.”

I wonder if other patients have received similar messages from him.

It is clear from this exchange that Coyne is seeking to completely control the community around the NIH study.

The problem with relatively new, self-appointed prophets is they don’t have a track record yet, haven’t revealed their agenda yet, have not proven themselves yet. Those things take a while, happen only over time. When people place all their hope in them, and I will admit I did it myself in this case, they risk getting annihilated on a whim, just like Coyne is threatening to do to our community now.

As an outsider who is new to ME advocacy, Coyne can’t be expected to know much about the history of the disease or its politics other than that of PACE. As an academic, however, he ought to know that it is crucial to be aware of one’s limited knowledge. His distressing hissy fit was gratuitous and really quite unfathomable.

Is Coyne’s outrage even real? I mean, is it really possible that somebody would get that angry about one low-to-medium-heat tweet? Or is he looking for an excuse to bail out on us? In other words, does he need a scapegoat to save face because he feels defeated in his fight against PACE? Advocacy gets frustrating. It is not for the faint of heart. I don’t know of many advocates who haven’t considered bowing out. I must say though that this kind of exit has never entered my mind. Or is this a case of uncontrollable rage against an easy target, sick patients who fear his threatened abandonment? It’s clear that we are witnessing control issues here. But does misogyny play a role? These are all questions the community has been asking since yesterday. Only Coyne knows the answers and I don’t want to speculate about the source of the ugliness of it all because I believe it will all become clear in short order.

Notice how Coyne set up the “discussion” in a way that if he does drop us as a community, it will look like it’s not due to his frustration over his lack of success with PACE. Rather, he framed it to be my fault or the community’s failure to teach me and advocates like me a lesson. I certainly don’t want to discourage Coyne in his fight against PACE. After all, many sick people have put a lot of their precious health into helping him help us and we urgently do need help. But should he choose to throw in the towel, that is on him entirely and nobody else. I will not be bullied into taking responsibility for his irrational actions—holding the community hostage over a tweet he dislikes—especially not after the indefensible abuse I have already had to take from him.

I really didn’t want to have to write this. But I take my reputation and my health very seriously. Needless to say, this incident has had a major impact on my health and is likely going to interrupt my desperately-needed treatments. I have been quite sick in the last six months and am just now coming out of a severe crash during which, at some point, I had to make an emergency will. Coyne’s behavior is making the community look bad. He’s holding everybody hostage demanding that the community condone his repeated abuse of, and aggression toward, patients in exchange for his staying engaged with PACE. I did not set it up that way. That was all Coyne. If he is willing to deescalate, I will be more than happy to listen.

I consider his vilifying attacks on, and lies about, me personal harassment and worse and plan on taking appropriate steps should they continue. His ultimatum to the community shows a degree of aggression that is unprecedented and highly alarming. Coyne has called disabled people “crazies,” is attempting to bully patients into silence, and is throwing his weight around with complete and vicious disregard for the wellbeing of seriously ill patients and the fact that his preposterous outburst is making them sicker. There is no room for this behavior in our community. How is somebody like that going to represent the ME community in the outside world? Is he going to walk around telling people who disagree with him to bleep off? Will that be held against the community?

So, yes, an apology is in order indeed—from Coyne to the community and me and everybody else he has abused and attacked.


Brian Walitt’s Radical Bias: Disorders of Subjective Perception, ME/CFS as Normal Life Experience?

NIH has tapped Dr. Brian Walitt as the lead clinical investigator for its intramural study “Post-Infectious Myalgic Encephalomyopathy/Chronic Fatigue Syndrome.” (For terminology, please see the end of the post.)

Investigators

Walitt_Picture

Only a few months ago—in September 2015—Dr. Walitt gave an interview at a rheumatology conference about his conference presentation on fibromyalgia: “Tilting at Windmills (a rational approach to fibromyalgia).” Rarely have I seen anything less rational than Dr. Walitt’s interview. You simply must watch it because those nine minutes will make you wonder if you mixed up your thyroid meds with some left-over Quaaludes. There is not even a pretense of scientific thinking. The interview illustrates what NIH has in store for us. Watch it, really let it sink in and then tell me that it doesn’t scare the pants off of you. Share it!

Walitt’s claims lack even an inkling of science. In the demeanor of a hokey cult leader, he lays out his horrifying *beliefs* about fibromyalgia. Walitt *believes* that fibromyalgia is not a medical entity (because, you know, he can’t find anything wrong with fibromyalgia patients), that patients’ symptoms are merely part of a normal human existence (because, you know, mind-body connection), that patients are meant to have these symptoms, so that they can learn a lesson (because, you know, hardcore science needs more mindfulness), that it’s perfectly acceptable not to treat patients (because, you know, that will do them no harm and whining to your doctor, or “complaining” as he calls it, is just not very considerate) and that it’s just better for *everybody* that way (because, you know, doctors only have very limited time for each patient and medications don’t work well in fibromyalgia). As a result, Walitt really doesn’t seem to see a point in researching a biological basis for fibromyalgia. If you are wondering why he is doing research at NIH in the first place, don’t! None of this is apparently required to make any sense, except if you take into account that he’s quoting a psychiatrist. According to Walitt, “traditional medical diseases will eventually show themselves objectively.” BUT NOT IF YOU DON’T CONDUCT ANY UNBIASED RESEARCH ON THEM!

In short, according to Walitt, fibromyalgia is not a disease, or an illness; it’s just a story patients tell themselves and others—an incorrect interpretation of their perceptions—that Walitt is here to help interpret. One of his slides claims: “Fibromyalgia is our modern narrative for a range of persistent, distressing, and stereotypical sensory experiences.”

Walitt_What_is_Fibromyalgia

Walitt’s calm, collected and almost pleasant demeanor while presenting the most horrendous theories is chilling. He appears caring and compassionate while seemingly devoted to disappearing fibromyalgia as a medical entity. I say “almost pleasant” because there is something extremely disturbing about Walitt’s delivery in this interview.

Let’s look at what he said exactly:

The most important thing to do is to listen, right? … We can’t restore them to what they think they should be. [not audible] We should bear witness to their difficulties, which is the oldest of the jobs that physicians do. [emphasis added]

Right off the bat, Walitt shows a judgment about the patients’ expectations, disguised as the pretend compassion he expresses when stressing the importance of listening. Patients want scientific progress—not witnessing of our suffering—thank you very much.

What Walitt is really saying is, “How dare patients wish not to be sick?!” Never mind that they might be so debilitated that they can hardly do anything and have no quality of life, because they just think they shouldn’t be suffering.

In response to the question of what fibromyalgia is, Walitt said the following:

… fibromyalgia appears to be a way that people experience suffering in their body. Um, both from the way that the bodies are interpreted and the problems of the body, as well as the problems in their lives, as well as how societies tell us how to experience things. All those come together to create a unique experience in different points in time, and right now, that experience, um, is a, one of those experiences is fibromyalgia. Ah. Is it a disease? Or is it a, uh, a normal way that we handle and are supposed to work is still to be determined. But it’s quite possible that the (frowns) tricky way that the brain works, is that we may create symptoms as part of how we’re supposed to operate, as opposed to this representing the system breaking down. [emphasis added]

Instantly, despite the vague delivery, Walitt demarcates his path of normalizing pervasive, extreme, life-altering pain and other symptoms by way of a description that sounds like some religious teaching, definitely not science. According to him, fibromyalgia, while atypical, is—wait for it—normal.

Why conduct any bio-medical research if suffering is just meant to be?

Here is more:

The idea that mind itself is able to create these things and that all experiences are psychosomatic experience. Nothing exists without your brain creating those sensations for you. And the idea that, uh, that process of creation can create these things and is supposed to create things like this, to inform us and to teach us and to guide our behavior, ahem, pushes against the idea that we have free will and that we can do whatever we want and that we should be able to lead the lives that we have always thought we should leave [sic], not the ones that our bodies are restricting us to. Uh, and so accepting those kinds of ideas, ahem, is not so easy, but that might make it a little bit easier on everybody. It might be a more palatable narrative, uh: understanding that people can feel bad for no real fault of their own, uh, because of their circumstances of lives and how brains just work—the way it’s supposed to be, as opposed to being sick. There is a wonderful line from this gentleman, Joel Higgs [sic]: “When people are atypical, society do one of three things: They either medicalize, criminalize or moralize [Smiles]. And so, when we find people with things like fibromyalgia, they are either gonna be sick, bad or weak. And the idea is really to find a fourth way to realize that these atypical things are just a range of normal, that you are not sick, bad or weak, that you are just dealing with the difficulties of just being a human. [emphasis added]

Walitt declares *all* experience psychosomatic, riffs on the educational and spiritual value of symptoms, and applies the work of psychiatrist Joel Nigg to the puzzle of fibromyalgia (though he gets the name wrong).

He suggests that fibromyalgia patients should accept his narrative that their symptoms are part of a normal life, triggered by life’s “difficulties.” Ironically though, instead of medicalizing (definitely not that), criminalizing or moralizing, Walitt psychologizes like it’s nobody’s business. It’s really just his way of saying, “There is nothing wrong with you. So, now pull yourself together and get on with life without continuing to burden the health-care and disability systems.”

Imagine he had said this about cancer patients, that cancer is meant to inform, teach and guide them. Petrifying, right? And yet, he said it—about fibromyalgia—unapologetically and entirely comfortably, almost proudly. How can somebody like that possibly be expected to be looking—in an unbiased way—at a disease that is considered closely related to, and overlapping with, fibromyalgia?

Also note that Walitt attaches a thinly-disguised judgment to fibromyalgia patients’ discontent with their “normal” experience of being sick, and thus, contrary to what he says, moralizes plenty. After all, it would be so much easier for everybody—translation: for him and doctors of the same ilk—if patients stopped complaining.

Walitt_Patient_FM

Those complaining middle-aged women really are pesky, aren’t they?

For Walitt, no further research is necessary, because he knows what he knows or so he says. Rather than rising to the challenge of finding answers, his main goal appears to be helping other doctors sink to his level of disbelief, as he explains here:

One of the interesting things about this talk is that people come in with a set of beliefs and a lot those come from what they’ve been taught and what they see on television and what their patients come to their offices with. Sort of these ideas of, you know, fibromyalgia is a disorder of sensitive nerves.

But it’s a narrative that doesn’t seem to be valid and the hope is, and physicians, as they see me talk, often start off using that narrative and believing in the narrative and answering the questions in terms of the narrative, but deep down they know that it’s not true. And by the end, after I say my message, um, they are relieved to hear that deep down their believes are not wrong and that really what may be required is not, uh, saying that fibromyalgia is not real, but finding a new narrative in which to discuss it, one that makes much more sense, uh, to everybody. [emphasis added]

Walitt flat out asserts that fibromyalgia is not a “disorder of sensitive nerves,” presumably based on the fact that his research has failed to bear out such a disorder. No big surprise there because he clearly has his mind made up and isn’t really looking for any biological cause or marker for fibromyalgia.

Basically, Walitt claims that physicians who “believe” in fibromyalgia as a dysfunction of the nervous system have been duped into doing so by their patients and by watching television. (Clearly, he can’t be talking about medical school here since one is unlikely to find any meaningful education about fibromyalgia in med-school curriculums.) Those tortured doctors, he says, feel a huge weight off their shoulders once Walitt enlightens them with his superior knowledge. His “logic” seems to imply that physicians fall victim to political-correctness pressure that prevents them from saying or even thinking what they really believe, which is, of course, congruent with Walitt’s beliefs that fibromyalgia is—while real—not a disease, just a normal life experience.

Rather than pushing for better research to understand, and develop treatment for, the biomedical causes of fibromyalgia, and certainly preferable to feelings of failure that might be uncomfortable to doctors (never mind the patients’ agony), Walitt suggests the following:

Walitt_A_rational_approach_to_tx

Since Walitt believes that not treating fibromyalgia doesn’t harm patients, it only follows that he views biomedical research as unnecessary.

But just how did “traditional” diseases become traditional, i.e., understood? By conducting biomedical research. That’s how! And yet, for Walitt, no further research into fibromyalgia is necessary, because his radical theory allows him to sidestep science.

Having now declared that fibromyalgia is normal, though atypical, and having decided not to pursue the development of curative treatment, Walitt suggests patients be told to fend for themselves, as they’ve been doing in the case of CFS for decades. Very tidy:

When you talk to patients with fibromyalgia, uh, and you ask them what they think about it, they can often provide you the answers, about where they should go. Uh. People with highly spiritual feelings and belief that, um, in spiritual forces as, uh, potential ways to heal, uh, should be referred that way. Uh, people who believe in exercise should go that way; people who believe in Eastern philosophies should be referred that way.

Walitt’s round-about way of stressing that fibromyalgia is real betrays what he really thinks:

The experience of fibromyalgia is very much real to the people who have it. The way that we think and feel is based in electricity and biochemistry of our brains. And we don’t really understand how the physicality of that chemistry becomes our thoughts and feelings. And in people with fibromyalgia they clearly feel these ways, and there’s probably a [sic] underlying biology to it, but the idea that that’s an abnormal biology, um, is less clear. The idea that the way that we think and feel should be affected by the goings and comings of our lives and the difficulties we have, um, is something that seems, uh, self-evident, but also’s something that we, uh, like to pretend isn’t true. We’d love it if we could reduce all of these things to a simple pathway. You know science, um, has had all its greatest successes in reducing problems to a single pathway, a single place. And all the, you know, if you take diabetes, understanding the key role of insulin in diabetes. Once that was understood, it transformed the whole illness, and allowed for people to become better.

The problem with things like fibromyalgia and other, uh, disorders that are of the the neurologic systems of the brain, is that the brain seems to have a dual existence. It exists both as a biological construct but it also exists as sort of a psychological construct. And we don’t really understand how the two go together yet. (Smiles) How they play together, how they sing together, how they work together. And so our attempts to alter the biology without understanding the emotional overlay, um, probably leads to a lot of failure. Alright, there are, it it speaks to our lack of understanding how it really works. [emphasis added]

So in two paragraphs, Walitt has gone from being convinced that he knows that fibromyalgia is somatoform, to admitting that he doesn’t really know how it works because, you know, lack of understanding. But rather than being committed to finding answers, creating understanding, expanding the parameters of medicine to address the very real and urgent medical needs of millions of people, he settles—at the expense of patients—for providing only short-term palliative care to his complaining middle-aged female patients.

When he says that fibromyalgia is real to patients, he, of course, means it’s not real to him.

Instead of saying, “we haven’t found the ‘insulin’ for fibromyalgia and need to keep looking,” he basically says, “It’s really just one of the body’s ways of experiencing suffering.” Bummer, I know.

And the good old standby mind-body connection is always helpful when trying to rationalize the psychologizing of a physiological disease:

… people are not willing to accept the idea that our emotions affect our sensation, right?

And again:

The problem with things like fibromyalgia and other, uh, disorders that are of the the neurologic systems of the brain, is that the brain seems to have a dual existence. It exists both as a biological construct but it also exists as sort of a psychological construct. And we don’t really understand how the two go together yet. (Smiles) How they play together, how they sing together, how they work together. And so our attempts to alter the biology without understanding the emotional overlay, um, probably leads to a lot of failure. [emphasis added]

The dangers of the mind-body dualism are, of course, always particularly worrisome to Walitt & Co. when it comes to what the likes of him enjoy calling controversial conditions. Actually, according to Walitt, it’s not even a condition; it’s just normal. If only we would finally accept that our mind, and nothing else, is making us sick—scratch that: making us think we are sick—it would be better for *everybody.*

Fibromyalgia might be giving Walitt’s ego trouble because he, as a physician, is unable to help patients due to the limited amount of time a doctor has with his patients and the fact that medications do not work well in fibromyalgia. Hm, what other disease faces these obstacles? I can’t put my finger on it. But luckily, Walitt himself is helping me out. Since at least 2009, Walitt has been on record conflating fibromyalgia and Chronic Fatigue Syndrome as “disorders of subjective perception.” Sure, Walitt is likely to say that our experience is atypical, but what good is that going to do us? ME/CFS patients should be very alarmed by NIH’s choice of Walitt.

By selecting a lead clinical investigator who has already declared CFS a somatoform disorder, NIH has tipped its hand in a major way, as if the stakes weren’t high enough for the community yet. Given the reputation and reach of the NIH, the weight this study is likely to carry will make PACE appear like a fifth grader’s (attempt at a) science project. In terms of its impact, this is PACE on steroids. Think about that: despite PACE being heavily attacked by journalists and scientists of impeccable reputation in the last few months, it has so far been impossible to get it retracted. Consider the enormous damage PACE has done to the health of so many patients and to the perception of what this disease is. And PACE is a study that, on its face, is devoid of science. How much harder do you think it will be to debunk an NIH study that appears to be looking for biomarkers, but finds that our debilitating symptoms are merely a normal reaction to life? It will be impossible. The stakes are enormous. This study has the potential to sink us for good.

How much more will NIH try to sneak by us? How many times are we supposed to give the agency the benefit of the doubt with this study? They attempted to force the Reeves criteria on us and when patients were petitioning against that, they dropped Reeves for the CDC Grand Rounds presentation without so much as the hint of an explanation as to why Reeves would have even been in the ballpark. They acted like Reeves was never in the picture and yet, I am not at all convinced his criteria are out of the picture. They are forcing the functional-movement-disorder and post-Lyme comparison groups down our throats despite a tremendous outcry about the use of a condition labeled psychogenic (What if FMD patients’ symptoms are caused by an infectious agent or have some other biological basis similar to what they might find in CFS?) as well as a stigmatized illness (Why use Lyme when we really don’t know much about it and the little we do know is disputed?). One of the other study investigators, Dr. Fred Gill, has the most repulsive track record when it comes to CFS. Charlotte von Salis’ piece about him is a must-read. Gill adores the late Straus of NIH, the Wessely School’s PACE and the CDC’s Reeves criteria. NIH might claim that Walitt was an accident, an inadvertent oversight. But two (and who knows how many more) investigators with a glaring bias against our community? That is clearly no coincidence.

And these are just issues patients have been able to glean, partly from the outlandish roll-out of the study (which NIH never apologized for) and NIH’s entirely inappropriate back-channel feeding of information to chosen advocates. There is, without a doubt, a lot going on behind the scenes that we won’t find out about until it’s too late. Replacing the Reeves criteria clearly wouldn’t rectify all that’s wrong with the study. Turns out Reeves may be the least of our worries. Let that sink in.

People are dying—either prematurely because of ME or at their own hand when the suffering becomes unbearable. If they are not dying, they live in agony. There is no room for even the slightest remnants of feel-good drivel or this-is-normal claptrap, let alone for putting somebody with that belief system in the role of the lead clinical investigator. With the reveal of Walitt’s role in the study, we’ve been treated to some inadvertently-shared insights into the structure and aims of the NIH study. Maybe we’ll be able to make enough noise to get Walitt kicked off the study. But don’t hold your breath and, in any event, it doesn’t change the fact that NIH has made its intentions very clear: the rebranding of ME/CFS as a normal life experience.

I’ve included the full transcript below.

Thanks to Ella for assistance in finding a way to translate this mess.

***

(This is not the topic of this post, but note that based on the first slide of Dr. Nath at the CDC Grand Rounds a few days ago, the original post-infectious *CFS* study was turned into a post-infectious *ME/CFS* study. In this post, I am using the terms “ME/CFS” because NIH is using it and “CFS” because Walitt is using it. Unfortunately, it’s beyond the scope of this post to address the issue of conflating ME with CFS, to which I strongly object.)

NIH_Study_Title_Original

NIH_Study_Title

***

Transcript of interview of Brian Walitt

during the conference “Perspectives in Rheumatic Diseases 2015”

held on September 18-19, 2015 in Las Vegas

(The video of the interview was posted on September 30, 2015 at http://www.familypracticenews.com/specialty-focus/rheumatology/single-article-page/video-fibromyalgia-doesnt-fit-the-disease-model/e913134880916685f3005dac5459ab88.html)

Interviewer: What should we do to try to help patients with fibromyalgia?

Walitt: The most important thing to do is to listen, right? To understand that the experience is valid and not to belittle them, right? It’s also important to be honest with them and explain that the medical system can’t provide the answers that they want. That at best, we can try to help them. We can give them some tools to help deal with the day-to-day struggles of having fibromyalgia. But we can’t just make it go away. We can’t restore them to what they think they should be. [not audible] We should bear witness to their difficulties, which is the oldest of the jobs that physicians do.

(Smiles) One of the interesting things about this talk is that people come in with a set of beliefs and a lot those come from what they’ve been taught and what they see on television and what their patients come to their offices with. Sort of these ideas of, you know, fibromyalgia is a disorder of sensitive nerves.

Slide 1: Patient FM

But it’s a narrative that doesn’t seem to be valid and the hope is, and physicians, as they see me talk, often start off using that narrative and believing in the narrative and answering the questions in terms of the narrative, but deep down they know that it’s not true. And by the end, after I say my message, um, they are relieved to hear that deep down their believes are not wrong and that really what may be required is not, uh, saying that fibromyalgia is not real, but finding a new narrative in which to discuss it, one that makes much more sense, uh, to everybody.

Interviewer: Do we have any idea yet what narrative might be more useful?

Walitt: (Smiles, sighs) Ahhhh, that’s a tough question. Um, the problem is that language is so heavily charged. Uh, people are not willing to accept the idea that our emotions affect our sensation, right?

Slide 2: What is Fibromyalgia?

The idea that mind itself is able to create these things and that all experiences are psychosomatic experience. Nothing exists without your brain creating those sensations for you. And the idea that, uh, that process of creation can create these things and is supposed to create things like this, to inform us and to teach us and to guide our behavior, ahem, pushes against the idea that we have free will and that we can do whatever we want and that we should be able to lead the lives that we have always thought we should leave [sic], not the ones that our bodies are restricting us to. Uh, and so accepting those kinds of ideas, ahem, is not so easy, but that might make it a little bit easier on everybody. It might be a more palatable narrative, uh: understanding that people can feel bad for no real fault of their own, uh, because of their circumstances of lives and how brains just work—the way it’s supposed to be, as opposed to being sick. There is a wonderful line from this gentleman, Joel Higgs [sic]: “When people are atypical, society do one of three things: They either medicalize, criminalize or moralize [Smiles]. And so, when we find people with things like fibromyalgia, they are either gonna be sick, bad or weak. And the idea is really to find a fourth way to realize that these atypical things are just a range of normal, that you are not sick, bad or weak, that you are just dealing with the difficulties of just being a human.

Interviewer: Brian, why did you title your talk on fibromyalgia as “Tilting at Windmills?”

Slide 3: Tilting at Windmills (a rational approach to fibromyalgia)

Walitt: Oh, I wanted to invoke Don Quixote’s quest to slay a dragon. Um, fibromyalgia is a very challenging thing for physicians to deal with and the idea that there are easy answers that can be prescribed to one’s patients, um, is kind of a fallacy. And I thought that title would uh bring that out. (Smiles)

Interviewer: What are the difficulties in dealing with fibromyalgia?

Walitt: Well, as physicians, we have a limited amount of time in the office and our training is to use medications, um, to deal with the problems that we see in front of us. And fibromyalgia as a disorder defies all of that. It requires a lot more time and medications do not work very well. And if you try to adhere to how we’ve been trained to treat people, uh, you’ll inevitably fail.

The experience of fibromyalgia is very much real to the people who have it. The way that we think and feel is based in electricity and biochemistry of our brains. And we don’t really understand how the physicality of that chemistry becomes our thoughts and feelings. And in people with fibromyalgia they clearly feel these ways, and there’s probably a [sic] underlying biology to it, but the idea that that’s an abnormal biology, um, is less clear. The idea that the way that we think and feel should be affected by the goings and comings of our lives and the difficulties we have, um, is something that seems, uh, self-evident, but also’s something that we, uh, like to pretend isn’t true. We’d love it if we could reduce all of these things to a simple pathway. You know science, um, has had all its greatest successes in reducing problems to a single pathway, a single place. And all the, you know, if you take diabetes, understanding the key role of insulin in diabetes. Once that was understood, it transformed the whole illness, and allowed for people to become better.

Slide 4: “Fibromyalgia Controversy”

The problem with things like fibromyalgia and other, uh, disorders that are of the the neurologic systems of the brain, is that the brain seems to have a dual existence. It exists both as a biological construct but it also exists as sort of a psychological construct. And we don’t really understand how the two go together yet. (Smiles) How they play together, how they sing together, how they work together. And so our attempts to alter the biology without understanding the emotional overlay, um, probably leads to a lot of failure. Alright, there are, it it speaks to our lack of understanding how it really works.

Interviewer: What is fibromyalgia?

Walitt: That’s a hard one. Ah, time will tell. Uh, fibromyalgia appears to be a way that people experience suffering in their body. Um, both from the way that the bodies are interpreted and the problems of the body, as well as the problems in their lives, as well as how societies tell us how to experience things. All those come together to create a unique experience in different points in time, and right now, that experience, um, is a, one of those experiences is fibromyalgia. Ah. Is it a disease? Or is it a, uh, a normal way that we handle and are supposed to work is still to be determined. But it’s quite possible that the (frowns) tricky way that the brain works, is that we may create symptoms as part of how we’re supposed to operate, as opposed to this representing the system breaking down.

(fade in and out)

When you talk to patients with fibromyalgia, uh, and you ask them what they think about it, they can often provide you the answers, about where they should go. Uh. People with highly spiritual feelings and belief that, um, in spiritual forces as, uh, potential ways to heal, uh, should be referred that way. Uh, people who believe in exercise should go that way; people who believe in Eastern philosophies should be referred that way. Uh, taking a one size fits all, or using your own judgment of what is legitimate, uh, is often not helpful in treating people with fibromyalgia, because it’s really about what they think is legitimate.

Slide 5: A rational approach to treatment


Deadline for Comments on Proposed ERISA Disability Regs Fast Approaching: Additional Guidance

We are down to the wire; the deadline for public comments on the proposed ERISA (“Employee Retirement Income Security Act”) regulations—January 19, 2016—is fast approaching. As of yesterday, the Department of Labor (“DOL”) has received 23 comments in response to its proposed new ERISA long-term disability (“LTD”) regulations. Just to be clear, these proposed regulations relate to disability determinations under employer-sponsored LTD plans, not Social Security disability benefits. I have written about the substance of the DOL proposal and posted instructions for commenting and a sample comment.

Let me supplement what I said in my last post about the substance of the comments. Generic supportive comments along the lines of “I wholeheartedly support the proposed changes” are not as powerful as comments that go into some detail. Don’t get me wrong: I believe that if the DOL were to receive a fair number of generally supportive comments, that would definitely help. But ideally, the comments would have some meat to them, addressing the specific changes that have been suggested by the DOL.

I realize that this is a potentially intimidating proposition for ME patients, given the cognitive limitations the disease imposes. And the last thing I want to do is discourage anybody from commenting. Therefore, I thought it might be useful to break down my analysis from my first ERISA blog post into a list. Try to think about which of the proposed changes would have been helpful to you when you went through the LTD approval process, or would be helpful if you ever had to go through it because you currently are, or might in the future be, covered by an LTD plan, and then address those points. Remember that the comments will be publicly available, so don’t disclose anything sensitive. Here is a summary list of the most important proposed changes:

1. Claims adjudicators and medical experts may not be hired, compensated, terminated or promoted based on the likelihood of their denying disability benefits or supporting the denial of such benefits.

[Note: This requirement is intended to ensure independence and impartiality of the persons involved in making the decision, which, in turn, is meant to guarantee a full and fair review.]

2. Medical experts may not be hired based on their reputation for outcomes in contested cases rather than based on their expertise.

[Note: This requirement is intended to ensure independence and impartiality of the persons involved in making the decision, which, in turn, is meant to guarantee a full and fair review.]

3. The notice of claim denial must include a discussion of the decision, including the basis of disagreement with a disability determination by the Social Security Administration or a treating doctor.

[Note: This requirement is intended to aid claimants in understanding why the claim was denied and why the decision is inconsistent with that of the Social Security Administration and/or the treating physician.]

4. The notice of claim denial must include internal rules, guidelines, protocols, standards or similar criteria of the plan that were used to deny the claim.

[Note: This requirement is intended to aid claimants in fully understanding the reason for the denial and in meaningfully assessing the likelihood of success of an appeal.]

5. The notice of claim denial must include a statement that the claimant is entitled to receive—at that stage and not only at the later stage of denial of the appeal—all relevant documentation supporting the denial of the claim.

[Note: This requirement is intended to aid claimants in fully understanding the reason for the denial and in meaningfully assessing the likelihood of success of an appeal.]

6. Claimants must be given the right to review (free of charge), and respond to, new or additional evidence or rationales for denial considered, relied upon or generated during the appeal process and not only after the claim has been denied on appeal. The information would have to be made available as soon as possible and sufficiently in advance of the deadline and the plan would be obligated to consider the claimant’s evidence and written testimony in response to the plan’s new or additional information before making a decision on appeal.

[Note: This requirement is intended to ensure a full and fair review by affording claimants the opportunity to respond to new evidence or rationales during the administrative stage, before going to court.]

7. If the LTD plan has not followed all procedural rules (except in cases of minor errors), a claimant may proceed straight to court without first exhausting all administrative remedies.

[Note: This requirement is intended to allow claimants to proceed to court without exhausting all administrative remedies if the plan’s process fails to satisfy the regulatory minimum standards.]

8. If the LTD plan has not followed all procedural rules, the reviewing court will consider the matter de novo, i.e., the court will give no deference to the plan’s determination and instead set it aside and use its own judgment based on the administrative record.

[Note: De novo is a much more favorable standard for claimants than the usual abuse-of-discretion standard under ERISA, which merely reviews whether the plan’s decision was arbitrary and capricious.]

9. The retroactive rescission (cancellation or discontinuance) of coverage would constitute a so-called adverse-benefits determination regardless of whether the beneficiary/participant is currently receiving benefits. Classification as an adverse-benefits determination is important because it permits the claimant to invoke the ERISA claims-procedure requirements.

[Note: This is a very technical point. It broadens the definition of “adverse-benefits determination.” If the proposed change is adopted, it would, e.g., allow a claimant to invoke the ERISA claims-procedure requirements in case of a claim denial based on the retroactive assertion by the plan that the claimant made a misrepresentation on their application form even if the error was made innocently.]

10. The notice of claim denial must include a prominent one-sentence statement in the relevant foreign language about the availability of language services if the claimant resides in a county where at least 10% of the population are literate only in the same non-English language.

[Note: This requirement only applies in counties that satisfy the 10% hurdle (currently 255 counties) and only with respect to the particular language that is the only one spoken by 10% of the population.]

The proposed change under point 2 would, in my opinion, be the most important one, as many LTD cases are lost by claimants because of the biased opinion of a physician who is in the pocket of the insurance companies. The third proposed change is probably equally crucial. Nevertheless, it is important that the other points be addressed as well, or they are more likely to fall victim to the insurance companies’ and employer organizations’ objections. That is not to say that everybody should address all points.

Some of the already submitted comments are likely from the insurance industry and employer organizations, trying to water down the proposed changes by whining about how cumbersome they would be for them. So, let’s rally and get some more comments submitted. I sincerely hope that our advocacy organizations recognize this important opportunity to get involved.

Contrary to what I said in my prior post, I believe now that the easiest and most reliable method of submitting your comments is online at http://www.regulations.gov/#!documentDetail;D=EBSA-2015-0017-0001. Make sure to request an emailed receipt. Please note that “Regulations.gov will undergo scheduled maintenance and as a result the site will be unavailable Monday, January 18, from 8:00 am through 4:00 pm (ET).” If you want to submit your comment during that time, you can email it to e-ORI@dol.gov.

All submissions must include the agency name, Department of Labor, and Regulatory Identifier Number, RIN-1210-AB39.

 


Proposed ERISA Disability Regs: Instructions and Sample for Public Comments

[Update 1/15/16: additional guidance for your comments here]

Below are instructions on how to submit your comments on the new regulations proposed by the Department of Labor (“DOL”) for long-term disability (“LTD”) benefits under the Employee Retirement Income Security Act (“ERISA”). The DOL asked for comments from the public on these proposed regulations. For details on the proposal, please see my prior blog post.

I am also posting my own comments, as people have asked for a sample. A few words about that. First of all, everybody should feel free to borrow from my submission. Please do not feel like your comments have to sound legalistic; they don’t. They also don’t have to be long or “perfect.” My own comments are far from perfect nor are they exhaustive. I merely picked some of the issues that jumped out at me. It would be helpful if you could include in your comments a few points (or even just one) about how proposed-to-be-changed provisions have negatively affected you personally in your dealings with your LTD carrier and in obtaining LTD benefits, but remember not to include any personally identifiable or confidential business information (see below). Basically, limit your remarks to things you would be comfortable with the whole world knowing about you. It might be useful to take a look at my prior post to decide what points you feel you can meaningfully address.

As a matter of background, the proposed regulations would provide strengthened procedures and safeguards for employees claiming LTD benefits under ERISA. If the proposal is adopted, it would be a game changer for disabled employees covered by ERISA and a big step towards putting a stop to the egregious commonplace wrongful denial of LTD benefits by LTD plans. Many, if not most, ME patients covered by an ERISA plan encounter unconscionable tactics by the powerful disability insurance companies designed to deny employees the benefits they are entitled to, precisely because ERISA has created a framework that overwhelmingly and devastatingly favors said insurance companies. Of course, the ERISA rules affect every LTD claimant regardless of the disabling disease. But ME patients are one of the patient groups that are particularly vulnerable, in part due to the pervasive ignorance, in the medical profession, of the grave disability it can cause. With its proposal, the DOL has put forward a new set of rules that is designed to counteract the unfair advantages insurance companies have had for decades and to provide a better chance for claimants to receive the benefits that are rightfully theirs.

I urge everybody who is able to do so to provide strongly supportive public comments to the DOL. As I have said previously, the DOL is not the big black hole that is HHS, where public comments are ignored or disappeared. The DOL wants to enact this proposal; it merely needs enough public support to justify doing so in light of the fact that they will receive strong pushback from the insurance and employer lobby. I feel quite optimistic that the new regulations will be put in place if we do our part to offset those lobbying efforts.

Instructions for Comment Submissions:

Comments must be in writing and received on or before January 19, 2016.

There are three ways to submit your comments: email, online or regular mail. I personally prefer email because that creates a record of the submission, although, unlike with HHS, the DOL does not have any incentive to make public comments disappear—quite the contrary. And in this case, it looks like the website will provide a receipt after submission. So, if it’s easier for you to submit your comments online or to send a letter, that should work just fine. Just make sure they are received by the deadline of January 19, 2016.

Email: e-ORI@dol.gov (Specify RIN 1210-AB39 in the subject line of the email.)

Online: http://www.regulations.gov/#!documentDetail;D=EBSA-2015-0017-0001 (Click on “Comment Now!”)

Mail: Office of Regulations and Interpretations, Employee Benefits Security Administration, Room N-5655, U.S. Department of Labor, 200 Constitution Avenue NW, Washington, DC 20210, Attention: Claims Procedure Regulation Amendment for Plans Providing Disability Benefits.

All submissions must include the agency name, Department of Labor, and Regulatory Identifier Number, RIN-1210-AB39.

Do not include any personally identifiable or confidential business information that you do not want publicly disclosed because all comments will become part of the public record without any redactions or changes and will be available to the public, without charge, online at http://www.regulations.gov and http://www.dol.gov/ebsa, via search-engine searches and at the Public Disclosure Room, Employee Benefits Security Administration, Suite N-1513, 200 Constitution Avenue NW, Washington, DC 20210.

Important edit just to clarify (thanks go to Mary Ann Kindel for pointing this out): Submissions may be withheld by some agencies if they contain “duplicate or near duplicate examples of a mass-mail campaign.” I do not know whether the DOL is one of those agencies, but it is definitely important that your comments be “customized,” as suggested above, even if you borrow some ideas or language.

My comments:

Re: RIN 1210-AB39

I am writing to comment on the Proposed Regulations issued by the Department of Labor, Employee Benefits Security Administration on November 18, 2015 (“Proposed Regulations”).

First of all, I want to commend the Department of Labor (“Department”) for this very constructive proposal. I strongly approve of the comment made by the Department in the preamble that “disability claimants deserve protections equally as stringent as those that Congress and the President have put into place for health care claimants under the Affordable Care Act.”

I am presently a disability recipient under an employer-sponsored disability plan governed by the Employee Retirement Income Security Act of 1974 (“ERISA”) and its requirements regarding claims procedures. I can speak firsthand to the potential abuses occurring under the current claims-procedure regulations and the urgent need to address these in the Proposed Regulations.

The proposed tightening of the conflict-of-interest rules is particularly welcome. The prohibition against a claims fiduciary (typically the insurance carrier insuring the disability claim under the employer plan) making any decisions regarding hiring, compensation, termination, promotion or similar matters with respect to any individual (such as a claims adjustor or medical expert) based on the likelihood that the individual will support the limitation or denial of disability benefits should—going forward—help eliminate, or substantially reduce, the documented cases of such behavior by disability insurance carriers, most notably Unum/Provident (see John H. Langbein, Susan J. Stabile, Bruce A. Wolk, Pension and Employee Benefit Law at pp. 669-74). The insurance carrier would not be permitted to contract with a medical expert based on the expert’s pattern of denying claims, as is clearly the typical situation today, which I know from my own experience. This will, I hope, add a measure of integrity to independent medical exams (IMEs), which are used so frequently to contest, and ultimately deny, a disability claim notwithstanding the opinion of the claimant’s doctor.

The proposed amendments to the disclosure requirements should also prove helpful to disability claimants faced with a claim denial based on ill-defined reasons. The requirement to produce a detailed description of the denial decision, including the basis for the plan’s disagreement with the claimant’s treating physician or the Social Security Administration, as well as the internal rules, guidelines, protocols, standards or other criteria applied to deny the claim, should prove valuable in appealing denied claims in court.

The other proposed changes are meritorious as well and should be adopted as part of the final regulations. For example, the “de novo” standard of review in cases where the plan has not followed the correct procedures should provide an effective incentive for disability carriers to comply with the relevant rules—an incentive that is unfortunately so desperately needed.

The Proposed Regulations give disability claimants more procedural rights and safeguards to partially offset what is an unacceptably and unjustifiably uneven playing field at present. I can speak from personal experience that disabled claimants are faced with substantial procedural obstacles put in their way by disability carriers. This is particularly disturbing in light of the diminished capacity of most claimants—due to the limitations imposed by their disability—to get through all the gratuitously cumbersome procedural hurdles and grueling, harassing and irrelevant requirements placed on them by the disability carriers. Given the lack of a jury trial, the prohibition against punitive damages and the potential deferential standard of review of denied claims, these proposed changes are critical to provide at least some fairness to disabled claimants in a process that is heavily structured against them.

For the above reasons, I strongly support adoption of the Proposed Regulations as soon as possible.


Department of Labor Proposes Lowering Bar for ERISA Disability Claims, Requests Public Comments

[Update 1/15/16: instructions for submitting comments here and additional guidance for your comments here]

I am happy to report a rare positive development for disability claimants, one that is important to get behind. As most of you know, the rules under the Employee Retirement Income Security Act (“ERISA”) regarding employees’ long-term disability (“LTD”) claims are abysmal; the deck is clearly stacked against claimants. It seems that the U.S. Department of Labor (“DOL”) has taken notice and is attempting to level the playing field somewhat. On Wednesday, November 18, 2015, the DOL published proposed regulations, which, if and when adopted, would provide employees who claim disability benefits under their LTD plan with additional procedural protections and safeguards that would afford some claimants benefits that would otherwise have been improperly denied, as happens all too often. I will discuss the proposal in more detail below, but here is the bottom line: While there still won’t be punitive damages—in my opinion, the most needed change under ERISA—or the right to a jury trial (neither is within the purview of the DOL; both would require legislative changes to the statute), the proposed changes would impose significant additional restrictions on LTD plans that would make it more difficult for them to improperly deny LTD benefits, which they are so highly motivated to do for obvious financial reasons.

In order for the DOL to move forward with enacting the proposal, it is crucial that it receive comments from the public in support of the proposed changes. Even just a few dozen public comments could tip the scale. If there is no expression of support from the public, that will substantially decrease the likelihood of the proposal being put into place because it is a near certainty that disability carriers and representatives of employer organizations will provide comments opposing these regulations, lobbying to retain the status quo that favors them so heavily. The easiest way to provide comments is by email to e-ORI@dol.gov. Comments have to include “RIN-1210-AB39” (ideally also in the subject line) and the agency name, “Department of Labor.” Comments need to be submitted within 60 days. Please note that all comments will be published online without redactions; therefore, do not include any sensitive information.

Unlike HHS and its component agencies, such as CDC and NIH, the DOL seems genuinely interested in effecting desperately-needed change. Citing the “aggressive posture” of LTD insurers and plans, the agency took the initiative to attempt to strengthen the current procedural requirements imposed on LTD plans “[b]ecause of the volume and constancy of litigation in this area….” In fact, the department realized that “disability cases dominate the ERISA litigation landscape today.” Therefore, the DOL “recognized a need to revisit, reexamine, and revise the current regulations in order to ensure that disability benefit claimants receive a fair review of denied claims….,” as “insurers and plans looking to contain disability benefit costs are often motivated to aggressively dispute disability claims.” As opposed to comments sent to HHS or its component agencies, which are completely ignored as a matter of course, comments to the DOL on this matter have a real chance of making a meaningful difference for future claimants and those currently in the claims process. Therefore, in addition to input from individuals, this strikes me as an excellent and unprecedented opportunity for our advocacy organizations to potentially effect some meaningful change. It’s low-hanging fruit.

As a matter of background, employer-sponsored LTD plans are required, under ERISA, to have in place so-called claims procedures that set forth the process for disabled employees to make claims and appeal the denial of claims under an LTD plan. These requirements have been in place since ERISA was implemented in the mid-1970s. Recently, comparable rules for health plans were strengthened as a result of provisions in the Affordable Care Act (“ACA” or Obamacare, as it has come to be known). What the DOL is proposing with these new disability plan claims procedure rules is to apply many of the stricter ACA health-plan rules to LTD claims.

Note: These proposed regulations do not apply to Social Security disability claims.

Here is a summary of the key aspects of the proposed regulations:

  1. Independence and impartiality—avoiding conflicts of interest.

The proposal explicitly requires that plans ensure—in the interest of a “full and fair review”—that all disability benefit claims are adjudicated in a manner designed to ensure independence and impartiality of the persons involved in making the decision. More specifically, the proposal requires that claims adjudicators and so-called “medical experts” utilized by the plan not be hired, compensated, terminated or promoted based on the likelihood of their denying disability benefits or supporting the denial of such benefits. Tying bonuses for claims adjudicators to the number of denials would not be permissible anymore. Furthermore, the hiring of a medical expert based on his or her reputation for outcomes in contested cases rather than based on his or her expertise would no longer be allowed. I predict that this provision would knock out pretty much every “medical expert” currently engaged regularly by LTD insurance companies because most of them are squarely in the insurance industry’s pocket. This new rule would be much more than an inconvenience for the insurance industry; it could change the game and is, thus, a crucial potential improvement.

  2. Improved disclosure to claimants

Adverse claim determinations would be required to contain a discussion of the decision, including the basis of disagreement with a disability determination by the Social Security Administration or a treating physician. This would constitute a big shift, as LTD benefits are often denied despite the fact that Social Security benefits have been approved and/or in disregard of the opinion of the treating physician, with little or no explanation of the disagreement. Adverse determination notices would also have to contain the internal rules, guidelines, protocols, standards or similar criteria of the plan that were used to deny the claim. Further, a notice of claim denial would have to contain a statement that the claimant is entitled to receive, at that stage, all relevant documentation supporting denial of the claim. Currently, this is required only at a later stage, upon the denial of benefits on appeal. These new provisions would aid claimants in fully understanding the reason for a denial and in meaningfully assessing the likelihood of success of an appeal.

  3. Right to review and respond to new information before final decision is made

Claimants must be given the right to review, free of charge, and respond to new evidence or rationales developed during the appeal process and not only after the claim has been denied on appeal. The evidence would have to be made available as soon as possible and sufficiently in advance of the deadline and the plan would be obligated to consider the claimant’s evidence and written testimony in response to the plan’s new information.

  4. Changes to technical rules regarding the requirement that claimants go through all the plan’s procedural requirements (in legalese, “exhaust administrative remedies”) before taking their claim to court

These changes generally allow a claimant to proceed straight to court without first jumping through more hoops on the administrative level when the plan has not followed all the procedural requirements of the regulations and also provide that the reviewing court consider the matter “de novo” in those cases where the plan has not followed the correct procedures. “De novo” means that the court gives no deference to the plan’s determination denying the claim; instead, it sets aside the plan’s decision and uses its own judgment based on its own review of the evidence. It is a much more favorable standard for claimants than the usual abuse-of-discretion standard under ERISA, which merely reviews whether the plan’s decision was arbitrary and capricious.

  5. Culturally and linguistically appropriate notices

The added language safeguards would require that adverse-benefit determinations include a prominent one-sentence statement in the relevant language about the availability of language services if the claimant resides in a county where at least 10% of the population are literate only in the same non-English language.

 

There are other aspects of the proposed regulations, but those described above are the most significant. Taken together, they should provide ammunition to those whose disability claims have been denied by the insurance carrier administering the applicable LTD plan.

This quote from the preamble of the proposal sets out an overview of all the proposed changes:

The major provisions in the proposal largely adopt … provisions that seek to ensure that (1) claims and appeals are adjudicated in a manner designed to ensure independence and impartiality of the persons involved in making the decisions; (2) benefit denial notices contain a full discussion of why the plan denied the claim and the standards behind the decision; (3) claimants have access to their entire claim file and are allowed to present evidence and testimony during the review process; (4) claimants are notified of and have an opportunity to respond to any new evidence reasonably in advance of an appeal decision; (5) final denials at the appeals stage are not based on new or additional rationales unless claimants first are given notice and a fair opportunity to respond; (6) if plans do not adhere to all claims processing rules, the claimant is deemed to have exhausted the administrative remedies available under the plan, unless the violation was the result of a minor error and other specified conditions are met; (7) rescissions of coverage are treated as adverse benefit determinations, thereby triggering the plan’s appeals procedures; and (8) notices are written in a culturally and linguistically appropriate manner.


The Scientifically Challenged UK Media Strikes Back

Reblogging this must-read post by Utting-Wolff Spouts exposing today’s “ethically indefensible” piece by Sarah Knapton, science editor at The Telegraph, about the follow-up to the scientifically disturbing PACE trial.


Holding HHS Accountable for Unrelenting and Unrepentant Legal Violations

Many members of the community have called out HHS for legal violations over the years, such as Dr. Mary Ann Fletcher and Ms. Eileen Holderman confronting Dr. Nancy Lee, the Designated Federal Officer (“DFO”) of CFSAC, over her attempted intimidation of CFSAC members by threatening to evict them from the committee for voicing their opinions. This was well documented by Jennie Spotila on her blog. Ms. Spotila also uncovered other FACA violations. I successfully sued HHS and NIH in federal court for violating FOIA, and the Judge found the agencies’ conduct to be unreasonable to a degree that led him to order both agencies to pay all of my attorneys’ fees, more than $139,000. The award of attorneys’ fees is by no means a given in FOIA cases; it requires a high level of unreasonableness on the government’s part. I explained why HHS again violated FACA regarding CFSAC’s January 2015 comments to the P2P here, here and here. Many advocates have protested these and other legal violations by HHS in formal complaints and public testimony over the decades.

It’s almost too obvious to make the point, but the government, in our case, HHS, has a mandate to follow the law. The rules exist for important reasons, in the case of FACA, to protect the integrity of the process through transparency and accountability. Similarly, FOIA is meant to facilitate open government. Those are important constructs that, together with other aspects of our legal system, build the foundation of a principled society. They are not just technicalities that can be shoved aside or overlooked whenever it is convenient for the government. To the contrary, they represent rights of the people that are enforceable in court. It seems what we are seeing is a desensitization to legal violations due to the sheer number of times HHS has violated the law, all the while acting as if nothing were wrong. But it is the duty of a citizen, especially an advocate, not to let that cloud one’s judgment and not to let HHS get away with it. A violation is a violation regardless of how many times it has been committed.

However, a few members of our community prefer to turn a blind eye when it comes to HHS’s unlawful conduct. It is possibly understandable, though not excusable, that HHS would downplay the seriousness of its actions or even misstate the law to the public, as Dr. Lee, CFSAC’s DFO, did again just last month at the latest CFSAC meeting in describing HHS’s disclosure obligations under FACA. But why would patients do it?

For a few, the answer seems to be that they are taken aback when they realize that they participated in a process that was unlawful on the part of HHS, such as a FACA violation. Instead of directing their dismay over these violations at HHS, they turn it against those of us who are holding the agency accountable. When somebody has been passionately invested in a project by volunteering a lot of time and effort, it may be natural for the initial knee-jerk reaction to be to push back upon hearing of HHS’s misconduct. Cognitive dissonance can be quite compelling. And, of course, HHS is relentless in its denial of its violations, never mind that they are obvious. After decades of neglect and abuse by HHS, wanting to believe that things are finally different—that HHS has turned over a new leaf and now has the best interest of ME patients in mind—can become a desperate need reinforcing the narrative that nothing is wrong with HHS’s actions. It’s tough to admit to oneself and others that things were not above board when one was led to believe by HHS that they were on sound legal footing and one relied on that. I get that. However, it is asking a bit much of the community to overlook these serious transgressions by HHS just to allow those who were part of the tainted process to retain their comfort level and alleviate any potential guilt. Once a well-reasoned and well-supported analysis of the law has been presented outlining the legal violations by HHS, there is no longer any plausible deniability.

Nevertheless, shooting the messenger, which does occur at times when accountability is demanded of HHS for the agency’s illegal actions, crosses a line. Not only do a few patients and/or advocates praise HHS despite all its egregious violations, make excuses for the agency, and presumptuously and patronizingly apologize to the agency or its component agencies on behalf of other patients; they also misstate the law publicly to the community, thereby enabling HHS to continue its unlawful pattern. They even go so far as to accuse, often publicly, those who try to hold HHS accountable of being conspiracy theorists, making unsupported assumptions, creating unnecessary drama, reporting recklessly and manipulating the community. They question the value of insisting on HHS’s adherence to the law and instead stress the amount of work that went into an HHS project, as if that somehow offset the violations. They also deny established facts.

This hurts all patients. It is also a double whammy for the many in the community for whom compliance with the law is not negotiable; they witness HHS break the law time and time again and, when they confront the agency, they face unsupportable accusations by others in the community who enabled, condoned, or acquiesced in the HHS violations and/or are either not familiar with the law or choose to overlook legal violations, seemingly in the interest of a purported greater good.

The greater-good argument is, of course, a slippery slope. To what degree are we supposed to tolerate legal violations? When do they cross over into becoming inexcusable? Who gets to decide? The law exists to remove those grey zones. In our society, the duty to follow the law is not optional, nor is it permissible to follow it selectively.

One person has even publicly suggested that it is improper for an advocate who chooses not to participate in a particular process to later criticize such a process. This is absurd. Usually, the reason the advocate chose not to participate in the process (assuming he or she was given an opportunity to do so) is that the process itself was flawed or tainted. Participating would be tantamount to endorsing the flawed process, such as the farcical jury model of the P2P. Only through the looking glass would this lack of participation force silent acceptance on what ultimately turns out to be not only a tainted, but an unlawful process.

It is important in this context that, once a legal violation has been explained in painstaking detail and the facts are not at all in question (indeed, they are admitted or otherwise proven, as is the case with the FACA violations that occurred with respect to the January 2015 CFSAC recommendation), the accusation that whoever uncovers HHS’s unlawful conduct misrepresented the facts or the law can no longer be claimed to be a merely negligent attack on that person’s reputation; it is quite intentional.

This is one of those moments when the M.E. community defines itself. Does it want to insist on HHS’s adherence to the law or condone the agency’s manifest legal violations? Some advocates have been fighting, often at great personal cost, to compel legal compliance by HHS. Others have enabled HHS, actively or indirectly, to disregard the law. Asserting that it is important to “work with” HHS or that legal infringements should be overlooked so as to achieve a purported beneficial end result, they downplay the seriousness of, or even defend, activities or processes that are tainted by unlawful HHS conduct. They also, instead of taking issue with HHS’s unlawful pattern, fault those who seek to hold HHS accountable for its legal violations.

It is crucial that those who stand for accountability of HHS under the law and integrity of the governmental process continue to insist on HHS’s compliance with the law. Pursuing legal violations by HHS gives the community unparalleled leverage in its fight against the agency’s recalcitrance, abuse, contempt, neglect, obstruction, distortions, misinformation and failure to fund. Let’s remain firm in our conviction that going along with HHS’s unlawful methods in order to get along is out of the question for our community.


Yes, CFSAC, there is a FACA violation

“Yes, Virginia, there is a Santa Claus.”–From an 1897 editorial, “Is there a Santa Claus?” of The New York Sun

I was contacted by a member of the Chronic Fatigue Syndrome Advisory Committee (CFSAC) regarding my blog post, “Oops, they did it again! CFSAC violates FACA.” Below is the part of my answer that I think will be of particular interest to the public, as it spells out one of HHS’s recent FACA violations in more detail.

“I want to thank you for your message in response to my blog post, “Oops, they did it again! CFSAC violates FACA”, as it gives me an opportunity to spell out the FACA violation discussed in the second half of my post in more detail in hopes that it will facilitate a deeper understanding of the seriousness of the events surrounding the January 2015 CFSAC P2P comments.

Pursuant to Sections 5(b)(3) and (c) of the Federal Advisory Committee Act (“FACA”), the appointing authority—here, the Department of Health and Human Services (“HHS”)—shall not inappropriately influence the advice and recommendations of the advisory committee—here, the Chronic Fatigue Syndrome Advisory Committee (“CFSAC”). Instead, according to the law, that advice and those recommendations are supposed to be “the result of the advisory committee’s independent judgment.”

Accordingly, the role of a federal advisory committee’s Designated Federal Officer (“DFO”) is to ensure compliance with FACA (ironic, I know), and any other applicable laws and regulations; call, attend, and adjourn committee meetings; approve agendas; maintain required records on costs and membership; ensure efficient operations; maintain records for availability to the public; provide copies of committee reports to the Committee Management Officer for forwarding to the Library of Congress and to provide other support services for the committee. (U.S. General Services Administration, Office of Governmentwide Policy, Committee Management Secretariat; see also 41 C.F.R. §102-3.120 as well as CFSAC’s charter.) Please note that all these functions are purely administrative in nature, such as handling expense reimbursements for committee meetings. The DFO is not supposed to get substantively involved in the advice and recommendations by the advisory committee.

Those federal rules were clearly violated here. I realize that CFSAC’s DFOs break this particular FACA rule (and others) on a regular basis, but that doesn’t lessen the gravity of each violation. For the DFO, Barbara James at the time, to be involved in any substantive way in the comments constituted a violation. For her to be involved in the disturbingly invasive way that she was just makes the violation all the more egregious. I would hope that all CFSAC members find this obvious violation of the law appalling.

There is no doubt that HHS’s behavior was against the law. Beyond that, I did not suggest that the working group did anything unethical or immoral, but the working group certainly caved to HHS pressure and that was inappropriate. The fact that the group did not acquiesce in one instance—the length of the comments—does not excuse giving in on the more substantive issues. Nobody can, in all seriousness, argue that HHS did not exert any pressure on the working group to make changes to the document or did not affect any such changes, regardless of whether all changes were adopted for the final document. HHS’s making, or lobbying for, changes behind the scenes and hiding that very fact from the public is directly contrary to FACA’s purposes of independence and transparency.

Aside from the glaring legal issue, if CFSAC is going to have any credibility, it has to operate independently of the DFO. The integrity of the process is compromised entirely and the committee’s role is usurped if the agency whose contracted work is to be reviewed by the committee is allowed a veto right and, even more so, when that veto right is afforded up front. The working group draft should have gone to the full committee without any edits made by, attempted to be made by, or caused by the involvement of, HHS. The intact document is what represented the “consensus of the Working Group.” Once any kind of pressure is applied by the committee’s authorizing agency, the independence of the committee is not only undermined, it’s obliterated. The entire committee should have had a chance to review the unaltered working group document, discuss it and vote on it at the public meeting. If the DFO, Ms. James at the time, had inappropriately raised any objections to the draft comments at the meeting, the full committee and the public would have witnessed HHS’s improper attempt to influence the draft recommendation and that FACA violation would have become part of the minutes of the meeting.

You argue that it was the right thing to compromise on HHS’s changes to avoid risking that “the P2P panel would never see [the comments].” I disagree. Regardless of the fact that the ends hardly justify unlawful means, CFSAC should have chosen to avoid any appearance of improprieties. The committee could have had an impact without the Secretary. It should have adopted the recommendation without HHS tampering. If the Secretary had not followed the recommendation and not forwarded the committee’s comments to the P2P panel, that would have spoken for itself in a powerful way. Meanwhile, the CFSAC recommendation would have still remained on the record and members of the public could have and would have submitted the recommendation to the P2P panel.

The position that Ms. James got involved “with the best intentions” is both irrelevant and a leap of faith that seems unjustified given the history of the treatment of our disease by HHS, which includes a very long list of wrongdoings and abuses of the community by HHS and which doesn’t bode well at all for even more government censorship or secrecy. For example, during my FOIA lawsuit against HHS and NIH, false statements under penalty of perjury made by agency representatives were not isolated incidents. Other examples include misdirecting congressional funds in the millions by CDC; almost complete refusal to fund research of our disease by the NIH using pretexts and untruths; misinforming the public, media and medical community about our disease by CDC; conducting unscientific studies claiming, e.g., a connection between sexual abuse and our disease by CDC; creating the meaningless social construct and harmful name “CFS;” ridiculing patients by CDC and NIH; creating overly broad definitions by CDC preventing research progress by diluting cohorts; and frequently committing violations of various federal laws. In addition, vigilance and skepticism were clearly called for given HHS’s history of making changes to CFSAC recommendations, which was documented well by Jennie Spotila.

It is alarming that this improper influence by HHS was tolerated at the time given the compelling objections from at least two working group members. Now that these violations have been clearly exposed, any justification of HHS’s unlawful interference can no longer be maintained.”


Another CFSAC FACA Fail: DFO Misconstrues Law

On Monday, I published a post about CFSAC violating the Federal Advisory Committee Act (“FACA”) by failing to make the working group’s draft P2P comments available to the public prior to, or at the time of, the January 2015 CFSAC meeting.

Under section 10(b) of FACA, “the records, reports, transcripts, minutes, appendixes, working papers, drafts, studies, agenda, or other documents which were made available to or prepared for or by each advisory committee shall be available for public inspection and copying at a single location in the offices of the advisory committee or the agency to which the advisory committee reports until the advisory committee ceases to exist.” These documents must be made available no later than at the time of the meeting. When HHS had not made the draft comments available to the public for the January meeting, I requested that document, under FACA, after the meeting in a letter to Barbara James, Designated Federal Officer (“DFO”) of CFSAC at the time. I received it in March along with other documents, which are posted and analyzed in my Monday post.

In an apparent attempt to counter my FACA-violation charge, the current DFO, Dr. Nancy Lee, addressed the issue at Tuesday’s meeting:

“We are not required to send these [documents prepared for the meeting] out in advance of the meeting [other than to committee members] because they are pre-decisional. … [W]e are not required to post them on the website because they are pre-decisional.”

Dr. Lee’s excuses for HHS’s failure to comply with FACA fall flat because she is plain wrong regarding the requirements of FACA.

First of all, she seems to combine two arguments. It appears she is saying that HHS is not required to

  • email the relevant documents to the public or post them on the CFSAC website in advance of the meeting nor
  • disclose them at all because they are pre-decisional.

Numerous other advocates and I wrote to Ms. James ahead of the meeting in January, saying that “the public should see” the document, which is exactly what FACA requires. I did not ask for the document to be emailed to me nor for it to be posted on the CFSAC website. I also never claimed that the FACA violation was the result of the failure to do so. Yet with her above statements, Dr. Lee falsely insinuated that I did when, in fact, I pointed out a FACA violation resulting from not making the document available to the public at all as required by FACA.

Regarding her second argument, Dr. Lee seems to be confusing the Freedom of Information Act (“FOIA”) and FACA. Under FOIA exemption 5, government agencies may withhold documents that are the product of the “deliberative process in governmental decision-making,” also referred to as “pre-decisional,” the term Dr. Lee used. The relevant FACA-disclosure requirements, however, are as follows:

“… FACA requires disclosure of written advisory committee documents, including predecisional materials, such as drafts, working papers and studies. The disclosure exemption available to agencies under exemption 5 of FOIA for predecisional documents and other privileged materials is narrowly limited in the context of FACA to privileged ‘inter-agency or intra-agency’ documents prepared by an agency and transmitted to an advisory committee.” [emphasis added] (Memorandum Opinion for the Assistant Attorney General Office of Legal Policy dated April 29, 1988)

In other words,

“FOIA Exemption 5 cannot be used to withhold documents reflecting an advisory committee’s internal deliberations.” (Memorandum for Committee Management Officers from James L. Dean, Director, Committee Management Secretariat, dated March 14, 2000)

The rationale is simple:

“Timely access to advisory committee records is an important element of the public access requirements of the Act. Section 10(b) of the Act provides for the contemporaneous availability of advisory committee records that, when taken in conjunction with the ability to attend committee meetings, provide a meaningful opportunity to comprehend fully the work undertaken by the advisory committee.” (see 41 C.F.R. §102-3.170)

CFSAC, as a FACA committee, is not an agency. Therefore, there is no FACA exemption for pre-decisional materials prepared within CFSAC, a sub-committee or a working group for consideration at a CFSAC meeting. Such an exemption would apply only if the materials were prepared by HHS, one of its component agencies or another federal agency.

Lastly, with respect to the working group draft documents that were discussed during the August 18-19, 2015 meeting, Dr. Lee mentioned they were “in the back of the room” and “available for anybody here for review.”

That would not appear to be FACA-compliant either because under Section 10(b) of FACA, the documents “shall be available for public inspection and copying ….” In order to gain access to a CFSAC meeting, one must be pre-registered for the meeting in order to undergo a security check in advance. Without being registered, one is not able to enter the Hubert H. Humphrey Building where the meetings take place. Therefore, “available for anybody here in the room” does not constitute being “available for public inspection” and “available for review” does not satisfy the requirement that it be “available for copying.”

In my last post, I called for a firm commitment from HHS to follow federal laws, such as FACA, going forward. Instead, Dr. Lee misinformed the committee and the public about the legal requirements under the statute, implying that no FACA violation by HHS had occurred. HHS continues to act as though the agency is above the law and it appears that HHS has no intention to be compliant in the future.


Oops, they did it again! CFSAC violates FACA

This is a post about the violation of federal law by CFSAC yet again. This is also a post about how HHS has controlled CFSAC’s input on the P2P report.

CFSAC violated FACA, the Federal Advisory Committee Act, again. Not that anybody is shocked by that anymore, I know. Just your typical day at the CFSAC office. But wait! There is more. CFSAC’s DFO unduly influenced the committee’s advice to the Secretary. While this probably doesn’t come as a surprise to most either, it is quite revealing as to the mindset of HHS and what M.E. patients can expect from the agency, which is more stonewalling and empty promises. Nothing good has ever come out of that agency for us. I hate to be the bearer of bad news, but it’s not about to change if history as recent as January of this year is any indication.

“How dare I make these claims!” you say? Well, CFSAC admitted to the FACA violation in writing (see below) and I will spell out the overwhelming evidence for the undue influence of CFSAC by HHS in detail below. Here is a link to supporting documents for both, which I received in March 2015 from HHS after I officially requested, under FACA, documents relating to the January 2015 CFSAC meeting.

Of course, HHS played the same document-production game it enjoys so much with FOIA requests, except that, this time, it could not delay nearly as much as it does with FOIA requests (never mind that such delay violates federal law) because I formally requested the documents under FACA and the deadline for bringing a lawsuit based on a FACA violation is very short. There are more technicalities here, but I will spare you the tedious details. I received the documents shortly before the deadline to sue for this recent FACA violation. Parts of the documents—crucial parts, i.e., the references to the line-item numbers for the P2P draft report—are illegible; pages were out of order based on the Bates numbers assigned to them by HHS; and after I sorted them according to the Bates numbers, they were not in chronological order. Maybe to make up for that, I received duplicates of 45 pages. Of course, all of that makes a review that much harder, but based on my FOIA lawsuit experience with HHS and NIH, it’s not only par for the course, it’s by design.

Page numbers I cite refer to document page numbers, not the Bates numbers at the bottom. If you are in the document on your computer, searching for a page is much easier that way. However, if you print the document (It’s long!), the page numbers will be off by one because the cover letter from the DFO does not have a Bates number.

No Access for Public to CFSAC P2P Draft Comments During January Meeting

FACA, the Federal Advisory Committee Act, governs the activities of federal advisory committees. Unlike with, say, the Patriot Act, the title sort of gives it away. Importantly here, it focuses in part on meetings being open to the public. According to section 10(b) of FACA, an agency is generally obligated to make available to the public, before or at the time of the meeting, all materials that were made available to or prepared for or by an advisory committee. The rationale for the obligation to provide contemporaneous availability of advisory committee records under FACA is simple. It is to afford, “when taken in conjunction with the ability to attend committee meetings, […] meaningful opportunity for the public to fully comprehend the work undertaken by the committee.” (41 C.F.R. §102-3.170). Without that opportunity, the meeting isn’t really open to the public. Not providing contemporaneous access to committee records is a FACA violation, a big federal no-no, and that’s what happened with CFSAC in January of this year.

So, “What exactly happened?” you ask. At its December meeting, CFSAC decided to convene an ad hoc working group (“Working Group”) that would provide comments from CFSAC on NIH’s Pathways to Prevention (“P2P”) draft report. The Working Group consisted of the following CFSAC members and ex officios: Dr. Dane Cook, Dr. Mary Ann Fletcher, Dr. Fred Friedberg, Dr. Susan Levine, Dr. Janet Maynard, Donna Pearson and Alaine Perry. The Working Group also included two non-CFSAC members, Claudia Goodell and Charmian Proskauer. That Working Group prepared a draft of the official CFSAC comments on the P2P report and that draft document was the subject of discussion among all CFSAC members at the January 2015 CFSAC meeting. (It was finalized after the meeting and submitted to the Secretary.) FACA was violated when the discussed draft was not made available to the public prior to, or at the time of, the meeting. As a result, patients and other members of the public who listened to the meeting over the phone—the only way for the public to participate—found it impossible to follow along, which essentially turned the meeting into a non-public meeting and that, in turn, means that the CFSAC P2P recommendation to the Secretary was invalid.

There is not much grey zone here. This is about as clear-cut a FACA violation as you will find. If you still don’t believe me, check out page 1 of the linked-to documents. And I quote from a letter written to me by the DFO, Barbara James, dated March 3, 2015, in response to my FACA demands:

“We sincerely apologize that the enclosed Draft Comments discussed during the CFSAC meeting on January 13, 2015, were not provided at the time of that meeting. Thank you for bringing this issue to the attention of HHS, so that HHS can try to prevent this issue in the future.”

And there you have it. Excuse me if I find the I-swear-I-didn’t-know-FACA explanation lacking. Assuming that CFSAC’s DFO was indeed ignorant with respect to FACA, is that really better than a willful violation? In any event, it is an assumption I am not willing to make. The DFO is supported by an assistant whose job it is to know the intricacies of FACA inside and out. This was no oversight.

Also note the lack of a firm commitment to comply going forward. “[T]ry[ing] to prevent [FACA violations] in the future” just isn’t anywhere near good enough.

HHS was fully aware that not providing the draft comments to the public in time for the meeting would make it impossible for the public to follow the meeting. When asked by Ms. Perry about the reason for the artificial three-page limit for the CFSAC comments, the DFO replied that there isn’t an official page limitation (page 305) and then stated the following (page 304):

“The upcoming meeting will be a conference so the committee and the public will not be viewing slides or the document on their computers. Therefore, all changes (edits, new text, etc.) will not be visible to the listening audience or the committee.”

This statement is true only with respect to the public, of course. The committee, on the other hand, did have access to the document, either on their computers or in hard copy format (though not to the changes in real time other than by listening). But what’s important here is that it didn’t bother the DFO one bit that the public would basically be shut out of a meeting that, under federal law, is supposed to be open. Ms. Pearson was also aware of the lack of access to the document to be discussed (p.89):

“Since the public will not have the document, you should suggest up front that they follow along using the P2P’s 389 line Draft Executive Summary if possible. (You might also say that it might probably be difficult for them to follow everything discussed, but that the complete document should be posted on the CFSAC website after going through the correct channels.)”

A cognitively impaired patient population will have difficulty following the discussion of a document it doesn’t have access to? You don’t say! This complete disregard of the duties of a DFO under a federal law is simply inexcusable. Which part of “contemporaneous” is so hard to grasp? What good does a subsequent posting of the finalized document do? None. And “after going through the correct channels?” Wait, more censorship?

When I threatened legal action in January, I received the linked documents, among them various versions of the draft CFSAC comments. But again, having access to the discussed document after the meeting is not what Congress had in mind when it enacted FACA.

And how about this? In an earlier comment regarding the three-page limit of the document, the DFO offered this justification (page 305):

“… to increase the chances that NIH will actually review and consider our comments.”

This had me quite confused, as the P2P process was supposed to be carried out completely independently from NIH. But I digress.

Keep in mind that CFSAC is required to provide the committee documents without members of the public requesting them. And yet, HHS did not do so despite many patients and advocates expressly asking for it. See the numerous emails from the public in the public-comment section of the linked documents starting at page 150 asking for a copy of the draft comments.

[Edit August 20, 2015: At the CFSAC meeting on Tuesday 18, 2015, the DFO, Dr. Nancy Lee, seemed to try and counter my charge of this FACA violation. I examined her arguments in my new blog post, “Another CFSAC FACA Fail: DFO Misconstrues Law.” Basically, Dr. Lee misinformed the committee and the public on the law.]

Undue Influence by HHS

Regarding the second violation, let’s start with how those P2P comments from CFSAC came about. Please note that I have probably not completely captured the process, as it seems pretty clear that I was not provided with all correspondence regarding the matter, despite the representation by HHS that it had “provided all the documents available under FOIA.” Another blatant misrepresentation.

On December 19, 2014, Ms. Pearson sent a first draft of the CFSAC P2P comments to the Working Group (pp. 277-298). A call among the Working Group members was held on January 5, 2015 to discuss the draft, and a revised version was circulated the same day. Three days later, another version was sent to the Working Group members.

Ms. Pearson rejoiced:

“The end is in sight!”

Quite obviously, the Working Group did not expect any substantial additional changes (aside from the ones from CFSAC members who were not part of the Working Group).

Ms. Pearson let the Working Group members know that the document was:

“being carefully reviewed by Barbara James and her staff. They will check for grammar, typos, errors.” They would then “send the document to the full Committee for advance review.” (p. 2)

So far, so good. Grammar and typos, fair enough. Errors, makes sense. Until … all hell broke loose two hours later. Ms. Pearson notified the Working Group as follows (p. 29):

“Barbara James just informed me that the Committee Management Officer for HHS has advised that our document will not be cleared for submission to the Secretary as written. The inclusion of statements that are perceived to be inflammatory, negative or derogatory to HHS or other agencies, the Panel, the Secretary or others will not be accepted.” [emphasis added]

This beyond-belief interference by HHS reminds me a bit of lower-level party officials not allowing the submission to the Politbüro of a report that would be offensive to the communist party or its leadership. Under FACA, CFSAC is supposed to be independent from its parent agency, HHS. In fact, it is supposed to give advice to HHS, not receive it from HHS just to turn around and forward it to the Secretary. It’s called an “advisory committee!” Get it? If HHS dictates what advice CFSAC can give to the Secretary, then the Secretary is really advising herself through her own agency. Go, taxpayer money!

The Secretary is, of course, free not to implement a CFSAC recommendation. In fact, HHS’s Secretaries have a lot of experience with that; they have made a habit out of ignoring CFSAC. But neither HHS’s Committee Management Officer nor CFSAC’s DFO has the right to refuse to submit a CFSAC recommendation to the Secretary. The draft comments disseminated to the Working Group were about to become a CFSAC recommendation, subject to some minor changes by the entire CFSAC before and during the January meeting, had the DFO not intervened. To threaten that a committee recommendation will “not be cleared for submission to the Secretary as written” clearly eviscerates CFSAC’s independence.

There was a certain amount of CYA involved here (p. 29):

“Please be aware that Barbara did indicate that we can stand by our original document and/or that one or more of us could submit it directly to the P2P Panel as individuals (not on behalf of the CFSAC). However, it will not be posted on the CFSAC website without the Secretary’s clearance, nor it will be sent to the Panel.”

Another threat, this time that HHS would not post the recommendation on the CFSAC website. HHS must really not have liked those nearly complete draft comments by the Working Group.

So, let’s look at this more closely. Subjectively (“perceived”) inflammatory, negative or derogatory comments will not even be sent to the Secretary? Nothing “negative?” Are these people serious? What is this, the editorial policy of Pravda? If a Secretary’s ego is so fragile that she can’t handle any criticism of anybody (“others”), maybe she’s in the wrong line of work. Patients are suffering day in and day out and the highest-ranking government official in the health department needs to be protected from the truth? Why is it that HHS is so ashamed of the M.E. reality that they have created? If they had done their jobs, shouldn’t they be proud?

But fear not, Ms. Pearson promised an easy fix of the situation. She let the Working Group know that the DFO had volunteered to work late into the night to sanitize, I mean revise, the document (page 29). And so the DFO did; she worked all the way till 11:22 p.m., bless her heart.

This did not go over very well with some of the Working Group members. This is what Dr. Fletcher had to say in reply to the astounding news (p. 31):

“I am certain that our charge as members of the CFSAC was to advise the Secretary of HHS on ways the HHS may better help patients with ME/CFS through research, clinical care and prevention efforts. We were not told to avoid criticism of the HHS or any of its agencies, indeed we were to advise HHS and advise on ways to have an effective programmatic response. The P2P process, which included public response time before finalizing was designed to help set the research agenda for the field. Certainly the CFSAC advisory committee’s response should have weight and be taken into account. The report as it stands is the advise [sic] of this committee to the Secretary and the P2P panel. It should not be edited or changed by the HHS staff.

We thought that we were asked to serve on CFSAC because we had expertise in the field and that HHS wanted our advice. We have worked diligently and professionally in preparing this response, which should be delivered to Secretary and to the P2P panel without further changes or delay. We would hope our comments will be seen and influence the report before it is finalized.”

Brava, Dr. Fletcher!

Ms. Proskauer also took issue:

“Barbara, can you tell us exactly which statements have been flagged as ‘inflammatory, negative or derogatory?’ We should all know, then be given the opportunity to address these as a group. The full Committee has not even had an opportunity to review our work, either to approve or change. It does not seem appropriate to be making changes prior to the full Committee discussion.”

More good points made.

Despite objections being raised, the DFO proceeded to revise the document and, man, it sure must have been in need of some serious revisions—given the massive number of changes that were made—despite all the time and effort the Working Group had invested and despite the fact that the draft was basically final. In the process of being reviewed and revised by HHS, entire paragraphs were deleted. To get a feel for the extent of the revisions made after HHS got involved, take a look at the redlines starting on p. 33 and on p. 359. Some language was revised in such a way as to change its meaning completely. The redlines don’t always seem to track the changes properly because the deletions and additions don’t match up in some places. Some changes didn’t make it into the final document. The important point is the extent of HHS’s involvement and the nature of the resulting or attempted changes. The comments were supposed to come from CFSAC, not HHS itself.

As you go through the versions and email correspondence, please keep in mind that there are, in all likelihood, many emails missing. Some emails are referenced, but were not provided. Not a single email critical of the HHS draft was provided to me. It is simply not credible that there were none. There is no way that there was not more fallout from the vast changes made after HHS got involved. If the file I received were complete, some Working Group members did not chime in at all. Obviously, there is a lot more related correspondence out there that we don’t have access to. Are we to believe that somebody as principled and outspoken as Dr. Fletcher, for example, would not have objected to the heavy-handedly edited Working Group draft? The correspondence that was sent to me was clearly cherry-picked, and the critical voices were left out. Dr. Cook called the revised document “improved” (p. 84). Seriously? The document was gutted! With friends like that, who needs HHS? Dr. Sue Levine simply said, “I think the document is fine.” And off it went to the entire committee, sent by the DFO. A few more changes were made in response to requests by the full committee. And voilà, a CFSAC recommendation that was quite different from what the Working Group had signed off on was created. It can be found on the CFSAC website.

Below are a few examples of changes that seem to have occurred after HHS got involved. There are many more. Underlined parts were added; struck-through parts were deleted.

“Although dedicated researchers have identified parameters for defining ME/CFS, those parameters have not been universally adopted by the CDC and HHS. As a result, studies of ME/CFS are fraught with methodological problems, preventing a clear understanding of who is affected by the disease.” (p. 10)

“The dissemination of diagnostic and therapeutic recommendations should focus on primary care providers and all other health care providers dealing with symptoms specific to this disease, including but not limited to cardiologists, endocrinologists, neurologists, rheumatologists, psychiatrists, clinical immunologists, internal medicine and pediatrics, and infectious disease specialists.” (p. 15) (emphasis added)

“Earlier in the Draft, you asked whether or not ME/CFS is a spectrum disease. We believe the better question is the one originally published by the P2P Working Group and then discarded due to lack of research studies and evidence. “Are ME and CFS separate diseases or do they fall on a spectrum of one disease?” To take that original question further, have the terms CFS and ME/CFS been broadened, intentionally or otherwise, to encompass far more conditions than the disease identified as Myalgic Encephalomyelitis by the World Health Organization?” (p. 18)

“Researchers, advocates and the CFSAC have recommended use of the Canadian Consensus Criteria to define the illness until further research warrants modification. The failure to do so, along with the failure to adequately fund large scale studies aimed at identifying objective biomarkers, has opened the door to no fewer than eight (8) definitions over the years.” (p. 35)

“The dearth of funding and reluctance of the HHS to collaborate with the broader stakeholder community has negatively impacted scientific progress in every way.” (p. 36)

“…estimated $5 million, which is far below diseases of less consequence and lower prevalence…” (p. 36)

“Clinicians and others who do not think that ME/CFS is a disease in its own right simply have no [sic] read the literature and are thus uninformed.” (p. 40)

“Yet the NIH and other agencies use a lack of information regarding ME/CFS to justify the failure to adequately fund additional research. In a response to this Committee’s request for an RFA in 2014, the National Institutes of Health replied “Unfortunately there remains a lack of definitive evidence regarding the etiology, diagnosis, and treatment for ME/CFS. As such, issuing a Request for Applications (RFA) would not be an effective strategy as RFAs generally encourage a narrowly defined research area that addresses more specific gaps in scientific knowledge.” Regarding the lack of a consistent set of criteria, the CFSAC has frequently recommended the universal adoption of the 2003 Canadian Consensus Criteria (CCC), requiring the key symptom of post exertional malaise.” (p. 40)

“It is important to acknowledge that a majority of experts in the field have agreed upon parameters for defining ME/CFS. In a letter to the Secretary of Health and Human Services dates [sic] September 13, 2013, more than 50 of the world’s experts stated that they support the adoption of the 2003 Canadian Consensus Criteria and urged the HHS to adopt the CCC as the single case definition for all Department activities, both research and clinical uses. The NIH acknowledged their status as experts when responding to a CFSAC request for a data and biobank sharing platform in 2014: “The pool of ME/CFS researchers is small (e.g., the advocacy field identifies a group of 50 ME/CFS clinicians and scientists world-wide considered expert in this area of research.)… Thus, developing and maintaining a unique ME/CFS database is cost prohibitive in light of the small number of ME/CFS researchers…” However, the request of these experts to adopt the 2003 Canadian Consensus Definition has not been recognized or supported by the CDC and HHS agencies.” (p. 40)

“... we consider the PACE Trial to be ‘fruits of the poisonous tree.’” (p. 43)

“There is research and evidence for post-exertional malaise in ME/CFS and neurocognitive symptoms have been demonstrated for decades in this patient population. Much of the literature regarding ME/CFS was excluded from the Evidence Review.” (p. 45)

“[T]he failure to lack of universally accepted adopt the parameters identified by dedicated researchers has stifled progress.” (p. 45)

“There exists a plethora of is published objective data about ME/CFS and the disease itself is not subjective in nature.” (p. 45)

“Additionally, rather than holding yet another workshop or conference, strong commitment from the Department of Health and Human Services is needed to follow the lead of experts in the field and fund the research that is so desperately needed.” (p. 46)

“The Department of Health and Human Services HHS should follow the lead of stakeholders and national and international experts to adopt a universal consensus-based case definition and to help advance the field.” (p. 46)

“For decades the burden of communication has fallen almost entirely on patients who must often educate themselves in order to receive a correct diagnosis, and then must educate family, friends, employers and healthcare providers. Many patients, especially those who are better educated and have more financial resources, are (by necessity) actively involved in their own care, while others are too sick to participate or do the research required to find physicians who can help.” (p. 50)

The DFO delivered her vast edits with the following comments (p. 358):

“I tried to keep as much of the working group’s language as possible.”

Could have fooled me. And not just me. Even Ms. Pearson, who was spearheading the Working Group effort and worked very closely with the DFO, noted (p. 62):

“We will need to communicate thoroughly and effectively with the members of the Working Group. There are so many modifications this document [sic] that they will consider the revisions to be disrespectful of their expertise and unappreciative of their contributions.”

When Ms. Pearson sent the heavily revised document to the Working Group, she did so saying (p. 88):

“PS If you are interested in seeing Barbara’s ‘red ink’ version, I will send it under separate cover. However, you’ll probably see all the deletions and changes and get confused, disheartened, and angry. So I just ask that you check out the revised document with a fresh eye first, then go through the marked up version to see specifically what is [sic] missing and/or changed.”

These two quotes are ever so revealing. It is quite obvious from that language that there was an awareness that the extent of the changes was excessive and problematic.

Remember, the document was just about to go from the Working Group, which had basically completed its work, to the entire committee for its review when the DFO, Barbara James, took over. (The remainder of the committee later ended up making only minor changes to the document, as those committee members with the most interest in the subject had likely volunteered to be members of the Working Group and had already provided their input.) The only changes that the DFO and her team were going to make were with respect to grammar, typos and errors. Yeah, right.

I noticed that Dr. Levine raised the necessity of separating ME from CFS twice, but no responses to her have been produced. Here are Dr. Levine’s remarks:

“I wonder if any of you care to address the ambiguity of using the combined term ‘ME/CFS’ and the need to tear these apart as representing possibly 2 separate illnesses.” (p. 300).

“Personally, I feel that even though it’s mentioned several times, that it’s crucial that we distinguish ‘ME’ as a separate illness or what we now understand to include ‘post exertional malaise’ from other types of fatigue.” (p. 313)

Finally, Ms. Pearson sent an email around to the Working Group copying and pasting “complimentary emails.” Emails by Working Group members that were critical of the process are obviously missing from the production (see above), but HHS was making sure to send along those messages containing praise, pitting different corners of the patient community against each other:

“Although no one told me their remarks were confidential, it might be best to keep them within out [sic] group, just in case.” (pp. 345-347)

Just in case of what? In case of a FOIA request, under which the identities could not legally be redacted? Although assuming that HHS would bother with legalities is a stretch. Silly me. Some of the messages are easily attributed to their authors as it is. Let’s see, which advocate uses random initial caps, just for example? Of course, I noticed the mea-culpa message (p. 346):

“I thought Jeannette Burmeister was on to something with her legal case and that it was all well thought out. My sincere apology!” (p. 346)

I respect somebody who is able to admit they were wrong. Except I was indeed on to something and it was indeed all well thought out. No good deed goes unpunished.

So, am I going to get an apology now? No worries; my ego is not that fragile or needy. What really is called for is an apology by HHS to the entire M.E. community and, more importantly, a firm commitment to follow FACA and other federal rules in the future.


CFSAC Comments August 2015: Ampligen Price Increase on Shaky Ground

I looked into the Ampligen issue–the exorbitant 267% price increase by Hemispherx–some more. Here is one thing that patients who are currently enrolled in the trial can do. They can contact Schulman Associates, the Institutional Review Board (IRB) for this trial at: Schulman Associates Institutional Review Board, Inc., 4445 Lake Forest Drive, Suite 300, Cincinnati, Ohio, fax: 866.377.3359. The IRB was “established to help protect the rights of research subjects” and encourages trial participants to write to the IRB “[i]f [they] have any questions about [their] rights as a research subject, and/or concerns or complaints regarding this research study….”

I also sent a follow-up message to FDA’s Dr. Woodcock with additional information regarding the distressing new price for Ampligen. I submitted that message, together with my message to Dr. Woodcock from two days ago, as official comments for the CFSAC meeting next week, urging CFSAC, especially its FDA ex officios, to follow up with Dr. Woodcock. My CFSAC comments are reproduced below. (My new message to Dr. Woodcock starts under “August 13, 2015 Letter.”)

CFSAC Meeting August 18th-19th, 2015

Public Comments by Jeannette Burmeister

Submitted on August 13, 2015

I would like to urge CFSAC, particularly its FDA ex officio members, Drs. Maynard and Hall, to follow up with Dr. Woodcock regarding Hemispherx’s enormous price increase for Ampligen. Since FDA has regulatory authority over cost-recovery programs such as the Ampligen trial, I am asking that FDA exercise its authority to audit the justification of the new price and to re-assess its authorization of the increased price.

Below are two letters I sent to Dr. Woodcock on August 11, 2015 and today (August 13, 2015) with more details about the situation, which is dire and urgent.

August 11, 2015 Letter:

You can read my August 11 letter at 267% Price Increase for Ampligen.

August 13, 2015 Letter:

Dear Dr. Woodcock,

As a follow-up to my letter of August 11, 2015 regarding the enormous price increase for Ampligen by Hemispherx Biopharma, Inc. (“HEB”), I wish to raise a few additional issues.

As you know, FDA may allow drug companies to recover certain costs for investigational drugs in accordance with 21 C.F.R. 312.8, as it has done in the case of HEB and Ampligen. In order for a drug manufacturer, a so-called sponsor, to charge for certain costs for a drug undergoing clinical investigation, certain requirements have to be met.

In accordance with 21 C.F.R. 312.8(d)(1), a sponsor may recover only the direct costs of making its investigational drug available, not the indirect costs. The regulations further provide:

“Direct costs are costs incurred by a sponsor that can be specifically and exclusively attributed to providing the drug for the investigational use for which FDA has authorized cost recovery.”

“Indirect costs include costs incurred primarily to produce the drug for commercial sale (e.g., costs for facilities and equipment used to manufacture the supply of investigational drug, but that are primarily intended to produce large quantities of drug for eventual commercial sale) and research and development, administrative, labor, or other costs that would be incurred even if the clinical trial or treatment use for which charging is authorized did not occur.”

In March of this year, Hemispherx announced the completion of an $8 million facility-enhancement project in New Brunswick, N.J. to allow for a higher-capacity manufacturing process for both Ampligen and the company’s other drug, Alferon N. In the same month, HEB announced plans to commence distribution of Ampligen in Australia and New Zealand. This week, HEB announced that it was getting ready to supply Ampligen to patients in Europe and Turkey. As Australia, New Zealand, Europe and Turkey are currently completely untapped markets for Ampligen, it seems likely that the upgrades to the New Brunswick facility were made in anticipation of commercially selling the drug in these large new distribution areas, especially given the timing; the completion of the enhanced facility nearly coincided with the announcement regarding Australia and New Zealand and was followed, only a few months later, by the announcement with respect to Europe and Turkey. HEB will need to produce Ampligen in much larger quantities now in order to satisfy the demand in the new markets and with its upgraded facility will have the capacity to do so. In addition, HEB, by its own admission, is still actively and diligently pursuing FDA approval in the U.S. If it is successful with that endeavor, the new facility will be used to produce large quantities of Ampligen for commercial sale in the U.S. Consequently, the facility-enhancement project is likely an indirect cost and not recoverable under FDA regulations. Therefore, should HEB have included it in the cost justification for the price increase in the U.S. market, that would constitute an improper cost calculation and, given the magnitude of the project, even if depreciated or amortized, it alone may account for the Ampligen price increase.

Moreover, HEB has incurred manufacturing costs for the study of Ampligen treatment of other indications, e.g., Ebola, HPV, HIV, hepatitis and influenza. Were the costs for those efforts included in the cost justification for the open-label-trial price increase?

I also want to make you aware of the fact that the documentation that patients had to sign in order to enter the trial makes the express representation that the charge for the drug is “expected to be $2,100 for the first eight (8) weeks and $2,400 for each additional eight (8) week period.” Obviously, modest, justifiable price increases are to be expected and not objectionable. Dramatic increases—certainly one that takes the annual price to roughly 267% of the original ($26,000 more per year, from $15,600 to $41,600)—are not; they are inconsistent with the terms on which patients agreed to participate in the trial. In an FDA-regulated trial, such seeming price gouging ought to be impermissible, especially given the concerns as to the cost calculation and the representations made to trial participants. Many, if not most, participants have made substantial personal sacrifices, financial and otherwise, to participate. Over all these years, they have also contributed, often at a price to their health, to HEB’s FDA-approval efforts for Ampligen by frequently completing extensive paperwork, undergoing large blood draws, performing stress tests twice a year, traveling to D.C. to testify in support of the approval of the drug, etc.

These and potentially other concerns raise serious questions as to whether the tremendous price increase for Ampligen was implemented properly and is otherwise permissible. Since FDA has regulatory authority over cost-recovery programs such as this one, I am asking again that FDA exercise its authority to audit the justification of the new price and to re-assess its authorization of the increased price. I do not purport to speak for anybody other than myself, but please be aware that the situation is a top priority for many in the patient population.

Sincerely,

Jeannette Burmeister

cc:

Dr. Stephen Ostroff, FDA Acting Commissioner (via email)

Nancy McGrory, Hemispherx Patient Advocate (via email)

Schulman Associates Institutional Review Boards (via fax)


267% Price Increase for Ampligen

[Please see here for my follow-up letter to Dr. Woodcock.]

I just sent the following message regarding Hemispherx’s extraordinary 267% price increase for Ampligen to Dr. Janet Woodcock, the FDA’s Director of the Center for Drug Evaluation and Research:

Dear Dr. Woodcock,

I am writing to you regarding a matter of grave concern for the patients in Hemispherx Biopharma, Inc.’s (“HEB”) AMP-511 open-label clinical trial for Ampligen, a drug highly effective for many patients with ME (or, as the FDA calls it, “ME/CFS”). I have testified at the Ampligen Advisory Committee meeting and other federal committee meetings in favor of FDA approval of the drug, and I remain convinced that this drug should be approved by the FDA without further delay, because many patients would benefit from it and because there are no other FDA-approved pharmaceutical interventions for ME.

I have been a study participant for over three years. Last night, I learned through ME Action’s blog (http://www.meaction.net/2015/08/10/ampligen-price-increases-substantially-available-soon-in-europe/) that the price of the drug will go up to 267% of its current level, from $15,600 to $41,600 per year, effective as of July 24, 2015. Because the drug has not been approved by the FDA, its cost is currently not covered by private insurance or Medicare/Medicaid. Patients pay the entire cost out of pocket. Nevertheless, I have not received any notification from HEB of this extraordinary price increase.

HEB seems to claim that the price increase is necessitated by its increased cost in providing the drug to trial participants and that the increase has been verified by an accounting firm. However, accounting firms can avail themselves of a number of different methods to establish cost. For example, I understand that HEB recently expanded its facilities. Was the cost of this expansion, which would be a sunk cost at this point, included in the cost justification for the price increase, either through depreciation or amortization? Moreover, HEB’s position is apparently contradictory as to the basic question of whether the new price includes merely manufacturing cost or also the cost of continued research and FDA-approval efforts. These points are merely illustrative of the various types of cost that may or may not have been included in the price-increase justification. It just does not seem probable that HEB’s cost increased that dramatically overnight. A gradual increase seems much more plausible and would have been much easier for patients to absorb.

As you know, there are only four Ampligen trial sites in the country. Patients move and either leave their families behind or uproot them, either buy and sell houses or rent second homes, give up or change jobs, mortgage their houses, enroll their kids in new schools, etc. in order to relocate to a trial site and, in doing so, incur substantial long-term expenses far beyond just the price of the drug and the related infusion/physician’s cost. At the very least, HEB could have informed study participants of the fact that it is considering an increase at the time when it hired the accounting firm. The entire process from hiring the firm to the firm’s completed report typically takes time. That would have at least provided patients some advance notice. Some patients have very recently invested in relocating to a trial site just to find out now that they will not be able to afford the drug at the new price.

To be completely blindsided—not only without any advance warning, which was entirely feasible, but without any notification from HEB whatsoever upon the effectiveness of the increase more than two and a half weeks ago—is inexcusable, and I would like to confirm with you whether HEB followed all applicable federal rules, both with respect to the magnitude of the price increase and the lack of notice.

I am looking forward to your response. Obviously, the matter is of utmost urgency, as many trial participants will be unable to afford the new price and will have to re-plan their lives without the drug. Most importantly, suddenly being cut off from a potent drug that patients’ immune systems have come to rely on might very well put the health of the current trial participants at risk.

Sincerely,

Jeannette Burmeister

cc:

Dr. Stephen Ostroff, FDA Acting Commissioner

Nancy McGrory, Hemispherx Patient Advocate
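A note on the percentage for readers checking the math: a figure like “267%” can describe either the new price expressed as a share of the old price or the size of the increase itself, and the two differ. Here is a minimal sketch, using only the dollar figures quoted in the letter above, showing how the two readings relate:

```python
# Quick arithmetic check of the Ampligen price figures quoted above.
# The dollar amounts are the ones cited in the letter; everything else
# is illustrative arithmetic, not part of the original correspondence.
old_annual = 15_600   # prior annual cost in USD
new_annual = 41_600   # new annual cost in USD

increase = new_annual - old_annual        # absolute yearly increase
pct_of_old = new_annual / old_annual * 100        # new price as % of old
pct_increase = (new_annual / old_annual - 1) * 100  # increase over old price

print(f"${increase:,} more per year; new price is {pct_of_old:.0f}% of the old "
      f"(a {pct_increase:.0f}% increase)")
# → $26,000 more per year; new price is 267% of the old (a 167% increase)
```

In other words, the new price is about 267% of the original, which corresponds to an increase of roughly 167%; either way, the jump is $26,000 per year, paid entirely out of pocket.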


Hip Surgery and ME: Society Has It Wrong

I am proud to share a note that my husband, Ed Burmeister, wrote last week. He initially posted it only on Facebook, where it received a lot of attention and was shared more than 250 times. It really resonated with the community.

Therefore, I talked him into allowing me to post it here as well. I am blessed to have such a supportive and loving spouse.

Last Wednesday, I had a complete hip replacement. It was a short procedure (1½ hours), with no general anesthesia required. I was out of bed the day of surgery and home after two days. On Monday, I started driving again and really could have done so as early as Saturday. Yesterday, I returned to work. I was comfortably working away, largely free of pain. I walk without a limp and without assistance and am pretty much unrestricted in my activities. I never needed narcotic painkillers after the surgery. Ibuprofen does the trick.

Well-wishing family, friends and colleagues sent cards, flowers and gift baskets.  These were all nice to receive and I appreciated them. There have also been numerous and repeated inquiries about my progress. Just a lot of thoughtfulness all week.

Contrast this with the way Jeannette and her fellow ME patients are viewed and treated by the same cohorts. Their disease, myalgic encephalomyelitis, is many times worse than what I went through, and it is ongoing, in Jeannette’s case for over nine years now. Many others have been sick much longer, some for decades. ME patients will most likely be sick for life, and they typically get worse, as ME is often progressive.

Most activities that others don’t think twice about are impossible for Jeannette. She cannot stand for more than just a few minutes. She cannot walk more than just a few blocks. Sometimes, she cannot walk one block. Her debilitation goes far beyond the effects on her mobility and reaches into every corner of our lives. She is never comfortable, not even for a few minutes. It is always just a matter of degree of the relentless misery. Jeannette’s only contact with the outside world, besides the infusion room, is Facebook. But her presence on social media is frequently judged by some (what her friend Dave, also an ME patient, calls) normal-health people. It is estimated that about 25% of ME patients are sicker than Jeannette, some to a point that is unimaginable to anybody who has not been around those who are near death.

Jeannette is unable to leave the house on most days, and then generally only to receive thrice-weekly infusions, and spends most of her time lying down. Even sitting is impossible for extended periods. If she ignores her limits, it comes at a big price in the form of feeling considerably worse. Last Wednesday, the day of my surgery, Jeannette had no choice but to sit in the hospital waiting room for hours. There was no way to elevate her legs, which would have helped somewhat. Her only alternative was to lie on the floor, which she has done at the airport and other places in the past, but couldn’t risk in a hospital due to her being immunocompromised. At the end of the day, she was at least as impaired as I was, having just come out of major surgery. The next day, she was too sick to visit me in the hospital, for which she beat herself up. She wanted nothing more than to be there next to me in the recliner the hospital staff had kindly moved into my room to accommodate her disability. But she couldn’t. That day, she didn’t eat; she could hardly move or talk. It was her payback for the sin of being there for me on my day of surgery.

It breaks my heart to see what Jeannette and other ME patients go through every day of their lives due to being this sick. But something else is almost more intolerable and that is how society treats them.

The thing is, when she is able to go out to the doctor or for an occasional meal with me, Jeannette often looks normal, often fantastic actually, despite being quite sick, because she rests up for her outings in order to be able to make them, and she probably also operates on a fair amount of adrenaline when she does leave the house, for which she pays dearly. There are times when her appearance matches her debilitation and she looks like death warmed over, but at those times, she is usually too sick to leave the house. Nobody sees it. When others see her on those better days, they simply cannot seem to take in the degree of suffering she endures on an ongoing basis. It is as if, despite her achievements, she has no credibility with society, which makes split-second assumptions about her health merely due to her particular diagnosis and what people think they know about it, which typically has very little to do with reality. At best, her disability is ignored. At worst, she isn’t believed. Hence, she does not receive flowers or gift baskets or cards wishing her well. Much worse, she does not receive the consideration and understanding that even a modest comprehension of her disease should provide.

I think having to endure this constant indifference and complete lack of understanding from those around her is as hard on her as the suffering from the disease itself. The absence of any validation of the degree of her disability and of any consideration for her special needs is, in and of itself, debilitating and robs Jeannette’s soul of the nourishment and support she so desperately needs.

The determination with which society refuses to acknowledge the severity of ME would be hard for me to believe if I didn’t witness it almost daily. A week after major surgery, I am multiple degrees less sick than Jeannette is almost every day, but–except for her fellow patients from whom she fortunately draws a lot of strength–nobody around her knows it. Worse, it seems that people don’t want to know it.
