Proof that fake medicine has us hoodwinked.
Scientific studies--and especially clinical trials--are considered the gold standard for evaluating most new medical treatments. If a treatment is backed by several studies--regardless of how many studies it took to get there--it's considered "good to go" in the medical community. On the other hand, if an alternative treatment comes up short in just one well-publicized study, it is derided and dismissed by the medical community as useless and a waste of money, or even worse: dangerous. More curiously--but perhaps eminently understandably, considering human nature and the aura of gospel doctors attach to studies--once newer studies disprove older ones concerning the efficacy of a medical treatment, it can still take decades for the medical community to act on that knowledge and disavow the now useless or dangerous treatment. Such is the miracle of studies.
For our readers, this is nothing new, as I've talked about these issues extensively over the years--to the point of exhaustion. Now, don't get me wrong. I'm by no means trying to say that studies are useless or bad. On the contrary, they are responsible for the great advancements in science over the past several centuries. They are invaluable. But, that said, they are not what most people think they are. No single study or set of studies should be considered gospel. There are just too many places where errors can enter a study--and once it is peer reviewed and approved, there are just too many journals in which that original error can be referenced and transferred to study after study, continually reinforcing the original error. For these reasons, studies should not be thought of as absolute but, rather, as invaluable trail signs that eventually lead us on a zigzagging path to a place of true knowledge.
Leave it to researchers from Harvard to use a formal, self-referential trial to prove the point.
The objective of the study, which was published over Christmas in the BMJ, was to determine if using a parachute prevents death or major traumatic injury when jumping from an aircraft.1 Yes, you read that correctly: they conducted a randomized controlled trial to determine if parachutes served a function when jumping from planes. And wait until you see the results before you jump (all puns intended) to any conclusions as to the merits of the study.
A total of 92 individuals aged 18 and over were screened and surveyed regarding their interest in participating in the PARACHUTE trial. Among those screened, 69 (75%) were unwilling to be randomized (I wonder why) or were found otherwise ineligible by the investigators. In the end, 23 agreed to enroll and were randomized. The study was conducted using private and commercial aircraft between September 2017 and August 2018. Participants jumped from an aircraft (airplane or helicopter) either with a parachute (noted in the study as the "intervention" group) or with an empty backpack (noted in the study as the "control" group). The main outcome measured was the composite of death or major traumatic injury (defined by an Injury Severity Score over 15) upon impact with the ground, measured immediately after landing.
Now, to avoid your own personal injury, please remain seated before reading any further. The study found that parachute use did not significantly reduce death or major injury (0% for parachute v 0% for control; P>0.9). This finding was consistent across multiple subgroups. To put that in plain English, none of the study's participants--whether they wore a parachute or not--died or were injured, either on impact or within 30 days of the jump. As a result, the researchers concluded that parachute use does not reduce death or major traumatic injury when jumping from aircraft.
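It's easy to see how a zero-event trial produces that P value. Here's a minimal sketch--not the study's published analysis--of Fisher's exact test on a 2×2 table. The arm sizes of 12 and 11 are assumed for illustration (the article only tells us 23 participants were randomized). With zero events in both arms, only one table is possible, so the P value is exactly 1, reported as P>0.9:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact test P value for the 2x2 table
    [[a, b], [c, d]] (rows = groups, columns = event / no event)."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    # Probability of a table with top-left cell k, margins held fixed
    # (hypergeometric distribution).
    def p_table(k):
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 + col1 - n)   # smallest feasible top-left cell
    hi = min(row1, col1)           # largest feasible top-left cell
    # Two-sided P: sum the probabilities of all tables at least as extreme.
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs + 1e-12)

# 0 deaths/injuries among 12 parachute jumpers vs 0 among 11 backpack jumpers
print(fisher_exact_p(0, 12, 0, 11))  # 1.0 -- i.e., P>0.9
```

Note that the answer is 1.0 regardless of how the 23 participants were split between arms: with zero events total, no more "extreme" table exists.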
Make no mistake. This was an actual study with real test subjects jumping from planes and real data collected and analyzed. It was replete with scientific jargon attesting to both the intelligence of the researchers and the careful design of their study as well as a series of compelling tables. For example:
Table 3: Event rates for primary and secondary endpoints. Values are numbers (percentages) unless stated otherwise.

| Endpoint | Parachute | Control | Mean difference (95% CI) | P value |
| --- | --- | --- | --- | --- |
| On impact | | | | |
| Death or major traumatic injury | 0 (0) | 0 (0) | 0 | >0.9 |
| Mean (SD) Injury Severity Score | 0 (0) | 0 (0) | 0 | >0.9 |
| 30 days after impact | | | | |
| Death or major traumatic injury | 0 (0) | 0 (0) | 0 | >0.9 |
| Mean (SD) Injury Severity Score | 0 (0) | 0 (0) | 0 | >0.9 |
| Health status | | | | |
| Mean (SD) Short Form Health Survey score | 43.9 (1.8) | 44.0 (2.4) | 0.1 (−2.0 to 2.2) | 0.9 |
| Mean (SD) physical health subscore | 19.6 (0.7) | 19.7 (0.5) | 0.04 (−0.5 to 0.6) | 0.9 |
| Mean (SD) mental health subscore | 24.3 (1.3) | 24.3 (2.1) | 0.08 (−1.6 to 1.8) | 0.9 |
As they wrote in the BMJ, should the results be reproduced in future trials, it could save the global economy billions of dollars spent annually on parachutes to "prevent injuries related to gravitational challenge."
"We have performed the first randomized clinical trial evaluating the efficacy of parachutes for preventing death or major traumatic injury among individuals jumping from aircraft. Our groundbreaking study found no statistically significant difference in the primary outcome between the treatment and control arms. Our findings should give momentary pause to experts who advocate for routine use of parachutes for jumps from aircraft in recreational or military settings."
Okay, at this point, you're obviously asking, "What's going on here? There has to be a catch." And there is. Unfortunately, they stated it in typical science-study-speak, which means it's easy to have no idea what they're saying. But hang in there, and all will be made clear:
"A minor caveat to our findings is that the rate of the primary outcome was substantially lower in this study than was anticipated at the time of its conception and design, which potentially underpowered our ability to detect clinically meaningful differences, as well as important interactions. Although randomized participants had similar characteristics compared with those who were screened but did not enroll, they could have been at lower risk of death or major trauma because they jumped from an average altitude of 0.6 m (SD 0.1) on aircraft moving at an average of 0 km/h (SD 0). Clinicians will need to consider this information when extrapolating to their own settings of parachute use."
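That "underpowered" caveat is worth unpacking: a trial's power to detect a difference depends on the event rates and the sample size, and when the observed event rate collapses to zero, so does the power. Here's a rough sketch using the standard normal approximation for comparing two proportions--my own illustration, not the trial's statistical plan; the 50% vs 5% anticipated event rates and 12 jumpers per arm are assumed numbers:

```python
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided z test comparing two proportions
    (normal approximation; a sketch, not a full sample-size calculation)."""
    z = NormalDist()
    # Standard error of the difference in observed proportions.
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_arm) ** 0.5
    if se == 0:            # zero events in both arms: nothing to detect
        return 0.0
    z_alpha = z.inv_cdf(1 - alpha / 2)
    shift = abs(p1 - p2) / se
    # Probability the test statistic lands in either rejection region.
    return z.cdf(shift - z_alpha) + z.cdf(-shift - z_alpha)

# Anticipated at design time (assumed figures): 50% vs 5% event rates
# would be detectable with decent power even at 12 per arm.
print(power_two_proportions(0.50, 0.05, 12))   # roughly 0.8

# Observed: 0% vs 0% event rates leave the trial with no power at all.
print(power_two_proportions(0.0, 0.0, 12))     # 0.0
```

The exact anticipated rates don't matter for the point being made: whatever was assumed at design time, an observed event rate of zero in both arms leaves the test unable to detect anything.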
Or, as they stated more simply in their conclusion: "The trial was only able to enroll participants on small stationary aircraft on the ground." In other words, the jumps were all from a height of about two feet, suggesting, in the words of the researchers, "cautious extrapolation to high altitude jumps." Continuing in that vein, they stated, "The study also has several limitations. First and most importantly, our findings might not be generalizable to the use of parachutes in aircraft traveling at a higher altitude or velocity."
The researchers then added the following statement to their conclusion, revealing the actual intent of the entire exercise.
"When beliefs regarding the effectiveness of an intervention exist in the community, randomized trials might selectively enroll individuals with a lower perceived likelihood of benefit, thus diminishing the applicability of the results to clinical practice."
To once again put that into English, the scientists argued that their trial highlights how misleading scientific studies can be.
"The PARACHUTE trial satirically highlights some of the limitations of randomized controlled trials. Nevertheless, we believe that such trials remain the gold standard for the evaluation of most new treatments. The PARACHUTE trial does suggest, however, that their accurate interpretation requires more than a cursory reading of the abstract. Rather, interpretation requires a complete and critical appraisal of the study."
The parachute study illustrates just one way studies can misrepresent reality. In truth, there are many others that the researchers did not mention.
Now don't misunderstand what I'm saying. Studies are not useless. In fact, they are the foundation of modern medicine and modern science. Without them, we might still be treating diseases by regulating bad humors and expelling evil spirits. BUT--and this is a very, very important "but"--clinical trials are not the be-all and end-all of scientific knowledge. They often contain serious flaws and biases, not to mention being subject to the vagaries of human greed and duplicity. In the end, individual clinical trials should be considered divining rods that, at best, point us in the direction of important conclusions. Certainly, multiple studies that come to the same conclusion are a better indicator than individual studies--but even then, the result is not guaranteed. As mentioned above, if the same bias is carried from study to study (confusing synthetic vitamin E with full-complex, natural vitamin E, for example), study after study can end up with the same flawed conclusion, just reached multiple times. But once we understand the limitations of the different types of studies and the ways errors can creep into them, we can see that decades, centuries, and even millennia of anecdotal information about herbal and nutraceutical remedies can be equally effective in "possibly" pointing the way to important healing discoveries.
And for those who might say, "Yes, Jon, that's all well and good, but who are you to attack the credibility of clinical studies? What studies do you have to support your point of view?" Well, thanks to Harvard University, I can now say, "Here ya go."