The Problem With “Science”

You hear it all the time today, more than ever before: “TRUST THE SCIENCE!” “SCIENCE-DENIER!” “I’m getting the vaccine because I trust the scientists!” “If only we’d trusted The Science, then we wouldn’t be in this mess!”

Dr. Anthony Fauci, the Chief Medical Advisor to the President and the Director of the National Institute of Allergy and Infectious Diseases (NIAID) at the NIH, has been elevated to a position of importance and authority rivaling the President’s. Tens of millions of Americans have absolute and unflinching trust in him, hanging on his every word and believing that he and he alone is the Only One Who Can Save Us from Covid-19. It’s all rooted in the fact that he’s a “Scientist,” and therefore supposedly above the pettiness and bias of the political sphere: his status as a scientist leads lots of people to view him as inherently trustworthy and unbiased, even though, as I went over the other day, he is quite literally one of the least trustworthy people on the planet.

The Democrats, despite believing there are 112 genders, and that you can change your gender with a medical procedure, have positioned themselves as “The Party of Science.” “Science” was a huge part of Joe Biden’s 2020 campaign. He was constantly complaining about how Trump didn’t “believe in science,” and promised that if he was President, he would “trust the science” and presumably take care of Covid-19 in short order.

It’s really as simple as that: if the government simply does what “science” says to do, then all will be well. Science has the answer to all of our problems–all we have to do is listen to the scientists and do as they say.

There are two main underlying assumptions behind all of this:

  1. That “science” is a homogenous entity, a hivemind. All scientists are in agreement on everything, and speak with one voice on all matters.
  2. Scientists are completely apolitical, unbiased, and uncorrupted, and have zero ulterior motives. They are unanimously pure and noble, and seek only to solve mankind’s problems, learn more about our world and lead mankind forward into the future with advancements and innovations. Thus, they are never to be questioned or doubted.

To be sure, it would be great if these things were actually true. It would be awesome if we could all agree on and accept the truth, and if the truth-seeking process were entirely uncorrupted, but this is little more than naive idealism.

The first problem is that it’s not as easy to find “the truth” as people believe. It’s not simply a matter of there being “the truth” and then those who stubbornly and ignorantly reject it due to either stupidity or malice. On the overwhelming majority of scientific issues, there is good-faith debate over what the truth actually is. Just because you aren’t part of the supposed “scientific consensus” on a particular issue doesn’t mean you’re a SCIENCE-DENYING MURDERER.

Take masks, for instance: we’re told that the Science Says We Should Wear Masks, because they keep us safe from Covid. It’s SCIENCE. It’s SETTLED. So shut up and put on your mask. Anyone who doesn’t wear a mask is anti-Science. The CDC says to wear a mask, and that’s the end of the discussion. Do as you’re told. Never question the government.

But in November of last year, the New York Times reported on a Danish study that found that there really wasn’t much benefit for healthy people from wearing a mask. “Masks prevent people from transmitting the coronavirus to others, scientists now agree. But a new trial failed to document protection from the virus among the wearers.”

To conduct the study, which ran from early April to early June, scientists at the University of Copenhagen recruited more than 6,000 participants who had tested negative for COVID-19 immediately prior to the experiment.

Half the participants were given surgical masks and instructed to wear them outside the home; the other half were instructed to not wear a mask outside the home.

Roughly 4,860 participants finished the experiment, the Times reports. The results were not encouraging.

“The researchers had hoped that masks would cut the infection rate by half among wearers. Instead, 42 people in the mask group, or 1.8 percent, got infected, compared with 53 in the unmasked group, or 2.1 percent. The difference was not statistically significant,” the Times reports.
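If you want to check that “not statistically significant” claim yourself, the arithmetic is simple enough to run at home. Here’s a minimal sketch of a two-proportion z-test in Python; the group sizes (roughly half of the ~4,860 finishers in each arm) are my own assumption for illustration, not exact figures reported by the Times, so treat the output as a ballpark check rather than a re-analysis of the study.

```python
from math import sqrt, erf

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # infection rate with both groups lumped together
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of the difference
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value from the normal distribution
    return z, p_value

# 42 infections in the mask group, 53 in the no-mask group.
# Group sizes of ~2,430 each are an assumption, not numbers reported by the Times.
z, p = two_proportion_ztest(42, 2430, 53, 2430)
print(f"z = {z:.2f}, p = {p:.2f}")  # roughly z = -1.1, p = 0.25 -> not significant at the usual 0.05 cutoff
```

A p-value around 0.25 means a difference that small could easily show up by chance alone, which is all “not statistically significant” means here.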

Is this Danish study not “science”? Does it not count? If that’s the case, then why?

LifeSite News was able to find a total of 47 studies that found that masks were ineffective at preventing Covid, and a further 32 studies that found negative health effects from wearing masks.

The bottom line is, one person can say “Masks work,” and point to scientific studies to back up their claim. But then someone else can say, “Masks don’t work,” and point to other scientific studies to back up their claim.

And take Ivermectin as another example. All the “authorities” in this country, from the CDC to Dawktuh Fauci to the mainstream media and even the World Health Organization are in agreement: Ivermectin does NOT work against Covid, and under no circumstances should you even consider using it as either a treatment against Covid or as a preventative.

But just because they all vehemently oppose Ivermectin does not mean all scientists do. The FLCCC compiled a summary of all the evidence supporting Ivermectin’s use against Covid (hopefully the link works; if not, just go to FLCCC’s home page and you should be able to find the study. It’s called “It’s the Totality of Evidence That Counts!”). There are 31 observational controlled trials, 27 randomized controlled trials, and tons more studies based on clinical observations and experience, attesting to Ivermectin’s effectiveness against Covid. These studies come from all over the world: the Dominican Republic, Argentina, Peru, India, Mexico, America and more.

Do they all not “count” as science?

This whole idea that “all scientists agree that x is true,” and that the scientific community is in lockstep and speaks with one unanimous voice on all things–it’s ridiculous.

In order to believe that, you’d have to ignore and dismiss all scientific studies that don’t support your personal position. But wouldn’t that then make you a Science-Denier?

The great 20th century science-fiction writer Philip K. Dick once said that, “Reality is that which, when you stop believing in it, doesn’t go away.”

Just because you, or the media, or a big pharmaceutical company, or even the government, refuse to acknowledge the results of some scientific study does not mean that study never happened, or isn’t accurate.

The Peer Review Problem & The Replication Crisis

The biggest problem with science is that the general public has been seriously misled on how science actually works.

For example, when people hear that a scientific study has been “peer-reviewed,” they believe that it automatically means the study is “correct” and legitimate. Someone put out a study, other scientists reviewed it and gave it the seal of approval. Peer-reviewed studies are legit, non-peer reviewed studies are illegitimate.

But when you really dig into the peer review process, you’ll find that it doesn’t really work that way. And people who actually work in the field of science have known this for years. Richard Smith, former editor of the British Medical Journal (BMJ), wrote a paper in 2006 detailing the myriad problems with the peer review process:

Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have.

Probably the systems of every journal and every grant giving body are different in at least some detail; and some systems are very different. There may even be some journals using the following classic system. The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin, because the level of agreement between reviewers on whether a paper should be published is little better than you’d expect by chance.1

That is why Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked `publish’ and `reject’. He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom. When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal comprised only of papers that had failed peer review and see if anybody noticed. I wrote back `How do you know I haven’t already done it?’
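To make Smith’s “coin toss” comparison concrete: the standard way to measure how much two reviewers agree beyond what chance alone would produce is Cohen’s kappa, where a value near zero means the agreement is essentially what random verdicts would give you. Here’s a minimal sketch with made-up numbers (not data from Smith’s paper) showing what that looks like.

```python
def cohens_kappa(both_accept, a_only, b_only, both_reject):
    """Cohen's kappa: agreement between two reviewers beyond chance (0 = chance level, 1 = perfect)."""
    n = both_accept + a_only + b_only + both_reject
    observed = (both_accept + both_reject) / n      # how often the two reviewers actually agree
    p_a = (both_accept + a_only) / n                # reviewer A's overall "accept" rate
    p_b = (both_accept + b_only) / n                # reviewer B's overall "accept" rate
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)    # agreement you'd expect from pure chance
    return (observed - expected) / (1 - expected)

# Hypothetical agreement table for 100 manuscripts (illustrative numbers only):
# 31 accepted by both reviewers, 24 accepted by A only, 24 by B only, 21 rejected by both.
print(round(cohens_kappa(31, 24, 24, 21), 2))  # ~0.03, i.e. barely better than flipping a coin
```

If real reviewer agreement sits anywhere near that level, as Smith suggests, then the “send it to two friends” system really is doing little more than coin-flipping on the journal’s behalf.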

This type of talk from a respected scientist is quite jarring to the average person. We’re all under the impression that the scientific field as a whole is austere, rigorous and holds itself to the highest of standards. It’s a little unsettling to learn that scientists are just like the rest of us: basically making it up as they go and constantly bitching about the fact that their entire profession seems broken, messy and jury-rigged.

From the outside looking in, just about every prestigious profession out there seems like it’s populated by people who are a cut above the average person–robots, almost, who know exactly what they’re doing, never make mistakes, and are perfectly efficient, ethical and effective at their jobs.

But you’d be shocked to learn some of the stuff that goes on in, say, the airline industry, according to people who work in it. During a flight, pilots are constantly on their phones while letting autopilot do its thing. Sometimes they even sleep.

And take the Secret Service, for example: we look at them as the most elite bodyguards and law enforcement agents on the planet. Their perfect suits, earpieces and dark sunglasses make them look almost robotic, like the Agents from The Matrix. These are the guys who protect the leader of the free world–they should be the best of the best, with applicants rigorously screened and weeded out so that only the most elite are tasked with protecting the President.

Then, in 2012, news broke that on the eve of a presidential trip to Cartagena, Colombia, at least 13 Secret Service agents were busted with prostitutes after a wild night of partying. One agent got into a dispute over paying one of the prostitutes he’d been with, which caused a disturbance at the hotel, and the cat was out of the bag. One of the prostitutes even said she easily could’ve gone through the Secret Service agent’s things while he was asleep, which of course represents the risk of a massive security breach.

At any rate, the incident caused the Secret Service’s public reputation to take a hit. We realized, “Wow, they’re just normal people like us.” They roll into town with the President, head to the bar and pick up chicks by dropping the line, “What do I do for a living? Oh, I’m with the President. Yeah. Secret Service.”

The point of this little detour is to illustrate that so many of these prestigious industries and professions are in reality just as flawed and prone to human error as any other field. Science is no exception. After all, scientists are human beings, too.

More from Smith’s paper on peer review:

But does peer review `work’ at all? A systematic review of all the available evidence on peer review concluded that `the practice of peer review is based on faith in its effects, rather than on facts’.2 But the answer to the question on whether peer review works depends on the question `What is peer review for?’.

One answer is that it is a method to select the best grant applications for funding and the best papers to publish in a journal. It is hard to test this aim because there is no agreed definition of what constitutes a good paper or a good research proposal. Plus what is peer review to be tested against? Chance? Or a much simpler process? Stephen Lock when editor of the BMJ conducted a study in which he alone decided which of a consecutive series of papers submitted to the journal he would publish. He then let the papers go through the usual process. There was little difference between the papers he chose and those selected after the full process of peer review.1 This small study suggests that perhaps you do not need an elaborate process. Maybe a lone editor, thoroughly familiar with what the journal wants and knowledgeable about research methods, would be enough. But it would be a bold journal that stepped aside from the sacred path of peer review.

Peer review might also be useful for detecting errors or fraud. At the BMJ we did several studies where we inserted major errors into papers that we then sent to many reviewers.3,4 Nobody ever spotted all of the errors. Some reviewers did not spot any, and most reviewers spotted only about a quarter. Peer review sometimes picks up fraud by chance, but generally it is not a reliable method for detecting fraud because it works on trust.

He goes on to list all the problems with peer review: it’s slow and expensive, it’s inconsistent, there’s bias (whether personal or ideological), and it’s easily abused, with a high potential for fraud (i.e. you send a paper to another scientist for peer review, and he just plagiarizes your ideas and publishes them under his own name). Plus, scientists can be a catty bunch: it’s not uncommon for peer reviewers to, as Smith puts it, “produce an unjustly harsh review to block or at least slow down the publication of the ideas of a competitor.” He lists a few proposed solutions to make the process more reliable, but notes that many of them have already been tried with little success.

Smith concludes thusly:

So peer review is a flawed process, full of easily identified defects with little evidence that it works. Nevertheless, it is likely to remain central to science and journals because there is no obvious alternative, and scientists and editors have a continuing belief in peer review. How odd that science should be rooted in belief.

Odd indeed.

It turns out that science isn’t always based on an objective analysis of the facts. In fact, there’s a ton of subjectivity and faith-based decision making in the field of science. The best scientists and the best papers don’t always get the recognition they deserve.

Yet the general public has an abiding faith in the peer review process. If a study has been peer-reviewed, then we assume it must be a flawless study. If it hasn’t been peer-reviewed, or it has been peer-reviewed but was rejected for publication, we assume the study is junk. In reality, it’s far more complicated than that.

In addition to the many problems with peer review, there’s also a “replication crisis” across all scientific fields right now, which is an even greater problem than the peer review problem:

A 2016 poll of 1,500 scientists conducted by Nature reported that 70% of them had failed to reproduce at least one other scientist’s experiment (including 87% of chemists, 77% of biologists, 69% of physicists and engineers, 67% of medical researchers, 64% of earth and environmental scientists, and 62% of all others), while 50% had failed to reproduce one of their own experiments, and less than 20% had ever been contacted by another researcher unable to reproduce their work. Only a minority had ever attempted to publish a replication, and while 24% had been able to publish a successful replication, only 13% had published a failed replication, and several respondents that had published failed replications noted that editors and reviewers demanded that they play down comparisons with the original studies. In 2009, 2% of scientists admitted to falsifying studies at least once and 14% admitted to personally knowing someone who did. Such misconduct was, according to one study, reported more frequently by medical researchers than by others.

If only 2% of scientists admitted to falsifying studies, yet 14% said they personally knew a scientist who had falsified one, then the real figure is almost certainly a lot higher than 2%. And even that 14% figure probably understates the problem.

Even so: research falsification on anything like that scale is a massive problem all on its own.

The fact that 70% of scientists surveyed have failed to reproduce another researcher’s experiment, however, means we’re largely taking scientists at their word when they publish a study. It’s like scientists are basically saying, “I made this really cool discovery during an experiment, but for some reason I can’t make it happen again. But trust me, it was awesome.”

That doesn’t sound very scientific, does it? If you can’t replicate a study someone else conducted and arrive at the same conclusion, that casts some serious doubts on the result of the original study.

But when failed replications are that widespread, it casts doubt on the scientific field as a whole.

And this is to say nothing of the involvement of money. Oftentimes, there are millions, even billions of dollars, at stake when new studies are published.

Funding and financial incentives

Pfizer and BioNTech released a study in July that said Covid vaccine booster shots were needed 6-12 months after the second shot in order to get the best protection from Covid. But Pfizer is the company making the shots: they have a financial incentive to promote boosters. Would you really expect Pfizer’s own study to conclude anything else?

Pfizer wants people to get booster shots. Joe Biden wants people to get booster shots. But then, not too long ago, a government panel of scientists and doctors overwhelmingly voted to reject booster shots for the general adult population, citing a lack of data and research on their effects. They approved the booster shots only for those over the age of 65.

However, today, in what the New York Times described as “a highly unusual decision,” Rochelle Walensky, the director of the CDC, overruled the advisory panel on booster shots and issued an official recommendation for booster shots for all adults.

I thought we were “trusting the science”? I guess not always.

One can only wonder why the CDC made this move, which is exactly what Biden wanted. Does Walensky know more about the science than the advisory board? Does Joe Biden have a better understanding of the science behind booster shots than the advisory board?

Or did Big Pharma simply win out over the pencil-necks?

Whatever the reason, the government’s official position is now Booster Shots For Everyone. It is quite obviously not a policy based on “trusting the science,” and in fact it goes directly against “the science,” as represented by the panel that initially rejected Biden’s booster shot policy.

But something tells me the Biden administration doesn’t really care. Which raises the question: have they ever really cared about “the science,” as they claim they do?

It doesn’t take a genius to figure out that Big Pharma, not “science,” is calling the shots here.

“But!” you might respond, “Big Pharma might be following the science!”

Sure, you can believe that. But should you?

The entire pharmaceutical industry is a giant conflict of interest: pharmaceutical companies exist to make money off of people buying their drugs. They’re in the treatment business, not the cure business. They want people to keep buying their drugs. They don’t want you cured.

The two most profitable categories of pharmaceutical drugs? Cancer and diabetes treatments. Vaccines are 4th on the list, by the way. The pharmaceutical industry as a whole makes $124 billion a year on cancer drugs, and by 2024 that’s projected to be almost $240 billion. It’s not much of a stretch to conclude they don’t want to find a cure for cancer. Same with diabetes. They make almost $50 billion a year from diabetes medication. They don’t want a cure for diabetes; they want people buying insulin for life.
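As a quick back-of-envelope check on how fast that cancer-drug market is supposedly growing: the article doesn’t say which year the $124 billion figure is from, so the time spans below are assumptions I’m supplying for illustration, not facts from the source.

```python
# Implied compound annual growth rate (CAGR) going from $124B to $240B in cancer-drug revenue,
# under a few assumed time spans (the base year isn't given above, so these are hypothetical).
start, end = 124e9, 240e9
for years in (4, 5, 6, 7):
    cagr = (end / start) ** (1 / years) - 1
    print(f"{years} years -> {cagr:.1%} per year")
# 4 years -> ~18% per year; 7 years -> ~10% per year
```

Either way, that’s double-digit annual growth, which is exactly the kind of revenue stream a company works hard to protect.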

Big Pharma makes its money off of sick people, not healthy people. It’s not entirely accurate to say Big Pharma doesn’t want to cure diseases–more likely, they only want to cure diseases if there’s money to be made in doing so. If there’s no money to be made in the cure, then no cure.

So it’s not hard to see why Pfizer wants booster shots every 6-12 months: money. They already made enough money off the initial two shots of the vaccine as it is. But you know what’s even more lucrative? Indefinite periodic booster shots for everyone. Just like the diabetes model.

Another massive example of corporate money corrupting science is Big Tobacco. In 2005, Lisa A. Bero published a paper in the Public Health Chronicles entitled “Tobacco Industry Manipulation of Research,” and it was all about how Big Tobacco spent decades suppressing any studies that showed smoking was harmful, and even commissioned studies of their own to show that smoking wasn’t harmful:

The tobacco industry has devoted enormous resources to attacking and refuting individual scientific studies. In addition, the industry has attempted to manipulate scientific methods and regulatory procedures to its benefit. The tobacco industry has played a role in influencing the debate around “sound science,” standards for risk assessment, and international standards for tobacco and tobacco products. In the early 1990s, the tobacco industry launched a public relations campaign about “junk science” and “good epidemiological practices” and used this rhetoric to criticize government reports, particularly risk assessments of environmental tobacco smoke. The industry also developed a campaign to criticize the technique of risk assessment of low doses of a variety of toxins, working with the chemical, petroleum, plastics, and chlorine industries.

The tobacco industry has explicitly stated its goal of generating controversy about the health risks of tobacco. In 1969, Brown and Williamson executives prepared a document for their employees to aid them in responding to new research about the adverse effects of tobacco, which stated: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the mind of the general public. It is also the means of establishing a controversy. . . . If we are successful in establishing a controversy at the public health level, then there is an opportunity to put across the real facts about smoking and health.” Eleven years later, the tobacco industry expressed the same goal regarding evidence on the risks of secondhand smoke. A report prepared by the Roper Organization for the Tobacco Institute in 1978 noted that the industry’s best strategy for countering public concern about passive smoking was to fund and disseminate scientific research that countered research produced by other sources: “The strategic and long-run antidote to the passive smoking issue is, as we see it, developing and widely publicizing clear-cut, credible, medical evidence that passive smoking is not harmful to the non-smoker’s health.”

[…]

The internal tobacco industry documents include descriptions of research that was funded directly by law firms. For example, the law firms of Covington and Burling, and Jacob and Medinger, both of which represent a number of tobacco company clients, funded research on tobacco in the late 1970’s through the early 1990’s. Lawyers selected which projects would be funded, including reviews of the scientific literature on topics ranging from addiction to lung retention of particulate matter. These law firms also funded research on potential confounding factors for the adverse health effects associated with smoking. For example, projects were funded that examined genetic factors associated with lung disease or the influence of stress and low-protein diets on health. These deflected attention from tobacco as a health hazard and protected tobacco companies from litigation.

[…]

In 1992, the U.S. Environmental Protection Agency (EPA) published a risk assessment of environmental tobacco smoke, which concluded that passive smoking is associated with lung cancer in adults and respiratory disease in children. The development of the risk assessment was considerably delayed by the tobacco industry’s criticisms of the draft report. Sixty-four percent (69/107) of submissions received by the EPA during the public commentary period claimed that the conclusions of the draft were invalid; of these, 71% (49/69) were submitted by tobacco industry–affiliated individuals.

It’s a fascinating article that goes in-depth on the lengths Big Tobacco went to in order to promote cigarette smoking. In 1988, Philip Morris, RJ Reynolds and Lorillard Corp founded an ostensibly independent research organization called “The Center for Indoor Air Research” (CIAR), nominally devoted to studying indoor air quality (in other words, second-hand smoke). What CIAR was actually founded to do was research other potential contaminants of indoor air and shift the conversation away from second-hand smoke. For example, “why are we worrying about second-hand smoke when asbestos is a far greater danger?” or something like that.

In the article, Bero clearly lays out Big Tobacco’s “playbook” for muddying the waters.

You think this strategy hasn’t been used by anyone else? A 2003 study by Bero and Joel Lexchin found that pharmaceutical industry-funded research was similarly compromised:

Clinical research sponsored by the pharmaceutical industry affects how doctors practise medicine. An increasing number of clinical trials at all stages in a product’s life cycle are funded by the pharmaceutical industry, probably reflecting the fact that the pharmaceutical industry now spends more on medical research than do the National Institutes of Health in the United States. Most pharmacoeconomic studies are either done in-house by the drug companies or externally by consultants who are paid for by the company.

Results that are unfavourable to the sponsor—that is, trials that find a drug is less clinically effective or cost effective or less safe than other drugs used to treat the same condition—can pose considerable financial risks to companies. Pressure to show that the drug causes a favourable outcome may result in biases in design, outcome, and reporting of industry sponsored research.

A recent systematic review of the impact of financial conflicts on biomedical research found that studies financed by industry, although as rigorous as other studies, always found outcomes favourable to the sponsoring company.

By now it should be abundantly clear that it is not as simple as whether or not someone “trusts science.” At this point, it’s hard to even tell what “science” means anymore. This is why on this blog I distinguish between Science™ (fake science; politicized science; corrupted science; corporate science; made-for-TV science, etc.) and science (real science).

There’s actual science, and then there’s a whole industry of antiscience, which may even be bigger than actual science.

How can we even trust scientific studies anymore when we know there are so many financial incentives for big corporations to corrupt the scientific field?

Settled science?

Everyone remembers learning about Galileo in school. The ancient debates over whether the earth was flat or a globe, the debates between geocentrism and heliocentrism–these were thought to be matters of “settled science,” until someone came along and challenged the prevailing wisdom.

From antiquity until the late 19th century, the most common medical practice performed by surgeons was bloodletting, in which patients would be deliberately bled based on the belief that illness was caused by an imbalance among the four basic “humors” of human health: blood, phlegm, black bile and yellow bile. Physicians would prescribe bloodletting as a way to bring the body’s four “humors” back into balance with one another.

Have you ever gone to the doctor and been recommended 16 ounces of bloodletting? Of course not. The practice was discredited and abandoned long ago. But for thousands of years, it was standard operating procedure in the medical world. It was thought to be “settled science.”

Only once scientists began challenging the accepted conventional wisdom did we learn that bloodletting was not only quackery, but dangerous and likely to cause more harm than good to patients.

Going back to the Big Tobacco example: they literally used doctors to sell cigarettes. I’m sure you’ve seen these old ads:

“Say, doc: what’s the healthiest cigarette brand?”

Imagine being alive back then and being skeptical of cigarettes, and then someone calls you a Science-Denier because doctors recommend Lucky Strike cigarettes. “How dare you question the EXPERTS?”

Of course, Big Tobacco later had to update its marketing strategy once the public started getting wise to the health risks of smoking, which is why they got into the business of scientific studies starting around the 1960s, as discussed above.

Ultimately, by the mid-late 1990s, it got to the point where Big Tobacco could no longer suppress all the evidence that its products were both dangerous and addictive. The tide of public opinion had turned against smoking for good. But it took a long freaking time for the truth to win out.

And it all started with some real scientists saying, “Wait a minute. I’ve got a hunch these cigarettes aren’t actually healthy.”

This is how science is supposed to work. You come up with a hypothesis, you test it via an experiment, and you publish your results. It often starts with questioning or challenging the status quo, or the conventional wisdom. “What if we’re all wrong about this?” “What if we tried this?” etc. This is how major breakthroughs often happen.

But the moment you as a scientist decide to go against the “conventional wisdom,” you often find yourself on the opposite side of some very powerful and well-funded special interests.


In January 1961, after 8 years in the White House, President Eisenhower delivered his “Farewell Address.” More so than perhaps any other presidential farewell address, Eisenhower’s words have stuck with us over the decades. Today his speech is mostly remembered for his prophetic warning about the “military-industrial complex,” and justifiably so, but immediately after that part of the speech, he issued an equally important warning that we would do well to remember today. It was about what he called the “scientific-technological elite”:

In this revolution [the rise of the military-industrial complex], research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government.

Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.

The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.

It is the task of statesmanship to mold, to balance, and to integrate these and other forces, new and old, within the principles of our democratic system, ever aiming toward the supreme goals of our free society.

He was talking about the potential for the corruption of “science” by government money.

Money flips the whole scientific method on its head. Instead of conducting experiments in search of truth, scientists are under pressure to “arrive” at predetermined conclusions. The “scientific” stamp of approval is a mere tool for both corporations and the government to garner public support for whatever product they’re selling, or for a policy they want enacted into law.

When I was fresh out of college, I worked as an intern and then a low-level staffer on Capitol Hill in Washington. One of the first things I learned on “the Hill” was that all the voting that goes on in Congress is basically a sham. There is no real uncertainty involved in that voting. One of the more seasoned staffers in our office told me that the Speaker of the House doesn’t bring a bill to a vote unless he or she knows it’s going to pass. All the actual “voting” is done well before the bill is actually presented to the House chamber.

If a bill is ever rejected on the House floor, that’s by design, too: they either want to force the other side’s hand by making them go on record as being for or against something (for example, Planned Parenthood funding, gun control, etc.), or they want to embarrass the other side. The vast majority of bills are never even brought to a vote on the floor, much less actually become law.

This idea of predetermined conclusions applies to the scientific field, too. Would Big Pharma ever fund a study that’s going to undermine the credibility of their drugs? Of course not. Big Pharma spends millions of dollars to develop these drugs; they’re not going to let a contradictory study derail them. Most of the time, the studies they commission have predetermined conclusions. A lot of these scientists are merely arriving at the conclusion they were paid to arrive at.

In other words, Big Pharma doesn’t put out studies that go against Big Pharma. It’s a sham.

And the left actually fully understands this concept of the corporate perversion of science, at least when it comes to Big Oil. For years the left would decry Big Oil-funded “studies” that either tried to absolve the petroleum industry of its contributions to global warming, or cast doubt on the idea of global warming altogether. There’s a famous book that came out in 2010, written by Naomi Oreskes and Erik Conway, called “Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming.” It was even made into a movie in 2014.

Basically, it wasn’t just Big Tobacco trying to manipulate and corrupt the scientific field in order to protect its financial interests. Tons of other industries have done the same, from Big Oil to Big Ag to even the meat and dairy industry. Vox recently published an article talking about how “Big Meat” (lol) companies spend millions to “crush” good climate policy:

You probably already know that the fossil fuel industry has spent many millions of dollars trying to sow doubt about climate change and the industry’s role in it.

But did you know that big meat and dairy companies do the same thing?

According to a new study out of NYU, these companies have spent millions of dollars lobbying against climate policies and funding dubious research that tries to blur the links between animal agriculture and our climate emergency. The biggest link is that about 14 percent of global greenhouse gas emissions come from meat and dairy.

This is yet another example of honest scientists going up against extremely powerful corporations and industries that have way more money and political clout than they do, plus a lot more at stake financially.

However, in recent years, it seems like we don’t see this classic David and Goliath battle between science and corporate America as much.

Nowadays, it feels like corporate America has taken over much of the scientific field, as well as the government regulatory bodies that are supposed to be conducting oversight.

Instead of science driving and informing government policy, it now feels like the combined forces of Big Business and Big Government are strong-arming the scientific field. Everything nowadays is justified by “It’s what Science says to do.” Vaccines, masks, lockdowns–virtually everything Covid-related. And then of course you have “climate change” and environmental policy, which is entirely justified with “Because SCIENCE says so.”

For most of human history, science and government have been basically in opposition to one another. Now, all of a sudden, they’re in agreement on everything. The government now even claims to basically be subservient to “science.” That should raise some red flags.

When the government starts basically fetishizing “science,” that should tell you that something has gone horribly wrong in the scientific field.

Just like the military-industrial complex, the “scientific-technological elite” that Eisenhower warned of all those years ago is here, and in control.

It’s not about whether or not you “trust the science.” It’s about whether or not you trust the government officials and corporate interests that have taken over the scientific field.
