When you spend a decent chunk of two years working on a novel about scientific purity, you tend to develop fairly firm ideas on the subject. Recently on Portsmouth Point I read an article entitled Why Science is Being Too Arrogant, describing how knowledge born of scientific thought is treated as ‘absolute truth’, set in the stone of mathematics, even though old ideas are so often proved wrong. Public perception of ‘science’ does indeed cast it as some grand rule book, an accumulated collection of breakthroughs, theories and inventions. But those are really just its offspring. Science is a method of thinking: you take the least biased observations of the world you can manage, and from these draw the most logical and likely conclusion. If anyone is to be accused of arrogance, it is scientists, people. In the same way that some twist religious thought into rallying others to war, or misuse the ideals of communism or of economic schools for personal gain, the method of thinking itself can rarely be held accountable for failures in its application. A person can cling to an unlikely theory, or profitably sell a pseudo-invention; that is where the arrogance, and the misuse, in science lies.
The only absolute truth in science, for it to remain pure, is that there are no absolute truths. That may seem a bold statement, given that everything from light bulbs to space travel, A level textbooks to cancer treatment, rests upon ideas which have become mathematics, then theories and, finally, proven ‘facts’. We humans have explanations for the hazy feedback of guitar music, the migration patterns of African swallows, for sunburn and serial killers, the number of legs on a spider and even the passing of time. But ‘facts’ are imagined by humans, by people, and we in ourselves bring an irremovable element of bias. Take the last item on that list, the seemingly obvious idea that time ticks like a metronome. Along comes Albert Einstein, who considers light in a manner seemingly disproved in his day, and eventually revolutionises our old ideas with evidence that the passing of time is in fact relative to each observer, not clockwork. Einstein’s work meant parts of our ‘accumulated knowledge’ had to be backtracked and rewritten, but this is the beauty of scientific thought over all others: it is self-regulating, and admits its mistakes and false theories. It allows for someone in, say, twenty years to come along and prove Einstein wrong in turn, again rewriting the textbooks. What scientific thought does is let anybody in the world make an unbiased observation and develop their own theory, such that potentially thousands exist for one subject, which strips away as much individual bias as possible. The difference between competing theories on the same subject, and indeed between them and contradicting theories formed from religious or other thought, is that different people are convinced differently by the evidence for each. There is nothing to favour saying the universe was created by a deity in one week over saying it was produced in an infinitesimal second called the Big Bang, except the differing evidence for the two theories.
In science, ideas are merely more or less likely than one another, and the more likely is accepted (hence Lamarckian evolution died out in favour of Darwinian). An idea tends only to become dangerous if it stays static and stagnates, refusing to accept any adjustment.
It is perfectly possible that all human knowledge to date is false. We are as yet unable to marry the physical laws of the quantum and macro worlds. We recently found potential evidence for a particle’s current state being affected by future events, but have no way to grasp or comfortably explain this. And even though our nuclear power plants clearly work, and gravity never decides to act in the opposite direction to what we expect, and as I sit here writing this article I know humanity’s knowledge of electronics and manufacturing makes my laptop’s circuits work, makes its screen emit light, and keeps its metallic casing from rusting, what if human understanding of the universe is so basic that, at our level of understanding, we cannot even realise that this knowledge is wrong? We try to apply constraints to a perhaps unrestricted universe, because that is natural human bias. Something we can perhaps never remove.
But that’s okay. Because, if it’s a form of bias, we’ll try to remove it anyway.
The media loves to treat each new study on bacon curing cancer as fact. The general public often likes to believe in God’s rulebook of the universe, with scientists hunting out the next page each day. And this is because we are taught things as ‘facts’. A fact is the most probable explanation from a set of theories, based on the general consensus of convinced experts. This in itself is not taught widely enough, and is perhaps why there are those who accuse science itself of arrogance. Scientific knowledge, adjusted by people who set aside as much as they can of their own bias towards self-gain or unfounded belief in a theory, is a creative, constantly evolving, intrepid stumble into the unknown. The accumulated knowledge of humanity had to start at base zero, with nothing, and from the moment early humans first brushed stick and stone to glimpse fire, or pondered the endless shining beacons filling the night sky, we have had to lay our own foundations to build that knowledge upon. Of course it is a messy process, contradicting our base ideas and digging back to rebuild them when we find they are most likely wrong; but if we never stepped forwards, never guessed because we knew we might be wrong, we would be permanently stuck, stagnating at base zero. A newspaper, a textbook or a neighbour might describe the rotation of the Earth, or a breakthrough in stem cell research, as an absolute truth, because it has to; we humans crave a certain answer. But a scientist, speaking as one myself, lives for the mystery.