Categories
General

Probability and evolution

There are a range of arguments against evolution that rely on the notion that it is inherently so improbable as to be impossible. They date right back to William Paley, who published his ‘watchmaker argument’ in 1802, predating Darwin’s ‘Origin of Species’ by more than 50 years.

Essentially, this is the notion that the appearance of design implies the existence of a Designer. It’s also linked to the analogy of ‘a tornado in a junkyard creating a functional Boeing 747 aircraft’, and to the ‘infinite number of typing monkeys creating the complete works of Shakespeare if given an infinite amount of time’.

I’ve tried to avoid linking out to things elsewhere in this series and to make them self-contained, but this paper from 1971 in American Biology Teacher has a perfect example of the kind of thing I’m talking about, in the section entitled “How Many Genes Could Exist?”

Using the approach typical of these kinds of arguments, that piece comes up with a probability of 1 in 10^600 for the random evolution of all possible genes. That’s a genuinely outlandish number: to give you a sense of scale, the number of atoms in the known universe is on the order of 10^80. If that number were correct, then indeed it would seem wildly improbable that life could evolve.

This short post is just about explaining why it is not.

First, that assumes entirely random processes without a step-wise process of natural selection. It assumes that it is necessary (to mangle a metaphor) to arrive at the Boeing 747 without passing via the Wright Flyer and successive improvements.

Second, it assumes that a specific outcome is the goal, whereas biology has a very large range of possible ways to solve the same problems. There are different kinds of wings and eyes, different types of legs, different modes of fish locomotion and… Again, to play with metaphors, it’s not inevitable that the monkeys will arrive at the complete works of Shakespeare: they might end up with Stephen King or Iain M Banks or S. T. Coleridge instead.

These two objections may not sound like much, but together they essentially mean that the statistical and probability claims against the evolution of life do not hold water. Statistical models are only valid when they accurately model the phenomenon of interest… and these simply don’t.
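
The power of cumulative selection over purely random search is easy to demonstrate with a toy program in the spirit of Richard Dawkins’ famous ‘weasel’ experiment. This is a minimal sketch, not a model of real biology: the target string, mutation rate and population size are all illustrative choices.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(parent, rate=0.02):
    """Copy a string, giving each character a small chance of mutating."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def score(candidate):
    """Count the characters that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(offspring=200, max_generations=10_000, seed=0):
    """Repeatedly mutate and select the fittest candidate; return generations used."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    for generation in range(1, max_generations + 1):
        # Selection: keep the best of the parent and its mutated offspring.
        parent = max([parent] + [mutate(parent) for _ in range(offspring)],
                     key=score)
        if parent == TARGET:
            return generation
    return max_generations

# Cumulative selection finds the 28-character target in a few hundred
# generations; a purely random search would expect on the order of
# 27**28 (roughly 10**40) attempts.
print(evolve())
```

The point is not that evolution has a target string in mind – it doesn’t, as the second objection above notes – only that retaining small improvements step by step utterly transforms the probabilities.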


For ease of navigation I will include links to each of the other posts in this series at the bottom of each post.

Why I think it’s important to understand evolution
Cosmogenesis, abiogenesis and evolution
Evolution and entropy
Facts, Theories and Laws
Radiocarbon dating
Radiometric dating and deep time
Four Forces of the Universe
Species and ‘baramin’, macro- and micro-evolution
Mitochondrial Eve and Y-chromosomal Adam
Transitional fossils
Complexity – irreducible and otherwise


Four Forces of the Universe

(This piece is a bit of a digression from the recent focus on evolution and its discontents, and moves back closer to my own comfort zone in physics. It is also at a high school physics level – the water gets much deeper than I’ve presented here very quickly, but this is a useful place to start.)

In discussions of the universe at the larger scale of cosmology and astrophysics – the formation and motion of stars and galaxies – I’ve recently begun encountering people who claim that ‘it’s all magnetism’, that every force in the end can be reduced to magnetism.

No doubt these voices have been around forever, and it’s just that I’ve started hearing them, but it’s a fascinating phenomenon. These people tend to be skeptics about General Relativity, and sometimes even about whether gravity exists at all. Certainly dark matter and dark energy are rejected passionately – it’s all magnetism!

I’m not sure what motivates it. Perhaps magnetism is the force that seems realest and most tangible. Most of us have played with magnets, and felt that real force, of both attraction and repulsion. It feels quite like the force of gravity holding us down – at least the attractive element does. And imagining a repulsive gravitational force is exciting! No heavy rockets needed to get to space if you’ve got anti-gravity!

As a bit of an antidote, I thought I might talk for a moment about the four fundamental forces that act in our universe. I might include a couple of equations for comparative purposes, but it should be quite easy to understand this post even if you ignore them.

The four forces are gravity, electromagnetism, the strong nuclear force and the weak nuclear force. (Newer physics links electromagnetism and the weak nuclear force together as the ‘electroweak interaction’, but that’s past where we want to go for the moment.)

The nuclear forces govern what happens inside the nucleus: the strong force is what holds the nucleus together in spite of the electrostatic repulsive forces between the protons, and the weak force governs radioactive decay. We won’t say too much more about them, except to note that the balance between the strong nuclear force and the electromagnetic force is what allows stable nuclei – and hence us – to form. Slightly different values of either would not allow matter to form. This has implications for whether divine fiddling with the speed of light or the rate of radioactive decay could happen, but that’s another story for another day.

Electromagnetism can be thought about as two things, although they are tightly tied together – electricity and magnetism.

We’ll take magnetism first. It’s fascinating, because it acts only on a moving charge. If a charged particle remains perfectly still in a magnetic field, no force acts on it. The formula is F = qvB sin(θ), where F is the force (newtons), q is the charge (coulombs), v is the velocity (metres per second), B is the magnetic field strength (teslas) and θ is the angle between the velocity and the field. As you can see, if v is 0, the force will be 0. The calculation is actually a ‘vector cross product’, and that tells us the direction the force will act in, but that’s also probably further than we need to go.
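
The ‘no force on a stationary charge’ point is easy to check numerically. A minimal sketch (the particle values are just illustrative):

```python
import math

def magnetic_force(q, v, B, theta_degrees):
    """Magnitude of the magnetic force on a moving charge: F = qvB sin(theta)."""
    return q * v * B * math.sin(math.radians(theta_degrees))

# A proton (charge ~1.6e-19 C) moving at 1e6 m/s at right angles to a 0.5 T field:
print(magnetic_force(1.6e-19, 1e6, 0.5, 90))  # about 8e-14 N

# The same proton at rest (v = 0) feels no magnetic force at all:
print(magnetic_force(1.6e-19, 0, 0.5, 90))    # 0.0
```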

OK, we have enough already to reject the idea that gravity can be reduced to magnetism. We saw that if v is 0, F is 0, but also, if q – the net electrical charge on an object – is 0, the force will be 0. I don’t have a net electric charge on my body, neither do you, and neither does Earth. That means that the gravitational force holding me down on my chair as I type this is not a magnetic force.

(It’s possible the objection will be raised that the protons and electrons in my body have charge and are moving relative to those in the Earth, but again, there is not a net overall charge on an atom, and the directions of motion would all cancel one another out. And, of course, we now tend not to think of the motion of electrons in terms of the ‘orbit’ metaphor anyway…)

The other manifestation of the electromagnetic force is electrostatic attraction and repulsion, and this gets interesting. The formula is F = kq1q2/r^2, where F is the force, k is a constant (about 9 x 10^9 in SI units), q1 and q2 are the charges on the two objects and r is the distance between them.

The reason I said it gets interesting is that this has a direct mirror in the equation for gravitational force, F = Gm1m2/r^2, where G is a different constant (about 6.67 x 10^-11 in SI units) and m1 and m2 are the masses of the two objects.

While the similarities are striking, there are also two important differences:

  1. There are both attractive and repulsive electrostatic forces. Professor Paula Abdul had it right: opposites attract! And same charges repel. On the other hand, there is only an attractive force of gravity, not a repulsive one. Objects with mass always pull one another closer, never push one another away.
  2. The relative strength of the forces. I haven’t included the units, but in terms of the normal SI unit conventions the constant for the gravitational force is about 1/10^20, or 0.00000000000000000001, times as large as that for the electrostatic force. Inducing a very small charge in a party balloon, for example, will allow it to stick to a wall in defiance of gravity.
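
That difference in strength can be made concrete by comparing the two forces between a pair of protons, which have both charge and mass. A rough sketch with rounded constants:

```python
k = 8.99e9    # Coulomb constant, N·m^2/C^2
G = 6.67e-11  # universal gravitational constant, N·m^2/kg^2
q = 1.60e-19  # proton charge, C
m = 1.67e-27  # proton mass, kg
r = 1.0       # separation, m

F_electric = k * q * q / r**2   # electrostatic repulsion
F_gravity = G * m * m / r**2    # gravitational attraction

# For protons the electrostatic force wins by an even larger margin than the
# ratio of the two constants, because the charge-to-mass ratio matters too:
# the result is on the order of 10^36.
print(F_electric / F_gravity)
```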

So there you have it: a brief rundown of the four fundamental forces, and – in simple terms at least – some discussion of why we still need four, and can’t boil them all down to one.


For ease of navigation I will include links to each of the other posts in this series at the bottom of each post.

Why I think it’s important to understand evolution
Cosmogenesis, abiogenesis and evolution
Evolution and entropy
Facts, Theories and Laws
Radiocarbon dating
Radiometric dating and deep time
Probability and evolution
Species and ‘baramin’, macro- and micro-evolution
Mitochondrial Eve and Y-chromosomal Adam
Transitional fossils
Complexity – irreducible and otherwise


Radiometric dating and deep time

If you haven’t yet read the Radiocarbon dating post, and if you’re not already au fait with the elements of half-lives and radiometric dating, it’s probably worth clicking on the link and reading that post first, then coming back to this one.

Radiocarbon dating can only take us back on the order of a few tens of thousands of years. Humans and proto-humans are believed to have been around for a couple of million years, the last dinosaurs to have become extinct about 65 million years ago, and the Cambrian period to have begun about half a billion years ago. Earth itself is believed to be about 4.5 billion years old.

That means we need some other dating methods, and some of those also rely on radioactive decay. Carbon-14 has a half-life of 5730 years, but other radioactive elements have much, much shorter half-lives – on the order of nanoseconds or even femtoseconds – and some have much, much longer half-lives.

A few different methods and decays are used:

Decay                  Half-life (years)
Uranium-Thorium        80,000
Potassium-Argon        1.3 billion
Uranium-Lead*          4.5 billion
Rubidium-Strontium     50 billion
Samarium-Neodymium     106 billion

*This is the U-238 to Pb-206 decay, but there is also a U-235 to Pb-207 decay with a half-life of 700 million years that runs in parallel and can be used as an extra check.

The method is as for radiocarbon dating, but in most of these cases the ‘daughter nuclide’ – the second name in each of the pairs in the table, the thing that the first-name element decays into – is solid and stays around, unlike the nitrogen that is the product of radiocarbon dating. This means that the age is usually calculated in terms of the ratio of the parent and the daughter nuclide in the sample.
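
In its simplest form, the parent/daughter calculation can be written in a few lines. This sketch assumes there was no daughter nuclide present at the start, that the sample has been a closed system, and ignores complications such as branching decays – real methods, such as isochron dating, are designed to test and correct for those assumptions:

```python
import math

def radiometric_age(half_life_years, daughter_to_parent_ratio):
    """Age of a sample from the daughter/parent ratio D/P.

    If P = P0 * (1/2)^(t/T) and D = P0 - P, then D/P = 2^(t/T) - 1,
    which rearranges to t = T * log2(1 + D/P).
    """
    return half_life_years * math.log2(1 + daughter_to_parent_ratio)

# With a half-life of 1.3 billion years (the potassium-argon entry above),
# equal amounts of parent and daughter (D/P = 1) mean one half-life has elapsed.
print(radiometric_age(1.3e9, 1.0))  # 1.3e9 years

# Three quarters decayed (D/P = 3) means two half-lives.
print(radiometric_age(1.3e9, 3.0))  # 2.6e9 years
```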

There are similar ‘gotcha’ examples sometimes used for these dates, but since many creationists are also proponents of a very short age of the Earth – some 6000 years in accordance with Bishop Ussher’s chronology based on the Biblical genealogies, some a little longer but not much – most would suggest that the planet¹ is younger than even the 80,000 years of a single half-life of the Uranium-Thorium decay.

That means that a common theme is “But you assume that the rate of radioactive decay has always been constant. Maybe it was different in the past.” Some suggest that at the time of Noah’s flood there was also a dramatic increase in the rate of radioactive decay.

The thing is, every radioactive decay also releases heat. If there had been sufficient acceleration to fit 4.5 billion years’ worth of decay into 40 days and 40 nights (while it was raining, in the account) or even a year (before the floodwaters subsided), the heat released would have been sufficient to melt the entire planet, many times over.

The response I’ve sometimes received when raising that issue is “God has infinite power and could shield the Earth from the heat”. Well, I guess so, but that just piles ad hoc intervention on top of ad hoc intervention. If God wanted to do that, perhaps it would have been simpler just to specifically set the isotope ratios… and also do things like create the polonium halos in granite that certainly suggest radioactive decay occurred over a very long period.

My goal in these posts is really not to pick fights, but to enhance understanding of the relevant science. Occasionally, though, enhancing understanding involves addressing common misunderstandings.

  1. Or life: beliefs differ, but even those who believe the planet has been around longer believe it was ‘without form and void’, so there were not features capable of being dated.

For ease of navigation I will include links to each of the other posts in this series at the bottom of each post.

Why I think it’s important to understand evolution
Cosmogenesis, abiogenesis and evolution
Evolution and entropy
Facts, Theories and Laws
Radiocarbon dating
Four Forces of the Universe
Probability and evolution
Species and ‘baramin’, macro- and micro-evolution
Mitochondrial Eve and Y-chromosomal Adam
Transitional fossils
Complexity – irreducible and otherwise


Radiocarbon dating

A friend, in a discussion of the age of the Earth, recently said “I remember a study where there were some bones that were known to be only a few years old but they were carbon dated as being 8 or 9,000 years old”.

It helps to illustrate the problem: it’s someone’s recollection of something someone else told them some time ago, so there’s no citation and few details, and it’s difficult to check what actually happened. I found the likely source at the time but can’t locate it now; there are a number of similar articles around, though, and we’ll get to the issues with them.

I thought today I’d talk a little bit about radiocarbon dating specifically (more on other forms of radioactive dating in coming days): how it works, what it can and cannot do, and why some of the common objections to it don’t really hold water.

Radioactive decay is a fascinating process. Unlike physical decay, it is not influenced by how hot, wet or pressurised its environment is. It is a process that appears random at the level of individual decays – it’s impossible to predict exactly when any one will occur – but is highly predictable at a statistical level, when many decays are combined.

If a sample of a substance has, say, 1000 atoms of a radioactive chemical element in it, the ‘half-life’ is the amount of time taken for half of those atoms to undergo decay. The half-life for a particular type of decay is constant. Say in our example the element has a half-life of 2 days. After 2 days, there are 500 atoms left (half the original amount). After 2 more days, there are 250 left (half as many as 2 days ago, and a quarter as many as were there originally, 4 days ago). After 6 days in total there are 125, and after 8 days there are 62.5 (there are not really 0.5 atoms, so in practice it would be 62 or 63). It will keep halving, every 2 days, until eventually no atoms are left.
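
The worked example above can be expressed as a one-line formula, N = N0 × (1/2)^(t/T):

```python
def atoms_remaining(initial_atoms, half_life_days, elapsed_days):
    """Expected number of atoms left after a given time: N = N0 * (1/2)^(t/T)."""
    return initial_atoms * 0.5 ** (elapsed_days / half_life_days)

# The example from the text: 1000 atoms with a half-life of 2 days.
for day in (0, 2, 4, 6, 8):
    print(day, atoms_remaining(1000, 2, day))  # 1000, 500, 250, 125, 62.5
```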

The great majority of the carbon in our environment is carbon-12. It has 6 protons and 6 neutrons in its nucleus, for a total of 12 ‘nucleons’. Carbon-12 is stable and does not undergo radioactive decay. A very small amount of the carbon has an extra neutron for a total of 13. It’s called carbon-13 and is sometimes important in MRI scanning. Carbon-13 is also stable.

When neutrons produced by incoming cosmic radiation strike nitrogen-14 in the upper atmosphere, it sometimes undergoes a transformation into carbon-14, and carbon-14 is radioactive.

It later undergoes radioactive decay to release a beta particle and returns to being nitrogen-14: in symbols, ¹⁴₆C → ¹⁴₇N + ⁰₋₁e. The half-life of this decay is 5730 years.

(An electron anti-neutrino is also released, and this equation isn’t properly charge balanced, and there’s an interesting reason for the minus sign for atomic number on the beta particle, but perhaps that’s too much detail for here.)

The carbon-14 in the upper atmosphere is distributed through the whole environment. Plants take it in when they use energy from sunlight to power photosynthesis, changing carbon dioxide and water into glucose and releasing oxygen. Living things either eat plants or eat things that eat plants, so all living things have carbon-14 in them. As long as they’re alive, they keep replenishing their stores of carbon-14, and so the amount in their bodies is stable. There are radioactive decays going on, but the supply is being replaced.

Once something dies, though, it stops breathing, stops eating, stops interacting with the environment. No new carbon-14 is added to its body, and what is there decays in a predictable way.

This is why radiocarbon dating can only be used on things that were formerly alive. It is not useful for dating rocks, or fossils (which are rock that’s replaced something that was formerly alive), or buildings, tools and other artifacts. Something had to be living, breathing, eating and drinking at some point in history to be able to be radiocarbon dated. There are other methods of dating other materials, that I’ll talk about in a different post.

The period of time that radiocarbon dating can stretch back is also limited. With a half-life of 5730 years, it’s very convenient for dating things that are low multiples of that, back to 25,000 years or so, but even at that point you’re 4 half-lives in and there’s only 1/16th of the original amount remaining. If you have a larger sample, so that the remnant is larger even after multiple halvings, radiocarbon dating can get you back 50,000 years or so, but not much further than that.
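
Turning that around gives the age formula: if we know what fraction of the original carbon-14 remains, the time since death is t = T × log2(1/fraction). A minimal sketch:

```python
import math

HALF_LIFE_YEARS = 5730  # carbon-14

def radiocarbon_age(fraction_remaining):
    """Years since death, from the fraction of the original carbon-14 left."""
    return HALF_LIFE_YEARS * math.log2(1 / fraction_remaining)

print(radiocarbon_age(0.5))     # one half-life: 5730.0 years
print(radiocarbon_age(1 / 16))  # four half-lives: 22920.0 years
print(radiocarbon_age(0.002))   # about 51,000 years, near the practical limit
```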

As a matter of perspective, there are artifacts of Aboriginal settlement in Australia that are older than that.

Radiocarbon dating is generally reliable. It makes some assumptions, but they are generally valid, or else can be calibrated for. So, for example, if volcanic activity, nuclear testing or other influences changed the amount of carbon-14 in the atmosphere during the period in which the formerly-living thing being dated was alive, that’s relevant, and can be taken into account. If carbon-14 has leached into or out of the sample, that’s relevant too.

There are a few claims made by creationists, that are usually of the ‘gotcha’ type. They will have sent a sample to a lab with no information about what it is or where it comes from, requested dating, and then triumphantly revealed that the real known age is different. In all of these cases I have seen, including the one with which I started this piece, there is a clear, simple scientific explanation for the apparent disparity, which does not invalidate the method of radiocarbon dating for age determinations.

Often, the issue is that the organisms being dated were not in contact with the atmosphere in a ‘normal’ way. Examples include shells that grew in water from underground caves, where the carbon dissolved in the water in which they and their food grew had in many cases spent a very long period as part of limestone. The carbon-14 in it had long ago decayed already, so the carbon these shells were absorbing was depleted in carbon-14 relative to the ‘norm’ in other places. When comparing these shells to similar ones grown in fresher water, they appeared ‘older’ because they had lower levels of carbon-14.

When the science is done properly, the sample is tagged with the location where it is found, and these kinds of anomalies can be calibrated for. The ‘gotcha’ examples might give those seeking to impugn the method something to crow about, but they’re not good science.


For ease of navigation I will include links to each of the other posts in this series at the bottom of each post.

Why I think it’s important to understand evolution
Cosmogenesis, abiogenesis and evolution
Evolution and entropy
Facts, Theories and Laws
Radiometric dating and deep time
Four Forces of the Universe
Probability and evolution
Species and ‘baramin’, macro- and micro-evolution
Mitochondrial Eve and Y-chromosomal Adam
Transitional fossils
Complexity – irreducible and otherwise


Facts, Theories and Laws

(This piece is slightly modified from one originally published on the Adventist Today web page – apologies to those who have already read it in other fora. More brand new words tomorrow!)

“Evolution is a fact” is something we tend to hear from one side in debates about origins. I’d argue that this statement reflects a misunderstanding of the roles of facts, theories and laws in science.

At the same time, from the other side of those same debates, we tend to hear “evolution is (just) a theory”, which is equally unfortunate as a way of thinking about what ‘theory’ means in science.

I want to, as clearly as I can, briefly outline the meanings in science of the terms ‘fact’, ‘theory’ and ‘law’, and to explain why a theory, no matter how well supported by evidence, never turns into a fact.

Evolution is, in some circles, a controversial theory, and therefore a bit awkward to use as an example for this discussion, since it brings in strong emotions and strongly held views on the part of readers. (And, just quietly, I’m a physics guy, not a biology guy.) So, instead, I’ll use the example of gravity.

Here is a fact about gravity: close to the surface of the earth, if any object (that has mass) is unsupported, it will accelerate toward the centre of the earth with an acceleration of about 32 feet per second per second, or about 9.8 metres per second per second. If you have an object and a stopwatch handy right now (and your phone probably has a stopwatch function), and have done a little high school physics, you can test this fact.
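
If you want to predict what the stopwatch should read, the drop time from rest follows from h = ½gt², so t = √(2h/g). A quick sketch (the 1.2 m height is just an example):

```python
import math

g = 9.8  # acceleration due to gravity near the earth's surface, m/s^2

def fall_time(height_metres):
    """Time to fall from rest through a given height, ignoring air resistance:
    h = (1/2) g t^2, so t = sqrt(2h/g)."""
    return math.sqrt(2 * height_metres / g)

# Dropping an object from about table height (1.2 m):
print(round(fall_time(1.2), 2))  # about 0.49 seconds
```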

In science, ‘fact’ is used to refer to a single piece of data, the result of a measurement. Other facts about gravity include the fact that it decreases in strength as we move away from the centre of the earth, and that every object that has mass exerts a gravitational force on every other object that has mass. With sophisticated-enough instrumentation, all these facts can be measured and expressed in numbers.

Not all of science is physics, though, as my students often remind me. It’s a fact in chemistry that sodium chloride (table salt) has a cubic lattice structure between its atoms, and a fact in biology that living things contain DNA, and a fact in geology that most rocks contain a lot of silicon dioxide.

Let’s leave ‘theory’ on the side of our plate for the moment, because it’s the most complicated, and talk about ‘law’. In science, a law is a mathematical relationship between quantities. The most famous law in science is probably Einstein’s E = mc^2, which describes the relationship between mass and energy.

The key law in gravity, which was formulated by Isaac Newton (and I do apologise to those who find equations challenging!), is F = Gm1m2/r^2. In words, it says that if there are two masses, m1 and m2, a distance r apart, the force F between them is given by this law, where G is called the ‘universal gravitational constant’.

A law is powerful because it makes a relationship clearer. A couple of paragraphs ago I said that the force decreases with distance, but the law gives more detail. It shows that the force decreases with the square of distance: if the objects are 2 times as far apart, the force is only ¼ as great.
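
The inverse-square behaviour is easy to confirm numerically. A small sketch using Newton’s law (the masses are arbitrary):

```python
G = 6.67e-11  # universal gravitational constant, N·m^2/kg^2

def gravitational_force(m1, m2, r):
    """Newton's law of universal gravitation: F = G * m1 * m2 / r^2."""
    return G * m1 * m2 / r**2

# Doubling the separation reduces the force to a quarter:
near = gravitational_force(1000, 1000, 1.0)
far = gravitational_force(1000, 1000, 2.0)
print(near / far)  # 4.0
```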

A theory is a human mental creation that explains facts and has withstood the test of experiment. This view of the nature of science and of theory is owed to philosopher of science Karl Popper. A theory in science has descriptive, predictive and explanatory power. That is, a theory describes the world as we see it and experience it. It allows us to reliably predict how the world will behave in future in a particular set of circumstances, and it explains why the world is as it is. If we make a prediction using a theory, and then conduct the experiment and the prediction fails – the world does not behave as the theory leads us to expect – then Popper would say the theory has been ‘falsified’ and should be discarded. The theories that make up science at any given moment are the ones that have been tested many times and have never been falsified. Einstein neatly summed up Popper’s perspective: “No amount of experimentation can ever prove me right; a single experiment can prove me wrong”.

A theory is not held to be ‘true’ in any final sense under this view: at best, it is the most powerful theory available, that explains the greatest number of facts, and has not been falsified. A new experiment may yet be conducted that will falsify it, and if that occurs the theory will need to be discarded and replaced with a better one.

The first – and longest-lived – theory used to explain our experience of gravity was proposed by Aristotle. He suggested that things in the universe have their ‘natural station’, the place where they belong. Things mostly belong on the ground – even birds – so when we lift them above the ground they are being lifted out of their natural state, and if they are not prevented from doing so, they will return to it. If we lift a book from the floor to a table, it is out of its natural place, and if the table were not there to prevent it doing so, the book would return to its natural place on the floor.

Aristotle’s theory of gravitation applied only on Earth, since he also believed that the heavens were a different domain from Earth, with different rules and processes. The contribution of the next great theorist of gravity, Isaac Newton, was to apply the same rule to objects in space like the moon and the planets that was used to explain how things move on Earth.

Johannes Kepler had created rules that described the motion of the planets in purely mathematical terms, but did not explain why the planets moved the way they did. Most times laws are derived from theories, but we could argue that Kepler’s laws were not drawn from a theory. They had descriptive and predictive power – Kepler could tell you when the next eclipse would come – but not explanatory power.

Newton developed the theory that any two objects with mass exert a force on each other, and – crucially – that this is true in the heavens as well as on Earth. The same force that caused his (probably apocryphal) apple to fall from the tree to the ground explained the motions of the heavenly bodies. Newton had a theory, not just a law, because in addition to description and prediction, it was capable of explanation.

Newton’s theory of gravity still works well for everything we encounter in everyday life, but in much more extreme environments, such as near the event horizon of a black hole, it breaks down. For those specialised contexts, it has been replaced by the theory of General Relativity proposed by Albert Einstein. The mathematics gets very complex very quickly, but in words, Einstein’s theory can be stated as ‘matter tells space how to curve, space tells matter how to move’. Gravity is explained, not as a force between objects, but as mass causing curvature in the local space, which then causes mass to move differently.

Einstein’s theory is considered ‘better’ than Newton’s because it is more universal – it can be applied everywhere in the universe, whereas Newton’s theory breaks down in some situations.

All three of the theories described – Aristotle’s, Newton’s and Einstein’s – explain the fact that a dropped book or apple will fall toward the ground. Aristotle’s does not have an associated law – a mathematical statement about how rapidly the apple will fall – while Newton’s theory does include a law. Einstein’s theory also includes laws, but the mathematics are too complex to go into here.

I hope these examples have helped to explain why a theory can never turn into a fact or a law, no matter how much evidence it has behind it. These are three different things with different qualities, each important in science.

I suspect that when someone says “evolution is a fact”, they are using the word ‘fact’, not as a scientist would, but in the everyday sense of ‘not fiction’. They mean that a textbook on evolutionary theory in the library would not be placed with the novels and short stories, but with the other books that contain true information about the world. It’s probably still an unfortunate usage, though, since the claim being made falls within science, so the language used ought to be the careful language of science.

Evolution is a theory. It is one that explains, not one fact, but an enormous variety of facts about the diversity and the characteristics of life on Earth. It is a theory that was proposed more than 150 years ago, and it has been tested in a wide variety of ways. The fundamental concepts have not been falsified, but significant elements have been changed and updated. Darwin did not know about genes when he wrote ‘On The Origin Of Species’, for example, or DNA. He did not have access to the vast array of data about living things that modern scientists can draw on. The modern evolutionary synthesis includes the recognition that gene transfer plays a much greater role than previously thought, for example, so that less of the ‘heavy lifting’ of generating new characteristics must be borne by mutations.

When people say “evolution is (just) a theory”, they are drawing on the common, everyday use of the word, rather than the scientific use. We say “I have a theory” when we mean a guess, a hunch, an untested brainwave. Saying “evolution is a theory” is thought of as a way of saying that it is unsupported, held without evidence, untested. These same people would tend not to say “gravity is (just) a theory”, although in scientific terms, gravity is indeed a theory. Or, at least, there are multiple theories of gravity that explain the facts of gravity, some of which provide mathematical laws, and Einstein’s is currently the best theory we have.

Theories do change, and Einstein’s theory may well be replaced by an even more powerful one in the future. There are interesting problems at the boundaries between General Relativity and quantum theory, for example, that may revolutionise our understanding in the future. But a book will still fall from a table, and an apple from a tree. A change to the theory of gravity will not enable us to suddenly, unaided by technology, safely walk off the roof of a tall building and just float. The facts of gravity will remain the same if the theory used to explain them changes.

The same is true of evolutionary theory. It has changed in the past and is likely to continue to change. Alternative candidate theories, including special creation and intelligent design, already exist, and already claim to explain the same facts. When the theory changes, the facts will not change. The DNA that forms the genetic blueprint for a jellyfish will not suddenly begin to produce a lion instead.

It’s probably a discussion for another article, but to date the alternative candidate theories – special creation and intelligent design – have not demonstrated descriptive, predictive and explanatory power in the same ways and to the same extent as the modern evolutionary synthesis. There are efforts to make these demonstrations, which will continue to be tested against the facts of biology.

I hope this brief discussion has helped you to understand the meanings of the terms ‘fact’, ‘theory’ and ‘law’ in science, and to be aware when these terms are being used confusingly in discussions around origins as well as other scientific topics such as vaccination, genetic modification and climate change. If we can communicate clearly and accurately, and go forward together in good faith, we have more chance of finding our common ground.

For ease of navigation I will include links to each of the other posts in this series at the bottom of each post.

Why I think it’s important to understand evolution
Cosmogenesis, abiogenesis and evolution
Evolution and entropy
Radiocarbon dating
Radiometric dating and deep time
Four Forces of the Universe
Probability and evolution
Species and ‘baramin’, macro- and micro-evolution
Mitochondrial Eve and Y-chromosomal Adam
Transitional fossils
Complexity – irreducible and otherwise



Evolution and entropy

The claim is sometimes made that entropy, or the laws of thermodynamics, prohibit the possibility of evolution.

The laws of thermodynamics can be stated in a number of ways in words, and more precisely in equations, but here is one way of stating them:

  1. Energy cannot be created or destroyed in an isolated system.
  2. The net entropy of an isolated system always increases.
  3. The entropy of a system approaches a constant value as the temperature approaches absolute zero (−273.15 °C).

A more amusing but less accurate version I have seen is:

  1. You can never win, you can only break even.
  2. You can only break even at absolute zero.
  3. You can never reach absolute zero.

Returning to the first set of laws, the First Law obviously needs to be slightly modified in the light of Special Relativity and Einstein’s famous equation E = mc² to ‘matter-energy cannot be created or destroyed’, but the bottom line remains the same.

Entropy has a technical definition, or rather a number of different technical definitions, expressed in equations, but it is often understood as the ‘disorder’ of a system. So an increase in entropy is a decrease in order, and so on. Essentially, energy tends to change from more useful forms, that can do work, to less useful forms, over time.
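To make ‘disorder’ slightly more concrete, one of those technical definitions is Boltzmann’s S = k·ln W, where W counts the microscopic arrangements consistent with what we observe macroscopically. Here is a toy sketch in Python (the coin-flip setup is purely my own illustration):

```python
import math

k_B = 1.380649e-23  # J/K, Boltzmann constant


def boltzmann_entropy(W):
    """S = k_B * ln(W), where W is the number of microstates."""
    return k_B * math.log(W)


# Treat 100 coins as a tiny "system": a macrostate is the number of
# heads showing, and W is how many coin arrangements produce it.
W_all_heads = math.comb(100, 100)  # exactly 1 arrangement
W_half_heads = math.comb(100, 50)  # about 1e29 arrangements

# The perfectly "ordered" macrostate has zero entropy; the
# "disordered" 50/50 macrostate has far higher entropy.
print(boltzmann_entropy(W_all_heads))                                    # 0.0
print(boltzmann_entropy(W_half_heads) > boltzmann_entropy(W_all_heads))  # True
```

The point of the toy model is just that ‘disorder’ here means ‘many more ways for it to be so’, which is why entropy tends upward.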

The claim I referenced in the first sentence – that entropy forbids evolution – relies on the notion that evolving from a single-celled organism to something like a human being (with a human brain, perhaps the most complex matter we know of) requires a considerable increase in order, and therefore a net decrease in entropy.

The answer is right there in the Second Law, though: the words ‘in an isolated system’. A single-celled organism does not evolve into a human being if it is placed in a sealed chamber and isolated from incoming energy – in the forms of heat, light, food and air – from the environment.

Perhaps the people who make this claim want to regard the whole of Earth as an isolated system, and argue that the evolution of all lifeforms from simpler (less-ordered) lifeforms is prohibited by the Second Law of Thermodynamics?

But the answer to that involves stepping outside and looking up, even on a cloudy day. Earth is not an isolated system, because it receives vast amounts of energy from that big nuclear fusion generator in the sky, the Sun.

Local increases in order – decreases in entropy – are certainly not prohibited: a human brain is much more ordered than all the food that goes into making it.

In practical terms, no system in the universe is closed and isolated. Even the Solar System emits solar energy to the space around it. But certainly, in considering Earth, the energy coming in from the Sun is a huge part of the overall energy picture.

(As a side note, high energy, short wavelength visible light arrives with the ability to do work, including the work of photosynthesis. It passes through various processes, and then is emitted as low energy, long wavelength infrared radiation, which radiates off into space. Unless intercepted by greenhouse gases, in which case it hangs around a bit longer, warming the globe…)

And it turns out that the nuclear fusion process of hydrogen combining to form helium that produces the Sun’s energy involves a net increase in entropy – a net decrease in order. And, given that Earth receives only a tiny fraction of the energy the Sun puts out, this increase in entropy occurring in the Sun completely dwarfs the local decrease in entropy involved in evolution. So, in the local system of our Solar System, net entropy increases, in agreement with the Second Law.

No laws of thermodynamics are contravened by the processes of evolution.




Cosmogenesis, abiogenesis and evolution

A common response to evolution is “It cannot account for how life first arose from non-life”. But that’s not the ‘job’ of evolutionary theory. Evolution is a theory that explains how, given life with the ability to self-replicate, we get from a few forms of simple life to the enormous variety of life on Earth.

Two other theories – or rather, domains of theory – are required to account for, respectively, the origin of the universe itself and the origin of the first life.

I should take a small digression, because what I should have said in the previous sentence was ‘are required in the absence of miracles to account for…’ Science deals in ‘methodological naturalism’. I might go into that in more detail later, but in brief terms it means that, in science, miracles are not invoked as explanations. Things are explained in terms of natural causes and natural effects, not supernatural actions.

The implication of that is that, in the presence of miracles, all bets are off. If a Divine Creator created the universe in a miraculous supernatural act, no scientific theory of cosmogenesis is required. If such a Creator sparked off the beginning of life, no theory of abiogenesis is required. If She created life in pretty much its current form, no theory of evolution is required. (In each of these cases, though, a theological theory of why the Creator chose to make the universe look as though these processes took place might be required…)

OK, returning to the main theme, sometimes people are frustrated by the distinctions between cosmogenesis, abiogenesis and evolution: “It’s all evolution!” But these discussions take place in the domain of science. Even if the claims are religiously motivated, they are being presented as though they are scientific claims. In that domain, how words are used is important.

So cosmogenesis is the domain of theories that seek to explain the origins of the whole universe. The currently dominant such theory is the Big Bang, which is unfortunately named in that the word makes people think of an explosion in space, when it really describes an expansion of space-time itself. There are other candidate theories, including that the universe is in a ‘steady state’ and/or has always existed. Theories of cosmogenesis need to account for stars and galaxies and how they are distributed, for the ‘red shift’ that shows all distant galaxies are receding from us, and for the Cosmic Microwave Background radiation.

Abiogenesis is the domain of theories about how life first arose from non-life. There exists life and, unless it has eternally existed, there must have been a point at which there was only non-living matter in the (natural – scientific theories don’t account for supernatural life) universe, and therefore some moment at which the first life existed.

Scientists acknowledge that, of the three domains, abiogenesis is the most difficult to engage with. For cosmogenesis we have the ancient light from the stars, and even from the beginning of the universe, that can be analysed and studied. For evolution, there is all of life, the fossil record and DNA to study. But the very first single-celled organisms were not the kinds of things that leave fossils or any consistent record. Abiogenesis is much more difficult to study.

Indeed, it is likely that we cannot ever confirm exactly how that first event went. About the best we can achieve is demonstrating possible mechanisms by which life can arise from non-life. There are interesting theories – about the surfaces of certain clays acting as templates, for example, or about lipid bubbles forming elementary cell walls – but there is a lot of work still to be done.

There are different candidate theories for the evolution of life and the ‘tree of life’ – the relationships between existing species, and between existing and extinct species, and between different extinct species. The most strongly supported by evidence, right now, is the ‘modern evolutionary synthesis’. Other posts in this series will outline that theory in more detail.

So, really, perhaps the main point of this piece is very simple: you will be recognised by others as knowing what you’re talking about if you recognise that different domains of theory are relevant for explaining different elements of how we get from nothing to here.





Why I think it’s important to understand evolution

I’ve decided to re-animate this blog for a while to post a short series of clear, simple discussions of some of the common arguments that are used to reject evolutionary theory as an explanation for the current diversity of living things on Earth.

When I raised the issue on Facebook, a friend asked “Why is it so important to you to persuade people to believe the Theory of Evolution?”, which is a great question. So this post is both an introduction to the series (initially I think there might be about 10 posts in total, but it may well grow), and my attempt to answer that question.

First, I think there’s value in clarifying that evolution is not something we ‘believe in’ in any religious sense. Rather, we ‘believe that’ it is the theory that best explains all of the available evidence… until a better one comes along. This is true for all scientific theories.

With that in mind, then, I care that people understand evolutionary theory because I care about what is true, and because it is a theory that we use in things like medical and pharmaceutical advances that save lives. Rejecting it is also strongly associated with rejecting science in other domains such as vaccines and climate change. It also makes people very vulnerable to liars and charlatans.

I suppose there’s one or two other notes worth including in this introductory post: I’ve been using the words ‘evolution’ and ‘evolutionary theory’, but it is probably more accurate to talk about the ‘modern evolutionary synthesis’ – the sum of the best current understanding on the part of evolutionary biologists of the mechanisms through which life perpetuates itself and changes.

Those who reject evolution often talk of ‘Darwinism’, but this is inaccurate for two reasons:

  1. Evolutionary theory is a scientific theory, not an ideology. It is not an ‘ism’. Confusing the kind of thing an idea is confuses our thinking.
  2. While Charles Darwin was important in outlining the broad lines of evolution, others also did so before and since. He wrote in a time when he did not know of the existence of genes or DNA, so he got some things wrong. Science, by its nature, moves on, and evolutionary science is no exception. Refuting Darwin may not refute the modern evolutionary synthesis, and vice versa. (A related point is that traducing Darwin’s character or motivations does not refute evolutionary theory.)

The other point is about the use of ‘theory’ in relation to evolution, and this is something I’ve already written about elsewhere: Facts, Theories and Laws

I hope that the journey will be interesting and useful for all of us.




How do we decide who is, and is not, a Christian?

“That’s not very Christian!” is something we tend to hear when someone does something unkind or unloving. There’s an enormous range of shapes that Christianity takes in the world now, and the question in the title is a perplexing one for me. (I could replace that final word with Buddhist, Muslim, Mormon, Mason, Metalhead… but except on the last, I simply don’t know enough to speak, so I’ll stick with considering Christians.)

On the one hand, I tend to find ‘Is it _____ ?’ discussions tedious, where the gap is filled by ‘science’ or ‘art’ or ‘black metal’ or whatever. They turn on people’s individual definitions of those things, which can be quite divergent, so the debates tend to go around and around without reaching any worthwhile conclusions.

On the other hand, how we respond to people who claim to be Christians is coloured by our view of what it means to be Christian, and whose version of that we consider to be definitive – or, at least, most influential.

A complexifying factor is the ‘No True Scotsman’ fallacy. Very briefly, it stems from a story in which a newspaper report says that a Scotsman did something very bad, and a reader says ‘No true Scotsman’ would do such a thing. If a Christian does something evil, it’s much too convenient to simply define that person out: no ‘true Christian’ would do such a thing. The history of child sexual abuse by clergy uncovered in the recent Australian Royal Commission is just one example of bad things done by Christians, including pastors and priests.

If Christianity can never be represented by its worst adherents, but only by its charitable works and the best examples, it’s impossible to have a fair accounting of its net impact in the world.

At the other end of the spectrum, opponents of Christianity like Richard Dawkins might exclude or ignore all positive influences and impacts, and characterise Christianity only by its worst features and examples. Dawkins pays some lip service to more sophisticated theologies early in his book, but then defaults to treating all of Christianity as though it were represented by its most literal and fundamentalist fringes.

There is a huge range of political views and beliefs in the church, from Christians like Jarrod McKenna, who builds homes for the homeless and refugees, and Father Rod Bower, who advocates for more humane policies, to the Michele Bachmanns of the world, who say that Donald Trump is the most godly president of our generation. Indeed, American evangelicalism has entirely embraced capitalism and wealth.

For most of these kinds of ‘club membership’ discussions I would be happy to accept people’s own self-identification as ‘in or out’. If someone decides they are a Christian, then they are, and I don’t have the right to gainsay them.

In this case, though, I’m going to suggest an alternative definition. It’s linked to something I think I’ve talked about in the past, either here on the blog or on Facebook, and certainly in conversations. While I think ‘What Would Jesus Do?’ (as represented on bracelets in the 90s) is dangerous because it is far too prone to projection, so that it becomes ‘What would I do?’ or ‘What would my pastor and the members of my church do?’, I think ‘What Did Jesus Do?’ is a pretty decent guide for living.

After all, Jesus is described as the ‘Christ’, and ‘Christian’ literally just means ‘follower of the Christ’. So, if someone claims to be a Christian, the test is simply ‘Do they do what Jesus did?’ And, I guess, do they refrain from doing what He did not?

Now, some of my atheist friends – and some of my theologian friends too, for that matter – might interject that the Gospels may have been subject to later tampering and interpolations, and were certainly a selection from among a range of documents at the time. They were also written some time after Jesus’ death, largely based on other accounts, both written and verbal. I acknowledge this, and yet… if we simply take what we have, and take it as a wisdom literature that informs our moral reasoning, not a creed that dictates it, the Jesus described in the four Gospels offers a way of life.

There are controversial sections where He says that he comes to bring division, and other difficult passages, but this is the story of a man who owned nothing more than the clothes he stood up in, and went around ministering to the poor and vulnerable and excluded in society. He reserved anger for the powerful, the wealthy and oppressors, and comforted those who were rejected by others in their society.

Read the Beatitudes, in Matthew 5, and the parable of the sheep and the goats in Matthew 25. In fact, read all the Gospels… it doesn’t take all that long.

Then, if someone claims to be a Christian – and particularly if they want to make you do something or stop you from doing something because they are a Christian – just run the ‘What did Jesus do?’ ruler over them.


Peter Achinstein and Explaining As An Activity

Achinstein notes that most accounts of scientific explanation have focused on the ‘product’ – the explanation itself, whether spoken or written – rather than on the act of explaining. He sets out to analyse explanation from the perspective of what human beings are doing when we explain.

An explanation is given by someone, with the purpose of helping someone else to understand. Achinstein explains it in slightly more technical language, but in brief he says that the purpose of an explanation is to have the audience know the correct answer to a question and know that it is a correct answer. We’ll leave aside the kinds of questions for which there is no correct answer, or many correct answers.

Achinstein describes explaining as an ‘illocutionary’ act. This is from a framework by Austin. Wikipedia sez: “In Austin’s framework, locution is what was said, illocution is what was meant, and perlocution is what happened as a result.”

Achinstein notes that the exact same sentence can be said with different intentions. An example he uses (I’ll paraphrase somewhat) is that when Dr Jones says “Bill ate spoiled meat”, he is giving an explanation of Bill’s stomach ache, and therefore the kind of illocutionary act he is undertaking is ‘explanation’. When Bill’s wife Jane says “Bill ate spoiled meat”, she is criticizing Bill’s dietary choices, so she is undertaking an illocutionary act of the kind ‘criticism’. This is true even though both people said the exact same words.

Achinstein suggests an ‘ordered pair’ approach, which can be described as (p, explaining q). ‘p’ is the explanation product itself – a sentence or proposition, and the second part of the brackets clarifies that someone said (or wrote) p in order to explain something, ‘q’. Dr Jones’ response might then be written as (“The reason that Bill has a stomach ache is that Bill ate spoiled meat”, explaining why Bill had a stomach ache).
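A hypothetical way to picture the ordered-pair idea in code (the encoding and the names here are entirely my own, purely for illustration, not Achinstein’s notation):

```python
# (p, act) pairs: the same words can figure in different
# illocutionary acts, so the pair, not the sentence alone,
# identifies what is going on.
sentence = "Bill ate spoiled meat"

# Dr Jones utters the sentence to explain something:
dr_jones = (sentence, ("explaining", "why Bill has a stomach ache"))

# Jane utters the very same sentence to criticize:
jane = (sentence, ("criticizing", "Bill's dietary choices"))

# Identical products p, distinct acts:
print(dr_jones[0] == jane[0])  # True
print(dr_jones[1] == jane[1])  # False
```

The second element of each pair is what distinguishes an act of explaining from an act of criticizing, even where the words overlap completely.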

By identifying what is going on in the explaining process, the explanation ‘product’ is clearer.

He considers the issue of evaluating explanations: a correct explanation may not be a good explanation in general terms, or it may not be a good explanation for a particular audience or a particular purpose. Achinstein talks about ‘instructions’ for explaining in a particular context.

Achinstein proposes the following criteria for the goodness of an explanation:

  (a) The audience does not already understand it
  (b) There is a way to explain it that will allow the audience to know the correct answer and that it is a correct answer
  (c) The audience is interested in the explanation
  (d) It will be valuable for the audience to understand the explanation

There are a lot more details and issues, but the two key takeaways for me are (1) this approach is closer to my concerns with science teaching explanations than those of Hempel and Salmon because it centrally includes the explainer and the audience and (2) the challenges of teaching are with ensuring conditions (c) and (d) above – that our students are interested in the explanations we offer, and that the explanations we offer will be valuable for our students.

Note that (d) is not ‘the audience knows that it will be valuable to understand’. While that’s desirable, it is not essential, as long as the explainer knows it. But I would argue that it must be authentically in the interests of the audience (students, learners) to understand the explanation if we are to justify teaching it, and ‘valuable’ needs to mean something much more than passing an exam. The explanations we give in science teaching should transform worldviews and offer tangible benefits.


Eine Kleine Achinstein

Just a little taste for you of the kind of stuff I’m reading at the moment. The sauv blanc helps, at least in moderation. 😉

If Q is an explanation-seeking question (e.g. ‘Why did Nero fiddle?’), and q is the indirect form of the question (e.g. ‘The reason that Nero fiddled is that______’), and if a person A is seeking to understand q, and if q_I is the answer to q under a specific set of instructions, I (so, for example, it might be ‘Explain why Nero fiddled in terms of his mental state’ or ‘Explain why Nero fiddled in terms of historical factors obtaining in Rome at the time…’ and so on), then:

A understands q_I only if (∃p)(p is an answer to Q that satisfies I, and A knows of p that it is a correct answer to Q, and p is a complete content-giving proposition with respect to Q). (Achinstein, 1983, p. 57)

∃ is the ‘existential quantifier’, which means ‘there exists’, so ∃p means ‘there exists a proposition p such that…’

A ‘complete content-giving proposition’ is complex, but basically it means it contains everything relevant and nothing irrelevant to explaining Q.


Wesley Salmon, Statistical Relevance and Causal/Mechanical Explanation

As you’ll know if you’ve been following along in this series of posts1 on the philosophy of explanation, or if you decide to go back and read them in chronological order before continuing to read this one, Wesley Salmon is a realist who has been working on the problems of explanation for some considerable time. He first advanced and then withdrew a ‘statistical-relevance (SR)’ approach to explanation, and later adopted what he called a ‘causal/mechanical’ approach. My aim here is to briefly explore both of these approaches and what they offer.

You’ll remember that Hempel advanced the ‘deductive-nomological (D-N)’ model for explanations when the causal laws that govern the scientific phenomena are deterministic: ‘if X happens then Y will definitely happen’. He also introduced the ‘inductive-statistical (I-S)’ model for when the laws are probabilistic (e.g. in quantum mechanics): ‘if A happens there is a 78% chance that B will happen’. Hempel insisted on a high probability (close to 100% or 1.0) for explanations under the I-S approach. The main reason for this is that, if the probability is lower, A could presumably explain both the occurrence and non-occurrence of B. Say the probability of B given A is 0.5, and A occurs: if B occurs we say ‘B happened because A’, but if B does not occur, in some sense it also makes sense to explain this in terms of A, since there is a 50% chance that A will not lead to B.

There are also other helpful counter-examples. Jim (who is biologically male) did not become pregnant last year. Jim faithfully took birth control pills all year. Logically, we could say that Jim did not become pregnant because he took birth control pills, but our intuition tells us this is not a valid explanation. The birth control pills are not relevant to explaining the phenomenon. Similarly, being a lifelong smoker only yields about a 20% chance (probability of .2) of getting lung cancer, yet we consider that the smoking explains the cancer.

Similarly, the probability arguments can be complex. Someone who has pneumonia and is treated with penicillin has a higher probability of recovering than someone who has pneumonia but is not treated. We would argue that the penicillin caused the recovery, or at least that it did so in conjunction with the immune system of the patient. (On the other hand, if we observe that taking Vitamin C correlates with recovering from the common cold after about a week we might consider that it is causal… until we realise that most people, Vitamin C or not, recover from the common cold in about a week.)

Salmon suggested, therefore, that relevance is important in statistical cases. He also noted, as in the smoking example, that explanations for events with low probabilities can be explained, whereas Hempel’s approach insists on high probabilities.

Let’s go back to the pneumonia patient, but add the information that there are penicillin-resistant strains of pneumonia. The simple argument that penicillin improves the odds of recovery is complicated by this new information, and the two classes of pneumonia patients initially – those treated with penicillin and those not – become four classes – those untreated who have the non-resistant strain, those treated who have the non-resistant strain, those untreated who have the resistant strain and those treated who have the resistant strain. In considering an individual patient’s likelihood of recovery, which of these quadrants s/he falls in is statistically relevant.
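The partition can be illustrated with entirely made-up numbers (none of these figures come from Salmon or from any real medical data; they only exist to show what statistical relevance looks like):

```python
# Salmon's four classes, keyed by (treated with penicillin?, resistant
# strain?). The counts are invented purely for illustration.
cases = {
    (True, False):  (90, 100),  # treated, non-resistant: 90 of 100 recover
    (True, True):   (35, 100),  # treated, resistant
    (False, False): (40, 100),  # untreated, non-resistant
    (False, True):  (30, 100),  # untreated, resistant
}

for (treated, resistant), (recovered, total) in cases.items():
    p = recovered / total
    print(f"treated={treated}, resistant={resistant}: P(recovery) = {p:.2f}")

# With these numbers, penicillin shifts the probability a great deal
# for the non-resistant strain (0.40 -> 0.90) but only slightly for
# the resistant strain (0.30 -> 0.35): strain resistance is
# statistically relevant and belongs in the partition.
```

Any factor that changes the probability across cells like this must be included; any factor that leaves the probabilities unchanged (like Jim’s birth control pills) is irrelevant and must be left out.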

Salmon adds the additional criteria that (a) all relevant factors must be included and no irrelevant ones and (b) we must divide up our whole population of cases so that we look at an ‘objectively homogeneous’ class in trying to explain something. For example, in the case of our pneumonia patient, we can divide the population into four with two factors, and each of those four groups will be somewhat homogeneous (all members having the same characteristics). But there are potentially other relevant factors, like age, sex, obesity… the list is almost endless. In the end, while Salmon described objective homogeneity as an ideal, he conceded that practical problems mean it is unlikely to be actually useful in constructing and evaluating real explanations. He moved on to consider the important role of causality:

I no longer believe that the assemblage of relevant factors provides a complete explanation—or much of anything in the way of an explanation. We do, I believe, have a bona fide explanation of an event if we have a complete set of statistically relevant factors, the pertinent probability values, and causal explanations of the relevance relations. (Salmon, 1978)

His discussion of causation and explanation gets into Reichenbach’s ‘screening off principle’, conjunctive forks, interactive forks and other complexities that don’t really concern me for the moment.

The big contribution from Salmon to my project is (a) the very thorough overview his book ‘Four Decades of Scientific Explanation’ offers of Hempel’s work and the responses to it up until the late 1980s, (b) his realist approach in contrast to Hempel’s anti-realist approach and (c) the ways in which the statistical-relevance approach, despite shortcomings of its own, fixed some of the shortcomings of Hempel’s approach and led to other interesting work. He also enabled me to think carefully about which philosophers working in this field will need to be considered in depth in my book, for my purposes, and which can be mentioned in brief but not analysed in depth.

Next cab off the rank is Peter Achinstein, whose approach is less rigidly logical-philosophical and more directly focused on what human beings do when we explain. He calls it an ‘illocutionary’ approach, which is just a longer word for the process of giving an explanation and the ‘product’ of that explanation, whether it be written, spoken, animated etc. I’ll be reading Achinstein’s book over the next few days and will report in when I’ve done that.

  1. As you may have guessed, this series is in part a way of sharing the stuff I’m interested in and excited about with others, partly a way of taking notes for myself to remind me of some of the broader themes of what I’m reading… and partly just procrastination from writing the book I’m supposed to be writing about this stuff! I feel as though it’s worthwhile procrastination, though, because if I can explain it for a smart lay audience of my friends it will help me to better understand it for when I write about it more formally.

References

Salmon, W. (1978). “Why Ask ‘Why?’? An Inquiry Concerning Scientific Explanation”, Proceedings and Addresses of the American Philosophical Association, 51(6): 683–705. Reprinted in Salmon 1998: 125–141. doi:10.2307/3129654

Salmon, W. (1998). Causality and Explanation, New York: Oxford University Press. doi:10.1093/0195108647.001.0001


Realism and Anti-Realism in Philosophy of Science

There’ll be a much more detailed post shortly about Wesley Salmon’s ‘statistical-relevance’ theory of scientific explanation (a response to, and extension of, Hempel’s ‘deductive-nomological’ and ‘inductive-statistical’ approaches, discussed in an earlier post). In the meantime, though, a quick discussion on realism and anti-realism.

The distinction is that realists accept that the unobservable entities that we use in our scientific explanations such as fields, atoms, electrons, photons and so on are real features of the universe. Anti-realists – and one prominent school within this camp is the instrumentalists – claim that these entities are useful rather than true. They serve their purpose in that they help us to provide explanations that work and theories that allow us to describe and predict observable phenomena, but they are not considered to be in any sense ‘real’. Hempel is an anti-realist, and constructs scientific explanations in terms of logical relations and laws. Salmon, on the other hand, is a realist1.

As a side note, Bas van Fraassen, another important figure in the philosophy of explanation (who, for various reasons, I will mention only in passing in my book) describes his position as ‘constructive empiricism’. While the anti-realist is an ‘atheist’ in terms of unobservable entities and makes the strong claim that they are not real, a constructive empiricist is ‘agnostic’: s/he neither knows nor cares whether they exist, and their reality is not a required feature of the approaches to explanation proposed by van Fraassen and those who follow him.

Salmon essentially uses two arguments in support of the reality of the unobservable. The first relates to extending the range of our senses. He talks about what he can see in a book with tiny print with and without his glasses, and notes that it would seem very odd to claim that the full stops on the page are not real when he has his glasses off but are real when he has his glasses on and can observe them. He then extends this, noting that the optics of a microscope are based on the exact same principles as the optics used in making his glasses, so it makes sense to consider the things that can be observed through a microscope to be real.

The argument then extends to telescopes and things like the moons of the planets in our solar system, which are not visible to the naked eye. The objection has been made by others that we could, in principle, travel to the moons of the planets and verify their existence with our senses but that we can’t (‘Fantastic Voyage’ aside) travel to the microscopic realm to check our observations in the same way.

In response to this, Salmon talks about a process by which a grid is designed at macroscale then shrunk and manufactured at microscopic scale and used for things like counting bacteria in a sample under a microscope. It seems quite silly to claim that, at the scale when we can no longer observe it directly with our unaided senses, such a grid loses its reality.

The final argument is based on the work of Jean Perrin, who started out observing Brownian motion (the way in which very small particles suspended in a fluid (gas or liquid) exhibit random movement, which is explained as being caused by collisions with the particles in the fluid, e.g. water molecules or nitrogen molecules in air). Brownian motion allows the direct observation (though usually aided by a microscope, because particles small enough to be bumped off course by a single molecule are pretty small) of the effects of molecules, although the molecules themselves cannot be seen. Perrin used Brownian motion to find the value of Avogadro’s Number, 6.02 × 10²³, a very important number in chemistry that relates the molecular and macro scales.

The really interesting thing, though, is that Perrin then went on to find 13 different and independent ways to determine the value of Avogadro’s number, such as electroplating silver out of a solution and measuring the current used for a given mass of silver, radioactive decays and so on. The fact that a range of independent experiments, across a range of different branches of chemistry and physics, all yielded the same number (within experimental error) is at least pretty strong inferential empirical evidence for the reality of atoms, molecules and electrons.
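To give a sense of how one of those independent routes works, here’s a back-of-the-envelope sketch of the electroplating approach: the charge needed to deposit one mole of a monovalent metal like silver (the Faraday constant, measured electrochemically) divided by the charge on a single electron gives the number of atoms per mole. The numerical values below are modern ones, not Perrin’s, so this is an illustration of the logic rather than a reconstruction of his experiment.

```python
# Estimating Avogadro's number from electroplating measurements:
# total charge per mole of Ag+ deposited, divided by the charge
# carried by each individual electron.

FARADAY = 96485.0            # C/mol: charge to deposit one mole of silver
ELECTRON_CHARGE = 1.602e-19  # C: from Millikan's oil-drop experiment

avogadro = FARADAY / ELECTRON_CHARGE
print(f"N_A ≈ {avogadro:.3e}")  # ≈ 6.02e23, agreeing with the Brownian-motion value
```

It is precisely this agreement between utterly different experimental routes – Brownian motion, electrochemistry, radioactive decay – that Salmon takes as strong evidence for the reality of atoms.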

When we get to photons and other entities at the level where quantum phenomena are dominant, it gets more complex still… things sometimes behave like particles and sometimes like waves. Are they ‘real’? They help us to create good – if complex (literally) – explanations.

I have to admit that, while in general I’m probably inclined toward realism, if I had to swear to it, hand on heart, constructive empiricism would be an attractive approach for me. Or is that just a cop-out?

There’s a good, if somewhat technical, introduction to some of the issues in explanation here: https://www.iep.utm.edu/explanat/ For me, it makes too much of the implications of this realist/anti-realist distinction, when I find other aspects of explanation more interesting and important, but nonetheless it does a nice job of sketching the last 70 years in the philosophy of this issue, since Hempel and Oppenheim’s seminal paper in 1948.

More Salmon shortly.

  1. Like many other terms in science and philosophy, ‘realist’ has a technical and an everyday meaning. In everyday parlance, a ‘realist’ is someone who takes the world as it is, as opposed to an ‘idealist’ who seeks to work as though the world follows – or ought to follow – some ideal order. It’s important to distinguish that sense of the term ‘realist’ from the technical meaning discussed in this post.

Categories
General

Carl Hempel and ‘Covering-Law’ Models of Explanation

As part of my on-going reading in the philosophy of explanation I’ve been focusing on the work of Carl Hempel, who talks about what Dray has described as ‘covering-law’ approaches to explicating explanation in science.

Together with Paul Oppenheim, in 1948 Hempel described ‘deductive-nomological (D-N)’ explanation in science: explanation in terms of scientific laws combined with initial conditions.

His 1965 work, which I’m reading now, expands this understanding to include ‘inductive-statistical (I-S)’ explanations, noting that some scientific laws are inherently statistical in character rather than deterministic. While Dray originally included only D-N explanations when coining the term ‘covering-law’, Hempel expands the term to include I-S explanations.

Hempel talks only about I-S explanations that make the probability of the outcome ‘practically certain’, or very close to 1; however, I already know from reading David-Hillel Ruben that there also exist I-S explanations that explain outcomes with low probability, and even explanations that decrease the probability of the thing they explain.

The relevant chapter is about 130 pages long and includes a lot of defenses of this approach against a number of challenges, as well as expanding the discussion to include historical and other explanations as well as scientific ones.

Relevant to my interests, he also considers the ‘pragmatic’ features of an explanation given to an individual person, as well as the general explanations given in science. What is required to explain to an individual depends on characteristics such as the person’s existing knowledge and interests, whereas a general explanation does not depend on these things.

After finishing Hempel’s account, the next step is to move on to Wesley Salmon… and then Peter Achinstein.

Categories
General

Broken Links

A little utility I run in the background of this WordPress site informed me there were 767 broken links in posts and 34 warnings. Probably unsurprising, since the blog has been up for something like 15 years and links come and go. I’ve removed all of them now, so some of the old posts may be missing links out to pictures or the things they were talking about, but the blog content is still there for what it’s worth.

Categories
General

Is a Fallacious Explanation an Explanation At All?

It’s something I mentioned in passing in a blog post some time ago (I’ve been busy!), but I wanted to take up the question again, because I think it’s interesting.

If an explanation that is offered is false or incorrect, does it constitute an explanation?

Our answer to this question, and the kind of thinking it takes to get to an answer, is likely to be helpful in thinking about the broader question ‘What is an explanation?’

Take an example: Chemtrails. People see the lines of cloud that are left in the sky when jet-powered planes fly over when certain atmospheric conditions apply.

(I thought about using vaccines and autism, but that debate is both disrespectful to people with autism, and tragic in terms of the unnecessarily dead or ill children it produces, so I thought I’d leave it aside. The considerations do apply to it, though.)

A scientific explanation involves the burning of jet fuel – a hydrocarbon similar to kerosene – in oxygen: the products are water vapor and carbon dioxide, the water vapor condenses into small droplets of liquid water if the surrounding atmosphere is cool enough, and if the winds are slight at that altitude, these lines of vapor can remain for some time before they evaporate or disperse.

An alternative explanation holds that the government is dispersing chemicals from jet planes, intended to (variously) pacify or sterilise the populace. This is often linked to comments like ‘I don’t remember seeing so many in the past’.

On that last one, a few minutes with statistics on the total numbers of flights occurring now compared to the past can be illuminating…

Part of the challenge in thinking through whether the latter explanation is an explanation is a potential confusion as to what phenomenon is being explained. Is the explanation tendered in order to explain the white lines we see in the sky, or to explain passivity and low birth rates among the populace?

If it’s the former – white lines in the sky – then there doesn’t seem to be a simple empirical way to distinguish between our two explanations: both ‘explain’ the white lines as some form of chemical substance (remember, water is a chemical substance) being released from jets. We might use logic and reason and the demonstrated inability of governments to keep secrets secret or maintain conspiracies in the long term, but that’s not something we can observe directly.

If we wanted to look at passivity and sterility, though, presumably water vapor would have no effect (since it already pervades the atmosphere and we breathe it out ourselves – check your breath on a cold day), while sinister chemicals would.

Since world population is still increasing and the atmosphere covers the whole world, sterility chemicals, if they’re being used, aren’t very effective. (Some variants have racist elements where the chemicals target particular races, but we’ll leave those where they belong.)

Protests are far from unknown either, so the passivity-inducing chemicals don’t seem much more effective. (Social media, on the other hand…)

Anyway, this wasn’t meant to be a post about chemtrails: the topic is explanations. I am going to argue that an explanation must be true, accurate and correct, or at the very least to represent the best current state of knowledge in relation to the thing to be explained, in order to be an explanation.

The old definition of knowledge as ‘justified true belief’ is helpful here. That is, to be able to say that we know something, we must believe it, it must be true, and we must have adequate, relevant grounds for believing it.

If an explanation is intended to increase knowledge, and knowledge is justified true belief, then an explanation must be true.

Categories
General

Explanatory Power

The last couple of posts have focused on explanations in science education, but this one pivots back to explanations in science.

There is a scheme, owed to Hempel, of 5 kinds of explanations in science and their relation to scientific laws, but that is a topic for another day.

In brief, a scientific theory – which is not the same thing as a scientific law – ought to have descriptive, predictive and explanatory power.

There are some laws which do not have explanatory power. Kepler’s Laws describe the motion of the planets accurately, but they are ‘empirical’ laws, constructed based on observations. They do not include any explanation of the phenomena they describe and predict. It required gravitational theories from Newton and later Einstein to explain why the planets move as they do.
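The ‘empirical’ character of Kepler’s third law is easy to see directly: the regularity T² ∝ a³ (orbital period squared proportional to semi-major axis cubed) can be checked against planetary data, but the law itself says nothing about why the ratio is constant. A quick sketch, using standard approximate values in years and astronomical units:

```python
# Kepler's third law as a purely empirical regularity: for every planet,
# T^2 / a^3 is ~1 when T is in Earth years and a is in AU. The law fits
# the data, but nothing in it explains *why* the ratio is constant -
# that required Newtonian gravitation.

planets = {
    # name: (orbital period T in years, semi-major axis a in AU)
    "Mercury": (0.241, 0.387),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.862, 5.203),
}

for name, (T, a) in planets.items():
    print(f"{name:8s} T^2/a^3 = {T**2 / a**3:.3f}")
```

Each ratio comes out very close to 1, so the law describes and predicts beautifully, but it is descriptive only.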

Indeed, it could be argued that laws – mathematical relationships between quantities – never have explanatory power. They describe what happens, but not why.

Scientific theories, however, explain why things happen. That is what a scientific explanation is and is for.

Categories
General

Explanation in Science Education from a Constructivist Perspective

When a teacher explains a concept to a student… and, before I continue I should note that the people in those roles may not be in them formally. Parent explaining to child, foreman explaining to new employee, doctor explaining to patient. These ideas are relevant to a very wide range of human activities.

Explanations in science education are different from everyday explanations in a number of features, but that’s probably not something we need to go into in great detail here. We would avoid explanations such as that the contrails of jets are really ‘chemtrails’ of drugs to pacify the populace, not so much because they are not scientific (they aren’t) but because the best available evidence doesn’t support them. It’s an interesting question whether a fallacious ‘explanation’ is an explanation at all, but that might be another post for another time.

OK, digressions aside, I’ll start again: When a teacher explains a concept to a student, that process was historically considered to be what we educational theorists might call ‘transmissive’. The metaphor is like a radio or TV transmission, where the signal that is sent is the same as the signal that is received. The concept is moved intact from the teacher’s mind to that of the student.

There’s a fair bit of evidence, argument and experience to suggest that that’s not … I was about to say ‘what really happens’, but a better way to put it is ‘an effective way to think about it’.

Rather, we tend to have a ‘constructivist’ image of learning 1. In brief, this means that students construct their own knowledge based on their experiences. Those experiences include, but by no means are limited to, the explanations and other experiences offered by their teachers. These in-school experiences are joined with the life experience of the phenomena being discussed: riding bicycles for physics, observing living things – and being living things themselves – for biology and so on.

From a constructivist perspective, then, there is no such thing as the ‘perfect explanation’ of a scientific concept, as a thing unto itself. An explanation is part of the process of explaining (see a post from a couple of days ago on the distinction) that occurs between teacher and student. The explanation provides structured experiences which are the ‘building materials’ from which the student actively constructs understanding.

The importance of the dynamic interaction – and the relationship which forms its context – is that each student is building on different conceptual ‘foundations’. Each has a different set of experiences, and each has made different meanings of them. By listening, drawing on feedback, giving feedback and re-constructing the explanation, the teacher ensures that the explanation offers the best possible materials for that particular student to use in constructing an understanding of the specific scientific concept to be learned.

  1. There are definitely a number of older posts about constructivism on this blog if you’re interested. The Search box on the right side of the page (scroll down a bit) will enable you to find them.

Categories
General

Explaining and Explanation in Science Education

It was a bit tricky to work out the best order in which to talk about this topic and another one – a constructivist approach to explanation – but I think I promised in yesterday’s post that I’d talk about ‘explaining and explanation’ next, so let’s do that. But hopefully tomorrow’s post will cast some additional light on this if you’re patient.

Suzie tends to talk about the distinction between nouns and verbs in relationships: having an expectation of our partners, versus expecting something. I kinda see what she’s talking about, in that verbs are inherently more fluid and dynamic than nouns.

The distinction between explanation and explaining is similar, but explaining contains explanation. Let me try to make that a little clearer.

I should also note (this is not one of my more coherent posts in terms of structure!) that this distinction and approach, as well as the constructivist approach, is owed to the German colleagues I recently visited in Bremen, particularly Christoph Kulgemeyer.

In this way of thinking, an ‘explanation’ is a unit in itself. It might be given in speech or in writing, in a video, or through an animation or simulation, but the explanation is a contained unit of meaning that is designed to increase understanding on the part of someone else, and is somehow delivered.

Explaining is the much larger social and interpersonal, dynamic process within which the explanation is given. It includes the person giving the explanation and the person receiving it. The process of explaining includes feedback, which is crucial. The explanation (as a unit) is modified and re-presented on the basis of the feedback received.

As a teacher (and this includes anyone who understands a concept and is seeking to help someone else develop an understanding of it, not just someone with the formal role) we have to make assumptions about what our student (the person willing to try to develop an understanding of the new concept) already knows, what life experiences they have had, what they are interested in, and so on.

Now, this brings me to one really important distinction between explanations given ‘live’, in classrooms or any situation where human beings are in a room together, so that immediate (verbal and non-verbal) feedback is available, versus explanations given in books, videos, games and so on. Kind of by definition, the latter are informed only by the explainer’s ‘best guess’ about the characteristics of the ‘typical’ audience member, and no revision or improvement of the explanation in response to immediate feedback is possible. Simply, this is an explanation largely shorn of the process of explaining.

I’m interested in the implications of this idea for my own research using interactive simulations – although that has all occurred in classrooms with live teachers – and in its implications for things like the ‘flipped classroom’, which rely to a very large extent on explanations given in videos.

Categories
General

Explanation in Science and in Science Education

I feel kind of dumb in only really coming to realise this properly now, after working with and writing about explanations for well over a decade, but there is a basic qualitative difference between explanations in science and in science education. They are different kinds of things that have different purposes.

I think perhaps Treagust and Harrison’s interesting work from 1999 and 2000, which was some of the first I read, might have got me off on the wrong track. It talks about the differences between verbal explanations of concepts in, for example, scientific papers versus in science lessons, as well as the differences between these science teaching explanations and ‘everyday explanations’.

They are useful directions, but those three things are all the same kind of thing: verbal explanations, given from one person to another (or a group) with the goal of helping the latter develop a deeper understanding. They all involve, to one extent or another and in one way or another, teaching.

I’ve been reading David-Hillel Ruben’s ‘Explaining Explanation’ recently, and have come to realise that the kinds of explanations he is talking about, when he reviews the work of Plato, Aristotle, Mill and Hempel & Oppenheim, are not the same thing at all. These ‘explanations’ are the very foundations of science, and are much more like ‘the energy states of the valence shell electrons in sodium metal and chlorine gas explain the reaction between them (given that the activation energy is present)’. In other words, an explanation takes in the various laws or theories of science and explains why something happens as it does.

Now, a particular scientist may well give a verbal or written description of that explanation to another scientist, but that is what Ruben might call ‘an explication of an explanation’: it is not the explanation itself. The explanation is often causal – ‘this happens because this set of antecedent conditions and properties is met’.

Of course, Ruben’s book is academic philosophy, and the water gets very deep very quickly. Do causal explanations have to be determinate and certain or can they be probabilistic? Some explanations in quantum theory, for example, are not deterministic. Are all explanations necessarily causal?

There’s plenty to think about, but just realising that there are these two quite different senses in which ‘explanation’ is used is pretty important if I’m going to write a book on the topic! As it happens, this kind of scientific explanation will be a relatively minor facet of the book, since the focus is on explanation and explaining (and the next post in the series will talk a bit about why that distinction is useful) in science education. I want to know how teachers can create better explanations for the purposes of helping students to come to understand scientific concepts.

Why is this important? Not to boost Australia’s scores on international standardised tests! But because scientific concepts transform our perspective on the world, and empower our students to make positive changes.