US Election Free Zone

Although I’m awfully tempted, I see little point in adding my Australian-living-in-Canada two cents’ worth to the swirling masses of verbiage that permeate the wired world about the US election. Besides, I talk to lots of lovely, smart, thoughtful Americans online, but I’ve never yet known one of them to change his/her political allegiance, so talking about it seems kinda pointless. So I hereby declare this blog an Election-free Zone: I won’t talk about it (unless I do (yep, guess I’m a flipflopper (arrgghh, the pervasiveness of that content-less meme drives me insane (dang, there I go, talking about it)))). Anyway, it’ll all be over in 2 weeks – or possibly 2 months.


Broadband Networking as Craft

At a CANARIE conference in Vancouver last week, the most cyberpunk thing I saw was Open Net Craft.

It’s a set of instructions, available on a CD, that lets a small-business person or group in a remote or rural town in British Columbia bring their own broadband access into their community. It’s dedicated to the idea that this is done as craft, not as science or technology – it’s all about simple how-tos that a layperson can handle. There’s a real commitment to taking the tech, money and knowledge away from the ‘city slickers’ and expensive consultants and putting it in the hands of the communities – and it’s working.

“The street finds its own uses for things” – William Gibson

Philosophy Science Stuff

Bayesian statistics, certainty and philosophy of science

[Hmm, maybe a bit too theoretical for a first post – scare everyone off – but it’s cool theory, dammit!]

Most people, and most scientists, still tend to subscribe to something like logical positivism mixed with realism: the idea that there’s a real world out there, we have direct access to it through our senses, and we create theories by observing regularities in our observations (sometimes using artificially simplified observations called ‘scientific experiments’).

Sir Karl Popper challenged logical positivism on the grounds that inductivism – the process of making repeated observations and deriving a law from them – is inherently circular reasoning, since the only way to establish the principle of induction is by induction. (The problem had been known since Hume; Popper proposed a solution that was really more like side-stepping it.) Popper suggested that, rather than verifying theories by comparing them with experience, all we can ever do is falsify them: we can never prove a theory right, and the best we can do is try hard to prove it wrong, and fail for the time being.

Thomas Kuhn paid more attention to the history and sociology of scientific discovery, and suggested that science proceeds by a series of revolutions. Between revolutions, we have a period of ‘normal science’, characterised by a shared paradigm (he used the term in a number of different ways, but the one that has spread to everyday use means something like ‘connected set of theories’ or even ‘worldview’). Only when the paradigm stops explaining our experiences well is it replaced by a more powerful paradigm in a scientific revolution.

Both Popper and Kuhn suggest, in different ways, that certainty in scientific theories is unattainable – the best we can say, in Popper’s terms, is that our theory is our best current guess, and has so far withstood our best attempts to falsify it. In Kuhn’s terms, a paradigm is supported or challenged almost as in a popularity contest.

(Imre Lakatos and Paul Feyerabend are actually my favourite philosophers of science, but I don’t want to get into their ideas here too much ‘cos this is complicated enough. Feyerabend claimed that ‘anything goes’ in terms of scientific discovery – no set of rules can describe the process. Lakatos adapted and strengthened Popper’s approach.)

There’s a new approach to the philosophy of science that I find very intriguing, and that deals with our lack of certainty without consigning us to complete uncertainty. It’s based on Bayesian statistics. I’m really not sure at this stage whether the Bayesian approach is applicable to science (and, of course, ‘science’ is such a huge and diverse thing in itself that it may apply in some areas and not in others) in a genuine statistical sense, or more as a metaphor for how we might proceed.

Bayesian statistics allow us to make an initial estimate of the probability of something, then iteratively improve that estimate as more information becomes available. The simplest and best real-world example is Bayesian spam filters – the very cool K9 is one I’ve used. It checks each e-mail message I get, and decides whether it’s spam or not. I check whether it has made a false positive (identified a good message as spam) or a false negative (identified a spam message as good), and the software uses my corrections to improve its estimate of the probability that a given message is spam.
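To make the idea concrete, here’s a toy version of that loop – not K9’s actual code, just a minimal sketch of a naive Bayesian filter. It counts which words have appeared in my corrected spam and good messages, then uses Bayes’ rule (with a crude independence assumption between words, and add-one smoothing so unseen words don’t zero things out) to score a new message:

```python
# Toy Bayesian spam filter: a sketch of the idea, not any real filter's code.
# Each user correction updates word counts; new messages are scored with
# Bayes' rule under a naive word-independence assumption.
from collections import Counter

class ToySpamFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()
        self.spam_msgs = 0
        self.ham_msgs = 0

    def train(self, message, is_spam):
        """Incorporate one user correction into the word counts."""
        words = set(message.lower().split())
        if is_spam:
            self.spam_words.update(words)
            self.spam_msgs += 1
        else:
            self.ham_words.update(words)
            self.ham_msgs += 1

    def spam_probability(self, message):
        """Estimate P(spam | words) via Bayes' rule, with add-one smoothing."""
        p_spam = (self.spam_msgs + 1) / (self.spam_msgs + self.ham_msgs + 2)
        p_ham = 1 - p_spam
        for word in set(message.lower().split()):
            p_spam *= (self.spam_words[word] + 1) / (self.spam_msgs + 2)
            p_ham *= (self.ham_words[word] + 1) / (self.ham_msgs + 2)
        return p_spam / (p_spam + p_ham)

f = ToySpamFilter()
f.train("cheap pills buy now", True)        # I flagged this as spam
f.train("meeting notes attached", False)    # I confirmed this as good
print(f.spam_probability("buy cheap pills"))   # well above 0.5
print(f.spam_probability("meeting tomorrow"))  # below 0.5
```

The interesting part is that every correction I make shifts the probabilities a little, so the filter’s estimates keep improving without ever being declared final – which is exactly the flavour of reasoning I’m gesturing at below.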

In a similar way, we could make an initial estimate of the probability that a particular theory is correct, then iteratively use the experimental results as they come in to refine that estimate.
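As a sketch of what that refinement might look like, here’s Bayes’ rule applied to confidence in a theory. The likelihood numbers are made up purely for illustration: suppose the theory predicts a particular experimental outcome 90% of the time, while that outcome would occur only 30% of the time if the theory were wrong. Each experiment then updates our degree of belief:

```python
# Iterative Bayesian updating of confidence in a theory (illustrative
# likelihoods only: 0.9 and 0.3 are made-up numbers, not real data).

def update(prior, matched, p_if_true=0.9, p_if_false=0.3):
    """One round of Bayes' rule: posterior P(theory | experimental result)."""
    if matched:    # experiment matched the theory's prediction
        numerator = p_if_true * prior
        evidence = p_if_true * prior + p_if_false * (1 - prior)
    else:          # experiment contradicted the prediction
        numerator = (1 - p_if_true) * prior
        evidence = (1 - p_if_true) * prior + (1 - p_if_false) * (1 - prior)
    return numerator / evidence

belief = 0.5  # an initial, uncommitted estimate
for result in [True, True, False, True, True]:
    belief = update(belief, result)
    print(round(belief, 3))
```

Confidence climbs with each confirming result, drops when an experiment contradicts the prediction, and never quite reaches 1 – which sits nicely with Popper’s point that certainty is unattainable, while still letting us say we are more confident than we were yesterday.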