Tag: Karl Popper

Must science be testable?

Source: Aeon.co

Author: Massimo Pigliucci

Emphasis mine

The general theory of relativity is sound science; ‘theories’ of psychoanalysis, as well as Marxist accounts of the unfolding of historical events, are pseudoscience. This was the conclusion reached a number of decades ago by Karl Popper, one of the most influential philosophers of science. Popper was interested in what he called the ‘demarcation problem’, or how to make sense of the difference between science and non-science, and in particular science and pseudoscience. He thought long and hard about it and proposed a simple criterion: falsifiability. For a notion to be considered scientific it would have to be shown that, at least in principle, it could be demonstrated to be false, if it were, in fact, false.

Popper was impressed by Einstein’s theory because it had recently been spectacularly confirmed during the 1919 total eclipse of the Sun, so he proposed it as a paradigmatic example of good science. Here is how, in Conjectures and Refutations (1963), he differentiated between Einstein on the one side and Freud, Adler and Marx on the other:

Einstein’s theory of gravitation clearly satisfied the criterion of falsifiability. Even if our measuring instruments at the time did not allow us to pronounce on the results of the tests with complete assurance, there was clearly a possibility of refuting the theory.

The Marxist theory of history, in spite of the serious efforts of some of its founders and followers, ultimately adopted [a] soothsaying practice. In some of its earlier formulations … their predictions were testable, and in fact falsified. Yet instead of accepting the refutations the followers of Marx re-interpreted both the theory and the evidence in order to make them agree. In this way they rescued the theory from refutation … They thus gave a ‘conventionalist twist’ to the theory; and by this stratagem they destroyed its much advertised claim to scientific status.

The two psycho-analytic theories were in a different class. They were simply non-testable, irrefutable. There was no conceivable human behaviour which could contradict them … I personally do not doubt that much of what they say is of considerable importance, and may well play its part one day in a psychological science which is testable. But it does mean that those ‘clinical observations’ which analysts naively believe confirm their theory cannot do this any more than the daily confirmations which astrologers find in their practice.

As it turns out, Popper’s high regard for the crucial experiment of 1919 may have been a bit optimistic: when we look at the historical details we discover that the earlier formulation of Einstein’s theory actually contained a mathematical error that predicted twice as much bending of light by large gravitational masses like the Sun – the very thing that was tested during the eclipse. And if the theory had been tested in 1914 (as was originally planned), it would have been (apparently) falsified. Moreover, there were some significant errors in the 1919 observations, and one of the leading astronomers who conducted the test, Arthur Eddington, may actually have cherry picked his data to make them look like the cleanest possible confirmation of Einstein. Life, and science, are complicated.

This is all well and good, but why should something written near the beginning of the last century by a philosopher – however prominent – be of interest today? Well, you might have heard of string theory. It’s something that the fundamental physics community has been playing around with for a few decades now, in its pursuit of what Nobel physicist Steven Weinberg grandly called ‘a theory of everything’. It isn’t really a theory of everything, and in fact, technically, string theory isn’t even a theory, not if by that name one means mature conceptual constructions, such as the theory of evolution, or that of continental drift. In fact, string theory is better described as a general framework – the most mathematically sophisticated one available at the moment – for resolving a fundamental problem in modern physics: general relativity and quantum mechanics are highly successful scientific theories, and yet, when they are applied to certain problems, like the physics of black holes, or that of the singularity that gave origin to the universe, they give us sharply contrasting predictions.

Physicists agree that this means that one of the two theories, or both, must be wrong or incomplete. String theory is one attempt at reconciling the two by subsuming both into a broader theoretical framework. There is only one problem: while some in the fundamental physics community confidently argue that string theory is not only a very promising scientific theory, but pretty much ‘the only game in town,’ others scornfully respond that it isn’t even science, since it doesn’t make contact with the empirical evidence: vibrating superstrings, multiple folded dimensions of space-time and other features of the theory are impossible to test experimentally, and they are the mathematical equivalent of metaphysical speculation. And metaphysics isn’t a complimentary word in the lingo of scientists. Surprisingly, the ongoing, increasingly public and acerbic diatribe often centres on the ideas of one Karl Popper. What, exactly, is going on?

I had a front row seat at one round of such, shall we say, frank discussions last year, when I was invited to Munich to participate in a workshop on the status of fundamental physics, and particularly on what some refer to as ‘the string wars’. The organiser, Richard Dawid, of the University of Stockholm, is a philosopher of science with a strong background in theoretical physics. He is also a proponent of a highly speculative, if innovative, type of epistemology that supports the efforts of string theorists and aims at shielding them from the accusation of engaging in flights of mathematical fancy decoupled from any real science. My role there was to make sure that participants – an eclectic mix of scientists and philosophers, with a Nobel laureate thrown in – were clear on something I teach in my introductory course in philosophy of science: what exactly Popper said and why, since some of those physicists had hurled accusations at their critical colleagues, loudly advocating the ejection of the very idea of falsification from scientific practice.

In the months preceding the workshop, a number of high profile players in the field had been using all sorts of means – from manifesto-type articles in the prestigious Nature magazine to Twitter – to pursue a no-holds-barred public relations campaign to wrestle, or retain, control of the soul of contemporary fundamental physics. Let me give you a taste of the exchange, to set the mood: ‘The fear is that it would become difficult to separate such ‘science’ from New Age thinking, or science fiction,’ said George Ellis, chastising the pro-string party; to which Sabine Hossenfelder added: ‘Post-empirical science is an oxymoron.’ Peter Galison made crystal clear what the stakes are when he wrote: ‘This is a debate about the nature of physical knowledge.’ On the other side, however, cosmologist Sean Carroll tweeted:

‘My real problem with the falsifiability police is: we don’t get to demand ahead of time what kind of theory correctly describes the world,’ adding ‘[Falsifiability is] just a simple motto that non-philosophically-trained scientists have latched onto.’ Finally (but there is more, much more, out there), Leonard Susskind mockingly introduced the neologism ‘Popperazzi’ to label an extremely naive (in his view) way of thinking about how science works.

This surprisingly blunt – and very public – talk from prestigious academics is what happens when scientists help themselves to, or conversely categorically reject, philosophical notions that they plainly have not given sufficient thought to. In this case, it was Popper’s philosophy of science and its application to the demarcation problem. What makes this particularly ironic for someone like me, who started his academic career as a scientist (evolutionary biology) and eventually moved to philosophy after a constructive midlife crisis, is that a good number of scientists nowadays – and especially physicists – don’t seem to hold philosophy in particularly high regard. Just in the last few years Stephen Hawking has declared philosophy dead, Lawrence Krauss has quipped that philosophy reminds him of that old Woody Allen joke, ‘those that can’t do, teach, and those that can’t teach, teach gym,’ and science popularisers Neil deGrasse Tyson and Bill Nye have both wondered aloud why any young man would decide to ‘waste’ his time studying philosophy in college.


This is a rather novel, and by no means universal, attitude among physicists. Compare the above contemptuousness with what Einstein himself wrote to his friend Robert Thornton in 1944 on the same subject: ‘I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today – and even professional scientists – seem to me like somebody who has seen thousands of trees but has never seen a forest. A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is – in my opinion – the mark of distinction between a mere artisan or specialist and a real seeker after truth.’ By Einstein’s standard, then, there are a lot of artisans but comparatively few seekers of truth among contemporary physicists!

To put things in perspective, of course, Einstein’s opinion of philosophy may not have been representative even then, and certainly modern string theorists are a small group within the physics community, and string theorists on Twitter are an even smaller, possibly more voluble subset within that group. The philosophical noise they make is likely not representative of what physicists in general think and say, but it matters all the same precisely because they are so prominent; those loud debates on social media and in the popular science outlets define how much of the public perceives physics, and even how many physicists perceive the big issues of their field.

That said, the publicly visible portion of the physics community nowadays seems split between people who are openly dismissive of philosophy and those who think they got the pertinent philosophy right but their ideological opponents haven’t. At stake isn’t just the usually tiny academic pie, but public appreciation of and respect for both the humanities and the sciences, not to mention millions of dollars in research grants (for the physicists, not the philosophers). Time, therefore, to take a more serious look at the meaning of Popper’s philosophy and why it is still very much relevant to science, when properly understood.

As we have seen, Popper’s message is deceptively simple, and – when repackaged in a tweet – has in fact deceived many a smart commentator into underestimating the sophistication of the underlying philosophy. If one were to turn that philosophy into a bumper sticker slogan it would read something like: ‘If it ain’t falsifiable, it ain’t science, stop wasting your time and money.’

But good philosophy doesn’t lend itself to bumper sticker summaries, so one cannot stop there and pretend that there is nothing more to say. Popper himself changed his mind throughout his career about a number of issues related to falsification and demarcation, as any thoughtful thinker would do when exposed to criticisms and counterexamples from his colleagues. For instance, he initially rejected any role for verification in establishing scientific theories, thinking that it was far too easy to ‘verify’ a notion if one were actively looking for confirmatory evidence. Sure enough, modern psychologists have a name for this tendency, common to laypeople as well as scientists: confirmation bias.

Nonetheless, later on Popper conceded that verification – especially of very daring and novel predictions – is part of a sound scientific approach. After all, the reason Einstein became a scientific celebrity overnight after the 1919 total eclipse is precisely because astronomers had verified the predictions of his theory all over the planet and found them in satisfactory agreement with the empirical data. For Popper this did not mean that the theory of general relativity was ‘true,’ but only that it survived to fight another day. Indeed, nowadays we don’t think the theory is true, because of the above mentioned conflicts, in certain domains, with quantum mechanics. But it has withstood a very good number of high stakes challenges over the intervening century, and its most recent confirmation came just a few months ago, with the first detection of gravitational waves.


Popper also changed his mind about the potential, at least, for a viable Marxist theory of history (and about the status of the Darwinian theory of evolution, concerning which he was initially skeptical, thinking – erroneously – that the idea was based on a tautology). He conceded that even the best scientific theories are often somewhat shielded from falsification because of their connection to ancillary hypotheses and background assumptions. When one tests Einstein’s theory using telescopes and photographic plates directed at the Sun, one is really simultaneously putting to the test the focal theory, plus the theory of optics that goes into designing the telescopes, plus the assumptions behind the mathematical calculations needed to analyse the data, plus a lot of other things that scientists simply take for granted and assume to be true in the background, while their attention is trained on the main theory. But if something goes wrong and there is a mismatch between the theory of interest and the pertinent observations, this isn’t enough to immediately rule out the theory, since a failure in one of the ancillary assumptions might be to blame instead. That is why scientific hypotheses need to be tested repeatedly and under a variety of conditions before we can be reasonably confident of the results.

Popper’s initial work pretty much single-handedly put the demarcation problem on the map, prompting philosophers to work on the development of a philosophically sound account of both what science is and is not. That lasted until 1983, when Larry Laudan published a highly influential paper entitled ‘The demise of the demarcation problem,’ in which he argued that demarcation projects were actually a waste of time for philosophers, since – among other reasons – it is unlikely to the highest degree that anyone will ever be able to come up with small sets of necessary and jointly sufficient conditions to define ‘science,’ ‘pseudoscience’ and the like. And without such sets, Laudan argued, the quest for any principled distinction between those activities is hopelessly Quixotic.

‘Necessary and jointly sufficient’ is logical-philosophical jargon, but it is important to see what Laudan meant. He thought that Popper and others had been trying to provide precise definitions of science and pseudoscience, similar to the definitions used in elementary geometry: a triangle, for instance, is whatever geometrical figure has the sum of its internal angles equal to 180 degrees. Having that property is both necessary (because without it the figure in question is not a triangle) and sufficient (because that’s all we need to know in order to confirm that we are, indeed, dealing with a triangle). Laudan argued – correctly – that no such solution is ever going to be found to the demarcation problem, simply because concepts like ‘science’ and ‘pseudoscience’ are complex, multidimensional, and inherently fuzzy, not admitting of sharp boundaries. In a sense, physicists complaining about ‘the Popperazzi’ are making the same charge as Laudan: Popper’s criterion of falsification appears to be far too blunt an instrument not only to discriminate between science and pseudoscience (which ought to be relatively easy), but a fortiori to separate sound from unsound science within an advanced field like theoretical physics.

Yet Popper wasn’t quite as naive as Laudan, Carroll, Susskind, and others make him out to be. Nor is the demarcation problem quite as hopeless as all that. Which is why a number of authors – including myself and my longtime collaborator, Maarten Boudry – have more recently maintained that Laudan was too quick to dismiss the demarcation problem, and that perhaps Twitter isn’t the best place for nuanced discussions in the philosophy of science.

The idea is that there are pathways forward in the study of demarcation that become available if one abandons the requirement for necessary and jointly sufficient conditions, which was never strictly enforced even by Popper. What, then, is the alternative? To treat science, pseudoscience, etc. as Wittgensteinian ‘family resemblance’ concepts instead. Ludwig Wittgenstein was another highly influential 20th century philosopher, who hailed, like Popper himself, from Vienna, though the two could not have been more different in terms of socio-economic background, temperament, and philosophical interests. (If you want to know just how different, check out the delightful Wittgenstein’s Poker (2001) by journalists David Edmonds and John Eidinow.)

Wittgenstein never wrote about philosophy of science, let alone fundamental physics (or even Marxist theories of history). But he was very much interested in language, its logic, and its uses. He pointed out that there are many concepts that we seem to be able to use effectively, and that yet are not amenable to the sort of clear definition that Laudan was looking for. His favorite example was the deceptively simple concept of ‘game.’ If you try to arrive at a definition of games of the kind that works for triangles, your effort will be endlessly frustrated (try it out, it makes for a nice parlour, ahem, game). Wittgenstein wrote: ‘How should we explain to someone what a game is? I imagine that we should describe games to him, and we might add: ‘This and similar things are called games.’ And do we know any more about it ourselves? Is it only other people whom we cannot tell exactly what a game is? […] But this is not ignorance. We do not know the boundaries because none have been drawn […] We can draw a boundary for a special purpose. Does it take that to make the concept usable? Not at all!’

The point is that in a lot of cases we don’t discover pre-existing boundaries, as if games and scientific disciplines were Platonic ideal forms that existed in a timeless metaphysical dimension. We make up boundaries for specific purposes and then we test whether the boundaries are actually useful for whatever purposes we drew them. In the case of the distinction between science and pseudoscience, we think there are important differences, so we try to draw tentative borders in order to highlight them. Surely one would give up too much, as either a scientist or a philosopher, if one were to reject the strongly intuitive idea that there is something fundamentally different between, say, astrology and astronomy. The question is where, approximately, the difference lies.


Similarly, many of the participants in the Munich workshop, and the ‘string wars’ more generally, did feel that there is an important distinction between fundamental physics as it is commonly conceived and what string theorists are proposing. Richard Dawid objects to the (admittedly easily derisible) term ‘post-empirical science,’ preferring instead ‘non-empirical theory assessment’, but whatever one calls it, he is aware that he and his fellow travellers are proposing a major departure from the way we have done science since the time of Galileo. True, the Italian physicist himself largely engaged in theoretical arguments and thought experiments (he likely never did drop balls from the leaning tower of Pisa), but his ideas were certainly falsifiable and have been, over and over, subjected to experimental tests (most spectacularly by David Scott during the Apollo 15 Moon mission).

The broader question then is: are we on the verge of developing a whole new science, or is this going to be regarded by future historians as a temporary stalling of scientific progress? Alternatively, is it possible that fundamental physics is reaching an end not because we’ve figured out everything we wanted to figure out, but because we have come to the limits of what our brains and technologies can possibly do? These are serious questions that ought to be of interest not just to scientists and philosophers, but to the public at large (the very same public that funds research in fundamental physics, among other things).

What is weird about the string wars and the concomitant use and misuse of philosophy of science is that both scientists and philosophers have bigger targets to jointly address for the sake of society, if only they could stop squabbling and focus on what their joint intellectual forces may accomplish. Rather than laying into each other in the crude terms sketched above, they should work together not just to forge a better science, but to counter true pseudoscience: homeopaths and psychics, just to mention a couple of obvious examples, keep making tons of money by fooling people, and damaging their physical and mental health. Those are worthy targets of critical analysis and discourse, and it is the moral responsibility of a public intellectual or academic – be they a scientist or a philosopher – to do their best to improve as much as possible the very same society that affords them the luxury of discussing esoteric points of epistemology or fundamental physics.

 

See: https://aeon.co/essays/the-string-theory-wars-show-us-how-science-needs-philosophy

What is the Scientific Method?

Source: Physics.about.com

Author: Andrew Zimmerman Jones

Emphasis Mine

(N.B.: “Scientists don’t believe, we test hypotheses.” John Gribbin)

Science is, at its core, a method of looking at questions about the physical world and reaching conclusions that are as consistent as possible with the physical reality. We reach this end through experimentation and observation, and by being willing to let go of an idea if it doesn’t match with reality.

The scientific method is a set of techniques used by the scientific community to investigate natural phenomena by providing an objective framework in which to make scientific inquiry and analyze the data to reach a conclusion about that inquiry.
Steps of the Scientific Method
The goals of the scientific method are uniform, but the method itself is not necessarily formalized among all branches of science. It is most generally expressed as a series of discrete steps, although the exact number and nature of the steps varies depending upon the source. The scientific method is not a recipe, but rather an ongoing cycle that is meant to be applied with intelligence, imagination, and creativity. Frequently, some of these steps will take place simultaneously, in a different order, or be repeated as the experiment is refined, but this is the most general and intuitive sequence. As expressed by Shawn Lawrence Otto in Fool Me Twice: Fighting the Assault on Science in America:

There is no one “scientific method”; rather, there is a collection of strategies that have proven effective in answering our questions about how things in nature really work.

Depending on the source, the exact steps will be described somewhat differently, but the following are a good general guideline for how the scientific method is often applied.

1. Ask a question – determine a natural phenomenon (or group of phenomena) that you are curious about and would like to explain or learn more about, then ask a specific question to focus your inquiry.
2. Research the topic – this step involves learning as much about the phenomenon as you can, including by studying the previous work of others in the area.
3. Formulate a hypothesis – using the knowledge you have gained, formulate a hypothesis about a cause or effect of the phenomenon, or the relationship of the phenomenon to some other phenomenon.
4. Test the hypothesis – plan and carry out a procedure for testing the hypothesis (an experiment) by gathering data.
5. Analyze the data – use proper mathematical analysis to see if the results of the experiment support or refute the hypothesis.

If the data does not support the hypothesis, it must be rejected or modified (N.B.: that is Reject the hypothesis, NOT the data) and re-tested. Frequently, the results of the experiment are compiled in the form of a lab report (for typical classroom work) or a paper (in the case of publishable academic research).

It is also common for the results of the experiment to provide an opportunity for more questions about the same phenomenon or related phenomena, which begins the process of inquiry over again with a new question.
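As an illustration only (not something from the original article), one pass through the cycle described above can be sketched in Python. The fair-coin hypothesis, the simulated experiment, and the crude decision rule are all invented for this sketch:

```python
import random

def run_experiment(n_flips=1000):
    """Toy experiment: flip a (possibly biased) coin n_flips times
    and report the number of heads. Here the unknown 'physical
    reality' is a fair coin."""
    true_p = 0.5
    return sum(random.random() < true_p for _ in range(n_flips))

def hypothesis_prediction(n_flips, p_hypothesized):
    """The hypothesis predicts the expected number of heads."""
    return n_flips * p_hypothesized

def supports(observed, predicted, n_flips, tolerance_sd=3):
    """Crude decision rule: is the observation within ~3 standard
    deviations of the prediction? (A real analysis would use a
    proper statistical test.)"""
    sd = (n_flips * 0.25) ** 0.5  # maximum binomial sd, conservative
    return abs(observed - predicted) <= tolerance_sd * sd

n = 1000
p_hyp = 0.5                        # formulate: 'the coin is fair'
heads = run_experiment(n)          # test: gather data
pred = hypothesis_prediction(n, p_hyp)
if supports(heads, pred, n):       # analyze
    print("Data consistent with the hypothesis; it survives (for now).")
else:
    print("Hypothesis refuted or in need of modification; revise and re-test.")
```

Note that a "pass" here rejects or retains the hypothesis, never the data, exactly as the N.B. above insists.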
Key Elements of the Scientific Method
The goal of the scientific method is to get results that accurately represent the physical processes taking place in the phenomenon. To that end, it emphasizes a number of traits to ensure that the results it gets are valid with respect to the natural world.

objective – the scientific method intends to remove personal and cultural biases by focusing on objective testing procedures.
consistent – the laws of reasoning should be used to make hypotheses that are consistent with broader, currently known scientific laws; even in rare cases where the hypothesis is that one of the broader laws is incorrect or incomplete, the hypothesis should be composed to challenge only one such law at a time.
observable – the hypothesis presented should allow for experiments with observable and measurable results.
pertinent – all steps of the process should be focused on describing and explaining observed phenomena.
parsimonious – only a limited number of assumptions and hypothetical entities should be proposed in a given theory, as stated in Occam’s Razor.
falsifiable – the hypothesis should be something which can be proven incorrect by observable data within the experiment, or else the experiment is not useful in supporting the hypothesis. (This aspect was most prominently illuminated by the philosopher of science Karl Popper.)
reproducible – the test should be able to be reproduced by other observers with trials that extend indefinitely into the future.

It is useful to keep these traits in mind when developing a hypothesis and testing procedure.
Conclusion
Hopefully this introduction to the scientific method has provided you with an idea of the significant effort that scientists go to in order to make sure their work is free from bias, inconsistencies, and unnecessary complications, as well as the paramount feat of creating a theoretical structure that accurately describes the natural world. When doing your own work in physics, it is useful to reflect regularly on the ways in which that work exemplifies the principles of the scientific method.

In common usage, the words hypothesis, model, theory, and law have different interpretations and are at times used without precision, but in science they have very exact meanings.

Hypothesis

Perhaps the most difficult and intriguing step is the development of a specific, testable hypothesis. A useful hypothesis enables predictions by applying deductive reasoning, often in the form of mathematical analysis.

It is a limited statement regarding the cause and effect in a specific situation, which can be tested by experimentation and observation or by statistical analysis of the probabilities from the data obtained. The outcome of the hypothesis test should be currently unknown, so that the results can provide useful data regarding the validity of the hypothesis.
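To make "statistical analysis of the probabilities" concrete, here is a minimal permutation test, an illustration of mine rather than the author's. It asks how often chance alone would produce a difference between two groups as large as the one observed; all measurements below are invented:

```python
import random

random.seed(0)

# Invented measurements: does treatment B give higher readings than A?
group_a = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7]
group_b = [10.4, 10.6, 10.3, 10.5, 10.7, 10.2]

observed = sum(group_b) / len(group_b) - sum(group_a) / len(group_a)

# Permutation test: shuffle the group labels many times and count how
# often a difference at least as large as the observed one arises by
# chance alone.
pooled = group_a + group_b
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:6], pooled[6:]
    if sum(b) / 6 - sum(a) / 6 >= observed:
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f}, p = {p_value:.4f}")
```

A small p-value means the observed effect would rarely occur under the "no difference" hypothesis, so that hypothesis is refuted (at the chosen significance level) rather than the data being discarded.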

Sometimes a hypothesis is developed that must wait for new knowledge or technology to be testable. The concept of atoms was proposed by the ancient Greeks, who had no means of testing it. Centuries later, when more knowledge became available, the hypothesis gained support and was eventually confirmed, though it has had to be amended many times over the years. Atoms are not indivisible, as the Greeks supposed.

Model

A model is used for situations when it is known that the hypothesis has a limitation on its validity. The Bohr model of the atom, for example, depicts electrons circling the atomic nucleus in a fashion similar to planets in the solar system. This model is useful in determining the energies of the quantum states of the electron in the simple hydrogen atom, but it by no means represents the true nature of the atom.

Theory & Law

A scientific theory or law represents a hypothesis (or group of related hypotheses) which has been confirmed through repeated testing, almost always conducted over a span of many years. Generally, a law uses a handful of fundamental concepts and equations to define the rules governing a set of phenomena.

Scientific Paradigms

Once a scientific theory is established, it is very hard to get the scientific community to discard it. In physics, the concept of ether as a medium for light wave transmission ran into serious opposition in the late 1800s, but it was not disregarded until the early 1900s, when Einstein proposed alternate explanations for the wave nature of light that did not rely upon a medium for transmission.

The philosopher of science Thomas Kuhn introduced the term scientific paradigm to describe the working set of theories under which science operates. He did extensive work on the scientific revolutions that take place when one paradigm is overturned in favor of a new set of theories. His work suggests that the very nature of science changes when these paradigms are significantly different. The nature of physics prior to relativity and quantum mechanics is fundamentally different from that after their discovery, just as biology prior to Darwin’s theory of evolution is fundamentally different from the biology that followed it. The very nature of the inquiry changes.

One consequence of the scientific method is to try to maintain consistency in the inquiry when these revolutions occur and to avoid attempts to overthrow existing paradigms on ideological grounds.

Occam’s Razor

One principle of note in regards to the scientific method is Occam’s Razor (alternately spelled Ockham’s Razor), which is named after the 14th century English logician and Franciscan friar William of Ockham. Occam did not create the concept – the work of Thomas Aquinas and even Aristotle referred to some form of it. The name was first attributed to him (to our knowledge) in the 1800s, indicating that he must have espoused the philosophy enough that his name became associated with it.

The Razor is often stated in Latin as:

entia non sunt multiplicanda praeter necessitatem

or, translated into English:

entities should not be multiplied beyond necessity

Occam’s Razor indicates that the simplest explanation that fits the available data is the preferable one. Assuming that two presented hypotheses have equal predictive power, the one which makes the fewest assumptions and posits the fewest hypothetical entities takes precedence. This appeal to simplicity has been adopted by most of science, and is invoked in this popular quote attributed to Albert Einstein:

Everything should be made as simple as possible, but not simpler.

It is worth noting that Occam’s Razor does not prove that the simpler hypothesis is, indeed, the true explanation of how nature behaves. Scientific principles should be as simple as possible, but that is no proof that nature itself is simple.

However, when a more complex system is at work, there is generally some element of the evidence which does not fit the simpler hypothesis, so Occam’s Razor is rarely wrong: it applies only to hypotheses of equal predictive power, and predictive power matters more than simplicity.

The Consolation of Philosophy

From: Scientific American

By: Lawrence Krauss

Recently, as a result of my most recent book, A Universe from Nothing, I participated in a wide-ranging and in-depth interview for The Atlantic on questions ranging from the nature of nothing to the best way to encourage people to learn about the fascinating new results in cosmology.  The interview was based on the transcript of a recorded conversation and was hard-hitting (and, from my point of view, the interviewer was impressive in his depth), but my friend Dan Dennett recently wrote to me to say that it has been interpreted (probably because it included some verbal off-the-cuff remarks, rather than carefully crafted written responses) by a number of his colleagues and readers as implying a blanket condemnation of philosophy as a discipline, something I had not intended.

Out of respect for Dan and those whom I may have unjustly offended, and because the relationship between physics and philosophy seems to be an area which has drawn some attention of late, I thought I would take the opportunity to write down, as coherently as possible, my own views on several of these issues, as a physicist and cosmologist.  As I should also make clear (and as numerous individuals have not hesitated to comment upon already), I am not a philosopher, nor do I claim to be an expert on philosophy.   Because of a lifetime of activity in the field of theoretical physics, ranging from particle physics to general relativity to astrophysics, I do claim however to have some expertise in the impact of philosophy on my own field.  In any case, the level of my knowledge, and ignorance, will undoubtedly become clearer in what follows.

As both a general reader and as someone who is interested in ideas and culture, I have great respect for and have learned a great deal from a number of individuals who currently classify themselves as philosophers. Of course as a young person I read the classical philosophers, ranging from Plato to Descartes, but as an adult I have gained insights into the implications of brain functioning and developments in evolutionary psychology for understanding human behavior from colleagues such as Dan Dennett and Pat Churchland.  I have been forced to re-examine my own attitudes towards various ethical issues, from the treatment of animals to euthanasia, by the cogent and thoughtful writing of Peter Singer.   And reading the work of my friend A.C. Grayling has immeasurably heightened my understanding and appreciation of the human experience.

What I find common and so stimulating about the philosophical efforts of these intellectual colleagues is the way they thoughtfully reflect on human knowledge, amassed from empirical explorations in areas ranging from science to history, to clarify issues that are relevant to making decisions about how to function more effectively and happily as an individual, and as a member of a society.

As a practicing physicist however, the situation is somewhat different.  There, I, and most of the colleagues with whom I have discussed this matter, have found that philosophical speculations about physics and the nature of science are not particularly useful, and have had little or no impact upon progress in my field.  Even in several areas associated with what one can rightfully call the philosophy of science I have found the reflections of physicists to be more useful.  For example, on the nature of science and the scientific method, I have found the insights offered by scientists who have chosen to write concretely about their experience and reflections, from Jacob Bronowski, to Richard Feynman, to Francis Crick, to Werner Heisenberg, Albert Einstein, and Sir James Jeans, to have provided me with a better practical guide than the work of even the most significant philosophical writers of whom I am aware, such as Karl Popper and Thomas Kuhn.  I admit that this could primarily reflect my own philosophical limitations, but I suspect this experience is more common than not among my scientific colleagues.

The one area of physics that has probably sparked the most ‘philosophical’ interest in recent times is the ‘measurement’ problem in quantum mechanics.  How one moves from the remarkable and completely non-intuitive microscopic world where quantum mechanical indeterminacy reigns supreme and particles are doing many apparently inconsistent things at the same time, and are not localized in space or time, to the ordered classical world of our experience where baseballs and cannonballs have well-defined trajectories, is extremely subtle and complicated and the issues involved have probably not been resolved to the satisfaction of all practitioners in the field.   And when one tries to apply the rules of quantum mechanics to an entire universe, in which a separation between observer and observed is not possible, the situation becomes even murkier.

However, even here, the most useful progress has been made, again in my experience, by physicists.  The work of individuals such as Jim Hartle, and Murray Gell-Mann, Yakir Aharonov, Asher Peres, John Bell and others like them, who have done careful calculations associated with quantum measurement, has led to great progress in our appreciation of the subtle and confusing issues of translating an underlying quantum reality into the classical world we observe.   There have been people who one can classify as philosophers who have contributed usefully to this discussion, such as Abner Shimony, but when they have, they have been essentially doing physics, and have published in physics journals (Shimony’s work as a physicist is the work I am aware of).  As far as the physical universe is concerned, mathematics and experiment, the tools of theoretical and experimental physics appear to be the only effective ways to address questions of principle.

Which brings me full circle to the question of nothing, and my own comments regarding the progress of philosophy in that regard.   When it comes to the real operational issues that govern our understanding of physical reality, ontological definitions of classical philosophers are, in my opinion, sterile.  Moreover, arguments based on authority, be it Aristotle, or Leibniz, are irrelevant.  In science, there are no authorities, and appeals to quotes from brilliant scholars who lived before we knew the Earth orbited the Sun, or that space can be curved, or that dark matter or dark energy exist do not generally inform our current understanding of nature.  Empirical explorations ultimately change our understanding of which questions are important and fruitful and which are not.

For a scientist, the fascination normally associated with the classically phrased question “why is there something rather than nothing?” is really contained in a specific operational question.  That question can be phrased as follows:  How can a universe full of galaxies and stars, and planets and people, including philosophers, arise naturally from an initial condition in which none of these objects—no particles, no space, and perhaps no time—may have existed?  Put more succinctly perhaps: Why is there ‘stuff’, instead of empty space?  Why is there space at all?  There may be other ontological questions one can imagine but I think these are the ‘miracles’ of creation that are so non-intuitive and remarkable, and they are also the ‘miracles’ that physics has provided new insights about, and spurred by amazing discoveries, has changed the playing field of our knowledge.  That we can even have plausible answers to these questions is worth celebrating and sharing more broadly.

In this regard, there is a class of philosophers, some theologically inspired, who object to the very fact that scientists might presume to address any version of this fundamental ontological issue.  Recently one review of my book by such a philosopher, which I think motivated the questions in the Atlantic interview, argued not only that one particular version of the nothing described by modern physics was not relevant.  Even more surprisingly, this author claimed with apparent authority (surprising because the author apparently has some background in physics) something that is simply wrong:  that the laws of physics can never dynamically determine which particles and fields exist and whether space itself exists, or more generally what the nature of existence might be.  But that is precisely what is possible in the context of modern quantum field theory in curved spacetime, where a phenomenon called ‘spontaneous symmetry breaking’ can determine dynamically which forces manifest themselves on large scales and which particles exist as stable states, and whether space itself can grow exponentially or not.  Within the context of quantum gravity the same is presumably true for which sorts of universes can appear and persist. Within the context of string theory, a similar phenomenon might ultimately determine (indeed if the theory is ever to become predictive, it must determine) why universes might spontaneously arise with 4 large spacetime dimensions and not 5 or 6.   One cannot tell from the review if the author actually read the book (since no mention of the relevant cosmology is made) or simply misunderstood it.

Theologians and both Christian and Muslim apologists have unfortunately since picked up on the ill-conceived claims of that review to argue that physics can therefore never really address the most profound ‘theological’ questions regarding our existence.   (To be fair, I regret sometimes lumping all philosophers in with theologians because theology, aside from those parts that involve true historical or linguistic scholarship, is not a credible field of modern scholarship.)  It may be true that we can never fully resolve the infinite regression of ‘why questions’ that result whenever one assumes, a priori, that our universe must have some pre-ordained purpose.  Or, to frame things in a more theological fashion: ‘Why is our Universe necessary rather than contingent?’.

One answer to this latter question can come from physics.  If all possibilities—all universes with all laws—can arise dynamically, and if anything that is not forbidden must arise, then this implies that both nothing and something must exist, and we will of necessity find ourselves amidst something.  A universe like ours is, in this context, guaranteed to arise dynamically, and we are here because we could not ask the question if our universe weren’t here.   It is in this sense that I argued that the seemingly profound question of why there is something rather than nothing might be actually no more profound than asking why some flowers are red and others blue.    I was surprised that this very claim was turned around by the reviewer as if it somehow invalidated this possible physical resolution of the something versus nothing conundrum.

Instead, sticking firm to the classical ontological definition of nothing as “the absence of anything”—whatever this means—so essential to theological, and some subset of philosophical intransigence, strikes me as essentially sterile, backward, useless and annoying.   If “something” is a physical quantity, to be determined by experiment, then so is ‘nothing’.  It may be that even an eternal multiverse in which all universes and laws of nature arise dynamically will still leave open some ‘why’ questions, and therefore never fully satisfy theologians and some philosophers.   But focusing on that issue and ignoring the remarkable progress we can make toward answering perhaps the most miraculous aspect of the something from nothing question—understanding why there is ‘stuff’ and not empty space, why there is space at all, and how both stuff and space and even the forces we measure could arise from no stuff and no space—is, in my opinion, impotent, and useless.   It was in that sense—the classical ontological claim about the nature of some abstract nothing, compared to the physical insights about this subject that have developed—that I made the provocative, and perhaps inappropriately broad statement that this sort of philosophical speculation has not led to any progress over the centuries.

What I tried to do in my writing on this subject is carefully attempt to define precisely what scientists operationally mean by nothing, and to differentiate between what we know, and what is merely plausible, and what we might be able to probe in the future, and what we cannot.  The rest is, to me, just noise.

So, to those philosophers I may have unjustly offended by seemingly blanket statements about the field, I apologize.  I value your intelligent conversation and the insights of anyone who thinks carefully about our universe and who is willing to guide their thinking based on the evidence of reality.   To those who wish to impose their definition of reality abstractly, independent of emerging empirical knowledge and the changing questions that go with it, and call that either philosophy or theology, I would say this:  Please go on talking to each other, and let the rest of us get on with the goal of learning more about nature.

Emphasis Mine

see: http://www.scientificamerican.com/article.cfm?id=the-consolation-of-philos