On asking questions

“Many people over time have wasted their lives pondering questions which they would not have been asking themselves if only they had known more stuff about the world. In my model of the world, people need to rely on knowledge to ask good questions, and the more one knows about the world, the better one gets at asking the right questions about it. Thinking about stuff is different from knowing stuff, and the payoff schedule related to ‘knowing more stuff’ for most people will look very different from the payoff schedule related to ‘thinking more about stuff’. People in general have much less knowledge than they ought to have in order to support the opinions they already hold. … [If] you’re repeatedly engaging yourself in the activity of asking questions to which no answers exist, you’re in my mind… quite likely to be asking the wrong questions and to be wasting your time.”

I find myself very much in agreement with the position delineated above, though I’d like to quickly discuss some related points.

I would like to begin by noting that philosophers are not the only ones who are guilty of spending their time pondering questions to which there are possibly no answers. There are many scientists today who occupy themselves with questions to which no answers presently exist or are likely to exist in the foreseeable future (if ever). However, at least some of these questions are undeniably of vital importance if we wish to make progress in science — e.g., the interpretation of quantum mechanics to which you subscribe has unavoidable implications for your views on how research in sub-areas like quantum gravity should be undertaken. Some interpretations of quantum mechanics are of course less problematic than others, but suffice it to say that, at present, we simply do not have evidence strong enough in any direction to comfortably eliminate all interpretations but one.

Such epistemic uncertainty is partly attributable to the limits in our experimental capabilities — e.g., the existence of the Higgs boson took almost five decades to be experimentally verified. If only we could experimentally detect whether any kind of collapse actually takes place — then our worries about the nature of quantum mechanics might be assuaged! In response to such lamentations, one may propose that, instead of engaging in premature ruminations on theoretical inquiries, more brainpower (and funding) should just be committed to research in experimental physics instead. I think this suggestion is extremely sensible (and not just for the trivial reason that building sophisticated experimental equipment is obviously many orders of magnitude more expensive than purchasing stationery), but there are of course some caveats worth highlighting:

Firstly, experimentation is aimless in the absence of instruction from theoretical scientists. Currently, theoretical physicists have yet to work out the characteristics that would render each interpretation of quantum mechanics uniquely falsifiable. Additionally, there are crucial terms that still await elucidation by theoreticians in the form of formalised definitions — e.g., what does the notion of a wave-function collapse actually entail? If it turns out that there is no such collapse, what other tests should we perform to individually verify each of the non-collapse interpretations? Without answers to these questions, even if we had access to sufficiently advanced technology, we would still be at an utter loss as to what to look for. If no one had predicted the existence of Neptune and performed the requisite calculations to approximate its position, we would not have known the direction in which to point our telescopes; and if we had discovered Neptune fortuitously on the first try, it would have been due to luck and nothing more — and that is assuming we would have recognised it as Neptune in the first place.

Secondly, while our confidence in a theory doubtlessly increases once it receives the imprimatur of experimental confirmation, the lack thereof does not always spell doom for theoretical progress. E.g., to this day we have yet to obtain experimental confirmation that Hawking radiation is a real phenomenon, but physicists have nevertheless accepted it as very likely true, on the basis that it coheres superbly well with those parts of physics for which we do have extremely strong evidence. Of course, while coherence is desirable, it is not enough — experimentation should still be the final arbiter of the veracity of any scientific theory, which is why physicists continue to be interested in designing experiments that would allow them to observe Hawking radiation in action. This ties in with the point I made previously — theoretical work is necessary for informing us of the things to which we should be paying attention.

Thirdly, while it would be really nice if theoretical physicists also spent more time in the laboratory so that progress in experimental physics might be expedited, it is nonetheless highly probable that their comparative advantage simply lies elsewhere, and that much more would be achieved if they devoted their careers solely to the contemplation of theoretical issues.

It is important to note that the theoretical questions asked by these scientists might yield fruitful insights, or they might turn out to be completely wrong-headed. One problem is that, in general and not just in science, it is usually really difficult (if not impossible) to distinguish in advance between potentially fruitful questions and utterly misguided ones — after all, if we could always tell them apart, then there would probably be no need for asking questions at all! More often than not, our investigations will lead us to dead ends, and only a small handful of us would have the privilege of making breakthroughs — but it is often only in retrospect that we are able to realise which questions are worth asking and which are not. Knowledge progresses when we encourage people to ask lots of different questions and pursue myriad research paths, but doing so creates another problem — it is really not easy to figure out what stopping rules we should employ. If we have only the faintest idea of how inferentially distant the answers are from our existing knowledge, how would we know when to acknowledge that doing any more work on a given question is futile because a fundamental piece of the puzzle is missing?

Here is where the heuristic proposed in the quoted passage above becomes relevant: If your investigations consistently fail to yield any answers, you are probably asking the wrong questions. One approach would be to break them down into smaller and more manageable parts so that you may more easily check the assumptions that underlie each of them. You might discover that the intractability of your question is due to confused terminology, logical impossibility, scientific implausibility, or ignorance of some very essential information — or maybe even all of these. The importance of asking meaningful questions can hardly be overstated, and I can think of at least two things that one could do to improve the quality of one’s questions:

1. As pointed out in the quote, one should place more emphasis on acquiring knowledge about the world. The more knowledge you gain, the more you will come to know what you do not know — and this will inevitably help you (i) limit the scope of your question and (ii) come up with premises that square better with reality.

2. One should always strive for clarity and precision in one’s use of words. Doing so will ensure that the terms in your question are clearly defined, and that your question does not equivocate between different senses of the same word. Any question worth asking is worth the effort of careful formulation.

So far, I have very briefly written about how doing work on theoretical questions may be justified even when there is a dearth of experimental results. I have also quickly explained how one may improve one’s chances at asking good questions, insofar as ‘good’ is understood as ‘likely to generate useful insights about the world’. Now I would like to turn to an issue that belongs on the meta-level: The issue of understanding the motivation behind asking certain questions or framing them in certain ways. This becomes especially relevant when it comes to questions that involve potential attempts at manipulating or distorting our epistemic states — and such questions may not only be posed by others to us, but also by us to ourselves.

Consider political debates: A common tactic used by politicians is to rivet people’s attention to a less important question so that the more important ones will hopefully go unasked. E.g., by repeatedly questioning Obama’s patriotism and adherence to Christianity, the Republicans divert attention away from themselves — the media resources that could have gone into unearthing potential transgressions made by members of the Republican Party during Bush’s presidency now go into writing inane tabloid news about Obama instead. (I apologise if this example makes me seem partial to the Democrats — I assure you that I really do not care a whit about defining my political views along ideological or party lines.)

Politicians are not the only ones who make such dishonest moves — disingenuous people can be found in other walks of life as well. Dishonesty often occurs when gains would accrue to the questioner if he can convince his listeners that his query is worth pursuing. So a desperate and unethical scientist, in order to boost his chances of winning grants, might tell half-truths (if not outright lies) about why his research is focused on asking and answering really important questions.

The questioner and the listener need not be two different people — they could be the same individual, as is the case when we ponder questions to ourselves. While there is no lack of people who try to fool us, I suggest that dishonest thinking is most pernicious when we engage in it ourselves, because it is presumably more difficult to recognise deceit (albeit unintentional) in ourselves than in others. Consider Daniel Kahneman’s two-system model of how we think: thinking in System 1 is quick and intuitive, whereas thinking in System 2 is slow and deliberative. Kahneman noted that, whenever we are confronted with a complex question for which there is no immediate resolution, we tend to substitute it with a related question that is far easier to answer. E.g., if you are deciding what position to adopt on taxation policy, rather than studying the intricacies of microeconomics (as well as other relevant fields like sociology), you might just take a short-cut by asking yourself, “How do I feel about having my money taken away from me?” (Actually, this might account for the invention of the libertarian slogan that taxation is theft.)

Falling prey to such cognitive traps only serves to hinder your ability to understand the world as well as humanly possible, because you answer the questions you want to answer instead of the questions that you should be answering. Over time, it also erodes your willingness to grapple with complexities — for you would already be used to taking short-cuts, and engaging in deliberative thinking would become unappealingly effortful and time-consuming to you.

To become skilled at asking meaningful questions, then, you have to learn tirelessly about the world, strive fastidiously for perspicuity, and also get into the habit of asking yourself meta-level questions about why you have chosen to ask the questions you ask, and to frame them as you do.

Why I dislike arrogance/over-confidence

I apologise in advance for what you may perceive to be a preachy tone.

For the purposes of this article, I shall be using ‘arrogance’ and ‘over-confidence’ interchangeably. I shall also be limiting my discussion to intellectual arrogance, which manifests itself as an inflated sense of one’s own intelligence.

It is no exaggeration to say that arrogance is easily one of the qualities I dislike the most in a person, especially when I perceive said person to lack the intellectual calibre or achievements to warrant such arrogance. E.g., if you had as much astounding intellectual horsepower and as many ground-breaking accomplishments as Einstein did, then I would likely be more forgiving towards you. Needless to say, however, the overwhelming majority of people come nowhere close to being members of Einstein’s league, and thus hardly anyone possesses an intellect so blindingly brilliant that my admiration for him would outweigh any contempt I have for his arrogance. In fact, the farther you depart from such lofty echelons of human intellect, the more repulsed I am by your arrogance — this reaction is so strong, and so instinctive, that I immediately lose all interest in getting to know a person once I form an impression of unwarranted cockiness.

Someone I know has suggested that perhaps I detest over-confidence so strongly because it reminds me of the shakiness of my own confidence — i.e., I do not like being reminded of my own weaknesses. I disagree with this diagnosis. If it were correct, then I should dislike any display of confidence. But it is simply not the case that I dislike all confident people — it is specifically over-confidence that repels me. Confident people inspire me to want to emulate them; over-confident people make me want to mutilate them — with my chain-saw. (I am just joking, of course. I do not actually have a chain-saw. Besides, murder is illegal.)

Such invective aside, let me give a serious explanation of why I dislike arrogance so intensely. I shall begin by saying that I am an exceedingly curious person. I enjoy learning for its own sake. Admittedly, I am not terribly well-endowed intellectually, but I am accepting of this unfortunate fact — I just try harder to compensate for my lack of native gift with my efforts. Once I become deeply interested in a topic, I am often willing to spend the time required to look at its gory technical details. Always, without exception, the more knowledge I acquire, the better I come to understand both the magnitude and the content of what I do not know. There is a great plenitude of things which we do not know that we do not know; and learning helps me transfer, step by arduous step, some of these into the realm of things I know that I do not know.

Inevitably, learning instills in me a sincere sense of humility. Learning reinforces my deep sense of just how tantalizingly complex the world is, as well as just how frail, mediocre and pathetic my neural wetware is. In all honesty, I simply cannot fathom how anyone who is genuinely curious, and who has a deep heartfelt reverence for the universe, can have the gall to assume any degree of arrogance. Sure, if all you care about is being better than other people, then you might feel entitled to think highly of yourself if you happen to have above-average intelligence. [1] But if you care about knowledge for its own sake rather than just your self-perceived intellectual superiority, then you should constantly be humbled by just how much there is that you do not know.

Arrogance, then, is a generally reliable proxy for qualities that I find undesirable — qualities such as a lack of curiosity and a lack of rigor (regardless of whether it is due to unwillingness or inability to apply rigor). Speaking from my own experience, arrogant people tend to exhibit the following characteristics:

(i) They are more likely to be susceptible to confirmation bias. If a person is already very certain of the verity of his beliefs, then he is more likely to subscribe to the notion that anyone who disagrees with him must be misguided or stupid (and perhaps also that anyone who agrees with him must be well-informed or smart). [2] He would enter a debate with the assumption (whether implicit or overt) that his opponent is intellectually inferior, and he would likely find different ways of rationalising away his opponent’s arguments, regardless of how sound they are. Rather than asking for clarification when he does not understand something, he would just assume that the fault lies with his opponent for being confused or obtuse. Arrogant people are more likely to care about opinions than about ideas.

(ii) They are less aware of the concept of not knowing what they do not know. Sure, if you ask them “Do you agree there are things we do not know that we do not know?”, they would probably answer in the affirmative — they are aware of the concept if you deliberately call their attention to it. But, to them, this concept is almost always akin to a forgotten item tucked deep in the neglected recesses of their mental attic. It does not occupy the forefront of their minds, where it could inform how they learn or shape their beliefs.

(iii) They care less about forming accurate beliefs. Someone once suggested to me that over-confident people who have a propensity to make bold claims “do not care about publicly being wrong” — in other words, they are refreshingly bereft of vanity. Unfortunately, in my experience, such a charitable interpretation is usually untrue. Most likely it is the case that these over-confident people cannot even imagine that they might be wrong, and this lack of imagination accounts for why they feel motivated to publicly make bold claims to signal their self-perceived intelligence. If you correct them, chances are that they would simply find very creative ways to rationalise why your criticisms do not apply. (See (i).)

(iv) They do not deal well with complexity. This is related to their tendency to care more about opinions than about ideas. The more factors you have to consider, the less likely it is that you will end up holding a strong opinion — and having a strong opinion is much more conducive to signalling than not having one. It is an unfortunate fact of our reality that most people want quick soundbites and answers, rather than elaborate data- and logic-driven arguments detailing various caveats. Therefore, insofar as you care about signalling, you would opt for having just enough knowledge to form an opinion that you can subsequently advertise, but no more knowledge beyond that. (You might even defend yourself by euphemistically self-describing as “a big-picture thinker”, or something like that.) Arrogant people, in my experience, generally fail to recognise that it is perfectly fine not to have an opinion — and this failure stems from their more deeply-rooted inability or unwillingness to recognise that the world has more complexities than they can contemplate.

(v) They are not exposed to really smart people. Perhaps this is by choice. It is much more ego-boosting if you are consistently the smartest person in the room. If you find that you are (almost) always the smartest person in the crowd, either you are suffering from a severe case of Dunning-Kruger affliction, or you are simply in the wrong crowd. (Or perhaps you are in the right crowd, if your main concern is to keep alive the illusion that you are really smart.) I genuinely think that you are not qualified to comment on how smart you are until you have experienced being immersed in an environment where there is a high concentration of really outstanding people who have dedicated their careers (and perhaps even their entire lives) to improving the state of human knowledge and innovation. Examples of such places include Boston (Massachusetts, USA), Bay Area (California, USA), Waterloo (Ontario, Canada), Oxford (Oxfordshire, UK), Cambridge (Cambridgeshire, UK), and Geneva (Switzerland). You would be reminded daily, by the unavoidable presence of all these incredibly awesome people, of just how non-special you are. It would be an absolutely humbling experience.

Actually, even if you do not enjoy the privilege of living in these fantastic places, it is nonetheless very easy to disabuse yourself of the illusion that you are very smart. The Internet is a wonderful place full of learning resources. Look for lectures offered by prestigious universities like Stanford. Download or purchase books that are recommended in the reading lists. Watch the lectures, study the textbooks, and try doing the assignments. Keep working until you can more or less confidently say that you truly understand the contents. Ask yourself whether you’d realistically be able to pass enough courses within four years to earn a degree from any one of these schools. Every year, many tens of thousands of young people manage to do just that. [3] Are you at least as smart as they are? Be honest with yourself.

If this requires too much effort, then consider reading a Wikipedia article a day. Realise that the amount of things you do not know is simply staggering. And keep in mind that Wikipedia contains only a small fraction of the current sum of human understanding. Now it becomes absolutely breathtaking just how ignorant you are, and how ignorant we all are. Importantly, do not make things easy for yourself. Do not limit yourself to just reading popular-level, non-technical expositions. Read textbooks. Read academic monographs. Read journal articles. Read specialist blogs.

Additionally, do not just focus on collecting facts so that you can show off at cocktail parties. Learn to synthesise your knowledge. Learn to question your assumptions. Learn to unpack your intuitions and subject them to criticism. Doing these things is very hard and uncomfortable work, but there are immensely helpful resources for it — e.g., the extremely well-written Stanford Encyclopedia of Philosophy. Read about objections that you would never have dreamed of, because you do not know what you do not know. See for yourself that your opponents could be immeasurably more knowledgeable than you are. Recognise that there are many subjects so esoteric that you might never reach a level of mastery at which you can fruitfully discuss them. Recognise also that, even for subjects which do not appear esoteric, there are a lot of nuances and facets you probably have not even begun to consider.

Arrogance/over-confidence might be more forgivable if we do not live in an age wherein there are abundant opportunities to remind ourselves of just how ignorant we are. But we do live in such privileged times, and that is why arrogance/over-confidence in a person is such an immediate turn-off to me: It tells me it is extremely likely that said person mainly cares about appearing smarter than the average person, and that he does not care about utilising all these freely available amazing resources to learn just for the sake of learning. Less than 15 years ago, we did not even have access to all these learning materials. Now we do, and it is a better time than ever to cultivate some humility.


[1] It is also important to remember that you did not earn this genetic windfall. You have not done anything to deserve it. It is simply a matter of luck that you inherited good genetic material, and personally I do not see any reason to take pride in something you have not gained through your own efforts. What matters to me is how you utilise your above-average intelligence, and not the fact that you have it. (It must also be said that if you are proud of being smarter than the average, then you really need to be more ambitious.)

[2] They would be well-served to acquaint themselves with the concept of epistemic luck. There is one widely recommended book on the topic; I have not read it myself, but I have heard favourable reviews of it.

[3] Also remember that there are many lesser-known schools which nevertheless offer very challenging syllabi. There are many accomplished students who could have qualified for universities like Stanford, but have chosen instead to attend internationally lesser-known schools for various reasons — perhaps they wanted to be closer to their families, or perhaps they wanted to incur less student debt, etc. Such underrated schools include public universities like the Georgia Institute of Technology as well as liberal arts colleges like Harvey Mudd College. So, effectively, the relevant reference class of people to whom you should be comparing yourself when evaluating your own intelligence now swells to a much larger size.

Experts as filters

The main problem with being incorrigibly curious about a wide range of subjects is that, being the mediocre mortal that I am, I have too little time and too little mental prowess to learn about the innumerable interesting things there are to learn. Whenever I visit various science websites to find out about the latest developments in all kinds of disciplines, I am inevitably confronted with an overwhelming deluge of reports about novel research findings, and in many fields I have an extremely difficult time discerning whether a given study is trustworthy. Additionally, I think that being aware of the most recent studies only counts as very superficial knowledge inasmuch as I am unable to synthesise these new titbits of information with my existing understanding. Since there are only two ways in which I could reasonably retain these new findings over a long period of time – rote memorisation or genuine internalisation – chances are that I would most probably forget about them shortly after reading, which means that it is arguably a waste of time to read about them on science websites in the first place.

To put it harshly, reading about developments in science without really understanding their significance, or without truly being able to distinguish crud from results that represent real progress, feels very much like intellectual masturbation – it gives you the illusion that you are keeping up with advances in knowledge and thus makes you feel good about yourself, but in reality you have merely temporarily added a slew of random facts to your mental inventory without having gained a meaningfully deeper understanding of the world. To be polemical, then, I would say that reading science news without otherwise putting in effort to understand science better through more rigorous means is an activity for mildly curious but lazy laymen – and they might even be ignorant or in denial of their own laziness.

But I am being unfair, for the unfortunate reality is that most of us do not have the luxury of time/money or capability to read academic literature in multiple fields extensively, so subjects that do not occupy the forefront of our minds will only enjoy, at best, passing perusal. Happily, then, there is a good alternative to reading journal articles or textbooks – many academics and specialists maintain their own blogs, and one could defer to their expertise rather than trying impossibly, on one’s own, to sift through torrents of updates from science websites (and probably ending up with more confusion and/or erroneous beliefs). These experts have the requisite skills to critique various research findings – especially sensational ones that have received a great amount of hype in mainstream media – and they are able to assist us in synthesising new findings with our current knowledge into a coherent whole, so that our retention rate is far higher. If we are lucky, they might even provide links to their lecture notes – and reading those is always a decent substitute for studying a textbook, and far superior to reading updates on popular-level science websites.

The free availability of rigorous online content written by experts is why I seldom visit science websites these days: If a result is indeed worth knowing, I would most probably read about it on one of the specialist blogs that I regularly visit anyway, and I would have the advantage of knowing that what I read is not potentially misrepresented by journalists. (In addition, I personally find it intensely interesting to read about these experts’ insights regarding their own professions/research areas, as well as their general musings or speculations about issues that lie at the very cutting edges of their fields.) Basically, then, I am relying on experts as filters (and entertainment providers).

Here is an example of a specialist blog I frequently visit, and here is another. What about you?

YouTube videos of interest

John Carmack shows off his teaching talents with an unusually lucid presentation that has a reasonably high information-to-duration ratio — he has a knack for explaining technical details in an accessible manner that makes no sacrifices in rigour or accuracy. The Q&A session is a treat as well — the attendees generally asked interesting questions, which gave Carmack the opportunity to share his insights on the merits and demerits of various technologies. Carmack is probably one of the few people who come close to qualifying as polymaths in our present age, wherein knowledge is getting increasingly specialised, and wherein barriers to entry in most disciplines are rapidly escalating — he is indisputably one of the most skilled programmers in the world today; he is a pioneering innovator of many sophisticated techniques in the field of 3D graphics; and he was the lead engineer in an aerospace company. Even if you have absolutely no interest whatsoever in game programming, I’d still really encourage you to watch this video insofar as you have even a little bit of intellectual curiosity — the contents of the video are interesting in themselves.

You might already have heard of Timothy Gowers — you might know him as a knighted mathematician who won the Fields Medal (which is the equivalent of the Nobel Prize in the mathematics community, except for the fact that it is arguably even more prestigious than the Nobel, since it is only awarded once every four years to mathematicians under the age of 40); you might know him for starting a boycott against Elsevier journals within academia (read his blog for, among other things, very illuminating data on the exorbitant amounts of money that universities are compelled to spend on journal subscriptions); you might know him for having instigated the creation of crowd-sourcing platforms for solving difficult problems in mathematics; or you might just know him as one of the editors of the seminal Princeton Companion to Mathematics. In this video, alongside regaling us with a concise (albeit extremely abridged) history of intellectual advances in understanding the general decidability of mathematical problems, Gowers also provides sharp insights into the oft-misunderstood process of doing mathematical work by demonstrating, through the use of examples, how ingenious solutions often belie lots of hard work. Additionally, he also touches upon the (changing) nature of mathematical research with the advent of computers, and offers some of his own prognostications. In short, this is a great video by a great mathematician, and I cannot recommend it strongly enough. (Actually, all of Gowers’ lectures are worth viewing — you would benefit from exploring his other videos as well, if you find that you enjoy watching this one.)

Two professional scientists, one mathematician, and one philosopher (armed with his own PhD in theoretical physics) gather to discuss the ontological/metaphysical implications of quantum mechanics. The discussion is moderated by Brian Greene, who is an extraordinarily gifted communicator. I enjoyed the discussion, but thought that Ruediger Schack was unfortunately hampered by his lack of fluency in English — he did not motivate quantum Bayesianism very well. I guess that just means I will have to spend time reading his essays on QBism to better understand his proposal…

The Sleeping Beauty Problem is not a paradox

A plethora of literature has been published by professionals from all lines of research – e.g., philosophy, physics and statistics – to discuss the Sleeping Beauty (SB) Problem. In this article, I would like to briefly explain why I think the problem is confused. First and foremost, however, I wish to make a disclaimer: I have not read very extensively about the discussions in this field, so any feedback would be more than welcome.

Here is a description of the SB Problem on Wikipedia:

Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Beauty will be wakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake: if the coin comes up heads, Beauty will be wakened and interviewed on Monday only. If the coin comes up tails, she will be wakened and interviewed on Monday and Tuesday. In either case, she will be wakened on Wednesday without interview and the experiment ends.

Any time Sleeping Beauty is wakened and interviewed, she is asked, “What is your belief now for the proposition that the coin landed heads?”

There are two popular positions on what counts as the correct answer. Some scholars think that P(Heads) is 1/3; others argue that it is 1/2. This lack of consensus has spawned a lot of research activity, with a flurry of books and papers written in defence of either answer. I want to explain why I think that the answer must be 1/2. Let us first consider one dominant line of reasoning behind thinking that the probability is 1/3.

Suppose that this experiment is conducted 10,000 times. Assuming that the coin used is completely fair, then by the Law of Large Numbers we would expect to see roughly 5,000 heads and 5,000 tails. This means that SB will be wakened about 5,000 times on Monday after the coin lands heads, 5,000 times on Monday after the coin lands tails, and 5,000 times on Tuesday after the coin lands tails. I.e., in only 1/3 of the cases will her wakening be preceded by the event of getting heads. Therefore, she should give her answer as 1/3.

This reasoning might, at first blush, sound persuasive. But I think it is wrong, because it answers a very different question from the one being asked. The question is not asking us to evaluate the probability that SB will be wakened after the coin lands heads. It is asking us, instead, to evaluate the probability that the coin has landed heads. These two events have very different probabilities. Since the question has already informed us that a fair coin is being used, the answer cannot be anything but 1/2.
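The distinction can be made concrete with a quick simulation — a sketch of my own, not drawn from any of the published papers, with illustrative variable names — which tallies both frequencies side by side: the fraction of awakenings preceded by heads comes out near 1/3, while the fraction of tosses landing heads stays near 1/2, so the two questions really do have different answers.

```python
import random

random.seed(0)
trials = 10_000

heads_tosses = 0            # tosses that landed heads
awakenings = 0              # total interviews across all runs
awakenings_after_heads = 0  # interviews preceded by a heads toss

for _ in range(trials):
    heads = random.random() < 0.5  # fair coin
    if heads:
        heads_tosses += 1
        awakenings += 1            # woken on Monday only
        awakenings_after_heads += 1
    else:
        awakenings += 2            # woken on Monday and Tuesday

print(awakenings_after_heads / awakenings)  # ≈ 1/3 (per-awakening frequency)
print(heads_tosses / trials)                # ≈ 1/2 (per-toss frequency)
```

The thirder tally is the first ratio; the question, as I read it, asks for the second.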

Let us think about this further. Suppose that you ask SB on Sunday, before she is put to sleep, how probable she thinks it is that the coin will land heads. Given that it is a fair coin, she will of course answer that P(Heads) is 0.5. So why should her probability assessment change depending on whether she is asked the same question before or after the coin is tossed? After all, recall how the experiment is set up – it is designed so that, every single time she is wakened, she has absolutely no information that would help her update her belief about whether the coin has landed heads. Upon being wakened, she would not be told whether it is Monday or Tuesday. She would not remember how many times, if any, she had already been wakened. So she would still be in the same epistemic state as she was on Sunday. I.e., she should still answer that P(Heads) is 0.5. Regardless of whether you ask SB for the objective chance of heads or for her subjective degree of belief that the coin landed heads, the answer should be 0.5 in either case.

It seems obvious to me that the answer cannot be anything other than 0.5 — i.e., there is no real dilemma. So why are there so many ongoing debates about this problem? I feel like I must be missing something — but what?

On time perception

The extent to which you’d find the video above to be intellectually gainful depends on how much you already know about neuroscience. I found the information-to-duration ratio to be somewhat lower than I would have liked, but it served as nice evening entertainment nonetheless — the host and the panelists were generally engaging and humorous.

During the discussion, the panelist David Eagleman described an experiment that he conducted to test time perception in extremely fearful situations. I’d like to comment a little on what he said. What inspired this particular experiment was the fact that our recollection of frightening events — such as those in which our lives were endangered — is frequently inaccurate, in the sense that their duration tends to be greatly exaggerated. So Eagleman wanted to find out whether our time perception actually does slow down while a frightening event is occurring — i.e., whether during life-threatening moments we would be able to detect changes that are too quick for us to notice in normal situations. To do so, he recruited volunteers to jump off a 150-foot tower, with a safety net positioned on the ground to catch them. While in free fall, the volunteers were instructed to observe the screens of the number counters that had been attached to them. On the screens, numbers alternated at a pace just slightly faster than the maximum speed at which we can still perceive the changes under everyday circumstances. Eagleman said that, very much to his surprise, his volunteers reported that they could not identify the numbers as they switched during free fall.

Honestly, I am surprised that he was surprised. To me, the reports made by his volunteers were hardly unexpected — the fact that we often recall a dangerous event as lasting far longer than it actually did is a fact about our imperfect memory, about how our brains retrospectively interpret events, and not a fact about our conscious in-the-moment sensory perception. It seems strange to me that Eagleman would conflate the two. I am not saying that the experiment itself should not have been conducted — I do not dispute the utility of performing tests to make doubly sure that we are not confused in our delineations of concepts — but I am saying that I do not understand why Eagleman felt surprised.

Of course, I should perhaps be more charitable towards him — perhaps there exists some information in neuroscience, of which I am ignorant, that suggested to him a significant probability of not getting a null result in this experiment. After all, he has a PhD and many years of research experience in the subject, and I do not, so humility is really my best response. (No, I’m not being sarcastic.) If you have any reading material on this subject, please kindly direct me to it.

Toying with an idea

I weep over my imperfect pages, but if future generations read them, they will be more touched by my weeping than by any perfection I might have achieved, since perfection would have kept me from weeping and, therefore, from writing. Perfection never materializes.

— Fernando Pessoa, The Book of Disquiet

I am presently considering the prospect of injecting more frequent updates into this journal by sharing brief, regular notes on some of the intriguing things that I have read or contemplated. I should warn my readers that these scribblings will be even more embarrassingly unpolished than the material that has been published here thus far; however, if I do not attempt to overcome my fastidious impulses — i.e., if I obsessively insist on detailing all potential objections, fleshing out all important caveats, delineating all shades of nuance, and providing respectable bibliographies for all of my essays here — I will most probably end up writing nothing; I will be held back from recording incipient ideas, shapeless and messy as they are — as all inchoate thoughts are — so that they may hopefully be developed further at a later stage; I will risk forgetting potentially interesting reflections simply because I childishly criticise them for not attaining elusive perfection.

Having resolved to afford myself more leniency, I hope that more regular updates will follow, and that readers will be forgiving towards future shortcomings, which will certainly occur in even greater abundance.

Such updates will be tagged ‘Quick thoughts of the day’.