
This is a series of posts about curiosity and what happens to curious kids as they grow up. In other words, it asks how good our institutions are at welcoming, enabling and utilizing curiosity. This post is part one, in which I mainly talk about academia; it is, after all, the obvious place for many curious kids to go.

It might admittedly sound rather strange at first, but in the last few days I have been obsessively watching physics lectures on YouTube. For me, this was the culmination of a years-long desire to understand the physical universe a tiny bit better, after much frustration with the incomprehensible maths and jargon that used to block my comprehension. Thankfully, an Oxford maths degree later, I am finally able to begin understanding the language of modern physics.

To be honest, making even a small amount of progress towards a long-awaited goal has been an unexpectedly rewarding, even enlivening, experience. It has brought back memories of my high school days, when I used to religiously play and meditate on autotuned science-themed songs made from scientific documentary clips (these videos introduced me to scientists and personalities I still love to this day, Carl Sagan and Richard Feynman being prominent examples).

Of course, I loved the videos not so much for the “music” as for the message and the sense of wonder they conveyed. Moreover, there was a sense of a community of people interested in science which I came to embrace. Overall, this early teenage experience served to reaffirm my commitment to studying maths and science so I could truly know more.

Today, I feel an even greater curiosity and sense of wonder. It is amazing how watching a few physics lectures could reignite a burning, child-like passion for knowledge, and how it could recast a large dose of university-accumulated fear of inadequacy as nothing more than a harmful obstacle to learning. In other words, in the last few days I wanted to learn physics and I couldn’t have cared less whether I was good at it; as long as I kept learning, I was happy with myself. At least for a few hours in the day, time seemed to stop.

Eventually, however, reality came back into the picture. No matter how interested I was in physics, I had to face the fact that curiosity is expensive. That is what this post is about.

***

Now, if one had to describe the state of most young people in their early 20s, a few words would undoubtedly come up: confusion, uncertainty, relative ignorance, change. Of course, there are the occasional exceptions who long ago showed a sense of direction and are well on a settled lifelong path. But most of us are not in that group. Most of us are still exploring what is out there.

In a sense, that is what a curious person would do. After all, kids don’t have full access to or knowledge of the world, so as they enter early adulthood, there are still many areas to explore and to recognize as truly one’s own. Kids might dream of being policemen or astronauts (because that’s what they can see), but they cannot dream of being a life coach, a political campaign lead or a marketing expert (or one of the many other professions which require a sense of how the world works and are not immediately obvious; one can also include any profession which touches on sex, love and other experiences which children clearly don’t understand at all). Although “finding oneself” is a valid expression for everyone, it is precisely these grown-up curious kids who embrace and live out its full meaning. Incidentally, this process is not always conscious. Sometimes it is forcefully kickstarted by a rejection or by the first major obstacle along a carefully preplanned path. At other times, it is a continuation of the sense of wonder before the complexity of the world. In both cases, however, the result is the same: the realization of the vastness of the human world and of the millions of different opportunities around.

Naturally, such great choice creates anxiety and often indecisiveness. The inner result might be confusion, but the outer one is going along with society’s expectations. If you’re confused and frankly have no idea what you want to do with your life, the path of least resistance is simply to do that which you will never be asked to defend (because you have no way of defending any choice you’ve made yet). In reality, this means going to college, getting a job or something of the sort. At Oxford, for example, for half of the mathematicians in my year it meant doing a PhD just so they could buy time and figure their life out.

To be honest, there is much value in following the status quo and social expectations. You get questioned less, there are more funding opportunities available and more interest in what you do. You are much more likely to be recognized for doing a good job if the job is one people recognize as useful. Nonetheless, there are downsides as well. The main reason for them is this: curiosity is not always about utility. In other words, in any domain, to be curious is not always the same as to be useful. For instance, in science, not every piece of research done today will find a practical use. Hell, not every piece of research will find a theoretical use either! And the same dynamic plays out in the business world (not every business idea will succeed; quite the contrary, it will most likely fail!), the government world (not every policy proposed will be implemented, and even if it were, it may or may not work), etc. Crucially, the same dynamic plays out in the personal world of each of us: what is interesting and what we are curious about is not necessarily what is going to be useful. For example, if you’re an accountant, then learning how to play the guitar will likely make little difference to your career (you won’t suddenly become a musician and music won’t magically make you better at accounting). And yet, you might be dying of curiosity to know music…

***

I mentioned above that quite a few of my friends chose to do a PhD. Why didn’t I? It’s a deep question with many layers of answers.

Firstly, academia is not simply an abstract space for the curious; it is not ideology-free. Academia, from what I directly witnessed at Oxford and indirectly saw elsewhere, is not on the whole the sort of open and welcoming place that I imagined. For one, there is the gnawing sense that ideology often takes precedence over truth in the humanities. As a result, the humanities have often struck me as less like a genuine forum for ideas and more like an ideological monolith with an agenda. Of course, tenure is the hypothetical solution to such problems, but I’ve become skeptical of its efficacy. To be blunt about it: the more ideology permeates the humanities (and increasingly areas beyond them as well), the more the academic system will select not for original thinkers, but for ideological conformists. The result is that tenure will protect not the heterodox, but precisely the ones who are best at mouthing the prevailing orthodoxy. Of course, some dissenters on the tenure track who still hold an idealistic view of academia will remain undetected by this ideological screening (or will be the lucky few in a bubble that actively tries to avoid it). Maybe some will even get tenure eventually. But how many of them will suddenly voice their years-long concerns, especially if future career progress is at stake? Likely not many; after all, for all their dissent, these are people who have found the system tolerable enough for years! And besides, being a curious person does not (and should not have to) mean being in a position to wage an ideological war. Temperament, social skills and time limitations all come into play here. As long as the overwhelming cultural forces in academia tend to favor one ideology (whatever it is at a given time) at the cost of curiosity, there will be a natural movement of people out of the academic institutions.

But even setting aside the ideological problems and the thought and speech police, there are other structural factors that put me off academia. Put simply: smart people tend to be arrogant. Of course, arrogance is in some sense a human universal (I am often guilty of it too), but academia has never struck me as trying to limit the damage it does. At least at Oxford, the academic culture was almost always one of incessant argumentation and rarely one of honest conversation. In fact, most of the time that conversations were non-argumentative, it was because people had simply formed cliques that already agreed with each other. At least in my experience, it was difficult to find genuinely curious people willing to discuss all ideas in their best light. And all of this was at the undergraduate level, where academic pressures are least prominent. As academia is (at least in theory) based on intelligence, the whole academic culture often devolves into one-upmanship and signaling, with everyone trying to prove to everybody else that they know more. After all, intelligence is, to a first approximation, the whole measure of worth in the academic world. As a consequence, the culture of academia is not one of curiosity and exploration, but one of belligerence and constant argumentation. If everything you say is going to be argued with, no wonder some say there is no truth!

Unfortunately, this culture of belligerence and the resulting self-segregation across disciplines and positions sometimes infects research too. Academia is supposed to be a battleground for ideas, but it often turns into vain intelligence and popularity contests, name-calling and elitism. The question for me is: why be part of a system which, even if you do everything right, might still disrespect you? (This phenomenon is much more prominent in the humanities, where empirical tests and strict logical proofs are unavailable as fair arbiters of dispute, and where there is just as much if not more status competition and envy.) Moreover, in the more realistic case, why join a system in which, even if I do my best to further human knowledge, the reward I get will be at best uncertain?

What I mean is this: if a curious kid fascinated by science, philosophy or art grows up and starts a PhD, there is little guarantee that academia will recognize their work as useful. It is important to say that I am not talking here about the top students, the Nobel laureates and their equivalents. If the academic system failed to respect those people, it would be completely useless. I am talking about the rest of the scientists, whose names you won’t ever hear, but who are in their labs and offices every day working, and whose discoveries pave the way for the next Nobel laureate to come along. What is in academia for them? Besides a few conference visits and the occasional citation, there is the uncertainty of funding, the constant competition with others for grants, the frustration with academic bureaucracy and, depending on the perceived quality of your work, the envy, hostility or dismissal of others around. Combine all of this with the ideological considerations mentioned above, as well as the over-saturation of the academic market, and the resulting picture of academia becomes less and less attractive (to the point where it’s unclear if academia serves the curiosity instinct as well as it’s supposed to).

In fact, academia is especially unattractive because the internal market forces (the supply of tenure-track positions vs. the demand for them, available grants and governmental funding, etc.) often create incentives that are not necessarily aligned with a curious exploration of the world. What often results from this mismatch is high pressure to publish, and to publish regularly. That is probably my biggest philosophical concern with the academic world as it stands.

***

Artists often complain that market forces destroy their creativity and corrupt art. After all, an artist can only truly dedicate himself or herself to one pursuit: art or money. The image of the starving artist is a popular depiction of what often happens when a person chooses to pursue artistic exploration to the exclusion of market forces. Simply put, sometimes art takes too much time to be profitable (and hence justifiable to the market). Moreover, creativity suffers when individual expression is subjugated to the demands of others (after all, the market is a proxy for society at large and prices are an expression of society’s current values). Furthermore, good art requires tangential exploration into seemingly unrelated domains in search of great under-appreciated ideas. The demand to produce great works fast and without too much additional research often kills the dreams of many an artist. Some pull it off (eventually, after much wandering around and a successful accumulation of ideas). But some don’t. There is a misalignment of incentives.

Of course, art is not special in this regard. Some would go as far as arguing that most if not all human activities are completely misaligned with the market. I don’t know about that; it’s an awfully strong statement. But what I do know is that curiosity is much like the artistic drive. It needs to wander and explore, often without a specific aim or a deadline. In fact, having a specific goal or deadline can often be blinding: to transcend the conventional state of thinking, one has to be able to transcend the way it defines its goals too. Both art and science depend fundamentally on creativity, and creativity is like looking for the way while walking in the dark; it requires hitting a wall or two and going down paths that will most likely end up as dead ends. And even if some research is just a matter of straightforward implementation, much of it isn’t. In fact, the most important research — the kind that has even the best minds confused and helpless — is the kind that most requires that sort of creative wandering. And that sort of wandering (e.g. sabbaticals, hobbies, etc.) sometimes takes researchers away from their field of expertise. It enables an inter-subject cross-pollination of ideas.

Unfortunately, when you have to publish all the time, creativity tends to suffer. Fewer alleys get explored when there is no time for deviations from the main road. Senior researchers might be able to afford the luxury of wandering around, but junior ones less so. One could argue they don’t yet know enough for deviation to even make sense, but curiosity is ultimately just following a hunch and seeing where it leads. It might lead nowhere. But it might lead to a Nobel Prize. Curiosity is in some deep sense a risky activity. Academia is less and less so. Research proposals, publishing pressures and insufficient funding all have the effect of stifling creativity and incentivizing straightforward exploration as opposed to fundamental research.

Of course, this risk-aversion is structurally built into the nature of academia. Academia requires regular positive results whose utility is obvious in advance (so that funding can be secured through a sufficiently enticing research proposal). I don’t have (and don’t know if there is) any data to back me up on this, but I have the feeling that such an academic system is missing out on important developments that could have been made. Not all research is a matter of performing an experiment known in advance. Not every field can incorporate any and all wanderings into itself (the way philosophy and, to some extent, maths can). And curiosity certainly isn’t driven by an externally imposed need to be useful. (I am always reminded of Andrew Wiles working in secret for years without a guarantee of success — how many such projects are currently made impossible by academia?)

***

Before moving off the topic of academia, I have to mention that the misalignment between curiosity and research reality is far from the only reason why academia is not that attractive to me, even in principle. There is a whole host of other issues: mental health risks (I have many friends doing PhDs who advise me never to do one myself because it’s allegedly both deeply depressing and a waste of time), opportunity costs (made even worse by the often insufficient science funding), the aforementioned politically correct academic orthodoxy whose radical proponents are opposed to freedom of speech and expression, etc. Moreover, academic life is often lonely and comes bundled with a whole set of hidden administrative (and, depending on the person, teaching) nuisances.

I’ll explore these and continue my discussion of curiosity in my next post.


In this (last) post in my series on the is/ought problem (part 1, part 2; neither strictly necessary, but read them anyway for definitions and background on the problem), I want to approach the problem from a different angle.

Thus, in this post I shall not venture into epistemology, but instead present what I currently see as the best possible defense of holding moral beliefs which are not mere opinions, but potentially binding on others too (so you can say to another “don’t steal” and it has a different feel to it than “don’t make funny faces at strangers” or “don’t order the sushi, order the fried chicken”).

Now, it’s hardly news in the 21st century that, without religion and God, the common picture of the world one gets from modern western culture has a significant nihilistic vibe to it. We might speak of morals, but everybody disagrees about them. And the people whose moral convictions seem deepest are precisely the religious people whom western culture has long since proclaimed wrong and out of touch. And if religious morals are shaky, what shall we say about secular ones? A single look at the diversity of the world’s cultures is enough to indicate that there are hardly any sacred moral laws all humans independently agree on.

So, is there anything at all one could say about morality? Is it all just arbitrary social convention predicated on power relations and nothing more? Could we really object, in moral terms, to even the seemingly most horrific of acts? Or is it all just emotional biases upon emotional biases that make us feel like certain actions are genuinely evil? Could one person ever be justified in saying another person’s actions are morally wrong? Or is everything a matter of preference and mere disagreement, essentially no different from supporting your favorite sports team?


In my previous post (good background on what I am discussing), I left off at a place which seemed good enough to justify much of our common sense. But there were costs to be paid for adopting a strange pragmatic epistemology; namely, that common sense is just one of many valid ways to look at the world.

Put simply, the upshot of pragmatism as a philosophy of knowledge is equating the concepts of truth and utility.

In some sense, my pragmatism can be summarized as simply using different theories and explanations for different phenomena, as long as they get the job done. In many ways, that’s pretty much what scientists do as well, though to a much more limited extent. A scientist looks at the world and tries to explain it by positing some hypothesis. Then the scientist tests the hypothesis to see if its predictions match up with experiment (both for already-observed and for as-yet-unobserved phenomena). If everything turns out alright, the hypothesis is validated and thus said to be true.

Now, as I mentioned previously (and discussed in more detail here), science itself has a foundational problem in the problem of induction. As I currently see it, it is difficult to justify science as a valid way of acquiring knowledge about the world beyond waving one’s hand and saying “induction works (at least so far), so who cares why”. And so it is that whenever someone tries to be all philosophical about the whole scientific enterprise, a scientist usually just shrugs and laughs at the absurd and self-evidently false suggestion that science doesn’t provide us with any new knowledge. Ignorance doesn’t produce iPhones, rockets or MRIs, after all. (And even though I disagree, it takes a lot of mental effort to explain why I can do so and still believe in science.)

***

But even beyond induction, it is obvious (when one thinks about it a bit) that the scientific categories one uses in formulating hypotheses about nature are man-made and unlikely to unfailingly reflect the way nature really is. This is easier to see in hindsight.

For example, before Einstein came along, time and space were thought of as separate entities unrelated to each other in any fundamental way. Einstein changed our understanding and we now speak of spacetime. 

Yet theories that don’t make this distinction still work, even though we know they will break under certain specific conditions. Newton spoke of forces, but quantum mechanics does not. So do forces exist or not? Who knows. Who cares. The sophisticated way of putting this whole idea is “the map is not the territory”.

In other words, even if your fancy theory explains the world in terms of a specific concept, it doesn’t mean that the concept necessarily corresponds directly to reality the way you think it does. For instance, Newtonian forces can be explained quantum mechanically, so they still “exist” in some sense. But strictly speaking, according to our current understanding, they really don’t.

Yet from here the obvious question arises: what if our current theories do just the same thing as Newton’s? What if the way we picture reality is nothing more than a figment of some scientist’s imagination? What if our vision of the universe is going to be fundamentally altered by the scientists of the future? (Consider this: before the Big Bang theory came along in the last century, many believed the universe had been here forever! Not a small conceptual change of understanding.)

I personally don’t care. I have given up on knowledge about the way the world is (although I care deeply about being able to predict it). But common sense epistemology will never give up. In it, what the eyes see is what is really out there. And what we observe experientially is enough to give us knowledge about the way the world is… 

***

And yet, the only philosophically sound way of making sense of science is to justify it not as a means of producing knowledge, but as a means towards specific human ends — fighting disease, predicting nature, etc. 

We say physics is true because it works: it tells us how fast a ball will drop or what will happen when two magnets get close to each other. Physics works for the goal of not being surprised by nature. But who knows if our theories (as conceptualized by us) match up with reality? Many scientists only really care about predicting what nature will do, not about having some god-like ability to see the world as it is. *Shut up and calculate.*

Here’s the fun part: if one adopts an epistemology like mine, then science is not in any way special. If science can be justified on the basis of its working, then why not faith in God too? If the goal is to acquire meaning and to make sense of the evil in the world, the idea of man’s sinful nature certainly makes some sense. And even if the pictures two different theories paint are contradictory, the contradictions only matter if they make a difference for the goal at hand. Maybe there is no God, but if Christian morality works to produce stable societies and to bring meaning to the faithful, could we not say it is true in some sense? Similarly, maybe quantum mechanics and general relativity contradict each other in the middle of a black hole, but everywhere else one or the other theory applies to give us an answer we can confidently use in practice.

The point is, every form of knowledge we think we have is some sort of mental model which could well contradict other mental models we have. Yet, these contradictions don’t stop either model from working in the domain it was meant to work in. Here’s an example.

The way I see it, most scientific attacks against religion are much like critiquing flirtation for not being factually true. Sure, maybe the guy is not absolutely the best lover out there, or the girl might not literally like everything rough. But none of this matters: if both model the world as suggested by their flirtatious remarks, they will be able to predict each other’s behavior for all practical intents and purposes. I see religion in much the same way, and I am not too bothered by any seeming contradictions with science or with anything else I believe in. Ultimately, these ideas and concepts are all just tools, not the one true secret of the universe.

***

And here, at last, is the crux of the real problem for me. All the epistemological peace of mind that pragmatism brings only serves to pose an absolutely frightening question. Namely, if we cannot truly know what is without also allowing contradictions to creep into our models of the world, how are we to act when those contradictions matter?

Lest you think these contradictions are just some abstract mumbo-jumbo, here are a few examples — some silly, others not so much:

If one believes both that human life on Earth is characterized by suffering and that there is a heaven for everyone, should the death of someone close bring mourning or joy?

If one believes humans have no free will, should one use words like “responsibility”, “moral choice” and other such loaded language? Should we hate criminals for choosing evil, or should we look at them with compassion for having no choice in the matter?

If one believes the many-worlds interpretation, should one be so distressed by the horrors of history, or should one look at them and reason that they were meant to happen in some universe, so why not this one? (A sort of anthropic principle applied to suffering.)

If one goes on a date and the waiter starts behaving rudely, does this indicate a lack of concern for the customer, a sign of exhaustion, a form of prejudice, a lack of social skills or something else? All of these contradictory models could explain the behavior, after all. But the tip one leaves at the end is certainly affected by which model one adopts.

Of course, one could up the stakes from a mere generous tip to a whole life of freedom. We know for a fact that people get sentenced wrongly, because judges first come to one conclusion only to have their verdict reversed later by new evidence. But it’s not like some smart, prescient person couldn’t have believed the verdict was wrong despite the evidence presented. Contradictory beliefs often explain the same behavior, and it’s only our likelihood heuristic that allows us to decide what to do. But how certain are we in our ability to determine likelihoods? And how wise is it to follow them when a human life is at stake?

Imagine this. It’s 2100 and robots are roaming around. No one knows if they are conscious or just really sophisticated machines that could well have fooled us if we didn’t know any better. For all we know, both alternatives are possible. At that moment, a person smashes a robot to pieces. 

Do we react as if a murder had just taken place? As if an animal had been slaughtered? As if someone had just dropped their TV from the top floor for fun or put a tablet in a microwave? What should we do with that person if multiple conflicting theories explain all the available facts and yet urge us to act in vastly contradictory ways?

The same question arises in the political arena too. If a politician misspeaks and says something offensive, is it an honest mistake or a peek into that politician’s well-hidden intentions and beliefs? Shall we vote for him/her anyway or criticize him/her into oblivion? (Most likely both will happen, along partisan lines — in politics, even when no one is lying about the facts, there will always be disagreement about what they imply about the world.)

In fact, politics is rife with similar questions of intent. For example, criticizing the position of somebody from a protected minority can be interpreted either as a sign of bigotry or as honest disagreement. Yet the two explanations require vastly different responses. So, the next time you wonder whether some politician is malicious or merely incompetent, you will basically be facing the same dilemma as me (with the possible difference that for me both theories are true, whereas somebody else might wish to withhold judgement; in either case, how one should act remains an open question).

***

Of course, in real life we employ heuristics such as “don’t attribute to malice what could be explained by stupidity”, “always assume the other is acting in good faith”, “don’t assume unnecessary hypotheses”, etc. Yet, while these work well to keep explanations simple without getting us into unnecessary conflict, they are in no way guaranteed to lead to justice, fairness, or the absolute truth about the world.

And so, when one lets go of all the heuristics and all the conventional crutches, how does one resolve the question of how one ought to act?

I am not sure this question has a good answer. Maybe the crutches and the resulting imprecision are the best answer humanity could give? Or maybe not. Maybe progress is possible…

So, to summarize, there is not just the is/ought problem, but also the problem of actually knowing what the is is in the first place! And if one adopts a pragmatic epistemology, then the is is suddenly many different is-es at the same time! But the opportunity for action is but one!

At least in my pragmatic epistemology, knowing what to do is hella confusing and sometimes seems completely arbitrary. (**)

Nonetheless, if I make myself forget about these complications for a second, I am able to come up with some thoughts on bridging the is/ought gap in a somewhat satisfactory way. But more on this in the last post of this series…

(**) These questions are absolutely fascinating to consider in the context of an artificial intelligence that will not only need to know what the world is (i.e. form models of it), but also act on it to achieve its goals. It’s scary to think what could happen if a sufficiently powerful machine employed the wrong model of the world and concluded it should exterminate us all (or something even more absurd and painful, like killing all men, or all women, or all children under 10, etc.).


“How should I act in the world?” — probably the single most important question any one of us faces in our lives. Yet, in the real world, most people don’t really bother thinking too much about it. Quite understandably, one might say. After all, there are bills to pay, kids to feed and a billion other small things.

Nonetheless, I still feel that, even amidst the chaos of everyday worries, one can greatly benefit from having a large-scale orientation for one’s actions. There has to be a method to our day-to-day madness. We cannot simply go aimlessly through life forever. And so even the most practical of men would find it useful to have at least an inkling of an answer to how they should (or ought to) act.

Now, because it is so universal and evergreen, the question itself is far from new. Nor are the basic types of answers one can give to it. Religions and ideologies throughout history have all attempted to establish an ethical cornerstone on which one can base one’s life. We all know the important concepts: God, community, humankind, love, compassion, consciousness, equality, freedom, yourself. Depending on where you stand politically and philosophically, you probably believe you should act in accordance with, or for the benefit of, quite a few of these.

Yet, besides our emotional desires and fascination with these concepts, there are problems lurking in the shadows. 

Superficially, many of these concepts sound pretty great. God is all-loving, freedom ensures lack of oppression, equality guarantees lack of discrimination. But as soon as one starts delving deeper into each of these concepts, the whole idea starts to fall apart. The edges become blurry and far from evidently good. Ask a committed atheist if God is good, a left-wing protester about freedom or a right-wing one about equality, and you’ll soon be faced with the realization that every single guiding idea we have sucks if taken to an extreme (or, at the very least, is far from obviously good). Most people realize that wisdom is about combining a few basic ideas and adding footnote after footnote, but for the more philosophically minded even this approach remains a bit too unsophisticated.


We live in interesting times for academia.

Traditionally, even though society at large has never particularly played a cheerleading role for the universities, funding has always been available for all sorts of research. Moreover, academics have consistently been held in high regard as intelligent, hard-working and useful to society.

In recent years, however, the English-speaking world has gradually witnessed a mass revulsion towards academia. And even more strikingly, some of that revulsion has been championed by precisely the sort of people who have traditionally fit the academic mould, i.e. curious types unafraid to ask questions and think critically for themselves.

While at Oxford, I myself underwent a similar process.

In the abstract, science is great and academia represents as much of a free-thinking heaven on Earth as there could be. Or at the very least, those were my expectations going in.

Going out, however, all I could think was: if this is the best we could do in terms of institutionalized curiosity, then God help us. Academia felt more like a hell than a heaven.

It is no secret to anyone who’s recently been on a university campus that the place has turned primarily into a political battleground (as opposed to an intellectual one). Many students seem more interested in activism than in hearing out differing opinions. Argumentation has mostly given way to idea imposition and political correctness. The academic orthodoxy enshrined at universities feels distinctly cult-like and frankly little better than the closed-mindedness academia supposedly stands in opposition to.

Consequently, there are two possibilities for the dissenting voices in academia. One is to shut up and do your work in the hope that things might get better, or at least never interfere with your life. The other is to judge the whole enterprise flawed as it currently stands and leave. In truth, there is a third possibility, namely to speak up. But that is fast becoming career suicide for faculty and social hell for students. In practice, the people who speak up are donors and those who no longer have anything at stake, i.e. those on the way out.

Today, it feels like academia has forgotten a very simple historical lesson: don’t moralize too much, or people will begin to resent you. And if this lesson holds true in basically every domain, it holds twice as much in academia. For academia, at least in theory, has committed itself to the accumulation of knowledge and the assimilation of different perspectives. And that means being as dispassionate, or at least as charitable, as possible towards differing points of view — a sentiment hard to square with top-down censorship of opinion or an embrace of student intolerance.

Yet, though I still have hope, I don’t expect much to change. The university system has shrugged off many other complaints before.

That such an attitude only makes a mockery of the plea for feedback one receives after leaving; that the millions spent to entice children into science are then offset by millions spent to put them off their scientific ideals; that academia is turning more and more into an obstacle to learning; that the intolerant attitude is causing deep societal division… all that doesn’t seem to matter.

All the above is why many of those who were willing to give the academic system the benefit of the doubt in the first place are ultimately leaving.

But there is another group in society: those who never got to university in the first place. Those are the people whom academia tries to reach and inform, at least if you believe the public statements. Alas, those are also the people who oftentimes dissent, only to be met with accusations of bigotry or willful ignorance.

Just like Jesus, who said he came to save the sinners, academics claim they came to save the masses from their ignorance. But unlike Jesus, who embraced sinners and preached non-judgement, academics are fast to argue and quick to judge. For many academics, the perfect lay person is not the one who is curious and asks questions, regardless of how ignorant they might sound. No, the perfect lay person is the one whom you can keep at a distance, but who buys your book and learns submissively from you, the academic master.

(This dynamic has always irritated me immensely. As soon as one actually engages with academia, the peer review system and its appeal to criticize seem to function more as a be-an-asshole system with an appeal to demean.)

In any case, the truth is this: academia is given the prestige and the taxpayer funding it enjoys only because it works. No matter what one thinks about the philosophical foundations of science, at the end of the day, science makes life better. The humanities too have much to contribute to a great life. Or at least they used to.

Nowadays, for every good paper in the humanities, there seem to be another three full of nonsense. The humanities no longer work. The old contract — the public funds academia, academia does its thing and helps the public back — has been broken. And broken not merely by the humanities becoming irrelevant (for then maybe no one would have noticed, and consequently no one would have cared); no, it has been broken to the extent where the humanities are pushing a distinct ideology that makes life actively worse for many. I don’t have much faith in Europe on that front, but at least in the US, self-censorship by a majority of the population should never be a thing…

In conclusion, the reality is that most people are pragmatists. They might support things they don’t understand, but they won’t support things that don’t work. And frankly speaking, there is no real reason why they should.

Mine is a delicate position to be in — both loving the ideas behind academia and hating the thing it has become today. 

Thankfully, it is finally becoming clear that being against the institutions of academia is not the same thing as an ignorant preference to stay in the dark.

If I could describe my position in a word, it would be academic patriotism: an attitude that can honestly see the flaws in modern academia without mistaking them for fundamental ones; an attitude that is not willing to give up on something so dear despite its current sickness…


Here are some prominent people (among many others) who are working to change academia for the better and who have influenced my thinking on the subject:

  1. Jordan Peterson
  2. Eric Weinstein
  3. Bret Weinstein
  4. Gad Saad
  5. Jonathan Haidt (especially this lecture)
  6. Hunter Maats