Thursday, January 28, 2016

Living in a rut: Thoughts on Roy Andersson's "Songs from the Second Floor"

You should watch Roy Andersson’s astonishing film Songs from the Second Floor (2000). Andersson is a philosopher with a keen eye for marrying absurdity to beauty. His characters, so completely enveloped in their own misery and lostness, simply fail to see the humour in what they are experiencing. There is a nihilistic edge in Andersson’s vision, too, but it is not entirely devoid of purpose or meaning. It therefore stands in stark contrast with the more nihilistic films of the Coen brothers — their highly enjoyable meditation on the pointlessness of events in A Serious Man (2009) comes to mind — and David Lynch — his aimless, stupid, and highly overrated Inland Empire (2006) is a particularly good example of just how easy it is for the mind-numbing to be mistaken for profundity.

While the Coens rob their characters of the possibility of finding any meaning outside of their own attempts to understand the world, and while Lynch holds fast to non-meaning and the anti-logic of dreams in a bizarre conglomeration of non-sequiturs, Andersson presents a world that makes sense, but with characters who fail to see any sense in it. His characters tend to reach conclusions about their own situations that are far from accurate. This reflects their own spiritual wasteland, but does not necessarily assume the complete absence of the transcendent. It is the spiritually asleep who refuse to acknowledge their own shortsightedness and limitations, probably because, as Hegel shows us, we can only know a limit once we have gone beyond it; we can only see how far we've fallen when we have been redeemed.

Most of the central characters in Songs from the Second Floor, shown through a series of almost-but-not-quite connected vignettes, are often found repeating the same idea in different settings. An old man constantly complains that his business has “gone up in smoke” (he literally set fire to it) and that his son “went nuts” because he wrote too much poetry (it is clear that this is not the reason that he went mad). His other son keeps on reciting a line of poetry — “Beloved be the ones who sit down” — even though he is not quite certain of its significance. And a nurse keeps on asking the doctor she is having an affair with when he is going to divorce his wife (his silence reveals that he has no such intention, but this does not stop the nurse from pestering him).

These repeated lines and scenarios are like so many of the stories we tell to convince ourselves that we understand the world. But, as theory on the narrative fallacy goes, understanding the story about something is not the same as understanding the thing itself. Nevertheless, we keep repeating our stories, keep telling ourselves that this-is-how-it-works even when the truth might be utterly different. We hear people say, for example, that ‘Everything happens for a reason’ and even find repeated stories about the meaning of life in the way that people spend money on more stuff: shoes, better technology, bigger houses, smarter cars, et cetera, et cetera. Everything happens for a reason, except reason, and everything else. Maybe.


Another interesting symbol in Andersson’s movie is the presence of ghosts who seem to interact with the living. As is typical in folklore about ghosts, they exist in a loop, constantly repeating themselves even though such repetitions make no difference to any outcome. And, of course, this is precisely the point that Andersson is getting at. The people in his story are all ghosts. And perhaps we are ghosts too, when we have a tendency to do what we have been doing for no other reason than that it is what we have always been doing. This ghostly living has the ontological structure of a rut.

One of my favourite scenes from Songs from the Second Floor shows a massive open floor at an airport as a very large crowd of people begins to fill it on their way to check in for their flights. But they are all carrying baggage that is very, very heavy — so heavy, in fact, that they can barely move it. It is one of the funniest scenes I've ever seen in any film. Instead of the punchline of the joke being built upon a quick about-turn in logic, the punchline here is that there is no punchline. Everyone just keeps moving very, very slowly, straining under the weight of their luggage, groaning, despairing.

The check-in counter seems miles away; perhaps it is unreachable. We don’t get to see what happens to the people when they reach it, but I can imagine one possibility. In keeping with the absurdism of the film, let’s say the people are allowed to check in all that baggage. Even so, it’ll soon become obvious, in a strange dance between tragedy and comedy, that the plane is so overloaded that it is unable to get off the ground. In this imagined ending to the scene, the people are carrying too much stuff and are stuck in too many ruts, and as a consequence the heavens have become totally unreachable. This is not to say that these folks will never get into the kingdom of the sky at all, but it is saying something about the fact that it’s difficult to fly when we do not take ourselves lightly.


Maybe it’s impossible to fly when you’re on autopilot.

Sunday, January 3, 2016

Radical honesty as a hermeneutic consideration

I always tell my students that research is, or at least should be, an exercise in radical honesty. It’s about telling your reader (and yourself) exactly what you’re looking at, exactly what you’re trying to cover, and, as much as possible, exactly what you’re not trying to cover. It’s about creating a perception of the text as an open door to a world of other investigative possibilities. It is, like anything we might say, about setting up a scene within which we and others can think. It is, in short, hermeneutical (i.e. interpretive and experiential), which implies an ongoing discussion where no one really gets to have the last word.
And the hermeneutical concern must always be about where we are taking the journey (the conversation) — where, in other words, we are taking others and ourselves, or even where we are taking the world. This does not mean, of course, that “anything goes” (as if such a thing is at all tenable) because honesty requires something to be honest about. You cannot speak the truth if you don’t think there is a truth, right?
I think of the idiom that proclaims honesty to be the best policy. That little word policy comes from the same origin as our words politeness, police and even politics. The idea, in my view, is simply that such honesty is fundamentally about human relationships, and human relationships are about keeping the conversation open (even when we’re chatting about really specific, solid things). This is why I speak of radical honesty and not just plain honesty: radical honesty includes a stance of vulnerability; it is about the radix — the root, and not just the surface of things — which points not just to shallow concerns but to the heart of who we are and what we think and believe about living in the world.
The idea of radical honesty is not just something applicable to research. It applies to every area of life where dialogue is involved. I think that there’s something quite beautiful in the raw truth — something alarming and disarming about it. But, to be clear, when I promote radical honesty, I’m not talking about an act of verbal brutality or bluntness, because to me this is not radical enough; bluntness is wielded in power games, and radical honesty begins with something far more alarming: our fragility.
We all have limitations.
I say this because I think that the fundamental hermeneutical principle pertains to the attitude with which we read and not to something rigid and inhuman like the precision with which we read — an idea that I introduce in my doctoral thesis with reference to the work of GK Chesterton. It’s not that precision is absolutely unimportant, but its importance is fundamentally rooted in something like the mood or the pathos of what we’re interpreting.
I find a concrete example of a failure of radical honesty in the recent debate that flared up around Darren Aronofsky’s movie Noah (2014). Is the debate still ongoing? I’m not sure. Anyway, two people in this debate, Brian Mattson and Peter Chattaway (who shall henceforth be known as Matt and Chatt, for purely poetic reasons), have discussed in particular the likelihood or the unlikelihood that Noah is a Gnostic film. Matt picks the it’s-definitely-Gnostic-because-the-Devil-is-the-Creator-and-also-I-have-a-PhD offensive, while Chatt strongly defends the it’s-not-Gnostic-because-I-interviewed-Aronofsky defense (a great deal of the debate centers around how we read the snake/snake-skin in the film). And as a visual culture theorist I have to look at the debate between these two (and even as it extends beyond the words of these two) with a fair amount of bewilderment. It’s not my aim to discuss the theories explained and arguments leveled here (although they are, mind you, all very interesting). I just want to point out something that seems to me to be very obvious: we’re dealing with a movie here — a movie, mind you, that doesn’t strike me as being particularly factual, given that it is based on a work of mythology (the Noah story in 1 Enoch, Genesis, etc), although that’s beside the point I’m trying to make.
Even though I tend to take Chatt’s side in his more generous reading of Aronofsky as a committed theist (rather than a Gnostic), I concede that perhaps Matt has made a few fair points. Both “watchers” — that’s a Noah movie in-joke — consider themselves “right” in their opinions of the film, and to some extent they are, simply because opinions are an important part of our way of exploring the meaning of such things. But what I do find somewhat problematic in particular is Matt’s utterly rigid hermeneutic. Despite having been shown the possibility that the film can be interpreted differently without sacrificing reason, he still assumes that he must be more correct in his reading of things that are never explicitly confirmed or denied in the visual text that he is looking at. Chatt’s reading confirms, it seems, Aronofsky’s intention, but even so does not manage to quell the possibility that the text can be read differently (à la Matt).
The failure of radical honesty in the above example, I feel, is quite simple: it is a failure of honesty concerning the extent to which we are affected by our prejudices. Matt says, more or less, that people of a Christian persuasion (especially those in authority) who promote the film have done a bad job, on the assumption that people who watch it will suddenly perceive that the Devil and God are the same person. Well, maybe some people will do this, but it will probably be because they walked into the cinema with that prejudice already embedded in their psyches. Chatt’s references to his interviewing of Aronofsky only show that his prejudice was informed by that discussion. It doesn’t, just like Matt’s argument, prove that the film cannot be read in another way. So I would offer something more alarming, but perhaps closer to a weird truth. Is Noah Gnostic or not? The answer must be yes. Both hermeneutical possibilities are open to us. What we do with them, then, is the main issue. And, realistically, even those are not the only hermeneutical possibilities.
We all have our confirmation biases and our desires for stability and happiness, and as a consequence our way of reading the texts of film and life must necessarily bear the stamp of our own temperaments. We see the world the way we are, say the Rabbis in the Talmud, not just the way it is. This, I feel, is what radical honesty gives me, both as a lover of research and a lover of life: a chance to look not only at what I am doing (reading, research, etc), but a chance to look at how I look at the world. In this way, of course, I must recognize that when I read the world, I am also at the very same time the one being read.

Desire and groupthink

Let’s start with King Louie from Disney’s The Jungle Book singing “You know, it’s true, oo, oo … I wanna be like you, oo, oo”. Or perhaps we might start, as we find in this Cyanide and Happiness short animation, by observing someone looking at something someone else has and then “deciding” to want that same thing for himself. Desire, despite what we want to think, is not self-generated or spontaneous. It functions like a contagion — an infection, a virus, a plague or a zombie apocalypse. Despite these pejorative terms, though, desire isn’t always negative: the contagion can be joy or catharsis, or any number of good things too. The simpler (or more simpleminded) we are, though, the more spontaneous a desire will seem.
Desire is always mediated through the other. So, the answer to the question of what we want is usually quite simple: we want what someone else wants. For example, “when a painter wants to become famous for his art he tries to imitate the originals of the best masters he knows” (Don Quixote, quoted on page one of René Girard’s Deceit, desire and the novel: Self and other in literary structure). This is absolutely beautifully captured in Banksy’s hilarious mockumentary Exit Through the Gift Shop, which demonstrates the absurdity of an art world obsessed with copying and pasting meaningless repetitions in order to be “original”. The image here of Andy Marilyn Warhol by the simulacrum called Mister BrainWash only goes to show that fame itself is parasitic upon mimetic desire. Even the leaders are the followers, as I’ve noted elsewhere.
So, yes, desire is always second-hand, copied, borrowed. At its best, this second-hand desiring is a great way to ensure that we have free will: we get to pick and choose whose desires we want to borrow. At its worst, second-hand desire is the desire of the crowd, the totality or even the totalitarian regime: we forget that we have a choice in the matter of which desires we end up borrowing. Proximity to the next or nearest mediated desire plays a big role here. We will tend to adopt desires simply because they happen to be the desires that we’re currently paying attention to. Thus, attention (cf. Simone Weil and Iris Murdoch) becomes a vital component of the way that desire is ordered.
Sometimes we choose whose desires we’ll borrow. Most of the time, we tend to be oblivious to our own choices, thrown about by various winds of doctrine, dogma and fashion. Even when people speak of “going with the flow” (forgive this appalling retro-speak), they’re essentially saying “Do what I do. Copy me”. Everyone is in the copying and pasting business. Your life in a formula reads as follows: Ctrl/Cmd + C; Ctrl/Cmd + V. You’re special, just like everyone else. I actually find it funny and rather provocatively symbolic that “control/command” precedes the action of copying and pasting on a computer — our perception is that we are always entirely in control. The reality is different. Without going into the complexities of the various studies done on mirror neurons, which form the scientific description of this theory, I’d like to paint a picture of how this borrowing of desire might happen:
  1. First, Person A notices that Person B derives some experience (pleasure, joy, smugness, a sense of power, or any other appealing experience) from Object X;
  2. Second, if Person A resonates with Person B’s experience — via empathy or envy or some other connective human impulse — then Person A will unconsciously/consciously reverse-engineer the experience to its origin and thus find/create a connection between Person B’s experience and Person B’s desire for Object X;
  3. Three obvious things tend to become desirable: (1) the experience of Person B; (2) the Object (in this case, Object X) that seems to produce this experience; and, most importantly, (3) the desire itself. This third desirable thing (the desire), however, becomes the one thing we tend to forget.
  4. In more confused cases, a fourth thing might become desirable, namely Person B him/herself. This is, obviously, a logical faux pas, where “I wanna be like you, oo, oo” becomes “I want to possess you, oo, oo”. But, as I’ve already intimated, logic isn’t really ruling the roost here.
This formulation above may seem very technical and as a consequence might be somewhat misleading because this process isn’t usually rational or conscious (although, this doesn’t mean that it’s automatically irrational either — perhaps it is nonrational). Nevertheless, understanding the process helps us to understand, or at least describe, how this desire-according-to-the-other brings about the social order as we know and love/hate it. It also helps to highlight again the fact that a human being is not just a res cogitans, but is primarily a relational being — never isolated even when alone. The social other always precedes the self. And, because of this mimetic desire, we might understand how gatherings of various kinds happen. It even explains the formation of communities: people, when sharing desires, tend to bond. Birds of a feather flock together.
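For those who like their pictures painted in code, here is a minimal toy sketch (in Python) of the borrowing process described above. It is purely illustrative and entirely my own construction rather than Girard’s formalism or anything from the mirror-neuron literature; the Person class, the borrow_desire function and the numbers are all hypothetical.

# A toy sketch of desire-borrowing; every name and number here is hypothetical,
# there only to make the steps of the process explicit.
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    desires: dict = field(default_factory=dict)  # object -> intensity between 0 and 1

def borrow_desire(observer: Person, model: Person, obj: str,
                  visible_enjoyment: float, resonance: float) -> None:
    # Step 1 has already happened: the observer has noticed the model's enjoyment of obj.
    if resonance <= 0:
        return  # no empathy or envy, no contagion
    # Step 2: reverse-engineer the enjoyment back to the model's desire for the object.
    inferred_desire = visible_enjoyment * resonance
    # Step 3: the object (and the model's experience) becomes desirable; the borrowed
    # desire itself is precisely the thing the observer tends to forget.
    observer.desires[obj] = max(observer.desires.get(obj, 0.0), inferred_desire)

a = Person("A")
b = Person("B", desires={"Object X": 0.9})
borrow_desire(a, b, "Object X", visible_enjoyment=0.9, resonance=0.7)
print(a.desires)  # A now 'spontaneously' wants Object X (intensity roughly 0.63)

The fourth, confused case — desiring Person B him/herself — is deliberately left out of the sketch; no amount of code makes that one logical.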
There are wonderful things about feeling like you’re part of a community, of course, but I want to just briefly look at the flip side, which is the phenomenon that we call groupthink. It’s a very worrying condition that also happens to be very difficult to overcome. Mimetic desire, after all, means that group cohesion tends to be valued above most other things, including reason, morality and truth. People generally would rather belong than be right. I know that it’s not necessarily a bad thing for groups to think alike. It is, for example, very good that most people think that murder is wrong and should be punished. What is bad, in my view at least, is when thinking itself is made taboo. The rule of groupthink is simple: ‘think’ like everyone else or you’re out. I’ve seen this happen so often that it’s almost comical now. History repeats itself, first as tragedy, then as farce. For your further edification, here are some rules that tend to guide groupthink.
  • Collective rationalization: Everyone does their best to explain away views that don’t seem to comply with their own. No one challenges the group’s assumptions. To keep the sense of belonging intact, no one ventures too far from home. This sort of thing is evident in religious and irreligious groups alike. Even the New Atheism has become a movement, perhaps even a religion. It’s far easier to assure yourself of your ideology when others are helping you on your way. Let all non-conformists unite!
  • Decreased moral sensitivity: Members of the group believe, without a doubt, in their own rightness. They therefore ignore the larger moral consequences of their own decisions. This, sadly, sets up the possibility of being thoroughly cruel to minority groups in the name of what is right. Blaise Pascal noted that “We never do evil so fully and cheerfully as when we do it out of conscience”. Historically speaking, Christians, who supposedly subscribe to what Nietzsche regarded as a religion for the weak, often step outside of their own religious ethic of looking after the outcasts, strangers, widows, orphans, and so on for the sake of group cohesion. Belief and morality, it seems, often fall prey to groupthink.
  • Stereotyping: Outsiders are named and shamed. The other is reduced to a cardboard cutout. The idea is that it’s much easier to destroy someone who is not remotely like you. Everything is then done to convince the self that the other is indeed entirely other. You can observe this easily in most political debates. The “enemy” is always made out to be two-dimensional, whereas the friend is always pictured as being wonderfully complex in her multi-dimensionality.
  • Peer pressure: This is obvious and somewhat implicit in the other characteristics. Pressure is put on those who would dare dissent not to dissent. This pressure is usually unspoken, but it is real nonetheless. You’d think that this characteristic would only be true of kids, but it’s horrendously true of almost any group of people you come across.
  • Self-censorship: Doubts and deviations from the perceived group consensus are not uttered or exposed. Individuals will feel that their own doubts are unreasonable, so they soon shove their own doubts aside. Groupthink proliferates taboos.
  • The illusion of unanimity: The majority view is assumed to be unanimous. Democracy has gone horribly wrong when this happens; when, for instance, you cannot challenge the government without being thought of as unpatriotic. People in leadership positions are especially prone to setting up this illusion; after all, their power, at least as they perceive it, is reliant upon people agreeing with them. Have you ever sat in on a meeting where you could tell that everyone in the room vehemently disagreed with the guy in charge, but where no one said a thing?
  • The illusion of invulnerability: The group is often slightly too optimistic about its own views and so is often willing to take dangerous risks. I’ve heard of religious extremist groups that encourage others to “sell all their possessions”. And it may even sound like a noble thing (it seems so brave!) until you realize just how vulnerable such a move can make a person. (Christians who subscribe to the radical image of the Book of Acts, I feel, should also read the practical wisdom of the Didache regarding the distribution of goods; people who sponge off others weren’t regarded with much patience by the early church).
  • Self-appointed mind-police: Members will often do everything possible to protect the group-leader from information that is problematic or contradictory to the group’s cohesiveness. They will police those who may potentially step out of line. All religious groups have a history of seeking out heretics. The issue, as per point number 2, is usually not about what is right or wrong, or correct or incorrect, but rather concerns who gets to be “in”.
There’s a parable that is told in the book of Genesis (chapter 11) about the Tower of Babel. I’m sure you know the story well. In it, a lot of people get together and, by means of a particular technology, set out to build their own utopia. But the God-character in the story sees what’s happening and he steps in to confuse the languages of the people. When they can’t understand one another anymore — when they no longer speak in one voice (hege-mony) — they split up and move apart. The utopia fails.
The message of the parable seems very powerful: it is not a story about the origin of language (I am not a literalist on this one — I read the story as a legend or a myth of a kind for reasons I won’t go into here), but a story about the importance of having other voices. It is a story about the importance of disrupting groupthink. Bearing this in mind, there is one last characteristic of groupthink that needs to be noted:
  • Narcissism: I know it’s normal for people to want to be unified in their groups, but without genuine difference, isn’t the group just a kind of support group for narcissists? Groupthink, ultimately, is just one person talking to another person with the hope that they will hear their own thoughts coming right back at them.

Social Media: Some Guidelines

The other day (this was originally published back in 2014) I heard a talk by Emma Sadleir, a lawyer who specializes in the legalities of living in the unreal world. Since I’m a media theorist of a sort, I’ve tended to talk and write about social media in somewhat abstract terms, but Sadleir is much more concrete. Whereas I’ve tended to leave people to draw their own conclusions, Sadleir was unambiguous. I realized in listening to her that this more concrete approach may actually have some merit. So, from the notes I took and in conversation with some of the things I’ve been looking at in my own travels through theory, I’ve compiled a kind of short list of principles that may help to keep the social media sphere a safe place to hang out in. By and large, I still regard the cybersphere as an inverted world of a kind, so that idea is what guides the principles that follow:
Principle 1: Servitude is the new freedom
When you sign up to join social media, you might think how lovely it is that it’s free. Well, it’s not free, and you’re not free either, especially when you sign up to write on walls and look at pictures of cats like they did in Ancient Egypt. It’s all there to make money out of you by selling your information to corporations who want to advertise to you via a range of preprogrammed algorithms. You may think that this is okay because you don’t have to buy what they’re selling, but I’m not convinced that it’s as harmless as all that because ultimately you’re still sacrificing your privacy and your information for the sake of things you don’t necessarily believe in. A great deal of advertising is also what I would call visual pollution, and I’m not convinced that having junk photons bombard your eyeballs is all that wholesome.
Principle 2: Publicity is the new privacy
We’d like to think that everything we put up on a social media platform — a platform I often refer to as YouInstaTwitFace — is private. It’s not. Whenever you Facebook, Tweet or Instagram anything, or Whatsapp or Whatsoever, you need to know this: you’re giving your words away. Even when you say something banal like “I’ve just spilled my coffee on my phone. LOL” or post a picture of your dog chewing off your foot, it may be helpful to put “© Facebook” or “© Twitter” or “© Instagram” at the end of your sentence to remind you who controls what you say. You’re there to serve the platform, not the other way around. To reiterate, every photo you’ve ever put up on Facebook is, for all practical purposes, Facebook’s to use as it pleases. Every pic you’ve taken using an Instagram filter is Instagram’s. Big bother (no typo intended) is watching you.
It’s worth noting that Facebook, for instance, changes its terms and conditions almost daily (or so it seems), and this has meant in certain cases that what someone posted at one time as “private” has at another time become very, very public. If you wouldn’t want your parents or a predator or the police or your potential employer or your school principal to see what you’re posting online, then don’t post it online. Even if it’s “private” now, it may not be at a later stage (as Edward Rocknroll discovered fairly recently). Never, even for a second, think that you’re anonymous online, even if you operate under a pseudonym.
Principle 3: Permanence is the new transience
The internet is a big room with everybody in it, and what you say in the room stays in the room. This is as creepy as it sounds. Even if you’re a very nice person now, your past cyber misdemeanors will stay there for your future boss to find and hold against you. As Douglas Rushkoff has pointed out in his book Present Shock, we’re living now in the perpetual present. Everything you’ve done and been is now, now, now, and now, just like digital time. Treat everything you put out there as if it were a tattoo on the mind of the cloud. Paris Brown probably wishes she’d thought of this before she ruined her own reputation.
Principle 4: Flat is the new 3D
Whatever you say out there in cyberspace is decontextualized (so things can be read in a variety of ways) and tone-deaf (so things like humour and sarcasm are often misread). You may say something on TwitFace with your tongue firmly planted in your cheek, but the world sees only your words. Reflect, then, a little more on what you want to say before saying it. If you saw those very same words on a billboard, with your face and your name attached, without any additional context, would you still want them to be out there?
Principle 5: Loud is the new silence
The fifth amendment is dead. Ok, not really. I’m not an American, so just bear in mind that I’m only using it here because it’s become a filmic metaphor for the “right to remain silent”. If you say it on the internet, you are already waiving your right to silence. Speech and memory are much more forgiving of our occasional errors. Not YouInstaTwitFace. You may like your freedom of speech (and this inglorious world of self-publishing), but that right does not supersede other inalienable human rights like the right to dignity and the right to a decent reputation. Sadleir’s metaphor for this is quite fitting: you have the right to wave your fist around, but the moment your fist connects with someone’s face, you lose that right, because someone else has the right to not have their face pummeled. Your human rights only extend as far in front of you as will allow the rights of others to remain intact.
Principle 6: Fake is the new real
People can quite easily create fake profiles. Just be aware of this. The fact that we’re editing ourselves so much in any case creates a fake self out there in Fakebook land. I see the digital sphere as somewhat paradoxical: it exaggerates reality through language (and the ideologies that reside in language), but at the same time amputates reality through omission. It’s helpful to keep this paradox in mind.
In short:
1) Your online life is an extension of you, but it also amputates your character by replacing your life with your words. So it’s best to say only those things that you know reflect who you want to be.
2) Never think of yourself as anonymous online. You’re effectively a celebrity now, albeit a severely underpaid one.
3) If you wouldn’t say it to your mother, don’t say it online.
4) The internet is your frienemy. It’s wonderful if you’re doing the right thing, but it can get you into some major trouble. What you say in the electronic sphere is often admissible as evidence in a court of law. This is good news for people hell-bent on suing anyone, but not for anyone else really.
5) I like the Arrested Development idea of having an “Anti-Social Network”. It’s called life. I’m making a concerted effort to spend more time living, and that’s working out wonderfully.

To the fragile

I know you’ve been thinking it all along, but I’ll say it anyway: W Somerset Maugham’s The razor’s edge is a very aptly titled novel. It tells the story of a young man who sets out in search of the Absolute, and while I don’t agree with the conclusions that this protagonist reaches, the title of the novel still manages to capture the delicate nature of his quest. To seek truth — to find wholeness and a sense of wellbeing — is to be balanced on a razor’s edge. You see, the absolute itself, however we may try to define it and however we encounter the ways that our words fail to capture it, seems to also be fragile; it is, perhaps, something discovered in fleeting glances, in ideas that escape representation, in brief experiences of the sublime, in gestures, in clouds that bend and shift to the whims of the wind. It is often most present when we feel that we have lost it, and somehow most absent when we claim to have a handle on it. Perhaps it is when we understand the fragility of this absolute, or at least the fragility of our ability to be present to it, that we can come to appreciate most profoundly what it means to be human.
William Desmond, a remarkable philosopher whose work I have been grappling with recently, presents this very simple but very profound idea that ‘to be’ — that is, to ‘take a stand’ on being human (to risk a Heideggerian phrase) — is to be ‘in the between’. It’s a simple idea — an idea that I may unfairly over-simplify here — that has some very profound and illuminating implications for how we might understand our experience within what is often a traumatically complex world. It presents us with the truth that our ordinary experience of the world is paradoxical: this is to say that existence is riddled with riddles that are always and forever dancing in, through and beyond the known.
Desmond writes that “[o]ur understanding of what it means to be comes to definition in a complex interplay between indetermination and determination, transcendence and immanence, otherness and sameness, difference and identity”. This “betweenness” is grappled with and taken in most elegantly through a somewhat mystical stance towards being that Desmond refers to as the metaxological, which is “[a] sense of plurally intermediated relatedness between identity and difference, offering a renewal of the openness of this between, where identity exceeds its own self-mediating and difference can define robust otherness irreducible to any dialectical self-mediation” (see Desmond, Desire, dialectic and otherness, 2014). I know there are a lot of big ideas here that I can’t quite get into in the kind of detail I would like, but suffice it to say that this metaxological stance is a stance of openness; it stresses “the mediated community of mind and being” and “calls attention to a pluralized mediation beyond closed self-mediation from the side of the same, and hospitable to the mediation of the other, or transcendent, out of its own otherness”. In other words, the metaxological sense “keeps open the spaces of otherness in the between, and it does not domesticate the ruptures that shake the complacencies of our mediations of being” (see The William Desmond Reader, edited by Christopher Ben Simpson).
This open relationship to otherness — this openness to that which escapes our attempts at domesticating reality and which leaves us often gasping for breath in the face of the tumultuous — is beautifully captured in the philosophy of John O’Donohue (especially in his book Eternal echoes). O’Donohue places our experience of being in the tension between longing and belonging. We all long for things — we are hungry, needy, lustful; we are seekers, trippers, askers, fumblers, fallforwarders. We all need to belong — to feel as if we, and the things we experience, fit. If we were only in a state of longing, we would all be addicts — always consuming, but never sated. We would be Erysichthonites and flounderabouters. But if we were only in a state of forced belonging, we’d all be fundamentalists and psychotics who try to control reality through language, actions and various permutations of bureaucratic institutionalization. To avoid these extremes, we need to have this tension between longing and belonging maintained in ourselves: we need a sense of adventure and a sense of the comfort of home, as Chesterton says. It is only in this tension that the “original astonishment” of being (as Desmond calls it) can be reclaimed (although, perhaps, this tension may also produce more perplexity than it does astonishment).
But, as I said, we’re on a razor’s edge. Our desire to stay in a right relationship with these tensions — with this delicate sense of the between — is disrupted almost continuously, both by the wondrous and the terrible, both by the awe-inspiring and the dread-inciting. We may tell stories and myths to stress belonging, but in so doing may also forget that we are hungry. Or we may wander around in the desert looking for water while forgetting that we’ve got everything we need right there with us. We are so easily thrown off kilter.
I know that you are like me in that you too are fragile. And I want to offer just this one thing that I have found to be true in what has turned out to be a beautiful-difficult year — a year of death and new life, of grieving great losses and celebrating new arrivals: the more we try to balance ourselves on this razor’s edge, the more likely it is that we will simply fall off. Because the absolute or the divine is not ultimately something we find. Rather, it is something that finds us when we let ourselves simply be — when we breathe, when we try to indicate towards that which cannot be named, when we accept that there are mysteries that we will never fully grasp. This, it turns out, is grace. To be in the between is, ultimately, insofar as I can tell, grace. I know it’s a difficult word to explain, but I think Paul Tillich (in Shaking the foundations) gets pretty close:
“Grace strikes us when we are in great pain and restlessness. It strikes us when we walk through the dark valley of a meaningless and empty life. It strikes us when we feel that our separation is deeper than usual, because we have violated another life, a life which we loved, or from which we were estranged. It strikes us when our disgust for our own being, our indifference, our weakness, our hostility, and our lack of direction and composure have become intolerable to us. It strikes us when, year after year, the longed-for perfection of life does not appear, when the old compulsions reign within us as they have for decades, when despair destroys all joy and courage. Sometimes at that moment a wave of light breaks into our darkness, and it is as though a voice were saying: “You are accepted. You are accepted, accepted by that which is greater than you, and the name of which you do not know. Do not ask for the name now; perhaps you will find it later. Do not try to do anything now; perhaps later you will do much. Do not seek for anything; do not perform anything; do not intend anything. Simply accept the fact that you are accepted!” If that happens to us, we experience grace. After such an experience we may not be better than before, and we may not believe more than before. But everything is transformed. In that moment, grace conquers sin, and reconciliation bridges the gulf of estrangement. And nothing is demanded of this experience, no religious or moral or intellectual presupposition, nothing but acceptance.
In the light of this grace we perceive the power of grace in our relation to others and to ourselves. We experience the grace of being able to look frankly into the eyes of another, the miraculous grace of reunion of life with life. We experience the grace of understanding each other’s words. We understand not merely the literal meaning of the words, but also that which lies behind them, even when they are harsh or angry. For even then there is a longing to break through the walls of separation. We experience the grace of being able to accept the life of another, even if it be hostile and harmful to us, for, through grace, we know that it belongs to the same Ground to which we belong, and by which we have been accepted. We experience the grace which is able to overcome the tragic separation of the sexes, of the generations, of the nations, of the races, and even the utter strangeness between man and nature. Sometimes grace appears in all these separations to reunite us with those to whom we belong. For life belongs to life.
And in the light of this grace we perceive the power of grace in our relation to ourselves. We experience moments in which we accept ourselves, because we feel that we have been accepted by that which is greater than we.”

Ice buckets and mimetic moronism

There are a whole bunch of critics out there who have suggested that this Ice Bucket Challenge thing has gone too far. At least, that is what they were suggesting back in 2014, when this was first published. But what if it hasn’t gone far enough?
Maybe people should instead start randomly hacking off their own limbs with machetes before nominating others to do the same in order to promote an awareness of ALS, or some other major disease or cause. Or maybe people should start something called the Guillotine Challenge, whereby the French nation’s greatest contribution to Making Beheading Easy can team up with the internet’s formidable contribution to Making Stupidity Popular. At least such a use of sharp weaponry would allow people to identify more strongly with the genuinely traumatic state of those who are actually diagnosed with a terminal illness instead of trivializing it completely. The beheadings will, I think, also be quite appropriate symbolically speaking: they would mirror pretty much exactly what’s already happening out there in Internetland. If thine head causes you to be a moron, lop it off; for it is better to exit the Kingdom of This World a moron than to be accused of reasoning properly. 
My concerns with the ideological apparatus of the Ice Bucket Challenge, however, go further than merely observing its tendency to force an association between the truly terrible and the truly ridiculous. Although you know this already (or, at least, I hope you do), it’s worth looking again at how the game works: (1) first, you get nominated by someone who both loves and hates you; (2) then you have twenty-four hours to douse yourself in ice-cube-infested water or, (3) if you fail to comply with this bizarre condition, you need to donate a sum of money to ALS research. (4) Finally, you get to nominate three other people to join in the fun (otherwise known as “fun”). I know people who have donated to the cause and doused themselves, and I must say that I’m impressed: it’s proof enough that not knowing the rules of a game isn’t necessarily going to stop people from playing.
“Don’t think, just do it! Guillotines away!”
It’s pure mimetic desire, of course. People want what others want for no other reason than that others want it. There is a kind of invitation here (“Come on, join in!”) and a kind of injunction (“Join in, or else!”), but I’m pretty sure it’s the injunction that ultimately wins while the invitation cowers in the corner. One could even suggest that the simpler, dumber and less self-aware the individual subject, the more likely they’ll be to fall prey to this mimetic vortex (although, as Bill Gates among others has shown us, the intelligent are not exempt from this copy-cat behaviour; we are all, after all, driven by some form of mimetic desire). And this is not necessarily a bad thing.
As the desire escalates through various viral commandments and participations, the desire of others becomes, as far as I can tell, the desire of what Lacan calls the big Other. As Henrik Bjerre and Carsten Laustsen note in their book The subject of politics: Slavoj Žižek’s political philosophy, “Desire, in Lacan’s words, is always the desire of the Other. What I desire is first and foremost to be desired by the [O]ther … [Thus, even] the most immediate private wish is therefore always already mediated by a kind of unconscious awareness of our relations to the [O]ther.” Lacan’s big Other is different from the Girardian notion of the other in that Girard is talking about actual individuals — people with souls and wills — whereas Lacan is talking about a kind of unconscious network of rules that governs human relations.
The Other, in Lacanian terms, is constituted by the entire symbolic realm of human productions. It is manifest not only in language but in all the various hypotheses that exert any influence upon the Subject: society, law, social mores, taboos, norms, expectations, perceptions of time, the dominant logic of the tribe, etc. The Other, then, is the place where all signifiers are stored, the treasury of semiotic interlinkages. It is, of course, ‘unconscious’ — that is, it functions like the electromagnetic field operating upon liquid crystal to form letters: it pulls signifiers into place without any force of will. The Other does not exist — it is a fiction — and yet it functions.
This Other can be equated with ideology, which Mark Lilla describes beautifully: “An ideology … holds us in its grasp with an enchanting picture of reality. To follow the optical metaphor, ideology takes an undifferentiated visual field and brings it into focus, so that objects appear in a predetermined relation to each other.” Ideology makes it possible for individuals to mirror themselves in society. So, when we look at the Ice Bucket Challenge, we recognise its ideological function as follows: it is what allows people to feel like they fit; that they comply, belong, conform.
We’re all individuals here, right?
We are all unique just like everyone else.
Bjerre and Laustsen carry on: “There does not, or at least certainly doesn’t have to be, any unequivocal conspiracy (a mastermind) behind the way that ideologies work on their subjects, but they nonetheless function and subjects largely orient themselves via an imagined coherence behind the actual events in society, or to put it another way: we behave as if there were a society. There is no society but it functions nonetheless.”
And this one manifestation of ideology that we’re looking at here works as follows: Most people opt for participating in the self-dousing, which may be interpreted thus: participants have no real interest in actively putting their money where their quivering-shivering mouths are. This is not to say that money hasn’t been raised. In fact, a hell of a lot of money has been raised. Well done, world. And, yes, some kind of awareness is raised, which is great — I’m not dead-set against the charity in all of this, and I’m not even against the fun, per se. But with this awareness of the Girardian other and the Lacanian Other in mind, I hope you see that there are ideological constraints that need to be reconsidered.
In particular, there is an ideological trap built into the rules: only two real options are offered. This is the trick of every kind of spurious benevolence. The rules stipulate that you either participate by getting wet or you participate by paying up. No genuine third option is offered, although one is implied: those who don’t participate are automatically spoilsports (their reasons and motivations are irrelevant). I’m not sure this is a fair game, because the rules work much like the rules of a totalitarian regime.
Apart from this forced compliance, the rules also pronounce any other social, environmental and/or economic concerns well outside of the scope of the game. In this vortex, what really matters is ALS; what really matters is participating in the game. Precisely where the money goes and how it is used is not even vaguely at issue. Everything else, including saving clean drinking water in a drought-stricken world, is irrelevant. And, naturally, thinking about any of this is implicitly prohibited by the rules, which I am hereby rebelling against. The internet flattens people; turns us all into memes.
So I hereby nominate the internet to stop nominating people.
Although, perhaps this too is unfair. One of my students pointed out to me yesterday, after I gave a lecture on this subject, that the Ice Bucket Challenge creates a wonderful opportunity for camaraderie and friendship building. This may be partially true, but the rules of this game, unlike those of a genuine friendship, seem to be beyond negotiation. To be clear, I am not taking issue with all of this joyous camaraderie, nor am I at odds with the fact that a great deal of good has come of this. And even while I find the sheer volume of banal videos and images on the internet on this viral phenomenon troubling, not even this is the main concern for me.
My main concern is that the contingent is taken as absolute; the ideology is thus rendered as an immovable object. The invitation is replaced by a rather forceful set of parameters that basically communicate that human freedom itself is irrelevant. We are part of the crowd, the team, the global village. This, of course, is true, but it is only one part of the truth. And by emphasizing only a part of the truth, only part of our humanity is taken into account; and we know from history that when this sort of thing happens, we are inevitably dehumanized.

Evolution made me do it!

This was originally published in December 2012, which was a while ago. I'm pretty sure some of my ideas have evolved since then...

I recently read a report that records how scientists have “proved” that women are naturally promiscuous because (as the explanation goes) it increases the likelihood of their conceiving and giving birth to healthy children. It has long been suggested that men are also naturally promiscuous and often resort to rape for the same reason. In other words, infidelity and sexual deviance are somehow genetic mechanisms that perpetuate the ‘survival of the fittest’ (this is Herbert Spencer’s phrase, not Darwin’s, contrary to popular belief). Apart from the troubling use of science to legitimate human idiocy and cruelty, and the failure of this theory to point out that promiscuity is also quite helpful for killing people off, as in the case of HIV/AIDS among other STDs, it presents a problem that I don’t think gets enough attention: a great many scientists think that they understand what they mean when they claim that “evolution” makes things the way they are, when it is obvious to me that they don’t.

To be utterly transparent, I have absolutely no problem with the theory of evolution at all insofar as it remains within the scientific paradigm. It is an elegant albeit unprovable theory with some very valid and plausible conclusions. But what I do have a problem with is the fact that so many people evoke it without understanding what it is actually about. What follows is an attempt to deal with this problematic understanding using Hume’s Razor, which helpfully splits isness from oughtness.

Evolution, as Darwin recorded it, was simply meant as a descriptive theory, which is why he never saw it as contradicting various faith claims about the meaning of life and the existence of God. Evolution describes not how or why things change, but simply that they do change. It notes that certain birds started off with short, stumpy beaks and ended up with slightly longer beaks, and that such a transition, by chance and accident (emphatically not design), happened to be ‘good’ for said birds in the sense that it allowed them to reach the nectar at the bottom of more elongated flowers. Those that didn’t adapt to such flowers would have had to look for other sources of food, or perish. Notice how Richard Dawkins puts it so eloquently: “Evolution is blind to a goal”. This is to say that evolution is not willfully aiming for anything; it simply happens; things work out in a certain way and cannot be said to be either good or bad. If people evolve to be dumber and with an extra set of arms or eyeballs, evolution wouldn’t care. The Spencerian phrase “survival of the fittest” is therefore merely a means by which it is noted that those that don’t adapt tend not to survive. That is all.

But here is where the trouble comes in. Somewhere along the line, people (eager to feed into the modernist myth of progress) took “survival” to be a value judgment in evolutionary theory, rather than a simple description of the presence or absence of life. In other words, it was deemed that evolution “thinks” survival is better than its opposite. It was concluded that evolution “deems” survival “good” and death “bad”. And the worry here is that evolution is given a mind, a will, and a reason to promote various kinds of social Darwinism. But evolution is not, and in fact cannot be, bothered one iota about how we do and do not survive and whether survival is perpetuated or not. Evolution is utterly, completely and thoroughly blind. Dawkins points out elsewhere that “our genes neither know nor care … we just dance to their music”. Do our genes care whether they live or die? Nope. Not one bit. Does evolution care whether the human race is perpetuated or diminished? Again, no.

Evolution begins and ends with a world that is simply stuff. It is material and therefore not in the least bit mindful. It is not meaningful or charged with hope or love or destiny. It is just stuff. As soon as we start to talk about survival as better or worse than dying out, we are no longer in the realm of science. We are then dealing with metaphysics and, yes, even religion. As soon as we insist that evolution has a mind and a will, we are no longer talking about a description of how things happen to have happened the way they happened; rather, we are talking about a particular kind of conceivable ‘god’. We are no longer working with empirical observations (not that evolution is truly empirical; let’s be honest, it is still a theory, albeit a very good one, even though the usual criteria of testability and repeatability do not straightforwardly apply); we are working with faith claims. At this point, unavoidably, we are fully immersed within the domain of ideology.

I know I am making a fairly brutal division between science and philosophy, but I think it is helpful for navigating what each brings to the table. I want the scientist to tell me what kind of poison will kill me; it is for my local metaphysician to determine whether or not I ought to be killed. The division of science and philosophy does not suggest that the two are incompatible, but just the opposite. They are compatible and even helpful insofar as they allow people to make decisions within the limitations of each discipline. And this should be pointed out even to people like Dawkins, who, while being a brilliant scientist, seems to me to have the philosophical finesse of a chainsaw. A scientist may be able to tell me about how the material world works, but I’d rather talk to a philosopher about whether or not it may actually have any meaning.
