Losing your soul

It’s been said that vampire mythology has remained compelling for generation after generation because the myths lend themselves to the preoccupations or anxieties of each generation. In Bram Stoker’s time this might have been an anxiety about industrialisation, or perhaps the alien East; more recently we’ve seen vampires transformed into romantic outsiders (the Twilight twaddle) or, away from the mainstream, into parables of capitalist excess, AIDS, or homosexuality more generally (Perfect Creature, Daybreakers, True Blood and so on).

I caught up with film director Neil Jordan’s most recent excursion into the vampire world, Byzantium. It’s good to have a violent edge restored to the myth after the teenage goo of the Twilight saga, though it’s a much smaller film than Jordan’s earlier Interview with the Vampire, and has had less success. That’s not surprising: despite a strong cast it was disappointing in many ways, needing more time and space to explore the ideas it threw up.

All the same, along the way it raised, perhaps inadvertently, an interesting question about one of the most common features of horror myths: the idea that you might gain immortality in return for your soul.

In the more religious ages of Christopher Marlowe, Goethe, or even Stoker, the loss of the soul would have been a more straightforward (and more frightening) prospect than it is to us now. In those times it was (a little) clearer what might be meant by the soul: the part of us destined in any case for immortality, either in heaven or hell. For Doctor Faustus it was the pact with the devil, the choice between an extraordinary life here on earth and the possibility of eternal bliss in the afterlife. This idea spilt over into vampire myths. In most 20th century versions, while some victims are simply turned into monsters by other monsters, there are often those who willingly choose the vampire fate rather than die (an idea that persists strongly in True Blood).

In Byzantium the loss of the soul was clearly presented as the price you’d pay for immortality, and yet it was not apparent, either in the film or in the broader context of our secular lives, what exactly we would lose through this bargain. In earlier versions of the vampire myth, certainly in Stoker’s Dracula and in Christopher Lee’s version for Hammer, the vampire is reduced to a kind of bestiality, with no real emotional attachments and not much more than an instinct to survive. In this “monster” version the soul becomes the thing that makes us distinctly human: our capacity to care for others, the value we give to emotions and (arguably), through those emotions, to ethical behaviour.

Religious belief is rooted in this idea, but the idea doesn’t depend on religious belief. In his contribution to the RSA’s Spirituality project Iain McGilchrist asked whether it still made any sense to talk of the soul in a secular world, and came up with a positive answer: in his account the concept of soul is a way of understanding the fullness of how we are in the world, the fullness of our experience, without necessarily invoking metaphysical beliefs about the divine or life beyond death.

In this view we might say that the idea of the soul is an aspect of consciousness, or even a generous concept of consciousness (it’s hard to say what that distinction really means, but that’s part of the problem of discussing consciousness). We could even say that caring about the question of what it means to have a soul is itself part of what it means to have one.

Thinking in this way means trying to achieve a better understanding of what might be possible through epistemology, of what we can and cannot know.

Thinking this way also has to acknowledge the relatively recent realisation that our sense of identity is subject to the physical reality of our brains: certain kinds of brain damage may alter our personalities radically, or destroy the memories on which our sense of identity depends. It seems to me that this inconvenient truth makes it very difficult to maintain the idea, common to pretty well all religions, that life on this earth is (in Keats’ phrase) a vale of soul-making, with the fruits of those labours only fully realised in a life beyond this one.

This concept of the persisting soul itself depends on some kind of persisting identity, as well as on the belief that this identity, this self or soul, is responsible for its own development. But if a bang on the head can send that development onto a different track, it’s hard to see how we can be held responsible for it; or, to put it less judgementally, which soul/identity is going to persist into another kind of life?

On the other hand, and this is part of McGilchrist’s wider thinking, it’s naïve, to say the least, to equate electrical or chemical activity in the brain with “thought” or indeed consciousness. The equation is made because we can correlate the two, but we shouldn’t confuse correlation with equation (or identity). We need a better account of embodied consciousness, one without naive materialism but also without resorting to the tangles of metaphysics (let alone religious metaphysics). It’s plausible that a secular concept of soul offers a way of doing this.

This is all very well for debate in a philosophy seminar. It doesn’t make for particularly gripping drama. One of the problems with Byzantium, and indeed many modern vampire stories, is that they want to put some weight on the loss of soul, the price to be paid, without having any way of taking seriously what this could mean. So our lead characters, far from being monstrous, appear persistently human, with a full range of emotional and moral concerns, apart from the fact that they routinely have to cut into others’ veins and consume all their blood. The stories depend on this persistence of humanity to command our interest and sympathy.

Byzantium’s vampires are not even excluded from the daylight. Theirs is a subtler burden, the pain of living secretly among humans and knowing yourself to be different (hence the ready analogy with queerness). This doesn’t really seem like soullessness, more a fairly common aspect of human experience.

There is another sense in which choosing immortality will immediately estrange us from our humanity, our soul. Uncomfortable though it might sometimes be, our sense of what it is to be human really might depend on our mortality, on the fact that we age and die. This doesn’t make the prospect of death any more welcome in itself, but it does make it more acceptable. As Tennyson’s Tithonus complains, “me only cruel immortality/Consumes”. The force of that “me only” falls on “consumes”: Tithonus is not the only immortal, but the immortal gods are made of different stuff, their ageless eternity quite unlike the withered Tithonus who persists only as a “white-hair’d shadow” roaming the world. We might reasonably wish for more time with better health, but in the end even the futility of wanting more is part of what it is to be human.

The bitter irony is that for fundamentalists of all stripes, the promise of an afterlife, of an existence more important than life on earth, or even of a cause they think might live on through their actions, is enough to make them forget their humanity and destroy what life we do know is real. There be monsters.

Minds, machines and business models (2)

Since the industrial age, the machine has been the model for business (explicitly so in the work of the late 19th and early 20th century management theorist Frederick Winslow Taylor, whose thinking, though hardly fashionable or explicitly embraced, still dominates management ideas).

That’s probably because efficiency is directly correlated with profit, and making a business work like a machine seemed a good way of lowering costs and achieving consistent output.

The machines of the industrial age were large, mechanical, their processes sequential. Bryan Appleyard notes the significance of Babbage’s Difference Engine (a computational device) as the first machine with an abstract rather than tangible output, heralding a new and very different machine age, the one in which we’re now living. But coming from the first machine age we constructed organisations around the idea that managers should have as much control as possible, the scope for deviation written out of the program.

The most obvious way to achieve this was to replace human work with machines. This has duly happened, with ever more sophisticated robots in recent years eliminating the need for human labour even in tasks that once called for more advanced human skills. We’ve put machines into places that we’d reasonably expect to be about human interaction, whether in the ATMs that took over many common banking activities, or the automated call answering systems that pretend to be a first level of customer responsiveness.

Control and creativity

Machines bring their own management tasks: they have to be maintained and upgraded in line with technological advances. But they do simplify, reducing the possible variables between management intention and output to the market (which itself is only okay if you see your relationship with the market as essentially one-way). This simplicity became all the more attractive in a world that was rapidly becoming more complex, as communications technology made globalisation possible. Businesses themselves had evolved into more complex structures, expanding their product ranges, their geographical reach, and the sophistication of their support functions, from human resources through logistics to marketing.

For a while this evolution seemed the most obvious means for managers to keep control of all the different elements that the business required, but as the functional complexity multiplied, management theorists (and practitioners) began to talk of “core competencies”. A different model quickly emerged, in which managers define their requirements and then contract out to specialist suppliers to meet them. On the face of it this model multiplies profit centres in the supply chain, and so would push costs up. In reality it rests on the assumption that specialist operators will be better at identifying and removing cost in their own operations without compromising quality. Crucially, it allows managers to shorten lines of accountability, restoring a sense of control in an ever more challenging world.

But this desire for control is only part of the story. In the last twenty years we have found ourselves moving into a different kind of machine age, driven by information technology (if the importance of the Difference Engine is that it’s the first machine with an abstract output, we’re only just beginning to experience how transformational this could be). This shift away from the mechanical demands a rethink of the machine model for those who remain in the workplace.

Machines are generally a poor source of competitive advantage. Early adopters of new technology may gain a useful early lead, but others will soon catch up (machines are relatively easy to replicate). This probably explains why in the last fifteen years or so management theory has put an increasing premium on creativity in business. What really defines a business is its culture (not what it makes). It seemed that at some level we no longer wanted the organisation to work like a machine, but to have a culture in which people could be spontaneous, where they routinely thought “out of the box”, a culture that was entrepreneurial and nurtured innovation and responsiveness.

The left brain world view doesn’t yield this ground easily. Managers have sometimes tried to make this kind of human creativity machine-like, following the idea that “knowledge management” could be an IT function. The Canadian consultant and writer Don Tapscott sums up the folly of this (and suggests a necessary alternative):

“Knowledge management has failed. We had this view that knowledge is a finite asset, it’s inside the boundaries of companies, and you manage it by containerizing it.

“So, if we can get all of Jessica’s knowledge into this container, or computer system, then when she leaves the company we’ll still have Jessica, or we can get to Jessica in this container. And this was, of course, illusory, because knowledge is an infinite resource. The most important knowledge is not inside the boundaries of a company. You don’t achieve it through containerization, you achieve it through collaboration.”

(from an interview with McKinsey Quarterly, January 2013)

It’s true that Tapscott is still thinking about technology (he is urging businesses to investigate and adopt social media), but that’s because he understands that technology can help us do human things more effectively, rather than working as a mechanised substitute for those activities (which are valuable exactly because they are unpredictable).

The crucial point here, and one which has attracted surprisingly little comment, is that there is a big contradiction between this ambition to foster creativity and the prevailing, fundamental assumptions about the machine-like things managers should be doing to make a business run effectively. Something has to give. The problem is also evident in the ways we have come to think about brand, a problem I will be exploring in subsequent pieces.

Minds, machines and business models (1)

Neuroscience raises a question about the possible distortions of the models we use to organise and shape our knowledge. It quickly becomes necessary to think about the relationship between observable brain functions and consciousness.

It’s become possible to make a direct correlation between specific states of mind and visible activity in the brain, which has led some effectively to equate brain activity with consciousness. But this ignores the fundamental philosophical challenge that consciousness offers. We are not aware of our consciousness as we are of other things, because that consciousness is the starting point for our awareness of other things. It’s hard to talk of our consciousness as if it were actually our self rather than something we own, something separate, and yet we know that it’s not something separate. It is not like a hand or a leg, which we possess in the sense that we could also live without them. It is, fundamentally, what we think of as our self (so loss of certain brain functions can equate literally to a loss of self).

When we seek to understand something as complex as the brain we tend to use a metaphorical model. In the case of recent neuroscience, that model is overwhelmingly the computer. The computer is the readiest external analogy we have for the brain, and indeed a large chunk of computer science is dedicated to making computers more brain-like, to creating effective artificial intelligence capable of understanding context and eventually programming itself (there is an AI concept of The Singularity, the point at which computer intelligence starts to function independently, beyond the capabilities of human intelligence).

The irony here, as writers like Iain McGilchrist and Bryan Appleyard have pointed out, is that just as we want to make our computer machines more brain-like, in the way we conceive of the brain/mind we want to make the brain more machine-like. This is a blinding metaphor, and it is already pervasive (think how people speak of humans being “hard-wired” to behave in certain ways, or indeed of the probably specious notion of neuro-linguistic programming).

Beyond reductive

We’ve come to think of scientific analysis as a process of breaking things down to their constituent parts and understanding how those parts work together. Though this can be useful, it is not the only means we have of pursuing an enquiry, and when it comes to understanding consciousness it may obscure as much as it illuminates.

This kind of reductive analysis may be appropriate if the questions you want to answer are about the essence of something (at some level). So if you wanted to understand how colour exists in nature (or perhaps cure colour blindness) you would want to do some essential work around the chemistry of pigments, about light and optics and so on.

But if you wanted to understand why a painting moved you, discussing its physical chemistry would not get you very far. This is not a perfect analogy: consciousness doesn’t exactly present itself like a work of art (though actually the consciousness of other humans does). But if, as seems likely, the way consciousness arises in the brain is a complex phenomenon, then reductive analysis is unlikely to get you to the answers you want.

Conceiving of the brain as a computer is a reductive analogy. There is no reason to believe that the fundamental building blocks of brain activity are like binary code. Equally there is no reason to believe that increasing processing power will somehow help you leap from the world of binary processing to the world of the human brain. As Appleyard among others has pointed out, it’s hard to imagine computers becoming conscious like brains, because we are nowhere near being able to describe what consciousness is, or even what it is like (the philosopher Thomas Nagel has argued that one of the distinguishing features of consciousness is that we cannot say it is like anything else).

Dealing with complexity

In The Brain is Wider than the Sky Bryan Appleyard argues that the world is mostly complex, and that unless we’re very careful, attempts to understand it through (reductive) simplification are bound to produce distortions. In this he is explicitly influenced by McGilchrist, who in The Master and His Emissary moves from a review of the current state of neuroscience to a cultural history of the West. McGilchrist is unusual in that he’s a working psychiatrist who has also been a professional (academic) literary critic and philosopher.

McGilchrist suggests that the left and right brain hemispheres, though always interdependent (in a healthy brain), process our experiences in very different ways. To summarise a subtle and complicated argument: only our right hemisphere has direct experience of the world outside ourselves, and it handles this experience holistically, being the part of ourselves capable of understanding metaphors, analogies and likenesses (it connects things). The left hemisphere, in contrast, takes the information flowing from the right hemisphere and seeks to make analytic (reductive) sense of it, so that it can maintain close control of the situation associated with that information. To do this it seeks to exclude further information flowing from the right brain’s continued experience of the world.

McGilchrist is very aware that this way of talking is itself shaped by metaphor, but his claims are rooted in the empirical evidence of hemispheric dysfunction (the capabilities we lose when part of a hemisphere is damaged): our brains really are divided, and there really are differences between the two hemispheres. He goes on to suggest that our cultural history reflects a vying for superiority between the world views of the two hemispheres. In the modern world, he argues, the left brain, analytic, reductive, fixated on control, has come to dominate our thinking about experience and about what we should be doing; this isn’t a necessary way of looking at the world, and in many respects it is a misleading one.

But it is left brain thinking that sees the computer (machine) as a good model for the mind. The left brain can relate to the way machines do things. We can see the left brain at work, too, in the routine attempts to oversimplify how organisations can be managed.