Minds, machines and business models (2)

Since the industrial age, the machine has been the model for business. This is explicit in the work of the late 19th and early 20th century management theorist Frederick Winslow Taylor, whose thinking, though hardly fashionable or openly embraced today, still dominates management ideas.

That’s probably because efficiency is directly correlated with profit, and making a business work like a machine seemed a good way of lowering costs and achieving consistent output.

The machines of the industrial age were large and mechanical, their processes sequential. Bryan Appleyard notes the significance of Babbage’s Difference Engine (a computational device) as the first machine with an abstract rather than tangible output, heralding a new and very different machine age, the one in which we’re now living. But coming from the first machine age, we constructed organisations around the idea that managers should have as much control as possible, with the scope for deviation written out of the program.

The most obvious way to achieve this control was to replace human work with machines. This has duly happened, with ever more sophisticated robots in recent years eliminating the need for human labour even in tasks that once called for more advanced human skills. We’ve put machines into places that we’d reasonably expect to be about human interaction, whether in the ATMs that took over many everyday banking activities or the automated call answering systems that pretend to be a first level of customer responsiveness.

Control and creativity

Machines bring their own management tasks: they have to be maintained and upgraded in line with advances in technology. But they do simplify, reducing the possible variables between management intention and output to the market (which is only acceptable if you see your relationship with the market as essentially one-way). This simplicity became all the more attractive in a world that was rapidly becoming more complex, as communications technology made globalisation possible. Businesses themselves had evolved into more complex structures, expanding their product ranges, their geographical reach, and the sophistication of their support functions, from human resources through logistics to marketing.

For a while this evolution seemed the most obvious means for managers to keep control of all the different elements the business required, but as the functional complexity multiplied, management theorists (and practitioners) began to talk of “core competencies”. A different model quickly emerged, in which managers define their requirements and then contract out to specialist suppliers to meet them. On the face of it this model multiplies profit centres in the supply chain, and so would push costs up. In reality it rests on the assumption that specialist operators will be better at identifying and removing cost in their own operations without compromising quality. Crucially, it allows managers to shorten lines of accountability, restoring a sense of control in an ever more challenging world.

But this desire for control is only part of the story. In the last twenty years we have found ourselves moving into a different kind of machine age, driven by information technology (if the importance of the Difference Engine is that it’s the first machine with an abstract output, we’re only just beginning to experience how transformational this could be). This shift away from the mechanical demands a rethink of the machine model for those who remain in the workplace.

Machines are generally a poor source of competitive advantage. Early adopters of new technology may gain a useful early lead, but others will soon catch up (machines are relatively easy to replicate). This probably explains why in the last fifteen years or so management theory has put an increasing premium on creativity in business. What really defines a business is its culture (not what it makes). It seemed that at some level we no longer wanted the organisation to work like a machine, but to have a culture in which people could be spontaneous, where they routinely thought “out of the box”, a culture that was entrepreneurial and nurtured innovation and responsiveness.

The left brain world view doesn’t yield this ground easily. Managers have sometimes tried to make this kind of human creativity machine-like, following the idea that “knowledge management” could be an IT function. The Canadian consultant and writer Don Tapscott sums up the folly of this (and suggests a necessary alternative):

“Knowledge management has failed. We had this view that knowledge is a finite asset, it’s inside the boundaries of companies, and you manage it by containerizing it.

“So, if we can get all of Jessica’s knowledge into this container, or computer system, then when she leaves the company we’ll still have Jessica, or we can get to Jessica in this container. And this was, of course, illusory, because knowledge is an infinite resource. The most important knowledge is not inside the boundaries of a company. You don’t achieve it through containerization, you achieve it through collaboration.”

(from an interview with McKinsey Quarterly, January 2013)

It’s true that Tapscott is still thinking about technology (he is urging businesses to investigate and adopt social media), but that’s because he understands that technology can help us do human things more effectively, rather than working as a mechanised substitute for those activities (which are valuable exactly because they are unpredictable).

The crucial point here, and one which has attracted surprisingly little comment, is that there is a deep contradiction between this ambition to foster creativity and the prevailing, fundamental assumptions about the machine-like things managers should be doing to make a business run effectively. Something has to give. The problem is also evident in the ways we have come to think about brand, a problem I will be exploring in subsequent pieces.

Minds, machines and business models (1)

Neuroscience raises a question about the possible distortions in the models we use to organise and shape our knowledge. With neuroscience it quickly becomes necessary to think about the relationship between observable brain functions and consciousness.

It’s become possible to make direct correlations between specific states of mind and visible activity in the brain, which has led some effectively to equate brain activity with consciousness. But this ignores the fundamental philosophical challenge that consciousness presents. We are not aware of our consciousness as we are of other things, because that consciousness is the starting point for our awareness of other things. It’s hard to talk of our consciousness as our self rather than as something we own, something separate, and yet we know that it is not something separate. It is not like a hand or a leg, which we possess in the sense that we could also live without it. It is, fundamentally, what we think of as our self (which is why the loss of certain brain functions can equate literally to a loss of self).

When we seek to understand the brain we tend to reach for a metaphorical model. In recent neuroscience that model is overwhelmingly the computer. The computer is the readiest external analogy we have for the brain, and indeed a large part of computer science is dedicated to making computers more brain-like: to creating effective artificial intelligence capable of understanding context and eventually programming itself (hence the AI concept of the Singularity, the point at which computer intelligence starts to function independently, beyond the capabilities of human intelligence).

The irony here, as writers like Iain McGilchrist and Bryan Appleyard have pointed out, is that just as we want to make our computing machines more brain-like, in the way we conceive of the brain/mind we want to make the brain more machine-like. This is a blinding metaphor, and it is already pervasive (think how people speak of humans being “hard wired” to behave in certain ways, or of the probably specious notion of neurolinguistic programming).

Beyond reductive

We’ve come to think of scientific analysis as a process of breaking things down to their constituent parts and understanding how those parts work together. Though this can be useful, it is not the only means we have of pursuing an enquiry, and when it comes to understanding consciousness it may obscure as much as it illuminates.

This kind of reductive analysis may be appropriate if the questions you want to answer concern the essence of something (at some level). So if you wanted to understand how colour exists in nature (or perhaps to cure colour blindness), you would want to do some essential work on the chemistry of pigments, on light and optics, and so on.

But if you wanted to understand why a painting moved you, discussing its physical chemistry would not take you very far. This is not a perfect analogy: consciousness doesn’t exactly present itself like a work of art (though actually the consciousness of other humans does). But if, as seems likely, the way consciousness arises in the brain is a complex phenomenon, then reductive analysis is unlikely to get you to the answers you want.

Conceiving of the brain as a computer is a reductive analogy. There is no reason to believe that the fundamental building blocks of brain activity are like binary code, and equally no reason to believe that increasing processing power will somehow let you leap from the world of binary processing to the world of the human brain. As Appleyard among others has pointed out, it’s hard to imagine computers becoming conscious like brains, because we are nowhere near being able to describe what consciousness is, or even what it is like (the philosopher Thomas Nagel has argued that one of the distinguishing features of consciousness is that we cannot say it is like anything else).

Dealing with complexity

In The Brain is Wider than the Sky, Bryan Appleyard argues that the world is mostly complex, and that unless we’re very careful, attempts to understand it through (reductive) simplification are bound to produce distortions. In this he is explicitly influenced by McGilchrist, who in The Master and His Emissary moves from a review of the current state of neuroscience to a cultural history of the West. McGilchrist is unusual in that he is a working psychiatrist who has also been a professional (academic) literary critic and a philosopher.

McGilchrist suggests that the left and right brain hemispheres, though always interdependent (in a healthy brain), process our experience in very different ways. To summarise a complicated argument: only our right hemisphere has direct experience of the world outside ourselves, and it handles this experience holistically, being the part of ourselves capable of understanding metaphors, analogies and likenesses (it connects things). The left hemisphere, in contrast, takes the information flowing from the right hemisphere and seeks to make analytic (reductive) sense of it, so that it can maintain close control of the situation associated with that information. To do this it seeks to exclude further information flowing from the right brain’s continued experience of the world.

McGilchrist is very aware that this way of talking is itself shaped by metaphor, but his claims are rooted in the empirical evidence of hemispheric dysfunction (the capabilities we lose when part of a hemisphere is damaged): our brains really are divided, and there really are differences between the two hemispheres. He goes on to suggest that our cultural history reflects a vying for superiority between the world views of our two hemispheres. He argues that in the modern world the left brain, analytic, reductive, fixated on control, has come to dominate thinking about experience and about what we should be doing. He argues that this is not a necessary way of looking at what we have, and that in many respects it is a misleading one.

But it is left brain thinking that sees the computer (machine) as a good model for the mind. The left brain can relate to the way machines do things. We can see the left brain at work, too, in the routine attempts to oversimplify how organisations can be managed.