Why work for nothing? (3)

Person of Interest is on one level a slick bit of US TV pap. But it’s found itself in an interesting position in the light of Edward Snowden’s revelations about the NSA.

The series’ premise is that after 9/11 the (US) government commissioned a system that would monitor the public and private data running across the world’s computer systems, and spot threats as they emerged. The government was only interested in terrorist threats, so the system’s genius (and billionaire) creator decided to hold on to his own version which would detect threats to ordinary people. What started out as a twist on vigilante stories has necessarily turned into an examination of surveillance, security and privacy. It’s conducted with the necessary bloodless violence and impossible glamour of mainstream entertainment, but impressively has not shied away from the difficult questions raised by its set-up.

In this universe the government (particularly in the form of the NSA/Secret Service) and a commercial criminal entity with links to the Chinese are the usual bad guys, ruthless while justifying their actions by pleading either the cause of national security or the new realities of power. Our heroes are somewhere in the middle, well-intentioned but increasingly struggling to contain the “machine” which once served them. Most interesting of all is a terrorist group, Vigilance, who articulate the proper concerns we should all have about the way this surveillance is pervading our lives, but who unfortunately share the ruthlessness of the government and the criminal agency: this is TV and the debate is played out with gunfights rather than reasoning.

It’s not clear at this stage (as the programme nears the end of its third series) whether in fact it’s going to have much to contribute to public debate, but perhaps the most remarkable thing about the present moment is how little the questions it raises are being discussed in any political forum, despite the renewed profile given them by Edward Snowden.

Snowden’s revelations caused most of the large data-collecting businesses (Google, Microsoft, etc) to discuss for a few days what kind of disclosure they would routinely have to make to the NSA. This seemed like an exercise in damage limitation, but in truth the damage was done. Google chairman Eric Schmidt, admittedly in another context, once suggested that if you had nothing to hide, privacy was a non-issue, which only demonstrates a profound misunderstanding of what’s at stake: nothing less than a fundamental shift in our relationship both with government and business.

We’ve also seen in the last day or so a landmark decision by the European Court against Google, asserting that whatever else it might think, Google remains subject to European privacy laws. This isn’t a simple issue: some have argued that Google searches only make it easier to find information that’s already somehow in the public domain, and that we should not have a right to mould our own public profiles. Again I think this misses the point. The way information is being collected and used is unprecedented, and we do need to have a very public debate about the limits on how it can be used.

We also need to be aware that the collection mechanisms themselves are not necessarily neutral. Jemima Kiss, commenting in The Guardian, made the point that Facebook mediates our social relations in order to extract information which might be commercially useful. This echoes Bryan Appleyard’s argument in his book The Brain is Wider than the Sky, that automated phone answering systems, with their cascades of menu choices, are designed to make us machine-readable.

We are, it seems, quite blithely and blindly participating in a very large experiment. Google and others are offering us a form of augmented intelligence, in return for being able to sell the underlying personal or behavioural data to others, who in theory can then sell more effectively to us. In a sense we are willingly making ourselves cyborgs (albeit by carrying the technology rather than having it embedded in our bodies). For the moment I don’t doubt Google’s good intentions, but the point of the European court ruling is to insist on the maintenance of checks and balances to ensure this level of trust is not abused in the future, either commercially or by government. It would be comforting if Google seemed willing to acknowledge the validity of these concerns, rather than dismissing them.

Until recently we have acceded to the idea of government because it offered some clear benefits (security, more efficient marshalling of infrastructure resources, and so on). Government systems themselves have evolved, most obviously as democratic systems, to ensure they remain responsive to the needs and interests of the greater part of the population they exist to serve.

There’s been some discussion recently about increasing public disaffection with government, with some major business figures suggesting that this would be a good moment for business to step into the breach. That seems to me naïve about the nature of our political disaffection, let alone the capability of business organisations to adapt as they would have to in order to fill the vacuum. This naivety seems evident in the continuing argument over GM products (as in food, rather than cars). Many seem to think that it’s an argument about science, about the safety of genetically modified crops, but that’s not the point at all. It’s about whether we want to grant intellectual property in something as fundamental as grain to a commercial organisation.

In other words, do we want to grant ownership of fundamental assets which until now have been held in common (even if they’ve not been free) to businesses, which, as they currently conceive themselves, exist to serve the interests of a much narrower constituency than any government (i.e. their shareholders)? As our digital and real lives become more closely entwined, the same question must be asked of the internet and software service giants.

It’s a reflection of how far things have already gone that the answer is unlikely to be a simple yes or no, but at the very least we need to ensure that we have some options, commercial and non-commercial, in how we proceed. This is another reason why the work on open, freely licensed digital assets is so important.

I’m aware that I haven’t really addressed the questions at the end of my last blog, but then it seemed important to say something about privacy and public interest before coming to consider the changing pressures on ownership, which all being well will be the subject of my next blog.

Minds, machines and business models (2)

Since the industrial age, the machine has been the model for business (explicitly in the work of the late 19th century management theorist Frederick Winslow Taylor, whose thinking, though hardly fashionable or explicitly embraced, still dominates management ideas).

That’s probably because efficiency is directly correlated with profit, and making a business work like a machine seemed a good way of lowering costs and achieving consistent output.

The machines of the industrial age were large, mechanical, their processes sequential. Bryan Appleyard notes the significance of Babbage’s Difference Engine (a computational device) as the first machine with an abstract rather than tangible output, heralding a new and very different machine age, the one in which we’re now living. But coming from the first machine age we constructed organisations around the idea that managers should have as much control as possible, the scope for deviation written out of the program.

The most obvious way to achieve this was to replace human work with machines. This has duly happened, with ever more sophisticated robots in recent years eliminating the need for human labour even in tasks that once called for more advanced human skills. We’ve put machines into places that we’d reasonably expect to be about human interaction, whether in the ATMs that took over many common banking activities, or the automated call answering systems that pretend to be a first level of customer responsiveness.

Control and creativity

Machines bring their own management tasks: they have to be maintained, and upgraded in line with technology advances. But they do simplify, reducing the possible variables between management intention and output to the market (which itself is only okay if you see your relationship with the market as essentially one-way). This simplicity became all the more attractive in a world that was rapidly becoming more complex, as communications technology made globalisation possible. Businesses themselves had evolved into more complex structures, expanding their product ranges, their geographical reach, and the sophistication of their support functions, from human resources through logistics to marketing.

For a while this evolution seemed the most obvious means for managers to keep control of all the different elements that the business required, but as the functional complexity multiplied, management theorists (and practitioners) began to talk of “core competencies”. A different model quickly emerged, in which managers define their requirements, and then contract out to specialist suppliers to meet those requirements. On the face of it this model multiplies profit centres in the supply chain, and so would push costs up. In reality it rests on the assumption that specialist operators will be better at identifying and removing cost in their operations without compromising quality. Crucially it allows managers to shorten lines of accountability, restoring a sense of control in an ever more challenging world.

But this desire for control is only part of the story. In the last twenty years we have found ourselves moving into a different kind of machine age, driven by information technology (if the importance of the Difference Engine is that it’s the first machine with an abstract output, we’re only just beginning to experience how transformational this could be). This shift away from the mechanical demands a rethink of the machine model for those who remain in the workplace.

Machines are generally a poor source of competitive advantage. Early adopters of new technology may gain a useful early lead, but others will soon catch up (machines are relatively easy to replicate). This probably explains why in the last fifteen years or so management theory has put an increasing premium on creativity in business. What really defines a business is its culture (not what it makes). It seemed that at some level we no longer wanted the organisation to work like a machine, but to have a culture in which people could be spontaneous, where they routinely thought “out of the box”, a culture that was entrepreneurial and nurtured innovation and responsiveness.

The left-brain world view doesn’t yield this ground easily. Managers have sometimes tried to make this kind of human creativity machine-like, following the idea that “knowledge management” could be an IT function. The American consultant and writer Don Tapscott sums up the folly of this (and suggests a necessary alternative).

“Knowledge management has failed. We had this view that knowledge is a finite asset, it’s inside the boundaries of companies, and you manage it by containerizing it.

“So, if we can get all of Jessica’s knowledge into this container, or computer system, then when she leaves the company we’ll still have Jessica, or we can get to Jessica in this container. And this was, of course, illusory, because knowledge is an infinite resource. The most important knowledge is not inside the boundaries of a company. You don’t achieve it through containerization, you achieve it through collaboration.”

(in an interview with McKinsey Quarterly, January 2013)

It’s true that Tapscott is still thinking about technology (he is urging businesses to investigate and adopt social media), but that’s because he understands that technology can help us do human things more effectively, rather than working as a mechanised substitute for those activities (which are valuable exactly because they are unpredictable).

The crucial point here, and one which has attracted surprisingly little comment, is that there is a big contradiction between this ambition to foster creativity, and the prevailing, fundamental assumptions about the machine-like things managers should be doing to make a business run effectively. Something has to give. The problem is also evident in the ways we have come to think about brand, a problem I will be exploring in subsequent pieces.

Minds, machines and business models (1)

Neuroscience raises a question about the possible distortions of the models we use to organise and shape our knowledge. With neuroscience it is quickly necessary to think about the relationship between observable brain functions and consciousness.

It’s become possible to make a direct correlation between specific states of mind and visible activity in the brain, which has led some effectively to equate brain activity and consciousness. But this ignores the fundamental philosophical challenge that consciousness offers. We are not aware of our consciousness as we are of other things, because that consciousness is the starting point for our awareness of other things. It’s hard to talk of our consciousness as if it were actually our self rather than something we own, something separate, and yet we know that it’s not something separate. It is not like a hand or a leg which we can possess in the sense that we could also live without our limbs. It is, fundamentally, what we think of as our self (so loss of certain brain functions can equate literally to a loss of self).

When we seek to understand the brain we tend to use a metaphorical model. In the case of recent neuroscience, that model is overwhelmingly the computer. The computer is the readiest external analogy we have for the brain, and indeed a large chunk of computer science is dedicated to making computers more brain-like, to create effective artificial intelligence capable of understanding context and eventually programming itself (there is an AI concept of the Singularity, the point where computer intelligence starts to function independently and beyond the capabilities of human intelligence).

The irony here, as writers like Iain McGilchrist and Bryan Appleyard have pointed out, is that just as we want to make our computing machines more brain-like, in the way we conceive of the brain/mind we want to make the brain more machine-like. This is a blinding metaphor, and it is already pervasive (think how people will speak of humans being “hard-wired” to behave in certain ways, or indeed of the probably specious notion of neuro-linguistic programming).

Beyond reductive

We’ve come to think of scientific analysis as a process of breaking things down to their constituent parts, understanding how those parts work together. Though this can be useful, it is not the only means we have of pursuing an enquiry, and when it comes to understanding consciousness it may obscure as much as it illuminates.

This kind of reductive analysis may be appropriate if the questions you want to answer are about the essence of something (at some level). So if you wanted to understand how colour exists in nature (or perhaps cure colour blindness) you would want to do some essential work around the chemistry of pigments, about light and optics and so on.

But if you wanted to understand why a painting moved you, discussing its physical chemistry would not get you very far. This is not a perfect analogy: consciousness doesn’t exactly present itself like a work of art (though actually the consciousness of other humans does). But if, as seems likely, the way consciousness arises in the brain is a complex phenomenon, then reductive analysis is unlikely to get you to the answers you want.

Conceiving of the brain as a computer is a reductive analogy. There is no reason to believe that the fundamental building blocks of brain activity are like binary code. Equally there is no reason to believe that increasing processing power will somehow help you leap from the world of binary processing to the world of the human brain. As Appleyard among others has pointed out, it’s hard to imagine computers becoming conscious like brains, because we are nowhere near being able to describe what consciousness is, or even what it is like (the philosopher Thomas Nagel has argued that one of the distinguishing features of consciousness is that we cannot say it is like anything else).

Dealing with complexity

In The Brain is Wider than the Sky Bryan Appleyard argues that the world is mostly complex, and that unless we’re very careful, attempts to understand it through (reductive) simplification are bound to produce distortions. In this view he is explicitly influenced by McGilchrist, who in The Master and His Emissary moves from a review of the current state of neuroscience to a cultural history of the West. McGilchrist is unusual in that he’s a working psychiatrist who has also been a professional (academic) literary critic and philosopher.

McGilchrist suggests that the left and right brain hemispheres, though always interdependent (in a healthy brain), process our experiences in very different ways. To summarise a complicated argument, he suggests that only our right hemisphere has direct experience of the world outside ourselves, and that it handles this experience holistically, being the part of ourselves capable of understanding metaphors, analogies and likenesses (it connects things). The left hemisphere, in contrast, takes the information flowing from the right hemisphere, and seeks to make analytic (reductive) sense of it, so that it can maintain close control of the situation associated with that information. To do this it seeks to exclude further information flowing from the right brain’s continued experience of the world.

McGilchrist is very aware that this way of talking is itself shaped by metaphor, but his claims are rooted in the empirical evidence of hemispherical dysfunction (the capabilities we lose when part of a hemisphere is damaged): our brains really are divided and there are differences between the two hemispheres. He goes on to suggest that our cultural history reflects a vying for superiority between the world views of our respective brain hemispheres. He argues that in the modern world, the left brain, analytic, reductive, fixated on control, has come to dominate thinking about experience, and what we should be doing. He argues that this isn’t a necessary way of looking at what we have, and in many respects is a misleading way.

But it is left-brain thinking which sees the computer (machine) as a good model for the mind. The left brain can relate to the way machines do things. We can see the left brain at work too in the routine attempts to oversimplify how organisations can be managed.