Editor’s note: Today we are happy to feature excerpts from a post on technology and ethics from Ericsson’s User Experience Lab blog. The post was written by Joakim Formo, a designer and senior researcher at the User Experience Lab, which is a multidisciplinary unit within Ericsson Research exploring networked societies, people and artifacts through strategic design and making.
Should public safety trump civil liberty? Will cities (or our lives for that matter) become better if we make them more efficient? Does it matter if common technologies are indecipherable to most people? Is it always a given that the data generated by people’s use of products and services belongs to the ones providing those products and services?
These kinds of questions are a tacit or explicitly stated part of most of our projects. Here are some thoughts about how we think morality works and how we relate ethics to technology.
Morality
First things first. The dictionary defines morality as “principles concerning the distinction between right and wrong or good and bad”, which sounds relatively straightforward.
What morality constitutes, however, is not as simple a notion. Just think about how we have to resort to metaphors when we try to talk about it. What kind of geography, for example, is a ‘moral compass’ operating in? I don’t know, but in my experience people’s moral compasses don’t seem to point towards the same North and South Pole. In addition, the ethical poles seem to jump around depending on where people are, who they are, what they are doing and why they are doing it. Nor do the poles seem to sit at opposite ends of a straight axis.
To me, the ‘moral compass’ metaphor makes a lot more sense if I imagine that people are actually on different planets whose axes are tilted differently. The planets co-exist in time/space in overlapping parallel universes, which, by the way, are fully transparent to each other, so nobody can really figure out where one planet ends and the other begins …
Technology is not morally neutral
Since technology is such an integral part of society, it is important that those who shape it can think and talk about its ethical aspects. This means being able to recognise that when people consider, say, nanotechnology or genetically modified food immoral, they may be concerned with the moral principle of authority, i.e. that the technology can be seen as a threat to natural and/or divine orders; that it may even be considered wrong because of moral purity (mixing genes from different species), or because there is a chance it could turn out to be harmful. We should be able to see that the same principles apply to people’s moral concerns about artificial intelligence and robots: that the uncanny valley (the disturbing wrongness of very human-like robots, where the distinction between human and machine is blurred but still noticeable) can make people feel such machines are wrong because of moral impurity (mixing human and machine); that Asimov’s Laws of Robotics are about preventing harm to humans; that the EU cookie law is about fairness. Etcetera.
It is important to go a little beyond the “guns don’t kill people, people kill people” argument, i.e. the claim that technology is neutral and that ethics only apply to how technology is used. In our work we tend to view any topic or concept we are working with through a number of different ‘lenses’. One such lens is to think of technology as analogous to language, as a set of ‘ethical expressions’. Just as the words and expressions we use both reflect and reinforce how we conceptualise whatever we are talking about, technology, like design, reflects and reinforces the morality of those who shape and use it.
Another ‘lens’ is to bring Actor-Network Theory and Latour’s ideas about ‘agency’ into the design of products, services and technology. Viewing technology as a mediator or amplifier of intent makes it apparent that technology has some moral agency of its own, but without also having responsibility it exists morally somewhere between the morally biased and the morally neutral. (A philosophical question is then whether there can be any ‘half-ethical’ state at all, or whether morality must be binary.)
Yet another lens is to look at how ethics relate to trust. Trust is one of those vaporous matters that are crucial for making technology work well in the real, messy world. The trick is to view trust as an ethical ‘credit rating’, not as something we automatically earn by making the most advanced or reliable technology.
Anyway. We are not aiming for consensus about what is right and wrong or good and bad. Quite often we are in different ‘parallel universes’ ourselves, and we disagree about many things most of the time. What is much more important is that constantly bouncing our ideas off notions of morality and ethics as part of our creative process makes us ask and discuss more important questions.
You can read the full post by Joakim at the User Experience Lab blog here: http://www.ericsson.com/uxblog/2015/02/moral-compasses-parallel-universes-technology-ethics/