Every year, in various scientific and technological fields, awards are given out for breakthrough discoveries. Every year, new disruptive inventions and products are announced in breathless press releases. Innovation and application have become watchwords and metrics in the quality assessment of scientific research and grant applications. In other words: things change. On the other hand, our values and principles seem to require some sort of stability. We would hardly say that someone values something if they are prepared to let go of all their principles at the drop of a hat:
“Those are my principles, and if you don’t like them… well, I have others.” (Groucho Marx).
It is clear that on this view of technology and ethics (as a science of principles and values) there is a serious mismatch. Judging tomorrow's tools by the standards of yesteryear doesn't really work. Hence the continuing need for applied ethics: for translating general principles into concrete, specific, applicable policies. Nowadays, each breakthrough discovery or disruptive technology is followed by a discussion of societal impact, ethical concerns, and policymaking. There are two risks in this process: first, that we reinvent the ethical wheel every time we reinvent the technical wheel, crafting ad hoc policies and guidelines for every new piece of equipment; second, that the ethical discussion lags further and further behind the progress of science.
“New discoveries, new technologies, new social arrangements in the external world erupt into our lives in the form of increased turnover rates, ‘shorter and shorter relational durations’. They force a faster and faster pace of daily life. They demand a new level of adaptability. And they set the stage for that potentially devastating social illness, future shock. [...] The massive injection of speed and novelty into the fabric of society will force us not merely to cope more rapidly with familiar situations, events and moral dilemmas, but to cope at a progressively faster rate with situations that are, for us, decidedly unfamiliar, ‘first-time’ situations, strange, irregular, unpredictable.”
(Toffler, Future Shock)
Ethics, societal acceptance, political decision making, and the like lag behind, because the process is time-consuming if done right. It is perhaps tedious, but it needs to be thorough: consultations must be held with experts, and multiple drafts must be circulated for feedback from politicians, industry, scientists, and the public. At the same time, multiple countries, businesses, research agencies, consultancies, and so on are engaging in the same process, drafting their own sets of guidelines. And while these processes are ongoing, new breakthroughs are of course being made, providing new cases to be covered, new unforeseen applications, and so on. Not to mention that even if everything is done by the book and turns out as intended, the result often concerns just one specific set of technologies and cannot cover all possible interactions with other breakthroughs. For instance, big data, machine learning, and AI seem to have endless applications: in surveillance as well as art history, in protein folding as well as automating court rulings, and so on. They can also combine with technological breakthroughs in other fields: medicine, robotics, machine translation, and more. Do we need separate policies and guidelines for each and every one of those myriad applications? Can we expect everything to be covered by some general principles set in stone for all time? Or will these inevitably be broken by the next innovation?
“Value turnover is now faster than ever before in history. While in the past a man growing up in a society could expect that its public value system would remain largely unchanged in his lifetime, no such assumption is warranted today.”
(Toffler, Future Shock)
This reactive approach risks always lagging behind and becoming dispersive and inconsistent. Often forced by the public or by politicians to focus on the shiniest new thing, ethical reflection ends up concentrating on narrow case studies, which cannot always be easily translated to broader situations in the field. The literature on the topic speaks of a “vacuum” between ethics and technology.
This vacuum is not only conceptual, but also a power vacuum: who is responsible for filling it?
Who has the authority to fill it?
Which role should be played by all the various actors involved?
Should politics impose regulations, businesses self-regulate, or will users and consumers end up as guinea pigs?
The images in this post come from:
- Maurício Mascaro from Pexels
- Ylanite Koppens from Pexels
- Chokniti Khongchum from Pexels
- Wallace Chuck from Pexels