14 Jul 2018

Themes at Coed:Ethics

Coed:Ethics is a conference about ethics for developers (an odd name, but it comes from its connection to Coed:Code, an event supporting diversity in systems programming).

There were a bunch of themes and ideas that caught my attention at Coed:Ethics. Some from talks, some from conversations during breaks. I’ve tried to capture them in this post. There are no answers in what follows. These are observations to remind me of the issues.

Code that harms

Technology can cause a range of harms: emotional, mental, financial, and physical.

“Killer AI” has existed for a while. We’re not talking about autonomous robots, but about algorithms used to select targets. Outside military applications, I’m reminded of the harm caused by accounting systems and basic user management.

Some of that harm is due to biases, bad ideas, and a lack of ethical consideration. Some of it is due to technology being used deliberately: profiling people can lead to extremely negative outcomes.

How software is used

You can make an ethical argument for building tools to improve situations. Examples include self-driving vehicles to reduce road injuries, or advanced algorithms to reduce the pitfalls of existing solutions. One problem here is separating the code from its use: you can’t. You don’t know who will be in power or how they will use the technology.

Power

Software can be used to concentrate power and increase inequity. It could go the other way, but designers and engineers are not empowered to make ethical decisions.

Law

There’s due process in law and government. How do we get the same safeguards for software and data ethics?

Perhaps there’s a place for professional bodies that commit to pro bono work (professional work done for the public good). Professional organisations might also give weight and legitimacy to individuals raising ethical concerns.

Process

As companies and individuals, how should we include ethical considerations in our work? Ask: what’s the worst that can happen? (I’m reminded of Design for Real Life.) Include ethics not only in the intent of the system, but also in the algorithms and implementation.

One talk mentioned that ethics evaluation checklists exist or are being developed. That means having a process for asking questions about the work we do, and presenting the answers as a continuous improvement process for businesses. Think: a dashboard for management.
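
To make that concrete, here’s a minimal sketch in Python of what a checklist-driven review feeding a dashboard might look like. The questions, names, and output format below are my own invention for illustration, not anything presented at the conference.

    from dataclasses import dataclass, field
    from datetime import date

    # Hypothetical questions; a real checklist would be far more thorough.
    QUESTIONS = [
        "Who could this system harm, and how badly?",
        "What happens if the data or algorithm is biased?",
        "Could someone in power abuse this technology?",
    ]

    @dataclass
    class EthicsReview:
        project: str
        reviewed_on: date
        # Maps each question to whether it raised an unresolved concern.
        answers: dict = field(default_factory=dict)

        def open_concerns(self):
            return [q for q, concern in self.answers.items() if concern]

    def dashboard_row(review):
        """Summarise one project as a line on a management dashboard."""
        concerns = review.open_concerns()
        status = "OK" if not concerns else f"{len(concerns)} open concern(s)"
        return f"{review.project:<20} {review.reviewed_on}  {status}"

    review = EthicsReview(
        project="user-profiling",
        reviewed_on=date(2018, 7, 14),
        answers={q: q.startswith("Could") for q in QUESTIONS},
    )
    # Prints something like: user-profiling  2018-07-14  1 open concern(s)
    print(dashboard_row(review))

Even a toy version like this captures the point: once the questions are written down, they can be asked on every iteration rather than once at the start of a project.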

Psychological safety

Psychological safety is a prerequisite for raising ethical concerns. Given our tendency to obey and conform, pushing back is hard. Creating a safe environment is also hard: acknowledge to your team that you may miss things and need their input.

Reasons for optimism

It was headline news when Google cancelled a military project after pushback from engineers. It reminds us that we can make change happen.

Developers are turning down projects for ethical reasons. There should be more options than “do it” or “don’t do it”. The good news is that agile practices, which encourage conversation, give an opportunity to raise questions.

Companies become ethical one person at a time.

What next?

There are two main areas I need to research:

  1. I’m conscious that there’s a huge body of work on philosophy and ethics that applies here. I don’t have a good primer on that.

  2. As individuals, it seems straightforward to make decisions about what work we will or won’t do (assuming we can afford to). As a company, evaluating and holding yourself to account feels like it will take more work.

I’m looking forward to picking up the discussion later this year at Good Tech Conf and at the next Coed:Ethics (whenever it might be). Maybe see you there.

I’ve posted photos of the event.