From Industry to Research: Why We Need More Ethics in Undergraduate Computer Science

I am halfway through the third year of my undergraduate degree in mathematics and computer science, and these days I spend most of my time in the computer science department. I've taken courses and attended seminars covering a huge range of topics, including software engineering, machine learning and 'big data'[1] processing, and model checking of missions for UAVs (unmanned aerial vehicles). However, in all those lectures, seminars, and talks, I remember hearing the word 'ethics' at most once or twice.

I feel that ethical considerations are vitally important in computer science, and that universities should make more of an effort to bring these discussions to the attention of undergraduate students.

  [1] I once discussed machine learning with a former (mathematics) tutor of mine, who was strongly of the opinion that most of machine learning and 'big data' is a collection of buzzwords to describe what statisticians have been doing for decades. I especially remember him shouting "you know what they used to call 'big data'? Data!"

Is it really relevant?

Yes. A good degree in computer science can qualify someone for a huge range of jobs where ethical issues can be at the heart of almost everything they do. As I will go on to show, these issues appear almost every day in mainstream news at the moment. So why aren't they being mentioned in the lecture theatre?

Industry

When I talk with other students on the cusp of a career in computer science, I find that many of them are hoping to go on to work for big software companies, such as Google, Microsoft, Facebook, and Amazon. There are all sorts of reasons for wanting to work for these companies. Firstly, programming is fun, and it probably pays better at those companies than at smaller ones. It's exciting to contribute to services and products that 'everybody' uses. Also, crucially, these companies seem to offer competent computer scientists interesting problems to work on. They typically have access to huge amounts of data, and there is some really interesting work to be done in interpreting that data in a useful and efficient way!

The thing is, that data comes from people. And a lot of those people care about what happens to their data. I suspect that most people won't mind Amazon analysing their book purchases to recommend books they might want to read, but that they might not be happy with pictures of their children being used in adverts on Facebook without explicit permission. Privacy and data protection issues are not simple, but they are discussed all the time in the news, as well as on the street.

And, while privacy issues tend to generate the most emotive headlines, they are far from the only issues of ethical significance faced by employees of these companies. In 1998, when they were postgraduate students at Stanford, Sergey Brin and Lawrence Page (Google's founders) wrote a paper called 'The Anatomy of a Large-Scale Hypertextual Web Search Engine', describing their early work on Google. They had this to say on the subject of search engines that tweak results for their own gain:

Since it is very difficult even for experts to evaluate search engines, search engine bias is particularly insidious. A good example was OpenText, which was reported to be selling companies the right to be listed at the top of the search results for particular queries [Marchiori 97]. This type of bias is much more insidious than advertising, because it is not clear who "deserves" to be there, and who is willing to pay money to be listed. This business model resulted in an uproar, and OpenText has ceased to be a viable search engine. But less blatant bias are likely to be tolerated by the market. For example, a search engine could add a small factor to search results from "friendly" companies, and subtract a factor from results from competitors. This type of bias is very difficult to detect but could still have a significant effect on the market.
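To see just how quiet that 'small factor' could be, here is a minimal sketch of my own (the site names, scores, and bias values are all made up for illustration; none of this comes from the paper):

```python
# A toy illustration of the bias Brin and Page describe: a tiny hidden
# adjustment to relevance scores that is invisible to users but changes
# the final ranking. All names and numbers here are hypothetical.

# Relevance scores the engine computed for one query.
results = {
    "independent-reviews.example": 0.91,
    "friendly-partner.example": 0.89,
    "competitor.example": 0.93,
}

# The "small factor": nudge friendly sites up, competitors down.
BIAS = {
    "friendly-partner.example": +0.05,
    "competitor.example": -0.05,
}

def rank(scores, bias):
    """Return site names ordered by relevance score plus any hidden bias."""
    adjusted = {site: score + bias.get(site, 0.0)
                for site, score in scores.items()}
    return sorted(adjusted, key=adjusted.get, reverse=True)

print(rank(results, bias={}))  # honest: competitor, independent, partner
print(rank(results, BIAS))     # biased: partner, independent, competitor
```

Both rankings look perfectly plausible from the outside; only someone with access to the hidden bias term could tell them apart, which is exactly the difficulty Brin and Page describe.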

Over a decade later, we see frequent headlines like "Google warned by EU's antitrust inquiry of 'search manipulation concerns'".

There are countless other big ethical issues these companies are caught up in. I'm not saying that new graduates shouldn't apply to work for Facebook or Google -- but surely it's important that people consider the consequences of what they are using their skills for? And shouldn't that discussion be a part of the process by which they learn those skills and the environment in which that happens?

Research, the government, and the military

One of the biggest and most persistent news stories lately has been the revelations about the NSA's and GCHQ's spying on innocent civilians, as well as suspected criminals. In the UK, GCHQ hires a very large number of mathematicians and computer scientists. As with the big companies I talked about above, it is easy to get excited about jobs with organisations like this: there is a lot of funding behind them, all to support cutting-edge work on difficult and compelling problems. And secret work just sounds cool, right?

Now, not all work for GCHQ is focussed on getting backdoors into Skype conversations, and not everybody will feel that privacy concerns outweigh national security issues. But aren't these important considerations? And shouldn't the young people pondering these issues be allowed to air their concerns and discuss the different points of view in a (hopefully) neutral environment, such as their university, rather than waiting to be persuaded by recruiters?

An issue which feels very significant to me, but gets less press than the above topics, is that of military funding for university researchers. Everybody knows that finding funding is rarely easy for academics today, and military organisations have a lot of money for big research projects. But how tempting is it to take that funding without fully considering how the research it pays for will be used? It is easy to feel distant from the battlefield when working on problems in computer vision, perhaps with applications to delivery drones. But when that money comes from the US Department of Defense, isn't it important for such researchers to consider what else that drone research might be used for?

Drones are an emotive and obvious example, but this applies to any research, whoever is funding it, and especially when the funders are governments and militaries. I'm not saying (in this post, at least) that all researchers should reject all government and military funding, or that such research doesn't sometimes go towards incredibly positive work that saves and improves lives. I'm just saying that the students who will go on to do that research should be encouraged to think about these questions before they become pressing, and not just turn a blind eye to them.

What I would like to see

In a recent lecture, a picture of an American drone was used to illustrate one of the many different kinds of robots that exist. (Other examples included a factory robot, C-3PO, a surgical 'arm' remotely controlled by a surgeon, and Iron Man's suit.) The lecturer pointed out the drone and said something about how these kinds of robots are used by armies to kill people from the sky. It was a slightly controversial moment, and perhaps he shouldn't have said anything. Still, I'm glad he did, rather than just breezing past it like any old robot; but he shouldn't have needed to raise it like that, in that context, and it needn't have been controversial. If a discussion of ethical issues were built into the syllabus, there would be a proper place to examine them in a balanced and open way.

Beyond this, I would like to see my department hold at least an annual optional-attendance talk on ethics open to undergraduates. I'd love to see a more regular seminar series on these topics open to, or even given by, undergraduates. This would give academics like my lecturer a chance to discuss these important issues in an appropriate context, and it would give students a fair and open environment in which to hear about, think about, and discuss issues that will become important for many of them in the very near future.

I feel the department has an opportunity to seriously influence the ethics of the future of computer science, and that it would be morally irresponsible to ignore that opportunity.

With the support of one of my undergraduate course representatives, I am currently in the process of encouraging my department to implement some kind of forum for these issues to be discussed. If that fails, I might try to take matters into my own hands and organise a discussion group of my own.

If your university or department talks about ethics as part of its computer science courses, or if you are also frustrated by the lack of ethical discussion in computer science departments, I'd love to hear about it! Get in touch by email or Twitter (@modusgolems).