Today I had two lectures on very different topics, and I noticed something they had in common: the concept of using human intelligence and machine intelligence *together* to achieve things that neither could alone.

People are better than computers at some things, such as spotting certain kinds of patterns and *understanding* stuff. Computers are better at anything that boils down to doing a huge number of simple operations very quickly, such as enormous calculations. But some problems can't be solved by *just* doing things humans are good at, or *just* doing things computers are good at.

Here's a simple example: Google Translate. Someone links you to a German article that you want to read, but you don't know German. You *could* look up every word in the dictionary and try to understand it that way. But who can be bothered with that!? So you paste it into Google Translate, which throws the words into a kind of dictionary and spits out the result (it's a bit more complicated than that, but this is the basic idea), *much* faster than you could have done yourself. Now, Google Translate is great at translating individual words, but I'm sure you've seen it turn full sentences into a jumbled mess. Google is not great at recognising correct English... but you are! So between Google's ability to look things up really fast, and your ability to get the gist of a string of English words which may or may not be grammatically ordered, you can get an understanding of what the article means.
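That word-by-word lookup approach is easy to sketch in code. The glossary below is a tiny made-up sample, not a real translation resource, but it shows the division of labour: the machine does the fast lookups, and the human turns the rough gloss into proper English.

```python
# A hypothetical miniature glossary standing in for a real dictionary.
GLOSSARY = {
    "die": "the", "katze": "cat", "sitzt": "sits",
    "auf": "on", "dem": "the", "tisch": "table",
}

def word_by_word(sentence: str) -> str:
    """Translate each word independently, keeping unknown words as-is."""
    return " ".join(GLOSSARY.get(w.lower(), w) for w in sentence.split())

print(word_by_word("Die Katze sitzt auf dem Tisch"))
# → the cat sits on the table
```

The output may or may not be grammatical, but a human reader can usually recover the meaning — which is exactly the point.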

Now I'll give a couple of examples of how this principle is being applied in computer science research today.
### Visual analytics
Visual analytics is quite a new field in computer science which combines two old ideas.
#### Visualisation
Humans can be pretty good at processing visual data.

From this pie chart you can learn several things in just a couple of seconds, including that I'm not very adept at making pie charts yet. For hundreds of years, humans have been using pie charts and other visualisation methods to instantly give readers an idea of what some data means. Part of visual analytics is developing new kinds of visualisation to capitalise on this ability to quickly infer information from an image.
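The arithmetic behind a pie chart is trivial for a machine: each slice's angle is just that category's share of the total, scaled to 360 degrees. A quick sketch, using made-up numbers for how I might spend a day:

```python
# Hypothetical data: hours spent per activity in a day.
data = {"sleep": 8, "lectures": 4, "study": 6, "other": 6}

total = sum(data.values())
# Each slice's angle in degrees, proportional to its share of the total.
slices = {k: round(360 * v / k.__len__().__class__(1) / total * 1, 1) if False else round(360 * v / total, 1)
          for k, v in data.items()}
print(slices)  # the angles sum to 360
```

The computer computes the slices; the human glances at the result and instantly sees which category dominates.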

#### Data analysis

One thing humans aren't good at is reading through enormous lists of millions of data points, such as a list of 53.5 billion HTTP requests made by users at Indiana University. A lot of research goes into getting computers to process and organise this kind of data in a way that makes it useful.
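Even a very simple aggregation shows the principle: no human can read billions of rows, but a machine can boil them down to a summary a human can read in seconds. A toy sketch, using a hypothetical five-row stand-in for a request log:

```python
from collections import Counter

# Hypothetical miniature stand-in for a log of (user, host) HTTP requests;
# a real dataset like the one above has billions of rows.
requests = [
    ("alice", "example.edu"), ("bob", "news.example.com"),
    ("alice", "example.edu"), ("carol", "example.edu"),
    ("bob", "example.edu"),
]

# The machine's half of the work: reduce the raw rows to a short summary.
top_hosts = Counter(host for _, host in requests).most_common(2)
print(top_hosts)  # → [('example.edu', 4), ('news.example.com', 1)]
```

The same `Counter` code works unchanged whether the list has five entries or five billion — only the running time differs.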

#### Putting the two together

What if a dataset is too big to put into a simple bar chart, but too subtle or complicated for a computer to automatically process? Sometimes we can get a computer to do half the work: organise the data into an image that humans can easily read. A human can then come along and do what comes naturally, making inferences and judgements based on the resulting visualisation.

A nice computer-sciencey example of this is complexity plots. Sometimes it's necessary to estimate the complexity of a computer program, i.e. roughly how much longer it will take to run if we give it a bigger input. We could just run it ten times on different inputs, time each run, and guess, but our answer probably won't be very accurate.

So we get the machine to run our program *thousands* of times on different inputs and generate a complexity estimate after each new run. These estimates are then plotted on a special kind of line graph called a complexity plot, and a person who knows what they're doing can quickly estimate the complexity of the program, or realise that more sample runs are needed. I highly recommend having a look at the website from the inventors of these plots to get an idea of what they look like!
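A bare-bones sketch of the measure-and-compare idea (not the researchers' actual complexity-plot method) looks like this: count the steps a routine takes at two input sizes, and see how the count grows when the input doubles. Growth by a factor of about 2^k suggests the routine is roughly O(n^k).

```python
import math

def count_pairs(n: int) -> int:
    """A deliberately quadratic routine: count all ordered pairs via nested loops."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1
    return steps

# Sample the routine at a small size and at double that size, then compare.
s1, s2 = count_pairs(100), count_pairs(200)
exponent = math.log2(s2 / s1)  # doubling the input multiplied the work by 2^exponent
print(round(exponent, 2))  # → 2.0, i.e. roughly O(n^2)
```

A real complexity plot replaces this single crude comparison with thousands of runs and a visualisation of how the estimate evolves — which is where the human's eye comes back in.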

### Interactive theorem provers

Mathematical reasoning is really hard for both humans and computers. When it comes to tackling mathematical problems, we have different strengths: human mathematicians are good at building large, complex arguments that pull together different ideas to form a line of reasoning leading to an interesting conclusion. Computers are good at checking short bits of logical reasoning to make sure no mistakes have been made. Put these ideas together, and you might come up with an interactive theorem prover.

#### Coq

Coq is a well-known interactive theorem prover that has been used to construct real proofs, including a new proof of the four colour theorem. Here's a short video where you can watch someone write a proof of Cantor's theorem in Coq. You can see that it combines the human's mathematical understanding with the machine's ability to quickly verify that the mathematician's reasoning is correct.
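To give a flavour of what interactive proving feels like, here is a tiny proof in Lean, a closely related interactive theorem prover (the syntax differs from Coq's, but the workflow is the same): the human chooses each proof step, and the machine checks that every step is valid.

```lean
-- The human supplies the strategy; the prover verifies every step.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p := by
  intro h                   -- assume we have a proof h of p ∧ q
  exact ⟨h.right, h.left⟩   -- reassemble its two halves in the other order
```

If the human makes a logical mistake at any step, the prover rejects it immediately — the machine's half of the partnership.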

### Use this idea in your own work!

The concept of combining human intelligence and machine intelligence is a powerful one which shows up in many more places than I've shown here. By having a good understanding of what the human mind is capable of, as well as what computers are capable of, we can solve problems that are difficult or impossible to solve any other way. Maybe this idea will give you a helpful perspective to solve the next seemingly-impossible problem you come across!