The mathematics of war, chess playing centaurs and augmented intelligence

Posted by Mike Walsh

7/5/15 4:35 AM


I met Sean when we were both in Tokyo speaking at an intimate summit for banking executives. As an information junkie myself, I was fascinated by his company, Quid, which offered analysts and decision makers a visual platform for seeing patterns in complex data. Not surprisingly, Sean's background is also a nexus of complexity. He is a physicist, decathlete, political advisor, and a TED fellow. Sean studied at Oxford as a Rhodes Scholar, where he received a PhD for his research on the mathematical patterns that underlie modern war. In this episode of Between Worlds we talked about the power laws behind violence and insurgency, what Kasparov learnt from his infamous chess defeat by Deep Blue, the merits of creative exploration through visualization, and the art of defining the 21st century concept of manliness.


 

Mike Walsh: I'm joined today by Sean Gourley, who's the CTO and founder of Quid, and very much a big data extraordinaire. When we were talking before, Sean, you told me the story about how you started - not wearing white coats and studying numbers, but looking at the underlying mathematics of war and conflict. So, how did a 25 year old end up telling the Pentagon how they should be running the war more effectively in Iraq?

 

Sean Gourley: Yeah, that's an interesting story. I came across to do my PhD work at Oxford. I was nominally there to do a PhD in physics. I had everything lined up with a supervisor to do biomolecular motors, which were fascinating. But I got there, and walked around the lab, and saw my life at Oxford unfold with these 15 hour lab days. I thought, "Oxford's too exciting a place to be stuck in a lab." The very next thing was like, "Right, I'd better find something that's theoretical. I'd better find something that is not me doing experiments." 

 

There was a new branch of mathematical physics emerging that was really focused on trying to understand human dynamics. I guess it's the field of what we now call complexity and complex systems. There was a supervisor there, Neil Johnson, who was one of the professors. I met him, and thought, "This is fantastic. We'll take the tools from physics and maths, and use them to understand our human dynamics." 

 

The most difficult problem confronting us at the time was this unfolding conflict in Iraq. No one really understood why the strongest military in the world was being taken on, and oftentimes even defeated, by a bunch of insurgents that were poorly armed, poorly trained, and so on.

 

It was an incredibly important problem to solve: to understand how insurgency worked. It had a lot of attributes about it that lent itself towards a data driven approach. That's how we got started.

 

Mike Walsh: But war, unlike buying shoes, clothes, and books on Amazon, doesn't have a natural data set. Where did you find the source information to be able to parse it?

 

Sean Gourley: Yeah, that was the first stumbling block. We thought that the US military, the Pentagon, would give us the information - which was kind of naïve at the time. 

 

Mike Walsh: Because they're all driven by military intelligence. 

 

Sean Gourley: That's right. Of course, that was all classified. We might have been able to get it if we had jumped through many, many hoops. Maybe by the time the war was finished, we might have got the data to look at. 

 

Then we had this realization: this was 2003, and remember that people were jumping up and down with excitement about these blogs coming out of Iraq. On these blogs people were recording what they were seeing, and disintermediating the traditional media system. I started to think a lot about the distributed nature of collecting data, and that the information was out there in the transcripts of TV shows and the blog posts written on the New York Times website. It was relatively easy to point computers at those systems and start training them to look for identifiers of significant events that were happening. 

 

Mike Walsh: Sounds like a genome of conflict.

 

Sean Gourley: Yeah, that's right. You're helped a lot, because there's only so many ways to say someone died in a bomb. As far as natural language processing goes today, it would actually be considered pretty simple, but at the time, training computers to recognize the stuff was pretty far out there. But we managed to do it.
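The event identification described here can be sketched as a simple rule-based scan for casualty language. This is an illustrative reconstruction rather than the actual research pipeline; the regular expression and the `extract_events` helper are hypothetical.

```python
import re

# Hypothetical sketch of a rule-based event identifier: there are only so many
# ways to say someone died in a bomb, so a small pattern catches many of them.
CASUALTY_PATTERN = re.compile(
    r"(?P<count>\d+)\s+(?:people\s+)?(?:were\s+)?(?P<verb>killed|died|wounded)",
    re.IGNORECASE,
)

def extract_events(text):
    """Return (count, verb) pairs for phrases that look like casualty reports."""
    return [(int(m.group("count")), m.group("verb").lower())
            for m in CASUALTY_PATTERN.finditer(text)]

print(extract_events("A roadside bomb exploded; 12 people were killed and 30 wounded."))
# [(12, 'killed'), (30, 'wounded')]
```

A production system would layer many such identifiers on top of each other, and deduplicate across different sources reporting the same event.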

 

Mike Walsh: What did you discover? Was there actually an underlying relationship, or mathematics, behind these disparate events?

 

Sean Gourley: Yeah. The first thing we discovered is that if you do collect information that way, it's pretty effective. We ended up being able to compare it against the WikiLeaks war logs, and found that we had very, very good fidelity on the information on the ground, despite not actually being there. 

 

The second thing, of course, is that once you've got that data, you can look for any mathematical patterns that might exist within it. It turns out there are some very striking ones in the timing, the size, and the geographic diffusion of violence. A pretty simple one that jumped out at us initially was what we call a power law distribution of the size of the attack. Very simply, if you plot the frequency of an attack versus the size of the attack on a log-log graph, you'll get a straight line. That straight line repeats itself for every conflict, every major conflict that we've looked at in recent times. What's more, the slope of that line is approximately the same. 

 

So, not only did we uncover a thing that wasn't random, we uncovered a very, very strong mathematical signature of violence that seemed to cut across religious, geographic, and political boundaries. 
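The straight-line signature can be sketched numerically: draw event sizes from a power law, then recover the exponent with the standard continuous maximum-likelihood estimator. This is an illustrative sketch, not the method used in the research; `sample_power_law` and `mle_exponent` are hypothetical helpers, and 2.5 is used as the target exponent because that is roughly the slope Gourley's published work on insurgent conflict reports.

```python
import math
import random

def sample_power_law(alpha, n, xmin=1.0, seed=42):
    """Inverse-transform sampling from p(x) ~ x^-alpha for x >= xmin."""
    rng = random.Random(seed)
    return [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def mle_exponent(samples, xmin=1.0):
    """Continuous maximum-likelihood estimate of the power-law exponent alpha."""
    return 1.0 + len(samples) / sum(math.log(x / xmin) for x in samples)

# On a log-log plot these sizes form a straight line with slope -alpha;
# the estimator recovers that slope from the raw samples.
sizes = sample_power_law(alpha=2.5, n=50000)
print(round(mle_exponent(sizes), 2))  # close to the 2.5 used to generate the data
```

In practice the maximum-likelihood route is preferred over fitting a regression line to the log-log plot, which is visually intuitive but statistically biased in the sparse tail.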

 

Mike Walsh: What does that really mean? Does that mean there tends to be fewer, very high intensity conflict events, and lots of very small scale ones?

 

Sean Gourley: Yeah. A lot of distributions can give you that. A Gaussian distribution can give you a few big ones, some small ones, some very, very small ones. Log-normals can give you that, as well as different kinds of exponential functions. I think the thing that characterizes power laws is that there's almost no limit to the tail.

 

Mike Walsh: It's scale free.

 

Sean Gourley: It keeps going. We looked at this data and said, "There could be a group here that is capable of just tremendous amounts of violence, based on the growth and the size they were able to get to. We haven't seen it yet, but the model allows that to exist." 

Of course, now we do see it with ISIS. We see, effectively, a nation state having emerged from within that violence.

 

Mike Walsh: ISIS is at the very edge of that power law?

 

Sean Gourley: Right at the edge. It's the biggest thing that we've seen. But it certainly wasn't unexpected that such a thing could emerge, much like a magnitude nine earthquake off the coast of Japan.

 

Mike Walsh: It's almost like power laws are the fingerprint of networks. Because you also see them in the Web, the infrastructure of the internet, viruses, and the design of cities.

 

Sean Gourley: Yeah, power laws are the signature that you get when you have systems at a global scale. At a human scale, it's Gaussian and log-normal. At a global scale, it tends to be power laws. A power law is indicative of systems that are at criticality. Criticality is something you see with water. You can heat it up, and it gets warm; you can cool it down and it gets cold. That's fine, as long as it's within that kind of temperature range. But you heat water to a certain point, and it's not water any more.

 

Mike Walsh: There's a state change.

 

Sean Gourley: It changes, it becomes steam. You cool water down, and it changes into ice. The first time you see that, you're like, "Wait a second, my liquid just became solid." We're very used to it, but that's kind of like crazy. You see these phase changes. At phase changes in criticality, you get power laws emerging from that. When we're seeing these in our world, we should also be aware that, on global scales, the systems we build can tip in and out of criticality. 

 

Mike Walsh: Whether you're in business or in the military, getting your mind around the sheer complexity of the decisions that you have to now make is going to get increasingly difficult. I know one of the things you talk about is the difference between artificial intelligence and augmented intelligence. I love the story you tell about Garry Kasparov, and that turning point when he lost to Deep Blue. Can you share a little about what happened after that? Because I think that's really interesting.

 

Sean Gourley: Kasparov very quickly realized that, although he might get one more rematch against Deep Blue, and maybe he would be able to outsmart it that one time - for a human, the game was over. He knew enough to know that in a decade, humans weren't going to be competing against machines and winning in chess.

 

I think, quite smartly, he said, "Why would you want to compete against a machine when you could play with a machine?" Or posing it another way, "What type of system would play the world's best chess? Is it going to be a machine by itself? Is it going to be a team of humans? Is it going to be some sort of collective intelligence? Or is it going to be a combination of that?" 

 

He set up a competition called Freestyle Chess to test this out. In the first competition, 48 teams from around the world showed up online to play. They were broadly categorized into two groups: those that made the bet that the best strategy was the biggest, baddest computer, and that if the humans just plugged it in and left it alone, it would play the best chess.

 

Mike Walsh: This was the Headless team. 

 

Sean Gourley: The Headless team, exactly. They called themselves Headless because there was no human head guiding them. The other side was the Centaurs. The Centaurs believed that the machines were good, but they needed a bit of guidance. They gave you the horsepower, but they didn't give you the strategy, or the intuition. 

It really wasn't clear, going into that competition, who was going to win. But as the competition went on, it became very, very clear that the Centaurs were playing the best chess. So much so that the final four teams were all Centaurs. Three we knew: they were grandmasters with military-grade supercomputers. The fourth chose to keep themselves anonymous; they were called Zack-S. Zack-S, as it transpired, ended up winning the tournament. 

 

What was fascinating for me was that Zack-S weren't grandmasters, they were amateur chess players. They weren't playing on the best hardware in the world. In fact, they had three different AI systems running on consumer grade computers. When they were asked about it, they said, "We knew that this AI system performed better in this environment. We knew that this other one was better over here. When the system and the game moved itself into those places, we'd switch between our machines. Sometimes we'd ignore them all, and sometimes we'd play what we thought was best."

 

Mike Walsh: You had average players, with average computers, but a very good workflow.

 

Sean Gourley: That's exactly right. It was interesting, one of them was a soccer coach, which I thought was fantastic. And the other one was a database administrator. They were using one of their dads' computers, which they'd borrowed, because they didn't have three separate computers themselves. 

 

If you were to ask, "What system of intelligence is going to play the best chess in the world?" and someone had said, "Two amateur chess players and three consumer computers," you wouldn't believe it. Because we forget how important it is to interface with the machines. That interface is what makes for success or failure.

Mike Walsh: I think about this issue a lot in regard to leadership. It is still a very open question about what it takes to be an effective 21st century leader. Obviously, it's not enough just to have the technology and have the data, you're going to have to have the judgement to know when to listen and when not to listen to the systems that you have. 

 

Sean Gourley: Yeah, I think that's exactly right. The people that are obsessing about this now are the fundamental hedge funds. Not the Renaissances and the Two Sigmas of the world, but the Bridgewaters and the Blue Mountain Capitals. The big funds that are fundamental. They know that the investments they're making are not going to be purely automated like those of the quantitative funds. They are obsessing over how to plug their best analysts into the best machines.

 

Mike Walsh: They're not trying to solve the market now?

 

Sean Gourley: Well, people are. There's still Renaissance; they're still doing that. But the big fundamental funds, which still move hundreds of billions of dollars, are obsessing now about how to design systems to amplify the cognitive powers of the analysts making their bets. I think, as finance goes, so goes the rest of industry in terms of data and algorithms. These guys tend to front-run things.

 

Mike Walsh: Kevin Kelly put it well. He said that the people who will get paid the most in the future will be the ones who work best with machines. 

 

Sean Gourley: I think that's right. Actually, our relationship with machines will broadly be classified into three different things. There'll be those that can work with the machine and add something of value to the machine. 

 

Let's not kid ourselves. If I sit down with a chess playing supercomputer and decide I want to go against its best move, that's not in any way, shape, or form a good idea. I should listen to the chess computer, right? I don't know enough about the algorithms, or the game, or the data structures to question it. And I shouldn't, any more than I should question the output from a meteorological forecasting model.

 

You have to be pretty damn good to add anything to the best machines. But if you do add something, then the returns and the rewards are tremendous, because you've now got the best system in the world. So the difference between the best and being second best is huge, especially in games like finance. 

 

There's another kind of group, the second group, which will effectively be working for the algorithms. You see that with Mechanical Turk, for example.

 

Mike Walsh: Where they're feeding the data into the system?

 

Sean Gourley: No, effectively, you're doing what they call human intelligence tasks, which is a euphemism for humans doing things that computers haven't yet figured out how to do.

 

Mike Walsh: Is this someone driving an Uber car?

 

Sean Gourley: Not quite. It's more like solving CAPTCHAs to get logins to a web page, or classifying a picture as being safe for work or not safe for work. Things that computers struggle with, but humans can very quickly look at and make pattern recognition decisions. 

There's an interesting question around that - am I working to enhance the algorithm, or is the algorithm giving me stuff that it's not smart enough to do, and I'm working for the algorithm? I think that kind of dichotomy of what's our working relationship with these algorithms ...

 

Mike Walsh: … will determine our pay scales.

 

Sean Gourley: Absolutely. At the moment you get two or three cents per task if you're a Mechanical Turk worker. You get two or three million dollars if you're driving the augmented intelligence platform at Bridgewater. 

 

Mike Walsh: And the third category?

 

Sean Gourley: The third category of relationship is when we are simply treated as products by the algorithms. We're basically being bought and sold. Your presence online is sold through an online ad auction, in the space of about 150 milliseconds, to the highest bidder.
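That 150-millisecond sale can be sketched as a toy second-price auction, historically a common mechanism in real-time bidding; the bidder names and prices below are invented for illustration.

```python
# Toy second-price auction: the impression goes to the highest bidder, who
# pays the runner-up's price. A real ad exchange runs this end to end inside
# the ~150 ms budget mentioned above.
def run_auction(bids):
    """bids: {bidder: price}. Returns (winner, price_paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price_paid = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price_paid

print(run_auction({"dsp_a": 2.10, "dsp_b": 1.75, "dsp_c": 0.90}))  # ('dsp_a', 1.75)
```

The second-price rule is what lets bidders bid their true valuation without overpaying, which is why it dominated early programmatic advertising.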

 

Mike Walsh: Real time bidding...

 

Sean Gourley: Exactly. However, these algorithms are not making a lot of money. The algorithms that drive Facebook are making about $6 per user. Pennies. Our relationship as products to these algorithms is where we are today. However, the economics of that suggest that if we started paying more money, maybe several hundred dollars, we would move from being treated as a product to being treated as an owner. How would I have the algorithm work for me? How would I have it act in my interests, rather than acting to maximize my value as a product? 

 

Mike Walsh: In other words, the algorithms become an agent for you to make your life more interesting.

 

Sean Gourley: That's right. 

 

Mike Walsh: As opposed to us serving some abstract economic goal for a platform.

 

Sean Gourley: Instead of the information being shown to us to be bought and sold, we purchase. We say, "You know what? I'd like to be a better friend," or, "I'd like to be challenged more about my ideas," or, "I'd like my set of algorithms to go out and find me this information, summarize it, and give it back to me." 

 

When the algorithms work for you, and you're not being treated as a product, you have a very different relationship. I think economics, at $6, suggests that a lot of people will jump on that, and say, "Actually, I'll pay that, and have that experience where the algorithms work for me." I think we'll start to see that.

 

In fact, we see it already with virtual assistants. You have your assistant, it sends emails, and it's not a person. It's an algorithm that works for you and acts as an assistant. We're going to see more and more of that. 

 

Mike Walsh: This is more than just a buyer decision between “do not track," or, "I don’t want to be a product.” This is recognizing that we live in a data driven world, but you want the data to work for you.

 

Sean Gourley: That's right. It's the ownership model. With all of these things, to see where the technology goes, you've got to follow the economic path. We will increasingly start owning artificial intelligence, just as we'll start owning algorithms. We're seeing the very first emergences of that. That's where most of us will spend our time. We'll buy algorithms that will work for us. In the two tails, working for the algorithm at one end and augmenting the algorithm at the top end, that's where the extremes of the economic system will move.

 

Mike Walsh: I love that in the 21st century, working for “The Man” has been replaced by working for the algorithm.

 

Sean Gourley: That's right. If you're a factory worker now at Amazon, everything you have to do at every step is prescribed on a tablet. I don't know if they use Amazon tablets, I guess they probably do. That tablet tells you everything you have to do to put the thing you are holding in the tray. 

 

Mike Walsh: It's all time and motion studies.

 

Sean Gourley: It's Taylorism taken to the extreme. Interestingly, it's things that the computers or the robots can't do. They don't have the fidelity to do everything, but what they've found is there are enough humans out there that can operate cheaply enough that they don't need to learn to do everything. Hence, the “Mechanical Turking” of the world. 

 

Mike Walsh: In terms of augmenting people's intelligence, one of the incredible things you're doing now with Quid is giving people those tools in a very user accessible sort of way. Can you talk a little bit about what your vision is with Quid and where you plan to go with it?

 

Sean Gourley: With Quid, we're focused on the augmented intelligence space. For me, it's how do we give the people that have to make the most difficult decisions in the world a better set of tools to do that? It brings me back to my experience of sitting around the table at the Pentagon, being with some very, very smart people, but realizing that the complexity of the conflict was beyond what any one mind could have understood. Then realizing the tools that we had were like Excel, PowerPoint, and Google searches. No one had built a set of tools to allow people to navigate and make decisions effectively in this world. For me, that was like, "Wow, that's a place we can really make some contributions here." 

 

Then you start thinking about it. What does that machine look like? For us, it was about trying to give a high resolution view of the world and make it accessible to a lot of people. The kinds of things that we as data scientists would do with different kinds of clustering techniques, and transformations, and complex linear algebra, that would all wrap itself up into a bunch of machine learning algorithms. We wanted to abstract that away. 

 

Because, even if you have learned how to do that, which most of the world hasn't, the moment you're in that mathematical mindset, you're not in a creative exploration mindset. We wanted to abstract that layer away, and make the experience very visual, very tactile, and, ultimately, let people explore the data.  
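The kind of machinery being abstracted away can be sketched in a few lines: turn documents into term vectors, measure cosine similarity, and link the documents that clear a threshold, which is the raw material for clustering and network layout. The documents and the 0.3 threshold are toy illustrations, not Quid's actual algorithm.

```python
import math
from collections import Counter

def vectorize(doc):
    """Bag-of-words term vector."""
    return Counter(doc.lower().split())

def cosine(a, b):
    """Cosine similarity between two term vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "deep learning for image recognition",
    "neural networks and deep learning",
    "parliamentary elections and coalition politics",
]
vecs = [vectorize(d) for d in docs]

# Link documents whose similarity clears the threshold; connected groups
# become the clusters a visual layout would draw as neighborhoods.
edges = [(i, j) for i in range(len(docs)) for j in range(i + 1, len(docs))
         if cosine(vecs[i], vecs[j]) > 0.3]
print(edges)  # only the two machine-learning documents link up: [(0, 1)]
```

Production systems would weight terms (TF-IDF or embeddings) before measuring similarity, but the shape of the pipeline is the same.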

 

What we found was that, until you've explored it, you don't really know what you don't know. You don't know the questions that you want to ask. Or you're not aware of some of the complexities and nuances. When you give that to people, we hope that by plugging into Quid, you see the world in higher resolution, but you also come back with questions that are better than you started with. 

 

Mike Walsh: There's something very seductive and beautiful about your visualizations, because they resemble these organic networks that give you a sense of the topography of a problem, which allows you to see relationships you wouldn't have seen otherwise. 

 

Sean Gourley: It's the mapping. We used to live in maps. If you're an oil company, the map of the world is very important. "There's oil on the west coast of Africa, so we need to go and get that oil." "There's oil off the coast of Venezuela, we need to go and get that oil." The geographic map was very important in the oil industry.

 

But if you're an online payments company, or a social network, or a machine learning organization, or network security service, does the geographic map of the world make any sense? Put it another way, is that the first map that you'd go to? You go back and say, "No, I want a map for the current state of the technologies," or, "I want to map all the major players," or, "I want to map the actual network that's underlying this." 

 

What Quid does is it gives you a map of a higher dimensional world, but one that's ultimately very relevant for what you're doing. So I can get every single scientific paper published around artificial intelligence. I can plug into that. See the state of play. Come away knowing the key papers, the key trends, and also the white spaces where nothing's being done. I can then very quickly jump from that into all the news coming out in social media about Indian politics, and if I want, I can triangulate the two together. I'm probably the only person, then, who's understood AI and Indian politics at that breadth. I can then go and make decisions based on a better knowledge of that structure. 

 

Mike Walsh: So, is the real value simply learning what you don't know?

 

Sean Gourley: I always try and plug into Quid and jump into a data set where I'm like, "All right, this is one thing I think I've got. I'm pretty sure I've got this." It could be cricket, or it could be natural language processing, or something like that. "I've got this." I'll plug in and do that, because every time I plug in, I find stuff that I didn't know. I find that the reality is different from how it was in my head. I do that enough in spaces that I think I know that it keeps teaching me the lesson that I don't know what's going on in the world. 

 

Mike Walsh: They said Francis Bacon was the last human on earth to claim reliably that he had read every book ever written.

 

Sean Gourley: That's right. It's funny how much we think we know the space that we're in. I think the most dangerous thing, as an executive at a big organization, is thinking you know the landscape you're in, when the reality is your mental projection is off. You act with a certain confidence when you think you know the space. If you don't have that arrogance, you'd probably act in a way that's more befitting of the uncertainty. 

 

That said, as of now, there's no real excuse not to have a very detailed map of the space you're in. You plug into a platform like Quid, you hit a button, and then you're exploring it within a few seconds. 

 

Mike Walsh: What have been some of the use cases that your key clients have been using it for?

 

Sean Gourley: There has been some fascinating stuff in three areas. One is the advertising industry. We partner with Publicis, a big advertising agency with many different firms beneath it. For them, they're really fascinated by this idea. "We normally run focus groups. We go and ask people, what do you think of manliness as a concept? Because we want to sell some new deodorant and body spray." They'll bring people together, 30 people in a room. But they know that the people will say whatever you want them to say. It's expensive. And what they see, is it even really representative? They do it anyway, because they need to go and pitch the consumer product company their new vision for a narrative to define that company. 

 

That's the old way. 

 

The new way is they plug into Quid. They say, "Give me all news articles about manliness, tailored to this demographic. Let me see what narratives emerge." There's stuff there about bullying, and there's stuff there about football and military, and questions like - is Putin more manly than Obama? All of these things start to emerge. They're like, "Well, this is rich." They say, "Who owned each of these stories?" Because they can see each cluster and say, "Who owns this?" What you're looking for is a cluster of stories about manliness that no one owns. 

 

They say, "Wow," they can take that. It already exists. People already have it as an idea. No brand owns it. "We'll pitch that, and take this company, and go and own that open space." It's like finding oil that no one's seen before. 

 

Mike Walsh: They picked the Putin story, didn't they?

 

Sean Gourley: They didn't. They actually found out ...

 

Mike Walsh: …and now he's going to the new face of Old Spice, on a horse. 

 

Sean Gourley: What was he doing in that infamous photo? Riding a squirrel? But yeah, you get these wacky stories that are just not connected to anything. Like, misogyny in space is never a good idea. It was like, "That's just a random story that someone wrote." Often, it saves you doing what creatives normally do, which is coming up with drug-aided crazy ideas. 

The thing is - what would you know if you read every single story about manliness? What would you know? You would know where there were opportunities, and you would know where there were crazy ideas that were maybe just crazy enough to work. The future is driving through that, seeing that information, and converting it into something that's actionable. When you go in and pitch that company and say, "Here's our idea," they're like, "Wow. That's really good." But you already know it was, because the world's done the experiment for you.

 

You can flip from that right back to plugging into call transcripts from a credit card company with people on the help desk, and finding out that people get really annoyed when the numbers rub off, because they can't read them to type them into their computer. Fully six percent of all calls in the last 12 months were about the numbers being rubbed off a credit card, and not being able to read them when you try and enter them on your iPhone. 

From the very practical through to the very big picture, the data will tell us many stories, if only we find ways to see it all.

 

Mike Walsh: When you think about the availability of these tools over the next couple of years, who do you think the most valuable employees in an organization will be?

 

Sean Gourley: I think there's two types of employees. You've got to ask yourself, are you someone whose job it is to optimize the space you're in, or are you someone whose job it is to uncover new space? Or, are you zero to one, or are you one to 100? Both are very, very valuable. 

 

I spend a lot of time working with people whose job it is to go from zero to one. They have a very difficult role, because the chances of getting to one are low, and no one really believes that they can, yet they've still got to. They're the ones that have to create new products, open new markets, make new bets. They can be incredibly valuable, because they can be the ones that turn Apple from a computer company into a mobile technology company.

 

They can be the ones that may turn Google from a search company into a logistics company driven by self driving cars. Those bets can open up tens of billions of dollars of value. And I think that, ultimately, they are going to be the ones that are very important as we move into an even faster-changing technology landscape. 

 

How do we give those people, that have to go from zero to one, better tools? How do we give them data and platforms that allow them to make big swings and big bets, but reduce the risk? Because something as risky as one in a million is probably beyond us, but if you can use data to make it one in fifty, that may be a bet you really want to take. If data can point us in those directions, what seemed like a one-in-a-million bet becomes something we can stomach, because the reality is, once we've got the information, it's only one in fifty.
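That shift in odds can be made concrete with a back-of-the-envelope expected-value calculation; the payoff and cost figures below are hypothetical, chosen only to make the arithmetic visible.

```python
def expected_value(p_success, payoff, cost):
    """Expected net return of a bet that costs `cost` and pays `payoff` with probability `p_success`."""
    return p_success * payoff - cost

payoff = 10_000_000_000   # a "tens of billions" outcome, per the interview
cost = 50_000_000         # hypothetical cost of attempting the bet

print(expected_value(1 / 1_000_000, payoff, cost))  # deeply negative: not a bet to take
print(expected_value(1 / 50, payoff, cost))         # 150000000.0: a swing worth taking
```

The payoff never changed; only the estimate of the odds did, which is exactly the leverage better data gives the zero-to-one people.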

 

I think people that are able to use machines to effectively orientate themselves around complex spaces, and are very adept at knowing when the machines work and when they don't, when to trust themselves and when to defer to the machine, can ultimately better predict where the world is going to be by using the data about where the world is. They are going to be the ones that are able to unlock billions of dollars. That, if I was a betting man, is where I'd put my money. 

 

Mike Walsh: Sean, this is all fascinating stuff. Thanks for being on the show.

 

Sean Gourley: Cheers buddy, thanks.

 

Topics: Leadership
