Interview: Steve Collins, Chief Technology Officer at King, discusses how the company is implementing AI tech

With King currently celebrating its 20th anniversary, we were invited to the company's London office to learn about the history and future of its games. While there, we were treated to several talks, including one from Steve Collins, Chief Technology Officer at King, all about AI.

AI is a hot topic at the moment, with people excited and terrified, in equal measure, about how it could change our lives. During the event, I spoke with Steve about how King is incorporating this rapidly evolving technology and how he believes it might change over the coming months and years.

Could you introduce yourself and your role at King to our readers, please?

I'm Steve, CTO here at King. I've been at King for nearly four years now. My responsibility covers the Central Technology Platform and King's tech strategy. The Central Tech Platform is what all of our games run on, covering everything from AI and machine learning to our data, the game engine, and the live operations tools we use.

During your presentation today, you spoke a lot about AI. Could you attempt to sum up everything that you're using it for here at King?

That's a big question. I will try to summarise, but I might need ChatGPT. We think about it in two or three different ways. One is traditional AI, which we're increasingly taking advantage of to create and operate our games more efficiently. And when I say traditional, I mean pre-Large Language Model, pre-ChatGPT and things like that.

We started investing in AI back in 2016. And then, last year, we acquired a company called Peltarion, which allowed us to significantly increase the team we have at King to do AI and machine learning. We now have 50 people specifically dedicated to that.

What they're doing is going in, as a team of AI consultants or experts, to talk to all the different parts of our business and find opportunities to use AI. We then have another team looking at all of those things and figuring out which technology we need to put in place to make that happen, and how we connect that technology to our data.

The last thing we have is a team focused on education and training. Because things are moving so quickly, it's really important for everybody at King to at least have some understanding of what's happening. To start thinking about how, over time, those technologies might impact their roles and make them easier or bring new capabilities to them and make them more productive.

Was last year's AI boom a surprise to you? Or, since you were already investing in this tech, were you aware that it was coming?

I think we had a fair idea of the pace of change because, while things like ChatGPT became very public very quickly, the technologies behind them have been known for quite some time. It's effectively based on a paper that Google released in 2017 - the Transformer paper.

So, in a sense, there's a pre-knowledge of that. But I would say that the speed at which this has become public took everyone by surprise. And also the speed at which the capabilities have improved, including at OpenAI.

I think last year, when they released ChatGPT and GPT-3.5, the difference in capability it had over what came the previous year was extraordinary. And then, in March 2023, they released GPT-4, which is another leap ahead. That's probably the cream of the crop right now in terms of Large Language Models.

And in the meantime, we've seen technologies like generative AI – DALL-E 2 and now DALL-E 3, or Midjourney – showing incredible capabilities for content creation. That's something the entire industry is watching closely because, just as these things are improving in quality and performance, we have an increasing number of questions around the legality of it all. Who owns this content? Can it be copyrighted? All of that needs to get figured out before it can become generally useful.

Ultimately, what does everything you're doing translate to for players? From my perspective, it seems they would get a better experience and may never know that AI had a hand in that.

In a sense that would be the ideal. The game should just get better and more fun. It's a game. It's supposed to be fun. I think if that can happen completely transparently for the player, we've done our job well.

It's a bit like the analogy in movies – if the special effect is so good, you don't know it's a special effect, then it's doing its job. I think you can still spot most special effects, but sometimes they're truly awesome. That's what we'd like here. We want players to play and enjoy our games. And increasingly, we're going to do what we can to get out of the way of that and to make great experiences for them.

Was there any trepidation from employees when you announced plans to increase the use of AI?

Any time technology moves rapidly, I think everyone takes a step back and tries to understand, 'What does this mean for me?' And that's absolutely understandable. But I think what we're seeing now is people leaning into this.

We did some internal pilots where we made this technology available to some of our coders and artists. And then asked them afterwards, 'Is this exciting to you? Do you want to continue using it?' '100%. Yes. This is really great. It's making my job easier, my life easier.'

What we're finding is that these tools are enabling more focus for the team using them. It means they're spending more time doing the things they actually enjoy doing, which is design, architecture, being creative, and making real decisions as opposed to iterating and iterating.

A coder, for example, one of our software engineers, will spend an awful lot of time writing tests. They'll write some great code and then lots of tests to make sure that code works. Those tests are things that could increasingly be automated, and that's a great use for some of these technologies. So, in general, there's great excitement now about how these might come to pass.
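
To make that concrete, here is a minimal, hypothetical sketch of the kind of routine test scaffolding an AI coding assistant might draft for an engineer to review. The function and test names are invented purely for illustration and are not taken from King's codebase or tools.

```python
# Hypothetical example: the sort of repetitive unit tests an AI coding
# assistant might generate, leaving the engineer to review rather than
# write them by hand. All names here are invented for illustration.
import pytest


def score_move(base_points: int, combo_multiplier: float) -> int:
    """Score a single move, rounding down to whole points."""
    if base_points < 0:
        raise ValueError("base_points must be non-negative")
    return int(base_points * combo_multiplier)


# Tests like these are tedious to write manually but straightforward
# for an assistant to draft from the function's signature and docstring.
def test_score_move_applies_multiplier():
    assert score_move(100, 1.5) == 150


def test_score_move_rounds_down():
    assert score_move(10, 1.25) == 12


def test_score_move_rejects_negative_points():
    with pytest.raises(ValueError):
        score_move(-5, 2.0)
```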

At this point in time, what do you think humans bring that AI simply can't?

I mean, on one level, that's a very philosophical question. But, at a purely practical level, these AIs are still far away from being generally useful. They're very good at doing specific tasks, and Language Models are particularly good at manipulating language, almost seeming to have a degree of intelligence.

But they don't in many respects. I think there's a long way to go before we know exactly what's happening, have full control of the outcome, and can guarantee the results are correct. Language Models are famous for hallucinating, as you're probably aware. So, there are no guarantees of correctness.

So, in a sense, you're still back to the situation where you have to be an expert to use it. You have to know what the right answer is in order to use it. And then it can be productive. So I still see them as being great augmentation tools, but they're miles and miles away from being generally useful.

I remember, in the presentation, you mentioned AI could suggest something that makes a level 10% harder or easier. At the moment, I'm guessing that might not always be a good suggestion.

Absolutely. And you still have to remember that we have many different types of players, so that might be true for a certain segment of our player base, but it won't be true for everyone. That's where, increasingly, we're trying to get more sophisticated in answering questions like that.

We can ask, 'Is this 10% harder for everybody, or is it just 5% harder for these players and 50% harder for those?' So it becomes more nuanced over time. And even then, you're only really dealing with a probability. It's not an absolute, because different players are going to do different things. They're going to make different choices.
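
As a toy illustration of that kind of segment-level reasoning (the segment names and completion rates below are invented numbers, and King's actual models are certainly far more sophisticated), a difficulty change can be expressed as the relative drop in each segment's estimated completion rate:

```python
# Toy sketch of per-segment difficulty analysis. The segments and
# completion rates are invented numbers, purely for illustration.

# Estimated probability that a player in each segment completes the
# level, before and after a proposed tweak.
pass_rates = {
    "casual":  (0.60, 0.57),    # roughly 5% harder
    "regular": (0.45, 0.405),   # roughly 10% harder
    "expert":  (0.30, 0.15),    # roughly 50% harder
}

for segment, (before, after) in pass_rates.items():
    # Relative drop in completion rate, i.e. how much "harder"
    # the level got for this segment of players.
    harder = (before - after) / before
    print(f"{segment}: ~{harder:.0%} harder")
```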

Naturally, it's difficult with technology, but if you had to predict what's going to happen in the next few months and years, what do you think will happen?

I really believe that Large Language Models, in particular, are going to be great productivity tools for many of the people in our organisation. I think it's going to be a number of years before we have certainty about that - before we have really clear guidance on when they can and shouldn't be used. That's going to require a lot of conversations, changes and regulation. But I know it's going to have an impact. I think everyone's role, everyone's job, is going to be made slightly easier and, in some cases, way better as this technology rolls out.

Stephen Gregson-Wood
Stephen brings both a love of games and a very formal-sounding journalism qualification to the Pocket Gamer team.