WEBVTT

00:00.000 --> 00:05.000
Thank you.

00:05.000 --> 00:09.000
Yeah, thanks to everyone who's stuck around to the end of the day.

00:09.000 --> 00:12.000
I was going to make a joke about how I'm headlining foster,

00:12.000 --> 00:16.000
but based on the room size, I'm not going to make that joke anymore.

00:16.000 --> 00:20.000
So thanks for coming to my talk. It's about pygambit,

00:20.000 --> 00:23.000
which is an open-source software for game theory.

00:23.000 --> 00:26.000
It's part of the broader Gambit project.

00:27.000 --> 00:31.000
And yeah, my name's Ed, I work at the Alan Turing Institute in London.

00:31.000 --> 00:36.000
I'm a research data scientist and also a research software engineer.

00:36.000 --> 00:40.000
So some of you may have heard the talk earlier about that.

00:40.000 --> 00:45.000
So before I talk about gambit software itself, I need to introduce game theory.

00:45.000 --> 00:51.000
So game theory is this field of study that came out of economics all the way back in the 1950s.

00:52.000 --> 00:56.000
So it was pioneered by John Nash, and its relevance today,

00:56.000 --> 01:01.000
as well as economics, is that it's increasingly being used in modelling of cybersecurity,

01:01.000 --> 01:06.000
especially anything to do with AI and multi-agent systems.

01:06.000 --> 01:12.000
And game theory is defined as mathematical models of what we call strategic interactions,

01:12.000 --> 01:14.000
between players.

01:14.000 --> 01:18.000
So strategic interactions between players, which we call games.

01:19.000 --> 01:24.000
And the players in these games can be humans, as was the case obviously,

01:24.000 --> 01:27.000
in the original use case of economics.

01:27.000 --> 01:33.000
But that can also be extended to animals; there are ecologists that use game theory,

01:33.000 --> 01:36.000
and of course, AI agents.

01:36.000 --> 01:43.000
So the players of the games are basically any sort of intelligent agent that can form strategies.

01:43.000 --> 01:51.000
So gambit is a software project that includes tools for doing game theory.

01:51.000 --> 01:56.000
In particular, it focuses on finite non-cooperative games.

01:56.000 --> 02:01.000
So these are games that have an end, so they're finite.

02:01.000 --> 02:07.000
At the end of the game, each player has a payoff associated with it.

02:07.000 --> 02:13.000
And non-cooperative means the players are adversarial; they're competing with one another.

02:13.000 --> 02:19.000
As I mentioned, it's used in a variety of domains. In terms of the software,

02:19.000 --> 02:22.000
I'm going to be talking about pygambit, which is the Python package,

02:22.000 --> 02:29.000
which is built on top of C++, and there are CLI tools and a graphical user interface.

02:29.000 --> 02:36.000
And the project is quite old, so I don't know if anyone remembers the 1980s.

02:36.000 --> 02:44.000
I don't, but it's been going that long, and some of the people I'm working with have been working on it for that long.

02:44.000 --> 02:50.000
But more recently, the work has been taking place at the Alan Turing Institute,

02:50.000 --> 02:56.000
and in particular, we've been working on pygambit, which is what I'm talking about in this talk.

02:56.000 --> 03:01.000
But just to note, we've also been working on a project involving large language models,

03:01.000 --> 03:08.000
so that project is about translating natural language descriptions of games into the game code.

03:08.000 --> 03:12.000
The game code is pygambit code, which is what we're talking about now.

03:12.000 --> 03:16.000
And also, of relevance to this conference and some of the talks earlier,

03:16.000 --> 03:22.000
we want to work on open source community building, because it is an open source project.

03:22.000 --> 03:29.000
So pygambit is the Python package associated with Gambit, as I mentioned.

03:29.000 --> 03:36.000
It basically has two core features. It allows you to construct games in a few lines of code,

03:36.000 --> 03:39.000
and then it has a library of algorithms.

03:39.000 --> 03:45.000
And one set of algorithms is used to compute what we call Nash Equilibria,

03:45.000 --> 03:52.000
so Nash is the person you saw earlier. A Nash equilibrium is basically a scenario in a game

03:52.000 --> 03:56.000
where none of the players has any incentive left to deviate from their strategy,

03:56.000 --> 04:03.000
so they're at equilibrium, and I'll walk you through an example of that in a moment.

04:03.000 --> 04:06.000
So what I'm going to talk about is exactly those two things.

04:06.000 --> 04:11.000
I'm going to show how you can construct a game in pygambit and how you can compute the Nash equilibria.

04:11.000 --> 04:21.000
And I'm also going to demonstrate how the package is interoperable with some of the other packages in the Python scientific computing ecosystem,

04:21.000 --> 04:27.000
in particular one from DeepMind, called OpenSpiel, which is also all about game theory.

04:27.000 --> 04:30.000
Okay, so let's switch to the live demo.

04:30.000 --> 04:33.000
So hopefully you're all still awake.

04:33.000 --> 04:41.000
and can try to pay attention to this. I'm going to demonstrate pygambit in action through a couple of example games.

04:41.000 --> 04:46.000
The first thing I'm going to demonstrate is that first core feature, which is how to construct a game.

04:46.000 --> 04:52.000
And I'm going to demonstrate that both with pygambit and, as you can see here at the bottom, the draw tree package,

04:52.000 --> 04:58.000
which we've built specifically to help visualize games that are built with pygambit.

04:58.000 --> 05:06.000
So I'm going to talk you through this very simple game, which is what's called an extensive form game.

05:06.000 --> 05:08.000
That's the game theory terminology.

05:08.000 --> 05:11.000
As you'll see, basically that just means tree structure.

05:12.000 --> 05:14.000
The game is called stripped down poker.

05:14.000 --> 05:19.000
That's not to be confused with strip poker, which is an entirely different thing.

05:19.000 --> 05:21.000
Pause for laughter, thank you.

05:23.000 --> 05:26.000
Okay, so basically there's two players.

05:26.000 --> 05:28.000
We're going to call them Alice and Bob.

05:28.000 --> 05:31.000
And the start of the game is there's a deck of cards.

05:31.000 --> 05:39.000
There's only two types of cards, King and Queen, and there's 50% King cards, 50% Queen cards, King is better than Queen.

05:39.000 --> 05:43.000
Okay, so player one is Alice, and we begin the game.

05:43.000 --> 05:49.000
The card is dealt at random to Alice, and she gets to see whether it's a King or a Queen.

05:49.000 --> 05:55.000
Bob, who is player two, does not observe the card; that's very important.

05:55.000 --> 06:00.000
So the second thing is that Alice then gets a decision.

06:00.000 --> 06:04.000
She has to choose either the bet action or the fold action.

06:04.000 --> 06:10.000
If she chooses to fold, obviously she loses, and Bob wins what's in the pot.

06:10.000 --> 06:15.000
If she chooses to bet, then she'll add another $1 to the pot.

06:15.000 --> 06:21.000
Now, of course, she could be betting on the King, which would be a genuine bet,

06:21.000 --> 06:25.000
or she could be bluffing because she's betting on the Queen.

06:25.000 --> 06:31.000
So if Alice did bet, Bob then gets a decision.

06:32.000 --> 06:35.000
He then either calls or folds.

06:35.000 --> 06:39.000
Of course, again, if he folds, then Alice wins and the game ends.

06:39.000 --> 06:43.000
If he chooses to call, then he's adding his $1 to the pot.

06:43.000 --> 06:52.000
And then if he got to that point where he's called Alice's raise, then if Alice had the King, then she's the winner.

06:52.000 --> 06:58.000
But if it was the Queen and she was bluffing, then Bob's the winner.

06:58.000 --> 07:04.000
Okay, so let's see how we can model that simple game with pygambit code.

07:04.000 --> 07:06.000
Can everyone read the code?

07:06.000 --> 07:08.000
Alright, is it big enough?

07:08.000 --> 07:09.000
I'm seeing nods.

07:09.000 --> 07:10.000
Okay, good.

07:10.000 --> 07:15.000
Okay, so the first thing we're going to do is create the game object in Pygambit.

07:15.000 --> 07:18.000
So very simply, we're going to call Game.new_tree.

07:18.000 --> 07:22.000
We've added our players, Alice and Bob, and given it a title.

07:22.000 --> 07:27.000
And the important thing to note straight away is that when we print the list of players here,

07:27.000 --> 07:29.000
there's actually three players.

07:29.000 --> 07:32.000
So we've got Alice and Bob, the players that we've created.

07:32.000 --> 07:34.000
But we've also got this chance player.

07:34.000 --> 07:39.000
And this is how Gambit models the element of chance.

07:39.000 --> 07:42.000
We model it as a chance player.

07:42.000 --> 07:46.000
And what that actually means is that the very first action in the game,

07:46.000 --> 07:50.000
which we can create by using this append_move function,

07:50.000 --> 07:55.000
We're going to append it to the root node of the game, which is all we have so far.

07:55.000 --> 07:59.000
But the player that we're going to set it to is the chance player.

07:59.000 --> 08:05.000
So the first action of the game, we're going to append to the chance player.

08:05.000 --> 08:13.000
And essentially, it's going to be either a king or a queen because that's what's being dealt from the deck of cards.

08:13.000 --> 08:18.000
Each of those is going to be with probability 50% one half.

08:18.000 --> 08:22.000
And then we get the start of our game tree, which is very simply like this.

08:22.000 --> 08:27.000
But the start of the game, and then after either that king or queen was dealt,

08:27.000 --> 08:32.000
we've now got two new nodes in our Gambit game object.

08:32.000 --> 08:40.000
Okay, so how do we then add Alice's (player one's) moves to the game?

08:40.000 --> 08:45.000
So in this case, what we're going to do is iterate through the children of the root node,

08:45.000 --> 08:48.000
which are these two we've just constructed.

08:49.000 --> 08:52.000
And we're going to add our actions of bet and fold.

08:52.000 --> 08:55.000
And we're going to do that for both of those.

08:55.000 --> 09:07.000
And if we do that now, you can see that we've independently, for each of those child nodes of the root node, added Alice's actions.

09:07.000 --> 09:15.000
So that's important: as we'll come to see, those are two distinct, what we call in game theory, information sets.

09:15.000 --> 09:20.000
And this is different when we come to Bob player two.

09:20.000 --> 09:26.000
Because if you remember earlier, we said that Bob doesn't know what card was dealt at the start of the game.

09:26.000 --> 09:30.000
So in this case, we're going to call the append move function again.

09:30.000 --> 09:40.000
But instead of looping through each of the nodes, we're just going to call it once and pass in a list of the two nodes to apply it to.

09:40.000 --> 09:46.000
So when we do this, we generate what looks almost the same.

09:46.000 --> 09:52.000
But instead, we've got the two nodes at which Bob has to make a decision circled.

09:52.000 --> 09:57.000
And that is what's called in game theory an information set.

09:57.000 --> 10:02.000
It basically tells us that when Bob is making a decision, he doesn't know whether he's here or here.

10:02.000 --> 10:05.000
He doesn't know whether the king or queen was dealt.

10:05.000 --> 10:09.000
All he knows is that Alice has bet.

10:09.000 --> 10:11.000
Okay.

10:11.000 --> 10:20.000
So the final step of constructing our game is that we need to add outcomes to all of the terminal nodes of the game.

10:20.000 --> 10:24.000
So there are four possible outcomes that we have in our game.

10:24.000 --> 10:34.000
The first one, which we're labeling from Alice's perspective, player one, is the big win, where she's got two dollars and Bob has lost two.

10:34.000 --> 10:39.000
And there's another option where she's just won one dollar.

10:39.000 --> 10:44.000
And then she can also lose two, or lose just one.

10:44.000 --> 10:47.000
So we can add all of them with the add_outcome function.

10:47.000 --> 10:54.000
But then we also need to assign all of those outcomes to the terminal nodes in the game tree,

10:54.000 --> 10:58.000
which if we look at the tree, there's actually six of them not just four.

10:58.000 --> 11:02.000
So there's four different fold nodes and those two others.

11:02.000 --> 11:09.000
So we assign each of those outcomes to their appropriate terminal nodes.

11:09.000 --> 11:12.000
And look at our final game.

11:12.000 --> 11:20.000
We've got our stripped-down poker game modelled entirely.

11:20.000 --> 11:24.000
So that's how you construct a game in pygambit.

11:24.000 --> 11:28.000
I'll also just mention this package here, draw tree, that we've developed.

11:28.000 --> 11:36.000
So for the software engineers in the room: basically, under the hood, this is generating LaTeX code,

11:36.000 --> 11:42.000
and specifically TikZ graphics, which is a drawing package for LaTeX.

11:42.000 --> 11:49.000
And even more specifically, it's using the Jupyter TikZ widget to render this nicely.

11:49.000 --> 11:56.000
OK, but back to understanding this game and what pygambit can do.

11:56.000 --> 12:02.000
So as I mentioned, the other thing pygambit does, as well as constructing games, is solve them.

12:02.000 --> 12:04.000
So what does that mean?

12:04.000 --> 12:09.000
It means we can compute the Nash equilibria. To remind you what that is:

12:09.000 --> 12:19.000
equilibria are stable situations where no player has an incentive to deviate from the strategy that they play.

12:19.000 --> 12:24.000
So gambit has a range of algorithms to do this.

12:24.000 --> 12:34.000
I'm just going to show one of them here, which is lcp_solve, a linear complementarity problem algorithm.

12:34.000 --> 12:39.000
And we can just call that in one line of code, and we've got the answer.

12:39.000 --> 12:46.000
OK, but it gives you back this NashComputationResult object, which is a bit hard to understand.

12:46.000 --> 12:50.000
So let's take a little look at what's actually going on here.

12:50.000 --> 12:59.000
If we look at the equilibria of the result, we can see that there's just one that was found for this simple game.

12:59.000 --> 13:12.000
And if we index into the equilibria part of the result object, what we get is something called a mixed behavior profile.

13:13.000 --> 13:26.000
So what's a mixed behavior profile? It basically tells us, for each information set, the probability distribution over the actions at that information set.

13:26.000 --> 13:31.000
So that's probably quite hard to understand, so I'll try and illustrate that a bit more clearly.

13:31.000 --> 13:42.000
If we look at the mixed behavior profile for Alice, player one, what we've got here is one, zero, and then one third, two thirds.

13:42.000 --> 13:51.000
OK, so what does that mean? I'm just going to run through some code here that explains that a bit more clearly.

13:51.000 --> 14:02.000
Basically, at the first information set, which we'll call information set zero, Alice plays bet with probability one and fold with probability zero.

14:02.000 --> 14:09.000
At the second information set Alice plays bet with probability one third and fold with probability two thirds.

14:09.000 --> 14:17.000
That's what this means, and to explain that even more, if you think about it, this is the scenario where the king was dealt.

14:17.000 --> 14:25.000
So essentially, her strategy according to the equilibrium, when she's got the king, is to always bet.

14:25.000 --> 14:32.000
When she's dealt the queen, she's going to bluff one third of the time, and she's going to fold two thirds of the time.

14:32.000 --> 14:43.000
So that's what that means, and that's how you can explore the strategy that was given to the first player, according to this equilibrium that Gambit has computed.

14:43.000 --> 14:54.000
If we now look at player two, Bob, we can see he only has one information set, as you remember from earlier.

14:54.000 --> 15:02.000
He's always going to play call with probability two thirds and fold with probability one third.

15:02.000 --> 15:11.000
So that's his strategy. He has no incentive to deviate from it, according to this equilibrium that Gambit has calculated for this particular game.

15:11.000 --> 15:19.000
Okay, cool. So that's how you can construct a game with pygambit and also compute the equilibria.

15:19.000 --> 15:34.000
I'm going to show a little bit now about interoperability with this other package called OpenSpiel, which is a Python and C++ package developed by DeepMind.

15:35.000 --> 15:45.000
We're going to use the same game again to begin with. As I said, this package is developed by DeepMind, and it's also used for game theory.

15:45.000 --> 15:53.000
So we can very easily load our game that we defined in pygambit into pyspiel.

15:53.000 --> 16:04.000
And what this allows us to do, maybe after I zoom out a little bit to get it all on screen, is step through the game.

16:04.000 --> 16:20.000
So where Gambit allowed us to construct the game and compute equilibria, OpenSpiel is a package that's used for reinforcement learning and other kinds of ways of iteratively playing games.

16:20.000 --> 16:33.000
And so it has this idea of a game state, so we can start with our initial game state and then all that this code is doing is sort of iterating through the game.

16:33.000 --> 16:49.000
And it's saying: if it's a chance node (remember, in our game the very first action is taken by the chance player), it's just going to pick the king or the queen according to the probabilities.

16:49.000 --> 16:58.000
And for everything else after that, it's just going to pick at random one of the possible actions.

16:58.000 --> 17:03.000
So I ran that code and we've got the simulation of the game here.

17:03.000 --> 17:08.000
A queen was dealt, Alice picked fold and the game ended with Alice losing.

17:08.000 --> 17:12.000
If we run it again, let's try and get a different answer.

17:13.000 --> 17:27.000
Maybe we'll get, yeah, there we go. The king was dealt, and Alice stupidly folded, which is not what she would have done if she was playing the equilibrium strategy we looked at earlier.

17:27.000 --> 17:35.000
But yeah, that's just the way that, with OpenSpiel, you can step through the game and simulate playthroughs.

17:35.000 --> 17:48.000
This is important because, unlike Gambit, where what you're doing is just defining a game and then computing the equilibrium, for much larger games that becomes computationally intractable.

17:48.000 --> 17:57.000
You can use OpenSpiel to do things like train reinforcement learning agents to learn strategies through self-play.

17:58.000 --> 18:06.000
So player strategies can change over time as these games are played, and you may eventually reach an equilibrium.

18:06.000 --> 18:12.000
So what I'm going to try and do, I think we've got time, I've not seen any time signs come up.

18:12.000 --> 18:15.000
I'm not paying attention to anything.

18:15.000 --> 18:17.000
Okay.

18:18.000 --> 18:21.000
Well everyone's riveted, so we'll carry on.

18:21.000 --> 18:27.000
So I'm going to demonstrate, yeah, as I mentioned, OpenSpiel would be used for much larger games.

18:27.000 --> 18:36.000
But I'm going to demonstrate how you might iteratively get to an equilibrium via OpenSpiel

18:36.000 --> 18:39.000
that looks similar to what you were getting from Gambit.

18:39.000 --> 18:47.000
So to do that, I'm going to load an even simpler game, which is from the pyspiel library: matrix_rps,

18:48.000 --> 18:54.000
that's literally rock paper scissors. So in gambit, that literally looks like this.

18:54.000 --> 18:59.000
What we were looking at previously was an extensive form tree game.

18:59.000 --> 19:02.000
Now we're looking at a normal form table game.

19:02.000 --> 19:04.000
This is not very hard to understand.

19:04.000 --> 19:08.000
If player one plays rock against rock, it's a draw.

19:08.000 --> 19:14.000
If they play rock against paper, they lose, and yeah, it's very simple to understand.

19:15.000 --> 19:18.000
Hopefully you'll know rock paper scissors.

19:18.000 --> 19:31.000
So again, we can simulate a playthrough with OpenSpiel by setting the state and then simply applying actions, with player one playing zero,

19:31.000 --> 19:39.000
which is rock, and player two playing one, which is paper, and player two, the second player, wins.

19:39.000 --> 19:58.000
Okay, so if we plug that game into Gambit's lcp_solve, very intuitively, the equilibrium strategy that it finds for both players is that they're both going to play rock, paper, and scissors exactly one third of the time each.

19:58.000 --> 20:00.000
Okay, makes sense, right?

20:00.000 --> 20:12.000
So let's now look at OpenSpiel's replicator dynamics module to model how player strategies, that is, the choice of rock, paper, or scissors

20:12.000 --> 20:20.000
with probabilities x, y, and z, can evolve over time based on how they perform against one another.

20:20.000 --> 20:26.000
And we can see how they will converge on the very same equilibrium that Gambit computed.

20:26.000 --> 20:42.000
So to simulate this, I'm going to start with a situation where both players are playing a strategy of rock 30% of the time, paper 30% of the time, and scissors 40% of the time.

20:43.000 --> 21:05.000
So ignore this plotting code, but very simply, if we start with both players playing rock and paper 30% of the time and scissors 40% of the time, what you can see is that playing rock becomes a good strategy,

21:05.000 --> 21:09.000
because the other player is playing scissors more often.

21:09.000 --> 21:22.000
And so over the time steps, rock is chosen more and more often as the players change their strategies to respond to the rewards that they're getting.

21:22.000 --> 21:31.000
And as rock becomes really popular, what that means is that paper actually becomes a better strategy, because the other player is choosing rock most often.

21:31.000 --> 21:36.000
And so that becomes the chosen strategy that goes up in frequency.

21:36.000 --> 21:39.000
And then what you get is that oscillation over time.

21:39.000 --> 21:46.000
It's all around this one third, which is the equilibrium, but it never quite meets it.

21:46.000 --> 21:56.000
But if we simply just plot the average over time, what you see is that clearly we are approximating that equilibrium.

21:57.000 --> 22:05.000
Anyway, that was just a fun way of showing that these two packages are interoperable and how with a very simple game.

22:05.000 --> 22:11.000
You can use this iterative approach to come to the same conclusions you might have reached with Gambit.

22:11.000 --> 22:17.000
And if we plug in the actual equilibrium, the strategies never deviate.

22:17.000 --> 22:20.000
Okay.

22:20.000 --> 22:25.000
So that was a very brief introduction to pygambit and some associated packages.

22:25.000 --> 22:29.000
So it is an open source project; otherwise I wouldn't be here.

22:29.000 --> 22:31.000
And new contributors are welcome.

22:31.000 --> 22:39.000
Please do check out our website, which has links to documentation, where you can find all of that information.

22:39.000 --> 22:43.000
Just a little bit about the project and what's coming next.

22:43.000 --> 22:51.000
So there's a lot of work that I've done this year in updating the project documentation for Gambit.

22:51.000 --> 22:57.000
There's a bunch of nice tutorials online from which this material has been derived.

22:57.000 --> 22:59.000
But there's going to be more of that.

22:59.000 --> 23:06.000
A lot of the postdocs that I work with who are the proper game theory people, not software engineers like me.

23:06.000 --> 23:12.000
They've been working on increasing the algorithmic efficiency of some of those Nash computation algorithms that you can see listed here.

23:12.000 --> 23:20.000
We're going to carry on working on this interfacing with other tools in the scientific computing ecosystem.

23:20.000 --> 23:27.000
And a big feature that we're working on at the moment is just making a nice library or catalog of games.

23:27.000 --> 23:33.000
So there are plenty of examples that users can easily load when they start using the package.

23:33.000 --> 23:38.000
And so that's all. I'll take any questions. Thank you.

23:38.000 --> 23:48.000
Yes.

23:48.000 --> 23:57.000
[Audience question, partially inaudible.]

23:57.000 --> 24:03.000
The question was: how do you know pygambit can solve the game in a reasonable time frame?

24:03.000 --> 24:13.000
That's a good question. I don't know the answer to that. I think the best answer, as a user, would be to just give things a try.

24:13.000 --> 24:23.000
But the people I work with who are the experts in all of these algorithms have certainly themselves recognized plenty of inefficiencies.

24:23.000 --> 24:30.000
And that's why I mentioned that one of the things they're working on at the moment is trying to improve the performance as much as possible.

24:30.000 --> 24:34.000
But yeah, I think the only answer I can give is: try stuff.

24:34.000 --> 24:51.000
Certainly another thing I'll mention is that part of the reason I talked about this interoperability with OpenSpiel is that one thing people have noticed is that these algorithms fall down for really large games.

24:51.000 --> 24:55.000
What do I mean by really large games? I'm not being particularly precise about what I mean by that.

24:55.000 --> 25:03.000
But some of the people I worked with did a project on generating games through prompting LLMs.

25:03.000 --> 25:16.000
And so they came up with some very big, complicated games which they could give to the code-generation LLMs, and the LLMs were able to output pygambit code that could construct a game.

25:16.000 --> 25:21.000
And that wasn't difficult, but of course with some of those really large games that came out of our project,

25:21.000 --> 25:27.000
These algorithms fell down, so that is an ongoing area of research.

25:27.000 --> 25:34.000
What is a key real use case scenario, for example in economics?

25:34.000 --> 25:39.000
Yeah, so I'm also not the domain expert on this.

25:40.000 --> 25:47.000
Yeah, I mean, the example I have is not from economics, but from war gaming; that's another one.

25:47.000 --> 25:52.000
It's not a concrete example. I'm just saying another field.

25:52.000 --> 26:07.000
So also, some of my colleagues are working on combining traditional game theory and reinforcement learning strategies for modeling cyber attacks

26:07.000 --> 26:13.000
and attacker-defender scenarios, so that's another big area of research at the moment.

26:13.000 --> 26:31.000
But yeah, I can't give you a list of papers, but hopefully, by visiting the Gambit website, you'll be able to find out.

26:32.000 --> 26:36.000
Okay. Cool.

26:36.000 --> 26:37.000
Thank you.

26:37.000 --> 26:38.000
Thank you very much.

