WEBVTT

00:00.000 --> 00:15.000
All right, welcome everyone, great to see so many people.

00:15.000 --> 00:21.000
I'm Jo from KU Leuven in Belgium.

00:21.000 --> 00:28.000
I'm a professor there, and joining me for this talk later is my student, who recently started a PhD with us.

00:28.000 --> 00:31.000
And during his master's thesis, he worked with me on this project.

00:31.000 --> 00:34.000
That's what I want to talk a bit about today.

00:34.000 --> 00:39.000
So this dev room kind of has a long history.

00:39.000 --> 00:43.000
And in one of the first editions, this was not called confidential computing, right?

00:43.000 --> 00:45.000
It was called trusted execution.

00:45.000 --> 00:48.000
And so what I want to talk about today is trust.

00:48.000 --> 00:51.000
And I really like this XKCD image.

00:51.000 --> 00:55.000
And when we talk about trust, because it shows very visually

00:55.000 --> 00:59.000
that in computer systems, trust is very much a vertical thing, right?

00:59.000 --> 01:04.000
You can see how if something goes wrong here at the bottom of the stack, the whole thing falls down, right?

01:04.000 --> 01:08.000
And we are engineers, we all know this way too well.

01:08.000 --> 01:14.000
But if you really want to think more about trust and this idea that you depend on

01:14.000 --> 01:18.000
the operating system and the compiler and so on, I very much recommend having a look at the

01:18.000 --> 01:21.000
Turing Award lecture of Ken Thompson.

01:21.000 --> 01:26.000
Ken Thompson, I think you all know him, developer of Unix,

01:26.000 --> 01:31.000
inventor of the B programming language, which preceded C.

01:31.000 --> 01:37.000
And it's very nice, he had a talk, and it's also a nice little paper, only a couple of pages.

01:37.000 --> 01:44.000
He really reasons about what happens if you start implementing backdoors and malicious code

01:44.000 --> 01:48.000
in lower layers of the system stack, let's say, in compilers in this case.

01:48.000 --> 01:52.000
And he concludes somewhere at the end of this couple of pages.

01:52.000 --> 01:54.000
It's written in a very accessible style.

01:54.000 --> 01:56.000
So he says, the moral is obvious.

01:56.000 --> 02:00.000
You cannot trust code that you did not totally create yourself.

02:00.000 --> 02:04.000
And I think he adds after that: especially code written by people like me.

02:04.000 --> 02:11.000
And so this, I think, is something to really think about, and that I will repeat a couple of times later on in the talk.

02:11.000 --> 02:16.000
So if you write the code yourself and it's not too big, you might have reasonable trust, at least,

02:16.000 --> 02:19.000
maybe not in terms of bugs, but at least in terms of, let's say, backdoors.

02:19.000 --> 02:22.000
I think that's also very much the spirit of trusted execution.

02:22.000 --> 02:28.000
So we go from Reflections on Trusting Trust, the Turing Award lecture, to reflections on trusting trusted execution environments.

02:28.000 --> 02:32.000
You all know these pictures, they've been shown a couple of times today.

02:32.000 --> 02:38.000
The idea, of course, of trusted execution, of confidential computing, is to put big

00:38.000 --> 00:42.000
parts of the stack, like the operating system, outside of the trusted computing base, so that

00:42.000 --> 00:47.000
we can embed trust directly in the process of the end application.

02:47.000 --> 02:52.000
And if you look at the picture, a lot of the research that the community as a whole has been doing,

02:52.000 --> 02:56.000
but also our group in Leuven, is trying to look at these two green boxes.

02:56.000 --> 02:58.000
And on the one hand, you have a trusted processor.

02:58.000 --> 03:00.000
We all know that's not perfect.

03:00.000 --> 03:05.000
There's been a number of high-impact attacks, some of them we did in the past, like the Foreshadow attack,

03:05.000 --> 03:12.000
that show if things go wrong in the CPU, in the microarchitecture, the whole thing falls down again.

03:12.000 --> 03:15.000
That's not what I want to talk about today, if you want to talk about that,

03:15.000 --> 03:17.000
come find me in the hallway, super exciting stuff.

03:17.000 --> 03:23.000
But what I want to talk about today is the inverse side of that, namely,

03:23.000 --> 03:31.000
what do we put inside that, what I call an enclave now, but it could also be a CVM, a confidential virtual machine.

03:31.000 --> 03:33.000
What do you put inside the trust boundary?

03:33.000 --> 03:36.000
As of today, it's not perfect, it also has bugs.

03:36.000 --> 03:43.000
And in this respect, I think we've all seen in this confidential computing devroom, as well,

03:43.000 --> 03:50.000
a shift over the more recent years from this original idea of confidential computing, or trusted execution,

03:50.000 --> 03:55.000
which was very small containers, typically, isolated at the level of a function call,

03:55.000 --> 04:02.000
the original idea of SGX, towards what the industry is now adopting on a large scale.

04:02.000 --> 04:10.000
Actually, that has been covered at length in the last talk.

04:10.000 --> 04:15.000
So let's go back.

04:15.000 --> 04:19.000
What I want to do today is not talk about VMs, maybe that's a nice hardware discussion,

04:19.000 --> 04:23.000
but I want to talk a bit more about this original idea of small interfaces.

04:23.000 --> 04:29.000
And if you look at SGX, which was maybe the most ambitious idea,

04:30.000 --> 04:37.000
I'm maybe partially at fault for that, but SGX has gotten a lot of bad publicity, right?

04:37.000 --> 04:39.000
And Intel needed to do something about that.

04:39.000 --> 04:44.000
So one of the things they did is to try to spin that, to some extent, as a feature,

04:44.000 --> 04:49.000
and they say: Intel SGX is the most researched and battle-tested trusted execution environment.

04:49.000 --> 04:51.000
That's good marketing, I think.

04:51.000 --> 04:53.000
But they also say something very interesting here.

04:53.000 --> 04:56.000
It has the smallest attack surface within the system.

04:56.000 --> 05:00.000
Whereas of course, in a confidential VM, you put your several million lines of code,

05:00.000 --> 05:04.000
Linux kernel, in there, and in SGX, the idea is: we write small, good things, right?

05:04.000 --> 05:06.000
And that's true, to some extent.

05:06.000 --> 05:11.000
So that's the idea, with stars in our eyes, it's always fine.

05:11.000 --> 05:18.000
Okay, so when we also started in 2011, we were on board with that idea.

05:18.000 --> 05:20.000
But then the reality hits, right?

05:20.000 --> 05:25.000
And the reality is, we now have a vast, quite big open-source ecosystem to develop Intel SGX

05:25.000 --> 05:26.000
enclaves.

05:26.000 --> 05:28.000
And again, I'm very much aware, there is a lot there.

05:28.000 --> 05:34.000
But I'm going to look at the projects that try to build the enclaves in the most minimal way.

05:34.000 --> 05:37.000
And those are not language integrations or library operating systems,

05:37.000 --> 05:41.000
but those are what we call software development kits, right?

05:41.000 --> 05:43.000
So they allow you to write an enclave in C.

05:43.000 --> 05:45.000
And there's two of those in production today.

05:45.000 --> 05:48.000
You have the one from Intel, the Intel SDK, and you have the one from Microsoft,

05:48.000 --> 05:50.000
essentially, the Open Enclave SDK.

05:50.000 --> 05:54.000
And so what we did together in his master's thesis:

05:54.000 --> 05:57.000
Let's first just try to map out the landscape.

05:57.000 --> 05:58.000
How big are these things?

05:58.000 --> 06:01.000
And you see just a simple SLOC count here.

06:01.000 --> 06:04.000
And we're talking, maybe not about millions of lines of code,

06:04.000 --> 06:06.000
like in Linux, but you're talking about hundreds of thousands of lines of code.

06:06.000 --> 06:09.000
And this is already, let's say, measured very kindly,

06:09.000 --> 06:11.000
because if you measure in the root directory, it's much more,

06:11.000 --> 06:13.000
because they have all these git submodules.

06:13.000 --> 06:18.000
And for some reason, they have the whole Google protobuf in there.

06:18.000 --> 06:20.000
So it's bad, right?

06:20.000 --> 06:25.000
But the takeaway here is: you want to isolate a small secret.

06:25.000 --> 06:30.000
Let's say, like this little apple here.

06:30.000 --> 06:33.000
And you want to package it in an enclave container.

06:33.000 --> 06:35.000
And instead of putting it in a nice little box,

06:35.000 --> 06:38.000
that you can reason about, you put it in a big shipping container, right?

06:38.000 --> 06:42.000
And that's what we are doing, and that's at least questionable.

06:42.000 --> 06:44.000
Right?

06:44.000 --> 06:47.000
That's bad, because software is buggy.

06:47.000 --> 06:50.000
And in some of the past research, we also showed that.

06:50.000 --> 06:54.000
This is from when we did a first vulnerability analysis.

06:54.000 --> 06:57.000
We've done many since, also with automated tools.

06:57.000 --> 07:01.000
But this shows that these hundreds of thousands of lines of code

07:01.000 --> 07:02.000
have vulnerabilities.

07:02.000 --> 07:04.000
And again, I'm coming back to my friend here,

07:04.000 --> 07:07.000
Ken Thompson, who says, yeah, this again shows you cannot trust

07:07.000 --> 07:10.000
these hundreds of thousands of lines of code.

07:10.000 --> 07:13.000
And you cannot reasonably check them all, right?

07:13.000 --> 07:15.000
Okay, maybe it's open source, and that's great.

07:15.000 --> 07:18.000
Have fun going through all these hundreds of thousands of lines of code.

07:18.000 --> 07:21.000
And so the idea for this project is,

07:21.000 --> 07:22.000
can we do better?

07:22.000 --> 07:24.000
Can we maybe build enclaves

07:24.000 --> 07:29.000
that we can totally create ourselves, like, where we write all of the code

07:29.000 --> 07:31.000
that goes in there, literally all of the code,

07:31.000 --> 07:33.000
and then maybe we can trust them.

07:33.000 --> 07:35.000
That's the idea.

07:35.000 --> 07:39.000
Okay, and that's the topic of today.

07:39.000 --> 07:42.000
I started thinking about that long ago,

07:43.000 --> 07:45.000
and one of the things you need to do, of course,

07:45.000 --> 07:47.000
when you want to make enclaves,

07:47.000 --> 07:50.000
SGX enclaves in this case, is to talk to the hardware,

07:50.000 --> 07:52.000
and you talk to the hardware typically through the operating system.

07:52.000 --> 07:55.000
And so, many of you will know the Linux kernel,

07:55.000 --> 07:57.000
now this has upstream support for SGX,

07:57.000 --> 07:59.000
it took a lot of effort and many years,

07:59.000 --> 08:02.000
but we now have this /dev/sgx_enclave device.

08:02.000 --> 08:04.000
And so, what I wanted to do is,

08:04.000 --> 08:06.000
okay, can I talk to that driver directly,

08:06.000 --> 08:08.000
to create enclaves for me?

08:08.000 --> 08:10.000
And I looked, and it turned out

08:10.000 --> 08:11.000
I didn't have to do that from scratch,

08:11.000 --> 08:13.000
because there was already some support in the Linux kernel,

08:13.000 --> 08:16.000
in the form of what they call selftests,

08:16.000 --> 08:19.000
super, super, yeah,

08:19.000 --> 08:22.000
a super deep subdirectory of the kernel tree (tools/testing/selftests/sgx).

08:22.000 --> 08:25.000
They're essentially doing a bunch of small tests,

08:25.000 --> 08:27.000
and they had some sort of test enclave there,

08:27.000 --> 08:28.000
and I looked at it, and it's interesting, right?

08:28.000 --> 08:31.000
You just have a C file, and an assembly file,

08:31.000 --> 08:33.000
you run them through something like GCC,

08:33.000 --> 08:35.000
typically GCC, you might also want to use Clang,

08:35.000 --> 08:37.000
and then you'll get an ELF file,

08:37.000 --> 08:39.000
and they have a very small C file,

08:40.000 --> 08:43.000
or something, that basically talks to the driver.

08:43.000 --> 08:45.000
And they did this in Linux explicitly,

08:45.000 --> 08:48.000
because they wanted to test their driver infrastructure

08:48.000 --> 08:51.000
without pulling in these hundreds of thousands of lines of code.

08:51.000 --> 08:54.000
Okay, so I thought that's great, let's use that,

08:54.000 --> 08:56.000
and started looking a bit at that,

08:56.000 --> 08:59.000
and this is actually part of the project

08:59.000 --> 09:02.000
that Martin mentioned earlier in the opening,

09:02.000 --> 09:05.000
and he talked about how we have this symbolic

09:05.000 --> 09:07.000
execution infrastructure,

09:07.000 --> 09:10.000
and so one of the projects we did there,

09:10.000 --> 09:13.000
this Pandora project, you can find the paper here,

09:13.000 --> 09:16.000
is to check small enclaves,

09:16.000 --> 09:19.000
and I checked this selftest enclave,

09:19.000 --> 09:22.000
and it was full of bugs, right?

09:22.000 --> 09:25.000
It's quite bad because some of that was even,

09:25.000 --> 09:30.000
like, to some extent, used in production by Intel.

09:30.000 --> 09:32.000
Okay, so you'll find bugs,

09:32.000 --> 09:34.000
the next thing you do is you try to fix them,

09:34.000 --> 09:36.000
so I talked to the Linux kernel developers,

09:36.000 --> 09:40.000
and what you see here is that Dave Hansen,

09:40.000 --> 09:41.000
maintainer of the x86,

09:41.000 --> 09:43.000
one of the maintainers of the x86 subsystem,

09:43.000 --> 09:45.000
basically said, look, I see,

09:45.000 --> 09:48.000
you have a point, but I don't want to have this in the kernel,

09:48.000 --> 09:50.000
and said that there is a gap,

09:50.000 --> 09:52.000
and it would be wise to close this gap,

09:52.000 --> 09:54.000
but do it basically in a fork,

09:54.000 --> 09:57.000
and that's where this BareSGX project was born.

09:57.000 --> 09:59.000
I forked off this code from the Linux kernel,

09:59.000 --> 10:03.000
and started sort of making that into a self-contained framework

10:03.000 --> 10:05.000
to build enclaves.

10:07.000 --> 10:10.000
Okay, looks basically like this,

10:10.000 --> 10:11.000
and we have the same idea here,

10:11.000 --> 10:14.000
that you have basically an assembly file.

10:14.000 --> 10:15.000
If you want, a C file,

10:15.000 --> 10:17.000
you don't even have to do that.

10:17.000 --> 10:18.000
And if you want, there is some,

10:18.000 --> 10:21.000
sort of library that he will talk a bit about.

10:21.000 --> 10:23.000
But the basic idea is,

10:23.000 --> 10:25.000
you compile that into a very small enclave,

10:25.000 --> 10:30.000
and then there is an untrusted runtime functionality

10:30.000 --> 10:33.000
that then talks to the driver.

10:33.000 --> 10:34.000
The cool thing is,

10:34.000 --> 10:38.000
this is all packaged these days in a Buildroot

10:38.000 --> 10:40.000
file system,

10:40.000 --> 10:43.000
so you basically have a super small VM that runs,

10:43.000 --> 10:44.000
and that,

10:44.000 --> 10:46.000
its only dependency is the upstream kernel driver,

10:46.000 --> 10:48.000
which is supposed to be forward and backward,

10:48.000 --> 10:49.000
compatible, right?

10:49.000 --> 10:52.000
So it should be fine if you build a BareSGX

10:52.000 --> 10:54.000
enclave and don't touch it for maybe ten years,

10:54.000 --> 10:56.000
hopefully if the hardware is there,

10:56.000 --> 10:58.000
you might be able to run it.

10:58.000 --> 11:00.000
And bear in mind,

11:00.000 --> 11:02.000
these are super small enclaves,

11:02.000 --> 11:04.000
and we're talking really about a couple of pages.

11:04.000 --> 11:05.000
Typically one code page,

11:05.000 --> 11:07.000
plus the architectural pages you need,

11:07.000 --> 11:11.000
like the thread control structure and the state save area.

11:11.000 --> 11:17.000
Okay, so here you see a screenshot of this Buildroot container,

11:17.000 --> 11:19.000
and you basically see,

11:19.000 --> 11:20.000
you can just operate them,

11:20.000 --> 11:22.000
you can interact with them like,

11:22.000 --> 11:24.000
like normal C programs essentially,

11:24.000 --> 11:28.000
and you even get some ASCII art for free.

11:28.000 --> 11:30.000
And another thing I did,

11:30.000 --> 11:33.000
already, is to integrate this with the SGX-Step

11:33.000 --> 11:34.000
attack framework,

11:34.000 --> 11:35.000
and some of you will know it,

11:35.000 --> 11:38.000
if you're researching attacks on SGX,

11:38.000 --> 11:39.000
like we do,

11:39.000 --> 11:42.000
which means that you can do all kinds of rapid prototyping,

11:42.000 --> 11:43.000
of smaller attacks,

11:43.000 --> 11:45.000
and maybe also make them long-term available,

11:45.000 --> 11:47.000
without pulling in these dependencies

11:47.000 --> 11:51.000
from the SDK that are very fragile.

11:51.000 --> 11:53.000
So that's the story of,

11:53.000 --> 11:55.000
of what you can do with BareSGX,

11:55.000 --> 11:57.000
from perhaps a research side,

11:57.000 --> 12:00.000
but he also worked in his master's thesis

12:00.000 --> 12:02.000
on trying to make something real,

12:02.000 --> 12:07.000
and I'll hand over to him to explain that a bit more.

12:18.000 --> 12:20.000
All right, so,

12:20.000 --> 12:24.000
everybody says the proof of the pudding is in the eating,

12:24.000 --> 12:26.000
so that's what I did.

12:26.000 --> 12:28.000
I built a prototype,

12:28.000 --> 12:30.000
and in this prototype,

12:30.000 --> 12:32.000
I ported HACL*,

12:32.000 --> 12:35.000
which is a high assurance cryptographic library.

12:35.000 --> 12:38.000
It has been formally verified in F*,

12:38.000 --> 12:42.000
and so, just as a proof of concept,

12:42.000 --> 12:47.000
we made that work in our BareSGX enclave SDK.

12:47.000 --> 12:50.000
So you have an application with a C file,

12:50.000 --> 12:52.000
and you have some buffer,

12:52.000 --> 12:53.000
you want to, for example,

12:53.000 --> 12:55.000
calculate the hash,

12:55.000 --> 12:59.000
and that can be done entirely inside the enclave.

12:59.000 --> 13:01.000
What did I do?

13:01.000 --> 13:05.000
This bridge code that is needed between the application,

13:05.000 --> 13:07.000
and the enclave is automatically generated.

13:07.000 --> 13:09.000
As it's really error prone,

13:09.000 --> 13:11.000
you can't trust yourself to write it.

13:11.000 --> 13:13.000
It has to be written by

13:13.000 --> 13:16.000
someone really smart, or it has to be generated.

13:16.000 --> 13:20.000
So, this generated code also has a lot of dependencies,

13:20.000 --> 13:24.000
and I fulfilled these dependencies with caution,

13:24.000 --> 13:25.000
and with care.

13:25.000 --> 13:28.000
So, I implemented, from FreeRTOS,

13:28.000 --> 13:30.000
a heap implementation that is really bare bones,

13:30.000 --> 13:31.000
really minimal,

13:31.000 --> 13:33.000
so that it can be really,

13:33.000 --> 13:35.000
well, trustworthy,

13:35.000 --> 13:39.000
and I also implemented a small libc.

13:39.000 --> 13:43.000
So, what does this all mean from a TCB point of view?

13:43.000 --> 13:45.000
If we look at the Y-axis,

13:45.000 --> 13:49.000
you see the amount of lines of code,

13:49.000 --> 13:53.000
so the actual line count of the trusted code,

13:53.000 --> 13:55.000
and for our project BareSGX,

13:55.000 --> 13:58.000
you can see it's limited to a really, really tiny amount,

13:58.000 --> 14:01.000
and especially if we look at the binary size,

14:01.000 --> 14:05.000
we're below a single page.

14:05.000 --> 14:10.000
So, that's astonishingly minimal.

14:10.000 --> 14:12.000
Right there.

14:12.000 --> 14:16.000
So, if we look at it from the application enclave's side,

14:16.000 --> 14:20.000
you can see it's a really, really small runtime that we have to trust,

14:20.000 --> 14:25.000
and again, the runtime that we do have, from FreeRTOS, is really trustworthy.

14:25.000 --> 14:30.000
And so, again, the application part is really massive,

14:30.000 --> 14:35.000
but high assurance, because it has been formally verified.

14:35.000 --> 14:37.000
If we look at the performance,

14:37.000 --> 14:39.000
because we also want performance,

14:39.000 --> 14:41.000
we don't just want security,

14:41.000 --> 14:45.000
you can see on the Y-axis,

14:45.000 --> 14:48.000
the CPU cycles that are needed to perform certain calculations,

14:48.000 --> 14:51.000
so we have the bare-bones return

14:51.000 --> 14:54.000
enclave that immediately returns upon entering,

14:54.000 --> 14:59.000
and then we have the HMAC calculation and the authenticated encryption.

14:59.000 --> 15:03.000
The lightly marked area is the area

15:03.000 --> 15:07.000
that is the absolute minimum for an enclave transition.

15:07.000 --> 15:11.000
So, that's the theoretical limit that we can achieve,

15:11.000 --> 15:14.000
and you can see that the BareSGX return

15:14.000 --> 15:17.000
enclave comes super duper close,

15:17.000 --> 15:25.000
and also outperforms the state-of-the-art production SDKs.

15:25.000 --> 15:27.000
And then, maybe zoom in,

15:27.000 --> 15:31.000
because the CPU cycle count is a bit skewed,

15:31.000 --> 15:35.000
as the actual instruction to enter the enclave is really expensive.

15:35.000 --> 15:39.000
So, if we only look at the instructions that are executed, with SGX-Step,

15:39.000 --> 15:45.000
where we can interrupt the enclave step by step, instruction by instruction,

15:46.000 --> 15:49.000
you can see that for the really bare-bones enclave,

15:49.000 --> 15:51.000
we only have 29 instructions,

15:51.000 --> 15:55.000
which is, as we would say, bare minimum.

15:55.000 --> 15:57.000
So, to conclude,

15:57.000 --> 16:00.000
the reality right now is that Intel SGX,

16:00.000 --> 16:03.000
the Intel SGX SDK, is quite large,

16:03.000 --> 16:07.000
and as we would say, it's really bloated.

16:07.000 --> 16:09.000
It has a large attack surface,

16:09.000 --> 16:12.000
and again, our trusted computing base is massive,

16:13.000 --> 16:17.000
so actually the goal, or the dream, that we would try to avoid that

16:13.000 --> 16:17.000
and keep a minimal TCB, is a bit lost.

16:22.000 --> 16:25.000
So, what we introduce is a small box with a small apple,

16:25.000 --> 16:28.000
BareSGX, where there's truly minimal trust,

16:28.000 --> 16:32.000
and where you can actually know all the code that's inside,

16:32.000 --> 16:34.000
and know what we are using.

16:34.000 --> 16:37.000
There's tons of use cases,

16:37.000 --> 16:39.000
for example, formal verification,

16:39.000 --> 16:43.000
but you can think about, like, research attack prototyping,

16:43.000 --> 16:47.000
or long-term packaging, anything is possible.

16:47.000 --> 16:51.000
So, one last thing, it's all open source.

16:51.000 --> 16:54.000
You can scan the QR code or access the link.

16:54.000 --> 16:57.000
As you can see, I recently joined the DistriNet research group.

16:57.000 --> 17:00.000
It's a fun place, and it's a nice working environment,

17:00.000 --> 17:02.000
so if you're interested,

17:02.000 --> 17:05.000
there's also a link down below for that.

17:05.000 --> 17:08.000
And if there are any questions, happy to answer.

17:08.000 --> 17:10.000
Thank you.

17:16.000 --> 17:18.000
So, very interesting work.

17:18.000 --> 17:19.000
I really like the scope.

17:19.000 --> 17:22.000
You see, between SGX and TDX,

17:22.000 --> 17:25.000
I'm not a fan of moving towards the VM,

17:25.000 --> 17:29.000
which is really giving up all the spirit that we had as a community.

17:29.000 --> 17:31.000
So, I really appreciate this work.

17:31.000 --> 17:33.000
And one specific question I have

17:33.000 --> 17:36.000
is about the formal verification use case that you mentioned here.

17:36.000 --> 17:40.000
Can you elaborate on how you see that use case in this work,

17:40.000 --> 17:43.000
or are you saying that is a potential future work,

17:43.000 --> 17:44.000
that someone can elaborate on?

17:44.000 --> 17:46.000
And HACL* was already verified in F*, right?

17:46.000 --> 17:47.000
Right.

17:47.000 --> 17:49.000
So, what is the gap,

17:49.000 --> 17:50.000
because formal verification,

17:50.000 --> 17:52.000
where could you use that here, maybe?

17:52.000 --> 17:54.000
So, quickly repeating the question,

17:54.000 --> 17:58.000
which other parts do you consider formally verifying,

17:58.000 --> 18:00.000
or are you thinking about?

18:00.000 --> 18:02.000
So, we have some projects ongoing right now

18:02.000 --> 18:05.000
to formally verify the bridge code

18:05.000 --> 18:06.000
that has been generated.

18:06.000 --> 18:09.000
And that would already cover a lot.

18:09.000 --> 18:13.000
The tlibc that I wrote is also really minimal.

18:13.000 --> 18:16.000
So, it's a memcpy and a memset, for example,

18:16.000 --> 18:18.000
that's basically it.

18:18.000 --> 18:21.000
And also the heap implementation

18:21.000 --> 18:24.000
from FreeRTOS is not verified yet.

18:24.000 --> 18:30.000
But there have also been some rumors about people wanting to formally verify that.

18:30.000 --> 18:34.000
So, that's a bit the state that we are currently in.

18:34.000 --> 18:36.000
Just to rephrase, like, it's not done yet,

18:36.000 --> 18:39.000
it's kind of some future work that you are planning

18:39.000 --> 18:41.000
that some parts which are not verified yet,

18:41.000 --> 18:42.000
you can verify, right?

18:42.000 --> 18:43.000
Is that correct?

18:43.000 --> 18:44.000
Yeah.

18:44.000 --> 18:45.000
Maybe, wait.

18:45.000 --> 18:47.000
If I unclip it and just hold it.

18:51.000 --> 18:53.000
So, the question,

18:53.000 --> 18:55.000
your follow-up question was,

18:57.000 --> 19:00.000
whether this is future work or not.

19:00.000 --> 19:02.000
So, I think,

19:03.000 --> 19:06.000
So, when I say use, or when we say use cases,

19:06.000 --> 19:10.000
the idea is that formal verification and enclaves

19:10.000 --> 19:12.000
go very well together, right?

19:12.000 --> 19:14.000
Because we have security critical code,

19:14.000 --> 19:17.000
that is typically relatively small in size.

19:17.000 --> 19:20.000
The problem is, if you take something like HACL*

19:20.000 --> 19:23.000
and you just put it in an enclave with tooling that we have today,

19:23.000 --> 19:25.000
you would add 200,000

19:25.000 --> 19:28.000
unverified lines of code, right?

19:28.000 --> 19:30.000
And we've also had some prior work, for instance,

19:30.000 --> 19:35.000
with Martin, on doing attacks on HACL* in an embedded context

19:35.000 --> 19:38.000
where there was some bridge code that was unverified,

19:38.000 --> 19:41.000
that then invalidated the nice F* security proofs.

19:41.000 --> 19:44.000
And the idea is twofold: on the one hand,

19:44.000 --> 19:45.000
can we minimize that code?

19:45.000 --> 19:47.000
That's what we have today, that's what was built.

19:47.000 --> 19:50.000
And you see here, the code is minimal.

19:50.000 --> 19:53.000
But the immediate future work would be:

19:53.000 --> 19:56.000
can we also get security guarantees,

19:56.000 --> 19:58.000
formal security guarantees, for that minimal code.

19:58.000 --> 20:03.000
So this is now in the scope of being small enough to formally verify

20:03.000 --> 20:04.000
at the moment.

20:04.000 --> 20:06.000
And that's a spectrum of things.

20:06.000 --> 20:09.000
One of the things we are doing already is to symbolically execute it.

20:09.000 --> 20:11.000
That does not give you as strong security guarantees,

20:11.000 --> 20:13.000
perhaps, as something like F*.

20:13.000 --> 20:15.000
But we also have other work, for instance,

20:15.000 --> 20:21.000
to do more sound verification at a C language level

20:21.000 --> 20:23.000
with a tool called VeriFast.

20:24.000 --> 20:28.000
But so the idea is to create, to engineer something

20:28.000 --> 20:31.000
that is in the scope of formal verification.

20:46.000 --> 20:49.000
Yes, by all means, I think we should do that.

20:49.000 --> 20:52.000
I would like to create something like bare-TDX,

20:52.000 --> 20:55.000
bare-SNP, and so on.

20:55.000 --> 20:58.000
I think in terms of VM-based architectures,

20:58.000 --> 21:02.000
we can probably reuse the systems engineering work

21:02.000 --> 21:05.000
from the last decades, on microkernels and nanokernels

21:05.000 --> 21:07.000
and all that, right?

21:07.000 --> 21:09.000
Also, if you're interested in that, maybe,

21:09.000 --> 21:11.000
so we don't have it yet.

21:11.000 --> 21:13.000
But if you're interested in that,

21:13.000 --> 21:16.000
have a look at Enarx, which is a small from-scratch

21:17.000 --> 21:20.000
runtime, or Tom Dohrmann's

21:20.000 --> 21:24.000
FOSDEM talk from last year, called mushroom, right?

21:24.000 --> 21:27.000
Which also has, I think, a similar spirit of:

21:27.000 --> 21:29.000
can we build something from scratch,

21:29.000 --> 21:31.000
not just run a plain Linux in there?

