WEBVTT

00:00.000 --> 00:10.000
Thank you everyone for waking up this early on a Sunday.

00:10.000 --> 00:19.000
I'm sure you wanted to sleep in, but you came to the first talk and it's a really good one.

00:19.000 --> 00:24.000
We have Fedor who's going to talk about WebAssembly on constrained devices.

00:24.000 --> 00:27.000
So please give him a warm welcome.

00:28.000 --> 00:35.000
Hey, good morning everyone and thank you for the introduction.

00:35.000 --> 00:43.000
I'm Fedor and I'm very excited to be giving this talk in the Rust devroom at FOSDEM. And again, thanks for getting up early.

00:43.000 --> 00:51.000
So what I'm going to be talking about today is running WebAssembly on really small microcontrollers.

00:51.000 --> 00:58.000
So think something like an ESP32-C3 or C5, or something from Nordic, like an nRF53.

00:58.000 --> 01:06.000
And these are basically the learnings that Period, the company I work at, has drawn from trying to do just that.

01:06.000 --> 01:10.000
So the way the talk is organized is very straightforward.

01:10.000 --> 01:12.000
We're going to start with motivation.

01:12.000 --> 01:18.000
So kind of answering the question, why would you want to run WebAssembly on constrained devices in the first place?

01:18.000 --> 01:24.000
And then looking at it from like these two directions, like on the one hand, what does it mean for your code?

01:24.000 --> 01:31.000
What does it mean for how the functionality you want to bring onto your device looks in the Rust code you compile for WebAssembly?

01:31.000 --> 01:38.000
And the other direction we're going to look at it from is, what does it mean for what your firmware looks like and specifically?

01:38.000 --> 01:46.000
Which runtime will you pick and what does it mean for your footprint, but also for the integration of this runtime with your functionality?

01:46.000 --> 01:48.000
So let's start with the motivation.

01:48.000 --> 01:54.000
For that, I'm going to tell you a little bit about what we do at Period, and also about Mermek.

01:54.000 --> 01:57.000
So we work in the IoT domain.

01:57.000 --> 02:07.000
So you can imagine systems which are these large distributed systems consisting of a large number of heterogeneous devices.

02:07.000 --> 02:16.000
So think everything from cloud servers to Linux edge nodes and, at least for us more recently, also microcontrollers.

02:16.000 --> 02:21.000
And what we do there is we work on Mermek, and Mermek is a middleware.

02:21.000 --> 02:28.000
You can imagine it as this thing where you have an agent, a thin software layer you put on each of your nodes.

02:28.000 --> 02:32.000
And what it does for you is it manages your applications.

02:32.000 --> 02:38.000
So you just hand it the applications you want to run, and Mermek will do the rest.

02:38.000 --> 02:41.000
It will find the devices where they should be put.

02:41.000 --> 02:49.000
It will transfer the executables there, run them, manage the communication, manage the configuration, and monitor the whole thing at runtime.

02:49.000 --> 02:51.000
That's all cool and interesting.

02:51.000 --> 03:01.000
The only thing we care about there in this talk is this point where you take code from the user at runtime and put them on nodes,

03:01.000 --> 03:03.000
which you manage.

03:03.000 --> 03:07.000
On nodes where you potentially have a code running from other users.

03:07.000 --> 03:11.000
And in that context, I think that, at least for OS-grade devices,

03:11.000 --> 03:16.000
I won't have to do too much work to convince you that WebAssembly is a pretty cool idea there.

03:16.000 --> 03:19.000
Because like we said, we want to run anywhere.

03:19.000 --> 03:29.000
So already having something that is target-independent is a big win, because you don't have to do anything to switch from one target hardware to another.

03:29.000 --> 03:35.000
But the other thing, and from my perspective it's way more important, is the strong sandboxing that you get.

03:35.000 --> 03:43.000
Because it gives you this very explicit way of defining the capabilities, of what this piece of software that you deploy there can do.

03:43.000 --> 03:51.000
And the sandbox around it is very strong, so you can be very confident deploying unknown code, code which potentially has errors.

03:51.000 --> 03:55.000
It could even have like security problems.

03:55.000 --> 04:03.000
Now, if we look at this on microcontrollers, all these advantages hold. In fact, you get more advantages.

04:03.000 --> 04:16.000
Because as soon as you take a large part of your functionality and move it from firmware into a WebAssembly module, you get a part of your functionality which is way easier to update.

04:16.000 --> 04:22.000
Because you don't need reflashing, you don't need an OTA update of the firmware, you just load another module.

04:22.000 --> 04:27.000
And the great thing about this kind of update is that it cannot break your device.

04:27.000 --> 04:33.000
The worst thing that can happen is the module runs, it has some error, and it just traps to the host.

04:33.000 --> 04:35.000
So your host is still there.

04:35.000 --> 04:38.000
It didn't crash, and it can tell you: hey, give me another module.

04:38.000 --> 04:49.000
And especially if you think about the fact that these microcontrollers are often deployed in remote settings, settings where you specifically don't want to go there to reflash, this can be a big advantage.

04:50.000 --> 04:52.000
Now, the disadvantage is also obvious.

04:52.000 --> 04:55.000
We are on microcontrollers; we have very little memory as it is.

04:55.000 --> 05:02.000
And you won't get around introducing a higher memory footprint, and you won't get around a certain performance overhead.

05:02.000 --> 05:04.000
So that's the trade-off we are talking about here.

05:04.000 --> 05:12.000
And just to mention it: I'm definitely not saying that WebAssembly is a silver bullet, or that this should be the only way you deploy code.

05:12.000 --> 05:15.000
There are definitely many other ways how to do that.

05:15.000 --> 05:19.000
They are all interesting, and they all have their own use cases where they are useful.

05:19.000 --> 05:24.000
The takeaway here is just that WebAssembly is an interesting option for doing that.

05:24.000 --> 05:29.000
And we're going to be talking a little bit more about like two of the things that you could consider.

05:29.000 --> 05:35.000
One is tooling ecosystem and the other one is the footprint, which depends very strongly on the chosen runtime.

05:35.000 --> 05:40.000
So first, the practical consequences regarding your code.

05:40.000 --> 05:46.000
So what we're talking about here is just this question: assume you're curious now.

05:46.000 --> 05:50.000
Assume you want to try running WebAssembly on microcontrollers.

05:50.000 --> 05:53.000
What does that mean for the code that you write?

05:53.000 --> 05:55.000
And I think there are like two perspectives here.

05:55.000 --> 06:02.000
On the one hand, you could be someone like me, coming more from the Linux side, where you have maybe worked

06:02.000 --> 06:08.000
with WebAssembly for a while, probably with Wasmtime.

06:08.000 --> 06:13.000
Or, the other perspective, you may actually be someone who develops embedded firmware

06:13.000 --> 06:15.000
and now wants to try WebAssembly.

06:15.000 --> 06:19.000
I'm kind of trying to cater to both the perspective a little bit.

06:19.000 --> 06:26.000
So, the first one: say you come from Linux. You are most likely using a WASI target.

06:26.000 --> 06:32.000
So the first thing you will realize, you won't be able to do that on embedded.

06:32.000 --> 06:38.000
Although some of the embedded runtimes do support WASI, you will never get it for free.

06:38.000 --> 06:43.000
It always comes with additional memory cost, because you have to implement all these host functions additionally.

06:43.000 --> 06:45.000
The other thing is WASI itself.

06:45.000 --> 06:51.000
It in general assumes certain OS support, things like timers, randomness and I/O.

06:51.000 --> 06:58.000
So many of these things are just not written for the constrained environment of microcontrollers.

06:58.000 --> 07:04.000
The other thing, and this is something that kind of surprised me, is that as soon as you switch to wasm32-unknown-unknown,

07:04.000 --> 07:13.000
you have to step away from the comfortable situation of just taking any Rust code using std and compiling it to WebAssembly.

07:13.000 --> 07:15.000
That doesn't work anymore.

07:15.000 --> 07:18.000
That was new for me, but actually, the fact that you can normally do it

07:18.000 --> 07:21.000
doesn't come from WebAssembly as such; it actually comes from WASI.

07:21.000 --> 07:26.000
Many standard library things can only be mapped to WebAssembly

07:26.000 --> 07:29.000
if you have those WASI host functions available.

07:29.000 --> 07:33.000
So with that, it means your code will become more embedded-like.

07:33.000 --> 07:38.000
It will be no_std code; for example, your allocation will be explicit.

07:38.000 --> 07:44.000
And since you also don't have the WASI host functions, you will have to implement all of your host functions yourself.

07:44.000 --> 07:49.000
And that's also the segue to the more embedded perspective.

07:49.000 --> 07:57.000
If you are used to writing embedded software, then you will have to get used to a new way of accessing your peripherals.

07:57.000 --> 07:59.000
So your firmware will look something like this.

07:59.000 --> 08:04.000
So you have on the one side your device peripherals, they are kind of the same as they were before.

08:04.000 --> 08:08.000
And you have this big new block of the Wasm runtime containing the WebAssembly modules.

08:08.000 --> 08:13.000
And the important thing here is that the web assembly modules, they are extremely sandboxed.

08:13.000 --> 08:18.000
That means the way they can access anything outside the module is very restricted.

08:18.000 --> 08:26.000
In fact, the only exchange between host and guest happens through export and import functions.

08:26.000 --> 08:31.000
They are explicitly defined and you are also very restricted about how you define them.

08:31.000 --> 08:34.000
So their signature has to follow the C ABI.

08:34.000 --> 08:39.000
And the biggest restriction here is that the arguments and return values of those functions

08:40.000 --> 08:43.000
Those are essentially just integers and floats.

08:43.000 --> 08:51.000
Now, it would of course suck if that was the only thing you had; you want to exchange more complex information.

08:51.000 --> 08:57.000
And how you will normally do that is you will make use of the fact that the host can access the guest memory.

08:57.000 --> 09:02.000
And so these exchanges typically go like this: when the module wants to communicate something complex to the host,

09:02.000 --> 09:06.000
It would give it arguments just describing a region in its memory.

09:06.000 --> 09:08.000
And then the host knows how to interpret it.
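The convention just described can be sketched in Rust. Everything here is illustrative, not the actual Mermek SDK: a C-ABI export can only carry integers, so the module exposes a pointer and a length into its own linear memory, and the host reads the bytes out of that region.

```rust
// Hypothetical guest-side sketch: the module cannot pass a &str or &[u8]
// across the C ABI, so it exports two plain-integer functions describing
// a region of its memory instead.

static MESSAGE: &[u8] = b"temperature=21";

#[no_mangle]
pub extern "C" fn message_ptr() -> *const u8 {
    MESSAGE.as_ptr()
}

#[no_mangle]
pub extern "C" fn message_len() -> usize {
    MESSAGE.len()
}

fn main() {
    // "Host side": with a real runtime the host would read these bytes
    // out of the guest's linear memory; here we read them directly.
    let region = unsafe { std::slice::from_raw_parts(message_ptr(), message_len()) };
    assert_eq!(region, b"temperature=21");
}
```

On a real wasm32 target the pointer is just an offset into the module's linear memory, which the host is free to inspect.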

09:09.000 --> 09:11.000
So these are basically the things.

09:11.000 --> 09:13.000
Now, how does it look in code?

09:13.000 --> 09:18.000
I hope this kind of works out with the font size; the slides are also available,

09:18.000 --> 09:23.000
you can just look them up on the page of the talk.

09:23.000 --> 09:28.000
But this is how one of the demo modules looks, which we use in Mermek.

09:28.000 --> 09:33.000
And you see two things here. On the one hand, as promised, we have no_std, we have explicit allocation.

09:33.000 --> 09:37.000
But like with macros, you can make that look fairly nice.

09:37.000 --> 09:41.000
The other thing is down there, you have no unsafe code.

09:41.000 --> 09:43.000
You have no Wasm specifics.

09:43.000 --> 09:47.000
You do nice idiomatic error handling also with the try operator.

09:47.000 --> 09:50.000
And this is actually only domain logic.

09:50.000 --> 09:54.000
I mean this is obviously like a demo module is like very simplistic.

09:54.000 --> 09:56.000
But you can imagine it would be more complex.

09:56.000 --> 09:57.000
It was something real.

09:57.000 --> 10:00.000
But the complexity would not come from the fact that we are running on Wasm.

10:00.000 --> 10:04.000
The complexity would come from the domain, which is exactly what you want.

10:04.000 --> 10:10.000
And this is done with our SDK for Wasm development.

10:10.000 --> 10:14.000
And just to show some of these things: for example, this receive.

10:14.000 --> 10:18.000
This is actually how you access host functionality under the hood.

10:18.000 --> 10:24.000
It is a safe Rust wrapper around a C import.

10:24.000 --> 10:26.000
And that's exactly the thing I was talking about.

10:26.000 --> 10:30.000
The thing where you talk to the host through the C ABI, which looks like this.

10:30.000 --> 10:34.000
But nevertheless, in your code describing functionality, this can look nice.

10:34.000 --> 10:36.000
It can look like a normal Rust function.
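That pattern, a raw C-ABI import hidden behind a safe Rust function, could look roughly like this. `host_receive` is a made-up import name, not the real SDK's, and on a non-Wasm build it is stubbed so the sketch runs anywhere:

```rust
// On wasm32 this would be an import resolved by the host runtime.
#[cfg(target_arch = "wasm32")]
extern "C" {
    fn host_receive(buf: *mut u8, cap: usize) -> i32;
}

// Host-build stand-in that "receives" a fixed payload, so the example runs.
#[cfg(not(target_arch = "wasm32"))]
unsafe fn host_receive(buf: *mut u8, cap: usize) -> i32 {
    let payload = b"ping";
    let n = payload.len().min(cap);
    std::ptr::copy_nonoverlapping(payload.as_ptr(), buf, n);
    n as i32
}

/// The safe wrapper: callers just see an ordinary, fallible Rust function.
pub fn receive(buf: &mut [u8]) -> Result<usize, ()> {
    let n = unsafe { host_receive(buf.as_mut_ptr(), buf.len()) };
    if n < 0 { Err(()) } else { Ok(n as usize) }
}

fn main() {
    let mut buf = [0u8; 16];
    let n = receive(&mut buf).unwrap();
    assert_eq!(&buf[..n], b"ping");
}
```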

10:36.000 --> 10:40.000
And similar, why do we have nice error handling here?

10:40.000 --> 10:44.000
Well, because of this macro. Macros are essentially magic, you know.

10:44.000 --> 10:46.000
So you can expand it to whatever you want.

10:46.000 --> 10:48.000
And this is what it expands to.

10:48.000 --> 10:52.000
This is something that looks like the raw WebAssembly

10:52.000 --> 10:53.000
Export.

10:53.000 --> 10:58.000
You export it up there, and it wraps around a fallible Rust function.

10:58.000 --> 11:00.000
And there you can do whatever you want.

11:00.000 --> 11:03.000
So essentially, this just looks at whether it was an error or a success.

11:03.000 --> 11:06.000
If it's an error, it's going to give the status code back.

11:06.000 --> 11:11.000
And this one actually also uses host logging to tell you the error message.

11:11.000 --> 11:16.000
So that you kind of have the error experience you expect from Rust.
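What such an expansion could look like, as a sketch with hypothetical names rather than the real macro output: a C-ABI export wrapping a fallible Rust function and mapping the Result onto a status code, with the error reported through logging.

```rust
#[derive(Debug)]
enum Error {
    BadInput,
}

// The "nice" function the user actually writes: plain Rust, Result-based,
// usable with the try operator internally.
fn run(input: i32) -> Result<i32, Error> {
    if input < 0 {
        return Err(Error::BadInput);
    }
    Ok(input * 2)
}

// What a macro could generate: the actual Wasm export. It maps Ok(_) to
// status code 0 and Err(_) to a nonzero status code, and reports the
// error message (in the real module, via a host logging import).
#[no_mangle]
pub extern "C" fn run_export(input: i32) -> i32 {
    match run(input) {
        Ok(_) => 0,
        Err(e) => {
            eprintln!("module error: {e:?}");
            1
        }
    }
}

fn main() {
    assert_eq!(run_export(21), 0);
    assert_eq!(run_export(-1), 1);
}
```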

11:16.000 --> 11:17.000
Okay.

11:17.000 --> 11:21.000
So to summarize, there are some constraints.

11:21.000 --> 11:24.000
You will have to do things slightly differently.

11:24.000 --> 11:26.000
But due to the fact that Rust is awesome,

11:26.000 --> 11:29.000
you can nevertheless live with it quite well.

11:29.000 --> 11:30.000
Okay.

11:30.000 --> 11:32.000
Now to the other part.

11:32.000 --> 11:37.000
And for the other part: since we now know what it means for the code you write,

11:37.000 --> 11:41.000
the other question is: what does it mean for how your firmware looks?

11:41.000 --> 11:44.000
And specifically, and this is a decision you will have to make,

11:44.000 --> 11:49.000
which Wasm runtime do you want to pick to run on your microcontrollers?

11:49.000 --> 11:53.000
And I'm just going to walk you through kind of our decision process that we had on that.

11:53.000 --> 12:01.000
So first, we were thinking about what the things are that actually matter for us with respect to the Wasm runtime.

12:01.000 --> 12:07.000
And the very obvious and actually like the main point is memory footprint.

12:07.000 --> 12:12.000
Because all else, it's kind of nice, but if it doesn't fit, you cannot do anything with it.

12:12.000 --> 12:16.000
So that is definitely the main thing you will be looking at.

12:16.000 --> 12:20.000
The other points are it has of course, it needs to run on bare metal.

12:20.000 --> 12:28.000
And for us, our use case is running it on an ESP32, something like a C3 or C5.

12:28.000 --> 12:36.000
So there we want to use Embassy to access the peripherals and also to be somewhat concurrent.

12:36.000 --> 12:41.000
So it was very important for us that the runtime we pick integrates well with that.

12:41.000 --> 12:45.000
And for this reason, and also because we generally have a complete Rust stack,

12:45.000 --> 12:49.000
we were very biased towards runtimes written in Rust.

12:49.000 --> 12:58.000
Yeah, okay, then the question, well, what do we do? How do we find out about memory footprint?

12:58.000 --> 13:02.000
We did like a very straightforward thing to say, hey, let's set up a benchmark.

13:02.000 --> 13:09.000
The benchmark is very simple: it has one module which uses one import from the host to just log.

13:09.000 --> 13:12.000
And then we implemented it for the different runtimes.

13:12.000 --> 13:14.000
And this is something we actually publish.

13:14.000 --> 13:20.000
So if you have an nRF53 board, you can just stick it into your computer,

13:20.000 --> 13:25.000
and you can run this repository; each of the feature flags gives you a different runtime.

13:25.000 --> 13:30.000
And then you can, on the one hand, basically see the numbers that I'm going to show you.

13:30.000 --> 13:32.000
And hopefully see the same numbers.

13:32.000 --> 13:36.000
You can tell me how I can do better because maybe I can get it smaller.

13:36.000 --> 13:37.000
That would be awesome.

13:37.000 --> 13:42.000
And the other thing is that you see the code, so you basically see the integration

13:42.000 --> 13:44.000
of the host with each of the runtimes.

13:44.000 --> 13:50.000
So if you want to start playing with that, I think this would be kind of a cool place to start.

13:50.000 --> 13:52.000
So these are the runtimes we evaluated.

13:52.000 --> 13:54.000
We have the three Rust ones.

13:54.000 --> 13:56.000
Wasmi and Wasmtime,

13:56.000 --> 14:00.000
which I was very happy to see is going in a more embedded direction now.

14:00.000 --> 14:04.000
I think they also need help with some of the architectures.

14:04.000 --> 14:09.000
So if you want to contribute somewhere, that would be a cool place, because it would be awesome if it could run

14:09.000 --> 14:12.000
on embedded. And tinywasm, to my knowledge

14:12.000 --> 14:16.000
the Rust runtime with the smallest footprint.

14:16.000 --> 14:18.000
And of course we had to consider WAMR,

14:18.000 --> 14:20.000
because that's the standard for running

14:20.000 --> 14:22.000
WebAssembly on MCUs.

14:22.000 --> 14:25.000
And sadly, we have seen why.

14:25.000 --> 14:27.000
Because these are numbers.

14:27.000 --> 14:31.000
Like you see, Wasmi in principle is intended to run on

14:31.000 --> 14:36.000
constrained devices, but it needs like 10 times more memory than WAMR.

14:36.000 --> 14:40.000
tinywasm, which is the best one we had written in Rust,

14:40.000 --> 14:42.000
Still needs two to three times more.

14:42.000 --> 14:44.000
And also important to highlight.

14:44.000 --> 14:46.000
Like all three.

14:46.000 --> 14:48.000
Wasmi, Wasmtime and tinywasm are

14:48.000 --> 14:50.000
running in interpreted mode here,

14:50.000 --> 14:52.000
whereas WAMR runs in AOT.

14:52.000 --> 14:55.000
AOT is like 10 times faster

14:55.000 --> 14:56.000
performance-wise, than interpreted.

14:56.000 --> 14:59.000
And any time you see these numbers, when someone says that

14:59.000 --> 15:02.000
Wasm introduces only like a 2x overhead

15:02.000 --> 15:05.000
compared to native, they always talk about AOT.

15:05.000 --> 15:08.000
So I'm not talking very much about performance here,

15:08.000 --> 15:12.000
but WAMR also significantly outperforms those three.

15:12.000 --> 15:14.000
So that was bad.

15:14.000 --> 15:17.000
Because, you know, it means we cannot use the Rust runtimes.

15:17.000 --> 15:21.000
And the other thing is, we still want to integrate with Embassy.

15:21.000 --> 15:24.000
We still want to integrate with asynchronous Embassy,

15:24.000 --> 15:29.000
which is kind of a problem, because WAMR is a synchronous C library.

15:29.000 --> 15:33.000
So that's kind of the last thing I want to talk about,

15:33.000 --> 15:35.000
like how we did that.

15:35.000 --> 15:38.000
First just to like kind of reiterate on what we are talking about here.

15:38.000 --> 15:40.000
We're talking about this specific situation.

15:40.000 --> 15:44.000
You can imagine you are running WAMR on your MCU.

15:44.000 --> 15:46.000
And just kind of starts up there.

15:46.000 --> 15:47.000
Essentially it's a module.

15:47.000 --> 15:49.000
The module starts running.

15:49.000 --> 15:52.000
And at some point, it now requires something from a peripheral.

15:52.000 --> 15:55.000
For example, like imagine it just wants to know when someone

15:55.000 --> 15:56.000
pushed a button.

15:56.000 --> 15:58.000
So it will call to the host and just kind of say, hey,

15:58.000 --> 16:01.000
tell me when they push a button.

16:01.000 --> 16:05.000
Now what we want to happen is that WAMR just triggers this

16:05.000 --> 16:08.000
asynchronous functionality and kind of blocks until the button push is

16:08.000 --> 16:09.000
there.

16:09.000 --> 16:12.000
And then gives control back to the module.

16:12.000 --> 16:17.000
And I think probably, like for the people coming from the Linux side,

16:17.000 --> 16:20.000
you at this point kind of know what I'm talking about.

16:20.000 --> 16:21.000
It's like not a big deal.

16:21.000 --> 16:23.000
Like everybody has done that.

16:23.000 --> 16:25.000
If we are talking, if we are on Linux,

16:25.000 --> 16:28.000
anybody using Tokio has probably had the situation of having

16:28.000 --> 16:32.000
to work with something which has a synchronous interface.

16:32.000 --> 16:35.000
And how you solve it is you just say, okay, fine.

16:35.000 --> 16:38.000
We will just separate the asynchronous from the synchronous

16:38.000 --> 16:39.000
functionality.

16:39.000 --> 16:41.000
WAMR gets its own thread.

16:41.000 --> 16:42.000
It's just running there.

16:42.000 --> 16:46.000
They talk through some way of communicating, say, through channels.

16:46.000 --> 16:48.000
So when WAMR wants something asynchronous,

16:48.000 --> 16:50.000
it will communicate this intent.

16:50.000 --> 16:53.000
And it will go to sleep awaiting the response.

16:53.000 --> 16:54.000
It's fine.

16:54.000 --> 16:57.000
The OS will wake it up as soon as the asynchronous functionality is done.

16:57.000 --> 16:59.000
And that's a sync-async bridge.
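On Linux, that bridge can be sketched with plain threads and channels (std instead of Tokio and WAMR, to keep it self-contained): the blocking side parks on a channel receive, and the other side wakes it by sending the response.

```rust
use std::sync::mpsc;
use std::thread;

enum Request {
    WaitForButton,
}

// The "runtime" side makes a blocking request and sleeps until the
// "async" side answers; only this thread blocks, nothing busy-waits.
fn bridge_demo() -> &'static str {
    let (req_tx, req_rx) = mpsc::channel::<Request>();
    let (resp_tx, resp_rx) = mpsc::channel::<&'static str>();

    // The "async side": services requests (here it just pretends the
    // button was pushed immediately) and sends responses back.
    let service = thread::spawn(move || {
        if let Ok(Request::WaitForButton) = req_rx.recv() {
            resp_tx.send("button pushed").unwrap();
        }
    });

    req_tx.send(Request::WaitForButton).unwrap();
    let answer = resp_rx.recv().unwrap(); // the OS parks us until the response
    service.join().unwrap();
    answer
}

fn main() {
    assert_eq!(bridge_demo(), "button pushed");
}
```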

16:59.000 --> 17:03.000
Now, the problem: again, we are on an ESP32-C3.

17:03.000 --> 17:05.000
Specifically the C series, which is single-core.

17:05.000 --> 17:08.000
So we don't have an OS to wake us up.

17:08.000 --> 17:10.000
We don't have threads to separate them.

17:10.000 --> 17:14.000
We can do the Embassy thing and just say, okay, let's put them in different tasks.

17:14.000 --> 17:18.000
But as soon as WAMR blocks, it blocks the entire execution context.

17:18.000 --> 17:20.000
So this doesn't solve it.

17:20.000 --> 17:25.000
And this was something that we saw, like as a serious problem for us.

17:25.000 --> 17:30.000
And at this point, it really looked like a tradeoff.

17:30.000 --> 17:34.000
It really looked like a tradeoff between, do we go for the lower footprint

17:34.000 --> 17:35.000
that WAMR gives us?

17:35.000 --> 17:38.000
Or do we have to go away from Embassy?

17:38.000 --> 17:43.000
But the cool thing was that a colleague of mine,

17:43.000 --> 17:46.000
Asunder, has solved this problem in a very cool way,

17:46.000 --> 17:49.000
which actually gives us kind of the best of both worlds.

17:49.000 --> 17:54.000
And the solution is essentially: we do put both WAMR

17:54.000 --> 17:58.000
and the asynchronous services each in their own task.

17:58.000 --> 18:01.000
The WAMR task is not really asynchronous,

18:01.000 --> 18:03.000
but that's how it looks to Embassy.

18:03.000 --> 18:06.000
And the important thing is that the asynchronous host service

18:06.000 --> 18:08.000
has a higher priority.

18:08.000 --> 18:11.000
And now these two talk via signals.

18:11.000 --> 18:13.000
And so how this exchange now works is,

18:13.000 --> 18:16.000
when WAMR needs something from the asynchronous side,

18:16.000 --> 18:17.000
it will signal it.

18:17.000 --> 18:22.000
And then it will try to read the response signal.

18:22.000 --> 18:24.000
And if it doesn't find anything,

18:24.000 --> 18:27.000
it goes to sleep using wait-for-interrupt (WFI).

18:27.000 --> 18:30.000
Now, what do the asynchronous host services do?

18:30.000 --> 18:32.000
They do basically the things that they always do.

18:32.000 --> 18:34.000
When we tell them what we need,

18:34.000 --> 18:36.000
they register an interest in an interrupt,

18:36.000 --> 18:37.000
and go to sleep themselves.

18:37.000 --> 18:41.000
And so at this point, we are at the very cool point

18:41.000 --> 18:44.000
where nobody is busy waiting.

18:44.000 --> 18:46.000
Nobody is burning CPU cycles.

18:46.000 --> 18:49.000
Nobody is using up any power.

18:49.000 --> 18:53.000
But we still are 100% reactive to interrupts

18:53.000 --> 18:55.000
and in a very efficient way.

18:55.000 --> 18:58.000
So if an interrupt comes in that we are not interested in,

18:58.000 --> 18:59.000
nothing cares.

18:59.000 --> 19:01.000
WAMR will briefly wake up,

19:01.000 --> 19:03.000
check its response signal,

19:03.000 --> 19:05.000
see there's nothing there, and go to sleep again.

19:05.000 --> 19:08.000
If we get the interrupt we are actually waiting for,

19:08.000 --> 19:10.000
Like in this case, the button push.

19:10.000 --> 19:13.000
then the async side will get to act first,

19:13.000 --> 19:15.000
because it is a higher priority.

19:15.000 --> 19:16.000
It will process the response.

19:16.000 --> 19:18.000
It will send the response signal to WAMR.

19:18.000 --> 19:21.000
So that when WAMR gets to act, it just finds its result.

19:21.000 --> 19:23.000
It can give the control back to the module

19:23.000 --> 19:26.000
and we have exactly the behavior we wanted.
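That protocol can be simulated on a host machine, with atomics standing in for Embassy signals and `park`/`unpark` standing in for WFI and the interrupt. This is an analogue of the scheme just described, not the actual Embassy code:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// Round-trip of the described protocol: signal the request, then sleep
// until the response signal is set, re-checking on every wakeup.
fn signal_roundtrip() -> bool {
    let request = Arc::new(AtomicBool::new(false));
    let response = Arc::new(AtomicBool::new(false));

    let (req2, resp2) = (Arc::clone(&request), Arc::clone(&response));
    let wamr_thread = thread::current();

    // "Async host service": waits for the request (in the real system it
    // registers interest in an interrupt and sleeps), then sets the
    // response signal and "interrupts" the WAMR side.
    let service = thread::spawn(move || {
        while !req2.load(Ordering::Acquire) {
            thread::yield_now();
        }
        resp2.store(true, Ordering::Release);
        wamr_thread.unpark(); // stands in for the interrupt
    });

    // "WAMR task": signal the request, then sleep via park() (our WFI).
    // A wakeup with no response present just loops and sleeps again,
    // exactly like the spurious-interrupt case in the talk.
    request.store(true, Ordering::Release);
    while !response.load(Ordering::Acquire) {
        thread::park();
    }

    service.join().unwrap();
    response.load(Ordering::Acquire)
}

fn main() {
    assert!(signal_roundtrip());
}
```

`park`/`unpark` carry a wakeup token, so the handoff cannot deadlock even if the "interrupt" arrives before the sleeper parks.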

19:26.000 --> 19:27.000
So this was very cool.

19:27.000 --> 19:31.000
I mean, this was honestly something which we saw as a problem

19:31.000 --> 19:32.000
and at the end.

19:32.000 --> 19:34.000
This is the way we are integrating WAMR.

19:34.000 --> 19:37.000
This gives us a small memory footprint, high performance

19:37.000 --> 19:42.000
and full integration with the Embassy ecosystem.

19:42.000 --> 19:45.000
So we wanted to share this because this is cool.

19:46.000 --> 19:49.000
Yeah, and with that, I'm at the end.

19:49.000 --> 19:51.000
Again, everything I was telling you about the integration:

19:51.000 --> 19:53.000
I was just doing the talking.

19:53.000 --> 19:54.000
that's all Asunder's work.

19:54.000 --> 19:55.000
This is really cool.

19:55.000 --> 19:56.000
That's his GitHub handle.

19:56.000 --> 19:57.000
Check it out.

19:57.000 --> 19:59.000
Also, while working on this,

19:59.000 --> 20:01.000
he has been dealing a lot with integrating

20:01.000 --> 20:05.000
a C library into Rust on embedded.

20:05.000 --> 20:08.000
And for that purpose, he has actually published

20:08.000 --> 20:10.000
a C compatibility crate.

20:10.000 --> 20:11.000
It's really new.

20:11.000 --> 20:14.000
So give it a little bit of time;

20:14.000 --> 20:17.000
it's not perfect yet, but the purpose behind it is

20:17.000 --> 20:21.000
that you can interact with C libraries without requiring

20:21.000 --> 20:24.000
any platform-specific tooling, and also without requiring

20:24.000 --> 20:26.000
like root privileges.

20:26.000 --> 20:28.000
So that is pretty cool.

20:28.000 --> 20:29.000
Besides that,

20:29.000 --> 20:32.000
you can check out the repo with the benchmark.

20:32.000 --> 20:36.000
And also, this is just like one of the many interesting

20:36.000 --> 20:39.000
problems we deal with while working on Mermek.

20:39.000 --> 20:42.000
Mermek will be open source later this year.

20:42.000 --> 20:45.000
We're probably going to be doing a pre-launch first.

20:45.000 --> 20:47.000
So also check out the website.

20:47.000 --> 20:48.000
Yeah.

20:48.000 --> 20:49.000
And that's pretty much it.

20:49.000 --> 20:51.000
I hope it was interesting.

20:51.000 --> 20:54.000
And I hope we have time for some questions.

20:54.000 --> 21:03.000
Thank you.

21:03.000 --> 21:04.000
Yes.

21:04.000 --> 21:07.000
So you talked a bit about the async call

21:07.000 --> 21:09.000
at the very end, which was interesting.

21:09.000 --> 21:12.000
Is it possible to have an async call

21:12.000 --> 21:15.000
within the Wasm module itself?

21:15.000 --> 21:16.000
An async call, right?

21:16.000 --> 21:20.000
So, if I, yes.

21:20.000 --> 21:21.000
Yeah.

21:22.000 --> 21:25.000
I repeat the question just to make sure that I understood correctly.

21:25.000 --> 21:29.000
So you are asking whether it's also possible to make it

21:29.000 --> 21:32.000
so that we can write asynchronous code and compile it

21:32.000 --> 21:37.000
to WebAssembly.

21:37.000 --> 21:38.000
Yeah.

21:38.000 --> 21:40.000
So there are two parts to this.

21:40.000 --> 21:42.000
So short answer is yes.

21:42.000 --> 21:44.000
It likely is possible.

21:44.000 --> 21:47.000
But it's like not that simple.

21:48.000 --> 21:54.000
So from the runtime side, the one thing you need

21:54.000 --> 21:57.000
is the ability to execute anything async.

21:57.000 --> 22:01.000
So that would already be solved with this integration we had.

22:01.000 --> 22:04.000
Now from the module side, the problem is a little bit

22:04.000 --> 22:06.000
like when you write WebAssembly.

22:06.000 --> 22:08.000
Everything for the module looks blocking.

22:08.000 --> 22:12.000
So then you have to set up some infrastructure

22:12.000 --> 22:16.000
where I mean you would need to define specific host functions,

22:16.000 --> 22:19.000
which mimic this asynchronous behavior,

22:19.000 --> 22:21.000
where you like kind of block from the inside,

22:21.000 --> 22:25.000
but actually you signal your interest in some event

22:25.000 --> 22:28.000
and also give it like a function to call back.

22:28.000 --> 22:32.000
But also,

22:32.000 --> 22:38.000
I'm not sure if it's that difficult,

22:38.000 --> 22:41.000
but it's also not super simple.

22:41.000 --> 22:44.000
Okay, I think that's all the time we have.

22:44.000 --> 22:47.000
We had a bit of a late start.

22:47.000 --> 22:49.000
Thank you again, Fedor.

22:49.000 --> 22:50.000
Awesome, thank you.

