WEBVTT

00:00.000 --> 00:09.000
I'm very sorry for my colleagues who have been hearing me talk about

00:09.000 --> 00:14.000
Varlink over and over again, so yeah, you get another thirty minutes of me repeating everything.

00:14.000 --> 00:17.000
So, what's my point?

00:17.000 --> 00:23.000
Okay, Varlink. So, Varlink, what's that?

00:23.000 --> 00:26.000
I always have to make the same joke: this is not the violin thing.

00:27.000 --> 00:29.000
Extremely simple IPC.

00:29.000 --> 00:32.000
It's JSON, sent over a Unix stream socket.

00:32.000 --> 00:33.000
Calls are terminated with a NUL byte; replies are

00:33.000 --> 00:36.000
also JSON, terminated with a NUL byte, and that's really all there is.
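That wire format can be sketched in a few lines. This is an editor's illustration: the method name, parameters, and socket path are hypothetical; only the framing (one JSON object per message, NUL-terminated, over a Unix stream socket) is the point.

```python
import json

def encode_call(method: str, parameters: dict) -> bytes:
    """Frame a Varlink-style method call: one JSON object, NUL-terminated."""
    return json.dumps({"method": method, "parameters": parameters}).encode() + b"\0"

def decode_messages(buf: bytes) -> list:
    """Split a byte stream into the NUL-terminated JSON messages it contains."""
    return [json.loads(part) for part in buf.split(b"\0") if part]

# Example frame for a hypothetical service:
frame = encode_call("org.example.Echo", {"text": "hi"})

# To talk to a real service you would connect an AF_UNIX stream socket, e.g.:
#   s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
#   s.connect("/run/org.example.service")  # hypothetical socket path
#   s.sendall(frame)
```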

00:36.000 --> 00:39.000
And thank you, that was my talk.

00:42.000 --> 00:45.000
But of course, there's more.

00:45.000 --> 00:48.000
There are specifications; they're not very good specifications,

00:48.000 --> 00:52.000
but there's something like a specification for this thing.

00:52.000 --> 00:55.000
There are bindings available for all the popular languages, including a native one

00:55.000 --> 00:57.000
in systemd.

00:57.000 --> 01:01.000
It has been around for quite a while; initially it found very little adoption,

01:01.000 --> 01:07.000
but we have started to make heavy use of it in systemd.

01:07.000 --> 01:12.000
Yeah, and before we talk about Varlink in more detail,

01:12.000 --> 01:16.000
let's talk a little bit about D-Bus, because if we talk about a new IPC system

01:16.000 --> 01:22.000
that is supposed to take over, like, where we want to move the focus from D-Bus to this thing,

01:22.000 --> 01:28.000
I guess I need to explain why D-Bus is not the future, in my point of view.

01:28.000 --> 01:33.000
As an explanation: I've been around D-Bus for a long time, because, like, initially,

01:33.000 --> 01:37.000
I was one of the first, like, I worked on Avahi, like, the zeroconf

01:37.000 --> 01:40.000
daemon, and was one of the first heavy users of D-Bus.

01:40.000 --> 01:44.000
And then I implemented sd-bus, like, the sd-bus library for systemd.

01:44.000 --> 01:50.000
And I think I'm still officially one of the maintainers of the D-Bus daemon.

01:50.000 --> 01:54.000
Anyway, so I have a long history with D-Bus.

01:54.000 --> 02:01.000
So I hope this explains a little bit why I think I know a thing or two about D-Bus,

02:01.000 --> 02:07.000
and that if I have an opinion, I'm not a complete idiot for saying this.

02:07.000 --> 02:11.000
So yeah, D-Bus is actually okay for many use cases, right?

02:11.000 --> 02:14.000
Like, it's an IPC system, and you can use it.

02:14.000 --> 02:17.000
But it has serious limitations for many other use cases.

02:17.000 --> 02:22.000
In particular, it has many limitations for what I, and what we in the systemd

02:22.000 --> 02:25.000
project, have been using it for.

02:25.000 --> 02:29.000
systemd deeply integrates with D-Bus, right?

02:29.000 --> 02:35.000
Like, all our APIs, until two years ago, were exclusively available via D-Bus.

02:35.000 --> 02:39.000
And this level of integration is not just going away.

02:39.000 --> 02:45.000
But yeah, we will focus, like, move our focus probably more to Varlink.

02:45.000 --> 02:49.000
So let's be more specific and talk about problems with D-Bus.

02:49.000 --> 02:51.000
There are many problems, right?

02:51.000 --> 02:53.000
Like, I could have many, many slides.

02:53.000 --> 02:57.000
But this talk is not supposed to be a D-Bus talk, it's supposed to be a Varlink talk,

02:57.000 --> 02:59.000
so I'll try to keep this short.

02:59.000 --> 03:04.000
One of the main problems is that D-Bus is not available until late boot, right?

03:04.000 --> 03:12.000
Like, the daemon gets started when systemd leaves the early boot phase, right?

03:12.000 --> 03:16.000
Like, it's not available in the initrd, it's not available in the early boot phase.

03:16.000 --> 03:20.000
Only after the initrd, which is annoying.

03:20.000 --> 03:22.000
Yeah.

03:22.000 --> 03:31.000
It's, I mean, it's not on my laptop, it's somewhere else here... some issue.

03:31.000 --> 03:39.000
So it's not available in early boot; it's only available in late boot.

03:39.000 --> 03:44.000
Not in the initrd, for example, and this is painful for us because, of course,

03:44.000 --> 03:50.000
a good chunk of the complexity in systemd, of everything that it does, is starting up the system.

03:50.000 --> 03:54.000
And if one of the core concepts that we need for this, which is the IPC system, is

03:54.000 --> 03:58.000
not around during early boot, things are really, really painful.

03:58.000 --> 04:01.000
Yeah, it's just too late.

04:01.000 --> 04:06.000
It's probably too late for more than half of the things that we do in systemd.

04:07.000 --> 04:09.000
systemd has had a way out so far.

04:09.000 --> 04:13.000
We basically are our own little D-Bus daemon, in a way.

04:13.000 --> 04:17.000
So you can talk to PID 1 directly via the D-Bus protocol.

04:17.000 --> 04:20.000
And you get basically a direct connection.

04:20.000 --> 04:28.000
And in a way, we embed a little pseudo D-Bus daemon in our code base for this.

04:28.000 --> 04:32.000
But it doesn't have the proper D-Bus semantics, because direct connections in D-Bus,

04:32.000 --> 04:35.000
they do exist, but they are weird, right?

04:35.000 --> 04:38.000
Like, because there is no signal matching;

04:38.000 --> 04:43.000
there are a lot of other things that a real D-Bus daemon has that you don't have if you have such a direct connection.

04:43.000 --> 04:49.000
So it uses the same marshalling, but it doesn't really reproduce the semantics;

04:49.000 --> 04:54.000
but it was good enough for us to make systemd work.

04:54.000 --> 04:59.000
So this is actually really annoying to us, because, you know,

04:59.000 --> 05:02.000
there are so many daemons; take networkd, for example.

05:02.000 --> 05:07.000
It's a networking thing and we would like to offer a D-Bus API for this, of course.

05:07.000 --> 05:13.000
And then if we made it based on D-Bus, this would mean that networking APIs are only available in late boot,

05:13.000 --> 05:15.000
but of course that's not how you can do things.

05:15.000 --> 05:21.000
We need nowadays networking during very, very early boot because you might want to boot from a network source.

05:21.000 --> 05:28.000
So, you know, the way this works in networkd right now is that we actually watch for when D-Bus becomes available.

05:28.000 --> 05:37.000
So first you have this networking daemon that starts up based solely on configuration, until the point where D-Bus becomes available.

05:37.000 --> 05:42.000
And then it connects to D-Bus and makes its services available on D-Bus, and suddenly you can actually talk to it.

05:42.000 --> 05:45.000
But this is really painful because it shouldn't be that way.

05:45.000 --> 05:48.000
You should always be able to talk to networkd.

05:48.000 --> 05:53.000
So there's also the other problem, which is that it cannot be used for many basic IPC services,

05:53.000 --> 05:56.000
because D-Bus itself consumes them, right?

05:56.000 --> 05:58.000
An example is the user/group database.

05:58.000 --> 06:02.000
So D-Bus has this policy language.

06:02.000 --> 06:05.000
It's like this XML monster, in a way,

06:05.000 --> 06:09.000
where you can say, yeah, this shall be accessible by this user or this group.

06:09.000 --> 06:17.000
But this user and group stuff is traditionally, like, I mean, most of the time it's just /etc/passwd.

06:17.000 --> 06:26.000
But it's inherently designed so that there can be a backend service providing the user database instead.

06:26.000 --> 06:33.000
And then you have this problem: like, okay, we have this daemon that is supposed to provide the user database, or a part of it.

06:33.000 --> 06:35.000
And hence I need to talk to it.

06:35.000 --> 06:43.000
So I want to use IPC, but I can't use D-Bus for this, because D-Bus would be a consumer of the user database, and then you have a cyclic loop:

06:43.000 --> 06:52.000
D-Bus uses that IPC service, which, like, implements a D-Bus service, which then wants to talk to D-Bus, and then you have a deadlock.

06:52.000 --> 06:59.000
So that's also very, very painful, because for many of the core things that systemd wants to do, and other really low-level system software wants to do,

06:59.000 --> 07:04.000
you just can't use it, because, yeah, this would result in a deadlock.

07:04.000 --> 07:10.000
And if it wouldn't result in deadlock because you're very, very careful with asynchronous programming, it's still a terrible nightmare.

07:10.000 --> 07:15.000
And you have to figure out, like, special handling to deal with this.

07:15.000 --> 07:24.000
Another problem that people typically don't know, even if they know D-Bus very well, is that it really cannot be used to stream data, right?

07:24.000 --> 07:30.000
Like, it's fine for control messages. It's horrible if you have to stream data, right?

07:30.000 --> 07:33.000
Like, because it has no sensible flow control.

07:34.000 --> 07:38.000
This is because, like, the D-Bus daemon is a shared resource, right?

07:38.000 --> 07:42.000
It needs to make sure that all the clients that talk to it are

07:42.000 --> 07:51.000
not flooding the daemon with too many messages, because then one of them could monopolize its resources and the other ones could not be processed.

07:51.000 --> 07:55.000
So the way out for them is basically that they do rate limiting.

07:55.000 --> 08:01.000
So if one of the clients sends too many messages, or doesn't process the messages that are sent to it,

08:01.000 --> 08:04.000
then the D-Bus daemon kicks it off the bus.

08:04.000 --> 08:06.000
But this is terrible, right?

08:06.000 --> 08:10.000
Like, because if you want to stream some data through D-Bus,

08:10.000 --> 08:15.000
yeah, I don't know, you have some huge user database or something.

08:15.000 --> 08:20.000
If you want to stream it through, like, somebody asks you for the user database and you want to send it through D-Bus,

08:20.000 --> 08:26.000
you have to be careful with this, because if your user database gets too large

08:26.000 --> 08:29.000
and you try to do this, then D-Bus will decide,

08:29.000 --> 08:33.000
"this thing is flooding me with messages" and kicks you off the bus.

08:33.000 --> 08:35.000
This is a real problem.

08:35.000 --> 08:44.000
And it basically means that whenever you actually have, or even risk having, potentially huge data that you have to stream through there,

08:44.000 --> 08:46.000
you cannot use D-Bus, right?

08:46.000 --> 08:51.000
People ignore this, and it shows up basically as a scalability problem,

08:51.000 --> 08:58.000
right, where, I don't know, systems are suddenly used at a big scale in some installation,

08:58.000 --> 09:02.000
and suddenly everything falls apart because D-Bus kicks everybody off the bus.

09:02.000 --> 09:10.000
So the people who do understand this then try to find patterns around this,

09:10.000 --> 09:17.000
and use side channels to get the actual data, or they split up the enumeration

09:10.000 --> 09:17.000
so it can happen in individual messages, and do flow control themselves, and things like this.

09:22.000 --> 09:29.000
But this is also inherently racy, because if you split it up, it's not going to be an atomic snapshot,

09:29.000 --> 09:36.000
an atomic snapshot of what you're doing; it's going to be a series of probably, hopefully, closely related pieces,

09:36.000 --> 09:38.000
but not an atomic snapshot.

09:38.000 --> 09:40.000
That's terrible.

09:40.000 --> 09:42.000
It's also slow.

09:43.000 --> 09:46.000
Because, like, this D-Bus broker, this D-Bus daemon that sits in the middle,

09:46.000 --> 09:49.000
needs to basically process every single message sent on the system, right?

09:49.000 --> 09:54.000
Like, it needs to copy each message from the sender to itself, and then to the destination,

09:54.000 --> 09:58.000
and then on the way back, for the reply, it needs to do this thing again.

09:58.000 --> 10:02.000
And this also means that every IPC call becomes really expensive,

10:02.000 --> 10:04.000
because it's at least four context switches, right?

10:04.000 --> 10:09.000
Like, from the client to the D-Bus broker, from the D-Bus broker to the service,

10:09.000 --> 10:13.000
from the service to the D-Bus broker, and from the broker back to the client, right?

10:13.000 --> 10:15.000
And that is expensive, right?

10:15.000 --> 10:20.000
Whatever modern CPUs and modern operating systems are good at,

10:20.000 --> 10:24.000
what they are not good at is doing these kinds of context switches.

10:24.000 --> 10:29.000
So it's really wasteful; it's the one thing you really shouldn't do

10:29.000 --> 10:34.000
if you care about performance, latency, and things like that.

10:34.000 --> 10:37.000
Also, the security model is simply garbage, right?

10:37.000 --> 10:41.000
Like, there's a good reason why everybody who implements a service on the system bus

10:41.000 --> 10:46.000
at least uses polkit to make security decisions.

10:46.000 --> 10:50.000
I mean, polkit is also not the perfect design,

10:50.000 --> 10:53.000
but it's at least better than the D-Bus security model.

10:53.000 --> 11:00.000
In essence, if you look around at the security policies

11:00.000 --> 11:05.000
that are shipped on general-purpose Linux distributions for system D-Bus services,

11:05.000 --> 11:07.000
they basically let everything through,

11:07.000 --> 11:10.000
and then use polkit to make the actual decisions.

11:10.000 --> 11:12.000
And if you look at the desktop stuff nowadays,

11:12.000 --> 11:14.000
with Flatpak and sandboxed apps,

11:14.000 --> 11:17.000
they also don't use the security model of D-Bus.

11:17.000 --> 11:19.000
They put a D-Bus proxy in the middle,

11:19.000 --> 11:24.000
which makes the actual security decisions and isolates the app world

11:24.000 --> 11:28.000
from the desktop world outside of it.

11:29.000 --> 11:32.000
polkit in particular, the way it works out,

11:32.000 --> 11:34.000
is also really, really messy,

11:34.000 --> 11:38.000
because it adds like six more process context switches, right?

11:38.000 --> 11:40.000
Like because now we have the client,

11:40.000 --> 11:43.000
talking to the D-Bus broker, the D-Bus broker talking to the service

11:43.000 --> 11:45.000
that's supposed to respond to this,

11:45.000 --> 11:48.000
the service then sending a message to D-Bus again

11:48.000 --> 11:50.000
that is intended for polkit,

11:50.000 --> 11:52.000
then the D-Bus broker sends that to polkit,

11:52.000 --> 11:54.000
polkit back to the D-Bus daemon,

11:54.000 --> 11:57.000
the D-Bus daemon back to the service, back to the D-Bus daemon,

11:57.000 --> 11:59.000
the D-Bus daemon to the client.

11:59.000 --> 12:02.000
It's so wasteful regarding process context

12:02.000 --> 12:05.000
switches, and again, those are the things

12:05.000 --> 12:07.000
that make everything slow.

12:07.000 --> 12:09.000
We should avoid them.

12:09.000 --> 12:12.000
Then another thing; I mean, this goes on and on.

12:12.000 --> 12:15.000
Another thing that is really, really painful is like,

12:15.000 --> 12:16.000
very often you have this pattern,

12:16.000 --> 12:18.000
if you talk to a D-Bus service

12:18.000 --> 12:20.000
and you create some object on the other side,

12:20.000 --> 12:21.000
I don't know,

12:21.000 --> 12:23.000
in Avahi, this works, for example, with

12:23.000 --> 12:24.000
the service browser, right?

12:24.000 --> 12:27.000
Like, you say, I want to see all the WebDAV shares

12:27.000 --> 12:29.000
and whatever else,

12:29.000 --> 12:30.000
there are on my network,

12:30.000 --> 12:32.000
so you create this WebDAV share

12:32.000 --> 12:34.000
browsing object on the other side,

12:34.000 --> 12:36.000
and it's yours,

12:36.000 --> 12:38.000
and you expect to get signals from it,

12:38.000 --> 12:39.000
so you, like,

12:39.000 --> 12:41.000
whenever a share like that shows up or

12:41.000 --> 12:42.000
goes away.

12:42.000 --> 12:44.000
So you want to subscribe to it,

12:44.000 --> 12:46.000
and the way this works in D-Bus is then,

12:46.000 --> 12:48.000
yeah, you first do the method call to

12:48.000 --> 12:49.000
create the thing,

12:49.000 --> 12:50.000
you get a handle back,

12:50.000 --> 12:51.000
which they call the object path,

12:51.000 --> 12:53.000
and you go to the D-Bus broker

12:53.000 --> 12:54.000
and ask now,

12:54.000 --> 12:56.000
I want to install a match,

12:56.000 --> 12:57.000
as they call it,

12:57.000 --> 12:59.000
so that I get the notifications out of this.
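The subscription flow just described (a method call hands back an object path, then a match has to be installed with the broker) can be sketched as follows. The object path and rule below are illustrative, but the match-rule syntax is the real D-Bus one, installed via an AddMatch call on org.freedesktop.DBus:

```python
# Hypothetical object path returned by the "create the browser" method call:
object_path = "/Client1/ServiceBrowser1"

# The match rule the client then has to ask the broker to install (via an
# AddMatch call on org.freedesktop.DBus) before any signal gets delivered:
match_rule = "type='signal',path='%s'" % object_path

# Any signal the service emits between its method reply and the moment the
# broker installs this match is simply never delivered to the client.
```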

12:59.000 --> 13:01.000
But what's really painful about this,

13:01.000 --> 13:04.000
is that this is necessarily racy, right?

13:04.000 --> 13:07.000
Because you can only ask for the match to be installed,

13:07.000 --> 13:10.000
if you know the object path,

13:10.000 --> 13:13.000
that the browsing object is about.

13:13.000 --> 13:16.000
So there's always a time window, basically,

13:16.000 --> 13:20.000
where the service has already established

13:20.000 --> 13:21.000
this browsing object,

13:21.000 --> 13:22.000
this object on the other side,

13:23.000 --> 13:25.000
where the object path first travels in your direction,

13:25.000 --> 13:27.000
where you then have to send it back to the D-Bus broker,

13:27.000 --> 13:29.000
until the D-Bus broker actually installs the match for you,

13:29.000 --> 13:31.000
so that you can get the signals for it.

13:31.000 --> 13:32.000
This is super painful,

13:32.000 --> 13:34.000
because, in particular, in the Avahi case,

13:34.000 --> 13:35.000
this meant that, like,

13:35.000 --> 13:37.000
you would always lose the initial messages,

13:37.000 --> 13:38.000
the ones that tell you about,

13:38.000 --> 13:40.000
like, the services that are already on the network.

13:40.000 --> 13:43.000
So that is a conceptual,

13:43.000 --> 13:45.000
just complete misdesign, right?

13:45.000 --> 13:46.000
Like, if you

13:46.000 --> 13:49.000
have the conceptual right to create an object somewhere,

13:49.000 --> 13:51.000
of course you should be able to get the signals,

13:51.000 --> 13:53.000
the notifications about it,

13:53.000 --> 13:55.000
and the fact that this is not race free,

13:55.000 --> 13:57.000
it's just total misdesign.

13:57.000 --> 13:59.000
It's painful.

13:59.000 --> 14:01.000
And then there's, like,

14:01.000 --> 14:02.000
some of the kernel people,

14:02.000 --> 14:05.000
when they look at D-Bus, they call this stuttering.

14:05.000 --> 14:08.000
Like, if you put together a D-Bus method call,

14:08.000 --> 14:12.000
for example, even on the shell, like with dbus-send,

14:12.000 --> 14:14.000
then it feels like it's stuttering,

14:14.000 --> 14:16.000
in the sense that you,

14:16.000 --> 14:20.000
you specify basically the same string three times.

14:20.000 --> 14:21.000
Like,

14:21.000 --> 14:24.000
like, org.freedesktop.systemd1.

14:24.000 --> 14:27.000
Whatever service you make a call to,

14:27.000 --> 14:29.000
you specify that first as the service name,

14:29.000 --> 14:31.000
then you specify it as the object path,

14:31.000 --> 14:33.000
then you specify it as an interface name,

14:33.000 --> 14:35.000
and then you add the method call to it.

14:35.000 --> 14:37.000
So, conceptually,

14:37.000 --> 14:40.000
service name, object path,

14:40.000 --> 14:43.000
and interface name are different things, right?

14:43.000 --> 14:44.000
Effectively,

14:44.000 --> 14:47.200
in almost all services, they are the same strings, right?

14:47.200 --> 14:49.800
Like sometimes you write them with dots and sometimes with slashes,

14:49.800 --> 14:52.800
but it's the same literal stuff.

14:52.800 --> 14:54.920
And they call this stuttering, and it's terribly annoying,

14:54.920 --> 14:56.960
because it makes everything hard to read, right?

14:56.960 --> 14:59.680
Like, because, in particular, for the simplest services,

14:59.680 --> 15:01.840
that's really all you always have,

15:01.840 --> 15:04.960
and you still have to type this everywhere, all the time.

15:04.960 --> 15:07.360
I mean, shell completion makes this nicer,

15:07.360 --> 15:09.560
but still, it's somewhat visual clutter

15:09.560 --> 15:12.480
for very little value, so it's annoying.
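The repetition being described can be made concrete. These are systemd's real coordinates on the system bus; the dbus-send invocation in the comment is a real one, shown here as an editor's illustration of the "stuttering":

```python
# The "stuttering": the same dotted name shows up three times in one call.
service   = "org.freedesktop.systemd1"        # bus (service) name
path      = "/" + service.replace(".", "/")   # object path: same string, with slashes
interface = service + ".Manager"              # interface name: same string again
method    = "ListUnits"

# On the shell this becomes:
#   dbus-send --system --print-reply \
#     --dest=org.freedesktop.systemd1 \
#     /org/freedesktop/systemd1 \
#     org.freedesktop.systemd1.Manager.ListUnits
```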

15:14.840 --> 15:18.640
There's so much more.

15:18.640 --> 15:20.800
The type system is missing a lot of concepts.

15:20.800 --> 15:23.240
It's not extensible without breaking API.

15:23.240 --> 15:26.440
It has a homegrown encoding.

15:26.440 --> 15:28.520
Introspection doesn't know named structures;

15:28.520 --> 15:31.800
it's very cryptic XML and has no documentation strings.

15:31.800 --> 15:35.320
It has a property model that pushes you towards non-atomic behavior,

15:35.320 --> 15:36.600
which is annoying too, right?

15:36.600 --> 15:38.480
Like, because you have all these individual properties

15:38.480 --> 15:39.960
that an object can have.

15:39.960 --> 15:43.320
And typically, these all change together,

15:43.320 --> 15:45.240
but you cannot get a notification

15:45.240 --> 15:47.280
that they changed together; you will not get it all at once,

15:47.280 --> 15:50.280
so if you have five properties, you'll get five separate messages

15:50.280 --> 15:52.880
about this.

15:52.880 --> 15:55.400
The error system is just too naive. Also, D-Bus connections:

15:55.400 --> 15:57.680
they're not serializable and restorable, right?

15:57.680 --> 15:59.280
Like this is something really important for us.

15:59.280 --> 16:01.600
Like, in systemd, people expect us to do

16:01.600 --> 16:04.880
a daemon reload, where we serialize everything down,

16:04.880 --> 16:06.800
come back, and do not lose connections.

16:06.800 --> 16:09.040
You can't do that with D-Bus.

16:09.040 --> 16:11.040
There's no delegation concept for containers

16:11.200 --> 16:14.360
and sandboxes, unless you do proxy filtering.

16:14.360 --> 16:17.840
I mentioned this already: it's very web-unfriendly, right?

16:17.840 --> 16:21.440
Like, nowadays the world runs on web stuff,

16:21.440 --> 16:26.280
and it's very hard to connect these two worlds

16:26.280 --> 16:31.280
because they have very different semantic behavior.

16:32.160 --> 16:34.960
There are no enums; there's a global ordering guarantee, which

16:34.960 --> 16:37.680
means that you can never have worker threads

16:37.680 --> 16:38.800
and things like this.

16:38.800 --> 16:40.960
It goes on and on and on and on.

16:40.960 --> 16:43.360
So I'm not going to go into much detail with this.

16:45.360 --> 16:49.760
So much about D-Bus, before we talk about the good parts.

16:49.760 --> 16:50.640
I don't know there would-

16:50.640 --> 16:52.080
You have a question?

16:52.080 --> 16:54.640
Yeah, quick question about the mention

16:54.640 --> 16:58.200
that D-Bus is not really suited to the web.

16:58.200 --> 17:00.960
Could you give an example use case for that?

17:00.960 --> 17:03.280
So, to repeat the question:

17:03.280 --> 17:05.920
I said that D-Bus wasn't really suitable for the web,

17:05.920 --> 17:07.760
and I should give an example for this.

17:07.760 --> 17:11.040
So, the way the web works is usually

17:11.040 --> 17:14.960
like this request-and-response-based approach, right?

17:14.960 --> 17:17.200
Like you do one request and you get one response,

17:17.200 --> 17:20.400
or you push something to the server and you get a response, right?

17:20.400 --> 17:21.360
D-Bus is not like that.

17:21.360 --> 17:22.720
It's inherently stateful, right?

17:22.720 --> 17:24.480
Like you create a connection.

17:24.480 --> 17:27.840
This connection is continuous.

17:27.840 --> 17:29.840
It's multiplexed, even, right?

17:29.840 --> 17:31.680
So you will get multiple requests over this,

17:31.680 --> 17:33.520
and you need to deal with this.

17:33.520 --> 17:37.120
You have to continue to process what's coming in on it,

17:37.120 --> 17:40.480
because if you don't, and somebody sends you messages,

17:40.480 --> 17:43.920
you will be kicked off the bus, because that's always the logic, right?

17:43.920 --> 17:47.360
So it's very, very stateful, and then there are the matches installed on it, right?

17:47.360 --> 17:51.840
Like, so the connection actually has a lot of state

17:51.840 --> 17:53.040
attached to it in the D-Bus broker,

17:53.040 --> 17:55.840
like which signals shall be delivered to it.

17:55.840 --> 17:58.480
So it's very... yeah, the request-based stuff

17:58.480 --> 18:02.720
that the web people are used to, and this inherently stateful stuff,

18:02.720 --> 18:04.400
they just don't go well together.

18:04.400 --> 18:05.680
People do this, like, for example,

18:05.680 --> 18:08.480
you know, the Cockpit project from Red Hat.

18:08.480 --> 18:13.360
It's basically a web frontend to configure your server.

18:13.360 --> 18:17.760
It, like, even started out with a translation layer

18:17.760 --> 18:20.560
between HTTP-style stuff and D-Bus stuff,

18:20.560 --> 18:22.080
and it's just painful.

18:22.080 --> 18:25.280
You can certainly pull this off, right?

18:25.280 --> 18:27.680
Like, but the impedance mismatch means that you

18:27.680 --> 18:31.200
have to maintain a lot of state in this proxy

18:31.200 --> 18:32.800
to make this work.

18:33.760 --> 18:36.320
Any other questions at this point?

18:36.320 --> 18:37.520
Um, okay.

18:37.520 --> 18:39.440
So I'll start with the problems of Varlink,

18:39.440 --> 18:42.400
so that you can all forget about them before we come to all the good parts.

18:42.400 --> 18:45.200
Um,

18:45.200 --> 18:47.200
I mentioned that Varlink uses JSON.

18:47.200 --> 18:50.480
JSON is a very, very popular encoding.

18:50.480 --> 18:51.440
It has problems.

18:51.440 --> 18:54.240
One of them is that it's not 64-bit integer clean.

18:54.240 --> 18:56.320
I mean, the JSON specification,

18:56.320 --> 19:00.640
the original one, kind of leaves open how precise,

19:00.640 --> 19:03.600
like, what the bit width is supposed to be

19:03.600 --> 19:04.880
for the numbers that they have.

19:04.880 --> 19:06.320
It basically just leaves this open,

19:06.320 --> 19:08.240
so it could be arbitrary precision.

19:08.240 --> 19:11.040
Effectively, though, everybody,

19:11.040 --> 19:15.120
like, because it originally, I mean, was derived from JavaScript,

19:15.120 --> 19:18.160
and JavaScript didn't have an integer type.

19:18.160 --> 19:20.960
It only had, like, floating point numbers,

19:20.960 --> 19:24.720
specifically, like, 64-bit floating point numbers.

19:24.720 --> 19:28.080
And, yeah, because of this, like, if you care about integers,

19:28.080 --> 19:29.680
the largest integer you can store there,

19:30.000 --> 19:32.160
in a lossless fashion is 52 bits.

19:32.160 --> 19:34.080
And this, this ended up in

19:34.080 --> 19:35.760
many, many, many implementations.

19:35.760 --> 19:39.600
So people generally assume that, yeah, 52 bits

19:39.600 --> 19:41.680
you can encode losslessly.

19:41.680 --> 19:43.760
If you want more than that, you risk losing

19:43.760 --> 19:46.400
compatibility with certain implementations.
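The limit being described can be demonstrated directly. (The 52 bits the speaker mentions are the double's explicit mantissa bits; integers up to 2^53 are exactly representable.) This editor's sketch shows what a decoder that parses all numbers as 64-bit floats, as JavaScript engines historically did, does to a large integer:

```python
import json

big = 2**53 + 1  # just past the integer range a 64-bit float covers densely

# Python's json module keeps integers exact by default:
exact = json.loads(json.dumps(big))

# A decoder that parses every number as a double silently loses the low bit:
lossy = json.loads(json.dumps(big), parse_int=float)
```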

19:46.400 --> 19:47.440
It's a real problem.

19:47.440 --> 19:48.800
I think it's not a big problem.

19:48.800 --> 19:50.480
Not as big as people think it is.

19:50.480 --> 19:54.080
But, yeah, one of the solutions is to

19:54.080 --> 19:57.120
format large integers as strings.

19:57.120 --> 19:58.640
What I think is more important there is,

19:58.880 --> 20:00.880
like, at least in systemd, we figured out

20:00.880 --> 20:03.840
that most of the times where we actually wanted to use an integer

20:03.840 --> 20:07.840
above 52 bits, this was mostly to have,

20:07.840 --> 20:10.560
like, a niche value, like a special value

20:10.560 --> 20:12.400
that means this does not apply.

20:12.400 --> 20:17.120
Like, let's say, for example, whether a memory limit is configured, right?

20:17.120 --> 20:18.880
Like, on some systemd service.

20:18.880 --> 20:21.520
And if there was no memory limit,

20:21.520 --> 20:24.320
we would expose this as

20:24.400 --> 20:28.000
UINT64_MAX, like, 2^64 minus 1, right?

20:28.000 --> 20:31.200
And then this was our special case,

20:31.200 --> 20:33.920
our niche value that says: does not apply.

20:33.920 --> 20:35.040
There is no limit.

20:35.040 --> 20:38.560
But in the JSON world,

20:38.560 --> 20:40.640
this is much better modeled

20:40.640 --> 20:42.800
if you use null, right?

20:42.800 --> 20:45.840
Like, which is a literal value that indicates

20:45.840 --> 20:47.680
this does not apply here, right?

20:47.680 --> 20:50.800
So, in practice, we realized,

20:50.800 --> 20:53.760
yeah, the 52-bit limit kind of sucks,

20:53.760 --> 20:56.560
but most things in this world

20:56.560 --> 20:59.040
are kind of fine with it, and yeah,

20:59.040 --> 21:01.120
the way out is to format them as strings.

21:01.120 --> 21:02.560
So, it sucks a little bit, but it's not awful.
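The null-versus-sentinel idea can be sketched like this. Using UINT64_MAX for "no limit" matches the convention the speaker describes; the "MemoryMax" field name is only illustrative, not a fixed API:

```python
import json

UINT64_MAX = 2**64 - 1  # sentinel meaning "no limit" in the D-Bus-style API

def encode_memory_max(limit: int) -> str:
    """JSON-style encoding: use null instead of a sentinel integer.
    The "MemoryMax" field name is illustrative only."""
    return json.dumps({"MemoryMax": None if limit == UINT64_MAX else limit})
```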

21:02.560 --> 21:07.120
One other thing, of course, is that the marshalling of

21:07.120 --> 21:11.840
things into JSON is a little bit slower than it is for the

21:11.840 --> 21:15.360
binary encoding that D-Bus uses; like,

21:15.360 --> 21:18.960
40% on average, like, if you look at the distribution there.

21:19.040 --> 21:23.200
Like, there's a

21:23.200 --> 21:25.520
colleague who already did, like, measurements of this,

21:25.520 --> 21:28.400
where we tried to figure this out, because,

21:28.400 --> 21:31.120
you know, many people assume that marshalling is where

21:31.120 --> 21:34.800
performance has a big effect,

21:34.800 --> 21:38.640
but, you know,

21:38.640 --> 21:41.040
it's not. Like, marshalling, the 40%, is like

21:41.040 --> 21:43.920
a linear increase of this, the 40%.

21:43.920 --> 21:47.040
It effectively means that for small messages,

21:47.120 --> 21:49.600
you cannot even measure this, right?

21:49.600 --> 21:53.520
Like, and the actual latency price that you pay,

21:53.520 --> 21:57.360
you don't pay for the marshalling anyway, you pay for the round trips, right?

21:57.360 --> 22:00.160
So, marshalling performance:

22:00.160 --> 22:02.560
in my point of view, just ignore it entirely.

22:02.560 --> 22:05.600
Yeah, sure, it's a tiny bit slower,

22:05.600 --> 22:10.880
but this is offset many, many, many times

22:10.880 --> 22:14.080
by the fact that we can get rid of all the

22:14.080 --> 22:18.080
context switches. Yeah, so it's the round trips that kill you; slightly

22:18.080 --> 22:22.080
wasteful marshalling doesn't. Anyway, so much about the problems of

22:22.080 --> 22:24.080
Varlink. Any questions at this point?

22:24.080 --> 22:30.080
Yeah, can you send binary data, or is that a problem?

22:30.080 --> 22:34.080
So, the question was regarding whether we can send binary data

22:34.080 --> 22:40.080
over Varlink, or whether that's a problem. So, yes, you can,

22:40.080 --> 22:44.080
in two ways. Like, if you have small data, like, I don't know, a hash value

22:44.080 --> 22:48.080
or something, you just format it as a string, generally, right? Or you

22:48.080 --> 22:50.080
encode it in Base64, like if it's, I don't know, a public key or a

22:50.080 --> 22:54.080
certificate, whether it's not going to be like gigabytes of data

22:54.080 --> 22:56.080
than you do this. But, uh, uh,

22:56.080 --> 23:00.080
harling inherently has a mode also where you can switch

23:00.080 --> 23:04.080
protocols. Um, so the idea is basically there's a very, very clearly defined

23:04.080 --> 23:06.080
way way how you can say. Initially, we're going to be

23:06.080 --> 23:10.080
balling to set everything up and then we do a mode switch

23:10.080 --> 23:14.080
to something else. Um, so the idea is basically, you can't even transfer

23:14.080 --> 23:18.080
like gigabytes of, I don't know, disc images via volume in a very nice way

23:18.080 --> 23:22.080
because you'd first do balling to the JSON thing. You set the marker

23:22.080 --> 23:26.080
that after this message, that's going to be a binary data

23:26.080 --> 23:30.080
and then the, this would happen, they just continue writing a binary data.

23:30.080 --> 23:34.080
So, uh, that's really nice, actually. It's one of the

23:34.080 --> 23:38.080
types features of balling that you can do something like this. Because it means you just

23:38.080 --> 23:42.080
do one connection. You do not need some extra side channels with this. Um,

23:42.080 --> 23:46.080
and, uh, I mean, to say this differently, you know, debuts sending binary

23:46.080 --> 23:50.080
data is like because of the, the, the streaming issue, like the,

23:50.080 --> 23:54.080
the flow control issue, you, you cannot do this at all, right? Like you

23:54.080 --> 23:58.080
force you into the side, uh, channel. This one's so much nicer. Um,

23:58.080 --> 24:02.080
If there are no questions at this point, I'm going to talk a little bit about

24:02.080 --> 24:06.080
the benefits. One of the things that's really nice about Varlink is that

24:06.080 --> 24:10.080
connections are not just connections where you can send data

24:10.080 --> 24:14.080
over, but they conceptually also entail a

24:14.080 --> 24:18.080
pinning of an object on the other side, right? Like, so, you do your

24:18.080 --> 24:24.080
method call and, conceptually, the idea is that as long as you keep that

24:24.080 --> 24:28.080
connection open, the object on the other side is pinned. So, with my

24:28.080 --> 24:32.080
Avahi example earlier, where we have this browser object created in

24:32.080 --> 24:36.080
Avahi and we need to have the signals and things like this — the

24:36.080 --> 24:40.080
way you would express that in Varlink is:

24:40.080 --> 24:44.080
you create the connection to the other side. This creates the browser

24:44.080 --> 24:48.080
and the notification chain and pins the browser's existence on the

24:48.080 --> 24:52.080
other side. And as long as you keep that connection open, it stays that way,

24:52.080 --> 24:56.080
and once you drop the connection, everything goes away: the notifications, the

24:56.080 --> 25:00.080
existence of the browser object on the other side, and, yeah,

25:00.080 --> 25:06.080
the communication channel itself.

25:06.080 --> 25:10.080
I think this is one of the best things ever: there is no

25:10.080 --> 25:14.080
multiplexing in Varlink, right? You never use a single

25:14.080 --> 25:18.080
Varlink connection to multiplex connections to multiple different

25:18.080 --> 25:22.080
services. Instead, if you have 25 services you talk to, you create

25:22.080 --> 25:26.080
25 different connections. And each one of them has

25:26.080 --> 25:30.080
local ordering, meaning that a message you send on the connection

25:30.080 --> 25:34.080
is always going to stay after the one before it and before the one

25:34.080 --> 25:38.080
behind it. But there's no global ordering. This is, by the way,

25:38.080 --> 25:42.080
another D-Bus thing: in D-Bus there's global ordering, which has benefits,

25:42.080 --> 25:46.080
but it also makes things very, very inefficient, because, like,

25:46.080 --> 25:50.080
global ordering basically means that any message received by the broker

25:50.080 --> 25:56.080
can never be ordered after any other message received later in the

25:56.080 --> 26:00.080
entire broker. And that's so painful, because it means that you can never

26:00.080 --> 26:04.080
have threads that process things,

26:04.080 --> 26:08.080
because that would mean out-of-order execution. That doesn't work in D-Bus. But in Varlink,

26:08.080 --> 26:12.080
everything is easy, right? Like, you can have a thread for every

26:12.080 --> 26:16.080
connection you have. And the focus really is in having,

26:16.080 --> 26:20.080
yeah, just local ordering on the individual connection, but having many connections.

26:20.080 --> 26:24.080
And then you get the best of both worlds. You get ordered behavior, because that's

26:24.080 --> 26:28.080
sometimes what you want, but you also don't force the whole world into ordered

26:28.080 --> 26:34.080
behavior, because that would be terrible. And it's request-response based,

26:34.080 --> 26:38.080
right? Like, you create a connection, you send a

26:38.080 --> 26:42.080
request, and you get a reply back. That is, conceptually,

26:42.080 --> 26:46.080
very, very compatible with the HTTP world. So there was this question

26:46.080 --> 26:50.080
earlier regarding D-Bus and the HTTP world. This fact makes it so nice,

26:50.080 --> 26:54.080
because it's not HTTP, but it is,

26:54.080 --> 26:58.080
semantically, very, very close to it. And that makes it very,

26:58.080 --> 27:04.080
very easy to have a proxy that exposes Varlink APIs as true HTTP,

27:04.080 --> 27:10.080
and it will actually feel natural and good in a way.

27:10.080 --> 27:16.080
Also — like, one of the biggest benefits is the simplicity of it, right? Like,

27:16.080 --> 27:22.080
it's just JSON over a socket. I can explain this to everybody;

27:22.080 --> 27:26.080
explaining D-Bus to people is so painful — so, so much more complex.

27:26.080 --> 27:32.080
And in particular the web-native folks — and I guess this is what dominates IT these days —

27:32.080 --> 27:36.080
know the underlying concepts and can very quickly get on the

27:36.080 --> 27:42.080
Varlink train if they already know JSON and these kinds of things.

27:42.080 --> 27:46.080
Security — any questions at this point?

27:46.080 --> 27:50.080
The security model of Varlink is a completely different one.

27:50.080 --> 27:56.080
It's that there is no security model of its own. We rely on the kernel

27:56.080 --> 28:00.080
to do access control for us, primarily. Which means, like,

28:00.080 --> 28:05.080
because every Varlink service is inherently an operating system object, right — like,

28:05.080 --> 28:11.080
it binds a socket in the file system — it also means that the access control

28:11.080 --> 28:17.080
on that entry-point socket in the file system is managed by the kernel. So you can do,

28:17.080 --> 28:21.080
like, classic UNIX access modes and these kinds of things, but you can also do all that

28:21.080 --> 28:26.080
MAC stuff, like SELinux and whatever else you want, on this. It can be

28:26.080 --> 28:30.080
audited, blah, blah, blah — all the infrastructure that the kernel has —

28:30.080 --> 28:36.080
because we no longer have the concept of a service as a user-space concept

28:36.080 --> 28:39.080
that is independent of what the kernel manages; it's inherently,

28:39.080 --> 28:43.080
it is just exposed as a kernel object.

28:43.080 --> 28:47.080
Of course, this doesn't always work, because, like, it's not sufficient

28:47.080 --> 28:51.080
to just rely on that, because kernel access control stuff is generally

28:51.080 --> 28:55.080
non-interactive, right? Like, there's no way the user can

28:55.080 --> 28:59.080
be asked if something's okay, and we probably want that, right?

28:59.080 --> 29:04.080
Like, so that for privileged operations, there's a way a user can be prompted:

29:04.080 --> 29:07.080
do you really want to format your hard disk, and things like this?

29:07.080 --> 29:09.080
And that's something that polkit does.

29:09.080 --> 29:12.080
So, yeah, the security focus is a different one.

29:12.080 --> 29:15.080
We just don't do anything on our own. We just put the focus much more strongly

29:15.080 --> 29:20.080
on relying on the two other subsystems that do this much better,

29:20.080 --> 29:25.080
which is the kernel stuff, and polkit.

29:25.080 --> 29:31.080
Then there's also — what's really nice is that there's a delegation model, right?

29:31.080 --> 29:35.080
Because every service in Varlink just has its own

29:35.080 --> 29:41.080
unique socket in the file system, we can use this for delegating access.

29:41.080 --> 29:45.080
So we can mount them through into sandboxes and containers —

29:45.080 --> 29:47.080
every service individually — because, yeah,

29:47.080 --> 29:51.080
like — I'm not sure if you guys all know this, but on Linux you can,

29:51.080 --> 29:55.080
like, bind-mount individual files somewhere else.

29:55.080 --> 29:59.080
So you can have the host service that binds to an AF_UNIX

29:59.080 --> 30:02.080
socket somewhere in /run, and then bind-mount that into your sandbox,

30:02.080 --> 30:06.080
and then that sandbox can also access it. Yeah.

30:06.080 --> 30:11.080
And then there are these concepts called SO_PEERCRED and

30:11.080 --> 30:14.080
so on, to do, first,

30:14.080 --> 30:16.080
like, identification of who you're talking to.

30:16.080 --> 30:18.080
I mean, D-Bus relies on this too,

30:18.080 --> 30:22.080
but for us, it's just directly how you do things, right?

30:22.080 --> 30:26.080
Like, it's a way you can identify the other side of a socket

30:26.080 --> 30:31.080
in a trusted way, which is one of the benefits of using AF_UNIX —

30:31.080 --> 30:34.080
that you can identify things like this.
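As a small illustration of the peer identification mentioned above, this sketch queries SO_PEERCRED on a connected AF_UNIX socket pair; the credentials come from the kernel, so the peer cannot forge them. (A real Varlink client would connect() to the service's socket in the file system instead of using socketpair.)

```python
import os
import socket
import struct

# Connected AF_UNIX pair standing in for a client/service connection.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# SO_PEERCRED returns the peer's pid, uid and gid as three native ints,
# filled in by the kernel at connect time (Linux-specific).
creds = b.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                     struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)

a.close()
b.close()
```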

30:34.080 --> 30:39.080
And something that's also awesome is that there are no synthetic identifiers for objects.

30:39.080 --> 30:42.080
Like, in D-Bus there was always this object path concept,

30:42.080 --> 30:45.080
which is, like, synthetic, meaning that

30:45.080 --> 30:52.080
the services behind it and the clients never knew this kind of identification for an object —

30:52.080 --> 30:56.080
you would have to make one up if you wanted to communicate via D-Bus.

30:56.080 --> 30:59.080
Like, for example, in systemd, you know,

30:59.080 --> 31:03.080
the primary object that systemd manages is probably the unit, right?

31:03.080 --> 31:08.080
And it has a name, right — like, httpd.service or something like this.

31:08.080 --> 31:11.080
When talking about this via D-Bus,

31:11.080 --> 31:14.080
you would always have to turn this into a D-Bus object path.

31:14.080 --> 31:19.080
So it becomes /org/freedesktop/systemd1/unit/httpd,

31:19.080 --> 31:22.080
and then the escaped ".service" suffix.
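The escaping being complained about here can be sketched roughly like this — a simplified version of the kind of hex escaping systemd applies to unit names for D-Bus object paths (the real implementation handles more corner cases, e.g. empty strings and leading digits):

```python
def escape_dbus_path_label(name: str) -> str:
    """Escape a unit name for use in a D-Bus object path: every character
    outside [A-Za-z0-9] becomes '_' followed by two lowercase hex digits."""
    out = []
    for ch in name:
        if ch.isascii() and ch.isalnum():
            out.append(ch)
        else:
            out.append("_%02x" % ord(ch))
    return "".join(out)

# "httpd.service" -> the "." becomes "_2e"
path = "/org/freedesktop/systemd1/unit/" + escape_dbus_path_label("httpd.service")
```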

31:22.080 --> 31:25.080
So this is painful, right?

31:25.080 --> 31:28.080
Like, because every time you pass something over D-Bus,

31:28.080 --> 31:30.080
and want to talk about it in D-Bus and have an object on D-Bus,

31:30.080 --> 31:34.080
you have to do this conversion — for some objects it's just terrible.

31:34.080 --> 31:37.080
Varlink is never going to do this; it just assumes that, yeah,

31:37.080 --> 31:39.080
every system has its native identifiers.

31:39.080 --> 31:43.080
And whether that's a file descriptor, or a path in the file system,

31:43.080 --> 31:46.080
or a domain name, or a unit name —

31:46.080 --> 31:47.080
doesn't really matter.

31:47.080 --> 31:49.080
It's that thing that should be exchanged over the wire,

31:49.080 --> 31:54.080
and not something made up that everybody on both sides has to convert forth and back to.

31:54.080 --> 31:57.080
And there's so much more.

31:57.080 --> 32:01.080
Like, methods can have multiple replies, which is — yeah, there's a question.

32:02.080 --> 32:04.080
So, you don't have an object.

32:04.080 --> 32:06.080
So, without that object identifier,

32:06.080 --> 32:08.080
how could you do service discovery

32:08.080 --> 32:11.080
and find out what APIs exist?

32:11.080 --> 32:15.080
So, the question was regarding

32:15.080 --> 32:19.080
how we do service discovery if we do not have an object identifier.

32:19.080 --> 32:25.080
So, service discovery to a large degree is a debugging concept, right?

32:25.080 --> 32:27.080
Or an admin concept, right?

32:27.080 --> 32:29.080
You want to see what's actually available.

32:30.080 --> 32:34.080
So — and, I mean, it's a really good question —

32:34.080 --> 32:38.080
the thing is, every

32:38.080 --> 32:41.080
Varlink service that we have is exposed as a

32:41.080 --> 32:43.080
socket in the file system, right?

32:43.080 --> 32:47.080
So, you could just go to /proc/net/unix, right,

32:47.080 --> 32:49.080
and have a list of the sockets there,

32:49.080 --> 32:52.080
and it's going to cover all the Varlink stuff, but also more.

32:52.080 --> 32:57.080
We're working on getting a kernel patch merged, so that we can tag the sockets nicely

32:57.080 --> 32:58.080
as Varlink sockets.

32:58.080 --> 33:02.080
That would be wonderful, because then we do not need any further

33:02.080 --> 33:05.080
service enumeration — we can just ask the kernel:

33:05.080 --> 33:09.080
give me all AF_UNIX sockets in the file system that have this

33:09.080 --> 33:12.080
extended attribute marker saying this is Varlink,

33:12.080 --> 33:15.080
and then that's the list of services, right?

33:15.080 --> 33:17.080
But this is work in progress, right?

33:17.080 --> 33:22.080
Right now you're supposed to know which socket you have to talk to, right?

33:22.080 --> 33:24.080
But, you know, like, for systemd,

33:24.080 --> 33:27.080
basically, we put all the sockets for the systemd

33:27.080 --> 33:31.080
services in the same directory in /run, so you can just do an ls there,

33:31.080 --> 33:36.080
and we use, like, reverse domain name notation, so it's — yeah.

33:36.080 --> 33:39.080
But it's not, like, systematic.

33:39.080 --> 33:43.080
I mean, the Varlink specification actually knows a concept of service discovery,

33:43.080 --> 33:46.080
but we never implemented that, because we don't need it, right?

33:46.080 --> 33:51.080
Like, because as long as you only talk over AF_UNIX sockets,

33:51.080 --> 33:56.080
the file system hierarchy, in a way, is your service discovery.

33:56.080 --> 33:59.080
So we didn't need this.

33:59.080 --> 34:01.080
So, is that well-known paths, then?

34:01.080 --> 34:02.080
Yeah.

34:02.080 --> 34:06.080
He asked if it's about well-known paths — and yes.

34:06.080 --> 34:09.080
Have you thought about implementing a D-Bus library

34:09.080 --> 34:13.080
that's actually a shim, or are the differences between the two systems too large?

34:13.080 --> 34:17.080
So the question was regarding whether we thought about adding a

34:17.080 --> 34:22.080
library that is a shim around — like, which way did you want to have it?

34:22.080 --> 34:24.080
Like, Varlink over D-Bus, or D-Bus over Varlink?

34:24.080 --> 34:28.080
I think neither of them, really. It makes me nervous, because the semantic difference, right,

34:28.080 --> 34:32.080
is too big. Like, D-Bus is really, really complex.

34:32.080 --> 34:34.080
Varlink is very, very simple.

34:34.080 --> 34:38.080
And the impedance mismatch is just too massive.

34:38.080 --> 34:39.080
I don't think —

34:42.080 --> 34:46.080
I don't think that's realistic. Like, I mean, the way I think this should continue, right,

34:46.080 --> 34:51.080
is that these two worlds live side by side, and we slowly move things over.

34:51.080 --> 34:57.080
And — if you have a D-Bus interface and that's a good one, just keep it.

34:57.080 --> 35:03.080
But I know that for our stuff, we can just move the focus over to Varlink.

35:03.080 --> 35:09.080
One thing is that, like, Varlink inherently knows a concept

35:09.080 --> 35:11.080
where you have method calls that receive multiple replies.

35:11.080 --> 35:16.080
This is useful for enumeration and for subscription,

35:16.080 --> 35:19.080
where you basically say — you have a method call, subscribe,

35:19.080 --> 35:22.080
and you do this, and then you get one reply that says, okay, the subscription is in effect.

35:22.080 --> 35:26.080
Then you get multiple further replies, every time something changes.

35:26.080 --> 35:32.080
So it's good for enumeration and such — like, you know, enumeration basically means you get all the replies, very quickly,

35:32.080 --> 35:35.080
one after another; subscription means you have a long-running connection.

35:35.080 --> 35:36.080
And it's flexible like this.
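The multi-reply flow described above can be sketched as message framing, without a real socket. This assumes the Varlink convention of a `"more": true` flag on the call and `"continues": true` on every reply except the last; the interface name `org.example.monitor.Subscribe` is made up.

```python
import json

NUL = b"\0"

def frame(obj: dict) -> bytes:
    """One NUL-terminated JSON message, Varlink style."""
    return json.dumps(obj).encode() + NUL

# Client asks for multiple replies with "more": true.
call = frame({"method": "org.example.monitor.Subscribe", "more": True})

# Server sends a stream of replies; all but the last carry "continues": true.
server_replies = (
    frame({"parameters": {"event": "subscribed"}, "continues": True}) +
    frame({"parameters": {"event": "changed"}, "continues": True}) +
    frame({"parameters": {"event": "done"}})  # no "continues": stream ends
)

# Client side: split the byte stream on NUL, read until "continues" stops.
events = []
for chunk in server_replies.split(NUL):
    if not chunk:
        continue
    reply = json.loads(chunk)
    events.append(reply["parameters"]["event"])
    if not reply.get("continues", False):
        break
```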

35:36.080 --> 35:40.080
It's also really, really compatible with per-connection processes.

35:40.080 --> 35:48.080
This is — I always say that this is one of the absolute killer features of Varlink.

35:48.080 --> 35:55.080
Because, you know, traditionally, if you want to write a D-Bus daemon, you're forced into basically using an event loop.

35:55.080 --> 35:56.080
There is no other way.

35:56.080 --> 36:00.080
Because everything is multiplexed over one connection,

36:00.080 --> 36:04.080
you will — like, the instant you connect to D-Bus,

36:04.080 --> 36:08.080
you must be ready to process multiple messages in parallel.

36:08.080 --> 36:12.080
Otherwise — I mean, you could avoid it, but then you're not really a well-behaving D-Bus server.

36:12.080 --> 36:15.080
So, yeah.

36:15.080 --> 36:21.080
And because it's all coming in from one connection, with the global ordering and things like this, you cannot use threads.

36:21.080 --> 36:25.080
So the only thing that you can really do is an event loop.

36:25.080 --> 36:26.080
This is very limiting, right?

36:26.080 --> 36:30.080
Like, it means things are slow, and it means they're painfully complex.

36:30.080 --> 36:35.080
Because if you do an event loop — at least if you do it in C — then, yeah, like,

36:35.080 --> 36:42.080
there was a talk about this yesterday, about how painful it is to have these complex systems

36:42.080 --> 36:49.080
where you have to basically pass the context data of what you're doing from one event loop iteration to the next one,

36:49.080 --> 36:50.080
because — yeah.

36:50.080 --> 36:51.080
It's very, very painful.

36:51.080 --> 36:53.080
Now, with Varlink, you don't need this.

36:53.080 --> 36:56.080
Because in Varlink, every connection is separate from the others.

36:56.080 --> 37:01.080
So what you can do, actually, is —

37:02.080 --> 37:06.080
first of all, you use socket activation in systemd, meaning that systemd binds a socket for you.

37:06.080 --> 37:12.080
And then every time a connection comes in, we actually fork off a new instance of your program to

37:12.080 --> 37:14.080
handle that specific connection.

37:14.080 --> 37:17.080
And this makes things so much easier, right?

37:17.080 --> 37:22.080
Like, because the individual binary can be single-threaded; it can

37:22.080 --> 37:26.080
deal with one connection at a time, process that, and then exit.

37:26.080 --> 37:30.080
And then the next connection will get a separate instance, and so on and so on.
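The per-connection activation pattern described above corresponds to systemd's `Accept=yes` socket activation. The following is a sketch with made-up unit and binary names (`org.example.frobnicate`, `/usr/bin/frobnicate`), not a real systemd service:

```ini
# org.example.frobnicate.socket  (hypothetical name)
[Socket]
ListenStream=/run/org.example.frobnicate
# Accept=yes: systemd accepts each connection itself and spawns one
# service instance per connection, so the tool can stay single-threaded.
Accept=yes

[Install]
WantedBy=sockets.target

# org.example.frobnicate@.service  (template unit, one instance per connection)
[Service]
ExecStart=/usr/bin/frobnicate --varlink
# Hand the accepted connection to the process on stdin/stdout.
StandardInput=socket
```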

37:30.080 --> 37:34.080
This, in a way, is a lot like how UNIX tools are traditionally written, right?

37:34.080 --> 37:37.080
Like, regardless if you use, I don't know, ls,

37:37.080 --> 37:40.080
or if you use, I don't know, find or other utilities —

37:40.080 --> 37:41.080
all these little UNIX tools —

37:41.080 --> 37:43.080
they always work the same way.

37:43.080 --> 37:47.080
Like, you invoke them, pass in parameters on the command line,

37:47.080 --> 37:52.080
you possibly provide some input via stdin and get output via stdout.

37:52.080 --> 37:55.080
Nowadays these tools even often speak JSON natively, right?

37:55.080 --> 37:59.080
Like, in util-linux, most of them can output their stuff as JSON, and

37:59.080 --> 38:02.080
many of them can even read JSON as input.

38:02.080 --> 38:06.080
So the only thing is that the parameters to the calls are still on the

38:06.080 --> 38:09.080
command line.

38:09.080 --> 38:16.080
But now you can basically take such a command line tool that already speaks JSON one step further and say:

38:16.080 --> 38:24.080
OK, now I bind this to a socket, and teach it the special functionality that it can also read its parameters directly from the JSON —

38:24.080 --> 38:30.080
the Varlink method call stuff that comes before the payload data.

38:30.080 --> 38:32.080
And then you have a Varlink service, right?

38:32.080 --> 38:39.080
Like, so to me, in a way, this is like taking a classic UNIX tool — already one of the good ones that does JSON for everything —

38:39.080 --> 38:41.080
and doing one more thing, which is, like, this:

38:41.080 --> 38:47.080
yeah, instead of using the process command line to provide parameters, you move this into the

38:47.080 --> 38:49.080
stdin stuff as well.

38:49.080 --> 38:53.080
And it's already a Varlink service.

38:53.080 --> 38:59.080
So, yeah — the fact that it can do things like this and spawn off multiple instances,

38:59.080 --> 39:02.080
each dealing with one connection coming in —

39:02.080 --> 39:04.080
this is a game changer, right?

39:04.080 --> 39:10.080
Like, because this, in systemd in particular, led us to an explosion of Varlink services.

39:10.080 --> 39:12.080
Right? Like, I think we are at 13 —

39:12.080 --> 39:16.080
I forget the number of D-Bus services right now that systemd has,

39:16.080 --> 39:20.080
but we are already at 30 or something Varlink services.

39:20.080 --> 39:25.080
And the reason for this is — like, consider a little tool in systemd like bootctl, right?

39:25.080 --> 39:28.080
Like, bootctl is a tool that — I mean, it can do various things,

39:28.080 --> 39:33.080
but the primary thing is it can install a boot loader into the ESP, like sd-boot into the ESP.

39:33.080 --> 39:38.080
It's a command that you have to execute if you want to install sd-boot,

39:38.080 --> 39:42.080
but also, it's not really called that often, right?

39:42.080 --> 39:45.080
Like, it's called during installation, or if you hack around,

39:45.080 --> 39:48.080
maybe you call it a few more times, but that's kind of it.

39:48.080 --> 39:55.080
And, like, it's also something you inherently want to have as an IPC service,

39:55.080 --> 39:59.080
because if you write an installer, it's one of the things that the installer has to do while

39:59.080 --> 40:01.080
installing — something like this, right?

40:01.080 --> 40:06.080
But the idea of turning that into a D-Bus daemon is just revolting, right?

40:06.080 --> 40:11.080
Like, because it's this little tool that ultimately does nothing more than copy a couple of files somewhere.

40:11.080 --> 40:16.080
And if you want to do it in D-Bus, you would have to think about, like, how do I do this copying now

40:16.080 --> 40:20.080
inside of an event loop, and deal with all that terribleness?

40:20.080 --> 40:26.080
And then — for what, even? Because it's not that there are going to be, like, 25

40:26.080 --> 40:31.080
requests coming in in parallel to install a boot loader; in almost every possible

40:31.080 --> 40:36.080
case it's going to be one invocation during the lifetime of the system, right?

40:36.080 --> 40:41.080
So, in Varlink, all these things become super, super simple.

40:41.080 --> 40:45.080
It's already this tool — a command line tool you invoke once.

40:45.080 --> 40:51.080
Now you make it read its parameters from the Varlink method call parameters

40:51.080 --> 40:57.080
that come in from stdin, and write out an "I've done my work, OK" reply when it's done.

40:57.080 --> 41:01.080
And then you bind it to a socket, and there you go — it's a Varlink service.

41:01.080 --> 41:05.080
So, this is a game changer, because all these little tools, these UNIX-style tools

41:05.080 --> 41:12.080
that do one thing, are single-threaded, and deal with stdin, stdout, and the command line, are so easily converted

41:12.080 --> 41:18.080
into a Varlink thing. So, yeah, I think this is a killer thing, right? Like, because it makes it easy —

41:18.080 --> 41:23.080
easy to expose system-level tools as services.

41:23.080 --> 41:24.080
There's a question.

41:24.080 --> 41:27.080
You said — you explained why you wanted IPC in that example.

41:27.080 --> 41:30.080
Why wouldn't you just use UNIX fork and exec, as usual?

41:30.080 --> 41:39.080
So, the question was regarding why UNIX fork and exec couldn't be enough as a way to call

41:39.080 --> 41:45.080
these tools, instead of doing IPC. Like, doing IPC — first of all, it's very useful

41:45.080 --> 41:51.080
that you get structured data, right? Like, you do not have to care so much about

41:51.080 --> 41:55.080
escaping, do not have to care about error handling, like figuring out what the actual error is.

41:55.080 --> 42:01.080
Also, the isolation, right? Like, the execution context of that tool is separate from

42:01.080 --> 42:06.080
the client that makes the invocation. So, you can use it as a security isolation, where one

42:06.080 --> 42:09.080
is unprivileged, the other one is privileged, and things like this, right?

42:09.080 --> 42:15.080
So, one is coming from user context, one from system context.

42:15.080 --> 42:20.080
Like, for example, an installer tool most likely is a UI tool, right? Like, a graphical tool,

42:20.080 --> 42:25.080
which you'd ideally want running as an unprivileged piece of code.

42:25.080 --> 42:30.080
But bootctl install inherently needs high privileges, right? Like, so you can use the IPC

42:30.080 --> 42:35.080
as a security boundary, where you have a limited interface that does exactly what

42:35.080 --> 42:38.080
shall be allowed; with fork and exec, you can't do this. But actually,

42:38.080 --> 42:42.080
having said that — one of the other killer features of

42:42.080 --> 42:47.080
Varlink — and I'm sure I put this on the slides here — is, because it's so similar,

42:47.080 --> 42:51.080
there are actually two ways you can call such a Varlink-enabled tool.

42:51.080 --> 42:55.080
You can either bind it to a socket, talk to the socket, and everything's good.

42:55.080 --> 42:59.080
But you can also just fork it off and use a socketpair for communication,

42:59.080 --> 43:02.080
and still speak Varlink to it. And we use that, actually — like,

43:02.080 --> 43:06.080
I've recently added this, like, mini installer thing, just a

43:06.080 --> 43:10.080
command-line, text-based one, to systemd, and it's just a wrapper around bootctl install,

43:10.080 --> 43:14.080
kernel-install, and so on, right?

43:14.080 --> 43:17.080
And the way it communicates, actually, is it does fork things off,

43:17.080 --> 43:21.080
because that was easy for me to develop, but at the same time, it's prepared to be able

43:21.080 --> 43:24.080
to just not fork it off — have this stuff unprivileged,

43:24.080 --> 43:28.080
have the backends privileged, and use IPC as the isolation

43:28.080 --> 43:32.080
point, right? So, that is just awesome, right?

43:32.080 --> 43:35.080
Like, because you can have both modes, right? The simple,

43:35.080 --> 43:39.080
offline mode, where you just fork something off yourself, where you don't care about

43:39.080 --> 43:42.080
isolation, and the other one, where you do care about isolation —

43:42.080 --> 43:46.080
you just bind it to a socket. So, it's also killer,

43:46.080 --> 43:49.080
but it's already an advanced killer feature for the people who

43:49.080 --> 43:53.080
made the deep dive into adopting this kind of stuff.
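The "fork it off and speak over a socketpair" mode can be sketched like this — a minimal, self-contained illustration where the child process plays the role of the forked-off tool; the method name `org.example.frob.Do` and its parameters are made up.

```python
import json
import os
import socket

NUL = b"\0"

# "Offline" mode: instead of connecting to a bound socket, fork the tool
# off ourselves and speak NUL-terminated JSON over a socketpair.
parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

pid = os.fork()
if pid == 0:  # child: stands in for the forked-off Varlink-enabled tool
    parent_sock.close()
    raw = b""
    while NUL not in raw:
        raw += child_sock.recv(4096)
    call = json.loads(raw.split(NUL, 1)[0])
    reply = {"parameters": {"echoed": call["parameters"]["value"]}}
    child_sock.sendall(json.dumps(reply).encode() + NUL)
    child_sock.close()
    os._exit(0)

# parent: plays the role of the client
child_sock.close()
call = {"method": "org.example.frob.Do", "parameters": {"value": 42}}
parent_sock.sendall(json.dumps(call).encode() + NUL)
raw = b""
while NUL not in raw:
    raw += parent_sock.recv(4096)
reply = json.loads(raw.split(NUL, 1)[0])
parent_sock.close()
os.waitpid(pid, 0)
```

The same client code would work unchanged against a bound socket; only the connection setup differs between the two modes.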

43:53.080 --> 43:55.080
That question?

43:55.080 --> 43:58.080
Another problem with D-Bus today, I think, is that it gets tricky

43:58.080 --> 44:01.080
when you try to make services shut down.

44:01.080 --> 44:03.080
You have to do this in a very particular order.

44:03.080 --> 44:05.080
You have to notify things in advance

44:05.080 --> 44:07.080
that you are about to shut down,

44:07.080 --> 44:10.080
and otherwise, you might get some

44:10.080 --> 44:12.080
error messages on the screen.

44:12.080 --> 44:14.080
Is Varlink better about this?

44:14.080 --> 44:16.080
So, the question was regarding —

44:16.080 --> 44:19.080
like, in D-Bus, shutting down properly

44:19.080 --> 44:22.080
is kind of hard, because, yeah, you need to tell systemd

44:22.080 --> 44:24.080
that you're going to go down, because otherwise,

44:24.080 --> 44:26.080
it might reactivate you and things like this.

44:26.080 --> 44:28.080
Yeah — yes.

44:28.080 --> 44:30.080
So, the answer is, this will get better as well,

44:30.080 --> 44:32.080
because, you know, the individual connections

44:32.080 --> 44:34.080
are entirely separate, right? Like, so then,

44:34.080 --> 44:38.080
if the client drops the connection, this will immediately be visible

44:38.080 --> 44:41.080
on the other side, independently of any other scheduling

44:41.080 --> 44:43.080
that happens on the system — independently, systemd

44:43.080 --> 44:45.080
sees the dropped connection, right?

44:45.080 --> 44:48.080
And then you can react to it.

44:48.080 --> 44:50.080
So socket activation makes a lot of things easier,

44:50.080 --> 44:54.080
because you inherently always have kernel concepts

44:54.080 --> 44:57.080
that are direct between client and service,

44:57.080 --> 45:03.080
that you can tie your life cycle to. And, yeah.

45:03.080 --> 45:11.080
Um, yeah, so they were, uh,

45:11.080 --> 45:14.080
question was regarding, um, uh, uh,

45:14.080 --> 45:17.080
deep bargaining, seeing I have that somewhere, um,

45:17.080 --> 45:19.080
like, the debugging slide, let's see.

45:19.080 --> 45:21.080
I mean, I have so many more slides,

45:21.080 --> 45:23.080
and I don't intend to go through all of them,

45:23.080 --> 45:26.080
because there's so much good stuff in there,

45:26.080 --> 45:28.080
but the debugging thing I have somewhere in these slides,

45:28.080 --> 45:31.080
and debugging Varlink is really, really nice,

45:31.080 --> 45:33.080
because, uh, you don't need any special tools for it,

45:33.080 --> 45:36.080
because it's JSON, you can just strace it

45:36.080 --> 45:39.080
and figure out what the fuck's going on.

45:39.080 --> 45:41.080
You know, I have, in my life,

45:41.080 --> 45:43.080
looked at so many traces from systemd

45:43.080 --> 45:45.080
sending D-Bus messages,

45:45.080 --> 45:49.080
and it's just, like, impossible to digest, right?

45:49.080 --> 45:52.080
Like, people write multiple tools

45:52.080 --> 45:55.080
that only exist to deal with

45:55.080 --> 45:58.080
debugging D-Bus and tracing D-Bus messages,

45:58.080 --> 46:02.080
and even dealing with problems that only D-Bus creates for itself,

46:02.080 --> 46:06.080
like all this latency and context-switching stuff,

46:06.080 --> 46:10.080
and those tools actually exist because, you know —

46:10.080 --> 46:13.080
with Varlink you don't need them, you just do the strace thing,

46:13.080 --> 46:18.080
and it's literally all there, and you can read it without any further tools,

46:18.080 --> 46:20.080
so that is, that is just

46:20.080 --> 46:23.080
another one of these killer features.

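As a sketch of why strace output is readable here (my own illustration; the method name is invented): a Varlink message on the wire is just a JSON object followed by a NUL byte, so what strace prints on the socket is plain JSON.

```python
import json

def encode(msg: dict) -> bytes:
    # One message = one JSON object, terminated by a single NUL byte.
    return json.dumps(msg).encode() + b"\0"

def decode(stream: bytes) -> list[dict]:
    # Split a raw byte stream back into its NUL-terminated JSON messages.
    return [json.loads(part) for part in stream.split(b"\0") if part]

wire = encode({"method": "io.example.GetStatus", "parameters": {}})
# `wire` is exactly what shows up in strace's write()/read() lines.
messages = decode(wire)
```

No dedicated bus monitor is needed: any tool that can show you the bytes on the socket shows you the protocol.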
46:23.080 --> 46:26.080
Um, any other questions?

46:27.080 --> 46:30.080
Yeah, these are, like, the two very natural models — I mentioned this,

46:30.080 --> 46:32.080
and I mentioned the — yeah, I think

46:32.080 --> 46:34.080
another thing is, like, when you use Varlink,

46:34.080 --> 46:37.080
it's compatible with thread-pool handling and process-pool handling;

46:37.080 --> 46:40.080
the difference is basically just whether each connection gets a thread

46:40.080 --> 46:42.080
or, uh, a separate process.

46:42.080 --> 46:44.080
Um, this is also kind of nice, right?

46:44.080 --> 46:47.080
Like, because what I was just telling you about processes,

46:47.080 --> 46:49.080
it also means that you can use them,

46:49.080 --> 46:51.080
because processes are the security isolation concept, right?

46:51.080 --> 46:53.080
Right? Like, so you can basically have it

46:53.080 --> 46:56.080
so that if a client calls into a daemon, um,

46:56.080 --> 46:58.080
a Varlink service, then, uh,

46:58.080 --> 47:02.080
it can be handled by a process that is

47:02.080 --> 47:05.080
memory-separated and UID-separated and everything,

47:05.080 --> 47:08.080
from all the other ones that run there.

47:08.080 --> 47:10.080
Um, so that is like, uh,

47:10.080 --> 47:12.080
that's, like, really, really good, right?

47:12.080 --> 47:14.080
Like, because typically, um,

47:14.080 --> 47:18.080
system services will do operations for unprivileged clients,

47:18.080 --> 47:22.080
and, uh, if there are multiple of those,

47:22.080 --> 47:24.080
they will come with different privileges. So, um,

47:24.080 --> 47:27.080
isolating the handling of them is kind of valuable.

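The process-pool model he describes can be sketched roughly like this (my own simplification, not systemd code): each request is handled in a forked child, so handlers are separated by ordinary process boundaries. A real privileged service would additionally drop privileges in the child, e.g. via setuid/setgid, which is omitted here.

```python
import json
import os
import socket

def handle(conn: socket.socket) -> None:
    # Runs in the child: parse one NUL-terminated JSON request, answer it.
    req = json.loads(conn.recv(4096).rstrip(b"\0"))
    reply = {"parameters": {"echo": req["parameters"]["msg"]}}
    conn.sendall(json.dumps(reply).encode() + b"\0")

def serve_one(conn: socket.socket) -> None:
    pid = os.fork()
    if pid == 0:          # child: memory-separated from every other handler
        handle(conn)
        os._exit(0)
    os.waitpid(pid, 0)    # parent only reaps; it shares no request state

service, client = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
client.sendall(b'{"parameters": {"msg": "hi"}}\0')
serve_one(service)
reply_bytes = client.recv(4096)
```

Because each client's request runs in its own process, a crash or compromise while handling one client cannot touch the memory of another client's handler.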
47:27.080 --> 47:29.080
Anyway, my time is mostly over. Um,

47:29.080 --> 47:32.080
I still have, like,

47:32.080 --> 47:34.080
more slides left,

47:34.080 --> 47:37.080
but that's fine.

47:37.080 --> 47:40.080
Let's just say: use

47:40.080 --> 47:43.080
Varlink. Since, uh, v257,

47:43.080 --> 47:46.080
we have a, uh, library in systemd — use that one

47:46.080 --> 47:48.080
if you do C; if you do not do C,

47:48.080 --> 47:51.080
don't wrap it — use something native.

47:52.080 --> 47:54.080
Um, it's not very well documented,

47:54.080 --> 47:57.080
but, um,

47:57.080 --> 47:59.080
there are so many examples, because

47:59.080 --> 48:01.080
the systemd tree is basically full of it.

48:01.080 --> 48:03.080
There's a command line tool, varlinkctl.

48:03.080 --> 48:05.080
Yeah. Um, there are currently

48:05.080 --> 48:07.080
some things that are, uh,

48:07.080 --> 48:09.080
omitted in our implementation,

48:09.080 --> 48:11.080
but there's even more in there.

48:11.080 --> 48:14.080
So, there's support for file descriptor passing, uh,

48:14.080 --> 48:16.080
and we also do, uh,

48:16.080 --> 48:18.080
uh, I think, you know,

48:18.080 --> 48:19.080
I originally wrote this slide,

48:19.080 --> 48:20.080
actually for another conference.

48:20.080 --> 48:22.080
Um, I think this data is not entirely correct.

48:22.080 --> 48:23.080
Um, uh,

48:23.080 --> 48:25.080
I said 30 and 13.

48:25.080 --> 48:28.080
Yeah, I think it's actually more, nowadays.

48:28.080 --> 48:30.080
Um, yeah.

48:30.080 --> 48:31.080
Um, the future is,

48:31.080 --> 48:33.080
D-Bus is not going away in systemd.

48:33.080 --> 48:35.080
I don't want to send the signal

48:35.080 --> 48:37.080
that we're going to abandon all of the old stuff.

48:37.080 --> 48:39.080
I'm just trying to tell people that,

48:39.080 --> 48:40.080
yeah,

48:40.080 --> 48:42.080
the new stuff is going to be Varlink.

48:42.080 --> 48:44.080
And we're going to add alternative,

48:44.080 --> 48:45.080
uh,

48:45.080 --> 48:46.080
APIs for much of the

48:46.080 --> 48:47.080
D-Bus stuff,

48:47.080 --> 48:48.080
but yeah,

48:48.080 --> 48:50.080
the old stuff's not going to go away.

48:50.080 --> 48:51.080
Um,

48:51.080 --> 48:52.080
that's all I have.

48:52.080 --> 48:53.080
Let's maybe,

48:53.080 --> 48:54.080
how many minutes do we have?

48:54.080 --> 48:55.080
Like two minutes?

48:55.080 --> 48:57.080
Let's fill that with questions and answers.

48:57.080 --> 48:59.080
You said D-Bus is not going to go away,

48:59.080 --> 49:01.080
but I'm curious about your opinion:

49:01.080 --> 49:02.080
um,

49:02.080 --> 49:04.080
is D-Bus going to be deprecated,

49:04.080 --> 49:05.080
or, for a new project

49:05.080 --> 49:06.080
we don't have,

49:06.080 --> 49:07.080
like,

49:07.080 --> 49:08.080
is it a bad idea?

49:08.080 --> 49:09.080
Well, do you think that they're going to use

49:09.080 --> 49:10.080
both types of IPC?

49:10.080 --> 49:12.080
And is D-Bus still viable?

49:12.080 --> 49:13.080
Um,

49:13.080 --> 49:14.080
so the question was regarding,

49:14.080 --> 49:15.080
uh, if we should consider D-Bus

49:15.080 --> 49:16.080
deprecated,

49:16.080 --> 49:18.080
or if there's a future where it is deprecated?

49:18.080 --> 49:19.080
So,

49:19.080 --> 49:20.080
to summarize,

49:20.080 --> 49:21.080
like, you know,

49:21.080 --> 49:22.080
the one thing,

49:22.080 --> 49:23.080
like,

49:23.080 --> 49:25.080
really the one thing that I still think

49:25.080 --> 49:26.080
that D-Bus has going for it

49:26.080 --> 49:27.080
is the fact that it's,

49:27.080 --> 49:28.080
you know,

49:28.080 --> 49:29.080
actually adopted,

49:29.080 --> 49:30.080
and there are so many more language

49:30.080 --> 49:31.080
bindings;

49:31.080 --> 49:32.080
Varlink doesn't compare, right?

49:32.080 --> 49:33.080
Um,

49:33.080 --> 49:34.080
so,

49:34.080 --> 49:35.080
I don't know,

49:35.080 --> 49:37.080
I think D-Bus is not great,

49:37.080 --> 49:39.080
but also it's not complete garbage,

49:39.080 --> 49:40.080
right?

49:49.080 --> 49:50.080
If you take a look out there,

49:50.080 --> 49:53.080
you will find a separate reference implementation of the software.

49:53.080 --> 49:54.080
But the official one,

49:54.080 --> 49:56.080
which looks like it's not developed at all,

49:56.080 --> 49:57.080
or not developed all of the time —

49:57.080 --> 50:00.080
so it looks like you chose to create your own

50:00.080 --> 50:01.080
implementation.

50:01.080 --> 50:03.080
How do you see the future in terms of

50:03.080 --> 50:04.080
tooling? Like,

50:04.080 --> 50:06.080
are you going to create an IDL

50:06.080 --> 50:08.080
compiler or something? Because it looks like it's

50:08.080 --> 50:09.080
manual,

50:09.080 --> 50:10.080
and it has

50:10.080 --> 50:11.080
an IDL.

50:11.080 --> 50:12.080
That's right.

50:12.080 --> 50:14.080
It already has an IDL.

50:14.080 --> 50:17.080
It is there already, whenever you need it.

50:17.080 --> 50:23.080
So, the question is whether we're going to diverge from the library.

50:23.080 --> 50:24.080
It's good to know.

50:24.080 --> 50:27.080
I think the question was about the relationship of the stuff

50:27.080 --> 50:30.080
that we do in systemd to the Varlink main repository.

50:30.080 --> 50:33.080
So, first of all, there is an IDL,

50:33.080 --> 50:38.080
but the way we generate it is the other way around:

50:39.080 --> 50:42.080
it's a C thing that we have —

50:42.080 --> 50:44.080
you encode the IDL in C structures,

50:44.080 --> 50:47.080
and then we generate the textual IDL form from that.

50:47.080 --> 50:51.080
So, we will not have a compiler that turns IDL into C code.

50:51.080 --> 50:55.080
We have C code that spits out IDL.

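For reference, the textual IDL he is talking about looks roughly like this — a made-up interface for illustration, not one of systemd's:

```
interface io.example.Status

method GetStatus() -> (status: string, since: int)

error NotAvailable()
```

In systemd's case this text is the generated output of the C structures, not the input to a code generator.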
50:55.080 --> 51:02.080
The relationship in general is that we had some specific needs,

51:02.080 --> 51:06.080
that we wanted to address, like regarding JSON libraries and things like that.

51:06.080 --> 51:10.080
You also need a JSON library, and have to make your choice there.

51:10.080 --> 51:12.080
Memory safety and all these kinds of things.

51:12.080 --> 51:15.080
So, it made sense to do this on our own.

51:15.080 --> 51:18.080
But, I mean, I think Varlink is not an isolated thing;

51:18.080 --> 51:20.080
like, the fact that systemd suddenly uses it a lot

51:20.080 --> 51:22.080
will hopefully give it a big push,

51:22.080 --> 51:25.080
but my assumption is also that other languages

51:25.080 --> 51:28.080
should never wrap sd-varlink,

51:28.080 --> 51:32.080
because sd-varlink uses C concepts,

51:32.080 --> 51:35.080
and if you use Rust or Go or anything fancy,

51:35.080 --> 51:38.080
then you should think twice,

51:38.080 --> 51:40.080
like, do not wrap sd-varlink.

51:40.080 --> 51:43.080
Leave that as a C thing — it's very C-friendly —

51:43.080 --> 51:45.080
but for everything else,

51:45.080 --> 51:48.080
I would assume that other people take over the responsibility.

51:48.080 --> 51:51.080
And there is a community around this — for Rust, for example,

51:51.080 --> 51:53.080
there is a crate for that.

51:53.080 --> 51:55.080
Anyway, my time is over,

51:55.080 --> 51:57.080
if you have any further questions,

51:57.080 --> 51:58.080
let's talk afterwards.

52:02.080 --> 52:04.080
Thank you.

