WEBVTT

00:00.000 --> 00:15.840
All right. We have resolved the technical issue, apparently. So, up next is Daan De Meyer, who has

00:15.840 --> 00:26.360
achieved... did you reenable it? Yeah, that should work. Daan De Meyer, who has achieved

00:26.360 --> 00:33.160
something that we have not yet had before in the two years of the GCC

00:33.160 --> 00:40.060
devroom. He has actually achieved two things. This talk is not directly about the

00:40.060 --> 00:48.060
toolchain, but close enough in the software stack. So, we thought it would

00:48.060 --> 00:53.600
be interesting to hear, in particular, I guess, why it's not directly using the

00:53.600 --> 00:59.200
standard interfaces. And he has also achieved having a presentation in the GCC

00:59.200 --> 01:05.600
devroom this morning and a presentation in the LLVM devroom in the afternoon, both

01:05.600 --> 01:12.600
about systemd-related things. All right. With that, please get started.

01:12.600 --> 01:19.600
You can clip it on if you like. I can also just carry this one.

01:19.600 --> 01:24.520
Hello, everyone. So, I'm Daan. I work on systemd as a maintainer and I have been playing

01:24.520 --> 01:29.280
around with fibers. So, like I said, this is also new for me. I've never done anything

01:29.280 --> 01:34.320
in the GCC devroom. And I guess it's unlike the other talks, since it's a

01:34.320 --> 01:41.080
lot higher level, but hopefully it's still useful. So, the background is pretty

01:41.080 --> 01:46.600
simple. systemd has a lot of daemons. The most important one is PID 1, but we also have

01:46.600 --> 01:52.200
systemd-resolved for DNS, we have network management, and so forth. And these

01:52.200 --> 01:56.280
daemons need to handle requests concurrently. When you're starting a single unit in PID

01:56.280 --> 02:01.880
one, you still have to be able to do other operations. You can't just refuse all operations

02:01.880 --> 02:06.760
while that is going on. So, we need concurrency. That's the basic background. Then the

02:06.760 --> 02:11.920
interesting question is how do we do that? So, generally when doing concurrency, you have

02:11.960 --> 02:16.720
two options: either you have threads, or you have an event loop and you run non-blocking

02:16.720 --> 02:21.680
code in there. And because we're talking in the context of C, non-blocking code usually

02:21.680 --> 02:29.280
means callbacks. So, threads are not an option for systemd in pretty much all cases

02:29.280 --> 02:33.760
where we're running a daemon. So, we end up doing the other thing, which is an event loop

02:33.840 --> 02:41.840
with callbacks. So, why exactly are threads not an option for systemd? This is where

02:43.840 --> 02:49.280
the nature of systemd comes in. This is not a general problem; generally you can use threads

02:49.280 --> 02:56.560
for concurrency. But it's when you start combining threads with forking that things become

02:56.640 --> 03:03.280
problematic. So, the trivial example is you have two threads running in your process.

03:04.560 --> 03:08.640
One thread is doing malloc, it acquires the global allocator lock, and then the other thread

03:08.640 --> 03:15.440
forks. Only the thread that does the fork is copied into the new process. But all the memory

03:15.440 --> 03:21.280
is also copied into the new process, including acquired locks and everything. So, you end up in

03:21.280 --> 03:26.480
the situation down below where the forked child tries to do a malloc as well.

03:26.960 --> 03:32.160
The lock is still taken in the memory that was copied from the parent process, and you end

03:32.160 --> 03:39.440
up in a deadlock. And so, this is solved by POSIX basically telling you that you can only use

03:39.440 --> 03:44.960
async-signal-safe functions after fork. But that severely limits what you can do after forking.
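The hazard and the safe pattern can be sketched in a minimal C program; this is an illustration, not systemd code, and the names (`fork_safely`, `malloc_loop`) are made up. One thread hammers the allocator while another forks; the child stays safe by calling only async-signal-safe functions (`_exit`) before exiting:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static atomic_int stop;

/* Worker that constantly takes and releases the allocator lock,
 * like the malloc'ing thread in the example above. */
static void *malloc_loop(void *arg) {
    (void)arg;
    while (!atomic_load(&stop)) {
        void *p = malloc(64);
        free(p);
    }
    return NULL;
}

/* Fork while another thread is malloc'ing. The child only calls
 * async-signal-safe functions, so it cannot deadlock on the
 * allocator lock it may have inherited in a locked state. */
int fork_safely(void) {
    pthread_t t;
    pthread_create(&t, NULL, malloc_loop, NULL);

    pid_t pid = fork();
    if (pid == 0)
        _exit(7); /* async-signal-safe; calling malloc() here could deadlock */

    int status;
    waitpid(pid, &status, 0);

    atomic_store(&stop, 1);
    pthread_join(t, NULL);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

If the child called malloc instead of `_exit`, it could block forever on the lock copied from the parent, which is exactly the crash pattern described next.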

03:44.960 --> 03:52.560
And so, we end up not doing that in systemd, because we generally need to do more; we rely on

03:52.560 --> 03:57.440
being able to allocate memory after fork. This is not a problem when you only have one thread,

03:57.440 --> 04:01.600
because there's no other thread that can be holding the lock, but if you have multiple threads,

04:01.600 --> 04:08.560
then it can happen. And this isn't just theoretical. We used to use threads for a few purposes

04:08.560 --> 04:14.960
in PID 1, primarily to asynchronously close file descriptors, because closing file descriptors

04:14.960 --> 04:19.760
is a blocking operation. There is no non-blocking interface for it in the kernel, except maybe

04:19.760 --> 04:27.360
these days with io_uring. So, we spawned off a thread to do that asynchronously, and we did get,

04:27.360 --> 04:32.560
when I was still working at Meta, in production, various crashes related to

04:33.280 --> 04:40.320
this specific case. And it's not just related to fork, because you could ask yourself the

04:40.320 --> 04:46.800
question: why can you only use async-signal-safe functions after fork, when

04:46.880 --> 04:54.480
the name says signal? There's no fork in that name. But it turns out that signal handlers have exactly

04:54.480 --> 05:00.480
the same problem, because a signal handler can generally be invoked at any time during the execution

05:00.480 --> 05:06.960
of your application. So, what could happen is that your thread is executing

05:06.960 --> 05:12.960
malloc, takes the global lock, and then the signal handler gets invoked before it releases the lock,

05:12.960 --> 05:16.480
and then your signal handler tries to do malloc, and you have exactly the same problem. And that's why

05:16.480 --> 05:23.600
you can also only use async-signal-safe functions in signal handlers. Fork has exactly

05:23.600 --> 05:29.040
the same problem, but only when you're using more than one thread. So, that's why threads don't work in

05:29.040 --> 05:34.720
PID 1 and why we end up doing the event loop. It turns out that unfortunately an event loop with

05:34.800 --> 05:43.920
callbacks also leads to pain. To explain that, here is a very simple example. We have

05:43.920 --> 05:50.320
our event loop running, and the event loop, you can see it as a thing that waits for events and

05:50.320 --> 05:56.320
invokes callbacks when those events happen. So, we have our callback one and we have our callback

05:56.320 --> 06:04.160
two. What callback one does is it allocates a string "foobar", then it registers a callback for another

06:04.480 --> 06:10.960
event, and then the stack unwinds. Right, the callback is done, so the callback function will return

06:10.960 --> 06:16.960
and the stack unwinding starts. And we use this extremely useful GCC extension in systemd,

06:16.960 --> 06:23.360
the cleanup attribute, to effectively emulate C++ destructors. And so what that means is that when

06:23.360 --> 06:33.680
the stack unwinds, we will end up freeing our s variable with our string "foobar". Meanwhile, our callback

06:33.680 --> 06:39.120
two is also passed our string "foobar", because we need it in the second callback. Eventually our second callback

06:39.120 --> 06:43.360
gets invoked, tries to use our "foobar" string, and we get a segmentation fault because the string

06:43.360 --> 06:49.040
has already been freed. So, a pretty simple error. Then the question is: how do we fix that?

06:50.720 --> 06:55.840
What we could do is just not use the cleanup attribute. We do exactly the same thing,

06:56.480 --> 07:02.480
and then eventually we end up in our second callback again; we use the string s. Everything works fine.

07:02.560 --> 07:06.880
The second callback returns and we leak memory, because nothing freed our "foobar" string.

07:06.880 --> 07:15.040
So that doesn't work either. What could we do then? We could say that the second callback

07:15.040 --> 07:20.480
is responsible for freeing our string "foobar". But what is the problem that you run into then?

07:22.080 --> 07:27.760
You do your first callback; everything goes fine. Your second callback never gets invoked, because

07:27.840 --> 07:33.760
there's a third callback that stops the program. It exits the event loop. The program exits.

07:33.760 --> 07:38.960
Your string s was never freed by the second callback and so you leak memory again. So that doesn't work

07:38.960 --> 07:46.320
either. So another option: we have this in sd-event, the systemd event loop, the ability to set

07:46.320 --> 07:53.120
a destroy callback for an event source. So what you can do is set the free function as the

07:53.200 --> 08:01.440
destroy callback for our string "foobar" when we register the event, and that will

08:01.440 --> 08:07.040
be invoked when the event source is cleaned up, and that will generally ensure that the

08:07.040 --> 08:11.920
thing gets freed anytime you need it to be freed. But the problem with that approach is that it

08:11.920 --> 08:17.840
doesn't really scale. Imagine you're five nested function calls deep in a call stack

08:17.840 --> 08:22.880
and you need to wait for another event. You don't just have one piece of data. You don't just have

08:22.960 --> 08:27.200
a string "foobar" that you want to pass to the second callback. You have, I don't know, five

08:27.200 --> 08:32.720
or six pieces of data that need to be available in that second callback. Because of the way

08:32.720 --> 08:37.840
callbacks work, you can generally pass one pointer, right? We call that the user data

08:37.840 --> 08:43.040
pointer. So when you have five or six pieces of data that means you need to create a struct.

08:43.040 --> 08:47.840
You need to define a cleanup function for that struct. You have to put all the data in the

08:47.840 --> 08:54.720
struct to pass it along. You have to define how the data in that struct gets freed and generally

08:54.720 --> 09:00.160
you will not know how the data needs to be freed. If it's just a simple "foobar" string, of course

09:00.160 --> 09:04.480
you know you need to call free on it. But if you're getting, for example, a

09:04.480 --> 09:08.880
path, you might not just need to free the memory, you might also need to remove the path from disk,

09:09.520 --> 09:14.240
or not. Effectively, in a lot of cases you don't know, especially in generic code,

09:14.320 --> 09:18.960
which one you should do. So the caller of that function would have to provide you with the

09:18.960 --> 09:23.520
cleanup function that needs to be called and you can start to imagine how this is not exactly

09:24.160 --> 09:32.720
a fun way to write code or a robust way to write code. So this works for simple cases but it

09:32.720 --> 09:41.920
quickly becomes inadequate. Effectively, what the problem comes down to is that when you

09:41.920 --> 09:50.240
use a thread and you write blocking code, you do not get

09:50.240 --> 09:54.800
stack unwinding, and that's the good thing. You do your read call, it will block, and then when

09:54.800 --> 09:58.160
your read call is finished you process the data and you can be sure that your stack has not

09:58.160 --> 10:02.800
been unwound. None of your data has been freed, and it's all available for processing

10:02.800 --> 10:08.640
whatever your read call returned. With the event loop, your

10:08.720 --> 10:14.160
first callback returns, your stack unwinds, everything gets freed, and you need to jump

10:14.160 --> 10:19.520
through extreme hoops to make sure the data is available for the second callback. And so the question

10:19.520 --> 10:27.680
is: can we have the best of both worlds? Can we have separate stacks, because that's what we want, so

10:27.680 --> 10:33.200
that we can avoid having to unwind the stack, but without the separate threads? That's what it comes

10:33.200 --> 10:38.320
down to. You could imagine this as: instead of passing around our string

10:38.320 --> 10:44.080
"foobar", can we just pass around the entire stack between callbacks and avoid the problem

10:44.080 --> 10:50.560
that way? And this is effectively what a fiber comes down to. It's the alternative

10:50.560 --> 10:57.840
to callbacks, which does allow us to write synchronous-looking code

10:57.840 --> 11:07.280
without having to use threads. So what does that look like in our previous example? Our stack does

11:07.280 --> 11:14.640
not run our fibers; it's only there for running the event loop, the primary stack. And for each

11:14.640 --> 11:21.840
fiber we allocate a separate stack. And what that allows us to do is we can call the fiber

11:21.840 --> 11:28.560
read function, which I will explain on the next slide, and that will suspend the fiber. I will

11:28.560 --> 11:33.680
explain what suspending is later. We go back to the event loop, it can resume another fiber,

11:33.680 --> 11:40.160
which can suspend and go back and the key difference here is that when a fiber suspends there is

11:40.160 --> 11:44.880
no stack unwinding, so you don't get all of the cleanup running, and it means you don't have to

11:45.760 --> 11:53.520
allocate structs and everything to pass data around. So implementing this, implementing fibers,

11:53.520 --> 11:59.120
implementing the resuming and the suspending, that's where we use the ucontext.h header

11:59.120 --> 12:10.800
from glibc. Well, you're the experts, so I'll believe you. It has three functions: getcontext,

12:10.800 --> 12:17.520
makecontext and swapcontext. getcontext gives you a reference to the current context, which is

12:17.520 --> 12:22.560
effectively the current stack, the primary stack. makecontext allows you to create a new one from a region

12:22.560 --> 12:28.480
of memory. What you generally end up doing is you mmap some memory and then you pass that to

12:28.480 --> 12:33.440
makecontext, and it will give you a new stack, a new context, that you can use for a fiber. And

12:33.440 --> 12:39.680
swapcontext is the interesting one, which switches between different stacks, or different

12:40.560 --> 12:47.040
fibers, so that you can implement this resuming and suspending. So what does this end up looking like? So,
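The three ucontext.h calls can be shown in a minimal glibc-only sketch (the names `run_fiber` and `fiber_fn` are hypothetical, and a real fiber library would also guard the stack and check errors). The fiber suspends by swapping back to the main context; note that its stack is not unwound in between:

```c
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, fiber_ctx;
static int step; /* records the interleaving */

/* Runs on the fiber's own stack; swapping away does NOT unwind it. */
static void fiber_fn(void) {
    step = 1;
    swapcontext(&fiber_ctx, &main_ctx); /* suspend: back to main */
    step = 3;                           /* resumed exactly here */
}

int run_fiber(void) {
    getcontext(&fiber_ctx);               /* initialize the context */
    fiber_ctx.uc_stack.ss_sp = malloc(STACK_SIZE);
    fiber_ctx.uc_stack.ss_size = STACK_SIZE;
    fiber_ctx.uc_link = &main_ctx;        /* return here when fiber_fn ends */
    makecontext(&fiber_ctx, fiber_fn, 0);

    swapcontext(&main_ctx, &fiber_ctx);   /* run the fiber until it suspends */
    step = 2;                             /* fiber is parked; we're on the main stack */
    swapcontext(&main_ctx, &fiber_ctx);   /* resume it; it finishes via uc_link */

    free(fiber_ctx.uc_stack.ss_sp);
    return step;
}
```

The interleaving is step 1 (fiber), step 2 (main), step 3 (fiber resumed mid-function), which is exactly the suspend/resume dance the event loop performs for each fiber.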

12:47.040 --> 12:54.800
our imaginary fiber read function, what do we do in there? Well, first we do the read. Then the read

12:54.800 --> 12:59.040
returns; the read is on a non-blocking file descriptor, so you know it won't block.

13:00.640 --> 13:04.560
Then we check: are we on a fiber? If we're not on a fiber, then there's nothing to do;

13:04.640 --> 13:10.880
we just return whatever we got. If we are on a fiber, then we can check the error code. If we get

13:10.880 --> 13:17.120
EAGAIN or EWOULDBLOCK, that means the kernel is telling us we would block if we tried to do this.

13:18.560 --> 13:24.080
If we did not get EAGAIN, then it's also the easy case, right? We just return. But if we did get

13:24.080 --> 13:30.640
EAGAIN, then we can call the sd_event_add_io function, but this time we pass in a different

13:30.640 --> 13:36.880
callback, and the callback is effectively swapcontext, and the user data pointer we give to it is

13:36.880 --> 13:44.640
our current fiber's context, or stack. What we then do is we call swapcontext ourselves, and this

13:44.640 --> 13:52.640
will effectively stop execution of the current fiber. It will save everything inside the current

13:52.720 --> 14:01.040
context of the fiber, so the stack pointer, registers and everything, and then it will swap

14:01.040 --> 14:07.920
to the main context of the event loop, and that will then continue executing. Eventually,

14:07.920 --> 14:13.280
the event that we registered for will happen, in which case the event loop will call swapcontext

14:13.280 --> 14:18.080
with the fiber's context, the reverse case, and we will simply continue executing right after

14:18.160 --> 14:22.560
swapcontext, where we left off. And the crucial thing here is that there is no stack unwinding

14:22.560 --> 14:28.960
happening, so there is nothing getting freed, nothing getting lost, and you don't have to

14:28.960 --> 14:38.240
pass things around manually. Yes, so that's what I was going to get to in a second:

14:38.320 --> 14:48.880
cancellation. So yeah, then there's cancellation, which is also very important

14:48.880 --> 14:54.720
to make sure everything gets freed up. But eventually, because we now know that our

14:54.720 --> 14:59.920
read will succeed, because the event loop told us, we can just call read again;

14:59.920 --> 15:04.240
it will not block this time, and then we can just return the result to the user. This was the
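The EAGAIN-then-retry logic at the heart of this fiber read can be sketched without any fibers at all; in this illustration a plain retry stands in for the suspend (a real implementation would register with the event loop and call swapcontext instead), and `try_read`/`demo` are hypothetical names:

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Read from a non-blocking fd. On EAGAIN, a real fiber read would
 * register an IO event source and swapcontext away; here we just
 * report -EAGAIN so the caller can "wait" and retry. */
static ssize_t try_read(int fd, char *buf, size_t n) {
    ssize_t r = read(fd, buf, n);
    if (r < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return -EAGAIN; /* this is where the fiber would suspend */
    return r;
}

int demo(void) {
    int p[2];
    char buf[16];
    if (pipe(p) < 0)
        return -1;
    fcntl(p[0], F_SETFL, O_NONBLOCK);

    /* Nothing written yet: the kernel says we would block. */
    ssize_t r = try_read(p[0], buf, sizeof buf);
    if (r != -EAGAIN)
        return -1;

    /* "The event loop told us the fd is readable": write, then retry. */
    ssize_t w = write(p[1], "hi", 2);
    (void)w;
    r = try_read(p[0], buf, sizeof buf); /* succeeds without blocking */

    close(p[0]);
    close(p[1]);
    return (int)r;
}
```

The second read is guaranteed not to block because readiness was signalled first, which is exactly the property the fiber relies on when it resumes.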

15:04.240 --> 15:10.480
example for read, but the exact same goes for sleep, for waiting for a child process to finish,

15:11.760 --> 15:20.640
doing disk I/O. And you can also trivially build on top of this if you

15:20.640 --> 15:30.400
have a callback-based interface, because the callback is just going to be swapcontext. So, cancellation:

15:30.720 --> 15:41.120
if we go back to this case here, where our event loop gets basically stopped

15:41.120 --> 15:47.440
in the middle by another callback, then it's not sufficient to simply free the memory that we

15:47.440 --> 15:55.200
allocated for the stack, because the fiber might have allocated

15:55.200 --> 15:59.360
memory on the heap, and if we just free the stack, then all that memory on the heap will

15:59.360 --> 16:05.600
be leaked. So whenever you want to shut down your application, you need to make sure

16:05.600 --> 16:10.160
you unwind the stacks of all the fibers, so that they all have a chance to clean up

16:10.160 --> 16:18.560
their memory and everything gets freed correctly. Doing this is kind of simple, because instead

16:18.560 --> 16:24.320
of waiting for the actual event to occur, what you simply do is inject an ECANCELED error and

16:24.320 --> 16:29.600
you swapcontext back to the fiber, and it just checks: did I get the ECANCELED error? And if

16:30.960 --> 16:36.160
yes, it just returns it. And so all the functions keep returning the ECANCELED error, your stack

16:36.160 --> 16:42.000
unwinds, all the cleanup functions can run, and everything gets freed correctly. So you just

16:42.000 --> 16:47.200
inject the ECANCELED error, and then you wait for all the fibers to finish unwinding their stacks,

16:47.200 --> 16:56.160
and then you can exit the program. So what I've been discussing so far is a single fiber, and

16:56.160 --> 17:02.160
a single fiber is a sequence of non-blocking operations. But often that is not enough; you want

17:02.160 --> 17:08.800
to run non-blocking operations concurrently. Imagine you need to talk to five daemons at once

17:08.800 --> 17:14.560
and gather their results; then having a single fiber is not enough, because you can only execute a single

17:14.640 --> 17:23.280
non-blocking operation at a time on a fiber. So what you end up doing is just spawning more

17:23.280 --> 17:29.280
fibers: you create a new fiber, you set which function it should run, and it will run. But you do need

17:29.280 --> 17:35.920
to be careful here because while we're not using threads we're still executing code concurrently

17:35.920 --> 17:40.640
and so whenever you end up having shared state, there are all the usual pitfalls that you get with

17:40.640 --> 17:48.000
concurrency, which is mutating shared state. Because your fiber can suspend at any time while it is

17:48.000 --> 17:52.880
executing, it will switch to another fiber, and if that one is modifying the same data structure, then it is

17:52.880 --> 17:58.880
very easy for things to be in an inconsistent state and things will just blow up. So you still need

17:58.880 --> 18:04.400
concurrency primitives, locks, and generally good design patterns to make sure

18:04.400 --> 18:10.080
that your concurrency makes sense. You can't just introduce unlimited concurrency and expect

18:10.080 --> 18:14.640
things to work correctly; sometimes you need to make sure you serialize things again.

18:16.160 --> 18:22.640
If you're not actually mutating state, but you just need to pass inputs, then

18:22.640 --> 18:27.040
copying still works. You keep seeing these talks about Rust where they say

18:27.040 --> 18:31.600
in the end you just clone the object, and that applies here as well:

18:31.600 --> 18:36.240
any inputs, you can just copy them into the fiber, and you don't need to worry about their

18:36.240 --> 18:40.320
lifetime in the parent fiber; it can just clean up, because you passed in a copy.

18:42.720 --> 18:47.600
So this all sounds great, of course, but there are a bunch of downsides; it's not perfect.

18:47.600 --> 18:52.400
Like I said, you still need concurrency primitives. It's less efficient than callbacks,

18:52.400 --> 18:58.560
because you're juggling around an entire stack. It's not POSIX, because it's effectively

18:58.560 --> 19:03.760
deprecated, because the makecontext function prototype is against the standard.

19:04.400 --> 19:09.920
I didn't look into it too closely, but I found it a bit of a weird reason

19:09.920 --> 19:15.040
to deprecate the entire header. So we still end up using it, and I hope glibc

19:15.040 --> 19:17.840
won't remove it just because it's deprecated in POSIX, but I don't expect that.

19:19.920 --> 19:24.640
swapcontext is not available on musl, because they refuse to implement the interface,

19:24.640 --> 19:32.160
because it's deprecated in POSIX. But luckily, there is a little library made by one of the

19:32.160 --> 19:36.240
Alpine developers that effectively implements the same interface, so we can also use it on

19:36.240 --> 19:43.440
musl, and for postmarketOS. The integration with GDB is not as great as with threads,

19:44.480 --> 19:50.160
because with threads you just get all your backtraces for all your threads built into

19:50.160 --> 19:57.040
GDB. And because all this fiber stuff generally is application-dependent,

19:57.040 --> 20:02.000
there is no generic interface in GDB to allow you to inspect the state of all your fibers.

20:02.000 --> 20:08.880
So every project that implements fibers effectively ends up implementing some kind of GDB

20:08.880 --> 20:14.960
plugin to allow them to inspect the state of their fibers. We will probably end up doing

20:14.960 --> 20:23.120
something similar. Not ideal, but there's no easy solution here, I think. Performance:

20:24.240 --> 20:28.560
the thing that POSIX mandates for swapcontext is that it restores the per-thread signal mask.

20:28.880 --> 20:35.440
In our case that's not required. The other coroutine implementations that I found online,

20:35.440 --> 20:41.120
because they all focus on performance for their fibers, they don't do that

20:41.120 --> 20:46.240
part, because it means a syscall, and that's apparently slow. I don't think we are going to be

20:47.600 --> 20:51.440
using so many fibers that this will actually make a difference, but I was still curious

20:51.440 --> 20:58.240
if there would be a way to make glibc not do this syscall, if possible. But if it's not

20:58.240 --> 21:04.080
possible, then it's probably not a problem either. And while we do get non-blocking socket

21:04.080 --> 21:09.040
I/O, we still don't have non-blocking disk I/O, because io_uring is still blocked in most

21:09.680 --> 21:14.880
container runtimes and sandboxing environments, because it doesn't allow you to filter the

21:14.880 --> 21:20.240
opcodes, but that is being fixed. So eventually I will be able to integrate this with

21:20.240 --> 21:24.160
io_uring behind the scenes, and then we will have non-blocking disk I/O on fibers as well.

21:25.840 --> 21:31.840
The current status of this: this is a PR for libsystemd. I call it sd-future.h,

21:31.840 --> 21:37.520
because we don't just need fibers, we need futures as well, but I didn't discuss those because

21:37.520 --> 21:42.160
the talk would become too long. It's internal for now, but I think this is useful enough that

21:42.160 --> 21:46.800
it will probably become public. I don't know how useful it will be for the wider community,

21:46.800 --> 21:50.480
because generally, when you're starting new projects that need to do this kind of stuff, they

21:50.480 --> 21:54.880
end up being written in Rust or something. But if there is existing code using libsystemd,

21:54.880 --> 22:00.640
then maybe it can make use of this. So it has the basic operations: socket I/O, child process

22:00.640 --> 22:05.520
waiting, sleeping, waiting for all the fibers, and some other stuff. I still need to implement

22:05.520 --> 22:12.400
concurrency primitives, but I didn't find time yet for that. And that was it; happy to answer questions.

22:16.800 --> 22:21.440
[audience question, inaudible]

22:46.800 --> 22:59.960
I was going to, so I'll repeat the question.

22:59.960 --> 23:03.120
The question was: there's pthread_atfork.

23:03.120 --> 23:04.680
Why can't we use this to solve the problem?

23:05.680 --> 23:11.680
[partly inaudible] ...you can't access the state of one thread from the other thread.

23:11.680 --> 23:18.680
[audience comment, largely inaudible, about things going wrong when combining fibers with multiple threads]

23:33.680 --> 23:41.680
Yes, so the question, or rather the comment, was that when you start combining multiple threads with fibers, things can go horribly wrong in various ways.

23:41.680 --> 23:47.680
But yeah, so generally we use threads, I think, in one more place, some very isolated place, and for the rest,

23:47.680 --> 23:52.680
everything is single-threaded, and when we need concurrency that runs at the same time, we use fork

23:52.680 --> 23:55.680
and we run a separate set of subprocesses.

23:56.680 --> 24:00.680
[inaudible]

24:08.680 --> 24:10.680
Let's have a break.

24:15.680 --> 24:20.680
[audience comment, partly inaudible] Well, I see this work is almost finished.

24:20.680 --> 24:24.680
I'm not expecting systemd to move to this.

24:24.680 --> 24:31.680
But the reason I bring this up is because, as part of the next C++ standard,

24:31.680 --> 24:38.680
there is the std::execution abstraction, which does basically some of the things you were talking about.

24:38.680 --> 24:45.680
It separates the concern of actually writing asynchronous operations from composing those operations,

24:45.680 --> 24:51.680
and allows you to express things like waiting on five things before continuing.

24:51.680 --> 25:04.680
So there might be some design lessons from there for implementing these things.

25:04.680 --> 25:05.680
Yeah.

25:05.680 --> 25:10.680
So the comment was that C++ has coroutines and std::execution, and that would fix all of this.

25:10.680 --> 25:12.680
I totally agree, right?

25:12.680 --> 25:17.680
I would love nothing more than to have a language available that does this for me.

25:17.680 --> 25:22.680
I looked into coroutines as well for C, but effectively, without compiler help for doing coroutines,

25:22.680 --> 25:25.680
stackless coroutines are impossible.

25:25.680 --> 25:29.680
So this was the best thing I could come up with.

25:29.680 --> 25:32.680
[inaudible]

25:32.680 --> 25:37.680
I do want to add that.

25:37.680 --> 25:52.680
I want to clarify that we don't necessarily need to spawn five new fibers for every single non-blocking operation.

25:52.680 --> 25:59.680
It's only when you want to run two non-blocking operations at the same time that we allocate a stack.

25:59.680 --> 26:05.680
That's why the header is called sd-future.h: for stuff that you don't need a fiber for,

26:05.680 --> 26:08.680
we don't spawn a new fiber.

26:08.680 --> 26:10.680
I did that in the beginning and then I realized,

26:10.680 --> 26:12.680
oh my god, this is horribly inefficient.

26:12.680 --> 26:15.680
And so now we don't do it unless we actually need it.

26:15.680 --> 26:20.680
Well, the future features are for the future.

26:20.680 --> 26:21.680
Yeah.

26:21.680 --> 26:23.680
Oh, the time is up, unfortunately.

26:23.680 --> 26:25.680
I can be outside for more questions.

26:25.680 --> 26:26.680
Thank you.

26:26.680 --> 26:27.680
Thank you.

26:35.680 --> 26:38.680
Thank you.

