WEBVTT

00:00.000 --> 00:10.000
As our first speaker, we have Oscar, and I'll let him introduce himself.

00:10.000 --> 00:15.000
Okay, so it seems like the mic is working, that's great.

00:15.000 --> 00:18.000
Hey everybody, thanks for the introduction.

00:18.000 --> 00:22.000
And yeah, let's get started.

00:22.000 --> 00:26.000
I'm here to talk with you about "The Fast and the Spurious,"

00:26.000 --> 00:31.000
which is what I've kind of called the various congestion control features

00:31.000 --> 00:35.000
that we've been experimenting with for Firefox's QUIC stack.

00:35.000 --> 00:41.000
As Max said, I'm Oscar, I'm a working student on Firefox's networking team

00:41.000 --> 00:44.000
and over the last 10 to 12 months,

00:44.000 --> 00:49.000
I've mostly been focusing on QUIC and on congestion control.

00:49.000 --> 00:53.000
So yeah, first of all, a very short primer on what congestion control even is,

00:53.000 --> 00:56.000
so we're kind of all on the same page.

00:56.000 --> 00:59.000
And inherently congestion control is trying to solve the problem

00:59.000 --> 01:03.000
that if everybody on the internet were sending as many packets as they could,

01:03.000 --> 01:07.000
just to get their stuff out quickly,

01:07.000 --> 01:11.000
all the routers would get congested and we would have packet loss,

01:11.000 --> 01:14.000
and basically the internet would stop working.

01:14.000 --> 01:18.000
So what congestion control is doing is trying to algorithmically

01:18.000 --> 01:25.000
find the ideal send rate without having any knowledge about the actual path

01:25.000 --> 01:29.000
and about who else is on this path and sending data.

01:29.000 --> 01:33.000
And one such algorithm is CUBIC, which is what neqo implements;

01:33.000 --> 01:36.000
neqo is Firefox's QUIC stack.

01:36.000 --> 01:42.000
And CUBIC works with an additive-increase, multiplicative-decrease (AIMD) approach.

01:42.000 --> 01:45.000
Let's quickly look at that. If we look at this graph,

01:46.000 --> 01:51.000
we can see that here the congestion window, that is, the allowed send rate,

01:51.000 --> 01:55.000
is over time slowly, additively increasing,

01:55.000 --> 01:58.000
until it is at a point where we're actually congested

01:58.000 --> 02:02.000
and we experience packet loss and then we have a congestion event

02:02.000 --> 02:08.000
and we multiplicatively decrease the allowed window again.

02:08.000 --> 02:12.000
And with that we get kind of this zigzag pattern that you're seeing

02:12.000 --> 02:17.000
that ideally homes in on the ideal send rate.
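
To make that AIMD pattern concrete, here is a minimal toy sketch in Rust; it is not neqo's actual code, and the segment size, the beta of 0.7, and the fake congestion threshold are illustrative assumptions:

```rust
// Toy AIMD loop (illustrative only, not neqo's implementation).
const MSS: f64 = 1500.0; // assumed segment size in bytes
const BETA: f64 = 0.7;   // CUBIC-style multiplicative decrease factor

fn main() {
    let mut cwnd = 10.0 * MSS; // assumed initial 10-segment window
    for round in 0..40 {
        if cwnd > 100.0 * MSS {
            // Congestion event: multiplicative decrease
            // (the drop in the zigzag).
            cwnd *= BETA;
        } else {
            // No loss this round trip: additive increase by one segment.
            cwnd += MSS;
        }
        println!("round {round:2}: cwnd = {cwnd:.0} bytes");
    }
}
```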

02:17.000 --> 02:21.000
And one more building block of congestion control that I want to quickly mention

02:21.000 --> 02:25.000
is slow start, and for that we're going to zoom in on the very start of the connection,

02:25.000 --> 02:29.000
and that is this exponential ramp up at the beginning of the connection.

02:29.000 --> 02:35.000
So we're quickly getting up to the available bandwidth

02:35.000 --> 02:38.000
and not just like very slowly ramping up.

02:38.000 --> 02:43.000
So the name is kind of funny, but yeah, that's slow start.
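
As a rough sketch of that ramp-up, with made-up numbers, assuming an initial window of two segments and a fixed slow-start threshold:

```rust
// Toy slow start: each ACKed segment grows the window by one segment,
// so the window roughly doubles every round trip until ssthresh.
fn main() {
    let mss: u64 = 1500;
    let ssthresh: u64 = 64 * mss; // assumed slow-start threshold
    let mut cwnd: u64 = 2 * mss;  // assumed initial window

    let mut rtt = 0;
    while cwnd < ssthresh {
        cwnd *= 2; // a full window of ACKs doubles the window
        rtt += 1;
        println!("after RTT {rtt}: cwnd = {cwnd} bytes");
    }
}
```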

02:43.000 --> 02:46.000
Having said that, now we're all on the same page.

02:46.000 --> 02:51.000
I want to give a very quick shout-out to qvis, which is an amazing open-source

02:51.000 --> 02:56.000
web-based visualization tool for QUIC.

02:56.000 --> 03:00.000
And for example, all of my congestion window plots that you see here,

03:00.000 --> 03:03.000
are basically just screen grabs from qvis.

03:03.000 --> 03:07.000
And so if you're working with QUIC and you haven't checked out qvis yet,

03:07.000 --> 03:11.000
you should; I encourage you to take a look.

03:11.000 --> 03:14.000
With that, let's get into the first feature.

03:14.000 --> 03:17.000
And that is spurious congestion event recovery.

03:17.000 --> 03:19.000
So let's unpack what that means.

03:19.000 --> 03:24.000
And to do that, I want to go back with you to FOSDEM 2024.

03:24.000 --> 03:28.000
There was Manuel, I'm not sure if he's in the room.

03:28.000 --> 03:32.000
Manuel, who back then worked on Firefox networking,

03:32.000 --> 03:36.000
gave a talk about H3 upload speed in Firefox.

03:36.000 --> 03:40.000
And he showed this graph and he said something along the lines of

03:40.000 --> 03:44.000
packet reordering causes too many congestion events.

03:44.000 --> 03:50.000
He was talking about stuff that we had yet to look into more thoroughly.

03:50.000 --> 03:53.000
Ah, there is Manuel.

03:53.000 --> 03:59.000
So yeah, let's further zoom into this graph to understand what that means.

03:59.000 --> 04:03.000
So here, all those orange dots are

04:03.000 --> 04:06.000
acknowledgments coming in for packets.

04:06.000 --> 04:08.000
And here we see this reordering.

04:08.000 --> 04:14.000
So for those four green dots, we didn't get an ACK back for them,

04:14.000 --> 04:18.000
but we did get an ACK for packets that were sent later.

04:18.000 --> 04:20.000
So we declared those as lost.

04:20.000 --> 04:24.000
But then, just a few milliseconds later, we're seeing that

04:24.000 --> 04:27.000
we actually do get the ACK back for them.

04:27.000 --> 04:31.000
So this was just a packet reordering, and this loss was spurious.
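
A minimal sketch of that detection idea, assuming a bare set of declared-lost packet numbers (a real QUIC loss-recovery state machine tracks much more than this):

```rust
use std::collections::HashSet;

// Remember which packet numbers we declared lost; if an ACK for one
// of them arrives later, the loss was spurious.
struct LossTracker {
    declared_lost: HashSet<u64>,
}

impl LossTracker {
    fn declare_lost(&mut self, pn: u64) {
        self.declared_lost.insert(pn);
    }

    /// Returns true if this ACK proves an earlier loss was spurious.
    fn on_ack(&mut self, pn: u64) -> bool {
        self.declared_lost.remove(&pn)
    }
}

fn main() {
    let mut tracker = LossTracker { declared_lost: HashSet::new() };
    // Packets 4..=7 were declared lost because later packets were
    // ACKed first (the reordering case from the slides).
    for pn in 4..=7 {
        tracker.declare_lost(pn);
    }
    // A few milliseconds later their ACKs arrive after all.
    for pn in 4..=7 {
        if tracker.on_ack(pn) {
            println!("packet {pn}: loss was spurious");
        }
    }
}
```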

04:31.000 --> 04:37.000
The problem with spurious loss is that it does lead to spurious congestion events.

04:37.000 --> 04:41.000
So the congestion controller treats the spurious loss just as it treats

04:41.000 --> 04:45.000
any other loss, and our send window is reduced.

04:45.000 --> 04:48.000
And this is happening despite the spurious loss,

04:48.000 --> 04:52.000
of course not being any indicator of real congestion

04:52.000 --> 04:56.000
happening somewhere in the path.

04:56.000 --> 05:00.000
So yeah, if this happens more often, then this can of course

05:00.000 --> 05:02.000
heavily degrade performance.

05:02.000 --> 05:05.000
And it's of course also completely unnecessary,

05:05.000 --> 05:08.000
because the packets were just reordered.

05:08.000 --> 05:12.000
Luckily, the CUBIC RFC also suggests a mechanism both for

05:12.000 --> 05:16.000
detecting those and for recovering from those spurious congestion events.

05:16.000 --> 05:20.000
And so at first, we implemented the detection logic,

05:20.000 --> 05:24.000
because we wanted to get a feel for

05:24.000 --> 05:29.000
how often we actually see those in the wild.

05:29.000 --> 05:33.000
Because if this never actually happens on real connections,

05:33.000 --> 05:36.000
there's basically no need to care about it.

05:36.000 --> 05:40.000
But 5% of connections actually do see those spurious congestion events,

05:40.000 --> 05:44.000
and we found out that those that see them see quite a lot of them.

05:44.000 --> 05:49.000
So, heavily degraded performance for up to 5% of connections:

05:49.000 --> 05:51.000
that doesn't sound good.

05:51.000 --> 05:54.000
And so we implemented the recovery mechanism,

05:54.000 --> 05:56.000
and this is actually now merged,

05:56.000 --> 06:00.000
I think this Monday, with the latest neqo release,

06:00.000 --> 06:02.000
into Firefox Nightly.

06:02.000 --> 06:04.000
So let's take a look at some measurements,

06:04.000 --> 06:07.000
and see how this actually works.

06:07.000 --> 06:09.000
Before we do that, fun fact.

06:09.000 --> 06:12.000
My home Wi-Fi is one of those lucky 5%.

06:12.000 --> 06:18.000
Lucky, because this was a great testing ground for implementing this.

06:18.000 --> 06:21.000
And so all those measurements that you're seeing have actually been made,

06:21.000 --> 06:24.000
sitting at my kitchen table in Berlin.

06:24.000 --> 06:28.000
So first off, this is kind of like the healthy connection.

06:28.000 --> 06:31.000
So no spurious congestion events happened,

06:31.000 --> 06:35.000
and we have a 7 second completion time for a 50 megabyte upload.

06:35.000 --> 06:39.000
My Wi-Fi is also slow, but let's not get into that.

06:39.000 --> 06:42.000
But yeah, this is kind of the healthy connection.

06:42.000 --> 06:45.000
We can see that in the beginning of the connection,

06:45.000 --> 06:50.000
the violet graph, we have the ramp up of slow start,

06:50.000 --> 06:52.000
and yeah, this is how it's supposed to look.

06:52.000 --> 06:56.000
Now this is how it looks with spurious congestion events.

06:56.000 --> 06:58.000
So we have a 13 second completion time,

06:58.000 --> 07:00.000
almost taking twice as long.

07:00.000 --> 07:03.000
That's of course not very good.

07:03.000 --> 07:06.000
And if we look at the graph then we understand why,

07:06.000 --> 07:08.000
because there never was a slow-start phase.

07:08.000 --> 07:10.000
Again, if we look at the healthy one,

07:10.000 --> 07:12.000
we have this ramp up in the beginning.

07:12.000 --> 07:15.000
Here, apparently there was a spurious loss.

07:15.000 --> 07:17.000
Right in the beginning of the connection,

07:17.000 --> 07:19.000
we exited slow start and never ramped up.

07:19.000 --> 07:22.000
We're never getting up to the proper bandwidth,

07:22.000 --> 07:24.000
and throughout the whole connection,

07:24.000 --> 07:26.000
we never have actual loss.

07:26.000 --> 07:28.000
We never have actual congestion.

07:28.000 --> 07:32.000
We're never actually reaching the bandwidth that we're supposed to use.

07:32.000 --> 07:36.000
And then this is how it looks with the recovery patch.

07:36.000 --> 07:38.000
We're back at 7 second completion time,

07:38.000 --> 07:40.000
and if we look at the graph side by side,

07:40.000 --> 07:41.000
the healthy one and this one,

07:41.000 --> 07:46.000
we see that they're kind of looking almost the same.

07:46.000 --> 07:50.000
And so it can be said that with this recovery patch,

07:50.000 --> 07:53.000
we were able to basically completely erase

07:53.000 --> 07:57.000
this performance degradation, at least in my home network.

07:57.000 --> 08:00.000
And if we zoom in further,

08:00.000 --> 08:03.000
we can kind of see how this recovery works in action.

08:03.000 --> 08:05.000
If you look at where the arrow is pointing,

08:05.000 --> 08:08.000
you can see this dip in the congestion window.

08:09.000 --> 08:12.000
For example, I think it's best seen on the one on the right.

08:12.000 --> 08:17.000
We have a congestion event which reduced the send window,

08:17.000 --> 08:20.000
and then after just a few milliseconds,

08:20.000 --> 08:23.000
we see, ah, this was spurious,

08:23.000 --> 08:25.000
and we basically just restore the state

08:25.000 --> 08:28.000
as it was prior to the congestion event.
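
A minimal sketch of that restore idea (struct and field names are invented; neqo's real controller state is more involved):

```rust
// Snapshot the controller state on a congestion event; if the loss
// turns out to be spurious, roll the state back.
#[derive(Clone, Debug)]
struct CcState {
    cwnd: u64,
    ssthresh: u64,
}

struct Controller {
    state: CcState,
    saved: Option<CcState>, // snapshot from the last congestion event
}

impl Controller {
    fn on_congestion_event(&mut self) {
        self.saved = Some(self.state.clone());
        self.state.ssthresh = self.state.cwnd * 7 / 10; // beta = 0.7
        self.state.cwnd = self.state.ssthresh;
    }

    fn on_spurious_congestion_event(&mut self) {
        // The ACKs arrived after all: restore the snapshot.
        if let Some(prev) = self.saved.take() {
            self.state = prev;
        }
    }
}

fn main() {
    let mut cc = Controller {
        state: CcState { cwnd: 120_000, ssthresh: u64::MAX },
        saved: None,
    };
    cc.on_congestion_event();
    println!("after event:    {:?}", cc.state); // the dip in the plot
    cc.on_spurious_congestion_event();
    println!("after recovery: {:?}", cc.state); // back where we were
}
```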

08:28.000 --> 08:32.000
So yeah, we have this pretty strong anecdotal evidence

08:32.000 --> 08:33.000
from measurements,

08:33.000 --> 08:37.000
and we also have some signal from CI simulations and benchmarks.

08:38.000 --> 08:40.000
But the question is, what about real world data?

08:40.000 --> 08:42.000
It would be really nice to be able to say,

08:42.000 --> 08:45.000
yeah, we're also seeing that this is actually impacting

08:45.000 --> 08:47.000
our users, and we can prove that with data.

08:47.000 --> 08:49.000
The problem is that higher-level metrics,

08:49.000 --> 08:51.000
like throughput, for example,

08:51.000 --> 08:54.000
are very noisy for changes like that,

08:54.000 --> 08:58.000
because Firefox runs in such vastly different environments.

08:58.000 --> 09:00.000
So like all the different Firefox installations,

09:00.000 --> 09:02.000
and all of the network environments,

09:02.000 --> 09:04.000
that's a huge melting pot,

09:04.000 --> 09:07.000
and it's very hard to see any signal through that,

09:07.000 --> 09:09.000
especially if, like in this change,

09:09.000 --> 09:13.000
we're only changing something for a smaller subset of users,

09:13.000 --> 09:16.000
in this case, those 5% of connections.

09:16.000 --> 09:20.000
So yeah, even though I think this change is quite impactful

09:20.000 --> 09:23.000
for connections that have packet reordering,

09:23.000 --> 09:28.000
it's very hard to actually prove that with telemetry data,

09:28.000 --> 09:30.000
which I think is a shame.

09:30.000 --> 09:33.000
But having said that, let's move on to the next feature,

09:34.000 --> 09:38.000
which is called Alternative Backoff with ECN.

09:38.000 --> 09:41.000
To understand that, let's first have a quick primer,

09:41.000 --> 09:43.000
what is ECN?

09:43.000 --> 09:46.000
So, Explicit Congestion Notification.

09:46.000 --> 09:48.000
The idea is basically that right now,

09:48.000 --> 09:50.000
with how our congestion controller works,

09:50.000 --> 09:52.000
we always have to have loss,

09:52.000 --> 09:54.000
it's kind of part of this design.

09:54.000 --> 09:56.000
But loss is not good,

09:56.000 --> 09:59.000
like we ideally wouldn't want to have loss.

09:59.000 --> 10:02.000
So ECN is a mechanism for having a signal,

10:02.000 --> 10:04.000
to adjust the send rate,

10:04.000 --> 10:06.000
without relying on packet loss as a signal.

10:06.000 --> 10:10.000
And that basically works through the middleboxes,

10:10.000 --> 10:11.000
like for example, a router,

10:11.000 --> 10:15.000
notifying the sender, Firefox in this case,

10:15.000 --> 10:18.000
if a queue starts building up,

10:18.000 --> 10:21.000
and if congestion might happen soon,

10:21.000 --> 10:24.000
so that the sender can react.

10:24.000 --> 10:26.000
But this also means that the path,

10:26.000 --> 10:27.000
the whole network path,

10:27.000 --> 10:28.000
has to be capable, of course, of

10:28.000 --> 10:31.000
marking, that is, notifying,

10:31.000 --> 10:33.000
and also of passing along the signal

10:33.000 --> 10:35.000
and not stripping it.

10:35.000 --> 10:39.000
So the internet has to play along for this to work.
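
As a rough sketch of what the sender-side reaction can look like: QUIC ACK frames carry cumulative ECN counts, and a growing CE count is the early congestion signal. The types and numbers below are illustrative, not neqo's API:

```rust
// Rough sketch of consuming ECN feedback on the QUIC sender side.
struct EcnCounts {
    ect0: u64, // packets that arrived still marked ECN-capable
    ce: u64,   // packets a router re-marked "congestion experienced"
}

// Returns true if the new ACK reports fresh CE marks.
fn ce_marked(prev: &EcnCounts, now: &EcnCounts) -> bool {
    now.ce > prev.ce
}

fn main() {
    let before = EcnCounts { ect0: 100, ce: 2 };
    let after = EcnCounts { ect0: 110, ce: 3 };
    println!("{} packets newly ACKed as ECT(0)", after.ect0 - before.ect0);
    if ce_marked(&before, &after) {
        // React like a congestion event, even though nothing was lost.
        println!("new ECN-CE mark: trigger a congestion response");
    }
}
```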

10:39.000 --> 10:42.000
For that, let's look at the state of ECN right now.

10:42.000 --> 10:45.000
And before we talk about this graph,

10:45.000 --> 10:47.000
just a very quick disclaimer,

10:47.000 --> 10:48.000
because this is the first time

10:48.000 --> 10:51.000
that we're now actually looking at Firefox telemetry,

10:51.000 --> 10:54.000
and, so to say, real-world user data:

10:54.000 --> 10:57.000
of course all the data that we do collect

10:57.000 --> 10:59.000
for Firefox is anonymized,

10:59.000 --> 11:01.000
and it's also not like we're collecting this

11:01.000 --> 11:03.000
and keeping it to ourselves;

11:03.000 --> 11:05.000
all the data is public,

11:05.000 --> 11:07.000
and you can actually look at this exact graph,

11:07.000 --> 11:09.000
if you click the link beneath the slides,

11:09.000 --> 11:11.000
that I will upload after my talk.

11:11.000 --> 11:15.000
And yeah, I think not too many people are aware of that,

11:15.000 --> 11:19.000
so I just wanted to shout that out.

11:19.000 --> 11:21.000
But yeah, state of ECN,

11:21.000 --> 11:27.000
we can see that about 60% of connections are ECN capable.

11:27.000 --> 11:29.000
So this is promising,

11:29.000 --> 11:32.000
and it might be an indicator that it could be worthwhile,

11:32.000 --> 11:35.000
actually optimizing what we do with ECN.

11:35.000 --> 11:37.000
And this optimization is

11:37.000 --> 11:40.000
Alternative Backoff with ECN.

11:40.000 --> 11:42.000
So how it works right now,

11:42.000 --> 11:45.000
is that ECN just triggers the same congestion event

11:45.000 --> 11:48.000
that loss also triggers, so the same reaction.

11:48.000 --> 11:50.000
But because ECN is an earlier signal,

11:50.000 --> 11:52.000
so ECN is saying,

11:52.000 --> 11:54.000
there might be congestion happening soon,

11:54.000 --> 11:58.000
while loss is saying there is congestion.

11:58.000 --> 12:02.000
We could have a smaller decrease,

12:02.000 --> 12:04.000
and this would then mean that,

12:04.000 --> 12:06.000
with a smaller decrease, we'd

12:06.000 --> 12:08.000
increase our average congestion window

12:08.000 --> 12:10.000
and that would lead to higher overall utilization.
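
A hedged sketch of that idea; 0.7 is CUBIC's usual decrease factor on loss, and the gentler 0.85 on ECN is an illustrative value in the spirit of RFC 8511 (ABE), not necessarily the exact factor Firefox ships:

```rust
// Alternative Backoff with ECN, sketched: react to an ECN signal with
// a milder multiplicative decrease than to loss, because ECN warns of
// congestion before queues actually overflow.
const BETA_LOSS: f64 = 0.7; // classic CUBIC decrease on packet loss
const BETA_ECN: f64 = 0.85; // gentler decrease on an ECN-CE mark (assumed)

enum Signal {
    Loss,
    EcnCe,
}

fn backoff(cwnd: f64, signal: Signal) -> f64 {
    match signal {
        Signal::Loss => cwnd * BETA_LOSS,
        Signal::EcnCe => cwnd * BETA_ECN,
    }
}

fn main() {
    let cwnd = 100_000.0;
    println!("after loss:   {:.0} bytes", backoff(cwnd, Signal::Loss));
    println!("after ECN-CE: {:.0} bytes", backoff(cwnd, Signal::EcnCe));
}
```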

12:10.000 --> 12:12.000
So this makes sense in theory,

12:12.000 --> 12:14.000
it looked good in some simulations,

12:14.000 --> 12:16.000
and quite honestly,

12:16.000 --> 12:20.000
it also wasn't that big an effort to implement this,

12:20.000 --> 12:22.000
so we implemented it,

12:22.000 --> 12:24.000
and it would be nice to know:

12:24.000 --> 12:28.000
how does this behave and how does it look in reality?

12:30.000 --> 12:31.000
And in this case, again,

12:31.000 --> 12:34.000
the higher-level metrics are too noisy as usual,

12:34.000 --> 12:36.000
and as for lower-level metrics,

12:36.000 --> 12:38.000
there were some that we could have looked at for this change,

12:38.000 --> 12:40.000
like for example, packet loss,

12:40.000 --> 12:45.000
or we could have looked at how many ECN reactions there are.

12:45.000 --> 12:50.000
But the problem was that this got shipped together

12:50.000 --> 12:51.000
with other features,

12:51.000 --> 12:53.000
which also could have influenced those metrics,

12:53.000 --> 12:55.000
so this makes it very hard to isolate

12:55.000 --> 12:56.000
this feature's impact.

12:56.000 --> 12:58.000
So while we were able to read

12:58.000 --> 13:00.000
something from the data,

13:00.000 --> 13:03.000
it's very hard to

13:03.000 --> 13:06.000
reason about the actual impact,

13:06.000 --> 13:09.000
apart from, yeah, it makes sense in theory.

13:11.000 --> 13:13.000
And then another thing:

13:13.000 --> 13:15.000
we added another probe,

13:15.000 --> 13:17.000
which we didn't have before

13:18.000 --> 13:20.000
and just recently added,

13:20.000 --> 13:22.000
and which checks

13:22.000 --> 13:25.000
how many of our congestion events

13:25.000 --> 13:27.000
are actually caused by ECN.

13:27.000 --> 13:30.000
And despite there being 60% ECN-capable paths,

13:30.000 --> 13:35.000
only about 3% of congestion events are caused by ECN.

13:35.000 --> 13:38.000
So if we had known that before:

13:38.000 --> 13:42.000
yeah, the impact of this feature is contained

13:42.000 --> 13:45.000
to just 3% of congestion events,

13:45.000 --> 13:48.000
so it also makes sense that we're not able

13:48.000 --> 13:51.000
to see anything in metrics.

13:51.000 --> 13:54.000
But then the question is,

13:54.000 --> 13:56.000
how do we fix this data problem?

13:56.000 --> 14:00.000
If we cannot see anything in the higher level metrics,

14:00.000 --> 14:04.000
how can we do anything about it,

14:04.000 --> 14:08.000
and how can we hopefully better prove

14:08.000 --> 14:13.000
that the stuff that we're doing had a big impact?

14:14.000 --> 14:17.000
One idea would be having more specialized metrics,

14:17.000 --> 14:21.000
so if the higher level metrics are too noisy,

14:21.000 --> 14:23.000
then it might make sense to,

14:23.000 --> 14:26.000
for example, in the case of spurious congestion events,

14:26.000 --> 14:29.000
have a built-in filter in metric collection:

14:29.000 --> 14:32.000
so, for example, only collect throughput for connections

14:32.000 --> 14:36.000
that see at least one spurious congestion event.

14:36.000 --> 14:40.000
With that we would hopefully have less noise,

14:40.000 --> 14:42.000
and then maybe be able to say something

14:42.000 --> 14:45.000
along the lines of, yeah, we see this amount of throughput

14:45.000 --> 14:48.000
for connections that have spurious congestion,

14:48.000 --> 14:52.000
and those are this amount of connections,

14:52.000 --> 14:56.000
and that would be much easier to understand

14:56.000 --> 14:59.000
than, for example, all of the slides that I showed before,

14:59.000 --> 15:02.000
and would be a nice statement.
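
A hypothetical sketch of such a filtered metric; all the names are invented, and a println! stands in for the real telemetry call:

```rust
// Only record throughput for connections that saw at least one
// spurious congestion event, so the noisy majority of connections
// doesn't drown out the signal.
struct ConnectionStats {
    spurious_congestion_events: u32,
    bytes_sent: u64,
    duration_secs: f64,
}

fn record_filtered_throughput(stats: &ConnectionStats) {
    if stats.spurious_congestion_events == 0 {
        return; // not part of the population we care about
    }
    let throughput = stats.bytes_sent as f64 / stats.duration_secs;
    // Placeholder for a real telemetry call; the name is made up.
    println!("telemetry: filtered_throughput = {throughput:.0} B/s");
}

fn main() {
    record_filtered_throughput(&ConnectionStats {
        spurious_congestion_events: 3,
        bytes_sent: 50_000_000,
        duration_secs: 13.0,
    });
}
```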

15:02.000 --> 15:08.000
One other thing that we could do is run A/B experiments,

15:08.000 --> 15:12.000
which is a great way to isolate features,

15:12.000 --> 15:18.000
but even though we have tools for this,

15:18.000 --> 15:22.000
it still needs a lot of time: you have to

15:22.000 --> 15:24.000
add the extra code instrumentation

15:24.000 --> 15:26.000
to be able to run those experiments,

15:26.000 --> 15:29.000
and they take time to set up and time to evaluate,

15:29.000 --> 15:33.000
and so even though this gives the most accurate signal,

15:33.000 --> 15:38.000
yeah, it's always kind of a trade-off against the time

15:38.000 --> 15:41.000
that is available.

15:41.000 --> 15:48.000
Having said that, what's next for congestion control

15:48.000 --> 15:54.000
in Firefox? At least what I'm working on now is

15:54.000 --> 15:57.000
slow-start exit. So, quick reminder: slow start is

15:57.000 --> 16:01.000
the beginning of the connection where we exponentially ramp up,

16:01.000 --> 16:05.000
and now let's look at the problem with slow-start exit.

16:05.000 --> 16:08.000
So if we look at those graphs, again,

16:08.000 --> 16:11.000
we see this exponential ramp up in the beginning,

16:11.000 --> 16:16.000
and right now, slow-start always just exits with loss,

16:16.000 --> 16:21.000
and because of this exponential ramp up,

16:21.000 --> 16:26.000
we can overshoot our actual ideal bandwidth by a lot:

16:26.000 --> 16:29.000
by the time that we get feedback from this loss,

16:29.000 --> 16:33.000
we actually have multiple losses right after slow-start,

16:33.000 --> 16:36.000
and then if we're unlucky with those multiple losses,

16:36.000 --> 16:40.000
we land at a point quite a bit beneath

16:40.000 --> 16:42.000
the actual saturation point,

16:42.000 --> 16:45.000
and then we have to very slowly ramp up again.
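
A back-of-the-envelope illustration of that overshoot, with made-up numbers: the window doubles every RTT, but the loss it causes is only reported about one RTT later, during which the window has doubled again:

```rust
// Rough overshoot arithmetic for a loss-based slow-start exit.
fn main() {
    let bdp: u64 = 100 * 1500; // assumed path capacity in bytes per RTT
    let mut cwnd: u64 = 2 * 1500;
    let mut rtts = 0;
    while cwnd <= bdp {
        cwnd *= 2; // slow-start doubling
        rtts += 1;
    }
    // The loss created in this round is only reported an RTT later,
    // during which the window has doubled once more.
    println!("first over capacity after {rtts} RTTs: cwnd = {cwnd}");
    println!("when the loss is reported: cwnd = {}", cwnd * 2);
}
```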

16:45.000 --> 16:48.000
So this all doesn't sound great,

16:48.000 --> 16:52.000
but, as before, the question is: is it actually worth optimizing?

16:52.000 --> 16:56.000
So I implemented a probe to check how many connections

16:57.000 --> 17:01.000
in Firefox actually do see a slow-start exit,

17:01.000 --> 17:06.000
because Firefox as a browser isn't necessarily sending

17:06.000 --> 17:10.000
big amounts of data very often, like a server would,

17:10.000 --> 17:13.000
but we often have very short connections.

17:13.000 --> 17:16.000
So intuitively, I thought this would be like 5% or 10%,

17:16.000 --> 17:19.000
but as it turns out, it's actually 20% of connections

17:19.000 --> 17:21.000
that do see a slow-start exit,

17:22.000 --> 17:27.000
and that, in my opinion, is then definitely worth optimizing around.

17:27.000 --> 17:32.000
So right now we're working on a research project

17:32.000 --> 17:35.000
to compare different slow-start exit strategies,

17:35.000 --> 17:37.000
for example, HyStart++,

17:37.000 --> 17:42.000
which is what, for example, TCP does on Linux,

17:42.000 --> 17:47.000
and Search, which is a newer algorithm

17:48.000 --> 17:51.000
that aims to exit slow-start without having any packet loss,

17:51.000 --> 17:56.000
by, for example, reasoning about RTT measurements or about ACKed bytes.
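
As a rough illustration of the delay-based idea, here is a HyStart++-flavored exit check (heavily simplified; the real algorithm in RFC 9406 adds clamping of the threshold and per-round RTT sampling):

```rust
use std::time::Duration;

// Exit slow start when the current RTT exceeds the minimum RTT by a
// threshold, i.e. when a queue is visibly building, before any loss.
fn should_exit_slow_start(min_rtt: Duration, current_rtt: Duration) -> bool {
    // HyStart++-style threshold: about min_rtt / 8 (unclamped here).
    let threshold = min_rtt / 8;
    current_rtt > min_rtt + threshold
}

fn main() {
    let min_rtt = Duration::from_millis(40);
    for rtt_ms in [40, 42, 44, 47, 52] {
        let rtt = Duration::from_millis(rtt_ms);
        println!(
            "rtt = {rtt_ms} ms -> exit slow start: {}",
            should_exit_slow_start(min_rtt, rtt)
        );
    }
}
```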

17:56.000 --> 17:58.000
And of course, while doing that,

17:58.000 --> 18:01.000
we're going to take all of those learnings that we just talked about,

18:01.000 --> 18:04.000
so we're designing some specialized metrics,

18:04.000 --> 18:07.000
and in project planning we're already scheduling time

18:07.000 --> 18:09.000
for A/B experimentation.

18:09.000 --> 18:14.000
So hopefully, we're going to get really nice data out of this,

18:14.000 --> 18:19.000
and also maybe be able to publish those results in a paper,

18:19.000 --> 18:23.000
and, for example, help advance emerging standards like Search,

18:23.000 --> 18:27.000
with some real-world data and help out the academic

18:27.000 --> 18:30.000
and scientific community in that way.

18:30.000 --> 18:35.000
So I'm personally very excited about that and looking forward to it,

18:35.000 --> 18:39.000
and yeah, who knows, maybe we'll talk about that next year.

18:39.000 --> 18:43.000
And with that, yeah, thank you for your attention,

18:43.000 --> 18:46.000
and if you do have any questions, I think

18:46.000 --> 18:48.000
I have, like, a minute left;

18:48.000 --> 18:51.000
you can also talk to me in the hallway. There's my mail

18:51.000 --> 18:54.000
and my Matrix handle, and yeah, a reminder again:

18:54.000 --> 18:56.000
feel free to play with our data,

18:56.000 --> 18:58.000
and also check out qvis.

18:58.000 --> 18:59.000
Thank you.

19:04.000 --> 19:07.000
Very cool. So I think we have

19:07.000 --> 19:08.000
time for a question, one or two,

19:08.000 --> 19:11.000
but maybe, would you mind already getting set up

19:11.000 --> 19:14.000
in the meantime, and I'll unplug myself here.

19:14.000 --> 19:15.000
Yeah.

19:15.000 --> 19:19.000
I was just curious, are there, like, toggles or

19:19.000 --> 19:22.000
switches for some of the features that you were talking about today?

19:22.000 --> 19:25.000
Is it easy to, let's say, do a comparison as a consumer,

19:25.000 --> 19:30.000
to say, hey, what about if we switch off some of these features?

19:30.000 --> 19:35.000
For those, it's not super easy, certainly.

19:35.000 --> 19:38.000
This is part of the learnings.

19:38.000 --> 19:42.000
For the slow start, we're implementing,

19:42.000 --> 19:44.000
like, a pluggable framework, where you can,

19:44.000 --> 19:46.000
for example, through a config,

19:46.000 --> 19:48.000
choose which algorithm you want to use.
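
A hypothetical sketch of what such a pluggable setup could look like (names invented, not neqo's actual API):

```rust
use std::time::Duration;

// Each slow-start exit strategy implements one trait, and a config
// string picks the implementation at connection setup.
trait SlowStartExit {
    /// Decide, per ACK, whether to leave slow start.
    fn should_exit(&mut self, min_rtt: Duration, rtt: Duration) -> bool;
}

struct LossOnly; // classic behaviour: only loss ends slow start
impl SlowStartExit for LossOnly {
    fn should_exit(&mut self, _: Duration, _: Duration) -> bool {
        false
    }
}

struct RttDelta; // HyStart++-like: exit when the RTT grows
impl SlowStartExit for RttDelta {
    fn should_exit(&mut self, min_rtt: Duration, rtt: Duration) -> bool {
        rtt > min_rtt + min_rtt / 8
    }
}

fn from_config(name: &str) -> Box<dyn SlowStartExit> {
    match name {
        "rtt-delta" => Box::new(RttDelta),
        _ => Box::new(LossOnly),
    }
}

fn main() {
    let mut strategy = from_config("rtt-delta");
    let min = Duration::from_millis(40);
    let now = Duration::from_millis(50);
    println!("exit slow start: {}", strategy.should_exit(min, now));
}
```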

19:50.000 --> 19:52.000
Yeah, more questions?

19:55.000 --> 19:58.000
So you talked about CUBIC

19:58.000 --> 20:01.000
and optimizations for CUBIC, so,

20:01.000 --> 20:06.000
are you looking at more modern alternatives, like BBR?

20:07.000 --> 20:10.000
We're not planning to implement BBR,

20:10.000 --> 20:14.000
but with L4S slowly rolling out,

20:14.000 --> 20:18.000
we are, with our ECN capabilities, ready

20:18.000 --> 20:22.000
to at least echo ECN for L4S,

20:22.000 --> 20:26.000
and be a receive path for L4S,

20:26.000 --> 20:28.000
but we're not planning to roll out BBR.

20:30.000 --> 20:31.000
Okay, thanks.

20:31.000 --> 20:32.000
Thank you.

