WEBVTT

00:00.000 --> 00:08.560
All right, we're ready for the next talk.

00:08.560 --> 00:14.200
We've got Kean Butler, who's going to talk to us about building performance-critical

00:14.200 --> 00:15.800
Python tools with Rust.

00:15.800 --> 00:17.400
So thank you.

00:26.400 --> 00:32.680
So yeah, so I'm here to talk about Rust in Python.

00:32.680 --> 00:34.520
We are Cloudsmith.

00:34.520 --> 00:36.760
We are a package management company.

00:36.760 --> 00:39.760
So we download lots of bytes.

00:39.760 --> 00:47.320
Everything goes through our core monolith, which is a Django app, which works for a lot

00:47.320 --> 00:48.840
of the cases.

00:48.840 --> 00:59.520
But we process 100 to 110 million API requests a day, give or take, depending on whether it's a weekend, which

00:59.520 --> 01:03.560
equates to several petabytes of requests.

01:03.560 --> 01:09.520
When we don't have a cache hit on a request, we have to send that back to our

01:09.520 --> 01:10.520
monolith.

01:10.520 --> 01:17.400
So that basically means everything goes through a Python application, which is slow, because

01:17.400 --> 01:26.800
our Python application was written 10 years ago, before Python had very fancy async stuff.

01:26.800 --> 01:31.560
Its slowness has really started to show over time.

01:32.560 --> 01:41.440
So to scale our Python application, we had to make some interesting design choices, and

01:41.440 --> 01:44.280
we started putting services in front of it.

01:44.280 --> 01:48.880
So we use nginx for some basic validation.

01:48.880 --> 01:52.560
We use HAProxy for some queuing and some routing of stuff.

01:52.560 --> 01:58.960
And then we use uWSGI for actually sending it off to our Django app.

01:58.960 --> 02:05.520
As you can imagine, it's pretty hellish to follow a request if you're using something

02:05.520 --> 02:11.760
like tracing and asking: where was this request slow, what was it doing?

02:11.760 --> 02:16.320
We also found that uWSGI was basically a black box for us.

02:16.320 --> 02:21.560
There was no data coming in or out of it. We couldn't tell why a request was blocked in

02:21.560 --> 02:22.560
there.

02:22.560 --> 02:27.560
What was it doing, or whether there was just anything wrong with it.

02:27.560 --> 02:33.480
We also needed to look at this, and then, yeah, we just started throwing hardware

02:33.480 --> 02:41.000
at it and saying, okay, we'll just add more nodes, more replicas, for more requests.

02:41.000 --> 02:50.160
This, unsurprisingly, gets expensive, and money isn't infinite, as much as we'd like it to be.

02:50.160 --> 02:57.520
So we wanted to look at how we could make this faster.

02:57.520 --> 03:03.920
And what could we like do to help out?

03:03.920 --> 03:07.680
Of course, there's no chance we're ever going to do a full rewrite of our code base.

03:07.680 --> 03:09.960
It's 10 years old.

03:09.960 --> 03:19.000
It's custom business logic, and it implements the protocol logic for 36 different package formats.

03:19.000 --> 03:22.360
And I would like to say we understand it, but it's 10 years old.

03:22.360 --> 03:24.800
We have cycled engineers over the last 10 years.

03:24.800 --> 03:29.360
So there are edge parts of our code where people don't really know why we do things.

03:29.360 --> 03:30.360
Why do we set that header?

03:30.360 --> 03:36.680
But we set it for a very good reason, because as soon as you unset it, everything explodes.

03:36.680 --> 03:45.760
So we were looking for basically easy wins for performance without having to ask

03:45.760 --> 03:53.240
all of our engineers to learn a new language or restart everything from scratch.

03:53.240 --> 03:59.000
And as I said, we're processing petabytes day to day, with hundreds of thousands of requests from

03:59.000 --> 04:00.960
very large companies.

04:00.960 --> 04:04.640
So we have to be reliable when we're actually doing this roll-out.

04:04.640 --> 04:10.320
We can't just throw code at the wall and hope that it improves stuff.

04:10.320 --> 04:11.720
I say that.

04:11.720 --> 04:16.200
We did do that at times, so, yeah.

04:16.200 --> 04:20.600
So we kind of wanted to approach it with a good methodology of like, we'll measure

04:20.600 --> 04:25.520
it, we'll identify the bottlenecks, and we'll look at what's out there that we can use

04:25.520 --> 04:31.560
to like, speed things up.

04:31.560 --> 04:36.360
We're not trying to just pull in Rust for the sake of pulling in Rust, because I think

04:36.360 --> 04:41.560
it was said in the ClickHouse talk, people like Rust and it's cool, and I'm sure all

04:41.560 --> 04:46.160
of you like Rust because you're in a room called the Rust devroom, but we're actually

04:46.160 --> 04:49.160
looking for ways to just make our code faster.

04:49.240 --> 04:53.800
If we had found out we could have done it with C, or by rewriting our Python to be more efficient,

04:53.800 --> 04:57.000
we would have done that as well.

04:57.000 --> 05:03.040
So yeah, we had a long-ish timeline for doing this, but it was basically just me,

05:03.040 --> 05:07.120
and it wasn't all I was doing, so there was only a little bit of time for experiments.

05:07.120 --> 05:17.600
Sorry, there we go, and this one, okay.

05:17.600 --> 05:23.280
So we used OpenTelemetry and we started load testing.

05:23.280 --> 05:28.480
We had a set of load tests written in Locust. For those that don't know, Locust is a load

05:28.560 --> 05:31.600
testing tool that runs in Python.

05:31.600 --> 05:38.720
We couldn't push enough pressure through the Locust load testing to actually stress

05:38.720 --> 05:45.000
our system mainly because we needed to throw more hardware at Locust to make it scale,

05:45.000 --> 05:49.160
which then cost more money and more operation time.

05:49.160 --> 05:52.640
I didn't have infinite operation time to be spinning up hundreds of thousands of

05:52.640 --> 05:58.400
Locust workers to simulate hundreds of thousands of users, so I tried to cheat.

05:58.400 --> 06:05.440
We looked at wrk, which was better; it's written in, I think, C, so it's much better at load testing,

06:05.440 --> 06:09.760
but I ran into issues with it in that it was really good at just hitting an endpoint

06:09.760 --> 06:16.520
and saying, here's a thousand requests. But that's not really what a user does.

06:16.520 --> 06:22.520
We're not Wikipedia, so people don't just scrape our website and say,

06:22.520 --> 06:28.360
I want this page a hundred thousand times. They say, I want this package;

06:28.600 --> 06:33.400
then they start saying, where is that package; then we break that package up into millions of different

06:33.400 --> 06:39.440
bytes and send them to them. So we had very complicated users that we needed to simulate.

06:39.440 --> 06:47.200
We found a project called Goose. It's Rust-based and it's modeled on Locust, so it effectively

06:47.200 --> 06:53.800
let us rewrite all of our load tests in Rust and then run them on much cheaper hardware

06:53.800 --> 06:57.960
than our previous Locust setup and push many more requests through.

06:57.960 --> 07:06.000
So then we started to actually see our system under stress and get an idea of what was broken and what we could improve on.

07:06.000 --> 07:13.760
We found that, no surprise, everything was slow inside the application; it's all business logic:

07:13.760 --> 07:20.800
we were waiting on third-party requests to different backends, database load, that kind of stuff.

07:21.720 --> 07:28.320
We also just noticed weird slowness. With uWSGI, we use a concept called green threads,

07:28.320 --> 07:38.040
which effectively means that when the Python would wait for IO, it goes off and picks up another request and starts handling it.

07:38.040 --> 07:45.520
But the request that was waiting wouldn't be able to restart until the other request finished.

07:45.520 --> 07:53.560
So if you had a really quick request come in, it could block waiting on a very slow request that came in right after it,

07:53.560 --> 07:57.440
even though it only needed to do a very quick database lookup.

07:57.440 --> 08:07.920
Unfortunately, we do a lot of database lookups. It's something we're working on, but we're basically constantly doing database lookups.

08:07.920 --> 08:16.560
So long, slow tasks could get in the way and cause insane latency spikes while we were trying to figure out what was happening.

08:16.560 --> 08:22.480
We also noticed that a lot of our time with these very slow requests was spent in serialization.

08:22.480 --> 08:26.480
So we send back very large JSON and XML payloads.

08:26.480 --> 08:32.560
If you've ever looked at, say, a Python package index file listing all the packages:

08:32.560 --> 08:40.960
we have to generate that constantly for our users, for their private registries, and these can be a massive serialization cost for us.

08:40.960 --> 08:46.560
And then, yeah, we had our complex proxy chain, which we didn't have great visibility into;

08:46.560 --> 08:51.600
like, we have metrics from HAProxy and traces from nginx, but not the other way around.

08:51.600 --> 08:57.600
So we can see how long you sat in HAProxy, and we can see how long your request took between nginx and our application,

08:57.600 --> 09:00.800
and then it's kind of just guessing from there.

09:00.800 --> 09:08.800
There wasn't one thing where we said, oh look, we're clearly just spending a lot of time in this one database lookup or anything like that.

09:08.800 --> 09:14.880
It was lots of little cuts here and there that we needed to like resolve and fix.

09:14.880 --> 09:19.520
So we started looking at a couple of different things.

09:21.120 --> 09:23.920
The ordering here is probably wrong, actually it is wrong, so I'm going to go through these in a different order.

09:24.880 --> 09:30.480
But I'm going to touch on these tools: Granian, orjson and jsonschema-rs.

09:31.760 --> 09:35.840
Each solves a different bottleneck that we were constantly hitting.

09:38.480 --> 09:47.520
So yeah, I'll go through them all. So orjson is a drop-in replacement for JSON serialization in Python.

09:47.600 --> 09:57.280
It uses PyO3 under the hood. It's quicker, by about six to 12 times, than the standard json library.

09:57.280 --> 10:03.440
Though I say drop-in, it's not quite a true drop-in for the Python one, because it returns

10:03.440 --> 10:12.800
bytes rather than strings, and that affects both dumps and loads. But it does cover the vast majority of use cases.

10:13.600 --> 10:20.240
Django, being the very handy framework it is, actually just had a plug-in point that said,

10:20.240 --> 10:24.400
do you need to serialize into JSON? You can just switch it out for orjson.

10:24.400 --> 10:27.520
So we just switched in that library, and suddenly we were faster.

10:28.160 --> 10:35.040
This is also true of our JSON logger. We serialize all of our logs to JSON, and that adds up:

10:35.040 --> 10:40.400
when you're handling 110 million requests, all that time adds up. So we switched that out for

10:40.480 --> 10:47.520
orjson as well.

10:48.160 --> 10:56.560
Then we got on to the more annoying part. We call json.loads and json.dumps throughout our codebase, and with this one we had to go through every single call site and switch it

10:56.560 --> 11:02.480
over to orjson.loads or orjson.dumps, determine whether we needed bytes or strings, or what we could

11:02.480 --> 11:09.120
get away with. I would like to say that this would have been automatable with some sort of tooling.

11:10.080 --> 11:15.760
But I never even thought about doing that. I just had a train ride, so I sat down

11:15.760 --> 11:20.960
and started hammering through it for two hours and then just hoped the tests would catch everything.

11:22.960 --> 11:31.120
Yeah, so that's what it looked like. It's very simple; it's basically the exact same

11:31.200 --> 11:40.960
interface, just, as I said, the type is slightly different. And it worked. We only had one issue: one customer

11:40.960 --> 11:46.720
noticed we changed this, because they were parsing their response with a regex in bash.

11:47.840 --> 11:53.600
And with one of those, yeah, you kind of go: how, what are you doing there such that

11:53.600 --> 11:59.840
this is broken? We looked, and we told them, have you tried jq? And they said, no, we'll try that.

12:01.600 --> 12:07.440
Yeah, and there's also one of the fun little things: orjson doesn't put any whitespace in its output,

12:07.440 --> 12:13.040
so payloads are ever so slightly smaller, which is nice. Yeah, happy little outcome for that one.

12:14.480 --> 12:20.400
We then looked at jsonschema-rs, which is for, like, validating your payloads and stuff like that.

12:22.160 --> 12:27.680
This was one where clearly an engineer long before me had found the Rust library and said,

12:27.760 --> 12:32.240
let me experiment with this. We were using it in about half the places we were doing

12:32.240 --> 12:37.440
schema validation; in the other half we were using the Python one. Clearly it worked for us. I just

12:37.440 --> 12:43.680
one day changed one import in half of our files, did nothing else, and shipped it. I didn't even

12:43.680 --> 12:49.120
actually tell anyone I shipped it. I just did this one day as part of a bug fix, and it worked.

12:49.920 --> 12:55.360
And we got some nice improvements, but like I said, I didn't actually test this one because

12:55.440 --> 13:00.400
the change was so small and we were already doing it, so I felt no need to actually

13:00.400 --> 13:11.920
validate it. Then we got on to the big one: Granian. So Granian is a Tokio/Hyper

13:12.320 --> 13:21.120
server for Python. It effectively runs a Hyper event loop, a Hyper server that takes in HTTP requests

13:21.200 --> 13:28.640
and farms them out to Python workers. It supports ASGI, WSGI, and a custom protocol called

13:28.640 --> 13:36.640
RSGI, which I haven't had the chance to experiment with. We use WSGI, so that was pretty

13:36.640 --> 13:42.880
good for us. I want to say that this was just a matter of, like, I was just googling:

13:42.880 --> 13:47.760
I looked at our requests, and I said, this looks just like a job for a Hyper server. So I literally just googled

13:47.840 --> 13:52.400
hyper for Python, and this was the first thing I found, and then I started reading the code.

13:54.800 --> 14:00.160
One of the reasons it was very good for us is that it actually replaced everything in our

14:00.160 --> 14:08.320
stack. It does TLS termination, it does request queuing, and it does the farming out of work to workers. So

14:08.320 --> 14:14.960
the big thing that we wanted was to be able to get more out of our

14:14.960 --> 14:20.800
compute. Because all these services ran on the same box, if we could replace them with one thing

14:20.800 --> 14:25.520
that did it all, we could maybe squeeze more cycles out of each request and each box.

14:27.440 --> 14:33.760
The other reason we were looking specifically at Rust tooling for this one is that I'm a Rust developer

14:33.760 --> 14:38.960
in a Python shop. I don't enjoy writing Python; I do it because I'm

14:38.960 --> 14:46.320
paid to do it. But there was a willingness that if I found code that needed to be fixed in Rust,

14:46.320 --> 14:49.360
I would be told I could fix it, I could help out, I could write features.

14:50.720 --> 14:55.520
It also has an active developer who's constantly improving it and making things better for us.

14:57.440 --> 15:06.720
So we moved on to it, and we did some load testing. This is just replacing uWSGI with Granian.

15:07.280 --> 15:16.640
We were able to push 32% more requests through the system with latency going down for the vast majority

15:16.640 --> 15:23.520
of things. This is actually to do with that threading model again: because we weren't

15:23.520 --> 15:28.880
getting those random waits, each request was actually going through and living out its life cycle

15:28.880 --> 15:34.400
rather than randomly getting these latency spikes. That's where,

15:34.480 --> 15:40.960
we believe, the weird latency spikes on the P95s went: these are the real P95s rather than

15:40.960 --> 15:48.240
the random P95s. So we could push more traffic, and we didn't have any errors that we noticed at the

15:48.240 --> 15:56.960
time. We then decided we could rip out everything and try with just Granian, and no

15:56.960 --> 16:03.600
nginx, no HAProxy. That was the problem with the previous test: we weren't really testing the

16:03.600 --> 16:09.520
queuing, which is a core part of our infrastructure. If we can't queue requests, they get

16:09.520 --> 16:14.480
dropped on the floor, and then they get redriven, and then we get a thundering herd problem; it looks like

16:14.480 --> 16:23.120
request after request that we're not doing work for. So when we did that, we actually saw a 2x

16:23.120 --> 16:32.960
increase in speed. No, a 2x increase in throughput, with a 50% reduction in latency. I honestly didn't

16:32.960 --> 16:37.600
believe the numbers when I did this, so I had to rerun the tests a couple of times. So these numbers

16:37.600 --> 16:44.240
are averaged over multiple test runs, but yeah, good numbers, and we were pretty happy.

16:44.960 --> 16:50.480
So we began testing it out in actual environments, real requests and all that kind of stuff.

16:51.200 --> 16:57.520
We ran it out to staging; everything looked good. We then moved it to our internal environment;

16:57.520 --> 17:07.040
everything looked good. We then rolled it out to one region, then on to the next region, and things stopped

17:07.040 --> 17:15.680
looking so good. Like I said, I think it was on the final region, yeah, things exploded.

17:16.560 --> 17:21.760
So I was actually about to go off and have coffee. That's how good I thought it was going to be.

17:21.760 --> 17:28.960
I was like, I'm done for the day, kind of thing. We exploded the database when we hit the last region,

17:28.960 --> 17:35.760
because we'd saved it for last: the last region was our primary region, and what we didn't know is that

17:36.640 --> 17:41.360
we didn't have connection pooling. And I do not know how we could have so many engineers,

17:41.360 --> 17:47.200
not know we didn't have connection pooling. But for those that don't know, Django 4 and lower

17:47.200 --> 17:55.200
doesn't have connection pooling. It just has connection reuse. So every thread had a dedicated connection.

17:56.480 --> 18:03.440
And as we spun up more threads, more services, we maxed out our primary database,

18:03.440 --> 18:10.240
and we couldn't connect to it and everything collapsed. The uWSGI green threads,

18:10.240 --> 18:17.360
because of how they handled the reuse of requests, effectively meant it kind of pretended we had

18:17.360 --> 18:25.840
connection pooling. We looked out different ways to figure this out. We tried RDS proxy because we're

18:25.840 --> 18:33.360
an Amazon shop and we didn't have time to spin up PgBouncer. It's not great. So maybe

18:33.360 --> 18:38.800
we should try PgBouncer and see if it's better. So we're just waiting for our Django 5

18:38.800 --> 18:42.880
upgrade before we can actually get this rolled out, because there is real connection pooling in

18:42.880 --> 18:49.600
that, and we're pretty sure it's going to fix this. So we're not 100% finished with this, but we're

18:49.600 --> 18:53.840
ready to go once we get our Django upgrade running. We're still running this in our internal

18:53.840 --> 18:58.720
environments, where our testing is, and we know it's still working for us for our internal traffic.

18:59.760 --> 19:05.680
We also ran into some fun with HTTP. I think I was chatting to people about this last night.

19:06.800 --> 19:14.720
nginx is doing something weird to our traffic that fixes it for us. We have no idea what it is, but

19:14.720 --> 19:21.360
package formats are really weird. Package managers are really weird. They do weird things constantly

19:21.440 --> 19:28.960
that are not exactly correct HTTP. They'll follow 307s to look for packages

19:28.960 --> 19:33.440
at different locations, like if you're hosting on S3 or on a different CDN, because

19:34.240 --> 19:38.800
you do it yourself. They'll go to that backend and pull it down and say, great, I've got

19:38.800 --> 19:44.400
the file. But if you're Docker pulling a layer, and you say the body of the 307 is 0,

19:45.040 --> 19:52.000
Docker will then look for a body that's 0 at the other end. And if it doesn't have a body of 0,

19:52.000 --> 19:55.520
it'll actually still download it, but it will encode in the metadata of the image that

19:55.520 --> 20:04.400
the body is 0, which, unsurprisingly, doesn't work.

20:04.400 --> 20:09.600
Our LB had another one because we were setting that body. We only noticed

20:09.680 --> 20:14.320
we were sending the body wrong because it caused our LB's health checks to randomly error out. That's

20:14.320 --> 20:20.640
how we found it and why we fixed it, and why we corrupted a lot of users' Docker pulls. Yay. I will always be willing to

20:20.640 --> 20:27.120
complain about Docker until the end of time. We're not going to fix it in our edge; we'll just

20:27.120 --> 20:31.440
do some magic things. But Hyper was doing things correctly, and nginx was fixing those things. Weirdly,

20:31.440 --> 20:36.080
it's not only a Hyper server thing. While doing the RCA for it, we actually discovered we had the exact same

20:36.080 --> 20:45.280
bug when we attempted to use Gunicorn instead of uWSGI. So it's not just Hyper; it seems to be that

20:45.280 --> 20:53.200
uWSGI and nginx just magically make this work. They shouldn't. It also broke our... one of the

20:53.200 --> 20:59.040
nice things was we could run on a user's laptop; uWSGI is really painful to run on a user's laptop.

20:59.040 --> 21:04.080
So we switched our dev environments to Granian, and it got us testing the same as we run in

21:04.960 --> 21:09.280
production. Unfortunately, it broke all our debuggers. We chatted to the maintainer. He says

21:10.000 --> 21:14.480
he's not going to fix it. The reason is that the standard Python debugger's model can't connect

21:14.480 --> 21:20.560
through the Rust PyO3 layer running it. There's no real way around this. We just had to switch back

21:20.560 --> 21:27.280
to a normal WSGI thing. I have a lot of notes that I'm going to speed through, because I think

21:27.280 --> 21:31.920
there's a lot of interesting things. We spent a lot of time just tuning it. We got values to match

21:32.000 --> 21:38.400
our HAProxy setup: workers, backlogs. If you want to talk more about that, I can talk

21:38.400 --> 21:46.640
more about that. So we were then rolling it out, and we realized there were no metrics in Granian.

21:46.640 --> 21:51.680
Like I said, we needed that visibility. So this is where I actually got the time to start writing

21:51.680 --> 22:01.280
metrics. So we forked the project, and we added in a metrics collector for Prometheus.

22:02.400 --> 22:09.520
Because of the way it works, we needed to move metrics across from each Python process

22:09.520 --> 22:17.280
back to the main Rust process to expose the metrics to the Prometheus server.

22:19.280 --> 22:23.920
It works. We got it working. We created a PR. We were super happy. And this is the

22:23.920 --> 22:31.440
fork we're running internally in our versions. We never got it merged, because we only test on Linux

22:31.440 --> 22:35.600
and Mac. The maintainer wants Windows support, so it didn't get there.

22:36.800 --> 22:43.280
That said, on Thursday or Friday, the maintainer took our PR and didn't merge it, but

22:43.280 --> 22:47.440
implemented his own one that matches what he wanted, and it's merged in, and it should be part

22:47.440 --> 22:52.240
of the next release. So you can get backpressure metrics: what's happening,

22:52.240 --> 22:57.280
how many requests are in flight, how many Python processes, all that stuff. So we'll be switching

22:57.360 --> 23:08.160
to this in, I think, two weeks' time, or whenever it ships. So this is what we're running

23:08.160 --> 23:12.720
now. Django 5: we're almost there. I have little time left, so I'm going to speed through. The

23:12.720 --> 23:20.080
real win here was that the company now has faith in Rust and our ability to use Rust, to the point

23:20.080 --> 23:25.280
that we're actually rewriting more of our stack in Rust. So you might think, okay,

23:25.360 --> 23:29.920
cool, we'll use PyO3 and ship it in the monolith. No, we're just going to throw out

23:29.920 --> 23:34.800
our Python edge network and rewrite our edge network in Rust. So we're starting a project now

23:34.800 --> 23:39.120
to do all that. But none of that would have been possible, and I don't think I would have had the sign-off

23:39.120 --> 23:44.640
to do it, if we hadn't proved that we can do things faster by just these small little improvements

23:44.640 --> 23:50.880
in the stack in Rust. It really made that cultural difference, and people had the faith.

23:51.840 --> 23:58.800
So if you're wanting to use Rust in Python, look for the bottlenecks: the serialization, the IO;

23:58.800 --> 24:04.480
those things are just better in Rust, and you'll find that the error handling is good,

24:04.480 --> 24:10.400
comparatively. If you want to do it yourself, use PyO3. I've done it for small side projects as well.

24:10.400 --> 24:16.160
It generates really nice Python. You wrap the code in some decorators, and you

24:16.160 --> 24:23.440
generate some very simple Python objects that are well typed and easy to use.

24:23.440 --> 24:28.480
If you want people to adopt your thing, aim for API compatibility. If it's drop-in, I'll probably just

24:28.480 --> 24:34.160
throw it in and see if it breaks my tests. And if it doesn't, I'll ship it. And with one minute to

24:34.160 --> 24:48.880
spare, that's it. Thank you.

24:48.880 --> 24:54.720
Have you got any questions?

24:55.680 --> 25:04.400
For the recording: just a quick question. Any chance you are going to blog about or open-source

25:04.400 --> 25:10.000
your Goose load tests? Because the scientific method you took was pretty awesome.

25:10.000 --> 25:16.400
Yeah, I would, but I can't actually open-source the tests, because they're very specific and there's

25:16.400 --> 25:22.080
some stuff in there that, if I open-sourced it, I would get in a little bit of trouble. But I'd say just

25:22.080 --> 25:26.640
go have a look at Goose. They have really great examples of how to write tests in the repository.

25:27.680 --> 25:32.560
I would recommend it for load testing because you can compile it to a binary and then ship

25:32.560 --> 25:36.480
that binary around the world and then simulate load from anywhere in the world,

25:37.200 --> 25:41.360
which is great when you have a CDN like us and need to actually do valid end-to-end tests.

25:44.320 --> 25:49.920
I have a non-technical question, maybe a social question. So you said that the maintainer of

25:50.640 --> 25:57.200
one of the projects you're upstreaming to, they rewrote your patch? Yes. I've seen this a few times.

25:57.920 --> 26:02.000
How do you feel about that, because then you're maybe not a co-author on it?

26:02.560 --> 26:08.720
I, yeah, it was a little like, oh, I wanted to get around to this, but I didn't, and it's his

26:08.720 --> 26:15.120
project. It's not my project. I was a little sad, but, like, I want the feature more

26:15.120 --> 26:21.760
than I want to be the contributor myself. And I had let that patch sit there awaiting his feedback

26:21.760 --> 26:26.160
for months. Like, he wants the feature, I want the feature, I didn't have time to finish it.

26:26.160 --> 26:30.160
That's the unfortunate part of it. You don't always get it right the first time.

26:30.160 --> 26:34.960
It took maybe like two weeks to get back a review. And by that time, my work couldn't give

26:34.960 --> 26:40.400
me more time to go back and rework the feature. Yeah, exactly.

26:40.400 --> 26:42.400
Well, thank you very much. Thanks.

