WEBVTT

00:00.000 --> 00:22.000
Thank you. Thank you very much for the introduction. A brief disclaimer before we start: we're going to look at quite a lot of code in this talk, but don't worry, we'll get through this together.

00:22.000 --> 00:32.000
My name is Marius Kleidl. I work as a senior software engineer at Stream, where I mostly build backends using Go, and have been doing so for over 10 years now.

00:32.000 --> 00:38.000
So that's pretty nice, getting to use Go professionally.

00:38.000 --> 00:50.000
This talk is about file uploads, and how we can improve them. So let's actually see what a typical file upload looks like and what problems can arise from this.

00:51.000 --> 01:04.000
Here we see a typical setup. On the left-hand side, what should be a phone represents a client that wants to upload a video file, for example, to a server on the right-hand side, represented by the file cabinet.

01:04.000 --> 01:14.000
So let's assume the upload begins and all goes well, until stuff breaks and an interruption occurs.

01:14.000 --> 01:23.000
This interruption can be basically caused by anything. It can be caused by a network outage. It could be that the connection simply drops.

01:23.000 --> 01:30.000
It could be that you're switching from Wi-Fi to cellular data because you're leaving the house or this conference room, for example.

01:30.000 --> 01:37.000
It could also be caused by the server crashing, some infrastructure crashing, or your device shutting down.

01:37.000 --> 01:50.000
Now that's pretty annoying. And the way to resolve this in that case is basically to just try again. You start uploading the data again from scratch from offset zero.

01:50.000 --> 02:02.000
So you transmit everything again. In this case, we hope it works fine. And if it works fine, that's great. But at the end of the day, this is not a really efficient process.

02:02.000 --> 02:09.000
Because essentially, the first half of the data was already transmitted to the server, but the server just had to ignore it.

02:09.000 --> 02:21.000
It wasn't able to use the data, and it had to be retransmitted. This wastes resources on the server side and on the client side, as well as causing a bad user experience.

02:21.000 --> 02:24.000
Because uploads just take longer, right? You have to retransmit data.

02:24.000 --> 02:32.000
And this is especially problematic if you're working with unreliable networks, for example in mobile applications or you're dealing with large files, right?

02:32.000 --> 02:38.000
Video recordings can be pretty huge these days and transferring them over the network can be challenging.

02:38.000 --> 02:49.000
So can we make this more resilient in terms of handling those errors better? Can we do better than just trying again and throwing all the work away that was done before?

02:49.000 --> 03:03.000
And yes, there is a way, and that's what resumable uploads are about. It's about taking errors, handling them gracefully, being resilient, and recovering from them.

03:03.000 --> 03:08.000
So let's see how resumable uploads can help with this.

03:08.000 --> 03:17.000
At the beginning, the client initiates an upload procedure by telling the server, hey, I want to upload this file resumably.

03:17.000 --> 03:26.000
It sends some metadata: for example, the file size. It could also include the media type, information about authentication and authorization,

03:26.000 --> 03:35.000
how to associate the file, et cetera. The endpoint, /files in this case, is something that will be application-dependent.

03:35.000 --> 03:41.000
So it has to be configured in advance, but that's usually what we do on the client side anyway.

03:41.000 --> 03:46.000
The server, in that case, creates what we call an upload resource.

03:46.000 --> 03:52.000
An upload resource is a resource on the server side that tracks the upload state.

03:52.000 --> 04:02.000
It is unique to each upload and has a unique URL represented on the right-hand side by this location header that is then communicated back to the client.

04:02.000 --> 04:07.000
And then the client can commence to transfer the data.

04:07.000 --> 04:13.000
We assume again that it progresses nicely, until disaster strikes again and it is interrupted.

04:13.000 --> 04:24.000
Instead of resuming from the beginning again, instead of starting from zero again, we can then ask the server: how much data did you receive?

04:24.000 --> 04:29.000
That happens, for example, using a HEAD request against this upload resource.

04:29.000 --> 04:35.000
So essentially, the client asks, hey, server, how much data did you get? What are you still missing?

04:35.000 --> 04:42.000
The server then looks at its upload resource and can respond with, for example: I received 50%.

04:42.000 --> 04:47.000
We communicate that through what we call an offset: how many bytes did you already receive?

04:47.000 --> 04:54.000
For example, out of 20,000 bytes, 10,000 already received, so 50% is still remaining.

04:54.000 --> 04:59.000
And from there on, we just can continue to upload where we left off.

04:59.000 --> 05:06.000
We're just transferring the remaining 50% by using a PATCH request against the upload resource.

05:06.000 --> 05:12.000
And that's nice, and that's how we can complete the upload without issues.

05:12.000 --> 05:18.000
And most importantly, that's without retransmitting any data that has already been transferred.

05:18.000 --> 05:25.000
We're saving resources on the server side, on the client side, and we're also getting great user experience.

05:25.000 --> 05:30.000
Because we're recovering from errors without delaying the upload.

05:30.000 --> 05:35.000
Now, you actually don't have to implement this because we already did so.

05:35.000 --> 05:42.000
This is part of the tus project that we started, I think, 13 years ago.

05:42.000 --> 05:46.000
When we realized that this is a problem that's not specific to our application,

05:46.000 --> 05:49.000
but that many applications need these features.

05:49.000 --> 05:53.000
Uploads are everywhere, large file sizes are everywhere.

05:53.000 --> 05:59.000
So in 2014, we started building an open HTTP-based protocol for file uploads.

05:59.000 --> 06:03.000
The idea being that we didn't want to have specific implementations,

06:03.000 --> 06:08.000
but that we wanted to have an open protocol that could be implemented in many languages,

06:08.000 --> 06:14.000
for many environments and platforms, and that would all be interoperable.

06:14.000 --> 06:16.000
And that's also what happened.

06:16.000 --> 06:22.000
Together with the community, we were able to create many implementations for all of your favorite languages,

06:22.000 --> 06:29.000
for all environments, whether it be the browser, Android, iOS, many server side SDKs as well.

06:29.000 --> 06:33.000
And those are all free and ready for you to use.

06:33.000 --> 06:42.000
And of course, we also have a Go implementation, and this is how it fits into this track here.

06:42.000 --> 06:50.000
Because I want to show you a few patterns that have been really helpful while building a resumable file upload server in Go.

06:50.000 --> 06:56.000
So let's start with a really naive implementation. How could we build this?

06:56.000 --> 07:01.000
Here you see a simple implementation for a request handler.

07:01.000 --> 07:08.000
We see on the top our typical handler signature: we get the response writer, we get the request.

07:08.000 --> 07:14.000
Down below we fetch our request context because that's what we do nowadays.

07:14.000 --> 07:19.000
Here we have a first glimpse of our upload specific logic.

07:19.000 --> 07:25.000
We assume we have some state function that we can fetch the upload resource from.

07:25.000 --> 07:30.000
We assume that it creates or fetches it, but those details are not that important right now.

07:30.000 --> 07:38.000
What we do afterwards is any data that we read from the request body, we want to append to the upload.

07:38.000 --> 07:42.000
Because this is essentially how resumable uploads work.

07:42.000 --> 07:50.000
We take data, append to the upload, and then even after errors occur, we can just continue appending there until it's done.

07:50.000 --> 07:56.000
Now, to be fair, there's a bit of validation missing here, but this is just example code.

07:57.000 --> 08:03.000
So let's have a look at what this append function might look like.

08:03.000 --> 08:08.000
Here you can see on the top left, this is an example for an S3 based upload backend.

08:08.000 --> 08:17.000
Usually you want to store your files on a certain backend that could either be the file system, S3, Google Cloud,

08:17.000 --> 08:21.000
Azure, some other cloud storage, or an FTP server, if you're into that.

08:21.000 --> 08:24.000
But this doesn't really matter too much.

08:24.000 --> 08:29.000
But the flow is usually from our experience quite similar.

08:29.000 --> 08:34.000
When looking at the function signature, we receive a reader,

08:34.000 --> 08:38.000
and we then copy it into a temporary file.

08:38.000 --> 08:44.000
We buffer it because very often at the beginning, we don't really know what's the best way to transfer it to the backend.

08:44.000 --> 08:48.000
Because that very often depends on how much data we actually received.

08:48.000 --> 08:56.000
But just bear with me: we save it to a file, and then later transfer it to S3, for example, in this case.

08:56.000 --> 08:59.000
Now, this seems to look fine.

08:59.000 --> 09:04.000
You know, the error handling is all good, but there's actually one issue with this.

09:04.000 --> 09:15.000
And that is that if the request runs into some error, for example an I/O timeout or an unexpected end of file,

09:15.000 --> 09:19.000
then io.Copy will obviously return an error.

09:19.000 --> 09:21.000
But in our case, that's exactly the error we want to deal with.

09:21.000 --> 09:24.000
We want to handle this error properly.

09:24.000 --> 09:30.000
But in this case, it will just return, and then no data is stored at all.

09:30.000 --> 09:38.000
And if an error happens, then the entire request is basically wasted because we don't save any data from our upload.

09:38.000 --> 09:40.000
So how can we handle this?

09:40.000 --> 09:49.000
Well, of course, we could go in there and add some error handling, saying: okay, if you get this error, ignore it, but then return it later.

09:49.000 --> 09:52.000
But this is kind of cumbersome because we have many of these backends.

09:52.000 --> 09:56.000
It's not just S3, we have this in many different implementations.

09:56.000 --> 10:06.000
And we decided to go a slightly different way and say: okay, what if we can shield this implementation from this error altogether, right?

10:06.000 --> 10:09.000
So that it doesn't have to concern itself with this error.

10:09.000 --> 10:19.000
Because we actually don't have to let our S3 backend care about whether an error happened or not.

10:19.000 --> 10:24.000
So the pattern that we kind of came up with is something you could call a happy reader.

10:24.000 --> 10:29.000
It's happy because it kind of ignores errors for a certain time.

10:29.000 --> 10:34.000
In this struct definition, you can see it just wraps another IO reader as a source.

10:34.000 --> 10:39.000
And then we have a Read implementation of the io.Reader interface

10:39.000 --> 10:42.000
that basically just reads from the underlying source.

10:42.000 --> 10:50.000
But the important part is that any error that happens is basically masked as an io.EOF error.

10:51.000 --> 10:59.000
So whenever something happens, like an I/O timeout, this is actually shown to the consumer as just the stream ending.

10:59.000 --> 11:13.000
And this is exactly what we want, because our S3 backend, for example, doesn't care whether the request stream aborted because of an error or because it ended successfully.

11:13.000 --> 11:16.000
For them, it should be just the same.

11:16.000 --> 11:20.000
But of course, we still have to be good citizens and actually handle the error.

11:20.000 --> 11:31.000
So we kind of store it and allow the request handler in the end to still fetch it, log it, and handle it properly.

11:31.000 --> 11:32.000
How could we use this?

11:32.000 --> 11:37.000
So this is an excerpt from our request handler from before.

11:37.000 --> 11:42.000
We can now see that we wrap our request body into a happy reader.

11:42.000 --> 11:45.000
We call our append function again.

11:45.000 --> 11:52.000
But this time, even if something happens with the read stream, the append function doesn't see the error.

11:52.000 --> 11:55.000
It just sees the stream ending gracefully.

11:55.000 --> 12:00.000
And we can then handle the error nicely later in the request handler.

12:00.000 --> 12:11.000
And I think this is a really great example because it shows how you can gracefully handle errors by shielding parts of the application that don't actually need to care about this error.

12:11.000 --> 12:17.000
But there's still one more issue.

12:17.000 --> 12:21.000
In the beginning, we started using the request context.

12:21.000 --> 12:25.000
And the request context is a really nice feature.

12:25.000 --> 12:34.000
It allows you to scope or limit the execution of your request handlers to the actual duration of the request.

12:34.000 --> 12:43.000
So for example, if the request is aborted or canceled, which is possible in newer HTTP versions, the request context is canceled as well.

12:43.000 --> 12:46.000
And all your resources are aborted and freed.

12:46.000 --> 12:47.000
That's nice.

12:47.000 --> 12:50.000
But this is actually problematic in our case.

12:50.000 --> 12:53.000
Let's assume the request is canceled.

12:53.000 --> 12:59.000
The request stream is closed, but we handle that error gracefully now.

12:59.000 --> 13:02.000
But the request context is also canceled.

13:02.000 --> 13:08.000
So our append function down here will be left with a canceled context.

13:08.000 --> 13:09.000
And that doesn't work.

13:09.000 --> 13:14.000
It's then not able to perform the network request to save your data.

13:14.000 --> 13:17.000
So what can we do about this?

13:17.000 --> 13:23.000
We could of course think about using context.Background(), but don't do that.

13:23.000 --> 13:25.000
We don't want to have that.

13:25.000 --> 13:31.000
What we introduced instead is what we call a delayed context.

13:31.000 --> 13:37.000
It's a wrapper around, or it consumes, another context: the parent context.

13:37.000 --> 13:42.000
But then delays the propagation of the cancellation signal.

13:42.000 --> 13:48.000
We can see here, we just create a new context, we wait until the parent is done,

13:48.000 --> 13:52.000
sleep an additional delay, and then just cancel.

13:52.000 --> 14:00.000
The cancellation, or rather its propagation, is effectively delayed by a certain amount.

14:00.000 --> 14:02.000
How can we use this?

14:02.000 --> 14:05.000
Well, here we're back in our handler again.

14:05.000 --> 14:15.000
And in the top line, you see that we wrap our request context in a delayed context.

14:15.000 --> 14:19.000
In this example, we delay the cancellation by 10 seconds.

14:19.000 --> 14:21.000
How does this work now?

14:21.000 --> 14:29.000
If we assume that the request is canceled, the request context is canceled,

14:29.000 --> 14:31.000
basically immediately again.

14:31.000 --> 14:37.000
Then the request body is closed, but we handle this error gracefully, because of our happy reader.

14:37.000 --> 14:45.000
And after that, the append function still has 10 seconds to save the data.

14:45.000 --> 14:49.000
After that, of course, our delay context will be canceled as well,

14:49.000 --> 14:55.000
making sure that we're not hanging on any requests or consuming any network resources

14:55.000 --> 14:59.000
longer than we need to.

14:59.000 --> 15:03.000
Together with the error shield that we've introduced before, this works really well.

15:03.000 --> 15:06.000
But it's kind of complicated logic, right?

15:06.000 --> 15:13.000
You're doing stuff with contexts that's, you know, kind of involved: are you sure that this is actually working properly?

15:13.000 --> 15:18.000
Well, luckily, Go not only provides you with the concurrency structures themselves,

15:18.000 --> 15:20.000
and with context itself,

15:20.000 --> 15:23.000
but nowadays you also have a nice package to test it.

15:23.000 --> 15:26.000
That's the synctest package.

15:26.000 --> 15:32.000
I actually planned to go into this in a bit more detail, but Rona had a really great talk about this just earlier.

15:32.000 --> 15:39.000
So I'm not going to go too much into details and just tell you to watch her talk if you really want to learn more about this.

15:39.000 --> 15:42.000
But how could you test this delay context anyways?

15:42.000 --> 15:47.000
You just use the synctest Test function to create a bubble,

15:47.000 --> 15:51.000
and then you can run the function in there.

15:51.000 --> 15:55.000
For example, up here we create a new delay context.

15:55.000 --> 15:58.000
We cancel it immediately for our test cases.

15:58.000 --> 16:05.000
But then we wait until just before the cancellation should be propagated.

16:05.000 --> 16:11.000
So this is basically five seconds minus one millisecond, just before it should be canceled.

16:11.000 --> 16:13.000
We assert that it's not canceled yet.

16:13.000 --> 16:17.000
And then we wait the one final millisecond,

16:17.000 --> 16:23.000
and then we can actually ensure that our context is canceled.

16:23.000 --> 16:28.000
And this is, I think, a really powerful tool, because now we not only have all of these concurrency patterns,

16:28.000 --> 16:31.000
but we can actually make sure that it works properly.

16:31.000 --> 16:38.000
It does what we want it to, and there are no side effects, no leaking goroutines, nothing.

16:38.000 --> 16:42.000
So, yeah, this is what I really like about Go.

16:42.000 --> 16:48.000
Now, talking about cancellation, we've focused on cancellations from the client side.

16:48.000 --> 16:52.000
But there's also cancellations from the server side.

16:52.000 --> 16:56.000
Now, for example, assume you want to shut down your server,

16:56.000 --> 17:03.000
because you're deploying a new release, or you want to scale down your services,

17:03.000 --> 17:06.000
or you're using spot instances, for example,

17:06.000 --> 17:08.000
and you have to shut down your server.

17:08.000 --> 17:12.000
For anything like that, Go has a nice function:

17:12.000 --> 17:16.000
the Shutdown method of the HTTP server.

17:16.000 --> 17:21.000
But if you read the function definition, it basically says,

17:21.000 --> 17:24.000
Shutdown gracefully shuts down the server

17:24.000 --> 17:28.000
without interrupting any active connections.

17:28.000 --> 17:34.000
Meaning it will stop accepting new requests, but the existing ones it will keep running.

17:35.000 --> 17:39.000
That is, of course, intentional, because Go doesn't know what your application is doing,

17:39.000 --> 17:45.000
so it just safely assumes: okay, the application will handle them in the way that's best for it.

17:45.000 --> 17:51.000
But in our case, because we've built this framework for handling errors

17:51.000 --> 17:57.000
resiliently and gracefully, we're actually fine with shutting down these requests,

17:57.000 --> 18:01.000
and then letting the client retry them again, because it can resume,

18:01.000 --> 18:05.000
because we save all the data that we already received.

18:05.000 --> 18:09.000
But if we want to interrupt these requests, how can we actually do this?

18:09.000 --> 18:17.000
If you remember, in the beginning we used an io.Copy call to consume the request body,

18:17.000 --> 18:21.000
and then store it somewhere before we transfer it somewhere else.

18:21.000 --> 18:26.000
But the io.Copy might actually hang.

18:26.000 --> 18:29.000
So, it's waiting on the next Read call to finish,

18:29.000 --> 18:36.000
but this could just hang because the network is out, or because there are some requests from users that are just not progressing anymore.

18:36.000 --> 18:42.000
And, of course, you can wait until a timeout kicks in, but we can actually do better.

18:42.000 --> 18:50.000
The first idea might be, okay, let's cancel the request context, but this also doesn't work,

18:50.000 --> 18:56.000
because canceling the request context doesn't affect the request body stream.

18:56.000 --> 19:02.000
You might think: okay, is there something like an io.CopyContext function?

19:02.000 --> 19:06.000
But it doesn't exist, and for pretty good reasons.

19:06.000 --> 19:11.000
But this was my first thought: is there an io.CopyContext function that I can pass a context to, so that

19:11.000 --> 19:15.000
it will just stop copying? But no, that doesn't exist.

19:15.000 --> 19:18.000
We need a better way to interrupt it.

19:18.000 --> 19:23.000
And this is actually really nice, because when you are faced with this problem,

19:23.000 --> 19:29.000
and then you scroll through the standard library, at some point, you will just realize that the HTTP package,

19:29.000 --> 19:35.000
even though it seems pretty simple in the beginning, is full of hidden gems that usually will help you.

19:35.000 --> 19:41.000
For example, this is an excerpt from the documentation of a request field,

19:41.000 --> 19:45.000
the Body field of the Request struct.

19:45.000 --> 19:48.000
And for us important is the last paragraph.

19:48.000 --> 19:54.000
It basically says that Body must allow Read to be called concurrently with Close.

19:54.000 --> 20:00.000
In particular, calling Close should unblock a Read waiting for input.

20:00.000 --> 20:02.000
And this is exactly what we want.

20:02.000 --> 20:08.000
So effectively, when we want to shut down our request, we can just call the request body's Close method,

20:08.000 --> 20:14.000
and all of our waiting Read calls will come to a halt.

20:14.000 --> 20:16.000
This is exactly what we can do.

20:16.000 --> 20:18.000
So let's wire it up.

20:18.000 --> 20:26.000
On the top line, you will see we are creating a server context.

20:26.000 --> 20:31.000
This is basically a context that you should cancel when you want to shut down the server,

20:31.000 --> 20:34.000
like waiting for an interrupt signal, for example,

20:34.000 --> 20:37.000
or some other application specific logic that tells you,

20:37.000 --> 20:44.000
okay, now we need to shut down.

20:44.000 --> 20:50.000
We then wait on the server context to be closed.

20:50.000 --> 20:53.000
This is all inside of our request handler.

20:53.000 --> 20:59.000
And if we're told to shut down, we just go and close our request body stream.

20:59.000 --> 21:05.000
We also cancel the request, blah blah blah, to make sure nothing is leaked, all is fine.

21:05.000 --> 21:09.000
But this is exactly what we want to have.

21:09.000 --> 21:16.000
Now when the server shuts down, it interrupts our request body,

21:16.000 --> 21:20.000
which then might raise errors, but we handle all of those errors

21:20.000 --> 21:23.000
gracefully through our happy reader.

21:23.000 --> 21:26.000
And because of our delayed context,

21:26.000 --> 21:30.000
we actually still have enough time to save the data from the stream.

21:30.000 --> 21:36.000
So all in all, all of these three patterns kind of work together to make sure that

21:36.000 --> 21:43.000
requests and errors are handled nicely, but all of the data that we receive from the user

21:43.000 --> 21:48.000
is still saved properly to your desired backend,

21:48.000 --> 21:56.000
so you can resume the upload nicely as well in the future.

21:57.000 --> 22:00.000
So this is basically a few patterns I wanted to show you.

22:00.000 --> 22:05.000
You've already heard a lot of talks this weekend, and you're probably going to hear a few more today.

22:05.000 --> 22:09.000
So let me recap a few key takeaways for you.

22:09.000 --> 22:17.000
The first one is long-running requests, which is essentially what we have with file uploads,

22:17.000 --> 22:19.000
because they may take an hour.

22:20.000 --> 22:22.000
Handling them is kind of tricky.

22:22.000 --> 22:27.000
It's not like your normal API requests that just conclude in a few seconds,

22:27.000 --> 22:29.000
but those could actually take a pretty long time.

22:29.000 --> 22:35.000
But go is pretty nice because it gives us a few patterns to help with this.

22:35.000 --> 22:41.000
Essentially, it's because we can handle the errors quite gracefully:

22:41.000 --> 22:44.000
an error in Go is just like any other value.

22:44.000 --> 22:46.000
There are no exceptions that are thrown.

22:46.000 --> 22:53.000
We can just pass them around, store them, retrieve them later.

22:53.000 --> 22:56.000
We can also use context creatively.

22:56.000 --> 23:00.000
You don't only have to use context and just pass it down to your calls,

23:00.000 --> 23:02.000
but you can actually use them creatively.

23:02.000 --> 23:06.000
The delayed cancellation did that work for us in this case; you can branch them out, you can join them together.

23:06.000 --> 23:11.000
All of that really helps you structure your application.

23:12.000 --> 23:16.000
Especially when building servers, the net/http package is your friend.

23:16.000 --> 23:23.000
There are a lot of hidden gems in there, for example for controlling how your response is sent,

23:23.000 --> 23:29.000
And, for example, for handling the request body.

23:29.000 --> 23:33.000
So, really, if you're running into issues with it,

23:33.000 --> 23:38.000
just look at the documentation; you will probably find something in there that helps you.

23:38.000 --> 23:43.000
And lastly, if you're faced with a problem that you say,

23:43.000 --> 23:46.000
okay, my application might benefit from resumable uploads,

23:46.000 --> 23:49.000
you don't actually have to implement this again.

23:49.000 --> 23:55.000
We have implementations in many different languages that are provided by us or the community.

23:55.000 --> 23:59.000
So, just head over to tus.io if you're really interested in this.

23:59.000 --> 24:01.000
But yeah, that's all I have.

24:01.000 --> 24:03.000
Thank you very much for your attention.

24:04.000 --> 24:07.000
Thank you.

24:11.000 --> 24:14.000
Finally, all my uploads are saved.

24:14.000 --> 24:17.000
I have time for one question if you're taking it.

24:17.000 --> 24:18.000
Sure.

24:18.000 --> 24:20.000
Anyone who has troubles with their uploads?

24:20.000 --> 24:23.000
I have a question.

24:23.000 --> 24:27.000
I have one question, I'll come over with a microphone.

24:27.000 --> 24:34.000
Keep raising your hand.

24:34.000 --> 24:38.000
So, if you have partial uploads, for example,

24:38.000 --> 24:42.000
and let's say it gets interrupted, but it never finishes,

24:42.000 --> 24:47.000
how do you determine, for some clean-up job,

24:47.000 --> 24:50.000
you know, when to sweep them and how would you handle it?

24:50.000 --> 24:51.000
Yeah.

24:51.000 --> 24:55.000
So, when you clean up uploads that are not finished,

24:55.000 --> 24:57.000
kind of depends on your application, right?

24:57.000 --> 25:01.000
There's some cases where you want to give the client just a few seconds to retry,

25:01.000 --> 25:03.000
but there's also cases where you want to give it, like,

25:03.000 --> 25:05.000
for example, a day to complete, right?

25:05.000 --> 25:08.000
So, it depends on that case.

25:08.000 --> 25:12.000
What is important is that if you want to clean up,

25:12.000 --> 25:15.000
you want to make sure that the upload resource

25:15.000 --> 25:17.000
does not have concurrent access.

25:17.000 --> 25:21.000
So, what we usually use is we have a distributed lease or lock mechanism

25:21.000 --> 25:24.000
for that, to make sure that the cleanup job doesn't interfere

25:24.000 --> 25:26.000
with any running requests.

25:26.000 --> 25:30.000
We use a lot of context for making sure that all of the signals

25:30.000 --> 25:33.000
get propagated around, but it's basically similar

25:33.000 --> 25:36.000
that it feeds back into the request context as well.

25:36.000 --> 25:39.000
Like, we close the context and the request stream

25:39.000 --> 25:42.000
when we realize that we want to terminate the upload

25:42.000 --> 25:45.000
because we're deleting the resource.

25:45.000 --> 25:46.000
Thank you.

25:46.000 --> 25:49.000
Thank you so much. [Applause]

