WEBVTT

00:00.000 --> 00:12.520
All right, if people can grab a seat, we're going to get started again, and for the next

00:12.520 --> 00:22.400
talk we have David, talking about XVS, an XDP virtual server — an eBPF load balancing

00:22.400 --> 00:27.400
library. So, there you go.

00:27.400 --> 00:31.680
Hi, I'm David. I'm going to show you how you can build a high-capacity load balancing

00:31.680 --> 00:36.320
application with XDP and Go — I hope it'll go well. A bit of stress, because the cat

00:36.320 --> 00:41.200
sitter locked herself outside the house and my partner had to dash home, so I'm a little

00:41.200 --> 00:51.080
bit on edge. So, Global Radio is the UK's largest commercial broadcaster, and up until

00:51.080 --> 00:56.080
some time ago we were running legacy hardware load balancer appliances handling all of our traffic,

00:56.080 --> 01:01.840
and they'd reached end of life. We host websites and live audio streaming, which is the

01:01.840 --> 01:08.280
bulk of the traffic, audio on demand and some video — over 30 gig a

01:08.280 --> 01:16.040
second during the day at peak. And there's a mix of these high-bandwidth media services,

01:16.040 --> 01:20.720
which are public, and low-bandwidth business services — stuff we do with partners, uploading

01:20.720 --> 01:28.280
ad copy or whatever. But we were encountering an ephemeral port exhaustion bug for NATed services,

01:28.280 --> 01:35.040
which affected both sets of services, because we'd have to reboot the appliances occasionally;

01:35.040 --> 01:41.440
we'd delay it, but it would still kick 100,000 listeners off when it hit, which, you

01:41.440 --> 01:43.040
know, it's not great.

01:43.040 --> 01:48.520
So I thought we could move the bulk of the traffic to a simpler, high-capacity load balancer,

01:48.520 --> 01:54.480
which would leave less work for a more sophisticated, featureful solution — TLS offload and

01:54.480 --> 01:58.720
WAF and that sort of thing — and it should create a more resilient platform: it wouldn't necessarily

01:58.720 --> 02:06.040
see the same bug, and why mix the different classes of service anyway? The media apps — the Icecasts

02:06.040 --> 02:13.040
we use are really quite simple: it's IPv4, layer 4 load balancing only, it's just TCP

02:13.040 --> 02:21.000
and UDP, no HTTP headers or anything like that to take into account. And it's direct server

02:21.000 --> 02:28.920
return at layer 2. So, what is direct server return? It's also known as direct routing,

02:28.920 --> 02:34.920
or — as F5 calls it — nPath. Basically, the packet gets sent by the client, comes in through your

02:34.920 --> 02:41.640
firewall/router, and hits the load balancer. The load balancer just updates the MAC addresses in the ethernet

02:41.640 --> 02:46.160
header to set itself as the source and the backend that it selected as the destination,

02:46.160 --> 02:52.640
and forwards it on to the backend, which can see the destination IP address, which is unchanged —

02:52.640 --> 02:58.600
here, it's going to be 100.1 — and it's got that on its loopback interface, so it

02:58.600 --> 03:06.560
responds to it and replies directly to the client. So you've got this asymmetric route

03:06.560 --> 03:13.720
for your packet. So: the packet is unchanged, apart from decrementing the TTL; there's no port mapping

03:13.720 --> 03:18.440
you can do — you can't translate port 80 to 8080 or anything like that; all backends must share

03:18.440 --> 03:25.680
the same VLAN with the load balancer, because it's layer 2; and the virtual addresses must be on

03:25.680 --> 03:30.320
the backends. There are no ephemeral ports allocated or anything like that — basically you don't allocate

03:30.320 --> 03:34.880
really anything, so you can have millions of connections through a single load balancer, and there's

03:34.880 --> 03:41.680
no state necessary to keep, so you can fail over to different load balancers in the same

03:41.680 --> 03:46.560
way you can with a router — it's essentially a router. We want to make sure that the same

03:46.560 --> 03:54.920
backend is chosen after a change to the pool of backends. Let's skip over this

03:54.920 --> 04:01.120
slide, really, just for brevity, but essentially the packet can take any route to get to the backend —

04:01.120 --> 04:06.320
via either load balancer in this case — and we should be able to fail over between these

04:06.320 --> 04:14.880
at will without breaking connections. So, I considered a few things. Google Maglev: there are

04:14.880 --> 04:19.280
some great papers and blog pieces on it, and it's a really good consistent hashing algorithm for backend

04:19.280 --> 04:24.560
selection — but it's not open source, so that was out. Same thing for Cloudflare's

04:24.560 --> 04:30.320
Unimog: not open source, so discount that. GitHub's GLB Director looked interesting, but to fully

04:30.320 --> 04:36.080
utilise it, you need to run a kernel module on the backends — we didn't want to change anything there —

04:36.080 --> 04:43.440
so yeah, that seemed a bit of a stretch. And it used to be DPDK-based — I don't know if that's

04:43.440 --> 04:51.840
changed now — but the main thing was layer 3: it uses layer 3 tunnelling rather than layer 2 for

04:51.840 --> 04:58.240
reaching the backends, essentially encapsulating the packet in another IP layer, and we didn't

04:58.240 --> 05:02.160
want to do that — you'd have to change all the services which use it, and we really wanted to do a

05:02.160 --> 05:09.520
like-for-like swap. Katran looked a bit more promising, but again it's layer 3 only, and also

05:09.520 --> 05:17.760
it's C++ — and it's really a library, and I don't do any C++ — so that was out. There's

05:17.760 --> 05:24.240
IPVS as well, the traditional one, but I couldn't find a way that you could get away without configuring

05:24.240 --> 05:31.200
the VIP locally on the load balancer itself, which potentially results in exposing services like SSH,

05:32.640 --> 05:38.800
so yeah — and there were also issues with accurately doing health checks against the VIPs on the backend

05:38.800 --> 05:46.640
servers. So I thought I'd try building something with XDP and eBPF. Direct

05:46.640 --> 05:52.080
Server Return is straightforward to implement — updating MAC addresses — and Katran shows that it's

05:52.080 --> 05:56.320
possible to do something like this. We just need the VIP to be configured on the backends'

05:56.320 --> 06:00.320
loopback interface — that's already done, so no change there — and yeah, we just have

06:00.400 --> 06:07.280
to update the MAC addresses, which should be easy to do. How hard could it be? Turns out, not very hard to

06:07.280 --> 06:16.320
do. This is an eBPF XDP program which will do load balancing on a single VIP for all traffic:

06:16.320 --> 06:24.720
define the VIP at the top, provide the list of your backends there, match the packet,

06:25.680 --> 06:29.840
modify the ethernet header, and then forward it on its way. It should be able to do, like,

06:29.840 --> 06:36.400
10 gig line rates, no problem, with this — it's just a couple of calls, and takes a couple of

06:36.400 --> 06:42.720
tens of nanoseconds, maybe, to handle each packet. Obviously this is inflexible: you need to

06:42.720 --> 06:46.960
recompile it and reload it to change backends, and there's no health checking, so you end up

06:46.960 --> 06:53.680
black-holing a lot of traffic if a backend is down. Backend selection is poor — it's just

06:54.720 --> 07:00.080
a naive hash. We should use a suitable hashing algorithm instead: some consistent hashing algorithm

07:00.080 --> 07:05.360
which maximises the chance of hitting the same backend after a pool change — not like a traditional

07:05.360 --> 07:09.520
hash table, modulo the table size, like this, which breaks the majority of connections if anything

07:09.520 --> 07:16.880
changes. So we need a way of making this flexible, and we'll go with eBPF maps. I want to store

07:16.880 --> 07:22.640
service and destination info — VIPs, backend IP addresses, backend MAC addresses, that kind of thing.

07:22.640 --> 07:28.560
We want to map VLAN IDs to interface indexes, so that we can use bpf_redirect_map

07:29.120 --> 07:36.080
and then deal with either native interfaces or VLAN trunks. Backend selection: I want to choose

07:36.080 --> 07:41.280
a destination and track the flow to it, to preserve connections when the pool changes,

07:41.280 --> 07:45.360
and use the consistent hashing algorithm. I want to keep a count of packets,

07:46.000 --> 07:53.040
octets, and any errors for each VIP, service, and backend, for monitoring, for various reasons.

07:53.040 --> 07:57.920
We want to push packets to userspace with a queue, for our own ICMP relaying or

07:57.920 --> 08:06.960
flow state sharing. That's what we need to do in the kernel — the eBPF code. What about userspace?

08:08.880 --> 08:14.560
So we need some control plane functions: to add or remove services and backends, and enable

08:14.560 --> 08:19.600
or disable them, or change their parameters. We want to be able to send a ping to trigger ARP discovery,

08:19.600 --> 08:24.720
and then read the MAC addresses from a map. We can discover local interfaces,

08:24.720 --> 08:30.480
ifindex numbers for them, hardware MAC addresses, and local IP addresses — anything else you might

08:30.480 --> 08:36.640
need to know, like bonding and trunking. We need to ensure that each VIP is present on the backends

08:36.640 --> 08:40.320
to avoid black-holing packets — it's not enough to know that a web server is running; it needs to

08:40.320 --> 08:50.000
be listening on the VIP that you're using. So, the XVS library: it borrows the interface

08:50.000 --> 09:04.320
from Cloudflare's IPVS Go library. To reconcile state, you iterate over the results of

09:04.320 --> 09:10.400
the Services and Destinations functions and ensure they match the desired state:

09:10.400 --> 09:14.080
if something's missing, you create it; if something is there that you don't want, you can remove it;
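
That reconcile pattern can be sketched generically — plain maps stand in for the real client API here; the Backend type and action strings are illustrative only, not the actual XVS types:

```go
package main

import "fmt"

// Backend is a simplified stand-in for an XVS destination: just an
// address and whether it should receive traffic.
type Backend struct {
	Addr    string
	Enabled bool
}

// reconcile drives `current` toward `desired`, returning the actions a
// real implementation would issue via the load balancer's API:
// remove what isn't wanted, create what's missing, update what drifted.
func reconcile(current, desired map[string]Backend) []string {
	var actions []string
	for addr := range current {
		if _, ok := desired[addr]; !ok { // present but not wanted
			actions = append(actions, "remove "+addr)
			delete(current, addr)
		}
	}
	for addr, want := range desired {
		have, ok := current[addr]
		switch {
		case !ok: // wanted but missing
			actions = append(actions, "create "+addr)
			current[addr] = want
		case have != want: // present but drifted (e.g. drain it)
			actions = append(actions, "update "+addr)
			current[addr] = want
		}
	}
	return actions
}

func main() {
	current := map[string]Backend{
		"10.1.2.3": {Addr: "10.1.2.3", Enabled: true},
		"10.1.2.4": {Addr: "10.1.2.4", Enabled: true},
	}
	desired := map[string]Backend{
		"10.1.2.4": {Addr: "10.1.2.4", Enabled: false}, // drain this one
		"10.1.2.5": {Addr: "10.1.2.5", Enabled: true},  // new backend
	}
	fmt.Println(reconcile(current, desired))
}
```

Running reconcile a second time yields no actions — the loop is idempotent, which is what makes it safe to run on every configuration change.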

09:14.400 --> 09:21.680
and you can update, to enable or disable backends and other things. I added some more functions to

09:21.680 --> 09:28.880
help with health checks and flow state sharing. There's an example program here:

09:29.920 --> 09:35.840
you just pull in the library and read the command line arguments, create an instance of

09:35.840 --> 09:43.840
a client, and create an instance of a service — in this case it's port 80, protocol TCP, so that's an

09:43.840 --> 09:52.160
HTTP server — and then add backends to the service. That's it. It requires libbpf-dev for linking,

09:52.160 --> 09:58.000
as it uses cgo to marshal calls to the libbpf API. That's enough to set up a service and

09:58.000 --> 10:05.600
distribute traffic, but what about health checks? As stated previously, the server needs to respond

10:05.600 --> 10:10.960
on the VIP, not just its own IP address. If we send traffic for a VIP to a backend but the VIP is not

10:10.960 --> 10:15.360
configured, then the packets will be black-holed, so we need a way of probing the VIP.

10:15.920 --> 10:22.720
We create a map of virtual IP and backend real IP address pairs to a NAT address. We'll send

10:22.720 --> 10:31.280
a packet to, say, NAT address 255.0.2 via a special veth interface that we create, and we can have XDP match the

10:31.280 --> 10:39.760
packet and forward it to the MAC address of the real server — 10.1.2.4 — but with the VIP address of —

10:39.760 --> 10:46.960
what does it say — 100.1. Similarly for 255.0.3: it sends it to the same server, but we'll send it

10:46.960 --> 10:56.320
to the 100.2 VIP address. So yeah, we need to create an interface with a route to the

10:56.320 --> 11:03.120
NAT addresses. To determine the NAT address for a virtual IP and real IP pair, you use the

11:03.120 --> 11:11.200
client's NAT function. We can then use the standard networking stack — e.g. Go's net and net/

11:11.520 --> 11:19.200
http libraries — to make requests; make a GET request. However, the host name of the service

11:20.560 --> 11:27.440
will be set to the NAT IP address, so if you want to make requests that rely on the host

11:27.440 --> 11:34.240
name being set correctly — virtual hosting, or TLS Server Name Indication — then we can override

11:34.640 --> 11:41.680
Go's http.Transport DialContext, which will always dial the desired NAT IP address no

11:41.680 --> 11:48.000
matter what the URL you give it is, and each backend in each service gets its own client —

11:48.000 --> 12:00.080
its own HTTP client. So, a minimum viable product for a load balancer: it's got to have health

12:00.080 --> 12:05.280
checks for each individual service; it's got to advertise VIPs to neighbouring routers via

12:05.280 --> 12:11.520
BGP — the simplest protocol to implement; this is known as route health injection. I want it to surface

12:11.520 --> 12:17.200
metrics and stats, e.g. with Prometheus; do a bit of logging locally, and maybe ship to

12:17.200 --> 12:23.920
Elasticsearch; and also perhaps some kind of alerting via Teams or Slack webhooks for

12:24.000 --> 12:31.680
VIP failures. Flow state sharing would be nice to support, for preserving long-running connections

12:31.680 --> 12:37.680
when failing over between load balancers in the face of backend server pool changes,

12:37.680 --> 12:42.880
where you don't know where that connection was going to. And I need some way to configure the service —

12:42.880 --> 12:53.360
JSON, yeah, or something. So, there's an app for that: the VC5 application, which is written using the

12:53.360 --> 12:59.440
XVS library. It's HAProxy-influenced, as you can probably tell. Health checks are indicated

12:59.440 --> 13:04.480
as you'd expect: VIPs, services, and backends in green when they're OK, red if they're failing,

13:04.480 --> 13:12.400
and grey if they're administratively disabled. It speaks BGP — at the top right there, it's talking

13:12.400 --> 13:17.440
to the local BIRD instance in this case, but you can have it speak to any number of routers.

13:18.400 --> 13:24.560
Metrics: you can see the stats summarised in the tables, and there's a metrics link at the top,

13:25.280 --> 13:31.440
which is a link to a Prometheus endpoint. There's a box in the centre of the screen for logging,

13:32.960 --> 13:40.160
and this — in much more detail — can also be sent to Elasticsearch. At the top, the status link

13:40.160 --> 13:46.400
is a JSON endpoint, from which the view is built, so you can change it — it's all done via JavaScript.

13:47.040 --> 13:55.120
If you need to modify configuration, it's a YAML file — it's very compact, and

13:55.680 --> 14:02.560
should cover 95% of cases in this fairly trivial manner. There's a config.pl script

14:02.560 --> 14:08.880
which expands the YAML to a much more explicit JSON format, so you can see all the health checks

14:08.880 --> 14:13.600
explicitly defined in there. There's only one of the two services shown, because it's much more

14:13.600 --> 14:21.200
verbose. But if we need more control, we can define custom checks under the policy entries there.

14:22.240 --> 14:29.440
Standard ports can be overridden — "http" here is short for "80/http" — so you could use "8080

14:29.440 --> 14:34.560
/http" if you want a different port. And if you're not keen on the YAML, then you could write

14:34.560 --> 14:39.280
your own script to process something in any format you like, or use a scripting language,

14:39.280 --> 14:44.400
or roll your own DSL, whatever. I run the YAML-to-JSON stage before deploying with

14:44.400 --> 14:51.920
Ansible, to catch any syntax errors, and it's all therefore kept in a Git repository —

14:51.920 --> 15:01.120
a pretty infrastructure-as-code setup. We started deploying it in 2024, ran it for a year on

15:01.120 --> 15:06.480
some VMs running small services — test services, obviously, a static website, that kind of thing —

15:07.440 --> 15:13.600
and once we were confident, we started running it in anger on tier 1 services, at the start of January,

15:13.600 --> 15:22.960
sort of, 2025. It runs on pretty modest servers — yeah, 32-core, 32 gig, nothing big,

15:24.720 --> 15:30.880
pretty inexpensive — and one server can handle the full production load at low single-digit CPU,

15:31.600 --> 15:40.640
partly because of the DSR: you're mainly only getting ACKs back from the clients, and

15:41.520 --> 15:51.120
the bulk of the data going out can be up to 20 or 30 times the ingress load. It runs on some Intel

15:51.120 --> 15:58.720
cards, X710s, with VLAN trunks running over MLAG — so, aggregated interfaces across

15:58.720 --> 16:06.320
a pair of switches for resilience. We created new VIPs and repointed DNS to migrate

16:06.320 --> 16:10.960
some services; some we could just cut over — shut it down at one of our colos and bring it up again,

16:11.760 --> 16:19.360
and then flip over to the other side; all quite simple. This is a week in April last year —

16:19.360 --> 16:23.680
the Easter break, so slightly lower traffic than normal. You can see where upgrades took place:

16:23.680 --> 16:30.400
the orange area gives way to blue, and the brown area gives way to green — possibly a

16:30.400 --> 16:37.120
questionable colour choice, that one — and that's where we failed over between load balancers

16:37.120 --> 16:42.080
when we upgraded them, without any loss of traffic, so that was nice. The

16:43.120 --> 16:47.360
large spikes there are related to some very popular podcasts that we host —

16:48.080 --> 16:54.160
I think they're automated downloads from one of our mobile apps, so some kind of self-DoS attempt.

16:56.000 --> 17:00.640
Sorry — even distribution of traffic at the backends on the left here, so yeah, that's the

17:00.640 --> 17:07.600
hashing algorithm working nicely. This gave me an idea — remember this old xkcd strip,

17:07.600 --> 17:13.120
the map of the internet? It's a Hilbert space-filling curve, which maps a linear address onto a 2D

17:13.360 --> 17:17.200
fractal, essentially. It could be an interesting way of visualising the traffic now that I'm

17:17.200 --> 17:23.520
seeing all of it — traffic prefixes from across the net. Each pixel here is a /

17:23.520 --> 17:31.600
20 prefix. Most of the traffic is from European IPs, quite concentrated, as you would expect, us being European,
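
For the curious, the standard Hilbert distance-to-coordinates conversion is small; this sketch (my own illustration, not the actual visualisation code) maps each /20 prefix to a pixel on a 1024×1024 grid:

```go
package main

import "fmt"

// d2xy converts a distance d along a Hilbert curve filling an n×n grid
// (n a power of two) into x,y coordinates — the standard iterative
// algorithm. Nearby values of d land at nearby pixels, which is why
// adjacent IP prefixes cluster into blobs on the map.
func d2xy(n, d int) (x, y int) {
	t := d
	for s := 1; s < n; s *= 2 {
		rx := 1 & (t / 2)
		ry := 1 & (t ^ rx)
		if ry == 0 { // rotate the quadrant as needed
			if rx == 1 {
				x, y = s-1-x, s-1-y
			}
			x, y = y, x
		}
		x, y = x+s*rx, y+s*ry
		t /= 4
	}
	return
}

// pixelFor maps an IPv4 address to a pixel on a 1024×1024 map where
// each pixel is one /20 prefix (2^20 prefixes = 1024*1024 pixels).
func pixelFor(a, b, c, d byte) (int, int) {
	ip := uint32(a)<<24 | uint32(b)<<16 | uint32(c)<<8 | uint32(d)
	return d2xy(1024, int(ip>>12)) // the top 20 bits index the /20
}

func main() {
	// Two adjacent /20s land on adjacent pixels.
	x1, y1 := pixelFor(192, 0, 2, 1)
	x2, y2 := pixelFor(192, 0, 18, 1)
	fmt.Println(x1, y1)
	fmt.Println(x2, y2)
}
```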

17:32.880 --> 17:39.360
and the large white patches, where there's lots of traffic, are generally the UK ISPs;

17:39.840 --> 17:46.080
some traffic is from USA ARIN addresses, but some of those might be reallocated prefixes,

17:46.080 --> 17:52.080
I don't know. I'd imagine that if we were getting a DDoS, you'd see a much more even covering

17:52.080 --> 17:59.040
of points, so I thought this could be used to visualise that, and you could use the stats reported

17:59.040 --> 18:03.760
over time to calculate the probability of a packet being legitimate, in the case of an ongoing

18:03.760 --> 18:11.360
DDoS — gated with some sort of comparison to, say, the 95th percentile of regular traffic —

18:11.360 --> 18:19.200
though I don't know how you'd do the same for IPv6. Once this was working, I also

18:19.200 --> 18:28.560
added layer 3 and IPv6 support. Layer 3 is tunnelling traffic over IP — encapsulated — and it

18:29.040 --> 18:37.360
supports the IP-in-IP, Generic Routing Encapsulation, Foo-over-UDP, and Generic UDP Encapsulation

18:37.920 --> 18:51.920
standards. IPv4 or IPv6 addresses may be used for VIPs or real IP addresses, and you can have any

18:51.920 --> 18:58.960
combination of those — so you can have an IPv6 VIP backed by IPv4 backends, or vice versa,

18:58.960 --> 19:04.640
any combination. You tell the library what gateway to use, because it doesn't use the standard

19:04.640 --> 19:11.200
Linux routing table, but that's about it — and it's a lot simpler to support layer 3

19:11.200 --> 19:22.640
than having multiple VLANs with layer 2. We had a few issues: the X710 had problems getting

19:22.640 --> 19:29.680
media negotiation to work initially. A colleague did find some systemd parameters that

19:29.680 --> 19:36.160
seemed to help, although I think it was mostly fixed by just doing a kernel upgrade. I tried the

19:36.880 --> 19:48.320
native driver mode for the vmxnet3 VMware driver, but there was a bug in that which had

19:48.320 --> 19:56.160
the wrong pointer for the data end, but again, a kernel update fixed that, so it was all

19:56.160 --> 20:06.560
easy enough. Adding the layer 3 support increased the size of the program a lot, so the

20:07.360 --> 20:14.560
verifier wasn't very happy with me, and I had to use tail calls for some of the packet parsing there.

20:16.560 --> 20:24.160
I can't get kernel termination of FOU or GUE over IPv6 — I don't see a way to set that up; if anyone knows

20:24.160 --> 20:33.440
how, that would be amazing. Future development: the config, which as I say is currently stored in Git as

20:33.440 --> 20:38.560
YAML, could be centralised in a database with a nice UI on it, then surfaced as JSON at

20:38.560 --> 20:45.360
a URL endpoint, and have the load balancers poll it, to make it a bit more self-service;

20:45.920 --> 20:50.960
teams could be given BGP peers to speak to, and a VIP range, and they could manage their

20:50.960 --> 20:58.800
own load balancers — it's just a daemon you run. We could use service discovery, so have backends

20:58.800 --> 21:03.920
register to something like ZooKeeper or Consul when they're ready to serve traffic, and then

21:03.920 --> 21:07.600
the load balancers could discover them automatically, so you don't need to update the config.

21:07.600 --> 21:16.640
An eBPF utility to terminate some of the layer 3 protocols might be handy —

21:17.680 --> 21:23.520
you currently require iptables for firewalling, to make sure traffic only goes to the VIPs,

21:24.160 --> 21:29.920
and you could do all of that instead with an eBPF program, so that would be useful —

21:29.920 --> 21:37.200
and that would also solve the problem that I can't get FOU and GUE to work with IPv6.

21:38.080 --> 21:43.840
Yeah — and we can do stuff at our own pace now; we're not worried about switching vendors and

21:43.840 --> 21:49.840
migrating all the config again, or any extra licenses if we want to deploy more instances,

21:51.920 --> 21:57.040
and copying the Cloudflare IPVS interface turned out to be quite good, as you can basically adapt it

21:57.040 --> 22:02.240
to use IPVS instead of XDP for some NAT-based load balancing.

22:03.200 --> 22:09.040
So quick demo, which I think will work.

22:11.040 --> 22:17.920
All right, so there's the balancer program — it's in Go, simple — let's look at it.

22:18.800 --> 22:25.440
Yeah, like before: essentially you pull in the library and parse your command line arguments,

22:26.320 --> 22:32.080
create a new client, create a service on that client — port 80, TCP again —

22:33.760 --> 22:38.240
then for each of the backends you've got for it, you create the destinations —

22:40.400 --> 22:44.720
and there it's using GRE, so that's a layer 3 service

22:44.720 --> 22:51.200
— layer 3 backends, rather. Loop for 60 seconds displaying stats, and then clean up;

22:51.840 --> 22:58.640
it's very easy — just remove each service that's returned from the Services function.

23:01.920 --> 23:06.480
Yeah — so every second we print out some global stats, print out the stats for the VIPs,

23:07.280 --> 23:16.800
then for each service, and each backend in each service, we get the NAT address for that backend,

23:17.520 --> 23:25.280
make a query to the NAT address, and depending on the result we can either enable or disable the backend

23:26.320 --> 23:30.960
and update it, so that gets written back to the kernel, and that's it.

23:33.680 --> 23:42.800
Pull down the library — and again, that's all done — and then, yes, we just

23:42.800 --> 23:50.720
run it with the interface name, the VLAN description that we're on, the VIP address we're going to use,

23:50.720 --> 23:58.800
and then a list of the backends. Let me set that going — now, it says immediately that one of the backends

23:58.800 --> 24:04.400
isn't responding; this is because it hasn't got the VIP set on it yet. So if we send traffic to that,

24:04.400 --> 24:16.080
we'll see it just goes to backend one. And if we then add the VIP to the backend,

24:17.600 --> 24:22.000
we should see that almost immediately it comes up and it's healthy, and then if we send more traffic,

24:22.800 --> 24:29.440
it will go to both backends — so it's being balanced; check, a bit more traffic, kind of thing.

24:30.160 --> 24:36.080
Yes — so this will handle huge amounts of traffic; one server could conceivably

24:36.080 --> 24:42.400
handle potentially 100 gig of backend services, because of the asymmetric nature of DSR,

24:43.760 --> 24:48.880
and that's it,

24:48.880 --> 25:03.280
so there are links to the programs and stuff, and back to the MC for questions. Thank you, that was a really great

25:03.280 --> 25:09.440
talk, just a quick question, basically around high availability, particularly around two points,

25:09.440 --> 25:16.000
I think you've kind of touched on it a little bit. One: basically, if you're compiling a new program,

25:16.160 --> 25:22.080
is that going to disrupt things when you push it? Also, if you have two load balancers,

25:22.080 --> 25:26.640
what are your kind of strategies there — so, yeah, taking one out of service for maintenance and so on?

25:26.640 --> 25:33.840
So, there's nothing particularly fancy about loading new eBPF code that would disrupt it,

25:34.640 --> 25:39.760
and because it works in an equal-cost multi-path kind of setup and there's

25:40.640 --> 25:46.000
state table sharing, then failing over from one to another will probably, you know,

25:46.000 --> 25:51.840
maintain pretty much all the connections — so you just shut down one and restart it with the new code.

25:53.680 --> 25:58.080
thank you,

25:58.080 --> 26:13.680
Great talk — we also ran into the vmxnet3 bug, I think, that you mentioned. Yeah, I was wondering:

26:13.680 --> 26:20.480
you mentioned you're using a map for the gateway lookup — is there any reason you didn't use the

26:20.480 --> 26:29.040
kernel for the lookup? Yeah, for the gateway lookup — it was just how I did it at the time;

26:29.040 --> 26:35.920
I think we don't necessarily want to use the routing table, you know — we just have a default

26:35.920 --> 26:41.360
route on one administrative interface, and then the actual services are done on completely separate

26:41.440 --> 26:48.640
VLANs which don't have a route out — if I understood the question correctly; I'm not sure.

26:50.960 --> 27:01.680
Yeah, that's fine. Thanks for your talk.

27:02.640 --> 27:10.880
How do you maintain the flow state sharing across all the layer 4 load balancers —

27:10.880 --> 27:18.080
for example, if one goes down, or if one of the servers or services goes down?

27:18.080 --> 27:24.480
It's done relatively lazily, because with the hashing algorithm everything

27:24.480 --> 27:27.920
kind of sees the same stuff and, you know, generally you don't break connections if a back-end

27:27.920 --> 27:34.880
server goes down; it's more for long-lived connections. So, just every 30 seconds or so,

27:34.880 --> 27:41.440
we kind of update the flows, multicast them out to the other load balancers, and they can
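
[As a sketch of what one shared flow record might look like on the wire — the field layout here is invented for illustration, not the actual XVS format:]

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"net/netip"
)

// flowEntry is a hypothetical fixed-size record for sharing one tracked
// flow with peer load balancers: who the client is, which service they
// hit, and which backend the hash pinned the flow to.
type flowEntry struct {
	SrcIP   [4]byte
	DstIP   [4]byte // the VIP
	SrcPort uint16
	DstPort uint16
	Backend [4]byte // the real server the flow is pinned to
}

// marshal produces a 16-byte big-endian record; many of these could be
// batched into a single multicast UDP datagram.
func (f flowEntry) marshal() []byte {
	var buf bytes.Buffer
	binary.Write(&buf, binary.BigEndian, f) // fixed-size struct: cannot fail
	return buf.Bytes()
}

func unmarshal(b []byte) (flowEntry, error) {
	var f flowEntry
	err := binary.Read(bytes.NewReader(b), binary.BigEndian, &f)
	return f, err
}

func main() {
	f := flowEntry{
		SrcIP:   netip.MustParseAddr("203.0.113.9").As4(),
		DstIP:   netip.MustParseAddr("192.0.2.1").As4(),
		SrcPort: 40000,
		DstPort: 80,
		Backend: netip.MustParseAddr("10.1.2.4").As4(),
	}
	wire := f.marshal()
	g, err := unmarshal(wire)
	fmt.Println(len(wire), err == nil, g == f) // → 16 true true
}
```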

27:41.440 --> 27:47.760
write the flows to their tables. Okay — and for the encapsulation, do you enable

27:47.840 --> 27:54.720
jumbo frames or something in the network, for the increased size of the packet? Not at the moment — it will handle

27:56.000 --> 28:00.880
sending back an ICMP packet-too-big kind of response if it needs fragmentation,

28:00.880 --> 28:06.320
but we've not really switched over to layer 3 yet, so that will be a kind of ongoing

28:06.400 --> 28:12.080
thing, and we'll see how it works — but yeah, it should be okay. Thank you.

28:18.080 --> 28:21.840
Right — so, you mentioned already some load balancers that exist,

28:23.280 --> 28:27.680
different projects, and yours is written in a way which I think is great, because that means

28:27.680 --> 28:32.080
everybody can basically reuse it. So my question would be: do you actually have

28:33.040 --> 28:39.680
external people contributing to or using your project for now? I think a couple of people do —

28:39.680 --> 28:47.600
I've only seen a couple of issues reported, so some people do use it, but I

28:47.600 --> 28:51.840
can't say for sure who does, I don't know. Right, that's a good start.

28:53.920 --> 28:57.760
A first step, maybe, maybe. Yeah — if you do use it, let me know;

28:58.080 --> 29:02.480
I try to be cautious about making changes to the code and breaking stuff, but —

29:04.080 --> 29:05.840
good. Any other questions?

29:09.360 --> 29:13.760
I think we're good. Thank you so much. Thank you.

