WEBVTT

00:00.000 --> 00:17.680
Hello everyone. So, the title is a bit of a mouthful; basically, from the name of the software

00:17.680 --> 00:26.160
project, what you should remember is CVMFS. Before we get into the content, we'd like

00:26.160 --> 00:39.680
to suggest you visit our booth session, and the Open Source at CERN stand at K level 1, and

00:39.680 --> 01:03.440
now the floor is George's. Is this all right? Perfect. Okay, so I think I need to do

01:03.440 --> 01:17.080
this first here, right here. Okay, so hello, I'm George,

01:17.080 --> 01:20.800
and I'm a CERN fellow

01:20.800 --> 01:29.480
working in the EP-SFT group since 2025, especially on the CVMFS project, and to

01:29.480 --> 01:35.560
tell you: I'll give a brief introduction to CVMFS, and I'll talk about how we can benefit

01:35.560 --> 01:45.000
from CVMFS in containerized workflows. So something that I was not actually aware of before

01:45.000 --> 01:49.560
landing at CERN is the amount of data that experiments in high-energy physics and

01:49.560 --> 01:55.960
particle physics are producing: the CERN Data Centre alone is storing more than 30

01:55.960 --> 02:03.640
petabytes of data per year, plus the dedicated software to process them, and of course this

02:03.640 --> 02:10.440
software cannot live only locally; it has to be distributed, in libraries, all over

02:10.520 --> 02:21.320
the world, and this is the main mission CVMFS was created for. I'll just use a small time

02:21.320 --> 02:28.520
slot here to mention EESSI; I guess most of you will have already heard about it:

02:28.520 --> 02:34.120
it's the European Environment for Scientific Software Installations, whose mission is to provide

02:34.120 --> 02:40.360
a uniform experience for people that are working with scientific software, to have

02:40.360 --> 02:46.280
a transparent experience regardless of the underlying system they are working on. So probably

02:46.280 --> 02:55.080
you're familiar with the stack; CVMFS is in the file system layer of this stack. Actually, they

02:55.080 --> 03:03.400
have some pretty good presentations in the HPC devroom; I totally recommend attending

03:03.400 --> 03:15.080
there. So, CVMFS: what is it? It's a read-only file system in user space, and I think a good

03:15.080 --> 03:23.160
metaphor for it is an on-demand streaming service, but for scientific software.

03:23.160 --> 03:30.920
So software in CVMFS is organized in repositories that in principle do not live

03:31.080 --> 03:38.120
on the user's machine; from the user's perspective, CVMFS provides a normal

03:38.120 --> 03:46.040
POSIX browsing and accessing experience for those files. So we can do listing or

03:46.040 --> 03:52.120
data access, and the take-home message here is that those files are not present

03:52.120 --> 03:57.240
on the user's machine; they live elsewhere, and they're being downloaded on demand.

04:01.400 --> 04:06.840
But how does this work? So CVMFS is using FUSE under the hood,

04:08.040 --> 04:16.280
which comes with a kernel module and a user-space library. So every file system

04:16.280 --> 04:23.800
access is forwarded through the VFS to the FUSE kernel module, and from there to user space,

04:24.760 --> 04:35.080
and CVMFS implements all the necessary read-only system calls to let the user

04:35.080 --> 04:42.440
access files. So in principle, if the file is present in the local cache, it will be used directly;

04:43.080 --> 04:54.440
if not, it will be fetched over the network. Okay, but how does the data become available to the
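
The read path described above can be sketched with a toy cache-or-fetch function (purely illustrative stand-ins, not CVMFS code; a local directory plays the role of the HTTP server):

```shell
# Toy version of the client's read path: serve from the local cache if
# present, otherwise "download" first (simulated here with a local copy).
mkdir -p remote cache
echo "payload" > remote/data.bin        # stand-in for the remote server

fetch() {
    if [ -f "cache/$1" ]; then
        echo "cache hit"
    else
        echo "cache miss, fetching"
        cp "remote/$1" "cache/$1"       # the real client would do an HTTP GET
    fi
    cat "cache/$1"
}

fetch data.bin   # first access: miss, the file is downloaded
fetch data.bin   # second access: served from the local cache
```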

04:54.440 --> 05:03.960
user? So we start from a normal read-write file system, and then we follow a process called

05:04.440 --> 05:13.720
publishing, and what's happening under the hood is compression, chunking, and we use

05:13.720 --> 05:20.120
Merkle trees to store our data in a content-addressable format, which basically means that in

05:20.120 --> 05:27.240
order to access our data, we don't use the path or the name; we just use the content,

05:27.960 --> 05:35.240
and this allows for deduplication, which practically means that the same file or chunk

05:35.240 --> 05:42.360
is never stored twice, which saves a lot of logical space. We'll get an overview of the order

05:42.360 --> 05:47.000
of magnitude of that later on, talking about the unpacked repo.
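
Content addressing and the deduplication it enables can be illustrated with a toy object store (sha256 as the content hash; the layout is illustrative, not CVMFS's actual storage format):

```shell
# A file's storage name is the hash of its bytes, so identical content
# always maps to the same object and is stored only once.
mkdir -p objects
put() {
    hash=$(sha256sum "$1" | cut -d' ' -f1)
    [ -e "objects/$hash" ] || cp "$1" "objects/$hash"   # skip if already stored
    echo "$hash"
}

echo "same bytes" > file_a
echo "same bytes" > file_b     # different name, identical content
put file_a
put file_b                     # same hash: deduplicated
ls objects | wc -l             # only one object is stored
```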

05:47.320 --> 05:58.360
Okay, then once we have the data in a content-addressable format on our server,

05:58.360 --> 06:09.880
they become accessible to the user's client just over HTTP, and I guess it's pretty intuitive

06:09.960 --> 06:20.600
that a user's storage is not unlimited, so we use an LRU cleanup policy to respect the available space.
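
The LRU cleanup policy can be sketched as follows (a conceptual toy using file access times and GNU ls/du/touch; CVMFS keeps its own bookkeeping rather than relying on atimes):

```shell
# Evict the least-recently-accessed cache files until the total size
# fits the quota.
mkdir -p lru_cache
for f in a b c; do head -c 1024 /dev/zero > "lru_cache/$f"; done
touch -a -d '2 hours ago' lru_cache/a       # make 'a' the least recently used

QUOTA=2048
total() { du -cb lru_cache/* | tail -n1 | cut -f1; }
while [ "$(total)" -gt "$QUOTA" ]; do
    lru=$(ls -tu lru_cache | tail -n1)      # oldest access time sorts last
    rm "lru_cache/$lru"
done
ls lru_cache                                # 'a' was evicted first
```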

06:23.480 --> 06:31.400
So, some numbers here. This is the evolution of the four principal repositories

06:31.480 --> 06:40.440
CERN is hosting with CVMFS, plus the unpacked.cern.ch repo that we'll talk about later.

06:42.520 --> 06:49.880
At the moment, CERN is distributing over two petabytes of data using CVMFS,

06:50.920 --> 06:59.320
more than 8,000 container images and around 290 repositories, and something that came

06:59.400 --> 07:05.240
along the way is that besides being a very good tool for software distribution,

07:05.240 --> 07:10.840
CVMFS has also turned out to be a nice tool for software preservation.

07:14.280 --> 07:20.120
Okay, so a small parenthesis here to talk about container images, and

07:20.120 --> 07:25.720
how CVMFS is used to facilitate container image distribution.

07:26.600 --> 07:33.000
So in principle, container images are file system layers that are stored in an unindexed

07:34.360 --> 07:40.600
file archive, and this practically means that even if we need a specific

07:40.600 --> 07:44.120
tiny portion of the archive, we cannot access it directly; we need to fetch it

07:44.120 --> 07:53.960
in its entirety, decompress it, and then access it like a usual file system. Usual container images in

07:53.960 --> 08:03.320
the physics sphere are in the order of magnitude of gigabytes, and we end up using a tiny

08:03.320 --> 08:09.080
fraction of them. So we'd like to have a way to be able to access only the essential parts.
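
The whole-archive problem can be felt with a toy tar layer: even to get one small file, the compressed stream has to be decompressed up to that member (plain tar/gzip here; real image layers behave the same way):

```shell
# Build a toy "layer" where a small file sits behind a large one.
head -c 100000 /dev/zero > big.bin
echo "hello" > small.txt
tar czf layer.tar.gz big.bin small.txt

# Extracting the single small member still forces tar to read and
# decompress the stream past big.bin first; gzip has no random access.
rm small.txt
tar xzf layer.tar.gz small.txt
cat small.txt
```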

08:09.960 --> 08:20.600
So solutions already exist: containerd allows for snapshotter

08:20.600 --> 08:27.160
plugins that do exactly that: lazy pulling, pulling only the essential

08:27.160 --> 08:34.040
parts of the archive. There are many alternatives; there's eStargz,

08:35.000 --> 08:43.960
which basically introduced a different format to store container images.

08:45.000 --> 08:54.200
Also, CVMFS comes with a dedicated snapshotter, and it provides a way to publish container

08:54.200 --> 09:00.680
images in an unpacked format, which is also compressed and de-duplicated. It works with Docker

09:00.760 --> 09:14.280
Apptainer and Podman. Here is also an overview of the main repository at CERN that is providing

09:14.280 --> 09:21.640
unpacked container images: unpacked.cern.ch. You can see that it's been broadly adopted from

09:21.640 --> 09:30.360
2022 until now, if it's not well visible. We currently have around 12 terabytes of

09:31.000 --> 09:41.560
compressed and de-duplicated images, more than 120 million files, which would correspond to more

09:41.560 --> 09:51.080
than a petabyte of logical data if they were not de-duplicated. So I'll pass the stage here to

09:51.080 --> 10:01.800
my colleague, who has some nice numbers. 100x is the compression rate you want, let's be honest; that's cool.

10:21.640 --> 10:37.640
So this is a practical demonstration of lazy pulling of container images, which George has just

10:37.640 --> 10:56.760
introduced. So this is all it takes on a fresh CVMFS install to actually use this unpacked

10:56.760 --> 11:04.440
cern.ch repository. You shouldn't be using it in production, but you can do that. So basically we

11:04.440 --> 11:11.000
issue a podman command; it's just an example that you can use it with podman. Nothing special with

11:11.000 --> 11:19.400
podman to do that. So you just give it some temp dir to mount things through FUSE, then you

11:19.400 --> 11:32.520
specify the CVMFS mount point location, which includes the path to the image directory, and then you

11:32.520 --> 11:40.520
give it a command. So here we just load a particular module with Python. And it took us 25 seconds,

11:41.240 --> 11:50.120
and in the end there are 600 megabytes in our cache, but the whole image is over one

11:50.120 --> 12:00.680
gigabyte, so we saved downloading over 400 megabytes. This, by contrast, is just normal pulling.

12:03.560 --> 12:09.880
So even on a very fast gigabit link connection, it took over a minute to download it. And in fact,

12:10.200 --> 12:20.200
it can end up like this. So this is also part of the motivation behind lazy pulling: to avoid

12:20.200 --> 12:27.320
situations like this, where you have to go and clean up caches or resize your machines.

12:27.800 --> 12:39.080
So there exists upstream documentation for lazy pulling of containers

12:40.120 --> 12:50.360
with CVMFS; check the links in the slides. KubeCon 25 had a great talk by

12:51.320 --> 13:00.280
Valentin; the slides and video of that are available. So here we want to introduce

13:00.280 --> 13:07.000
another new set of tools, modeled after popular file utilities, which offer the convenience of

13:07.000 --> 13:15.400
direct effect on a CVMFS repo. So they can be seen as easier-to-use tools. They might

13:15.400 --> 13:23.000
provide speed-ups in some modes. They can help avoid running out of disk space in a scratch area

13:23.000 --> 13:30.360
when you are updating CVMFS repo content. So this will become clearer in a bit. So the usual way

13:30.360 --> 13:38.600
to change the contents involves three stages. So you issue the command cvmfs_server transaction,

13:39.560 --> 13:46.520
you make changes in a particular directory, and then you issue the command cvmfs_server

13:46.520 --> 13:54.280
publish, and after that the changes become available to clients who mount your repository.
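
The three stages just described look like this on a release-manager machine ('myrepo.cern.ch' is a placeholder repository name; a sketch of the standard workflow, not runnable without a configured repository):

```shell
# 1. Open a transaction: the repository mount becomes writable.
cvmfs_server transaction myrepo.cern.ch

# 2. Make changes under the repository mount point.
cp -r new-software-release/ /cvmfs/myrepo.cern.ch/sw/

# 3. Publish: content is hashed, compressed, chunked and uploaded;
#    clients then see the new revision.
cvmfs_server publish myrepo.cern.ch
```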

13:56.040 --> 13:57.400
So this is how it looks.

13:57.640 --> 14:17.320
So usually you have a read-only mount of CVMFS, and with the transaction command the CVMFS mount

14:17.320 --> 14:25.880
stays read-only, but your overlay is mounted read-write, so you can then jump into this directory

14:25.880 --> 14:36.920
and make your changes. Overlayfs then picks them up, so they appear in this directory, but

14:37.640 --> 14:45.560
in fact CVMFS itself still doesn't have this content. The directory handled by

14:37.640 --> 14:45.560
overlayfs is called the scratch dir; it holds the changes temporarily. So then we issue the publish command, and

14:54.840 --> 15:13.080
the content is uploaded into the primary storage. So here we add a four-gigabyte file to the S3 backend of the

15:13.160 --> 15:23.240
first repository. S3 doesn't necessarily mean Amazon; many, many implementations work.

15:26.120 --> 15:30.040
So these are the three stages we've just seen.

15:30.120 --> 15:47.080
And now the new tool, cvmfs rsync, which works slightly differently. So we uploaded it;

15:47.080 --> 15:56.680
that is, we issued the command, and then it's there, and it never goes into the scratch area.

15:57.320 --> 16:09.400
So how does that work? It uploads files to S3 directly, and it pushes metadata changes to the CVMFS gateway

16:09.400 --> 16:19.720
host. This is a separate service which might be present; in simple

16:19.720 --> 16:25.960
setups it might not be in use at all. And then it waits for the new CVMFS revision to become

16:25.960 --> 16:35.320
visible from the client's standpoint. So, speaking of S3 implementations: many protocol implementations

16:35.320 --> 16:44.280
work. I know Ceph works; I checked that Garage works; MinIO and Azure work, but with a quirk.
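
The flow of the new tool, as described (direct upload, gateway metadata push, then waiting for the revision), can be mimicked with local stand-ins (every name here is illustrative; this is not the tool's real interface):

```shell
# Stand-ins: a directory plays the object store, a counter file plays
# the revision number the gateway publishes.
mkdir -p storage
echo 1 > storage/revision

upload()   { cp "$1" storage/; }          # stands in for the direct S3 PUT
notify()   { echo $(( $(cat storage/revision) + 1 )) > storage/revision; }
wait_rev() { until [ "$(cat storage/revision)" -ge "$1" ]; do sleep 0.1; done; }

echo "big payload" > bigfile
upload bigfile      # the data never touches a scratch area
notify              # the "gateway" publishes the new revision
wait_rev 2          # client-side wait until it is visible
echo "revision $(cat storage/revision) visible"
```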

16:44.840 --> 16:47.880
So again.

17:02.880 --> 17:04.880
Okay.

17:04.880 --> 17:15.920
Okay, so: was this visible?

17:20.920 --> 17:25.600
So this exists at the moment as a pull request, but you know where

17:25.600 --> 17:28.000
it is now.

17:28.000 --> 17:33.360
So, time for questions. And again, don't forget these two other related events if you want

17:33.360 --> 17:34.360
to know more.

17:35.360 --> 17:36.360
So questions.

17:36.360 --> 17:40.360
Quite okay.

17:40.360 --> 17:41.360
Okay.

17:47.360 --> 17:53.360
The unpacked repo has grown quite big, even with all the deduplication and compression.

17:53.360 --> 18:00.360
There are a couple of problems in practice with managing it; that was the question.

18:00.360 --> 18:01.360
Yes.

18:01.360 --> 18:07.360
So: the size of unpacked is quite big, and do we have troubles with its management?

18:07.360 --> 18:09.360
So the answer is yes.

18:09.360 --> 18:11.360
There are some technicalities.

18:11.360 --> 18:17.360
I think there are some implementation-specific details that we can improve somehow.

18:17.360 --> 18:21.360
Actually, it's okay.

18:21.360 --> 18:28.360
Maybe we don't have the time right here to introduce how catalogs work, but we may want to do the unpacking at

18:29.360 --> 18:35.360
the catalog level, and we hope that this will just ease a lot the publication process of unpacking

18:35.360 --> 18:36.360
images.

18:36.360 --> 18:37.360
Okay.

18:41.360 --> 18:46.360
So in your presentation, in the beginning you talked about the vast amount of data

18:46.360 --> 18:47.360
that is produced.

18:47.360 --> 18:48.360
Yes.

18:48.360 --> 18:49.360
Exactly.

18:49.360 --> 18:52.360
But then you really switched over to just the software.

18:52.360 --> 18:57.360
That's what you provide, at least that's what we were talking about.

18:57.360 --> 19:02.360
Do you also provide the storage, like, for data sets and so on?

19:02.360 --> 19:04.360
Yes.

19:04.360 --> 19:12.360
So, it was primarily designed just for software distribution.

19:12.360 --> 19:17.360
But then there is the possibility to distribute static data.

19:17.360 --> 19:22.360
And also, you can use data that have already been stored elsewhere.

19:22.360 --> 19:30.360
So the idea is, there's external data distribution.

19:30.360 --> 19:35.360
The catalogs are generated in a repository the same way

19:35.360 --> 19:38.360
as it would have been done when publishing software.

19:38.360 --> 19:42.360
And then the data can be accessed by the client directly where they live,

19:42.360 --> 19:47.360
though not in the content-addressable format anymore.

19:47.360 --> 19:54.360
So if we go via the normal process of publishing,

19:54.360 --> 19:59.360
it will be content-addressable; if not, it will just be accessed the same way,

19:59.360 --> 20:05.360
but through the catalogs provided by the normal publishing.

20:12.360 --> 20:14.360
Yes.

20:14.360 --> 20:16.360
Can I have this dual on facts?

20:16.360 --> 20:22.360
Container of GZ-Direct releases.

20:22.360 --> 20:32.360
How do unpacked CVMFS-based container images interact with Kubernetes?

20:32.360 --> 20:35.360
You can use these.

20:35.360 --> 20:41.360
You can launch containers with these as long as you have a CVMFS mount,

20:41.360 --> 20:45.360
though I have no experience with Kubernetes.

20:45.360 --> 20:54.360
If you can set up a Kubernetes host or whatever that can launch the container using this command,

20:54.360 --> 20:55.360
then yes.

20:55.360 --> 20:56.360
Otherwise, no.

20:56.360 --> 20:59.360
There's a plugin that you can use.

20:59.360 --> 21:00.360
Yes.

21:00.360 --> 21:04.360
The containerd snapshotter?

21:05.360 --> 21:08.360
Yes.

21:08.360 --> 21:09.360
Yes.

21:09.360 --> 21:14.360
The plugin that does this is the snapshotter that I mentioned just before.

21:22.360 --> 21:24.360
Okay.

21:24.360 --> 21:25.360
Removing files.

21:25.360 --> 21:29.360
How does removing files and garbage collection work?

21:29.360 --> 21:39.360
Via the process of publishing, there is a similar procedure to update the repositories on the server where they live.

21:39.360 --> 21:45.360
There is a chain, like, before the data become available to the client.

21:45.360 --> 21:49.360
There is a series of proxies and then mirror servers.

21:49.360 --> 21:53.360
And then there is one principle server that is responsible for publishing.

21:53.360 --> 21:59.360
And it's there where the updating happens.

21:59.360 --> 22:07.360
For garbage collection, I'm not the guy to respond; maybe Valentin, what do you say?

22:07.360 --> 22:08.360
Yes.

22:08.360 --> 22:09.360
Yeah.

22:09.360 --> 22:11.360
But about the internals, I cannot say more.

22:11.360 --> 22:25.360
It looks like it's getting more popular in recent years.

22:25.360 --> 22:27.360
Is there any explanation for this?

22:27.360 --> 22:33.360
Is it the container support, or is it one of the other things going on?

22:34.360 --> 22:43.360
Well, it's been around for long now, but it's getting a broader acceptance and it's being utilized.

22:43.360 --> 22:48.360
I think, well, I think yes, it's a decent effort.

22:48.360 --> 22:50.360
It's proven to work quite well.

22:50.360 --> 22:54.360
And then, yeah, container image unpacking is a benefit, I can say.

23:03.360 --> 23:15.360
I'm not very clear.

23:15.360 --> 23:27.360
Is the question does it compress the files?

23:27.360 --> 23:32.360
Is it a lot of work? Does it require, like, beefy machines

23:32.360 --> 23:35.360
to set up a server?

23:35.360 --> 23:42.360
No, it works on small virtual machines, on a laptop.

23:42.360 --> 23:44.360
It definitely works in a container.

23:44.360 --> 23:56.360
So basically, as soon as the content is processed and published, the server is just serving HTTP requests from then on.

23:56.360 --> 23:59.360
And it can be externalized somewhat:

23:59.360 --> 24:07.360
You can serve the HTTP requests with a standard web server implementation.

24:07.360 --> 24:11.360
And the point of actually scaling up,

24:11.360 --> 24:19.360
to serve a lot of clients, is to add layers of proxies, and then

24:19.360 --> 24:25.360
sharding the requests and adding more layers and more sophistication to the proxies.

24:25.360 --> 24:28.360
That's the way you serve a lot.

24:28.360 --> 24:32.360
And the central server doesn't need to be beefy.

24:32.360 --> 24:37.360
I also think it depends a bit on the amount of data that is to be published.

24:37.360 --> 24:45.360
I think if you talk to others, maybe they have a different opinion about it.

24:45.360 --> 24:51.360
So yeah, I mean, for small-scale experiments it may be easy.

24:51.360 --> 24:56.360
But if you want to publish a big amount of data, I think publication becomes an issue.

24:56.360 --> 25:07.360
Actually, there are developments to ease the publication of sub-parts of a directory from different machines, via a dedicated entity, which is the gateway.

25:07.360 --> 25:12.360
So my remark would be: it depends on the scale of what you want to publish.

25:12.360 --> 25:19.360
So we have five more minutes for questions.

25:19.360 --> 25:22.360
Don't be shy.

25:22.360 --> 25:24.360
No questions.

25:24.360 --> 25:25.360
All right.

25:25.360 --> 25:26.360
Thank you.

