WEBVTT

00:00.000 --> 00:22.000
So we'll wrap up the pause and we'll get started.

00:22.000 --> 00:28.000
Okay, let's start.

00:28.000 --> 00:30.000
Good afternoon, everyone.

00:30.000 --> 00:33.000
My name is Greencee from Alibaba Cloud.

00:33.000 --> 00:35.000
Today, I'd like to share

00:35.000 --> 00:39.000
our work on a CSI-based tiered storage plugin.

00:39.000 --> 00:49.000
This plugin aims to bridge the gap between local block storage and cloud block storage.

00:49.000 --> 00:51.000
Let's begin.

00:52.000 --> 00:55.000
Today, we will start with the challenges of modern storage.

00:55.000 --> 01:03.000
Then we will give a brief introduction to Kubernetes and the CSI to set the context.

01:03.000 --> 01:08.000
After that, we will dive into the design of our tiered storage plugin,

01:08.000 --> 01:12.000
followed by the demo.

01:12.000 --> 01:14.000
Then we will look at the benchmark results.

01:14.000 --> 01:22.000
Finally, we will compare our approach with the alternatives and wrap up with a summary.

01:22.000 --> 01:25.000
Okay, let's start with the background.

01:25.000 --> 01:31.000
Today, applications like databases and other workloads demand three things simultaneously:

01:31.000 --> 01:37.000
high performance, enterprise-grade reliability, and operational simplicity.

01:37.000 --> 01:41.000
But current storage forces us to compromise.

01:41.000 --> 01:45.000
On the one hand, we have local NVMe SSDs.

01:45.000 --> 01:52.000
They provide incredibly fast latency and millions of IOPS.

01:52.000 --> 01:58.000
But they are ephemeral: if the host fails,

01:58.000 --> 02:01.000
the data they hold will be lost.

02:01.000 --> 02:04.000
On the other hand, we have cloud block storage.

02:04.000 --> 02:11.000
It offers incredible durability and features like snapshots, clones, and resizing.

02:11.000 --> 02:15.000
But it is limited by network throughput.

02:15.000 --> 02:21.000
So, what if we could take the durability of the cloud block storage

02:21.000 --> 02:30.000
and combine it with the raw speed of local disks, all managed seamlessly within CSI and Kubernetes?

02:30.000 --> 02:34.000
That is precisely what our plugin aims to achieve.

02:34.000 --> 02:38.000
We are talking about a smart system

02:38.000 --> 02:44.000
that hides the complexity, providing a high performance boost

02:44.000 --> 02:51.000
where it matters most, without compromising data persistence.

02:51.000 --> 02:55.000
Okay.

02:55.000 --> 03:00.000
Before we dive into the architecture of this tiered storage solution,

03:00.000 --> 03:08.000
it's essential to cover two concepts: Kubernetes and the Container Storage Interface.

03:08.000 --> 03:13.000
Kubernetes is basically a tool that runs your apps in containers.

03:13.000 --> 03:18.000
It handles starting and restarting them and makes sure they stay up.

03:18.000 --> 03:23.000
A lot of these apps also need somewhere to store their data.

03:23.000 --> 03:30.000
And honestly, most of them don't care how the storage works behind the scenes.

03:30.000 --> 03:34.000
They just need it.

03:34.000 --> 03:41.000
They just want it to be reliable and available when they need it.

03:41.000 --> 03:48.000
So, how does Kubernetes actually connect to all these different kinds of storage?

03:48.000 --> 03:50.000
That's where the CSI comes in.

03:51.000 --> 03:59.000
CSI provides a standardized set of interfaces that allows any storage vendor to integrate with Kubernetes.

03:59.000 --> 04:05.000
Once the driver is implemented, users can easily consume the storage through PVCs

04:05.000 --> 04:12.000
and PVs without ever needing to know the details of the hardware.

04:12.000 --> 04:18.000
As shown in the diagram, a single pod can claim multiple PVCs

04:18.000 --> 04:23.000
for different needs.

04:23.000 --> 04:28.000
Additionally, a single storage backend can be mapped to multiple PVs

04:28.000 --> 04:35.000
to serve various claims across the cluster.
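
NOTE Editor's sketch, not from the talk: a minimal PVC plus a pod that
consumes it, with all names placeholders. The pod only references the claim;
the PV and backend details stay hidden behind CSI.
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: data-pvc
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 100Gi
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: app
  spec:
    containers:
    - name: app
      image: nginx              # any application image
      volumeMounts:
      - name: data
        mountPath: /data        # the storage appears here, backend-agnostic
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc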

04:35.000 --> 04:42.000
Now, let's look at how we actually build this tiered storage system.

04:42.000 --> 04:50.000
This architecture diagram shows the interaction between the Kubernetes control plane and the Kubernetes nodes.

04:50.000 --> 04:54.000
Let's walk through a typical provisioning scenario.

04:54.000 --> 04:59.000
First, when we create a PVC in a cluster,

04:59.000 --> 05:09.000
the CSI driver will call the storage vendor's API to create an EBS volume in the storage cluster.

05:09.000 --> 05:14.000
When it succeeds, the driver registers a PV within Kubernetes.

05:14.000 --> 05:18.000
It's then automatically bound to the PVC,

05:18.000 --> 05:30.000
and the binding completes, making it ready for pod usage.

05:30.000 --> 05:35.000
Then, when a user creates a pod to consume this PVC,

05:35.000 --> 05:42.000
the Kubernetes scheduler assigns the pod to a specific node.

05:42.000 --> 05:48.000
At this point, the CSI driver will

05:48.000 --> 05:51.000
again trigger the storage vendor's API,

05:51.000 --> 05:56.000
and attach the EBS volume to that specific node.

05:56.000 --> 06:04.000
As you can see in the diagram, the device becomes visible on the node at the OS level.

06:04.000 --> 06:11.000
So that's the provisioning foundation of our storage.

06:11.000 --> 06:15.000
You will notice we have two types of nodes.

06:15.000 --> 06:20.000
On nodes without local disks, application pods

06:20.000 --> 06:25.000
use the EBS volume directly.

06:25.000 --> 06:29.000
But on nodes with local disks, shown on the right,

06:29.000 --> 06:37.000
we take extra steps to build our extension layer.

06:37.000 --> 06:40.000
So, how do we build this layer?

06:40.000 --> 06:44.000
We initialize the local disks into a RAID 0 array,

06:44.000 --> 06:46.000
format it with XFS,

06:46.000 --> 06:48.000
and mount it to a specific path.

06:48.000 --> 06:55.000
This creates a high-speed cache that we will use later.
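
NOTE Editor's sketch of the cache-layer setup just described; the device
names and the mount path are assumptions, not taken from the talk.
  # Stripe the local NVMe disks into a RAID 0 array to aggregate bandwidth
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
  # Format with XFS; its allocation groups allow concurrent I/O across cores
  mkfs.xfs /dev/md0
  # Mount at a well-known path for the node plugin to use
  mkdir -p /mnt/cache
  mount /dev/md0 /mnt/cache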

06:55.000 --> 06:59.000
Here is why

06:59.000 --> 07:02.000
we went with this specific configuration.

07:02.000 --> 07:06.000
First, RAID 0 striping

07:06.000 --> 07:11.000
allows us to scale our bandwidth across all the local disks.

07:11.000 --> 07:15.000
Second, we chose XFS for its allocation groups,

07:15.000 --> 07:20.000
which allow highly concurrent I/O across multiple CPU cores

07:20.000 --> 07:27.000
while minimizing metadata contention.

07:27.000 --> 07:29.000
Then, let's move back to the container layer.

07:29.000 --> 07:32.000
Once the CSI node plugin detects that the pod

07:32.000 --> 07:35.000
has been scheduled,

07:35.000 --> 07:39.000
it begins constructing the tiered storage layer.

07:39.000 --> 07:42.000
Using the file path defined in the PV,

07:42.000 --> 07:46.000
the driver creates two files under that path

07:46.000 --> 07:51.000
on the local XFS filesystem we just initialized.

07:51.000 --> 08:01.000
These files are named meta and data.

08:01.000 --> 08:06.000
These files are mapped into block devices using losetup,

08:06.000 --> 08:12.000
appearing as loop0 for metadata and loop1 for data.

08:12.000 --> 08:16.000
These two loop devices, together with the EBS volume

08:16.000 --> 08:19.000
we just attached to this node,

08:19.000 --> 08:22.000
form the three components

08:22.000 --> 08:27.000
needed to initialize the dm-cache target.

08:27.000 --> 08:31.000
The final result presented to the application pod

08:31.000 --> 08:39.000
is /dev/dm-0,

08:39.000 --> 08:42.000
a device-mapper block device.

08:42.000 --> 08:47.000
And yes, from the pod's perspective,

08:47.000 --> 08:50.000
this is a standard block device.

08:50.000 --> 08:54.000
You can use it however you want.

08:54.000 --> 08:58.000
It can support any application.

08:58.000 --> 09:02.000
Now, let's look at the details of this mapping layer.

09:02.000 --> 09:04.000
To manage local storage efficiently,

09:04.000 --> 09:09.000
we use the fallocate command to preallocate space for a file.

09:09.000 --> 09:13.000
To make this file usable by the Linux kernel,

09:13.000 --> 09:16.000
we use the losetup command.

09:16.000 --> 09:20.000
This maps our files to /dev/loop devices,

09:20.000 --> 09:24.000
allowing the kernel to treat them as raw block devices.
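
NOTE Editor's sketch of the fallocate/losetup step; the file names and sizes
are illustrative assumptions.
  # Preallocate the two backing files on the local XFS mount
  fallocate -l 1G   /mnt/cache/meta
  fallocate -l 100G /mnt/cache/data
  # Map each file to the first free loop device (-f finds it, --show prints it)
  losetup -f --show /mnt/cache/meta    # e.g. /dev/loop0, the cache metadata
  losetup -f --show /mnt/cache/data    # e.g. /dev/loop1, the cache data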

09:24.000 --> 09:30.000
At this point, we have all the essential elements required by dm-cache.

09:30.000 --> 09:34.000
The remote EBS volume acts as the origin,

09:34.000 --> 09:37.000
the reliable source of truth.

09:37.000 --> 09:43.000
Loop device one becomes the high-speed cache layer,

09:43.000 --> 09:45.000
while loop device zero

09:45.000 --> 09:50.000
stores the metadata of the tiered storage.
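
NOTE Editor's sketch of assembling the dm-cache target from the three
components; device names are assumptions. Table format: "cache <metadata dev>
<cache dev> <origin dev> <block size> <#feature args> <policy> <#policy args>".
  ORIGIN=/dev/vdb                        # the attached EBS volume (assumed name)
  SECTORS=$(blockdev --getsz "$ORIGIN")  # origin size in 512-byte sectors
  # 512-sector (256 KiB) cache blocks, no feature args (writeback by default),
  # default smq policy with no tunables
  dmsetup create cached-vol --table \
    "0 $SECTORS cache /dev/loop0 /dev/loop1 $ORIGIN 512 0 smq 0"
  # The pod then sees /dev/mapper/cached-vol (a /dev/dm-* device)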

09:50.000 --> 09:55.000
That's the design of our tiered storage plugin.

09:55.000 --> 09:58.000
Now, let's look at the core capabilities

09:58.000 --> 10:01.000
this architecture offers.

10:01.000 --> 10:04.000
First, we have on-demand expansion.

10:04.000 --> 10:08.000
You can resize your EBS volume online with zero downtime.

10:08.000 --> 10:12.000
I will show you exactly how this works in the demo later.

10:12.000 --> 10:15.000
We also fully support multi-attach,

10:15.000 --> 10:23.000
which means you can attach one EBS volume to multiple compute nodes.

10:24.000 --> 10:30.000
Because we don't store any tiering metadata on the EBS volume itself,

10:30.000 --> 10:33.000
the EBS volume

10:33.000 --> 10:38.000
functions exactly like a standard EBS volume.

10:38.000 --> 10:43.000
This is also why we can simultaneously support backup,

10:43.000 --> 10:46.000
cloning, and failover.

10:46.000 --> 10:52.000
From a usability perspective, the output is a standard block device,

10:52.000 --> 10:56.000
so applications can use it without any change.

10:56.000 --> 10:58.000
We just talked about that.

10:58.000 --> 11:02.000
Finally, there is no additional background service

11:02.000 --> 11:06.000
or daemons to maintain in this whole plugin.

11:06.000 --> 11:13.000
So we can significantly reduce the operational overhead.

11:13.000 --> 11:21.000
To begin the demo, let's look at how easy it is to use this tiered storage plugin.

11:21.000 --> 11:30.000
First, we define a standard storage class in the cluster.

11:30.000 --> 11:33.000
We define this storage class,

11:33.000 --> 11:35.000
to include the data cache class

11:35.000 --> 11:38.000
we have just initialized.

11:38.000 --> 11:45.000
We also define the data cache size as 100 gigabytes.

11:45.000 --> 11:47.000
When it's defined,

11:47.000 --> 11:58.000
it means we pre-allocate that size for the cache.
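
NOTE Editor's sketch of such a StorageClass; the parameter names below are
hypothetical, since the talk does not show the exact manifest.
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: tiered-ebs
  provisioner: csi.example.com      # placeholder CSI driver name
  allowVolumeExpansion: true        # required for the online-resize demo
  parameters:
    dataCache: "true"               # hypothetical: enable the local cache layer
    dataCacheSize: "100Gi"          # hypothetical: pre-allocated cache size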

11:58.000 --> 12:03.000
Next, we deploy a StatefulSet.

12:03.000 --> 12:09.000
This StatefulSet claims our storage class name

12:09.000 --> 12:13.000
via its volumeClaimTemplates.

12:13.000 --> 12:18.000
We set up a script,

12:18.000 --> 12:21.000
defined in the StatefulSet,

12:21.000 --> 12:26.000
which contains two FIO benchmark tests.
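
NOTE Editor's sketch of such a StatefulSet; the image and fio flags are
assumed for illustration and are not the exact ones used in the demo.
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: fio-test
  spec:
    serviceName: fio-test
    replicas: 1
    selector:
      matchLabels: {app: fio-test}
    template:
      metadata:
        labels: {app: fio-test}
      spec:
        containers:
        - name: fio
          image: fio-runner:latest   # placeholder image with fio installed
          command: ["sh", "-c",
            "fio --name=read --filename=/data/test --size=10G --rw=read --bs=1M --direct=1; sleep infinity"]
          volumeMounts:
          - name: data
            mountPath: /data
    volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: tiered-ebs      # the class sketched above
        resources:
          requests:
            storage: 120Gi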

12:26.000 --> 12:29.000
You can see when the pod is running,

12:29.000 --> 12:36.000
the results show a significant performance boost.

12:36.000 --> 12:40.000
This is the first read:

12:40.000 --> 12:49.000
around 150 MB/s. But on the second read,

12:49.000 --> 12:54.000
it reaches 1,060 MB/s.

12:54.000 --> 13:01.000
Next, we will show the online expansion.

13:01.000 --> 13:05.000
The application pod is running, and

13:06.000 --> 13:10.000
we enter the pod and check the device:

13:10.000 --> 13:14.000
the size of

13:14.000 --> 13:19.000
our block device is 120 gigabytes,

13:19.000 --> 13:22.000
mounted at the data path.

13:22.000 --> 13:27.000
Then we do the standard Kubernetes operation

13:27.000 --> 13:32.000
to patch the PVC storage request to 200 gigabytes

13:33.000 --> 13:36.000
and we will see the device

13:37.000 --> 13:41.000
in the pod has become 200 gigabytes.

13:42.000 --> 13:45.000
Yeah, that's our online expansion.

13:46.000 --> 13:51.000
The application experiences zero downtime.
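
NOTE Editor's sketch of the expansion operation; the PVC and pod names follow
the placeholder StatefulSet above.
  # Patch the PVC's storage request from 120Gi to 200Gi
  kubectl patch pvc data-fio-test-0 --type merge \
    -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'
  # Inside the pod, the block device reflects the new size once resize finishes
  kubectl exec fio-test-0 -- lsblk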

13:51.000 --> 13:58.000
And look at the bottom picture:

13:59.000 --> 14:05.000
the cache file does not change during this process.

14:06.000 --> 14:08.000
As you can see, the cache file

14:09.000 --> 14:11.000
remains at 100 gigabytes,

14:12.000 --> 14:16.000
which is what we defined in the storage class.

14:16.000 --> 14:20.000
Finally, let's look at data continuity.

14:21.000 --> 14:25.000
At the top of the screen you can see we have a pod running

14:26.000 --> 14:29.000
on node number, sorry,

14:30.000 --> 14:34.000
node number 244

14:35.000 --> 14:42.000
and we can see a test file under the path,

14:42.000 --> 14:44.000
named 1.data,

14:45.000 --> 14:52.000
which is on the mount point of our virtual device.

14:53.000 --> 14:58.000
Yeah, and then

14:59.000 --> 15:03.000
we simulate node failure by deleting the pod,

15:04.000 --> 15:09.000
and Kubernetes will immediately schedule this pod to another node,

15:10.000 --> 15:12.000
and the pod becomes running.

15:13.000 --> 15:16.000
This node is number 245.

15:20.000 --> 15:24.000
We enter the new pod, and

15:25.000 --> 15:30.000
you can see we have exactly the same file,

15:31.000 --> 15:36.000
but the mount point of the data has become the virtual block device,

15:37.000 --> 15:38.000
now the device-mapper device.

15:39.000 --> 15:42.000
So as you can see, the 232

15:43.000 --> 15:46.000
gigabyte file is perfectly intact.

15:47.000 --> 15:49.000
The application remains entirely unaware

15:50.000 --> 15:53.000
that the underlying cache was swapped out;

15:54.000 --> 15:57.000
it simply sees its data and continues running.

15:58.000 --> 16:03.000
This proves that we have successfully integrated local disk performance

16:03.000 --> 16:05.000
with cloud-grade persistence.
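
NOTE Editor's sketch of the failover sequence; names follow the placeholder
StatefulSet above.
  # Simulate a node failure by deleting the pod
  kubectl delete pod fio-test-0
  # The StatefulSet controller reschedules it; watch it land on another node
  kubectl get pod fio-test-0 -o wide --watch
  # In the new pod, the same file is intact on the re-attached EBS volume
  kubectl exec fio-test-0 -- ls -lh /data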

16:09.000 --> 16:11.000
After that, let's look at the numbers.

16:12.000 --> 16:18.000
First, our test environment consists of a 120-gigabyte cloud disk

16:19.000 --> 16:22.000
and a 100-gigabyte local SSD.

16:23.000 --> 16:26.000
The benchmark command we use is FIO,

16:27.000 --> 16:28.000
listed here.

16:28.000 --> 16:32.000
The baseline is on the left:

16:33.000 --> 16:39.000
we use this FIO command to test the EBS volume directly
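
NOTE Editor's sketch of an fio invocation of the kind described; the exact
flags and device name are assumptions.
  # 4k random-read baseline against the raw EBS device
  fio --name=baseline --filename=/dev/vdb --rw=randread --bs=4k \
      --iodepth=32 --numjobs=4 --ioengine=libaio --direct=1 \
      --runtime=60 --time_based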

16:40.000 --> 16:45.000
We got 1,620 IOPS.

16:47.000 --> 16:49.000
This is our starting point.

16:50.000 --> 16:54.000
In the middle you can see the performance during the very first read.

16:54.000 --> 16:58.000
This is the cold read.

16:59.000 --> 17:03.000
And on the right you will see

17:04.000 --> 17:09.000
the performance

17:10.000 --> 17:17.000
when the data is sitting in the local disk cache.

17:18.000 --> 17:24.000
The performance skyrockets to 224,000 IOPS.

17:25.000 --> 17:28.000
This is the random read.

17:29.000 --> 17:32.000
And then the sequential read.

17:33.000 --> 17:37.000
In the sequential read there is no significant improvement.

17:38.000 --> 17:45.000
This is because dm-cache was originally designed for HDD-to-SSD tiering,

17:45.000 --> 17:48.000
where the backing disk already handles long sequential spans well.

17:49.000 --> 17:51.000
The default policy routes

17:52.000 --> 17:54.000
sequential reads directly to the origin layer,

17:55.000 --> 17:57.000
bypassing the cache to reduce churn.

17:58.000 --> 18:02.000
We did some research:

18:03.000 --> 18:07.000
there used to be a parameter called sequential_threshold

18:08.000 --> 18:11.000
in the dm-cache mq policy,

18:11.000 --> 18:16.000
but it is no longer available with the current smq policy.

18:17.000 --> 18:23.000
So for now we have no way to improve the sequential read.
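
NOTE Editor's note, for background and worth verifying on your kernel: the
old mq policy accepted sequential_threshold as a policy argument in the
dmsetup table; the smq policy dropped these tunables.
  # Hypothetical mq-policy table with the old tunable (2 policy args)
  dmsetup create cached-vol --table \
    "0 $SECTORS cache /dev/loop0 /dev/loop1 $ORIGIN 512 0 mq 2 sequential_threshold 1024"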

18:24.000 --> 18:26.000
So that is that.

18:27.000 --> 18:29.000
Next, we see the write path.

18:30.000 --> 18:32.000
By using the writeback mode,

18:33.000 --> 18:36.000
writes are acknowledged the moment

18:37.000 --> 18:39.000
they land in the local cache;

18:39.000 --> 18:43.000
the sync to the cloud happens asynchronously.

18:44.000 --> 18:50.000
As you can see, this takes our throughput from 132 MB/s

18:51.000 --> 18:56.000
to 3,500 MB/s.

18:57.000 --> 19:01.000
The impact on performance is incredible.

19:02.000 --> 19:05.000
Now we have seen the performance results;

19:05.000 --> 19:08.000
let's move on to the comparisons.

19:09.000 --> 19:12.000
The first one is LVM.

19:13.000 --> 19:17.000
LVM is perhaps more familiar

19:18.000 --> 19:20.000
compared with dm-cache.

19:22.000 --> 19:26.000
But the main problem of LVM

19:27.000 --> 19:30.000
is its metadata layout:

19:30.000 --> 19:32.000
because the LVM metadata

19:34.000 --> 19:37.000
is embedded into the EBS volume,

19:38.000 --> 19:42.000
it no longer functions as a standard block device.

19:43.000 --> 19:46.000
That is the main reason we don't use LVM.

19:47.000 --> 19:50.000
Second, there is significant migration overhead.

19:51.000 --> 19:54.000
Kubernetes is built for fast pod movement,

19:55.000 --> 19:57.000
but LVM makes volumes heavy.

19:57.000 --> 19:59.000
this is because you have to

20:00.000 --> 20:03.000
manually clean the LVM metadata before the EBS volume

20:04.000 --> 20:06.000
can be safely attached to the new node

20:07.000 --> 20:11.000
Third, multi-attach support is extremely limited.

20:12.000 --> 20:15.000
Standard LVM is designed for a single node's

20:16.000 --> 20:17.000
exclusive access,

20:18.000 --> 20:20.000
which means you cannot safely attach an LVM-

20:21.000 --> 20:24.000
managed volume to multiple nodes simultaneously.

20:25.000 --> 20:28.000
Finally, there is the issue of compatibility.

20:29.000 --> 20:33.000
If you already have an existing EBS volume

20:34.000 --> 20:35.000
filled with data,

20:36.000 --> 20:39.000
you can't just turn on LVM

20:40.000 --> 20:43.000
without a lot of remapping complexity.

20:46.000 --> 20:49.000
Then there is another alternative:

20:50.000 --> 20:52.000
why not just build a large storage

20:53.000 --> 20:56.000
cluster like Ceph by using these local disks?

20:57.000 --> 21:00.000
Well, Ceph is powerful,

21:01.000 --> 21:03.000
but it has four major drawbacks

21:04.000 --> 21:06.000
compared with our tiered storage plugin.

21:07.000 --> 21:08.000
First is the reliability,

21:09.000 --> 21:11.000
or what we call the blast radius.

21:12.000 --> 21:15.000
In a large cluster, a single network link failure

21:16.000 --> 21:18.000
or metadata error can cause a

21:18.000 --> 21:20.000
failure that crashes the entire system.

21:21.000 --> 21:23.000
Our solution keeps things isolated:

21:24.000 --> 21:25.000
if one node has an issue,

21:26.000 --> 21:28.000
it doesn't take down your whole storage pool.

21:29.000 --> 21:32.000
Second, managing a self-built cluster is a massive

21:33.000 --> 21:34.000
operational headache

21:35.000 --> 21:37.000
You need specialized engineers

21:38.000 --> 21:41.000
who understand complex components like OSDs.

21:43.000 --> 21:46.000
Third, we bypass the network bottleneck.

21:46.000 --> 21:48.000
Self-managed storage

21:49.000 --> 21:52.000
clusters are always limited

21:53.000 --> 21:54.000
by network speed,

21:55.000 --> 21:57.000
usually 10 or 25 gigabits.

21:58.000 --> 22:00.000
By using the local bus on the server,

22:01.000 --> 22:05.000
our tiered plugin gets data to your apps much faster

22:06.000 --> 22:09.000
than any network-based storage could.

22:10.000 --> 22:12.000
Finally, there is cloud integration.

22:12.000 --> 22:16.000
Self-built storage often acts like a storage island:

22:17.000 --> 22:20.000
it's isolated and hard to connect

22:21.000 --> 22:24.000
with cloud features

22:25.000 --> 22:28.000
like EBS snapshots or regional backup.

22:29.000 --> 22:35.000
Our plugin is cloud native, which means it works perfectly with the existing cloud ecosystem.

22:37.000 --> 22:40.000
In conclusion, we have demonstrated how

22:40.000 --> 22:43.000
this CSI-based tiered storage plugin resolves

22:44.000 --> 22:46.000
the primary tension in modern storage:

22:47.000 --> 22:49.000
the trade-off between the ultra-high

22:50.000 --> 22:53.000
performance of ephemeral local SSDs

22:54.000 --> 22:57.000
and the rock-solid durability of cloud block storage.

22:58.000 --> 23:00.000
By implementing this architecture,

23:01.000 --> 23:05.000
we are no longer forcing developers to compromise.

23:06.000 --> 23:08.000
We are providing a storage solution

23:08.000 --> 23:10.000
that is as fast as local hardware

23:11.000 --> 23:14.000
and as reliable as the cloud backbone

23:15.000 --> 23:18.000
A quick word on the philosophy behind this:

23:19.000 --> 23:22.000
this plugin is cloud-first.

23:23.000 --> 23:25.000
We want our customers to delegate that complexity

23:26.000 --> 23:28.000
to the cloud vendor,

23:29.000 --> 23:30.000
offloading the heavy lifting

23:31.000 --> 23:33.000
of infrastructure,

23:34.000 --> 23:37.000
so they can focus entirely on their own business logic.

23:38.000 --> 23:41.000
So if you are interested in this plugin,

23:42.000 --> 23:44.000
feel free to reach us via the repository issues

23:45.000 --> 23:48.000
and email. I look forward to connecting with you.

23:49.000 --> 23:50.000
Yeah, that's all,

23:52.000 --> 23:54.000
and now we have time for questions.

23:55.000 --> 23:56.000
yeah, please

23:57.000 --> 24:35.000
[Audience question, largely inaudible: about using S3 instead of EBS volumes behind the cache, and whether the speaker has any advice.]

24:36.000 --> 24:52.000
Ah, advice... this plugin is about block storage, you see. Sorry, I can't perfectly get what you're asking; maybe we can talk about this later.

24:54.000 --> 24:55.000
please

24:57.000 --> 25:08.000
[Audience question, partially inaudible: about the write consistency of data on the EBS volume.]

25:09.000 --> 25:15.000
Yeah, there are two write modes provided by dm-cache.

25:16.000 --> 25:19.000
You've got two modes:

25:20.000 --> 25:23.000
the first is writeback, the second is writethrough.

25:24.000 --> 25:26.000
If you set it to writeback,

25:27.000 --> 25:29.000
the write is acknowledged

25:30.000 --> 25:32.000
the moment

25:33.000 --> 25:37.000
the data goes into the local cache layer.

25:38.000 --> 25:40.000
If you set it to writethrough,

25:41.000 --> 25:43.000
the data will be written through to the EBS volume.

25:44.000 --> 25:48.000
Yes, it's your choice; you can customize it.
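
NOTE Editor's sketch of selecting the mode via the dm-cache feature argument
(same table layout and placeholder names as earlier).
  # writethrough: acknowledge only after the write reaches the origin (EBS);
  # omit the feature arg for the default writeback behaviour
  dmsetup create cached-vol --table \
    "0 $SECTORS cache /dev/loop0 /dev/loop1 $ORIGIN 512 1 writethrough smq 0"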

25:57.000 --> 25:58.000
yeah

26:11.000 --> 26:14.000
Yeah, we use this in our internal scenarios,

26:15.000 --> 26:19.000
but not publicly; this is the first time we present it publicly.

26:21.000 --> 26:23.000
yeah, yeah

26:23.000 --> 26:37.000
[Audience comment, largely inaudible: usually, adopting something like this depends on whether the cloud vendor's support policies cover it.]

26:38.000 --> 26:43.000
[Partially inaudible] We will not control it, not for...

26:48.000 --> 26:49.000
Yeah, for now,

26:50.000 --> 26:52.000
we haven't received such a question

26:53.000 --> 26:56.000
so I can't give you a conclusion, sorry

26:57.000 --> 26:58.000
yeah, please

27:07.000 --> 27:09.000
Sorry, I can't catch your question.

27:10.000 --> 27:11.000
sorry

27:24.000 --> 27:25.000
sorry

27:25.000 --> 27:27.000
do you run virtual machines?

27:28.000 --> 27:30.000
Yeah, this runs on virtual machines.

27:31.000 --> 27:32.000
To host virtual machines?

27:33.000 --> 27:34.000
Virtual machine images?

27:36.000 --> 27:38.000
Yeah, exactly.

27:39.000 --> 27:43.000
You mean something like an image,

27:44.000 --> 27:47.000
like that, with multiple manifests?

27:49.000 --> 27:50.000
okay

27:53.000 --> 27:58.000
Thank you.

