WEBVTT

00:00.000 --> 00:10.000
All right, can you hear me, yeah?

00:10.000 --> 00:11.000
Yeah, everyone.

00:11.000 --> 00:13.000
So, good evening.

00:13.000 --> 00:15.000
Thank you for coming.

00:15.000 --> 00:18.000
I'm Victor Toso, and we're here to talk about

00:18.000 --> 00:21.000
VDI in KubeVirt.

00:21.000 --> 00:25.000
The goal of this presentation is to tell a little bit about

00:25.000 --> 00:28.000
the current state of affairs.

00:28.000 --> 00:29.000
What works well?

00:29.000 --> 00:30.000
What doesn't?

00:30.000 --> 00:33.000
And plans for the future.

00:33.000 --> 00:37.000
So, first of all, who am I?

00:37.000 --> 00:42.000
I worked on SPICE for a little bit over four years.

00:42.000 --> 00:46.000
SPICE, I don't know if you know it, is this remote protocol

00:46.000 --> 00:48.000
that used to...

00:48.000 --> 00:51.000
It still does work well.

00:51.000 --> 00:54.000
I was in the virt team at Red Hat for quite a while.

00:54.000 --> 00:57.000
Nowadays, I'm working in the KubeVirt infrastructure team.

00:57.000 --> 01:01.000
And, yeah, I've been working on KubeVirt more or less since 2020,

01:01.000 --> 01:06.000
and I've been a reviewer there since 2024.

01:06.000 --> 01:09.000
So, what is VDI?

01:09.000 --> 01:10.000
The term covers a lot, right?

01:10.000 --> 01:12.000
Like, it's this whole broad topic of infrastructure.

01:12.000 --> 01:14.000
For the purpose of this talk, like,

01:14.000 --> 01:16.000
trying to narrow down what it is,

01:16.000 --> 01:19.000
it's the technology that allows you to remotely access

01:19.000 --> 01:21.000
the virtual machine.

01:22.000 --> 01:26.000
Yeah, everything that is needed for that to work well.

01:26.000 --> 01:31.000
It's key to understand that integration with the hypervisor

01:31.000 --> 01:33.000
is important.

01:33.000 --> 01:38.000
So the hypervisor, for instance, in the context of this talk,

01:38.000 --> 01:40.000
in the KubeVirt environment, is QEMU.

01:40.000 --> 01:45.000
And, yeah, we have integrated display, keyboard,

01:45.000 --> 01:50.000
and anything else that QEMU provides as interfaces.

01:51.000 --> 01:54.000
And it does not use guest resources.

01:54.000 --> 01:58.000
So, because there is also the option of accessing a remote machine

01:58.000 --> 02:00.000
from inside the operating system, right?

02:00.000 --> 02:02.000
That is not the case with VDI.

02:02.000 --> 02:06.000
We are talking about the hypervisor level in the host.

02:08.000 --> 02:10.000
Yeah, talking about VDI.

02:10.000 --> 02:13.000
There is also a little bit of confusion

02:13.000 --> 02:15.000
and understanding about remote protocols as well.

02:15.000 --> 02:19.000
So, VDI uses remote protocols,

02:19.000 --> 02:22.000
but remote protocols are not VDI, right?

02:22.000 --> 02:24.000
So, I have two examples here.

02:24.000 --> 02:28.000
Running SPICE: on the left, a

02:28.000 --> 02:32.000
Linux machine connecting to Windows 7

02:32.000 --> 02:35.000
and on the right, a Windows machine connecting to

02:35.000 --> 02:37.000
another machine.

02:37.000 --> 02:40.000
And the idea is that for the remote protocol,

02:40.000 --> 02:44.000
it doesn't matter where the VM is running.

02:44.000 --> 02:46.000
In fact,

02:46.000 --> 02:49.000
it doesn't even matter if it's a guest or not.

02:49.000 --> 02:52.000
You could be running, like, a VNC server inside the guest.

02:52.000 --> 02:55.000
Only matters that the protocol is being implemented

02:55.000 --> 02:57.000
properly so you can access.

02:57.000 --> 03:00.000
And, yeah, on the application level,

03:00.000 --> 03:02.000
you see it's like just another application,

03:02.000 --> 03:06.000
but in fact, it's remotely accessing a VM.

03:08.000 --> 03:13.000
So, this is the common case when we are talking about VDI

03:14.000 --> 03:17.000
and specifically in the case of QEMU.

03:17.000 --> 03:19.000
So, we have QEMU.

03:19.000 --> 03:22.000
It runs a guest operating system,

03:22.000 --> 03:24.000
the apps inside of the guest,

03:24.000 --> 03:28.000
but QEMU itself also runs a VNC server.

03:28.000 --> 03:32.000
It has a VNC server integrated into it.

03:32.000 --> 03:36.000
We can access the guest using that.

03:36.000 --> 03:41.000
So, it exposes the VNC server on a port

03:41.000 --> 03:47.000
that you can configure; here it's, like, 5900,

03:47.000 --> 03:50.000
which is the usual one,

03:50.000 --> 03:54.000
and then a VNC client can connect to it.

03:54.000 --> 03:57.000
Now.
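
NOTE
As a sketch of what this looks like on the QEMU side; the disk image path is a placeholder, flags as in QEMU's own documentation:
```sh
# Start a guest with QEMU's built-in VNC server on display :0,
# which maps to TCP port 5900 (5900 + display number).
qemu-system-x86_64 \
  -m 2G \
  -drive file=guest.qcow2,format=qcow2 \
  -vnc :0
# A VNC client can then connect to <host>:5900.
```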

03:57.000 --> 03:59.000
So far, so good.

03:59.000 --> 04:02.000
I didn't explain anything about KubeVirt, for instance.

04:02.000 --> 04:04.000
I'm kind of expecting you to know the project.

04:04.000 --> 04:07.000
But, yeah, if you don't, just raise your hand,

04:07.000 --> 04:09.000
otherwise it gets confusing.

04:09.000 --> 04:12.000
But, in short, KubeVirt is basically

04:12.000 --> 04:16.000
the way of running VMs in Kubernetes,

04:16.000 --> 04:21.000
installing and making it as native as possible for Kubernetes users

04:21.000 --> 04:24.000
to run workloads as VMs in Kubernetes.

04:24.000 --> 04:27.000
Now, about VDI in KubeVirt today,

04:27.000 --> 04:29.000
and for quite some time actually,

04:29.000 --> 04:31.000
since the inception of KubeVirt,

04:31.000 --> 04:35.000
the goal of VDI in KubeVirt has actually been troubleshooting.

04:35.000 --> 04:37.000
It has not been a priority

04:37.000 --> 04:42.000
to make it a first-class-citizen VDI solution,

04:42.000 --> 04:47.000
which of course is something that is becoming

04:47.000 --> 04:49.000
more of a problem, right?

04:49.000 --> 04:51.000
Like, we are getting more and more users,

04:51.000 --> 04:55.000
since, let's say, a company bought another one recently.

04:55.000 --> 04:58.000
They are interested in having more and more VDI solutions

04:58.000 --> 05:03.000
in KubeVirt, or in products that embed KubeVirt.

05:04.000 --> 05:07.000
And this is one of the reasons that I'm here today,

05:07.000 --> 05:10.000
where we start taking a step forward

05:10.000 --> 05:14.000
into how to improve the situation in KubeVirt.

05:14.000 --> 05:17.000
So, yeah, main goal is troubleshooting.

05:17.000 --> 05:21.000
We use QEMU's VNC server, so it's what QEMU gives us,

05:21.000 --> 05:25.000
basically, and the access is over the API.

05:25.000 --> 05:28.000
I'll show how it works,

05:28.000 --> 05:32.000
but this is also another problem.

05:32.000 --> 05:35.000
Yeah.

05:35.000 --> 05:37.000
So what you see here,

05:37.000 --> 05:42.000
I didn't provide a full load of components of Kubernetes

05:42.000 --> 05:47.000
or KubeVirt, but I just want to pinpoint the main actors

05:47.000 --> 05:52.000
of VNC access in KubeVirt.

05:52.000 --> 06:00.000
So, in green, you have the virt-launcher pod,

06:01.000 --> 06:04.000
and it has containers inside of it.

06:04.000 --> 06:06.000
One of them is the virt-launcher container,

06:06.000 --> 06:09.000
the compute container, which runs QEMU.

06:09.000 --> 06:16.000
And let's say that someone wants to access this remote machine over VNC.

06:16.000 --> 06:20.000
So the path forward today, what we have today,

06:20.000 --> 06:23.000
is a subresource for VNC.

06:23.000 --> 06:27.000
So we use virt-api, which is providing the subresource for us.

06:27.000 --> 06:31.000
So basically what we need is the namespace that the VM is running in

06:31.000 --> 06:33.000
and the VMI name.

06:33.000 --> 06:37.000
So with just these two, the process is as follows, right?

06:37.000 --> 06:40.000
The client will ask,

06:40.000 --> 06:43.000
I want the VNC connection for this namespace plus VM name.

06:43.000 --> 06:48.000
virt-api will receive this request and forward it to virt-handler.

06:48.000 --> 06:50.000
virt-handler is our DaemonSet in KubeVirt,

06:50.000 --> 06:52.000
so it has high privileges.

06:52.000 --> 06:55.000
So basically, this is what I'm saying is that in a cluster,

06:55.000 --> 06:59.000
virt-api will forward the request to the right node,

06:59.000 --> 07:00.000
to the right host,

07:00.000 --> 07:03.000
and then virt-handler will receive the request

07:03.000 --> 07:06.000
and will connect to this VNC server,

07:06.000 --> 07:09.000
which is exposed as a Unix socket on the node.

07:09.000 --> 07:12.000
And virt-handler will make the connection

07:12.000 --> 07:15.000
between the HTTP request, the client's request,

07:15.000 --> 07:18.000
and the Unix socket, making a WebSocket.
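
NOTE
To make that flow concrete, this is roughly what the client side looks like today; the namespace and VMI name are placeholders, and the URL follows KubeVirt's subresources.kubevirt.io API group:
```sh
# Convenience client that goes through the virt-api subresource:
virtctl vnc my-vmi -n my-namespace
# Under the hood this hits, roughly:
#   /apis/subresources.kubevirt.io/v1/namespaces/my-namespace/virtualmachineinstances/my-vmi/vnc
# virt-api upgrades the request to a WebSocket and proxies it to
# virt-handler on the node, which bridges it to QEMU's Unix socket.
```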

07:18.000 --> 07:23.000
Main issue here is that the data flow of the web socket

07:23.000 --> 07:26.000
will follow the same path as we just saw.

07:26.000 --> 07:30.000
And this can become a problem, overloading

07:30.000 --> 07:34.000
the virt-api pod.

07:34.000 --> 07:39.000
So as I said, the first problem of this approach

07:39.000 --> 07:41.000
is the data path.

07:41.000 --> 07:43.000
It does not scale well.

07:43.000 --> 07:46.000
And for that reason, since the inception,

07:46.000 --> 07:49.000
we defined that it's for troubleshooting.

07:49.000 --> 07:51.000
And for troubleshooting, this is fine.

07:51.000 --> 07:54.000
We are just accessing the VM to see why it's not booting

07:54.000 --> 07:56.000
and what's the problem there.

07:56.000 --> 08:02.000
And for that reason, yes, one single connection per VM.

08:02.000 --> 08:03.000
That's what we allow.

08:03.000 --> 08:07.000
So we actually deny, or remove:

08:07.000 --> 08:11.000
until recently, we would actually remove the VNC connection

08:11.000 --> 08:12.000
that already exists.

08:12.000 --> 08:15.000
We would drop that one, and the newer request

08:15.000 --> 08:17.000
would take over.

08:17.000 --> 08:21.000
But now we have options for that not to happen.

08:21.000 --> 08:26.000
And another issue of this approach is that we don't have

08:26.000 --> 08:29.000
an easy way to add new protocols.

08:29.000 --> 08:32.000
So basically we are getting what QEMU provides to us.

08:32.000 --> 08:34.000
And let's say that QEMU provided a different one,

08:34.000 --> 08:37.000
as it actually does with SPICE, for instance.

08:37.000 --> 08:43.000
We would have to create a whole path for that new protocol,

08:43.000 --> 08:46.000
or other protocols to exist.

08:46.000 --> 08:49.000
And that's a problem, right?

08:49.000 --> 08:52.000
It brings a lot of coding for each new feature.

08:52.000 --> 08:56.000
We want to reduce how long it takes for someone

08:56.000 --> 08:59.000
to access this feature, this new feature.

08:59.000 --> 09:02.000
And that brings us to a possible solution.

09:02.000 --> 09:08.000
So QEMU, since early 2022,

09:08.000 --> 09:12.000
provides this -display dbus interface.

09:12.000 --> 09:16.000
So what does it mean?

09:16.000 --> 09:22.000
D-Bus, does someone not know what D-Bus is?

09:22.000 --> 09:25.000
It's an IPC protocol, right?

09:25.000 --> 09:28.000
So between two processes you can make a communication.

09:28.000 --> 09:30.000
They can talk to each other.

09:30.000 --> 09:35.000
So what this means in this case is that QEMU will export

09:35.000 --> 09:40.000
all the VDI interfaces over this D-Bus interface.
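
NOTE
A minimal sketch of that QEMU side, assuming the -display dbus option (available since QEMU 7.0); the socket path is a placeholder:
```sh
# Export display, keyboard, pointer, etc. as org.qemu.Display1.*
# D-Bus interfaces instead of a built-in VNC server.
qemu-system-x86_64 \
  -drive file=guest.qcow2,format=qcow2 \
  -display dbus,addr=unix:path=/tmp/qemu-dbus
# Other processes can then consume those interfaces, e.g. introspect them:
#   busctl --address=unix:path=/tmp/qemu-dbus introspect org.qemu /org/qemu/Display1/Console_0
```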

09:40.000 --> 09:45.000
So now it allows other processes outside of QEMU,

09:45.000 --> 09:47.000
outside of the VM, in this case,

09:47.000 --> 09:53.000
to connect and expose the graphics, the keyboard, everything,

09:53.000 --> 09:56.000
as it wants.

09:56.000 --> 09:58.000
So this is a great possible solution

09:58.000 --> 10:02.000
for which I implemented a proof of concept that I'll show shortly.

10:02.000 --> 10:05.000
But first, how it would work,

10:05.000 --> 10:08.000
or at least how I'm proposing.

10:09.000 --> 10:15.000
what I'm proposing now; at least it's what's implemented.

10:15.000 --> 10:17.000
Yes.

10:17.000 --> 10:22.000
So as I said, the idea is that QEMU now

10:22.000 --> 10:27.000
will expose this VDI interface over D-Bus.

10:27.000 --> 10:31.000
So the green actor here is the virt-launcher pod.

10:31.000 --> 10:33.000
No, the virt-launcher container.

10:33.000 --> 10:35.000
I forgot to draw the pod.

10:35.000 --> 10:38.000
Both the blue and the green are running in the same pod

10:38.000 --> 10:40.000
in this case.

10:40.000 --> 10:43.000
So the virt-launcher container now does not need

10:43.000 --> 10:45.000
to expose the VNC server.

10:45.000 --> 10:48.000
It just needs to connect to the Unix socket.

10:48.000 --> 10:52.000
And the blue one is a sidecar container that I have launched.

10:52.000 --> 10:56.000
And this sidecar container is needed because the virt-launcher,

10:56.000 --> 10:58.000
by default, does not have D-Bus.

10:58.000 --> 11:03.000
So I included a D-Bus daemon in the sidecar.

11:03.000 --> 11:07.000
So what I'm doing here is like the blue container runs first,

11:07.000 --> 11:10.000
because this is how the hook API in KubeVirt works.

11:10.000 --> 11:14.000
It creates the whole environment for QEMU

11:14.000 --> 11:15.000
to connect to.

11:15.000 --> 11:17.000
And it will also change QEMU,

11:17.000 --> 11:19.000
using libvirt,

11:19.000 --> 11:23.000
to make QEMU use the D-Bus display.

11:23.000 --> 11:27.000
So this is actually not too difficult to test on your own

11:27.000 --> 11:29.000
if you want.
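
NOTE
As a sketch, attaching such a sidecar in KubeVirt uses the hook-sidecar annotation; the image name here is a placeholder for the proof-of-concept container:
```yaml
# VirtualMachineInstance fragment: the annotation asks KubeVirt to run
# a hook sidecar, which can rewrite the libvirt domain definition
# (onDefineDomain) so QEMU is started with -display dbus.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: my-vmi
  annotations:
    hooks.kubevirt.io/hookSidecars: >
      [{"image": "registry.example.com/dbus-display-sidecar:latest"}]
```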

11:29.000 --> 11:31.000
So yes.

11:31.000 --> 11:34.000
So I'm proposing to have a sidecar at this moment.

11:34.000 --> 11:38.000
But in the future, the idea is to have actually a VDI sidecar.

11:38.000 --> 11:43.000
This VDI sidecar, in this example, for instance, is running VNC.

11:43.000 --> 11:46.000
And it will connect to the D-Bus interface.

11:46.000 --> 11:50.000
So it will get what QEMU provides as a VDI interface,

11:50.000 --> 11:52.000
as VDI data.

11:52.000 --> 11:56.000
And it will expose it as it wants, right?

11:56.000 --> 12:00.000
In this case, I use a Service object from Kubernetes.

12:00.000 --> 12:06.000
So this also provides the possibility to avoid going through the API,

12:06.000 --> 12:08.000
and some flexibility.
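
NOTE
A sketch of that Service, assuming the sidecar listens on the usual VNC port and the virt-launcher pod carries the standard kubevirt.io/domain label; names are placeholders:
```yaml
# Plain Kubernetes Service selecting the VMI's virt-launcher pod,
# so VNC traffic bypasses virt-api entirely.
apiVersion: v1
kind: Service
metadata:
  name: my-vmi-vnc
spec:
  selector:
    kubevirt.io/domain: my-vmi
  ports:
    - name: vnc
      port: 5900
      targetPort: 5900
```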

12:08.000 --> 12:10.000
Again, this is a proof of concept.

12:10.000 --> 12:14.000
So all the details of how this actually has to be implemented

12:14.000 --> 12:16.000
needs to be discussed.

12:16.000 --> 12:20.000
Yes.

12:20.000 --> 12:21.000
Interesting.

12:21.000 --> 12:24.000
The VINC server, in this case,

12:24.000 --> 12:28.000
needs somehow to be able to connect to D-Bus,

12:28.000 --> 12:32.000
and then feed the VNC server with the data, right?

12:32.000 --> 12:34.000
The keyboard, display, and such.

12:34.000 --> 12:40.000
So what I'm using here is this repo from my colleague

12:40.000 --> 12:42.000
that he implemented in Rust:

12:42.000 --> 12:46.000
this VNC server that connects over D-Bus.

12:46.000 --> 12:48.000
And that's it.

12:48.000 --> 12:52.000
So the VNC server here is different from the one before.

12:52.000 --> 12:55.000
The one before is the QEMU one,

12:55.000 --> 12:58.000
that you can get with -vnc.

12:58.000 --> 13:01.000
In this case, now I'm running another container.

13:01.000 --> 13:02.000
It's another process.

13:02.000 --> 13:05.000
And it's running this Rust VNC server.

13:05.000 --> 13:09.000
And then it communicates with QEMU over D-Bus.

13:09.000 --> 13:11.000
I have a demo.

13:11.000 --> 13:13.000
It's not long.

13:13.000 --> 13:14.000
It's one minute.

13:14.000 --> 13:16.000
So I will just quickly go.

13:16.000 --> 13:17.000
I hope you can see.

13:17.000 --> 13:18.000
Sorry.

13:18.000 --> 13:20.000
But it's also in the link.

13:20.000 --> 13:24.000
What I did here, yeah, as I said, right?

13:24.000 --> 13:28.000
So because, yeah, in a minute,

13:28.000 --> 13:32.000
you see that qemu-vnc and qemu-rdp are the binaries that

13:32.000 --> 13:36.000
connect over D-Bus and expose VNC and RDP in this case.

13:36.000 --> 13:38.000
Oh, I actually did both.

13:38.000 --> 13:41.000
I exposed both. Yeah, that's remote-viewer,

13:41.000 --> 13:43.000
connecting over VNC to the VMI.

13:43.000 --> 13:47.000
And I used a Service object and port-forward

13:47.000 --> 13:49.000
for connecting locally.
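
NOTE
The local connection step from the demo, sketched; the Service name is a placeholder:
```sh
# Forward the Service's VNC port to localhost...
kubectl port-forward service/my-vmi-vnc 5900:5900 &
# ...then connect with a VNC client such as remote-viewer:
remote-viewer vnc://localhost:5900
```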

13:49.000 --> 13:52.000
Now, RDP actually needs credentials.

13:52.000 --> 13:56.000
So I had to connect through the port and set the credentials there.

13:56.000 --> 13:59.000
In this case, the user, the secret, and the domain, which is homelab.

13:59.000 --> 14:02.000
Now I can run GNOME Connections,

14:02.000 --> 14:06.000
which is the RDP client that I had at the time.

14:06.000 --> 14:09.000
And set those and connect over RDP.

14:09.000 --> 14:13.000
Yeah.

14:13.000 --> 14:14.000
And it's the same VMI.

14:14.000 --> 14:17.000
It's the same thing.

14:17.000 --> 14:19.000
So this is the plan.

14:20.000 --> 14:23.000
The proposal that I want to put forward.

14:23.000 --> 14:30.000
I think there are several benefits that I didn't put in the next slide.

14:30.000 --> 14:33.000
But let's say now we can have, as I said,

14:33.000 --> 14:37.000
like, we can go without passing through the virt-api.

14:37.000 --> 14:44.000
So, not putting the burden of the data path on the Kubernetes API pods.

14:44.000 --> 14:49.000
We can start having different remote protocols as well.

14:49.000 --> 14:52.000
So, as I showed, RDP is possible.

14:52.000 --> 14:56.000
This D-Bus thing has already been there for quite some time.

14:56.000 --> 14:58.000
So there are quite a few possibilities.

14:58.000 --> 15:05.000
And one that I care a lot about is being able to provide third-party solutions.

15:05.000 --> 15:06.000
Right?

15:06.000 --> 15:09.000
You can provide your own container for VDI.

15:09.000 --> 15:11.000
And run it.

15:11.000 --> 15:12.000
The only thing needed

15:13.000 --> 15:16.000
is for you to be able to speak D-Bus.

15:16.000 --> 15:21.000
So as long as you do, you can expose whatever protocol you want with minimum changes to

15:21.000 --> 15:22.000
KubeVirt.

15:22.000 --> 15:25.000
So I think that that's great for the future.

15:25.000 --> 15:31.000
Now, there are other possibilities as well; it's not really VDI.

15:31.000 --> 15:36.000
But something that is well established as well is using in-guest solutions.

15:36.000 --> 15:42.000
So, for instance, in GNOME nowadays with Mutter, you even have D-Bus APIs there as well.

15:42.000 --> 15:45.000
And you can have an RDP or VNC server.

15:45.000 --> 15:49.000
And yeah, take one step back.

15:49.000 --> 15:56.000
One of the benefits of VDI itself that I mentioned is that the guest operating system does not need network.

15:56.000 --> 15:57.000
Right?

15:57.000 --> 15:59.000
You connect directly through QEMU.

15:59.000 --> 16:04.000
But in general, with an RDP or VNC server running in the guest, you need internet.

16:05.000 --> 16:07.000
You need networking access.

16:07.000 --> 16:10.000
So, doing something like using vsock.

16:10.000 --> 16:18.000
We can actually connect the server running in the guest to the host and do some sort of bridging as well.

16:18.000 --> 16:24.000
So it's another possibility that I heard and discussed before.

16:24.000 --> 16:26.000
Benefits.

16:26.000 --> 16:31.000
It's that it mostly uses guest resources for everything.

16:32.000 --> 16:37.000
And yeah, it would work even without the guest network.

16:37.000 --> 16:38.000
Yeah.

16:38.000 --> 16:42.000
As I mentioned, there are some Mutter APIs and things like that.

16:42.000 --> 16:48.000
Perhaps we can do some bridging as well between them, but it's more of an afterthought.

16:48.000 --> 16:51.000
So, takeaways.

16:51.000 --> 16:52.000
Yes.

16:53.000 --> 16:56.000
In KubeVirt, VDI is for troubleshooting.

16:56.000 --> 16:58.000
That's very important.

16:58.000 --> 17:04.000
Discussion for improvements is always welcome, but today that's the case.

17:04.000 --> 17:09.000
Using the display over D-Bus, it's very doable in a reasonable time.

17:09.000 --> 17:13.000
I took a week to get this proof of concept in place.

17:13.000 --> 17:17.000
And yeah, of course, you need to do the design proposal and discussions upstream.

17:17.000 --> 17:20.000
But you know, the technologies are already there.

17:21.000 --> 17:30.000
Yeah, I didn't show any code because it's mostly Containerfiles, creating things and connecting things that already exist.

17:30.000 --> 17:35.000
And yeah, allowing other protocols and third-party integrations.

17:35.000 --> 17:39.000
And of course, in-guest solutions are always a possibility too.

17:39.000 --> 17:44.000
Although I don't see anyone pushing forward to that today.

17:44.000 --> 17:46.000
That's it.

17:46.000 --> 17:47.000
Any questions?

17:51.000 --> 17:53.000
Yeah.

18:02.000 --> 18:04.000
Yeah, that's a cool question.

18:04.000 --> 18:07.000
So the question was if you can use GPU acceleration.

18:07.000 --> 18:13.000
So that's another point for discussion in the case of how that would work, because the container would need the GPU, right?

18:14.000 --> 18:19.000
So let's say VNC wants to stream H.264 and use the GPU for that.

18:19.000 --> 18:28.000
So, yeah, we will need to see how we fit this allocation of the GPU for the container in that pod.

18:38.000 --> 18:42.000
In both cases, if you have the GPU for the VM,

18:43.000 --> 18:47.000
then that means the operating system in the VM already sees the GPU.

18:47.000 --> 18:50.000
And whatever is running there will use it.

18:50.000 --> 18:55.000
But if you're talking about on the host, at the same layer as QEMU,

18:55.000 --> 18:57.000
now we have to allocate a GPU to that pod.

18:57.000 --> 19:02.000
Is it a GPU for the VM to use, or is it for the sidecar container to use?

19:02.000 --> 19:06.000
So this is the kind of thing that needs discussion, but it's definitely doable.

19:06.000 --> 19:08.000
Yeah.

19:08.000 --> 19:10.000
All right. Thank you for the question.

19:12.000 --> 19:17.000
With D-Bus, would you...?

19:21.000 --> 19:23.000
Printers.

19:23.000 --> 19:25.000
Ah, OK.

19:25.000 --> 19:27.000
Yeah, cool, cool.

19:27.000 --> 19:30.000
Yeah, no, no, that's actually a great idea, you know, like if you can expose it,

19:30.000 --> 19:34.000
I don't know the use case, perhaps yeah, that's perhaps something interesting to think about.

19:34.000 --> 19:37.000
So the question was: can I use the D-Bus interface from the guest

19:37.000 --> 19:43.000
to do things that you usually do with the guest, like exporting printers,

19:43.000 --> 19:44.000
to do printing?

19:44.000 --> 19:48.000
Yeah, I think it's a possibility to do this kind of exposing.

19:48.000 --> 19:55.000
It's not necessarily related to the VDI topic, but I think it's not difficult.

19:55.000 --> 19:57.000
Yeah.

20:01.000 --> 20:03.000
Sorry, I may say the company.

20:04.000 --> 20:06.000
The conversion?

20:06.000 --> 20:07.000
Yeah.

20:07.000 --> 20:14.000
For example, if I have the environment in the virtual machine of the company,

20:14.000 --> 20:18.000
and I want to migrate it to connect this channel,

20:18.000 --> 20:25.000
it's a faster or easier to, in the middle, give the computer probably,

20:25.000 --> 20:27.000
I will go.

20:27.000 --> 20:29.000
About migration then.

20:29.000 --> 20:33.000
So the question was, if I have a VM on premise,

20:33.000 --> 20:39.000
and I want to migrate to KubeVirt, or you're already using KubeVirt on premise,

20:39.000 --> 20:43.000
and you want to migrate to some cluster.

20:43.000 --> 20:46.000
Yeah, so there are several projects.

20:46.000 --> 20:51.000
There's Forklift, for instance, which does this kind of migration.

20:51.000 --> 20:55.000
They take VMware VMs, for instance, and move them over.

20:55.000 --> 20:59.000
Yeah, so you can try to use those tools, but you have to see the use case.

20:59.000 --> 21:02.000
Migrating is not trivial work.

21:02.000 --> 21:07.000
You have to be careful, because otherwise you may lose data or the workload.

21:25.000 --> 21:33.000
Yeah, KubeVirt runs on Kubernetes.

21:33.000 --> 21:39.000
So it's a matter of how you get the workload that you have today running on KubeVirt,

21:39.000 --> 21:42.000
and there are a few ways of doing that.

21:42.000 --> 21:45.000
If you want to, I can show you.

21:46.000 --> 21:55.000
I don't know if it works, or they can do it for some, but I don't know.

21:55.000 --> 22:01.000
Well, I didn't get the question.

22:01.000 --> 22:05.000
So I'm currently in a social situation.

22:05.000 --> 22:08.000
Okay, so it's basically feature parity.

22:08.000 --> 22:12.000
So let's say you are passing through something,

22:12.000 --> 22:15.000
some device, in your on-premise solution.

22:15.000 --> 22:18.000
Whether you can have it on KubeVirt really depends.

22:18.000 --> 22:23.000
It really depends on what you're doing and how you use it.

22:23.000 --> 22:27.000
Yeah, perhaps you have to adapt some stuff.

22:27.000 --> 22:28.000
Yeah.

22:28.000 --> 22:29.000
All right.

22:29.000 --> 22:32.000
Any further questions?

22:32.000 --> 22:34.000
Okay, so thank you.

