WEBVTT

00:00.000 --> 00:18.840
Hello everyone, thank you for joining. My name is Pierre Rathmerck, I'm a PhD candidate

00:18.840 --> 00:24.520
at VUSec at the Vrije Universiteit Amsterdam. I work in security research, mainly

00:24.520 --> 00:31.720
specializing in UEFI, PCI Express, and hypervisor security. My previous work involves

00:31.720 --> 00:36.480
vulnerability research, and this is my colleague Sina.

00:36.480 --> 00:43.600
Hello everyone, I'm Sina, I'm also a PhD candidate at the Vrije Universiteit Amsterdam.

00:43.600 --> 00:50.000
I usually spend my time on system programming, maintaining HyperDbg, and on Windows

00:50.000 --> 00:55.920
internals and a couple of digital designs. You can see more of my work on my

00:55.920 --> 01:07.120
blog. So we're giving two talks today at FOSDEM. The current one is an introduction

01:07.120 --> 01:14.120
to HyperDbg, where we go into detail on what it can offer you, but also how it works

01:14.120 --> 01:20.560
under the hood. And this morning we gave a talk in the security track where we discussed

01:20.560 --> 01:26.200
a case study of using HyperDbg for malware reversing. If you missed that one, no worries,

01:26.200 --> 01:36.600
there will be a recording. Yeah, so let's talk about HyperDbg. We are currently sitting in

01:36.600 --> 01:41.480
the virtualization track, but we're talking about debuggers, so what's up with that?

01:41.480 --> 01:48.920
Well, actually we're going to talk about both hypervisors and debuggers. So why would you

01:48.920 --> 01:55.560
want a hypervisor-based debugger? Well, basically, a hypervisor is highly privileged. It has

01:55.560 --> 02:00.840
complete system visibility, and that means that it has visibility over nearly all events

02:00.840 --> 02:07.400
that are occurring in the operating system. At the same time, because it is operating

02:07.480 --> 02:14.440
at the hypervisor level, it is also virtually completely transparent to both user and kernel space.

02:14.440 --> 02:21.640
This has all sorts of advantages, one being stealth. This is, of course,

02:22.840 --> 02:28.840
very useful when debugging software that behaves differently when running in a virtualized

02:28.840 --> 02:36.920
environment or when being debugged. And yeah, being a hypervisor means we also have access to

02:38.040 --> 02:43.640
a toolbox that offers features that are normally not possible with traditional debuggers.

02:45.960 --> 02:54.200
So, introducing HyperDbg. It's a hypervisor-assisted debugger that leverages the virtualization

02:54.200 --> 03:00.040
extensions offered by the Intel ISA, originally intended for virtualization, but we're using them

03:00.040 --> 03:08.440
to implement a debugger. And because we are operating at the hypervisor level, we also operate

03:08.440 --> 03:14.920
independently of any operating system level debugging APIs. This gives all sorts of

03:14.920 --> 03:22.600
advantages, and we'll be talking about those in more detail. The first release was for Windows

03:22.680 --> 03:29.480
back in 2022, actively maintained since, and we're currently working on a UEFI-based,

03:29.480 --> 03:35.000
OS-agnostic hypervisor agent, which means that soon we'll be able to support Linux,

03:35.000 --> 03:45.480
BSD, and basically any other operating system. So, yeah, let's talk about some debugging scenarios.

03:45.480 --> 03:50.360
Like, if you have to debug native code, especially when you don't have access to the source code,

03:50.360 --> 03:59.080
this can be really difficult. For example, debugging a device driver. You're debugging the

03:59.080 --> 04:03.480
device driver, or you're actually looking for a device driver that is touching a certain

04:05.480 --> 04:10.520
area in memory, and you don't know which device driver it is. Well,

04:12.760 --> 04:19.320
this is exactly one of the use cases that HyperDbg is useful for. So, you want to find what

04:19.400 --> 04:26.360
device driver, or what user-space program, is writing into a certain memory range?

04:26.360 --> 04:32.360
no problem, you can find that out. When you have finally found what device driver is writing into

04:32.360 --> 04:40.360
that memory range, you might want to trace the stack all the way back up to user space.

04:40.360 --> 04:45.800
You know, maybe there's a user space companion app talking to the device driver. This is also possible

04:45.800 --> 04:51.320
with HyperDbg. Maybe you want to script all of this, right? Turn all of these

04:52.120 --> 04:58.920
data points into events that trigger certain scripts and do something interesting.

05:01.720 --> 05:08.840
Or maybe you want to prevent user or kernel space from touching a range of memory.

05:09.800 --> 05:16.520
And the list goes on and on. These are all basically use cases that are not

05:16.520 --> 05:21.720
possible with a traditional debugger, and we'll show you how this can be done with HyperDbg.

05:23.640 --> 05:31.560
So, HyperDbg to the rescue. But before we do that, we have to introduce some new terms.

05:31.880 --> 05:38.840
So, HyperDbg implements a concept that we like to call event-driven debugging.

05:39.480 --> 05:46.200
So, everything that happens in HyperDbg is basically an event, and each event can trigger

05:46.200 --> 05:52.840
one or more actions. So, it could be the execution of some script, it could be the execution of

05:53.400 --> 06:01.000
a piece of assembly code that you wrote beforehand, or you could simply trigger a breakpoint.

06:02.360 --> 06:08.120
And the inputs for these events can be anything from system calls, or returns from system calls,

06:08.120 --> 06:16.280
to EPT hooks, and what that is, we'll dive into in a little bit. It can also be I/O operations,

06:16.280 --> 06:23.720
and it can even be specific CPU instructions. Say you know that a certain piece of software,

06:24.360 --> 06:32.840
either user or kernel code, uses these instructions. You can actually trigger an event based

06:32.840 --> 06:43.960
on the use of that single instruction. So, there are event calling stages: pre, post, and both.

06:45.240 --> 06:51.160
You can also do what we like to call event short-circuiting, which basically means you

06:51.160 --> 06:57.480
define a set of conditions under which the event must be triggered, and then you can simply

06:58.200 --> 07:04.840
ignore the entire event. This is how you can ignore read or write actions to a certain

07:04.840 --> 07:09.240
memory region, for example. You can also completely bypass events, which means that

07:10.440 --> 07:17.400
a certain range of instructions will be skipped. And we implement all of these using

07:17.400 --> 07:25.480
VM exits, but we'll talk about that. Yeah, and so let's dive a little bit deeper into

07:25.480 --> 07:27.880
how we implement all of this.
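
The event-driven model described above can be sketched in a few lines of Python. This is a hypothetical illustration, not HyperDbg's actual implementation: event and field names are invented for clarity, and the status value is just an example. It models events with pre/post calling stages, conditions, actions, and short-circuiting, using a faked syscall as the example.

```python
# Hypothetical model of HyperDbg-style event-driven debugging (illustrative
# Python sketch, not HyperDbg's real code): events carry a calling stage
# (pre/post), a condition decides whether actions fire, and a pre-stage
# event may short-circuit the operation so it is never emulated at all.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    kind: str                      # e.g. "syscall", "ept-write", "cpuid"
    stage: str                     # "pre", "post", or "all"
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]
    short_circuit: bool = False    # skip the real operation when triggered

class EventEngine:
    def __init__(self):
        self.events: list[Event] = []

    def register(self, event: Event):
        self.events.append(event)

    def dispatch(self, kind: str, stage: str, ctx: dict) -> bool:
        """Run matching events; return False if the operation was short-circuited."""
        proceed = True
        for ev in self.events:
            if ev.kind == kind and ev.stage in (stage, "all") and ev.condition(ctx):
                ev.action(ctx)
                if ev.short_circuit and stage == "pre":
                    proceed = False
        return proceed

# Example: short-circuit syscall number 0x55 and fake a failure status in RAX.
engine = EventEngine()
engine.register(Event(
    kind="syscall", stage="pre",
    condition=lambda ctx: ctx["rax"] == 0x55,
    action=lambda ctx: ctx.update(rax=0xC0000022),  # e.g. an access-denied status
    short_circuit=True,
))

regs = {"rax": 0x55, "rcx": 0}
if engine.dispatch("syscall", "pre", regs):
    regs["rax"] = 0  # the real syscall would be emulated here
print(hex(regs["rax"]))  # → 0xc0000022: the guest sees the faked status
```

The key design point the sketch mirrors is that short-circuiting is only meaningful at the pre stage, before the intercepted operation has been emulated.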

07:31.400 --> 07:38.840
Now, we're going to talk about the techniques, the way that we implement these features in HyperDbg,

07:38.840 --> 07:44.200
and how they can help us enhance our reverse engineering experience.

07:45.160 --> 07:54.440
The first feature that we offer here is built on the monitor trap flag. If you are familiar with

07:54.440 --> 08:04.120
virtualization on Intel VT-x, there is a VMCS, and in there, there is an MTF flag.

08:06.280 --> 08:14.120
We made a function tracing mechanism here that you can use to

08:14.120 --> 08:19.640
go from user mode to kernel mode and directly back from kernel mode to user mode by

08:19.640 --> 08:27.960
employing MTF. And once it's combined with a symbol server, let's say the Microsoft symbol server or

08:27.960 --> 08:34.520
a private symbol server with PDB files, you can just see the names of the functions, the call

08:34.520 --> 08:41.960
tree of everything that is called, and HyperDbg is capable of tracing from user mode into kernel mode

08:41.960 --> 08:45.960
and coming back from the kernel to the user. This is something that traditional

08:45.960 --> 08:55.720
debuggers are not capable of doing. So in another scenario, we are trying to combine

08:55.720 --> 09:01.160
what we introduced before about event calling stages and event short-circuiting.

09:03.400 --> 09:10.120
There are three HyperDbg scripts here. Everything you can see here is

09:10.920 --> 09:18.120
in the scripting language of HyperDbg, which we call dslang. It's a custom language that is quite similar to

09:18.120 --> 09:26.520
the WinDbg scripting language, but with a few differences. So this is the way that we create

09:26.520 --> 09:35.560
an event in HyperDbg. As you can see, we are trying to bypass the execution of certain instructions.

09:35.560 --> 09:41.880
So first, we go for MSR writes, which is basically the WRMSR instruction, if you're familiar with

09:41.880 --> 09:50.360
model-specific registers on Intel. And if you look at the stages here,

09:50.360 --> 10:01.320
we defined two stages, a pre state and a post state. And because the first example uses the pre state,

10:01.320 --> 10:08.840
the WRMSR instruction is not yet executed, and as a result, we can change the values that

10:08.840 --> 10:17.160
the WRMSR instruction is about to write on the bare-metal machine. These handlers are

10:17.160 --> 10:25.240
all executed before the emulation happens in the system. Another example is exceptions. You know,

10:25.240 --> 10:32.520
in a hypervisor there are plenty of these: there are exceptions, there are interrupts, there are traps.

10:33.160 --> 10:42.280
All of them can be monitored and emulated by HyperDbg. So in this example, we use a post-calling

10:42.280 --> 10:51.720
stage. Here, 0x0E is the page fault vector. And on Intel processors,

10:52.040 --> 10:58.920
the location where the page fault happened is stored in CR2. So because we are using

10:58.920 --> 11:08.840
the post state, we can see where this page fault happens and read the address.
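
The two calling stages above can be sketched with a toy emulator loop. This is an assumed Python model (the `VCpu` class and its fields are invented, not HyperDbg's API): a pre-stage hook on WRMSR runs before emulation, so it can still tamper with the input registers, while a post-stage hook on a page fault runs after CR2 already holds the faulting address.

```python
# Hypothetical sketch of pre/post calling stages around emulation
# (illustrative only). In the pre stage the intercepted WRMSR has not
# executed yet, so its inputs can be changed; in the post stage of a
# page fault, CR2 already holds the faulting address.

class VCpu:
    def __init__(self):
        self.regs = {"rax": 0, "rdx": 0, "rcx": 0}
        self.msrs = {}
        self.cr2 = 0

    def emulate_wrmsr(self, pre_hook=None):
        if pre_hook:
            pre_hook(self)                      # pre stage: tamper with inputs
        msr = self.regs["rcx"]
        value = (self.regs["rdx"] << 32) | self.regs["rax"]
        self.msrs[msr] = value                  # emulated WRMSR

    def deliver_page_fault(self, address, post_hook=None):
        self.cr2 = address                      # CPU records the faulting address
        if post_hook:
            post_hook(self)                     # post stage: CR2 is now visible

vcpu = VCpu()
vcpu.regs.update(rcx=0xC0000082, rax=0x1234, rdx=0)    # guest writes an MSR
vcpu.emulate_wrmsr(pre_hook=lambda v: v.regs.update(rax=0x5678))
print(hex(vcpu.msrs[0xC0000082]))               # → 0x5678: the tampered value landed

seen = []
vcpu.deliver_page_fault(0x7FFE0000, post_hook=lambda v: seen.append(v.cr2))
print(hex(seen[0]))                             # → 0x7ffe0000: read from CR2
```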

11:10.360 --> 11:17.480
Another example is about changing system calls. So we have two examples here. In the first

11:17.560 --> 11:24.840
example, we are trying to intercept the SYSCALL instruction. And for the SYSCALL instruction,

11:24.840 --> 11:30.440
first we check whether the syscall number, which on both Linux and Windows is

11:30.440 --> 11:37.640
always located in the EAX register, matches some value. Let's say 0x55.

11:38.600 --> 11:45.480
And if the syscall number is 0x55, then we change one of the parameters.

11:46.040 --> 11:52.920
Assuming that the syscall uses the fastcall calling convention, we change some bits

11:52.920 --> 12:01.640
in the ECX register, the first parameter. And as you can see, there is also another example here

12:03.560 --> 12:11.720
where we use the pre stage. So we have the capability to bypass the system call.

12:12.440 --> 12:18.840
What happens here is that HyperDbg, before executing and emulating the SYSCALL instruction,

12:20.440 --> 12:26.440
checks for some conditions. And if the condition is

12:26.440 --> 12:33.880
met, then this event is short-circuited, which means that the SYSCALL instruction is never

12:33.880 --> 12:41.880
emulated and never executed in the virtualized environment. And what we do here is change the

12:41.880 --> 12:49.320
RAX register to put some status value there. So the debugged application thinks that

12:50.200 --> 12:57.960
the SYSCALL instruction executed successfully. And the value in RAX, which might

12:57.960 --> 13:05.400
indicate something like access denied, the file could not be accessed, appears to have been returned from the system call,

13:05.400 --> 13:11.720
but in reality, there was no syscall and nothing was executed in the system. So this is how you can

13:11.720 --> 13:19.080
use short-circuiting. And you can also combine short-circuiting with all of the events

13:19.080 --> 13:27.800
in HyperDbg, say for events that are related to WRMSR, RDMSR, CPUID, or the timestamp counter,

13:27.960 --> 13:35.320
like RDTSC and RDTSCP, or RDPMC. All of the events that are available in HyperDbg can be

13:36.360 --> 13:45.080
short-circuited. And another interesting example here is how we can ignore

13:45.800 --> 13:56.280
certain memory writes. To implement this, we use EPT, the second-level page table implementation

13:56.280 --> 14:06.440
from Intel. And as you can see, we put a hook on a certain address. Let's say it's a

14:06.440 --> 14:13.240
variable inside a user-mode application, or it could also be in a kernel-mode driver.

14:15.080 --> 14:24.440
And we check for all modifications, any memory writes to this address or memory

14:24.440 --> 14:32.120
reads from this address. Because of the 'w' here, it means that we want to

14:32.120 --> 14:40.120
intercept and trigger events for memory writes. And we check the stage. So if the memory

14:40.120 --> 14:48.920
is equal to, let's say, a specific value, then we can easily change that value

14:48.920 --> 14:57.800
as if nothing happened, completely changing the value of the memory. Where could this be useful?

14:57.800 --> 15:05.240
Assume that you have a program with different threads that are accessing

15:05.240 --> 15:12.600
a specific global variable. And you just want to make sure that this specific global variable

15:12.600 --> 15:18.440
is never changed to a certain value. Using this script, you can short-circuit, or you can

15:18.760 --> 15:31.880
see, modify, or prevent any modification of the target memory. So what makes these things possible?
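
The memory-write interception just described can be modelled like this. This is an illustrative sketch with assumed names (a `GuestMemory` class standing in for EPT-hooked guest pages), not HyperDbg's implementation: a write hook traps every store to a guarded address and can silently drop a forbidden value, exactly like the guarded-global-variable scenario above.

```python
# Hypothetical model of an EPT-style write hook (illustrative only):
# writes to a hooked address trap into a handler that decides what,
# if anything, actually gets stored.

class GuestMemory:
    def __init__(self):
        self.mem = {}
        self.write_hooks = {}   # address -> handler(old, new) -> value to store

    def hook_write(self, address, handler):
        self.write_hooks[address] = handler

    def write(self, address, value):
        if address in self.write_hooks:
            value = self.write_hooks[address](self.mem.get(address, 0), value)
        self.mem[address] = value

FORBIDDEN = 0xDEAD

def guard(old, new):
    # Short-circuit the write: keep the old value if the new one is forbidden.
    return old if new == FORBIDDEN else new

mem = GuestMemory()
mem.write(0x1000, 1)           # not hooked yet: stores normally
mem.hook_write(0x1000, guard)
mem.write(0x1000, 0xDEAD)      # forbidden value: silently dropped
mem.write(0x1000, 7)           # allowed value: stored
print(mem.mem[0x1000])         # → 7
```

In the real mechanism the trap comes from an EPT permission violation rather than a Python dictionary lookup, but the decision logic in the handler is the same idea.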

15:34.440 --> 15:44.360
We extensively use MBEC, or mode-based execution control, which is available on Intel

15:44.360 --> 15:53.240
processors with VT-x. We have a mechanism in which we allocate and use three different

15:53.240 --> 16:04.840
EPTPs, so different EPT pointers. One of these EPTs is a normal EPT, another is

16:04.840 --> 16:12.440
a user-mode-execution-denied EPT, and the other one is a kernel-mode-execution-denied EPT.

16:13.400 --> 16:22.280
Regarding MBEC, it was introduced in Intel processors starting from Kaby Lake. I think that's the

16:22.280 --> 16:31.720
seventh generation. And whenever we reach our target process, we just decide which

16:32.680 --> 16:42.200
of these user-denied or kernel-denied EPTPs we want in the VMCS. So what we have here is that

16:42.200 --> 16:52.200
we combine this mode-change detection mechanism with another Intel feature,

16:52.200 --> 17:00.600
which is called MOV-to-CR3 exiting. The thing is, whenever the CR3 register is changed, we are

17:00.600 --> 17:06.760
sure that a context switch to some process happens. And based on that, if we are interested

17:06.760 --> 17:14.440
in that process, we load one of these three EPTs. So if the process is interesting for us,

17:15.080 --> 17:21.400
then we load an EPT that detects, for example, the execution of user-mode code.
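
The three-EPTP scheme can be sketched as follows. This is a hypothetical Python model (the names `Vmcs`, `on_mov_to_cr3`, and the EPTP labels are invented for illustration): on every MOV-to-CR3 VM exit, the handler checks whether the incoming address space belongs to the target process and, if so, switches the VMCS to a restricted EPT pointer.

```python
# Hypothetical sketch of CR3-load exits driving EPTP selection
# (illustrative only, not HyperDbg's real code).

NORMAL_EPTP = "normal"
DENY_USER_EPTP = "deny-user-exec"      # MBEC: user-mode pages non-executable
DENY_KERNEL_EPTP = "deny-kernel-exec"  # kernel-mode pages non-executable

class Vmcs:
    def __init__(self):
        self.eptp = NORMAL_EPTP

def on_mov_to_cr3(vmcs, new_cr3, target_cr3, restricted_eptp):
    """Handle a MOV-to-CR3 VM exit: pick an EPTP for the incoming process."""
    vmcs.eptp = restricted_eptp if new_cr3 == target_cr3 else NORMAL_EPTP

vmcs = Vmcs()
TARGET = 0x1AB000                      # CR3 of the process we care about
on_mov_to_cr3(vmcs, 0x200000, TARGET, DENY_USER_EPTP)
print(vmcs.eptp)                       # → normal (not our process)
on_mov_to_cr3(vmcs, TARGET, TARGET, DENY_USER_EPTP)
print(vmcs.eptp)                       # → deny-user-exec (target scheduled in)
```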

17:22.920 --> 17:36.680
Now, using this approach, MBEC makes us capable of freezing the execution

17:36.680 --> 17:46.600
of applications. Assume that we are targeting a specific process and we load an EPTP for

17:46.600 --> 17:55.480
that specific process that doesn't allow any user-mode code to execute. So what will happen is that

17:55.480 --> 18:01.640
the operating system context-switches to this specific process. And we detect that by

18:02.600 --> 18:09.640
intercepting all of the VM exits that are related to MOV-to-CR3 exiting. And on the context

18:09.640 --> 18:17.000
switch, we just load the EPTP that denies the execution of user-mode code. So what happens is that

18:17.000 --> 18:25.000
at some point Windows, or the operating system, tries to execute some user-mode code. And what will

18:25.000 --> 18:33.320
happen is that the processor generates EPT violation VM exits. And in the EPT violation

18:33.320 --> 18:41.080
handler, we just basically ignore it, so we don't let the application run. And at some

18:41.080 --> 18:49.160
point, the clock interrupt comes and switches to another process. So what happens is that

18:49.240 --> 18:56.280
the operating system thinks that the user-mode code is running

18:56.280 --> 19:02.680
normally. But in reality, we just prevent any execution of user-mode code. And this is the

19:02.680 --> 19:08.440
way that we implement time-freeze debugging for user-mode applications in HyperDbg.
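
The time-freeze behaviour above can be condensed into a tiny simulation. This is an illustrative Python model with an invented `run_timeslice` helper, not real hypervisor code: while the frozen process is scheduled, every user-mode instruction fetch raises an EPT violation that is simply swallowed, so no user instructions retire until the clock interrupt ends the timeslice.

```python
# Hypothetical model of time-freezing via ignored EPT violations
# (illustrative only). The OS believes the process ran for its whole
# timeslice; in reality nothing retired.

def run_timeslice(frozen, ticks):
    """Simulate one timeslice; return how many user-mode instructions retired."""
    retired = 0
    for _ in range(ticks):
        if frozen:
            # EPT violation on a user-mode fetch: ignore it, make no progress.
            continue
        retired += 1
    return retired  # the clock interrupt ends the slice either way

print(run_timeslice(frozen=True, ticks=100))    # → 0 (appears scheduled, does nothing)
print(run_timeslice(frozen=False, ticks=100))   # → 100 (normal execution)
```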

19:09.400 --> 19:22.440
Now for the conclusion: HyperDbg has two different mechanisms for debugging both user mode and

19:22.440 --> 19:34.120
kernel mode. So you can just debug an entire operating system with it. And it also leverages

19:34.440 --> 19:44.200
modern hardware features that give it system-wide visibility. It provides

19:44.200 --> 19:50.200
some powerful features that are designed for reverse engineering. And these features are

19:50.200 --> 19:56.840
simply not possible with a regular, traditional debugger. So you need a hypervisor to

19:57.240 --> 20:05.240
use those reverse engineering techniques. And of course, HyperDbg is a free and open-source

20:06.680 --> 20:15.160
project. So it's available for the community to contribute to, and patches are always welcome.

20:16.680 --> 20:20.840
Yes, that's it.

20:27.720 --> 20:37.000
This is about the low-level instrumentation that you do. Do you have any higher-level

20:37.000 --> 20:42.920
utilities to provide a bit more, for example, OS awareness? Like I remember looking into

20:42.920 --> 20:50.920
open VMI previously. So you mean the difference between HyperDbg and open VMI?

20:50.920 --> 21:12.280
Yes, maybe you can put it in context. Yeah. So the question is what higher-level features we offer

21:12.280 --> 21:20.200
for seeing higher-level operating system context. We do have a

21:20.200 --> 21:30.120
specific script engine that is written for HyperDbg. And this script engine

21:30.120 --> 21:37.640
is aware of some operating system concepts like thread IDs and process IDs, and it also has

21:38.600 --> 21:46.920
access to call certain functions from the Windows kernel, so it

21:47.000 --> 21:52.440
has complete interaction with the operating system. So that's the higher level

21:53.240 --> 21:59.240
of debugging. Yeah, I would like to add that you can also use the Microsoft symbol server

21:59.240 --> 22:02.840
or your own private symbols to give context to the code you're looking at.

22:09.080 --> 22:13.400
Yes, so we're working on a UEFI-based agent. So basically that means

22:14.360 --> 22:21.080
our hypervisor will run at boot time, and that way we can support Linux and BSD

22:21.080 --> 22:27.800
and all the other operating systems. But it's a work in progress. So yeah, if you'd like to help us out,

22:27.800 --> 22:36.520
please do. And also maybe it's good to add that in HyperDbg, the way that we designed

22:36.520 --> 22:43.160
the script engine is completely different from WinDbg. And we published an academic paper

22:43.480 --> 22:49.480
on it, and if you look at that academic paper, it is more

22:49.480 --> 22:53.560
than 1,000 times faster than WinDbg because of its design.

22:56.840 --> 23:03.240
Another question? Yeah, to expand on that. So when you say this is a work in progress,

23:03.800 --> 23:06.440
are you working to have symbols, like kernel and user space,

23:06.440 --> 23:08.680
symbols for all these different operating systems?

23:11.240 --> 23:17.480
So the question is: are you going to implement kernel and user-space

23:17.480 --> 23:21.160
symbol support for all of the different operating systems you mentioned, like how are you going to

23:21.160 --> 23:26.920
use those symbols and understand what is going on in that space? Yes, so the question is whether we are

23:26.920 --> 23:32.760
going to add support for all of the operating systems. Am I right?

23:32.760 --> 23:43.720
Yes. Actually, the thing is, there are certain things in HyperDbg that are

23:43.720 --> 23:49.400
not related to the platform, and there are some things that are related to the platform. For example,

23:49.400 --> 23:57.480
for now, we are using Intel VT-x. So we have to execute those instructions that are

23:57.560 --> 24:10.040
related to VT-x or VT-d. But the thing is, we have also

24:10.040 --> 24:18.600
used some mechanisms that are only available on Windows. Let's say IRQLs. But the thing is,

24:18.600 --> 24:23.960
all of them can be adapted to other operating systems as well. For example, for IRQLs,

24:23.960 --> 24:31.320
there is nothing quite like this in Linux, but we are trying to implement those specific features for

24:31.320 --> 24:37.000
other operating systems as well. So one example you have is system call

24:37.000 --> 24:41.720
interception. So do you have, for instance, something where I can say I want to see every open

24:41.720 --> 24:46.280
system call on Linux? Like, do you have information about what the system call is on Linux?

24:47.080 --> 24:53.160
So the question is whether we have information about what the open system calls are in Linux

24:54.920 --> 25:01.320
right now. I mean, what we did for implementing system call interception can also be applied to Linux.

25:01.320 --> 25:07.720
All of the events in HyperDbg use hypervisor capabilities, so there is

25:09.800 --> 25:17.240
nothing Windows-specific about them. But I mean, the user should also know about the way

25:17.240 --> 25:24.520
that system calls are implemented in Linux. For example, Windows uses a specific

25:24.520 --> 25:32.120
calling convention, let's say fastcall. And on Linux, the user should also know about fastcall

25:32.120 --> 25:38.600
and the differences in calling conventions between Linux and Windows to trace those system calls.

25:38.920 --> 25:43.320
But right now, does HyperDbg understand the standard calling convention in Linux?

25:43.320 --> 25:47.160
No, no, right now HyperDbg only understands Windows.

25:49.560 --> 25:56.280
So you mentioned the time freezing, like the time freezing of a user-space process,

25:58.920 --> 26:05.720
how stealthy is that? Because as far as I understand, you're not

26:06.280 --> 26:14.760
freezing kernel space, right? So if kernel-space malware would monitor syscalls, would it be able to

26:14.760 --> 26:25.640
catch it? So the question is how stealthy the time-freezing debugging is inside Windows,

26:25.640 --> 26:32.360
because we are only time-freezing user-mode applications. You want to know whether a

26:32.360 --> 26:40.360
kernel malware could bypass this or not? The thing is, in HyperDbg there are two modes of debugging.

26:40.360 --> 26:48.280
Either you can debug the kernel entirely and pause everything, pause the operating system

26:48.280 --> 26:55.720
and nothing changes in the operating system, and the other thing is that the

26:55.720 --> 27:03.080
user-mode debugging is separate. And the thing is, if there is malware that can run

27:03.080 --> 27:08.520
in the kernel, then there are some hurdles for that malware. For example, in Windows

27:08.520 --> 27:15.720
there is driver signature enforcement, so they have to sign their driver, and there are also

27:15.720 --> 27:21.800
other ways, for example hooking those functions that are designed to load

27:21.800 --> 27:38.520
drivers, so we can just catch them from there. Yes? Yeah, so the question is whether we can hide

27:39.080 --> 27:48.040
all of those footprints of the hypervisor or not. The thing is, our first talk this morning

27:48.280 --> 27:55.960
in the security track was exactly about this. We cannot guarantee 100% transparency,

27:55.960 --> 28:03.960
but we raise the bar a lot. HyperDbg by its nature is more stealthy

28:03.960 --> 28:10.600
because it simply doesn't use the Windows APIs that are designed for debugging, but at the

28:10.600 --> 28:15.560
same time we also try to mitigate the footprints of HyperDbg itself.

28:18.040 --> 28:22.760
So let me quickly pull up a slide from this morning that exactly answers your question

28:24.760 --> 28:25.400
one second.

28:33.640 --> 28:42.040
So this is the roadmap that we have for mitigating all of the artifacts that hypervisors,

28:42.760 --> 28:53.160
ours but also top-level hypervisors, could reveal to malware. The two on the left

28:53.160 --> 29:00.120
we have implemented, and we're working on the ones in the middle, they're about 75% done, and the

29:00.120 --> 29:08.280
remaining ones, yeah, they're scheduled right there. Do we have time for more questions, or yeah?

29:13.000 --> 29:19.000
Sorry can you speak up?

29:20.680 --> 29:26.040
Yes, well, we currently don't have plans for that, but it is certainly possible to implement,

29:26.120 --> 29:45.480
say, on ARM and AMD. Yes, go ahead. As I remember, we have some commands,

29:45.640 --> 29:53.640
extension commands for SMIs, and we could

29:55.560 --> 30:01.880
monitor certain things for the SMI handlers. For example, if an attacker or any

30:01.880 --> 30:08.920
other application, or Windows itself, tries to trigger some SMIs by using

30:08.920 --> 30:16.120
an I/O port, you can certainly use the features in HyperDbg to detect that, and we also have

30:16.120 --> 30:24.520
some commands for triggering SMIs. You can just trigger SMIs from HyperDbg; if you

30:24.520 --> 30:32.360
check the documentation, there are certain commands for SMI handlers. But in general, everything is

30:32.360 --> 30:39.480
inside VT-x, inside the ring minus-one hypervisor, not in SMM.

30:43.960 --> 30:50.520
No, no, I mean, this is not possible from a hypervisor. You have to have access to the

30:50.520 --> 30:57.640
UEFI firmware that loads the SMI handler, and your code has to be in SMM, so this is technically not possible.

30:58.040 --> 31:10.040
This is actually part of our effort to port HyperDbg to UEFI. We are currently not sure yet

31:10.040 --> 31:17.320
whether it's feasible, but since we will be running from UEFI, we might be able to actually intercept

31:17.320 --> 31:25.800
anything that's happening in SMM. But again, for running in UEFI, you need to have

31:25.880 --> 31:32.920
some access to the SPI flash chip, because the SMI handler is loaded before the UEFI application. Yes?

31:37.880 --> 31:49.160
As for QEMU: what we could do with QEMU is that we do use

31:49.640 --> 31:57.960
some of their code for emulating certain instructions, but what we do here is that

31:57.960 --> 32:02.680
we run on bare metal, so we don't have plans for that.

32:11.720 --> 32:18.360
Yeah, yeah. Right now HyperDbg is supported in VMware Workstation's nested virtualization

32:18.360 --> 32:30.680
environment and KVM's nested virtualization environment. It doesn't support Hyper-V;

32:30.680 --> 32:37.640
we tried a lot, but unfortunately we couldn't port it to Hyper-V yet. But right now, I think KVM

32:37.640 --> 32:54.680
is supported. Yeah, so about the overhead compared to normal hypervisors: the way that

32:54.680 --> 33:05.000
we emulate is exactly the same as the way that, let's say, VMware emulates. Once

33:05.880 --> 33:13.640
we execute those instructions... I think we don't have any measurements for this, but I think

33:13.640 --> 33:21.320
it should generally have less overhead compared to big hypervisors like KVM, because

33:21.960 --> 33:32.200
our VM exit handler has a much shorter path, and the code that we wrote for it is of course

33:32.200 --> 33:39.080
shorter than what they wrote for KVM. But in any case, if you are running HyperDbg in a nested

33:39.080 --> 33:45.960
virtualization environment, let's say on VMware, then the overhead of HyperDbg is also added to the

33:45.960 --> 33:53.240
overhead of VMware Workstation's nested virtualization, because Intel officially doesn't

33:53.320 --> 34:02.520
support nested virtualization, so they just emulate everything; there is no hardware nested virtualization.

34:02.520 --> 34:05.080
Thank you.

