WEBVTT

00:00.000 --> 00:13.000
Welcome to the stage.

00:13.000 --> 00:14.000
Hi.

00:14.000 --> 00:17.000
I'm Akihiro Suda from NTT.

00:17.000 --> 00:22.000
Today, I want to talk about the updates in Lima version 2.0.

00:22.000 --> 00:27.000
And in this release, we are expanding the project focus to cover

00:27.000 --> 00:35.000
sandboxing AI, as well as easily running containers on laptops.

00:35.000 --> 00:40.000
So, the name Lima stands for Linux Machines,

00:40.000 --> 00:44.000
and it's optimized for running containers and AI agents,

00:44.000 --> 00:49.000
but it can be used for other workloads as well.

00:49.000 --> 00:54.000
And Lima features automatic host file system sharing;

00:54.000 --> 00:59.000
that means the host file system can be easily shared

00:59.000 --> 01:03.000
with the guest operating system.

01:03.000 --> 01:07.000
And it also supports automatic port forwarding.

01:07.000 --> 01:15.000
That means you can access the TCP/IP ports of the guest OS

01:15.000 --> 01:19.000
just as localhost on the host operating system.

01:19.000 --> 01:25.000
So, you can just access the guest ports as localhost from the web

01:25.000 --> 01:28.000
browsers that run on the host operating system.

01:28.000 --> 01:33.000
And Lima comes with built-in integrations for several

01:33.000 --> 01:38.000
container runtimes, including containerd, which is the default.

01:38.000 --> 01:44.000
And it also supports Docker, Podman, Kubernetes, and Apptainer.

01:44.000 --> 01:47.000
And Lima supports several host operating systems,

01:47.000 --> 01:51.000
and in the case of macOS, you can just use Homebrew to install Lima:

01:51.000 --> 01:54.000
brew install lima, and then you can use the limactl CLI

01:54.000 --> 01:57.000
to start a virtual machine.

01:57.000 --> 02:03.000
And you can just prefix the lima command to the guest commands,

02:03.000 --> 02:06.000
such as nerdctl. nerdctl relates to containerd:

02:06.000 --> 02:10.000
nerdctl is the CLI for controlling containerd.

02:10.000 --> 02:15.000
Basically, it's similar to Docker.

02:15.000 --> 02:22.000
So, this is macOS, and you can just prefix lima to uname,

02:22.000 --> 02:26.000
and it says Linux.
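
NOTE
The macOS flow just described can be sketched as a few shell commands. This is a minimal sketch assuming the default Ubuntu template; the nginx container and port numbers are illustrative, not from the talk.

```shell
# Install Lima with Homebrew and start the default (Ubuntu) instance
brew install lima
limactl start default

# Prefix "lima" to run a command inside the guest
lima uname            # reports "Linux" even though the host is macOS

# nerdctl is the containerd CLI; run an nginx container in the guest
lima nerdctl run -d --name web -p 8080:80 nginx

# Automatic port forwarding exposes the guest port on localhost
curl http://localhost:8080
```

The host home directory is also visible inside the guest by default, so files can be shared without extra configuration.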

02:26.000 --> 02:35.000
And inside the lima shell, the host directory is visible.

02:35.000 --> 02:39.000
This is mounted from the macOS host operating system.

02:39.000 --> 02:46.000
And you can run containers using the lima nerdctl commands.

02:50.000 --> 02:57.000
This runs a nerdctl container running nginx.

02:57.000 --> 03:04.000
And you can just access the guest port as localhost.

03:04.000 --> 03:09.000
So, this is connecting to the guest as localhost.

03:09.000 --> 03:13.000
So, basically, it just looks like a process,

03:13.000 --> 03:16.000
running on the host operating system directly.

03:16.000 --> 03:20.000
But actually, it's wrapped inside the virtual machine.

03:20.000 --> 03:34.000
So, here are the projects similar to Lima.

03:34.000 --> 03:37.000
One example is WSL2.

03:37.000 --> 03:43.000
So, with WSL2, the Windows host file system is visible to the Linux guest.

03:43.000 --> 03:49.000
And the guest's TCP ports are just accessible as localhost on the Windows host.

03:49.000 --> 03:52.000
So, it's very similar to Lima.

03:52.000 --> 03:57.000
But WSL2, of course, only supports Windows hosts.

03:57.000 --> 04:03.000
And Lima is also similar to Vagrant in some sense.

04:03.000 --> 04:05.000
But Vagrant is now proprietary.

04:05.000 --> 04:11.000
And Vagrant is also missing some features, such as automatic port forwarding.

04:11.000 --> 04:16.000
So, you can configure Vagrant to forward the ports,

04:16.000 --> 04:22.000
but it doesn't automatically scan the ports used in the guest operating system.

04:22.000 --> 04:26.000
So, it's quite different from Lima in that sense.

04:26.000 --> 04:30.000
And also, Lima is very similar to Docker Machine,

04:30.000 --> 04:33.000
but that's only made for Docker.

04:33.000 --> 04:37.000
And it was abandoned a few years ago.

04:37.000 --> 04:42.000
And Lima is also similar to Docker Desktop, but without a GUI.

04:42.000 --> 04:46.000
But Docker Desktop is just made for Docker.

04:46.000 --> 04:51.000
Of course, and it's proprietary.

04:51.000 --> 04:56.000
And let me talk about the origin of the project and the current status.

04:56.000 --> 05:00.000
So, I launched the project five years ago,

05:00.000 --> 05:04.000
for the sake of promoting containerd,

05:04.000 --> 05:08.000
including nerdctl, to macOS users.

05:08.000 --> 05:14.000
So, this project was originally designed as containerd on macOS.

05:14.000 --> 05:17.000
But through the growth of the community,

05:17.000 --> 05:21.000
the scope has expanded beyond just containerd.

05:21.000 --> 05:25.000
So, it's now Linux machines, not just containerd on macOS.

05:25.000 --> 05:29.000
So, we support several container engines, including Docker,

05:29.000 --> 05:32.000
Podman, Kubernetes, and Apptainer.

05:32.000 --> 05:37.000
And we also support non-container workloads.

05:37.000 --> 05:43.000
For example, it's also used for sandboxing AI coding agents.

05:43.000 --> 05:47.000
So, you know, if you run AI coding agents like Claude,

05:47.000 --> 05:53.000
it may accidentally remove everything under your home directory,

05:53.000 --> 05:58.000
or maybe it installs some random binaries from the Internet.

05:58.000 --> 06:02.000
So, it's really scary to run AI coding agents

06:02.000 --> 06:04.000
on your laptop directly.

06:04.000 --> 06:08.000
So, you can use Lima to sandbox such AI coding agents.

06:08.000 --> 06:14.000
And Lima is also used when you want to run a non-Ubuntu OS,

06:14.000 --> 06:18.000
such as Fedora, on GitHub Actions.

06:18.000 --> 06:21.000
GitHub Actions only supports

06:21.000 --> 06:24.000
Ubuntu in its Linux runner list,

06:24.000 --> 06:28.000
but you can just run Lima inside GitHub Actions.

06:28.000 --> 06:34.000
So, you can run any distribution, such as Fedora,

06:34.000 --> 06:38.000
or openSUSE, or Arch Linux, or AlmaLinux, or Debian,

06:38.000 --> 06:42.000
or Alpine, and other distributions,

06:42.000 --> 06:45.000
all inside GitHub Actions.
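
NOTE
As a sketch of the GitHub Actions use case described above: on an ubuntu-latest runner, Lima can boot another distribution. Assumes Lima has already been installed on the runner from its release binaries; the fedora template name follows Lima's template:// scheme, and the instance name defaulting to the template name is assumed.

```shell
# On an ubuntu-latest GitHub Actions runner, after installing Lima
limactl start template://fedora

# Run a command in the non-Ubuntu guest
limactl shell fedora cat /etc/os-release   # reports Fedora, not the Ubuntu host
```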

06:45.000 --> 06:49.000
And Lima was originally made for macOS,

06:49.000 --> 06:53.000
but we now support several host operating systems,

06:53.000 --> 06:56.000
including Linux, Windows, NetBSD, and DragonFly BSD.

06:57.000 --> 07:02.000
We don't support FreeBSD, because our CI doesn't really work well on FreeBSD hosts,

07:02.000 --> 07:07.000
but I think we can eventually support FreeBSD as well.

07:09.000 --> 07:15.000
And we have lots of separate free software projects based on Lima.

07:15.000 --> 07:19.000
The most famous one is Colima.

07:19.000 --> 07:23.000
This is an alternative CLI for Lima,

07:23.000 --> 07:27.000
with Docker as the default engine.

07:27.000 --> 07:30.000
So, Colima is very famous.

07:30.000 --> 07:34.000
Maybe it's more famous than Lima itself.

07:34.000 --> 07:39.000
It has a very huge number of GitHub stars.

07:39.000 --> 07:46.000
So, with Colima, you can just run Docker on macOS, very easily.

07:46.000 --> 07:50.000
And it doesn't need the proprietary Docker Desktop,

07:51.000 --> 07:53.000
just the free software.
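
NOTE
A minimal sketch of the Colima flow mentioned above, assuming Homebrew and the standalone docker CLI; the alpine container is illustrative.

```shell
# Colima wraps Lima with Docker as the default container engine
brew install colima docker
colima start

# The docker CLI on the host talks to the free Docker Engine
# running inside the Lima VM; no Docker Desktop involved
docker run --rm alpine echo hello
```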

07:53.000 --> 07:58.000
And also, there's Rancher Desktop by SUSE.

07:58.000 --> 08:05.000
It's a combination of Lima and k3s and a GUI.

08:05.000 --> 08:08.000
So, this is a screenshot of Rancher Desktop.

08:08.000 --> 08:13.000
And with this GUI dashboard,

08:13.000 --> 08:19.000
you can check the status of the Lima virtual machine

08:19.000 --> 08:26.000
running k3s. This is also free software.

08:26.000 --> 08:30.000
And the next one is Finch.

08:30.000 --> 08:32.000
It's an AWS product.

08:32.000 --> 08:38.000
And it's made for local development with AWS services,

08:38.000 --> 08:42.000
such as the AWS Serverless

08:42.000 --> 08:46.000
Application Model, AWS SAM.

08:46.000 --> 08:51.000
They recently announced this support.

08:51.000 --> 08:58.000
And it's free software as well.

08:58.000 --> 09:03.000
And there's a project called lima-gui,

09:03.000 --> 09:10.000
which provides a GUI based on the Qt framework, like this.

09:10.000 --> 09:16.000
And also, Podman Desktop supports managing Lima instances,

09:16.000 --> 09:22.000
as well as native podman machine instances.

09:22.000 --> 09:29.000
And there are more related projects.

09:29.000 --> 09:32.000
So, let's talk about how it works.

09:32.000 --> 09:35.000
So, this is the architecture of Lima.

09:35.000 --> 09:38.000
So, there is limactl here,

09:38.000 --> 09:41.000
the CLI, command-line interface.

09:41.000 --> 09:47.000
This is used by humans, GUIs, or MCP clients.

09:47.000 --> 09:55.000
And this CLI can be used for managing the Lima hostagent processes.

09:55.000 --> 10:00.000
And the hostagent process uses a VM driver

10:00.000 --> 10:06.000
to support several virtual machine implementations.

10:06.000 --> 10:12.000
And the hostagent launches the VM and also connects to the network driver.

10:12.000 --> 10:16.000
And inside the VM, we have the Lima guest agent process,

10:16.000 --> 10:22.000
which handles file system sharing and port forwarding,

10:22.000 --> 10:26.000
and syncing the real-time clock,

10:26.000 --> 10:32.000
and other jobs.

10:32.000 --> 10:37.000
And Lima is designed to be modular

10:37.000 --> 10:43.000
to support different implementations of hypervisors,

10:43.000 --> 10:48.000
such as QEMU or Apple's Virtualization framework,

10:48.000 --> 10:51.000
known as VZ, on macOS.

10:51.000 --> 10:55.000
We also support WSL2 on Windows hosts.

10:55.000 --> 10:59.000
And in version 2.0 of Lima,

10:59.000 --> 11:01.000
we added support for krunkit,

11:01.000 --> 11:06.000
which supports GPU acceleration on macOS.

11:06.000 --> 11:10.000
And in Lima version 2, we also support

11:10.000 --> 11:18.000
gRPC plugins to support your favorite hypervisor implementation.

11:19.000 --> 11:26.000
And we also have the concept of Intel-on-ARM binary execution.

11:26.000 --> 11:30.000
So the default is QEMU user-mode emulation.

11:30.000 --> 11:37.000
So you can just run Intel binaries on the ARM instances via QEMU.

11:37.000 --> 11:42.000
And on macOS hosts, you can also use Rosetta 2,

11:42.000 --> 11:46.000
which is much faster than QEMU.

11:48.000 --> 11:52.000
And for file system sharing, we support virtio-fs

11:52.000 --> 11:54.000
and virtio-9p.

11:54.000 --> 11:58.000
And we also support reverse sshfs,

11:58.000 --> 12:04.000
for when the guest operating system doesn't support virtio-fs

12:04.000 --> 12:07.000
or virtio-9p.

12:07.000 --> 12:12.000
And for networking, the default is user-mode

12:12.000 --> 12:15.000
networking based on gvisor-tap-vsock.

12:15.000 --> 12:18.000
So this completely works in user space,

12:18.000 --> 12:23.000
so it doesn't need any root privilege on the host.

12:23.000 --> 12:28.000
But when you want to access the virtual machine by IP,

12:28.000 --> 12:31.000
not just by the localhost address,

12:31.000 --> 12:34.000
you can use the socket_vmnet driver.

12:34.000 --> 12:37.000
This needs root privilege,

12:37.000 --> 12:41.000
so you have to run it with sudo.

12:41.000 --> 12:45.000
And for the VZ driver, you can alternatively use

12:45.000 --> 12:49.000
the vzNAT network driver to access the VM by IP.

12:49.000 --> 12:52.000
This one only works with VZ,

12:52.000 --> 12:56.000
but this doesn't need the sudo command.
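
NOTE
The networking options above might look like this on macOS. The flag values follow Lima's documented network names, but treat the exact invocations as illustrative; check limactl's own help for your version.

```shell
# Default: user-mode networking (gvisor-tap-vsock); no root needed,
# guest ports simply appear on localhost
limactl start default

# socket_vmnet: the VM gets its own IP, but the helper needs root
limactl sudoers | sudo tee /etc/sudoers.d/lima   # one-time setup
limactl start --network=lima:shared default

# vzNAT: VM reachable by IP without sudo, but only with the VZ driver
limactl start --vm-type=vz --network=vzNAT default
```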

12:56.000 --> 13:03.000
And for port forwarding, we watch socket events.

13:03.000 --> 13:08.000
It's the netlink sock_diag events.

13:08.000 --> 13:15.000
And we use these to watch the port forwarding events.

13:15.000 --> 13:20.000
But this mechanism doesn't work for Kubernetes service ports.

13:20.000 --> 13:25.000
Because Kubernetes service ports are quite different from regular ports,

13:25.000 --> 13:29.000
because they use iptables.

13:29.000 --> 13:35.000
So we also have custom logic for watching Kubernetes service ports.

13:35.000 --> 13:40.000
And as a default,

13:40.000 --> 13:43.000
the distro of the guest OS is Ubuntu,

13:43.000 --> 13:46.000
but you can also choose other distributions,

13:46.000 --> 13:48.000
such as AlmaLinux,

13:48.000 --> 13:50.000
Arch Linux, CentOS Stream,

13:50.000 --> 13:52.000
Debian, openSUSE, Oracle Linux,

13:52.000 --> 13:54.000
and Rocky Linux.

13:54.000 --> 13:57.000
And the default container engine is containerd,

13:57.000 --> 13:59.000
but you can also use Apptainer,

14:00.000 --> 14:04.000
or Podman, in rootless mode by default,

14:04.000 --> 14:06.000
or in rootful mode,

14:06.000 --> 14:10.000
when you need privileged operations.

14:10.000 --> 14:14.000
And you can also use several additional templates,

14:14.000 --> 14:19.000
such as Kubernetes.

14:19.000 --> 14:24.000
And you can just choose your template using the

14:24.000 --> 14:28.000
limactl start command.
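
NOTE
Template selection as described above can be sketched as follows; the names use Lima's template:// scheme, and the specific template names shown are assumptions to be checked against the template list.

```shell
# List the built-in templates (Ubuntu is the default)
limactl start --list-templates

# Start a different distro or engine from a template
limactl start template://debian
limactl start template://podman   # rootless Podman engine
limactl start template://k8s      # Kubernetes
```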

14:28.000 --> 14:34.000
And let me talk about recent updates.

14:34.000 --> 14:38.000
So we recently got promoted to a CNCF

14:38.000 --> 14:40.000
incubating project.

14:40.000 --> 14:43.000
CNCF stands for Cloud Native Computing Foundation,

14:43.000 --> 14:46.000
which is hosting several projects,

14:46.000 --> 14:48.000
such as Kubernetes.

14:48.000 --> 14:51.000
And CNCF has three stages:

14:51.000 --> 14:54.000
sandbox, incubating, and graduated.

14:54.000 --> 14:59.000
We joined CNCF as a sandbox project in 2022,

14:59.000 --> 15:05.000
and we got promoted to incubating this year.

15:05.000 --> 15:10.000
And we anticipate that we can be promoted

15:10.000 --> 15:16.000
to the graduated level in the next couple of years.

15:16.000 --> 15:21.000
And the number of GitHub stars is growing.

15:21.000 --> 15:27.000
So we now have more than 20,000 stars,

15:27.000 --> 15:32.000
and we have more than 170 contributors,

15:32.000 --> 15:41.000
so thanks to all the contributors and all the users.

15:41.000 --> 15:46.000
And in November, we released Lima version 2.0,

15:46.000 --> 15:50.000
and this provides the plugin infrastructure

15:50.000 --> 15:54.000
to allow implementing new features

15:54.000 --> 15:56.000
without modifying Lima itself.

15:56.000 --> 15:59.000
So we have the concept of VM driver plugins

15:59.000 --> 16:03.000
to support additional hypervisors,

16:03.000 --> 16:06.000
and we also support CLI plugins

16:06.000 --> 16:11.000
to add more subcommands to the limactl CLI.

16:11.000 --> 16:15.000
And we also have plugins for URL schemes

16:15.000 --> 16:20.000
for fetching templates from custom remote addresses.

16:20.000 --> 16:25.000
And we support GPU acceleration using

16:25.000 --> 16:27.000
the krunkit VM driver.

16:27.000 --> 16:30.000
And we also added an MCP server,

16:30.000 --> 16:37.000
a Model Context Protocol server, for sandboxing AI agents.

16:37.000 --> 16:40.000
So in this sense, we are extending the scope

16:40.000 --> 16:44.000
to cover AI as well as containers.

16:44.000 --> 16:49.000
So the original goal in 2021 was to facilitate

16:49.000 --> 16:52.000
running containerd on macOS,

16:52.000 --> 16:57.000
but the project turned out to be ready to use

16:58.000 --> 17:01.000
for securing AI agents too,

17:01.000 --> 17:05.000
so as to prevent them from accessing the host files

17:05.000 --> 17:06.000
and commands.

17:06.000 --> 17:10.000
So you know, AI agents may make

17:10.000 --> 17:15.000
hallucinations and remove files under the home directory.

17:15.000 --> 17:18.000
So this example is from Reddit,

17:18.000 --> 17:22.000
and in this example, quote,

17:22.000 --> 17:27.000
"I attempted to clean up some packages and unused files,"

17:27.000 --> 17:29.000
end quote,

17:29.000 --> 17:33.000
the AI launched rm -rf commands

17:33.000 --> 17:35.000
again and again.

17:35.000 --> 17:38.000
And the problem is that it even included

17:38.000 --> 17:41.000
the home directory in the scope.

17:41.000 --> 17:46.000
So it removed everything in the home directory.

17:46.000 --> 17:51.000
So it's very scary to run AI on your laptop directly.

17:52.000 --> 17:57.000
So Lima is useful for securing such AI agents.

17:57.000 --> 18:02.000
And I also have to note that AI may also hallucinate

18:02.000 --> 18:07.000
and install fake packages with plausible names

18:07.000 --> 18:11.000
by running pip install or npm install

18:11.000 --> 18:14.000
or go install or whatever else.

18:14.000 --> 18:19.000
And AI agents now have a web search tool

18:20.000 --> 18:24.000
and AI agents are sometimes deceived

18:24.000 --> 18:26.000
by fake websites,

18:26.000 --> 18:30.000
which appear in the web search results,

18:30.000 --> 18:34.000
into installing some malicious packages.

18:37.000 --> 18:42.000
So AI agents often come with built-in sandboxing;

18:43.000 --> 18:47.000
for example, they use Landlock

18:47.000 --> 18:50.000
or a Docker container on Linux,

18:50.000 --> 18:54.000
but it's not as strong as virtual machines.

18:54.000 --> 18:59.000
And some AI agents use sandbox-exec,

18:59.000 --> 19:03.000
which is similar to Landlock, on macOS,

19:03.000 --> 19:06.000
but this sandbox-exec command was

19:06.000 --> 19:11.000
deprecated almost a decade ago.

19:11.000 --> 19:14.000
And Apple recommends using App Sandbox,

19:14.000 --> 19:17.000
but it's not a direct replacement

19:17.000 --> 19:19.000
for the sandbox-exec command.

19:19.000 --> 19:21.000
So AI agent products are still

19:21.000 --> 19:26.000
relying on the sandbox-exec command.

19:26.000 --> 19:31.000
So we can use Lima as a universal sandbox

19:31.000 --> 19:34.000
for any AI agent.

19:34.000 --> 19:38.000
And it can also coexist

19:38.000 --> 19:40.000
with the built-in sandboxing

19:40.000 --> 19:44.000
provided by AI agents.

19:44.000 --> 19:49.000
And for running AI with Lima,

19:49.000 --> 19:51.000
there are two scenarios:

19:51.000 --> 19:54.000
AI inside Lima and AI outside Lima.

19:54.000 --> 19:56.000
So AI inside Lima means running

19:56.000 --> 19:59.000
Codex, Copilot, Claude Code, or Gemini,

19:59.000 --> 20:02.000
or maybe OpenCode inside Lima.

20:02.000 --> 20:07.000
And you can also run LLM inference inside Lima

20:07.000 --> 20:11.000
using GPU acceleration.

20:11.000 --> 20:14.000
And AI outside Lima means

20:14.000 --> 20:17.000
an AI agent running on the host

20:17.000 --> 20:20.000
can connect to Lima.

20:20.000 --> 20:23.000
And Lima provides an MCP server

20:23.000 --> 20:30.000
for sandboxing file system calls

20:31.000 --> 20:34.000
and command execution calls.

20:34.000 --> 20:38.000
And you can also use VS Code with Remote SSH

20:38.000 --> 20:44.000
and Copilot with Lima.

20:44.000 --> 20:49.000
And we have an example of running AI inside Lima.

20:49.000 --> 20:52.000
For example, you can use this limactl

20:52.000 --> 20:55.000
start command to mount the home

20:55.000 --> 20:57.000
directory in writable mode.

20:57.000 --> 21:00.000
And we can install OpenCode.
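
NOTE
A sketch of the "AI inside Lima" setup just described; the --mount-writable flag follows limactl start's options, and the actual agent install step is left generic rather than inventing an install URL.

```shell
# Start an instance whose home mount is writable from the guest
limactl start --mount-writable default

# Open a shell inside the guest and install the coding agent there,
# so any stray "rm -rf" stays confined to the VM
limactl shell default
```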

21:00.000 --> 21:04.000
So I don't have time to show the demo.

21:04.000 --> 21:08.000
But on the website, we have examples

21:08.000 --> 21:10.000
for containers.

21:10.000 --> 21:13.000
And we also have examples for AI agents.

21:13.000 --> 21:16.000
So you can choose Claude Code or Codex

21:16.000 --> 21:19.000
or Gemini or GitHub Copilot,

21:19.000 --> 21:24.000
and you can just run it inside Lima.

21:25.000 --> 21:30.000
And we also have GPU acceleration.

21:30.000 --> 21:35.000
So llama.cpp running inside Lima uses

21:35.000 --> 21:39.000
Vulkan to talk to the virtio-gpu.

21:39.000 --> 21:46.000
And the virtio-gpu talks to MoltenVK,

21:47.000 --> 21:50.000
which translates the Vulkan API calls

21:50.000 --> 21:56.000
to Metal library calls, which is used by Apple.

21:56.000 --> 22:07.000
I think I can show a quick demo.

22:07.000 --> 22:13.000
So this is using the Vulkan GPU,

22:13.000 --> 22:17.000
the virtio-gpu, on an Apple Silicon Mac.

22:17.000 --> 22:22.000
And I can talk to the AI running inside Lima.

22:25.000 --> 22:28.000
Yeah, so the performance is not so bad.

22:32.000 --> 22:35.000
And for running AI outside Lima,

22:35.000 --> 22:38.000
Lima exposes several MCP tools,

22:38.000 --> 22:40.000
such as list_directory, read_file,

22:40.000 --> 22:43.000
write_file, and run_shell_command.

22:43.000 --> 22:46.000
These are similar to Gemini CLI's tools,

22:46.000 --> 22:49.000
but they are strongly sandboxed using the VM.

22:49.000 --> 22:52.000
And we plan to release version 2.1

22:52.000 --> 22:54.000
around March.

22:54.000 --> 22:56.000
And in this release, we have sync mounts.

22:56.000 --> 22:59.000
Unlike regular mounts, sync mounts are written

22:59.000 --> 23:01.000
back only after user confirmation.

23:01.000 --> 23:04.000
So this prevents the situation

23:04.000 --> 23:07.000
where the AI removes everything, including

23:07.000 --> 23:08.000
the host files.

23:08.000 --> 23:10.000
So this should be highly useful for running AI

23:10.000 --> 23:12.000
inside Lima.

23:12.000 --> 23:15.000
And I don't have time to cover our future ideas,

23:15.000 --> 23:20.000
but we welcome contributions in this field.

23:20.000 --> 23:23.000
And we have a huge community,

23:23.000 --> 23:25.000
and we have a website, GitHub, Slack, Twitter,

23:25.000 --> 23:26.000
and Mastodon.

23:26.000 --> 23:27.000
And thank you.

23:28.000 --> 23:29.000
Thank you.

23:37.000 --> 23:39.000
Any questions?

23:45.000 --> 23:46.000
Sorry.

23:46.000 --> 23:48.000
Can you have multiple instances of Lima

23:48.000 --> 23:50.000
on different hosts, talking to each other?

23:50.000 --> 23:52.000
Do they have to run on the same host?

23:52.000 --> 23:54.000
So, yeah.

23:54.000 --> 23:55.000
Yes.

23:55.000 --> 23:56.000
On one system.

23:56.000 --> 23:59.000
So you can use a bridge device

23:59.000 --> 24:02.000
to directly connect virtual machine

24:02.000 --> 24:04.000
to the host network.

24:04.000 --> 24:08.000
So you can just connect multiple instances

24:08.000 --> 24:11.000
running on several hosts.

24:27.000 --> 24:29.000
What about WSL2?

24:40.000 --> 24:42.000
Actually, I don't use WSL2 myself,

24:42.000 --> 24:45.000
so I'm not sure about the WSL2 implementation.

24:51.000 --> 24:53.000
Are there any other questions?

24:53.000 --> 24:54.000
That's it.

24:56.000 --> 24:58.000
Thank you.

