WEBVTT

00:00.000 --> 00:07.000
Thanks. We're back, and not for the first time.

00:07.000 --> 00:12.000
In 2020, Simon, who's also in the room, and Christian introduced the Eclipse iceoryx

00:12.000 --> 00:14.000
project for the first time.

00:14.000 --> 00:20.000
Today, it's my pleasure to introduce iceoryx2 and to elaborate on how it can

00:20.000 --> 00:23.000
help you with relieving some pain

00:23.000 --> 00:27.000
you always have with middleware, and that I saw over and over again in the last 20 years of

00:27.000 --> 00:29.000
middleware development.

00:30.000 --> 00:34.000
I'll start with a high-level overview of what Eclipse iceoryx is.

00:34.000 --> 00:40.000
And then I will deep-dive into four specific pain points and how iceoryx2

00:40.000 --> 00:41.000
relieves them.

00:41.000 --> 00:48.000
And I will end with some community insights.

00:48.000 --> 00:53.000
So iceoryx is an open source project hosted by the Eclipse Foundation,

00:53.000 --> 00:55.000
started in 2019.

00:55.000 --> 01:01.000
And in 2020, we started iceoryx2 as the second generation.

01:01.000 --> 01:07.000
And if we combine the whole contribution work of all the committers over roughly ten

01:07.000 --> 01:13.000
years, it's more than 70 years of shared experience in developing high-

01:13.000 --> 01:18.000
performance inter-process communication.

01:18.000 --> 01:20.000
This is how it works on a high level.

01:20.000 --> 01:23.000
iceoryx creates shared memory and maps it into the processes.

01:23.000 --> 01:25.000
We have senders and receivers.

01:25.000 --> 01:29.000
This allows a sender to directly write into the shared memory and a receiver

01:29.000 --> 01:32.000
to directly read from the shared memory.

01:32.000 --> 01:36.000
iceoryx then provides the whole communication infrastructure around it: its own entities

01:36.000 --> 01:40.000
for sending and receiving data, discovery, connecting them,

01:40.000 --> 01:45.000
exchanging the data, which is actually just passing references to the messages,

01:45.000 --> 01:48.000
and the whole management of the memory, the bookkeeping

01:48.000 --> 01:49.000
information, and so on.

01:49.000 --> 01:53.000
This allows you to pass a message without a single copy.

01:53.000 --> 02:00.000
We also call this true zero-copy communication.

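NOTE
Editor's addition, not part of the talk: a minimal sketch of this zero-copy
publish-subscribe flow in iceoryx2's Rust API, modeled on the upstream
examples (builder and method names may vary slightly between releases; the
service name is illustrative).
    use iceoryx2::prelude::*;
    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let node = NodeBuilder::new().create::<ipc::Service>()?;
        let service = node
            .service_builder(&"demo/topic".try_into()?)
            .publish_subscribe::<u64>()
            .open_or_create()?;
        let publisher = service.publisher_builder().create()?;
        let subscriber = service.subscriber_builder().create()?;
        // The sender writes its payload directly into shared memory ...
        let sample = publisher.loan_uninit()?;
        let sample = sample.write_payload(42);
        sample.send()?;
        // ... and the receiver reads it in place; only a reference moves.
        if let Some(sample) = subscriber.receive()? {
            println!("received: {}", sample.payload());
        }
        Ok(())
    }
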
02:00.000 --> 02:04.000
iceoryx2 already surpasses the first generation, which we now call

02:04.000 --> 02:06.000
iceoryx classic.

02:06.000 --> 02:07.000
It's written in Rust.

02:07.000 --> 02:10.000
It already runs on a bunch of operating systems: Linux, Windows,

02:10.000 --> 02:12.000
macOS, QNX.

02:12.000 --> 02:14.000
It even runs on bare metal.

02:14.000 --> 02:18.000
And besides the Rust API, we already have language bindings for C,

02:18.000 --> 02:22.000
C++, Python, and also C#.

02:22.000 --> 02:24.000
It has different messaging patterns.

02:24.000 --> 02:27.000
There is many-to-many publish-subscribe communication,

02:27.000 --> 02:32.000
request-response, and also a key-value storage

02:32.000 --> 02:35.000
in shared memory, which we call Blackboard.

02:35.000 --> 02:38.000
And in addition, an event mechanism that allows you to

02:38.000 --> 02:41.000
wake up a receiver that's waiting for data.

02:42.000 --> 02:45.000
It's developed with mission-critical systems in mind.

02:45.000 --> 02:49.000
It has no heap allocation during runtime, no blocking calls,

02:49.000 --> 02:53.000
and also does not depend on the Rust standard library.

02:53.000 --> 02:55.000
Compared to the iceoryx classic version,

02:55.000 --> 02:58.000
we have no more central daemons, it's fully decentralized,

02:58.000 --> 03:02.000
and it's the next step towards a mixed-criticality architecture.

03:06.000 --> 03:09.000
Okay, so it's an IPC middleware, what else?

03:09.000 --> 03:14.000
Yeah, it's a solid foundation to avoid some of the hard pain points

03:14.000 --> 03:17.000
that you often have because of the middleware.

03:20.000 --> 03:23.000
The first and most obvious one is the pain you have with copies

03:23.000 --> 03:26.000
and serialization inside your middleware.

03:26.000 --> 03:29.000
So if you copy a message, this takes CPU cycles,

03:29.000 --> 03:32.000
and sure the bigger the message, the more CPU cycles.

03:32.000 --> 03:35.000
And advanced robotic systems can have gigabytes,

03:35.000 --> 03:38.000
or even tens of gigabytes of data you have to pass between processes,

03:39.000 --> 03:41.000
and threads of execution.

03:41.000 --> 03:46.000
The consequences are that you have high latency and high CPU load.

03:46.000 --> 03:50.000
Here on this figure, we see how the latency increases with the message size

03:50.000 --> 03:54.000
for the IPC mechanisms that are typically used with Linux,

03:54.000 --> 03:56.000
so message queues and Unix domain sockets.

04:00.000 --> 04:04.000
Here is a real-life system where iceoryx is used:

04:04.000 --> 04:07.000
autonomous driving based on an end-to-end AI model.

04:07.000 --> 04:10.000
We have six high-resolution cameras,

04:10.000 --> 04:15.000
with roughly five megabytes per frame, 20 frames per second.

04:15.000 --> 04:20.000
But this already gives you a camera data stream of more than four gigabytes

04:20.000 --> 04:23.000
per second that you have to pass to your inference process.

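NOTE
Editor's arithmetic check: 6 cameras x 5 MB/frame x 20 frames/s is 600 MB/s,
roughly 4.8 gigabit/s, so the "more than four giga..." figure quoted here is
consistent with gigabits per second for the raw stream; every extra copy
(sender, receiver, recorder) multiplies that memory traffic.
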
04:23.000 --> 04:27.000
Now think about it: you have to copy the frames on the sender side.

04:27.000 --> 04:29.000
You have to copy them on the receiver side.

04:29.000 --> 04:34.000
Then you may want to record the data so you have the next process and the next copies.

04:34.000 --> 04:38.000
So it's quite easy to end up with 10 to 20 gigabytes per second,

04:38.000 --> 04:40.000
just for this small subset of your system.

04:44.000 --> 04:47.000
What iceoryx does is true zero-copy IPC:

04:47.000 --> 04:49.000
you are just passing references to the messages.

04:49.000 --> 04:52.000
This gives you a constant sub-microsecond latency

04:52.000 --> 04:54.000
independent of the message size.

04:54.000 --> 04:57.000
Here we see the measurement on a Raspberry Pi,

04:57.000 --> 04:59.000
which is not the most powerful hardware,

04:59.000 --> 05:03.000
but even there our latency is below one microsecond.

05:03.000 --> 05:07.000
If the messages are small, it's not that much of a difference,

05:07.000 --> 05:09.000
but if you have megabyte messages,

05:09.000 --> 05:13.000
we are talking about sub-microseconds instead of low milliseconds.

05:13.000 --> 05:21.000
So it's a factor of more than one thousand faster in the end.

05:21.000 --> 05:27.000
So this enables dozens of gigabytes per second of IPC without CPU load.

05:27.000 --> 05:30.000
And also a fast reaction time,

05:30.000 --> 05:34.000
so it reduces the latency from sensing to acting.

05:34.000 --> 05:38.000
And in the end, with the CPU time you save,

05:38.000 --> 05:42.000
you can use a smaller, cheaper CPU for the same workload,

05:42.000 --> 05:44.000
which can be crucial if you want to go embedded.

05:44.000 --> 05:47.000
So if you do your pre-development on a big PC,

05:47.000 --> 05:49.000
and then have to go embedded,

05:49.000 --> 05:55.000
this is really one important point to avoid going crazy.

05:55.000 --> 05:57.000
Or, the other way around, you could also say

05:57.000 --> 06:01.000
the CPU time you save can be used for more workload.

06:05.000 --> 06:09.000
The next point is additional threads inside your middleware.

06:09.000 --> 06:13.000
So many middleware solutions come with background threads

06:13.000 --> 06:15.000
for discovery, housekeeping, and so on.

06:15.000 --> 06:18.000
But the question is: when do these threads wake up?

06:18.000 --> 06:20.000
And what are they doing?

06:20.000 --> 06:22.000
And if you want to tweak real-time settings,

06:22.000 --> 06:24.000
how can I configure these threads

06:24.000 --> 06:28.000
that are nested deep inside your middleware?

06:28.000 --> 06:33.000
The worst case is that additional threads are involved in passing the message.

06:33.000 --> 06:37.000
This means you have one or even more context switches with every message you send.

06:37.000 --> 06:39.000
One context switch is not that expensive,

06:39.000 --> 06:42.000
but if you have thousands of messages per second,

06:42.000 --> 06:47.000
you will end up with thousands of additional context switches per second.

06:47.000 --> 06:50.000
The consequence of all this: it's not deterministic,

06:50.000 --> 06:53.000
and you have a high scheduling overhead.

06:56.000 --> 06:59.000
Here's an example from a measurement

06:59.000 --> 07:01.000
we did a while back with iceoryx classic.

07:01.000 --> 07:04.000
A user reported that they had some strange latency spikes

07:04.000 --> 07:07.000
in a loaded real-time system.

07:07.000 --> 07:09.000
And it was even a funny pattern,

07:09.000 --> 07:11.000
and we started analyzing, and then realized

07:11.000 --> 07:15.000
these latency spikes were coming from interference with

07:15.000 --> 07:18.000
the monitoring thread we had, which sent a

07:18.000 --> 07:20.000
keep-alive message over a Unix domain socket.

07:24.000 --> 07:26.000
The simplest communication would be:

07:26.000 --> 07:29.000
I have a node A sending a message to node B.

07:29.000 --> 07:32.000
The most efficient execution would be,

07:32.000 --> 07:36.000
I run these two nodes one after another in the same thread of execution.

07:36.000 --> 07:38.000
So first node A runs and sends a message,

07:38.000 --> 07:41.000
then node B runs and consumes the message.

07:41.000 --> 07:44.000
If you have additional threads involved in this message passing,

07:44.000 --> 07:46.000
it's not clear whether node B will see this message.

07:46.000 --> 07:49.000
It's a classical race condition here.

07:52.000 --> 07:54.000
iceoryx2 has no internal threads,

07:54.000 --> 07:57.000
so nothing happens behind the scenes,

07:57.000 --> 07:59.000
and all the housekeeping we have to do is done

07:59.000 --> 08:02.000
when we create or destroy the entities.

08:06.000 --> 08:09.000
What this enables, together with the zero-copy communication, is

08:09.000 --> 08:11.000
low latency with low jitter,

08:11.000 --> 08:13.000
and it's deterministic.

08:13.000 --> 08:16.000
So with this passing of messages along the same thread of execution,

08:16.000 --> 08:19.000
the receiver will always have the message when it is running,

08:19.000 --> 08:22.000
and in the end we reduce the whole scheduling overhead,

08:22.000 --> 08:24.000
which means lower CPU usage.

08:24.000 --> 08:28.000
On the figure you see what happened when we disabled this monitoring thread:

08:28.000 --> 08:31.000
we had a stable latency,

08:31.000 --> 08:33.000
compared with the first version,

08:33.000 --> 08:34.000
where we have the interference,

08:34.000 --> 08:36.000
which is the trace in the middle.

08:36.000 --> 08:46.000
The next pain point is improper queuing inside your middleware.

08:46.000 --> 08:49.000
There are middleware solutions,

08:49.000 --> 08:51.000
especially around shared memory,

08:51.000 --> 08:53.000
where you have no queuing at all,

08:53.000 --> 08:55.000
which means a sender can outpace the receiver

08:55.000 --> 08:58.000
and overwrite messages that have not been read.

08:58.000 --> 09:01.000
If you have queues,

09:01.000 --> 09:05.000
you have to face the challenge of a queue overflow.

09:05.000 --> 09:09.000
Often what you then do is block the sender and

09:09.000 --> 09:12.000
wait for the receiver to consume the messages,

09:12.000 --> 09:14.000
or you return an error and drop the new message

09:14.000 --> 09:17.000
because you cannot push into a full queue.

09:17.000 --> 09:20.000
Another thing is historical messages.

09:20.000 --> 09:24.000
If you have a sender that's only sporadically sending some data,

09:24.000 --> 09:26.000
like some state update,

09:26.000 --> 09:28.000
you want to have receivers connecting to it

09:28.000 --> 09:30.000
and getting the latest state.

09:30.000 --> 09:34.000
But if a late-joining receiver is started after the sender,

09:34.000 --> 09:35.000
and you have no historical data,

09:35.000 --> 09:38.000
it won't get a message and has no data.

09:38.000 --> 09:39.000
In this case,

09:39.000 --> 09:42.000
you have to ensure that your startup order

09:42.000 --> 09:44.000
guarantees that all the receivers are started first

09:44.000 --> 09:45.000
and then the sender,

09:45.000 --> 09:48.000
which is kind of a restriction you have.

09:48.000 --> 09:52.000
The consequence here is that you can have back pressure

09:52.000 --> 09:54.000
from the receiver to the sender side,

09:54.000 --> 09:58.000
or, the other way around, you have a receiver that's forced to keep pace

09:58.000 --> 10:01.000
with the sender to not lose any messages.

10:05.000 --> 10:08.000
Here is an example where I have a low-frequency,

10:08.000 --> 10:11.000
low-priority consumer that always wants to read

10:11.000 --> 10:14.000
the latest four messages from my high-frequency,

10:14.000 --> 10:16.000
high priority producer.

10:16.000 --> 10:20.000
That's actually a use case I had when I was working

10:20.000 --> 10:22.000
on an engine control system.

10:22.000 --> 10:25.000
I had this low frequency, low priority consumer

10:25.000 --> 10:29.000
that wanted to do some calculations on the full engine cycle,

10:29.000 --> 10:31.000
which had four combustions,

10:31.000 --> 10:34.000
and I had a message for every combustion,

10:34.000 --> 10:36.000
and I wanted to have the full cycle

10:36.000 --> 10:39.000
with all of the four latest messages.

10:39.000 --> 10:41.000
As there was no queueing,

10:41.000 --> 10:45.000
I had to do a workaround and have some intermediate node

10:45.000 --> 10:48.000
that caches these messages internally,

10:48.000 --> 10:52.000
and then sends one message with all the four messages collected

10:52.000 --> 10:54.000
from this high frequency sender.

10:54.000 --> 10:56.000
That's kind of an ugly workaround;

10:56.000 --> 10:58.000
it would be better to have queues.

11:00.000 --> 11:02.000
But if you have queues,

11:02.000 --> 11:04.000
you could have a queue overflow.

11:04.000 --> 11:07.000
In this example we now have a receiver,

11:07.000 --> 11:09.000
which has a queue size of four,

11:09.000 --> 11:11.000
but the receiver gets delayed.

11:11.000 --> 11:14.000
So your high frequency,

11:14.000 --> 11:15.000
high-priority publisher

11:15.000 --> 11:17.000
still wants to provide new data.

11:19.000 --> 11:21.000
Depending on your use case,

11:21.000 --> 11:23.000
you have different options,

11:23.000 --> 11:26.000
often many of them are also not really good options.

11:26.000 --> 11:28.000
So you could block the sender,

11:28.000 --> 11:30.000
which we do not want to do in this case,

11:30.000 --> 11:32.000
because I have a high priority sender,

11:32.000 --> 11:35.000
and I don't want it to get disturbed by the low-frequency,

11:35.000 --> 11:37.000
low priority consumer.

11:37.000 --> 11:40.000
You could return an error,

11:40.000 --> 11:42.000
and maybe shut down the whole system,

11:42.000 --> 11:44.000
which we also do not want,

11:44.000 --> 11:48.000
we don't want to get into trouble because of a low-priority receiver here.

11:48.000 --> 11:52.000
But we also don't want to let this receiver process old data,

11:53.000 --> 11:55.000
so it would be good to really have

11:55.000 --> 11:58.000
the four latest consecutive messages here.

12:03.000 --> 12:06.000
What iceoryx2 provides is receiver queues with a

12:06.000 --> 12:08.000
configurable queue size,

12:08.000 --> 12:11.000
and we have something that we call safely overflowing queues,

12:11.000 --> 12:14.000
which is more or less just a ring buffer,

12:14.000 --> 12:17.000
but it has the benefit that you can drop old, unread messages

12:17.000 --> 12:20.000
in favor of new ones.

12:20.000 --> 12:22.000
So in case of an overflow,

12:22.000 --> 12:26.000
you can always provide the latest N messages

12:26.000 --> 12:28.000
to the receiver.

12:30.000 --> 12:33.000
In the end, we can configure this queue overflow behavior,

12:33.000 --> 12:36.000
so we have the possibility to either return an error

12:36.000 --> 12:38.000
and drop the new message.

12:38.000 --> 12:40.000
we can block the sender and wait for the receiver,

12:40.000 --> 12:43.000
or we can do this overflow that I just described,

12:43.000 --> 12:45.000
and drop the oldest messages.

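NOTE
Editor's addition, not part of the talk: how these three overflow behaviors
could look in iceoryx2's Rust builder API as I read the upstream examples
(exact method names may differ between releases; `node` as in the first
sketch).
    // Ring-buffer behavior: on overflow, drop the oldest unread sample.
    let service = node
        .service_builder(&"demo/overflow".try_into()?)
        .publish_subscribe::<u64>()
        .enable_safe_overflow(true)
        .open_or_create()?;
    // With safe overflow disabled, the publisher picks a strategy instead:
    let publisher = service
        .publisher_builder()
        // block the sender and wait for the receiver ...
        .unable_to_deliver_strategy(UnableToDeliverStrategy::Block)
        // ... or return an error and drop the new sample:
        // .unable_to_deliver_strategy(UnableToDeliverStrategy::DiscardSample)
        .create()?;
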
12:46.000 --> 12:50.000
What we also provide is a history option on the sender side,

12:50.000 --> 12:53.000
so we can store the messages there,

12:53.000 --> 12:55.000
and if you have a late joining receiver,

12:55.000 --> 13:00.000
we can deliver these messages on subscription, you could say.

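NOTE
Editor's addition, continuing the sketch above: the sender-side history for
late-joining receivers (again, names per my reading of the examples).
    let service = node
        .service_builder(&"demo/state".try_into()?)
        .publish_subscribe::<u64>()
        .history_size(1) // keep the latest sample for late joiners
        .open_or_create()?;
    // A subscriber created after the sender published still receives
    // the stored sample on subscription.
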
13:05.000 --> 13:09.000
So this enables a decoupling of senders and receivers,

13:09.000 --> 13:13.000
and also a very memory-efficient queue

13:13.000 --> 13:17.000
of only the messages that are relevant,

13:17.000 --> 13:21.000
and you can drop messages that are no longer of interest.

13:21.000 --> 13:24.000
In this example here, we have a fusion node,

13:24.000 --> 13:27.000
it has different senders with different frequencies.

13:27.000 --> 13:30.000
You could decide that for your LiDAR sender

13:30.000 --> 13:33.000
you're only interested in the latest scan, so queue size one.

13:33.000 --> 13:35.000
For your camera sender,

13:35.000 --> 13:38.000
you want to have up to the four latest camera images,

13:38.000 --> 13:40.000
but if they are getting older,

13:40.000 --> 13:42.000
they are no longer interesting, so you let the queue overflow.

13:43.000 --> 13:46.000
If you have a sender with a certain state,

13:46.000 --> 13:49.000
you can subscribe and get the history,

13:49.000 --> 13:53.000
or the latest message still available on the sender side.

13:53.000 --> 13:56.000
Or you could also decide: I don't want to miss a message,

13:56.000 --> 13:58.000
I go for the maximum queue size,

13:58.000 --> 14:00.000
and a queue overflow is an error.

14:07.000 --> 14:10.000
The last pain point is message callbacks,

14:11.000 --> 14:14.000
or let's say per-message callbacks,

14:14.000 --> 14:17.000
and this is typically the pattern that ROS is also using.

14:17.000 --> 14:21.000
It gives you trouble because this is a hard coupling

14:21.000 --> 14:23.000
between communication and execution,

14:23.000 --> 14:26.000
so your business logic on the receiver side

14:26.000 --> 14:29.000
is forced to react on every message.

14:31.000 --> 14:34.000
There are use cases where

14:34.000 --> 14:36.000
this is perfectly fine,

14:36.000 --> 14:39.000
but if you want to process several messages

14:39.000 --> 14:41.000
or receivers simultaneously,

14:41.000 --> 14:43.000
you have to do some bookkeeping

14:43.000 --> 14:45.000
and some caching behind these callbacks

14:45.000 --> 14:48.000
to get to the right set of messages you want to have.

14:50.000 --> 14:53.000
It gets even worse if the message only lives

14:53.000 --> 14:55.000
during the callback duration,

14:55.000 --> 14:57.000
which means you have to copy the content of the message

14:57.000 --> 15:00.000
out of this callback for later processing.

15:01.000 --> 15:04.000
Here we end up with many context switches,

15:04.000 --> 15:06.000
which is more scheduling overhead again,

15:06.000 --> 15:09.000
and also more administrative effort on your side.

15:14.000 --> 15:16.000
If we stay with this fusion example,

15:16.000 --> 15:19.000
we have the different senders with different frequencies.

15:19.000 --> 15:23.000
Now, your fusion algorithm could work like this:

15:23.000 --> 15:26.000
I always want to process the whole set of messages

15:26.000 --> 15:29.000
from the different sensors when there's a new LiDAR scan,

15:29.000 --> 15:31.000
so that's my leading sensor.

15:31.000 --> 15:33.000
Whenever I get a new LiDAR scan,

15:33.000 --> 15:36.000
I want to get the messages from the other senders

15:36.000 --> 15:39.000
and throw them into one algorithm that does the calculation.

15:41.000 --> 15:44.000
In this simple example, this would mean you have 16 individual callbacks

15:44.000 --> 15:47.000
just to execute one fusion run.

15:48.000 --> 15:53.000
I tried to depict it here on the timeline in the figure,

15:53.000 --> 15:55.000
and it's a bit messy,

15:55.000 --> 15:58.000
but if you have callbacks per message,

15:58.000 --> 16:00.000
things get a bit messy.

16:04.000 --> 16:09.000
What iceoryx2 provides is a total decoupling of the notification,

16:09.000 --> 16:13.000
and the related context switching, from the act of messaging.

16:13.000 --> 16:15.000
We have events as a separate pattern;

16:15.000 --> 16:18.000
it comes with notifiers, listeners, and wait sets.

16:18.000 --> 16:21.000
A notifier can send an event,

16:21.000 --> 16:23.000
a listener can wait on an event,

16:23.000 --> 16:26.000
and with a wait set, you can wait on many events

16:26.000 --> 16:27.000
in a single thread.

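NOTE
Editor's addition, not part of the talk: a sketch of this event pattern with
a notifier and a listener (service name illustrative; API per the upstream
examples as I know them, `node` as in the first sketch).
    let event = node
        .service_builder(&"demo/event".try_into()?)
        .event()
        .open_or_create()?;
    let notifier = event.notifier_builder().create()?;
    let listener = event.listener_builder().create()?;
    // Sender side: publish the data, then wake the receiver explicitly.
    notifier.notify()?;
    // Receiver side: block until an event arrives, then drain the queues.
    listener.blocking_wait_one()?;
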
16:29.000 --> 16:32.000
These things can easily be combined again

16:32.000 --> 16:35.000
with the publishers, subscribers, clients,

16:35.000 --> 16:36.000
and servers, and so on.

16:36.000 --> 16:40.000
So it's easy to build a triggering publisher that always

16:40.000 --> 16:43.000
wakes up a receiver when sending data,

16:43.000 --> 16:47.000
or you can also have your per-message callback again,

16:47.000 --> 16:49.000
if you really like it.

16:54.000 --> 16:56.000
If you now bring this together,

16:56.000 --> 16:58.000
the queueing and this event mechanism,

16:58.000 --> 17:01.000
you can have very efficient execution strategies.

17:02.000 --> 17:04.000
So in our fusion example,

17:04.000 --> 17:05.000
I could say,

17:05.000 --> 17:08.000
whenever a LiDAR message is sent,

17:08.000 --> 17:10.000
I also send a notification,

17:10.000 --> 17:12.000
and I wake up the fusion node.

17:12.000 --> 17:15.000
All the other messages from all the other senders

17:15.000 --> 17:17.000
are queued behind the scenes,

17:17.000 --> 17:20.000
with the queue sizes you configured.

17:20.000 --> 17:24.000
Here you have one context switch

17:24.000 --> 17:26.000
when you want to run the fusion,

17:26.000 --> 17:29.000
and the rest of the queuing just happens behind the scenes.

17:30.000 --> 17:33.000
It's also possible to use our event mechanism

17:33.000 --> 17:35.000
for whatever notification you have,

17:35.000 --> 17:37.000
so you could use it for timers,

17:37.000 --> 17:39.000
for instance, and say,

17:39.000 --> 17:41.000
I have a wait set that's waiting on new data,

17:41.000 --> 17:43.000
or on the timer to expire.

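NOTE
Editor's addition, not part of the talk: one wait set multiplexing a data
listener and a periodic timer in a single thread (a sketch of the iceoryx2
WaitSet API; the attachments are illustrative and names may vary by
release; `listener` as in the event sketch above).
    let waitset = WaitSetBuilder::new().create::<ipc::Service>()?;
    let data_guard = waitset.attach_notification(&listener)?;
    let tick_guard = waitset.attach_interval(core::time::Duration::from_millis(100))?;
    waitset.wait_and_process(|id| {
        if id.has_event_from(&data_guard) {
            // new LiDAR data: run one fusion step, draining the queues
        } else if id.has_event_from(&tick_guard) {
            // timer expired: do the periodic work
        }
        CallbackProgression::Continue
    })?;
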
17:45.000 --> 17:46.000
So again, here,

17:46.000 --> 17:48.000
less context switching,

17:48.000 --> 17:50.000
and more CPU time for your application.

17:55.000 --> 17:58.000
Coming to some community insights,

17:59.000 --> 18:02.000
iceoryx is already used in many different domains.

18:02.000 --> 18:03.000
It's used in automotive,

18:03.000 --> 18:06.000
from simulators up to automated driving,

18:06.000 --> 18:08.000
and in all kinds of robots.

18:08.000 --> 18:11.000
It's used for high frequency trading,

18:11.000 --> 18:13.000
on industrial cameras,

18:13.000 --> 18:17.000
we have different medical devices up to surgical robots,

18:17.000 --> 18:19.000
drones,

18:19.000 --> 18:22.000
so wherever you have a need for low latency

18:22.000 --> 18:24.000
and high volume data transfer.

18:29.000 --> 18:35.000
So you can think of iceoryx as being low-level

18:35.000 --> 18:39.000
basic software for shared-memory communication

18:39.000 --> 18:42.000
that can now be combined with other open source projects

18:42.000 --> 18:44.000
to build something bigger.

18:44.000 --> 18:47.000
So you can combine it with a network protocol

18:47.000 --> 18:49.000
to have a full-blown communication stack

18:49.000 --> 18:51.000
with internal and external communication,

18:51.000 --> 18:53.000
for instance with

18:53.000 --> 18:55.000
gRPC or something like that.

18:56.000 --> 18:59.000
You could also integrate it as an IPC layer

18:59.000 --> 19:02.000
in a broader communication stack,

19:02.000 --> 19:04.000
maybe based on a standard like DDS,

19:04.000 --> 19:06.000
which then defines a data model,

19:06.000 --> 19:08.000
and an IDL and so on,

19:08.000 --> 19:09.000
on top of iceoryx.

19:20.000 --> 19:21.000
Thank you.

19:21.000 --> 19:22.000
So I was wondering,

19:22.000 --> 19:24.000
since you don't copy the data,

19:24.000 --> 19:26.000
I can only read it,

19:26.000 --> 19:28.000
I cannot write into that pointer.

19:28.000 --> 19:29.000
Yeah.

19:29.000 --> 19:32.000
So I need to make a copy myself.

19:32.000 --> 19:34.000
If I want to modify it.

19:34.000 --> 19:35.000
Ah, okay.

19:35.000 --> 19:37.000
Yeah, that's this use case

19:37.000 --> 19:40.000
which we call the read-modify-write pipeline.

19:40.000 --> 19:43.000
Yeah, with the current API that's not possible;

19:43.000 --> 19:47.000
it could be a special option to also allow this.

19:47.000 --> 19:49.000
We know this use case.

19:49.000 --> 19:52.000
I think it would be interesting if that worked.

19:52.000 --> 19:55.000
But yeah, if you have kind of a special contract,

19:55.000 --> 19:57.000
so sure, you would have to ensure that there's

19:57.000 --> 19:59.000
only one receiver of the data

19:59.000 --> 20:01.000
if you start writing to it, and so on.

20:12.000 --> 20:14.000
I was wondering about the blackboard mechanism,

20:14.000 --> 20:16.000
so does that keep a history of all the messages

20:16.000 --> 20:18.000
that have been submitted,

20:18.000 --> 20:20.000
or is that just the latest messages?

20:20.000 --> 20:23.000
Yeah, blackboards can be really good for reproducible

20:23.000 --> 20:25.000
logging of when messages are sent.

20:25.000 --> 20:26.000
Yeah.

20:26.000 --> 20:31.000
The blackboard is actually one thing where we might have a copy.

20:31.000 --> 20:35.000
So we would check if there's some contention on the blackboard,

20:35.000 --> 20:36.000
if you can update,

20:36.000 --> 20:37.000
and if someone is reading,

20:37.000 --> 20:40.000
it could mean that you have to read again,

20:40.000 --> 20:41.000
because someone was writing.

20:41.000 --> 20:44.000
But I'm intrigued by your use case here.

20:44.000 --> 20:45.000
Thanks.

20:51.000 --> 20:53.000
Thank you.

20:53.000 --> 20:55.000
If I'm doing zero copy IPC,

20:55.000 --> 20:58.000
does it mean that I need to pre-allocate my memory pool

20:58.000 --> 21:00.000
for all of my messages?

21:00.000 --> 21:03.000
So this is what we always do behind the scenes.

21:03.000 --> 21:06.000
So if you configure your services,

21:06.000 --> 21:08.000
you can configure how many,

21:08.000 --> 21:11.000
for instance, publishers and subscribers are out there.

21:11.000 --> 21:13.000
You also configure your queue sizes,

21:13.000 --> 21:14.000
and with these values,

21:14.000 --> 21:19.000
we can calculate the worst-case amount of memory you need.

21:20.000 --> 21:22.000
In the current version of iceoryx2,

21:22.000 --> 21:24.000
we size it a bit towards the maximum,

21:24.000 --> 21:27.000
so it could be that it's the theoretical maximum,

21:27.000 --> 21:30.000
but in practice you don't need it.

21:30.000 --> 21:34.000
We will soon have the option for you to configure it,

21:34.000 --> 21:35.000
individually.

21:35.000 --> 21:38.000
If you want to have a safety system,

21:38.000 --> 21:41.000
you can then configure it beforehand.

21:41.000 --> 21:44.000
In the current version,

21:45.000 --> 21:50.000
I would say, if we run out of memory,

21:50.000 --> 21:52.000
we'd just allocate more memory behind the scenes,

21:52.000 --> 21:54.000
so things keep going on.

21:54.000 --> 21:57.000
So that's also kind of smoother

21:57.000 --> 21:59.000
than in iceoryx classic,

21:59.000 --> 22:02.000
where you really had to configure all the settings.

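NOTE
Editor's addition, not part of the talk: the limits mentioned in this answer
map onto the pub/sub service builder roughly like this (a sketch; treat the
method names as illustrative, `node` as in the first sketch).
    let service = node
        .service_builder(&"demo/limits".try_into()?)
        .publish_subscribe::<u64>()
        .max_publishers(1)
        .max_subscribers(3)
        .subscriber_max_buffer_size(4)
        .open_or_create()?;
    // From limits like these, the worst-case shared-memory demand of a
    // service can be computed up front.
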
22:05.000 --> 22:09.000
Do you have any industrial clients?

22:09.000 --> 22:11.000
The reason I'm asking is,

22:11.000 --> 22:13.000
I wanted to see if you had ever integrated it

22:13.000 --> 22:16.000
with EtherCAT or PROFINET for industrial use,

22:16.000 --> 22:19.000
which is a highly deterministic environment?

22:19.000 --> 22:20.000
No.

22:20.000 --> 22:21.000
Not that I recall.

22:21.000 --> 22:24.000
So far we had no contact with business guys

22:24.000 --> 22:27.000
who are combining it with EtherCAT

22:27.000 --> 22:29.000
or whatever.

22:29.000 --> 22:30.000
At least not that I know of.

22:30.000 --> 22:32.000
But for sure,

22:32.000 --> 22:35.000
this is how we typically do it:

22:35.000 --> 22:37.000
we have gateways,

22:37.000 --> 22:40.000
and the gateway is grabbing data from shared memory,

22:40.000 --> 22:42.000
and then sending it on the wire or whatever.

22:42.000 --> 22:46.000
And you could then have, say, a strict cycle time

22:46.000 --> 22:48.000
in which your gateway is running,

22:48.000 --> 22:49.000
which then consumes,

22:49.000 --> 22:51.000
without queuing, the latest received message

22:51.000 --> 22:53.000
and sends it over Ethernet.

22:55.000 --> 22:59.000
Do you support dynamically sized messages,

22:59.000 --> 23:01.000
or do you need to set a maximum size?

23:01.000 --> 23:02.000
How does that work?

23:02.000 --> 23:05.000
That's still a limitation.

23:05.000 --> 23:08.000
So data lives in shared memory.

23:08.000 --> 23:13.000
So if you have data that's based on heap allocation,

23:13.000 --> 23:16.000
your heap is process-local, so that won't work.

23:16.000 --> 23:20.000
But if you have your own special versions of vectors

23:20.000 --> 23:22.000
and so on, with your own allocator,

23:22.000 --> 23:24.000
you could say, okay, I have an allocator that's allocating

23:24.000 --> 23:26.000
in shared memory,

23:26.000 --> 23:28.000
then you can also realize that.

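NOTE
Editor's addition, not part of the talk: iceoryx2 also offers a middle
ground via slice payloads, where the publisher declares an upper bound and
loans a run-time-sized slice (a sketch; method names may vary by release,
`node` as in the first sketch).
    let service = node
        .service_builder(&"demo/bytes".try_into()?)
        .publish_subscribe::<[u8]>()
        .open_or_create()?;
    let publisher = service
        .publisher_builder()
        .initial_max_slice_len(4096) // upper bound for loaned slices
        .create()?;
    let sample = publisher.loan_slice_uninit(1024)?; // actual size per send
    let sample = sample.write_from_fn(|_| 0u8);
    sample.send()?;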
