WEBVTT

00:00.000 --> 00:15.000
We're talking about Beluga and hopefully telling you what it is and why you should care about it.

00:15.000 --> 00:18.000
I'm going to show it by going through an application.

00:18.000 --> 00:21.000
We want to localize the Andino robot.

00:21.000 --> 00:24.000
using SteamVR base stations.

00:24.000 --> 00:27.000
And for that, we're going to use the Beluga library.

00:27.000 --> 00:30.000
So what do we need? We need the Andino robot.

00:30.000 --> 00:33.000
We need the SteamVR base stations.

00:33.000 --> 00:37.000
And of course, we need some type of sensor to receive the signal from the base stations.

00:37.000 --> 00:40.000
It was perfectly described earlier today.

00:40.000 --> 00:44.000
The sensor is the Bitcraze Lighthouse positioning deck,

00:44.000 --> 00:49.000
which measures the infrared signal transmitted by the base stations

00:49.000 --> 00:54.000
and provides timing and angular information about where the base stations are.

00:54.000 --> 01:03.000
But we still need to do some math on those readings to actually get the azimuth and elevation angles for the different base stations.
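
The angular math alluded to here can be sketched as a bearing conversion. This is a minimal illustration, not the Bitcraze or Beluga API; the function names and conventions are assumptions.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper types for illustration: a base station bearing
// expressed as azimuth (horizontal angle) and elevation (vertical angle),
// and the equivalent unit direction vector in the sensor frame.
struct Direction {
  double x, y, z;
};

// Convert azimuth/elevation angles (radians) to a unit direction vector.
Direction from_azimuth_elevation(double azimuth, double elevation) {
  return {std::cos(elevation) * std::cos(azimuth),
          std::cos(elevation) * std::sin(azimuth),
          std::sin(elevation)};
}

// Recover the azimuth angle from a direction vector.
double azimuth_of(const Direction& d) { return std::atan2(d.y, d.x); }

// Recover the elevation angle from a direction vector.
double elevation_of(const Direction& d) {
  return std::atan2(d.z, std::hypot(d.x, d.y));
}
```

The two representations round-trip, which is the essence of turning the deck's angular readings into usable bearings toward each base station.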

01:03.000 --> 01:09.000
For that, we're going to use the lighthouse ROS node, which is provided by Ekumen, open source.

01:09.000 --> 01:17.000
And yeah, so this basically computes where the base stations are with respect to the robot.

01:17.000 --> 01:19.000
But that's not what we want.

01:19.000 --> 01:23.000
We want to find where the robot is with respect to the known locations of the base stations.

01:24.000 --> 01:28.000
Base station calibration and positioning was discussed in the previous talk.

01:28.000 --> 01:31.000
Very interesting. We're going to do that.

01:31.000 --> 01:35.000
So how do we get where the robot is? For that, we're going to use Beluga.

01:35.000 --> 01:37.000
So what is the Beluga?

01:37.000 --> 01:44.000
Beluga is an open source Monte Carlo localization toolkit, which is focused on code quality and performance.

01:44.000 --> 01:48.000
It is both an AMCL drop-in replacement node.

01:48.000 --> 01:52.000
Many people here are probably familiar with the AMCL node. This one will work the same.

01:52.000 --> 01:55.000
Basically, it just works very similarly to AMCL.

01:55.000 --> 01:58.000
It supports both ROS 1 and ROS 2.

01:58.000 --> 02:06.000
But it is also an extensible, ROS-independent MCL library that we can use to make other localization systems.

02:06.000 --> 02:08.000
And that's what we want to do now.

02:08.000 --> 02:11.000
So we're going to focus on the second aspect of Beluga.

02:11.000 --> 02:17.000
And if we take a look under the hood, if we open up the code of the Beluga AMCL node,

02:17.000 --> 02:23.000
we'll find out that it's actually instantiating the AMCL algorithm from the Beluga library,

02:23.000 --> 02:29.000
and configuring it to expose the sensor models and the motion models that are used in the AMCL package,

02:29.000 --> 02:34.000
so that people are familiar with it: it has the same parameters, connects to the same ROS topics, publishes the same things.

02:34.000 --> 02:37.000
So basically, it works exactly the same.

02:37.000 --> 02:43.000
So basically, if we look at the AMCL implementation inside Beluga,

02:43.000 --> 02:47.000
we'll see that it is generic on the sensor models and the motion models.
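
That genericity can be illustrated with a minimal sketch (this is not Beluga's actual API; the names are made up for illustration): the importance-weighting step only needs the sensor model to be a callable that, given a measurement, yields a state-to-likelihood function.

```cpp
#include <cassert>
#include <vector>

// Toy types for illustration only.
struct State {
  double x, y, theta;
};

struct Particle {
  State state;
  double weight;
};

// The update step is templated on the sensor model: any callable that
// maps a measurement to a state -> likelihood function will do.
template <typename SensorModel, typename Measurement>
void reweight(std::vector<Particle>& particles, const SensorModel& model,
              const Measurement& z) {
  auto likelihood = model(z);  // likelihood(state) ~ p(z | state)
  for (auto& p : particles) {
    p.weight *= likelihood(p.state);
  }
}
```

Swapping in a different sensor model, like the lighthouse one discussed next, requires no change to the filter itself.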

02:47.000 --> 02:51.000
So in our case, maybe the only thing that we need to do is write a new sensor model

02:51.000 --> 02:57.000
that will weight our particles based on the readings from the Lighthouse positioning deck.

02:57.000 --> 03:02.000
If we wanted to do something more involved, like, for example, fuse the lidar that the robot has

03:02.000 --> 03:09.000
and the Lighthouse positioning information, we could do our own AMCL-like algorithm.

03:09.000 --> 03:14.000
And for that, we might not use the entire AMCL implementation in Beluga,

03:14.000 --> 03:21.000
but use the underlying abstractions that it has, which use modern-style C++ ranges

03:21.000 --> 03:23.000
to, for example, provide all the particles.

03:23.000 --> 03:25.000
And then we can resample them and get an estimate.
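
The resample-and-estimate step mentioned here can be sketched roughly like this. This is an illustrative standalone version, not Beluga's actual interface: weights drive a multinomial draw, and the estimate is the mean of the resampled set.

```cpp
#include <cassert>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Toy 1-D particle for illustration only.
struct Particle {
  double x;
  double weight;
};

// Draw a new particle set with probability proportional to the weights
// (multinomial resampling); resampled particles get uniform weight.
std::vector<Particle> resample(const std::vector<Particle>& in,
                               std::mt19937& rng) {
  std::vector<double> weights;
  for (const auto& p : in) weights.push_back(p.weight);
  std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
  std::vector<Particle> out;
  for (std::size_t i = 0; i < in.size(); ++i) {
    out.push_back({in[pick(rng)].x, 1.0});
  }
  return out;
}

// Estimate the state as the mean of the (equally weighted) particles.
double estimate(const std::vector<Particle>& ps) {
  double sum = std::accumulate(
      ps.begin(), ps.end(), 0.0,
      [](double acc, const Particle& p) { return acc + p.x; });
  return sum / static_cast<double>(ps.size());
}
```

In Beluga these stages are composed with range adaptors over the particle set rather than explicit loops, but the flow is the same: weight, resample, estimate.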

03:25.000 --> 03:31.000
So it's pretty flexible. We can extend as much as we want, or use it as it is.

03:31.000 --> 03:39.000
I'm going really fast. So, what do we need now?

03:39.000 --> 03:46.000
I think we can go for the first option and just create a new sensor model.

03:46.000 --> 03:53.000
And for that, we can create our lighthouse sensor model, which is a class.

03:53.000 --> 03:59.000
And if we look at the Beluga documentation, it will tell us that it's a class that needs to overload the call operator,

03:59.000 --> 04:05.000
which receives, when called, the detections, which would be the sensor readings.

04:05.000 --> 04:11.000
So this information of azimuth and elevation for the base stations that it saw at that particular moment, right?

04:11.000 --> 04:17.000
It doesn't need to see all the base stations at any given moment; it could be just one, or whichever ones it sees.

04:17.000 --> 04:21.000
And what it will return is a callable that takes as an argument

04:21.000 --> 04:25.000
the current state, which is a particle, basically.

04:25.000 --> 04:30.000
And what it will do is compute basically some type of projection error

04:30.000 --> 04:40.000
and give us the probability of getting this detection, given this state, this particle.

04:40.000 --> 04:43.000
So basically, that probability is the weight of the particle.

04:43.000 --> 04:48.000
And this is the only new code that we need to write, which is a bit of math.

04:48.000 --> 04:50.000
And that's it, assuming some error model and whatnot.
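
The shape described above can be sketched as follows. The class name, detection fields, known station positions, and Gaussian error model are all assumptions for illustration; they are not Beluga's actual API or the talk's exact code.

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

constexpr double kTwoPi = 6.283185307179586;

// Hypothetical detection: bearing to one base station at this instant.
struct Detection {
  int base_station_id;
  double azimuth;
  double elevation;
};

// Hypothetical 2-D particle state.
struct Pose2d {
  double x, y, theta;
};

class LighthouseSensorModel {
 public:
  // Known base station (x, y) positions in the map frame, from calibration.
  explicit LighthouseSensorModel(
      std::vector<std::pair<double, double>> stations)
      : stations_(std::move(stations)) {}

  // Call operator: given the detections seen now, return a callable that
  // maps a candidate state (a particle) to the likelihood of having
  // produced those detections from that state.
  auto operator()(std::vector<Detection> detections) const {
    return [this, detections = std::move(detections)](const Pose2d& state) {
      double weight = 1.0;
      for (const auto& d : detections) {
        const auto& [sx, sy] = stations_[d.base_station_id];
        // Expected azimuth of this station as seen from the particle.
        double expected = std::atan2(sy - state.y, sx - state.x) - state.theta;
        // Angular projection error, wrapped to (-pi, pi].
        double error = std::remainder(d.azimuth - expected, kTwoPi);
        // Assumed Gaussian error model with sigma = 0.1 rad.
        weight *= std::exp(-(error * error) / (2.0 * 0.1 * 0.1));
      }
      return weight;
    };
  }

 private:
  std::vector<std::pair<double, double>> stations_;
};
```

A particle that perfectly predicts the measured azimuth gets weight 1; particles whose predicted bearing disagrees get exponentially smaller weight, which is exactly the reweighting the filter needs.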

04:51.000 --> 04:56.000
But then we package that, and we can create our own lighthouse MCL ROS node.

04:56.000 --> 05:02.000
We still need to write some extra code to connect it to the system.

05:02.000 --> 05:09.000
But then we are done, and we can localize our robot using pretty little code.

05:09.000 --> 05:11.000
Next up.

05:11.000 --> 05:13.000
Thank you.

