WEBVTT

00:00.000 --> 00:26.000
Hello, and please welcome Roland and Sam, who apparently have fun with calibration. Never heard of that before, so very happy to hear about it.

00:27.000 --> 00:31.000
I need a microphone to actually say something.

00:31.000 --> 00:39.000
Okay, welcome everybody to our presentation about the tools and methods to get top quality robot data.

00:39.000 --> 00:43.000
So, why would you monitor data quality? Let's start with that question.

00:44.000 --> 00:51.000
Well, Sam and I have between us worked at more than 10 companies where we were annoyed at data quality.

00:51.000 --> 00:55.000
So, we figured that this would be an important topic to present.

00:55.000 --> 01:01.000
Whenever you join a robotics company, robotics companies are always very proud of collecting a lot of data.

01:01.000 --> 01:04.000
But your data should also be good from the start.

01:04.000 --> 01:10.000
Also, you want to give data to your users as soon as possible, so they can find the mistakes that you're not seeing.

01:10.000 --> 01:15.000
Yeah, for machine learning purposes, everybody wants to train machine learning algorithms nowadays.

01:15.000 --> 01:18.000
And for that your data should be good and consistent.

01:18.000 --> 01:26.000
So, if you've collected data for five years and then you discover that your camera was terrible, then effectively you've only collected data for one week.

01:26.000 --> 01:30.000
So, try to get the data to people quickly so they can find the issues.

01:30.000 --> 01:35.000
Yeah, nowadays I'm spending most of my time just begging the computer to write better code.

01:36.000 --> 01:42.000
And that's not going to solve your problems. I noticed that a lot of people are not looking at their data anymore.

01:42.000 --> 01:47.000
So, we are trying to convince you that you should take a look at your data.

01:47.000 --> 01:52.000
Yeah, so we could only put a subset of common issues in this presentation.

01:52.000 --> 01:55.000
Who are we? Sam, who are you?

01:55.000 --> 01:59.000
I'm Sam. I've been doing robotics for over 15 years. I love robots of all kinds.

01:59.000 --> 02:03.000
My favorites, though, are humanoids, because they're complex and they're fun. They can do things.

02:03.000 --> 02:07.000
And yeah, you can follow me on GitHub and ask me any questions afterwards if you want.

02:07.000 --> 02:13.000
My favorite kind of robots are self-driving cars. I think they're pretty cool.

02:13.000 --> 02:16.000
So, I'm working on end-to-end learning for self-driving cars at Wayve.

02:16.000 --> 02:23.000
Yeah, you can follow me on LinkedIn, you can follow Sam on GitHub, and you can follow the foxes in my backyard on Instagram.

02:23.000 --> 02:30.000
Yeah, let's get started. So, we figured let's start with researching and planning your sensors.

02:30.000 --> 02:38.000
One thing I always notice is that people don't have the right expectations for sensors.

02:38.000 --> 02:45.000
So, one of the things I often face is that people say, hey, Roland, can you make a machine learning algorithm to detect this pedestrian?

02:45.000 --> 02:49.000
Which is at 80 or 100 meters away.

02:49.000 --> 02:55.000
And then it's kind of my task to explain people that this pedestrian is only four pixels in the camera space.

02:55.000 --> 03:02.000
Or only has like two lidar points hitting the pedestrian, and no amount of machine learning is going to fix that.
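That back-of-the-envelope check can be sketched in a few lines. This is a small-angle pinhole approximation; the field of view and resolution below are invented examples, not numbers from the talk:

```python
import math

def pixels_on_target(target_height_m: float, distance_m: float,
                     vertical_fov_deg: float, image_height_px: int) -> float:
    """Approximate how many pixel rows a target occupies at a given distance
    (pinhole camera, small-angle approximation)."""
    # Focal length in pixels from the vertical field of view.
    focal_px = (image_height_px / 2) / math.tan(math.radians(vertical_fov_deg) / 2)
    return target_height_m * focal_px / distance_m

# A 1.7 m pedestrian, 100 m away, seen by a 1080-row camera with a 60-degree
# vertical FOV: only a thin sliver of the image.
rows = pixels_on_target(1.7, 100.0, 60.0, 1080)
```

Running the numbers before promising a detector is exactly the expectation-setting exercise described above.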

03:02.000 --> 03:09.000
One tool which I found for this, which I really like, is that Tangram Vision made a visualizer for lidar,

03:09.000 --> 03:14.000
where you can compare different types of lidars. So, you can see what their range is.

03:14.000 --> 03:21.000
You can see at what distance how many points roughly would be there.

03:21.000 --> 03:31.000
They also have one for depth sensors. So, yeah, start building your robot by thinking about what accuracy you need at which distance.

03:31.000 --> 03:36.000
And yeah, what is the coverage around your robot?

03:36.000 --> 03:41.000
And it's not only about the sensor itself, like how big the image you're getting is, how many pixels, and so on.

03:41.000 --> 03:46.000
You also need to think about the details on it because they may affect you more than you think.

03:46.000 --> 03:50.000
So, for example, the interface of whatever sensor you're using: is it USB?

03:50.000 --> 03:55.000
Well, you may have issues with the, you know, flakiness of the connection, which is going to affect your data quality.

03:55.000 --> 04:00.000
Or say you have Ethernet, like on a lidar. Cool, it's awesome, it's really easy to use anything else with it.

04:00.000 --> 04:02.000
You can buy switches, everything is awesome, yeah.

04:02.000 --> 04:11.000
But then you're paying a lot of CPU cost on the network stack, which may also delay your data, which again will affect your data quality.

04:11.000 --> 04:19.000
Yeah. So, you know, every kind of technology has a trade-off, but you should probably think about it a little bit, or if you are not sure, try it first.

04:19.000 --> 04:24.000
And then whatever sensor you choose, the driver quality is quite important.

04:24.000 --> 04:31.000
We all like open source here, so if your driver is actually open source, you can look at how things are done and improve them if you need to.

04:31.000 --> 04:37.000
That's great. If it isn't, bad luck; hopefully your vendor is friendly and helpful.

04:37.000 --> 04:46.000
Also, is your driver efficient? Because we've all seen, like, oh yeah, this driver in particular uses a lot of CPU; well, maybe it's just doing something silly.

04:46.000 --> 04:53.000
And the data format: is the sensor giving you the data in a format you can use directly, or do you need to transform it?

04:53.000 --> 04:59.000
When you transform it, are you losing anything that may affect the data quality? Check it out.

05:00.000 --> 05:20.000
I really liked that visualizer, so I've coded another one that does the same thing but also does cameras, and you can also see a person in the camera view, so you can place your sensors and get a bit of an estimation of, oh, how many cameras do you need, with what resolution and field of view, plus lidars and so on.

05:20.000 --> 05:26.000
And I made a preset list of, you know, sensors you can buy nowadays too, so you can play around.

05:26.000 --> 05:29.000
I accept pull requests.

05:29.000 --> 05:35.000
Yeah, on my side, one thing which I often see going wrong is the coordinate systems.

05:35.000 --> 05:44.000
It happens very often that people just think, oh, surely in this coordinate system the x-axis points left or up or whatever.

05:44.000 --> 05:55.000
This goes wrong all the time. You really have to understand which coordinate system you're using, and the worst part of coordinate system errors is that you can actually

05:55.000 --> 05:59.000
if you make a mistake on top of a mistake, it looks good again.

05:59.000 --> 06:06.000
So I've often seen that people, for example, in the perception, like if you have a perception, prediction,

06:06.000 --> 06:12.000
or a perception, fusion, prediction stack, the people on the perception side maybe do something right.

06:12.000 --> 06:21.000
They use the correct coordinate system, then the fusion people use the wrong coordinate system, then the prediction people use the wrong coordinate system again, and suddenly it's correct again.

06:21.000 --> 06:27.000
But as soon as you start building something on top of the fusion system and you actually use the correct coordinate system, it's wrong.

06:27.000 --> 06:32.000
And if you fix it, you break other parts of your robot.

06:32.000 --> 06:36.000
So yeah, please, please make sure to check every step.

06:36.000 --> 06:44.000
The only solution I have here is to document, to educate people, to visualize things.
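One way to document the convention is to make it explicit in code. This is a sketch: the optical-frame axes shown are a common camera convention, and a real rig would also include the mount rotation:

```python
import numpy as np

# ISO 8855 vehicle frame: x forward, y left, z up.
# A common camera optical frame: x right, y down, z forward.
# Rotation that re-expresses a vehicle-frame vector in the optical frame.
VEHICLE_TO_OPTICAL = np.array([
    [0.0, -1.0,  0.0],   # optical x (right)   = -vehicle y (left)
    [0.0,  0.0, -1.0],   # optical y (down)    = -vehicle z (up)
    [1.0,  0.0,  0.0],   # optical z (forward) =  vehicle x (forward)
])

point_vehicle = np.array([10.0, 2.0, 0.5])  # 10 m ahead, 2 m left, 0.5 m up
point_optical = VEHICLE_TO_OPTICAL @ point_vehicle
# A point ahead of the car must end up with positive z in the optical frame.
assert point_optical[2] > 0
```

Writing the mapping axis by axis, with a comment per row, is exactly the kind of documentation that prevents the "mistake on top of a mistake" situation.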

06:45.000 --> 06:58.000
My tip here, and people who work with me can confirm this, is that I made a 3D-printable model of the ISO 8855 coordinate system. I leave it all over the office.

06:58.000 --> 07:05.000
So I brought a few, but I forgot to get them out of my bag; I'll put them on the table after Sam says something.

07:05.000 --> 07:13.000
You can download this on Thingiverse. The other thing to notice here is that different types of robots have different coordinate systems.

07:13.000 --> 07:20.000
So if you print these things and leave them around your drone company, then your drone company is not going to be happy.

07:20.000 --> 07:23.000
So yeah, keep this in mind, and you can download this.

07:23.000 --> 07:33.000
This image is, by the way, from an open dataset; they somehow used three different coordinate systems in the same dataset.

07:33.000 --> 07:40.000
Okay, now timestamps. Every company we've worked with had issues with them. Why?

07:40.000 --> 07:42.000
Well, I mean, it's not that easy, really.

07:42.000 --> 07:48.000
So the first thing you need to think about is: is the reference time of all your sensors the same?

07:48.000 --> 07:53.000
It may be different computers, different sensors; one may have a clock with a battery, another may not.

07:53.000 --> 07:58.000
So yeah, first check that. Because, did you know the world started on a Thursday?

07:58.000 --> 08:02.000
You know, Unix time starting on the first of January of 1970.

08:02.000 --> 08:10.000
So this happens a lot. Just use any kind of time sync you can, but make sure you have a common time reference in some way.

08:10.000 --> 08:16.000
Or if you can't sync the clocks, even if it's offline, figure it out in a way that ensures you have one.
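A minimal sanity check along those lines might look like this; the thresholds are arbitrary illustrations and would be tuned per system:

```python
import datetime

def check_time_reference(stamp_ns: int, max_skew_s: float = 60.0) -> list:
    """Flag timestamps that betray a missing common time reference (a sketch)."""
    issues = []
    t = stamp_ns / 1e9
    # A clock that was never set reports times near the Unix epoch
    # (1st of January 1970, which was indeed a Thursday).
    if t < 1e9:  # before 2001: almost certainly an unset clock
        issues.append("near-epoch timestamp: sensor clock probably never set")
    now = datetime.datetime.now(datetime.timezone.utc).timestamp()
    if t > now + max_skew_s:
        issues.append("timestamp in the future: clocks not synchronized")
    return issues
```

Running a check like this on every recording catches the "world started in 1970" recordings before they pollute your dataset.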

08:16.000 --> 08:19.000
Another thing is synchronizing the data of your sensors.

08:19.000 --> 08:25.000
Well, you can do hardware triggering, which is great if you can do it, but it can be complicated to implement.

08:25.000 --> 08:28.000
You may have sensors that just don't support it at all.

08:28.000 --> 08:34.000
You may need a mix of hardware triggering with software triggering, which is hell, so think twice about whether you really need it.

08:34.000 --> 08:39.000
Because it's a lot of overhead, maybe the way you use your data doesn't need this.

08:39.000 --> 08:42.000
I mean, it's great to have if you can, but think about it.

08:42.000 --> 08:45.000
And then yeah, we have timestamps, but what do they represent?

08:45.000 --> 08:49.000
It depends on the technology of the sensor, or what you're doing, or what is running.

08:50.000 --> 08:57.000
Is it the trigger time, when you told the sensor, now I want an image, a lidar point, or whatever?

08:57.000 --> 09:02.000
Or is it when you received it from that sensor or from that machine or whatever?

09:02.000 --> 09:05.000
Or when you publish it in your system, like in ROS?

09:05.000 --> 09:08.000
Or when you record it into a bag, for example?

09:08.000 --> 09:12.000
Well, it doesn't really matter which one it is, but just know which one it is.

09:12.000 --> 09:16.000
And if you actually care about multiple of them, well, represent that information.

09:16.000 --> 09:21.000
And again, check your timestamps in your code. If you're bored, just go around and look at your timestamps.

09:21.000 --> 09:23.000
You will find more issues than you think.

09:23.000 --> 09:31.000
It is very easy to get subtle issues: you may put a little bit of code between getting the time and actually getting the data, and you've just messed it up.

09:31.000 --> 09:33.000
It's easy to make mistakes.

09:33.000 --> 09:35.000
I see this mistake so many times.
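That failure mode is easy to reproduce; `read_sensor` below is a hypothetical stand-in for a blocking driver call:

```python
import time

def read_sensor() -> bytes:
    """Hypothetical stand-in for a blocking sensor read (~50 ms)."""
    time.sleep(0.05)
    return b"frame"

# Buggy pattern: the stamp is taken *before* the blocking read,
# so it is off by the entire read duration.
stamp_before = time.monotonic()
data = read_sensor()
hidden_latency = time.monotonic() - stamp_before  # latency the stamp ignores

# Better: take the stamp as close to receiving the data as possible.
data = read_sensor()
stamp = time.monotonic()
```

Fifty milliseconds of hidden latency is more than a whole frame at 30 Hz, which is why this mistake is worth hunting for.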

09:35.000 --> 09:39.000
Yeah, so in terms of getting to know your sensors, Sam?

09:39.000 --> 09:43.000
So, calibration, right? That's one of the topics you wanted to talk about.

09:43.000 --> 09:46.000
I'm actually not going to talk much about it, other than what do you calibrate?

09:46.000 --> 09:49.000
Well, usually the intrinsics and the extrinsics of the sensor.

09:49.000 --> 09:53.000
Intrinsics, for cameras, for example, is things like the distortion of your lens.

09:53.000 --> 09:57.000
So, you can eventually undistort it and make straight lines straight and things like that.

09:57.000 --> 10:06.000
And the extrinsics are nothing else than where my sensor is in relation to another sensor, which is very useful for, well, projecting things into the frame of another sensor.
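As a sketch, an extrinsic is just a rigid transform between two sensor frames; the mounting offsets below are invented for illustration:

```python
import numpy as np

def make_extrinsic(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform mapping points from one sensor
    frame into another."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical rig: lidar and camera share an orientation, and the lidar
# origin sits at (0.0, -0.3, 1.2) expressed in the camera frame.
T_cam_from_lidar = make_extrinsic(np.eye(3), np.array([0.0, -0.3, 1.2]))

p_lidar = np.array([5.0, 0.0, 0.0, 1.0])  # homogeneous point in the lidar frame
p_cam = T_cam_from_lidar @ p_lidar        # the same point in the camera frame
```

Storing the calibration as this kind of explicit transform, alongside the raw data, is what makes it re-applicable later if the calibration turns out wrong.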

10:06.000 --> 10:10.000
Right? Well, this is all great if you calibrate it, but there's a catch.

10:10.000 --> 10:18.000
It's a pain, and calibrations go off all the time; the sensor changes during its lifetime, the temperature changes, and whatnot.

10:18.000 --> 10:20.000
Log the data raw.

10:20.000 --> 10:23.000
It's cool that you can change how the data looks, but log it raw.

10:23.000 --> 10:30.000
As raw as you can, obviously within reason; I mean, we probably don't want raw DSLR-style images for robotics.

10:30.000 --> 10:33.000
Though I may be wrong depending on your application.

10:33.000 --> 10:41.000
And store the calibration that goes with that data at that time, and make it so you can change it later on, in case you discover that your calibration was wrong, right?

10:41.000 --> 10:48.000
And when you undistort images, for example, or use that calibration on them, or canonicalize

10:48.000 --> 10:55.000
your information to change its frame of reference, make sure you do it on purpose; don't just do it by default.

10:55.000 --> 11:01.000
The more raw you work, the closer you are to the original sensor information, which is usually best.

11:01.000 --> 11:05.000
So just take a second to think whether the transform is really better or not.

11:05.000 --> 11:07.000
And a couple of cool tools.

11:07.000 --> 11:12.000
If you want to calibrate cameras, there's a website: you just go there and it opens the camera you have connected to your computer.

11:12.000 --> 11:17.000
And you can calibrate it by showing it your phone with a marker on the screen.

11:17.000 --> 11:18.000
Cool.

11:18.000 --> 11:26.000
Yeah, in terms of lidar point clouds, contrary to the previous talk, I'm generally quite happy with lidar point clouds.

11:26.000 --> 11:35.000
They can be very accurate; they can have only like a two-millimeter error. But there are scenarios where they are not super accurate.

11:35.000 --> 11:47.000
And my biggest problem whenever I'm working with lidar is that it's very difficult to explain to other people in the company what the error scenarios are and what mistakes people make.

11:47.000 --> 11:52.000
And it's also hard to predict errors if you are not familiar with lidar.

11:52.000 --> 11:57.000
With cameras it's easy: everybody understands what a camera is, how it works, what you can expect from it.

11:57.000 --> 12:06.000
And common issues I've seen are just bad lidar-to-vehicle or lidar-to-robot calibration, and bad lidar-to-lidar calibration if you have multiple.

12:06.000 --> 12:09.000
Again, not understanding timestamps.

12:09.000 --> 12:15.000
Dirt: if your lidar gets dirty, if you forget to clean it, you get way less range.

12:15.000 --> 12:20.000
You're probably not going to notice if you're not frequently looking at your lidar point clouds.

12:20.000 --> 12:31.000
And yeah, as already said, wrong expectations of what a lidar can do: trying to detect things at a crazy far distance where you just don't get lidar points.

12:31.000 --> 12:39.000
So one thing which I think is funny with lidar is that everybody seems to build their own visualizer.

12:39.000 --> 12:46.000
And it's very difficult to convey details with a plot because you're trying to put 3D information onto a 2D screen.

12:46.000 --> 12:52.000
And then I always notice that I'm awkwardly trying to zoom in and pan around to show people what the problem is with their lidars.

12:52.000 --> 12:59.000
So because everyone is rolling their own visualizer, I also made my own; it's called Immersive Points.

12:59.000 --> 13:07.000
The feature I have is that I can use it to embed my lidar point clouds with their timestamps into a Jupyter notebook.

13:07.000 --> 13:11.000
So if you're running a notebook server with this, you can do that.

13:11.000 --> 13:16.000
And the best part is that it supports virtual reality natively.

13:16.000 --> 13:22.000
And it actually helps a lot: if you have lidars which are not calibrated with respect to each other,

13:22.000 --> 13:30.000
show your product manager or CEO that this one corner shows up multiple times in the same point cloud.

13:30.000 --> 13:35.000
It's easy if they walk around it. It's harder to show on a 2D plot.

13:35.000 --> 13:43.000
Yeah, you can help me extend it too. I brought a VR headset, but I don't think we have any time to show people; if we have a break, we can.

13:43.000 --> 13:46.000
Yeah, calibrating your sensors.

13:46.000 --> 13:56.000
So one tip here is to always try to visualize sensors with respect to each other.

13:56.000 --> 14:02.000
It's difficult to see that something is wrong if you only show one sensor, but comparing sensors you can quite quickly see it.

14:02.000 --> 14:07.000
But if you, for example, plot the lidar points on the camera image,

14:07.000 --> 14:10.000
and you see that things don't align, you know something is wrong.

14:10.000 --> 14:17.000
Show bounding boxes in both, or try to label bounding boxes in both spaces and see if they align.
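The lidar-on-camera overlay boils down to a pinhole projection; the intrinsics below are invented for illustration:

```python
import numpy as np

def project_to_image(points_cam: np.ndarray, fx: float, fy: float,
                     cx: float, cy: float) -> np.ndarray:
    """Pinhole projection of 3-D points (camera optical frame, z forward)
    into pixel coordinates; points behind the camera are dropped."""
    in_front = points_cam[:, 2] > 0
    p = points_cam[in_front]
    u = fx * p[:, 0] / p[:, 2] + cx
    v = fy * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=1)

# Hypothetical intrinsics for a 1280x720 camera.
pixels = project_to_image(
    np.array([[0.0, 0.0, 10.0],    # straight ahead -> image center
              [1.0, 0.5, 10.0],
              [0.0, 0.0, -5.0]]),  # behind the camera -> dropped
    fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
```

If projected lidar points sit on the objects visible in the image, the extrinsics are probably fine; a constant offset means the calibration has drifted.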

14:17.000 --> 14:21.000
One easy trick is to project your cameras down onto the ground.

14:21.000 --> 14:27.000
And if your camera projections nicely flow into each other, like the features on the road are matching up in the different cameras,

14:27.000 --> 14:33.000
then you can be reasonably confident that it's probably well calibrated.

14:33.000 --> 14:40.000
If you have an HD map and you have good localization, check that the map shown on your robot aligns with the actual world.

14:40.000 --> 14:44.000
And having good ROS transforms makes this trivial.

14:44.000 --> 14:48.000
But it's also a good challenge to make sure you have the correct ROS transforms.

14:48.000 --> 14:54.000
In terms of visualization tools, I really like Foxglove Studio.

14:54.000 --> 14:59.000
As this is still an open source conference: Foxglove is not open source anymore nowadays.

14:59.000 --> 15:03.000
There are some forks, like Lichtblick.

15:03.000 --> 15:07.000
It's a fork of Foxglove from before they closed their repository.

15:07.000 --> 15:15.000
There is also Rerun, which I haven't tried myself, but they are committed to staying open source, so that might be something to take a look at.

15:15.000 --> 15:19.000
Now, let me see.

15:19.000 --> 15:26.000
I tried this multiple times and it always works, but of course not when you actually need it for the presentation.

15:26.000 --> 15:33.000
You really have to imagine that there's a cool video here of our robots.

15:33.000 --> 15:38.000
I'll try at the end, if I have some time left, to rescue the video.

15:38.000 --> 15:39.000
Oh.

15:39.000 --> 15:42.000
Yeah, you can see it on my LinkedIn, I guess.

15:42.000 --> 15:43.000
Good luck.

15:43.000 --> 15:46.000
Let's see. What did you want to say, Sam?

15:46.000 --> 15:51.000
Okay, so we're talking about calibration and data quality and so on, but calibrate yourself too.

15:51.000 --> 15:57.000
You know, every company has something that they call golden data; like, this is their golden dataset.

15:57.000 --> 16:02.000
This is the good data you can use for training, for tests; you can double-check things with it.

16:02.000 --> 16:05.000
Well, why do you consider it golden?

16:05.000 --> 16:08.000
Is it the opinion of one person? Of many people?

16:08.000 --> 16:12.000
Is it because it's better in one aspect than the data you had before?

16:12.000 --> 16:14.000
Is it more than one aspect?

16:14.000 --> 16:16.000
Is it avoiding something that you had an issue with before?

16:16.000 --> 16:18.000
Maybe, you know, you were recording the timestamps wrong.

16:18.000 --> 16:22.000
Maybe you were doing, you know, some other thing wrong; the sensor was badly mounted.

16:22.000 --> 16:24.000
Something was configured wrong.

16:24.000 --> 16:26.000
Maybe it was the wrong sensor.

16:26.000 --> 16:30.000
Or does your golden data incorporate something that you didn't have before?

16:30.000 --> 16:33.000
And are you sure that it fixes what you were missing?

16:33.000 --> 16:35.000
And doesn't break anything else?

16:35.000 --> 16:37.000
Just be careful and check.

16:38.000 --> 16:42.000
Maybe ask yourself: if I had perfect data, what would it look like?

16:42.000 --> 16:48.000
And from that question you may see what you actually need to change in your system to make it look like that.

16:48.000 --> 16:49.000
Better than golden.

16:53.000 --> 16:56.000
Cool. I wanted to end with a couple of easy tricks.

16:56.000 --> 17:01.000
So if you do a couple of simple checks, you can save yourself a lot of hassle later.

17:01.000 --> 17:06.000
One easy trick which always helps me is to monitor the time between samples.

17:06.000 --> 17:14.000
So if you see a large gap between consecutive images, you're probably dropping frames.

17:14.000 --> 17:19.000
Also note here that often you start off with a robot with one camera.

17:19.000 --> 17:22.000
And then this check will show that everything is correct.

17:22.000 --> 17:26.000
And then you start adding sensors and you start computing things on your robot.

17:26.000 --> 17:28.000
Suddenly your buffers are overflowing.

17:28.000 --> 17:32.000
If you have this check in place already, you're going to immediately notice it.

17:32.000 --> 17:35.000
And otherwise it will take you months before you figure this out.
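A sketch of that check; the tolerance of 1.5 times the nominal period is an arbitrary starting point, not a recommendation from the talk:

```python
def find_gaps(stamps_s, nominal_period_s, tolerance=1.5):
    """Return (index, gap) pairs where the spacing between consecutive samples
    exceeds tolerance x the nominal period, hinting at dropped frames."""
    gaps = []
    for i in range(1, len(stamps_s)):
        dt = stamps_s[i] - stamps_s[i - 1]
        if dt > tolerance * nominal_period_s:
            gaps.append((i, dt))
    return gaps

# A 10 Hz camera that silently dropped two frames around t=0.3:
stamps = [0.0, 0.1, 0.2, 0.5, 0.6]
gaps = find_gaps(stamps, nominal_period_s=0.1)
```

Run this per sensor on every recording and alert on any non-empty result; it is cheap enough to leave on forever.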

17:36.000 --> 17:39.000
Yeah, monitor where you start and end your runs.

17:39.000 --> 17:47.000
So if you have a good localization system for your robots, you can check this automatically.

17:47.000 --> 17:54.000
Check that your robot starts the next run at the place where it ended the previous one,

17:54.000 --> 17:56.000
or whether it magically teleported.

17:56.000 --> 18:01.000
If you expected it to teleport, that's fine; but if you didn't, you probably dropped part of a run.

18:02.000 --> 18:04.000
Visualize what data you record.

18:04.000 --> 18:06.000
So keep checking your assumptions.

18:06.000 --> 18:09.000
Are objects visible when you expect them to be?

18:09.000 --> 18:12.000
Are your sensors clean, working, right side up?

18:12.000 --> 18:13.000
Yeah.

18:13.000 --> 18:17.000
Did you put your left camera on the left part of your robot?

18:17.000 --> 18:19.000
Classic.

18:19.000 --> 18:20.000
Classic.

18:20.000 --> 18:22.000
Okay.

18:22.000 --> 18:27.000
I think, through all this talk, what we're trying to say is: visualize your data.

18:27.000 --> 18:30.000
Whatever you can, and make any failure as obvious as you can.

18:30.000 --> 18:35.000
So plot the things that will look wrong because you've had it wrong before, probably.

18:35.000 --> 18:37.000
So make mistakes; mistakes are good.

18:37.000 --> 18:39.000
Look at the details.

18:39.000 --> 18:42.000
After you do that, well, automate all these checks as much as you can.

18:42.000 --> 18:46.000
You know, if you can automate checking whether the image is blurry, whether you're dropping frames,

18:46.000 --> 18:50.000
whether there are occlusions on your sensor: automate as much as you can.

18:50.000 --> 18:54.000
Every time you find a new issue, if you find a way to automate that check, automate it.
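As one example of an automatable check, here is a variance-of-Laplacian blur score, a common heuristic (not the speakers' tool); any threshold has to be tuned per camera:

```python
import numpy as np

def blur_score(gray: np.ndarray) -> float:
    """Variance of a discrete Laplacian over a grayscale image. Low values
    suggest a blurry (or occluded/uniform) image."""
    # 5-point Laplacian on the interior pixels.
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))      # lots of high-frequency detail
blurry = np.full((64, 64), 0.5)   # no detail at all
assert blur_score(sharp) > blur_score(blurry)
```

Wired into the recording pipeline, a score below the tuned threshold can flag a dirty or defocused lens the moment it happens instead of months later.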

18:54.000 --> 18:56.000
And then take a look again.

18:56.000 --> 18:59.000
Visualize, but take a bird's eye view of it.

18:59.000 --> 19:00.000
Make a dashboard.

19:00.000 --> 19:02.000
Track the trends of your data.

19:02.000 --> 19:05.000
Look for the outliers in your data and just keep an eye on it.

19:05.000 --> 19:10.000
And every time you see a trend going down, you go back down into the details, automate, and so on.

19:10.000 --> 19:15.000
By doing that, you know, in very little time you will have really good quality data.

19:15.000 --> 19:16.000
Okay.

19:16.000 --> 19:21.000
Last but not least, these were the tools which we mentioned in the presentation, which you may want to check out.

19:21.000 --> 19:27.000
There's the sensor preview app; Immersive Points for your 3D points in virtual reality;

19:27.000 --> 19:32.000
the ISO 8855 coordinate system models, which I've now put on the table here;

19:32.000 --> 19:35.000
Foxglove; Lichtblick,

19:35.000 --> 19:37.000
which is a fork of Foxglove,

19:37.000 --> 19:43.000
if you don't want to use Foxglove; Rerun, which is still committed to being open source;

19:43.000 --> 19:46.000
and there's the lidar visualizer by Tangram Vision.

19:46.000 --> 19:48.000
Yeah, give it up for these tools.

19:48.000 --> 19:49.000
That's it.

19:49.000 --> 19:50.000
Thank you.

19:57.000 --> 20:01.000
I'll try to rescue the cool video while Sam is answering questions.

20:03.000 --> 20:04.000
Yeah.

20:04.000 --> 20:05.000
Hi.

20:07.000 --> 20:08.000
Is that okay?

20:08.000 --> 20:12.000
Not really a question, just a follow-up on Rerun.

20:12.000 --> 20:18.000
It's the default visualizer for the LeRobot ecosystem.

20:18.000 --> 20:21.000
For, like, you know, the Hugging Face arms.

20:21.000 --> 20:24.000
They use rerun.

20:24.000 --> 20:27.000
And it's kind of, like, yeah, it's a Foxglove-alike.

20:27.000 --> 20:30.000
It has, like, panels and you can open channels.

20:30.000 --> 20:33.000
You know, those squiggly graphs and stuff like that.

20:33.000 --> 20:35.000
So definitely worth checking out.

20:35.000 --> 20:39.000
I don't particularly, you know, have anything other than that.

20:39.000 --> 20:44.000
I will say if anyone has a good comparison by experience from both of them, I'm interested.

20:44.000 --> 20:45.000
So yeah.

20:45.000 --> 20:46.000
Yeah.

20:46.000 --> 20:56.000
Let's talk about it afterwards.

20:56.000 --> 20:57.000
Hi.

20:57.000 --> 21:02.000
Do you have any recommendations about the timestamps?

21:02.000 --> 21:07.000
Because when you're dealing with multiple time systems, like GPS, UTC, and the local

21:07.000 --> 21:13.000
time, and then the ROS bags on top, it gets complicated.

21:13.000 --> 21:17.000
Because every message has two times: sent and received.

21:17.000 --> 21:19.000
So do you have any recommendations?

21:19.000 --> 21:21.000
How to document that?

21:21.000 --> 21:25.000
So it is somewhat easier to manage.

21:25.000 --> 21:27.000
Honestly, you know, it's hard.

21:27.000 --> 21:28.000
It is a hard problem.

21:28.000 --> 21:32.000
So that's why you really need to put some time into it.

21:32.000 --> 21:36.000
And have someone really dedicate the time to it on your system;

21:36.000 --> 21:39.000
just write down good docs, write down your expectations,

21:39.000 --> 21:42.000
and the detailed view of how you work with it.

21:42.000 --> 21:47.000
So everyone understands your perspective when thinking about the timestamps.

21:47.000 --> 21:49.000
It's not easy.

21:49.000 --> 21:51.000
Otherwise, we will not be talking about it.

21:51.000 --> 21:52.000
But yeah.

21:54.000 --> 21:55.000
Cool.

21:55.000 --> 21:58.000
By now, by the way, we've got the video working.

21:58.000 --> 22:00.000
Let's see if it keeps working.

22:02.000 --> 22:03.000
No.

22:03.000 --> 22:06.000
The answer is no.

22:06.000 --> 22:12.000
We'll keep trying, after the next question.

22:12.000 --> 22:14.000
Yep.

22:14.000 --> 22:21.000
I would like to ask if you have some recommendation on calibrating, for example, your sensor to your vehicle.

22:21.000 --> 22:26.000
So for example, if you have a very good localization sensor and you have your lidar,

22:26.000 --> 22:30.000
how do you connect them? Because often the frame of the robot,

22:30.000 --> 22:33.000
sometimes it's like the center of mass, which is floating in space;

22:33.000 --> 22:39.000
how do you approach this calibration from lidar to robot, for example?

22:39.000 --> 22:45.000
In every company I've worked at, there was a different system for doing this.

22:45.000 --> 22:50.000
From just literally measuring in real life where your sensor is and doing the rest with that,

22:50.000 --> 22:57.000
which honestly gives pretty good results if you do the exercise of comparing the sensors to each other,

22:57.000 --> 23:02.000
and against some ground truth. We didn't speak about ground truth, but it's great if you manage to have ground truth somehow,

23:02.000 --> 23:08.000
like a Vicon system, or just GPS RTK, for example, or anything you can come up with,

23:08.000 --> 23:16.000
like markers around a well-known space. Let's say you work with cars and you have a place where you service the cars:

23:16.000 --> 23:21.000
cover every wall with a bunch of markers and, you know, at least know where the walls are.

23:21.000 --> 23:25.000
So you have a reference system to check.

23:26.000 --> 23:30.000
Again, I don't think there's a very easy way to do this.

23:30.000 --> 23:34.000
Smaller robots have it easier; bigger robots like cars are harder.

23:34.000 --> 23:40.000
Your reference point is like the back axle, sorry, I can't think of the word, but you know,

23:40.000 --> 23:46.000
the rear axle line, where you don't really see it and can't really measure it accurately; you need to trust the data sheet.

23:46.000 --> 23:52.000
You usually need to use some optimization method that, based on actual real data, will give you the best estimation,

23:52.000 --> 23:56.000
which it's hard to get right, but that's the best we can do.

23:56.000 --> 24:01.000
I think, nowadays. Otherwise maybe some ML network will come soon and help with this.

24:01.000 --> 24:02.000
Cool.

24:02.000 --> 24:05.000
I'll try to show the visualization again.

24:05.000 --> 24:09.000
So what you see here is the Foxglove view from our car.

24:09.000 --> 24:15.000
So you can see the cameras; the pink points are the radar points,

24:15.000 --> 24:19.000
and they nicely align with the features of the map.

24:20.000 --> 24:25.000
And also the green-blue overlay you saw, that's the map we feed to the robot.

24:25.000 --> 24:30.000
You can see that we've got three cameras on the front projected, and they nicely flow into each other.

24:30.000 --> 24:36.000
You can see the predictions of where the model wants to go, where our trajectory planner wants to go.

24:36.000 --> 24:43.000
The best thing, which you can't see because the video quality is so bad, is that I even got the wheels to spin in the right way.

24:43.000 --> 24:45.000
He's the most proud of that.

24:45.000 --> 24:52.000
I am, man. I spent an afternoon doing this, and they are also turning in the correct direction.

24:52.000 --> 24:55.000
Yeah, that's what I'm most proud of. But it helps a lot.

24:55.000 --> 25:03.000
It helps a lot to see your robots and validate that your radar points are actually at objects,

25:03.000 --> 25:06.000
Your cameras are well calibrated with respect to each other.

25:06.000 --> 25:09.000
Your robot is doing what you think it does.

25:09.000 --> 25:10.000
Yeah.

25:10.000 --> 25:16.000
This kind of visualization should be easy for everyone to access, because people think about the data at different levels,

25:16.000 --> 25:21.000
but they should always be able to go back and make sense of it.

25:21.000 --> 25:22.000
Cool.

25:22.000 --> 25:24.000
Questions?

25:24.000 --> 25:34.000
We had one question from the chat, which is: any tips that apply specifically to stereo cameras?

25:35.000 --> 25:41.000
My experience with them has been pretty bad. I think they are cool when they work very well,

25:41.000 --> 25:45.000
and the vision is awesome; but when it's not, it's hell.

25:45.000 --> 25:48.000
So I've tried to avoid them.

25:48.000 --> 25:52.000
I don't know what the current best way of evaluating stereo cameras is.

25:52.000 --> 25:56.000
But I once made a tool which tried to overlay the two cameras into one.

