WEBVTT

00:00.000 --> 00:07.000
Good afternoon everyone.

00:07.000 --> 00:12.000
Thanks for joining us in this big room.

00:12.000 --> 00:15.000
I'm Kieran Bingham.

00:15.000 --> 00:20.000
I work at Ideas on Board, and we want to make every camera work on every platform.

00:20.000 --> 00:22.000
Simple, easy challenge.

00:22.000 --> 00:28.000
I've been working in embedded Linux for twenty-something years, and with cameras for, somehow...

00:28.000 --> 00:33.000
I don't know how, but the last decade.

00:33.000 --> 00:37.000
It's far more than just me, but I've become the release maintainer for libcamera.

00:37.000 --> 00:40.000
So my name gets thrown in there just for that.

00:40.000 --> 00:42.000
That doesn't mean I do all the work.

00:42.000 --> 00:47.000
I actually have very few commits in libcamera lately, but I just merge everyone's work.

00:47.000 --> 00:51.000
I used to be more of a kernel developer, but now I've just become a manager.

00:52.000 --> 00:57.000
So today I want to talk about these color charts really.

00:57.000 --> 01:03.000
So anyone who was just in the previous talk, which I hope is quite a few of you,

01:03.000 --> 01:09.000
has just learned that we've now got hardware acceleration for the software ISP.

01:09.000 --> 01:11.000
This is really cool for me.

01:11.000 --> 01:18.000
This is amazing, because now, on all the platforms where we have difficulty processing raw Bayer images,

01:18.000 --> 01:28.000
We can do it in real time even when we don't have the open source drivers for the ISPs.

01:28.000 --> 01:34.000
So that brings us on to what do we do next?

01:34.000 --> 01:36.000
What do we need to do to improve this?

01:36.000 --> 01:40.000
So I'm going to go through what problems we currently have and want to solve,

01:40.000 --> 01:42.000
how you tune a camera.

01:42.000 --> 01:45.000
I've got a proposal of what I would like to see happen next.

01:45.000 --> 01:53.000
And a big shout out for help because like I say I just merge other people's code now.

01:53.000 --> 01:55.000
So great.

01:55.000 --> 01:56.000
We've got pictures.

01:56.000 --> 02:03.000
I can turn my laptop on and I can join a web call and I look like that which is stunning.

02:03.000 --> 02:06.000
We are very green but we can fix this.

02:06.000 --> 02:09.000
We've got libcamera shipping in products where we have full tuning.

02:09.000 --> 02:14.000
That's easy; on other products we have to do this per sensor, per camera, per device.

02:14.000 --> 02:16.000
And that's tricky.

02:16.000 --> 02:18.000
So we've got images that come through.

02:18.000 --> 02:22.000
Bryan did a great job of showing how we've got all the Bayer patterns, and the big focus is

02:22.000 --> 02:27.000
the fact that we've got far more green elements, which we'll come onto next.

02:27.000 --> 02:32.000
And when you zoom in on the picture.

02:32.000 --> 02:37.000
we see the whole Bayer pattern, and we've got

02:37.000 --> 02:39.000
two greens, a red, and a blue.

02:39.000 --> 02:40.000
We need to do something about that.

02:40.000 --> 02:45.000
I always had this misconception that it's the fact that you've got two greens that makes the image green.

02:45.000 --> 02:52.000
And it's not because we're going to interpolate those and turn the two greens into a single green anyway.

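NOTE
The averaging of the two green samples can be sketched as follows. This is a toy demosaic over one 2x2 Bayer quad, not libcamera's actual implementation, and the pixel values are invented:
```python
import numpy as np
# Toy 4x4 RGGB Bayer mosaic; values chosen to be easy to check.
raw = np.array([
    [200, 60, 200, 60],   # R G R G
    [70, 20, 70, 20],     # G B G B
    [200, 60, 200, 60],
    [70, 20, 70, 20],
], dtype=float)
def demosaic_2x2(raw):
    """Naive demosaic: collapse each RGGB quad into one RGB pixel,
    averaging the two green samples into a single green value."""
    r = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b = raw[1::2, 1::2]
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)
rgb = demosaic_2x2(raw)  # first output pixel: R=200, G=65, B=20
```
Whatever the interpolation method, the two greens end up as one green channel, so the doubled green sites alone don't explain the green cast.
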
02:52.000 --> 02:58.000
What I've sort of started to understand more is that it's about this thing called crosstalk, or the bleeding of the colors,

02:58.000 --> 03:03.000
And the balance of what the sensor really sees the world as.

03:03.000 --> 03:06.000
It doesn't see green as just green.

03:06.000 --> 03:10.000
These are blended, and with this graph, taken from a datasheet,

03:10.000 --> 03:14.000
You can see that there's a lot of overlap.

03:14.000 --> 03:18.000
So the blue pixels are actually picking up a bit of the green sensitivity.

03:18.000 --> 03:20.000
The red is picking up a bit of green.

03:20.000 --> 03:22.000
So they have a bit of bleed.

03:22.000 --> 03:24.000
But the green picks up all of the green.

03:24.000 --> 03:26.000
Some of the red and some of the blue.

03:26.000 --> 03:28.000
So there's more signal there.

03:28.000 --> 03:31.000
So we need to do something about it.

03:31.000 --> 03:33.000
And that's where these come in.

03:34.000 --> 03:36.000
What's real?

03:36.000 --> 03:39.000
What's the ground truth of what is the real color that the sensor sees?

03:39.000 --> 03:42.000
And we need to convert what the sensor sees into what we want to see.

03:42.000 --> 03:48.000
What's the real color space, sRGB, that we want to present on our screens digitally?

03:48.000 --> 03:53.000
So to do that, we need to take a lot of pictures.

03:53.000 --> 03:55.000
A lot of pictures in various conditions.

03:55.000 --> 03:57.000
This is in my light box at home.

03:57.000 --> 04:02.000
I've got a big set up for this with like six lights that are calibrated.

04:02.000 --> 04:04.000
And we know exactly what color temperature they are.

04:04.000 --> 04:06.000
And I can measure how many lumens,

04:06.000 --> 04:10.000
how much light, how many photons, are coming out of the device.

04:10.000 --> 04:13.000
So, key things to note here.

04:13.000 --> 04:16.000
You'll see I've got a bit of paper in there saying TL83.

04:16.000 --> 04:18.000
So that tells me exactly what the light setup is,

04:18.000 --> 04:19.000
Which lights I've turned on.

04:19.000 --> 04:22.000
And I've got a little device, it's called the Snackle.

04:22.000 --> 04:23.000
A little light meter.

04:23.000 --> 04:25.000
I've got one if anyone wants to see it.

04:25.000 --> 04:27.000
It can measure the color temperature.

04:27.000 --> 04:31.000
The color of the light illuminating the scene.

04:31.000 --> 04:33.000
And also the brightness.

04:33.000 --> 04:35.000
So these are the parameters we want to know.

04:35.000 --> 04:39.000
So that we can say of all these colors that we now know very precisely.

04:39.000 --> 04:41.000
These are calibrated.

04:41.000 --> 04:45.000
So we know exactly what specific pixel values they are.

04:45.000 --> 04:49.000
How do we turn that into what we want to present?

04:49.000 --> 04:53.000
Now, if you look through my photo album.

04:53.000 --> 04:57.000
It's interesting.

04:57.000 --> 04:59.000
It's lots of these pictures.

04:59.000 --> 05:03.000
So I'm taking pictures, as much as I can,

05:03.000 --> 05:05.000
Of any device that people send me.

05:05.000 --> 05:07.000
Of all the color temperatures.

05:07.000 --> 05:09.000
And that I have in my light box.

05:09.000 --> 05:12.000
And this is going to give me not just for one condition.

05:12.000 --> 05:13.000
But many light conditions.

05:13.000 --> 05:16.000
And then we can put that into the tuning process.

05:16.000 --> 05:21.000
So as I said, we've got, like, six lights in my box,

05:21.000 --> 05:23.000
or many in the real world.

05:23.000 --> 05:28.000
Each light's color temperature is going to

05:28.000 --> 05:31.000
make that color chart look different to the sensor.

05:31.000 --> 05:36.000
And we need to figure out how do we turn that back into what we want to see.

05:36.000 --> 05:40.000
So we can actually linearly interpolate this.

05:40.000 --> 05:43.000
So we don't need to have every single possible combination.

05:43.000 --> 05:47.000
You want something really low, 3,000 or 2,000-something Kelvin.

05:47.000 --> 05:50.000
And then something high; 6,500 K is basically sunlight.

05:50.000 --> 05:53.000
If you can get a couple of ranges in there.

05:53.000 --> 05:55.000
You can go crazy and have a lot more.

05:55.000 --> 05:57.000
But here's sort of like a bare minimum.

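NOTE
Interpolating between calibrations for a low and a high illuminant might look like this sketch. The matrices are invented, and real tunings may blend in mireds (1e6/K) rather than in Kelvin:
```python
import numpy as np
# Hypothetical CCMs calibrated at two illuminants (numbers invented;
# each row sums to one, so no digital gain is added).
ccm_3000k = np.array([[ 1.8, -0.5, -0.3],
                      [-0.4,  1.7, -0.3],
                      [-0.2, -0.6,  1.8]])
ccm_6500k = np.array([[ 1.5, -0.3, -0.2],
                      [-0.3,  1.5, -0.2],
                      [-0.1, -0.4,  1.5]])
def interp_ccm(ct, ct_lo=3000.0, ct_hi=6500.0):
    """Linearly blend the two calibrated matrices by colour temperature,
    clamping outside the calibrated range."""
    t = min(max((ct - ct_lo) / (ct_hi - ct_lo), 0.0), 1.0)
    return (1.0 - t) * ccm_3000k + t * ccm_6500k
ccm = interp_ccm(4750.0)  # halfway between the two calibrations
```
With just two well-chosen calibration points, every intermediate illuminant gets a plausible matrix for free.
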
05:57.000 --> 06:03.000
We can capture images and work out the correction we need to apply for those conditions.

06:03.000 --> 06:05.000
Then libcamera can identify what condition you're in,

06:05.000 --> 06:07.000
Apply the relevant correction.

06:07.000 --> 06:10.000
And we can stop having green images.

06:10.000 --> 06:16.000
So for each of those, the calibration tools that we've got will read the chart.

06:16.000 --> 06:19.000
It uses OpenCV to find out where all the grid squares are,

06:19.000 --> 06:20.000
What color they are.

06:20.000 --> 06:21.000
We know the layout.

06:21.000 --> 06:25.000
So we can turn that into a color correction matrix.

06:25.000 --> 06:31.000
And I learnt more than I realized doing this, for this talk.

06:31.000 --> 06:33.000
There's interesting properties of this.

06:33.000 --> 06:36.000
Like the diagonal that I've highlighted in orange down the side.

06:36.000 --> 06:39.000
That's basically the strong component of each channel.

06:39.000 --> 06:42.000
You want to take the red to red, green to green, blue to blue.

06:42.000 --> 06:47.000
It's really important that the rows sum to one.

06:47.000 --> 06:50.000
If we were to apply a correction that was greater than one,

06:50.000 --> 06:54.000
you're going to be adding digital gain to the image.

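NOTE
Deriving a CCM from chart measurements and normalising each row to sum to one can be sketched like this. The reference values and the matrix are synthetic, not from any real tuning, and this is a plain least-squares fit rather than the tool's exact algorithm:
```python
import numpy as np
# Synthetic stand-ins: 'reference' is the known linear RGB of 24 chart
# patches; 'measured' is what a sensor with crosstalk would report.
rng = np.random.default_rng(0)
reference = rng.uniform(0.05, 0.95, size=(24, 3))
true_ccm = np.array([[ 1.6, -0.4, -0.2],
                     [-0.3,  1.5, -0.2],
                     [-0.1, -0.5,  1.6]])
measured = reference @ np.linalg.inv(true_ccm).T
def fit_ccm(measured, reference):
    """Least-squares fit of M with reference ~= measured @ M.T, then
    normalise each row to sum to one so no digital gain sneaks in."""
    m, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)
    ccm = m.T
    return ccm / ccm.sum(axis=1, keepdims=True)
ccm = fit_ccm(measured, reference)  # recovers true_ccm here
```
The row normalisation is exactly the rows-sum-to-one property mentioned above: a row summing above one would brighten that channel, i.e. apply digital gain.
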
06:54.000 --> 06:59.000
So trying to visualize these has been really interesting for me.

06:59.000 --> 07:05.000
I also want to build up the tools to evaluate the tuning process.

07:05.000 --> 07:07.000
So we can see what the data looks like.

07:07.000 --> 07:11.000
At the moment our tuning files are just a big bunch of numbers.

07:11.000 --> 07:12.000
Unreadable.

07:12.000 --> 07:14.000
If someone sends me a tuning file, I just go and merge it.

07:14.000 --> 07:16.000
I don't know what it is.

07:16.000 --> 07:21.000
So I'm trying to build up graphs to see what they all look like.

07:21.000 --> 07:26.000
And we'll see more examples in a minute.

07:26.000 --> 07:31.000
Luca sent me a Fairphone 5, and someone else actually sent me another real device,

07:31.000 --> 07:32.000
which is great.

07:32.000 --> 07:36.000
So I can start capturing these calibrations on real devices.

07:36.000 --> 07:39.000
I've got about eight phones in my home now.

07:39.000 --> 07:41.000
And this is great.

07:41.000 --> 07:46.000
If people send me stuff, I can put it through the box in about an hour and a half to take these pictures.

07:46.000 --> 07:50.000
And then we can start building up these magic numbers that are going to help us

07:50.000 --> 07:55.000
correct the color on devices.

07:55.000 --> 08:01.000
So we take a lot of pictures.

08:01.000 --> 08:03.000
We've got some numbers.

08:03.000 --> 08:09.000
We can put them into the pipeline that Bryan and Hans have just talked about creating.

08:09.000 --> 08:15.000
This is also, as Bryan said, probably not the right sequence,

08:15.000 --> 08:19.000
but I've highlighted the LSC, the AWB and the CCM.

08:19.000 --> 08:24.000
For the most part, those are the components that we need tuning for.

08:24.000 --> 08:29.000
The rest we need to identify from the sensor or something, but these are the ones where we need to know

08:29.000 --> 08:36.000
exactly how the light coming in is going to impact the capture of that.

08:36.000 --> 08:43.000
I was really happy to see lens shading got posted last week, so I'm looking forward to seeing that progress.

08:44.000 --> 08:47.000
AWB kind of comes out of the captures that I'm already taking.

08:47.000 --> 08:51.000
It's one of the things that we can measure: in the chart, we can see the white.

08:51.000 --> 08:52.000
So we can work out.

08:52.000 --> 08:56.000
How do we make that white white, which is the white balance?

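NOTE
Making the white patch white can be sketched as a pair of channel gains. The patch values are invented, and using green as the fixed reference channel is a common convention rather than necessarily libcamera's exact method:
```python
def awb_gains(grey_patch_rgb):
    """Per-channel gains that make a measured grey patch neutral,
    keeping green fixed and scaling red and blue towards it."""
    r, g, b = grey_patch_rgb
    return (g / r, 1.0, g / b)
# A greenish grey straight off the sensor: red and blue need lifting.
gains = awb_gains((0.30, 0.45, 0.25))  # roughly (1.5, 1.0, 1.8)
```
Applying those gains makes R, G and B equal on the grey patch, which is the white balance.
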
08:56.000 --> 08:58.000
And then the one I'm focusing on today is the CCM.

08:58.000 --> 09:04.000
It's kind of sometimes called crosstalk because it's this crossing over of the channels.

09:04.000 --> 09:11.000
There's lots more we can put into these blocks later, as previously touched on.

09:11.000 --> 09:13.000
But there's not so much tuning there.

09:13.000 --> 09:16.000
Well, the denoising and sharpening are going to need a lot of tuning.

09:16.000 --> 09:21.000
We can do more advanced color correction with the 3D look-up table instead of the CCM.

09:21.000 --> 09:28.000
So there's lots more we can do, but my focus at the moment is how do we make the colors look better.

09:28.000 --> 09:37.000
Coming on from the lens shading topic earlier: it's another thing that we need to calibrate.

09:38.000 --> 09:42.000
As Hans said, it's this fall off to the edges.

09:42.000 --> 09:46.000
You can get strong vignetting, and the lens itself is not square.

09:46.000 --> 09:49.000
It's a circle, so you can really see that circular shape there.

09:49.000 --> 09:54.000
And as the light traveling through the glass, I'm not going to say I know the physics,

09:54.000 --> 09:56.000
but it basically goes slower.

09:56.000 --> 10:02.000
The more glass it passes through, the more we have to apply a gain, because it sort of has a higher resistance.

10:02.000 --> 10:11.000
So what we need to do is capture white pictures that are flat,

10:11.000 --> 10:15.000
so we can measure and average out where this happens.

10:15.000 --> 10:25.000
And you get some really cool graphs where you can actually see the shape of the glass, or the shape of the signal traveling through.

10:25.000 --> 10:30.000
So what we're doing is in the centre we only apply a low gain or no gain.

10:30.000 --> 10:35.000
At the edges, we're going to have to amplify the signal to bring it up and balance it.

10:35.000 --> 10:38.000
What I find interesting, if I switch to the next one:

10:38.000 --> 10:40.000
So that's two temperatures.

10:40.000 --> 10:43.000
We've got a 3000 Kelvin and a 5000 Kelvin.

10:43.000 --> 10:45.000
The shape is identical.

10:45.000 --> 10:48.000
But you just see that the scale is different.

10:48.000 --> 10:54.000
Now we have to measure for many temperatures as part of the calibration.

10:54.000 --> 10:58.000
But also each color component.

10:58.000 --> 11:06.000
So the green, the red and blue, they all are affected differently by the physics of traveling through the glass.

11:06.000 --> 11:10.000
So again, it's all just about capturing that data.

11:10.000 --> 11:15.000
We capture the RGB and we can produce these tables.

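NOTE
Producing a gain table from a flat white capture can be sketched like this. The vignetted "capture" is synthetic and the grid size is arbitrary, so treat it as an illustration of the idea, not the tool's real algorithm:
```python
import numpy as np
def shading_gains(plane, cells=4):
    """Average one colour plane of a flat white capture into a coarse
    cells x cells grid, then return the per-cell gain that lifts every
    cell to match the brightest one (usually the centre)."""
    h, w = plane.shape
    g = plane[:h - h % cells, :w - w % cells]
    g = g.reshape(cells, h // cells, cells, w // cells).mean(axis=(1, 3))
    return g.max() / g
# Synthetic flat-field capture with vignetting towards the corners.
y, x = np.mgrid[0:64, 0:64]
r2 = (x - 31.5) ** 2 + (y - 31.5) ** 2
plane = 1000.0 * (1.0 - 0.4 * r2 / r2.max())
gains = shading_gains(plane)  # corner cells get the largest gain
```
Running this once per colour plane and per illuminant gives the family of tables described above: same shape, different scales.
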
11:15.000 --> 11:20.000
Next one.

11:21.000 --> 11:26.000
So now we come back to what we do with all this data.

11:26.000 --> 11:34.000
libcamera has something called the CTT, the camera tuning tools, in the sources.

11:34.000 --> 11:36.000
So it's all in the repository.

11:36.000 --> 11:41.000
So you're going to capture a bunch of pictures like my lovely gallery and store them in a directory.

11:41.000 --> 11:49.000
The tricky part is that you have to make the file name say what the color temperature

11:50.000 --> 11:53.000
of the light was when you captured that file, and the light level.

11:53.000 --> 11:56.000
So you literally have to put the metadata into the file name.

11:56.000 --> 11:59.000
Maybe we want to handle that differently later.

11:59.000 --> 12:05.000
But it means that every time I take a picture, I have to manually go through and rename the files based on what I captured.

12:05.000 --> 12:07.000
That's a bit of a tricky process at the moment.

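NOTE
Reading the conditions back out of a file name might look like this sketch. The exact convention here, name_(kelvin)k_(lux)l.dng, is illustrative (modelled on the Raspberry Pi tuning tool's style), not a confirmed libcamera format:
```python
import re
# Hypothetical convention: <sensor>_<colour temperature>k_<lux>l.dng
PATTERN = re.compile(r"_(\d+)k_(\d+)l\.dng$", re.IGNORECASE)
def capture_conditions(filename):
    """Recover (kelvin, lux) from a file name, since the raw file
    itself carries no record of the illuminant it was shot under."""
    m = PATTERN.search(filename)
    if not m:
        raise ValueError(f"no capture conditions encoded in {filename!r}")
    return int(m.group(1)), int(m.group(2))
conditions = capture_conditions("imx258_3000k_600l.dng")  # (3000, 600)
```
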
12:07.000 --> 12:13.000
I'd love to see all this automated, with some sort of helper tool or our camera application,

12:13.000 --> 12:17.000
where we can just take the picture and give it the data.

12:18.000 --> 12:20.000
But it should be easy.

12:20.000 --> 12:24.000
There's not much to it other than give it a directory of pictures and run the tool.

12:24.000 --> 12:33.000
And that's going to produce you a tuning file in YAML, which you can then put on your device and get better pictures.

12:33.000 --> 12:40.000
For me, the gospel, the bible to read, is written by David Plowman at Raspberry Pi.

12:40.000 --> 12:46.000
I don't think any of this work would have been possible without his expertise.

12:46.000 --> 12:52.000
And he's got an amazing write-up, which really goes into detail about how to understand what's going on here.

12:52.000 --> 13:00.000
My colleague Stefan is working on extending the libcamera tuning documentation to platforms other than Raspberry Pi.

13:00.000 --> 13:07.000
And of course, we're looking at the problem from how do we make this generic for all platforms, not just the one.

13:07.000 --> 13:14.000
So Stefan hosts it himself at the moment, but I'd like to see that move to the libcamera site soon.

13:15.000 --> 13:19.000
Okay, coming back to the problems and issues.

13:19.000 --> 13:27.000
We start with Bayer images, and we've heard how they don't look how we want them to, so we need to make some corrections.

13:27.000 --> 13:30.000
Tuning these devices can be quite hard.

13:30.000 --> 13:34.000
There is a bit of built up knowledge that you get just by doing it.

13:34.000 --> 13:41.000
I'll come to it later, but there's even power line flicker: when I took some of the shots for the lens shading,

13:41.000 --> 13:46.000
you could see the 60 Hz signal oscillating in the images.

13:46.000 --> 13:51.000
I hope we'll build up as a community the knowledge of how we handle all those situations.

13:51.000 --> 13:57.000
And probably the most awkward part is that you need equipment.

13:57.000 --> 14:02.000
You need one of these, you need light, you need to be able to measure things.

14:02.000 --> 14:09.000
But I'm not worried about any of that, I'm worried about this.

14:09.000 --> 14:15.000
You guys are doing a great job in enabling an enormous number of devices.

14:15.000 --> 14:22.000
There's more mobile phones, cameras, laptops, tablets, and every other thing that can now capture from a camera.

14:22.000 --> 14:25.000
And we need to calibrate them all.

14:25.000 --> 14:30.000
And there's not enough of me.

14:31.000 --> 14:37.000
I need minions.

14:37.000 --> 14:45.000
So down the left, do not treat these numbers as real, but I'm trying to say that there's a scale here.

14:45.000 --> 14:54.000
As a community, on the devices we've got, we don't need 100% image quality.

14:54.000 --> 14:59.000
What we need is to get away from the zero percent that we have right now.

14:59.000 --> 15:05.000
And the bare minimum that I think is needed is a colorimeter.

15:05.000 --> 15:09.000
Something to measure the lights, so that you know where you are, and a color chart.

15:09.000 --> 15:14.000
So that comes in at about 130 pounds, which even itself is a bit awkward.

15:14.000 --> 15:20.000
No one wants to buy a phone and be told: oh, by the way, you now have to spend 130 pounds to make the camera work.

15:20.000 --> 15:22.000
But we don't need everyone to have one of these.

15:22.000 --> 15:26.000
We just need some people to have them.

15:26.000 --> 15:29.000
The Snackle is on AliExpress.

15:29.000 --> 15:34.000
Now, it has an accuracy of plus or minus 5%.

15:34.000 --> 15:37.000
So you've got a 10% range there.

15:37.000 --> 15:39.000
It's 50 pounds.

15:39.000 --> 15:43.000
The real ones of these are, like, 2,000 pounds.

15:43.000 --> 15:45.000
It's the cheaper version.

15:45.000 --> 15:47.000
That's what I'm trying to solve here.

15:47.000 --> 15:51.000
I want this to be easily obtainable equipment.

15:51.000 --> 15:55.000
I don't think anyone should cheap out on the color chart.

15:55.000 --> 15:58.000
There are variants of this with different colors.

15:58.000 --> 16:02.000
Then we need to change the tools to know the different actual colors.

16:02.000 --> 16:05.000
But this one is just so pervasive and easy to get hold of.

16:05.000 --> 16:08.000
It's just easy.

16:08.000 --> 16:13.000
If you want to go beyond just capturing a few pictures in various scenes.

16:13.000 --> 16:17.000
What we're doing in our team is building our own light boxes.

16:17.000 --> 16:21.000
So because we need to have a bit more control over the environment.

16:21.000 --> 16:23.000
We buy a box from IKEA.

16:23.000 --> 16:26.000
That gives us a space.

16:26.000 --> 16:30.000
Then we use stage lights because you can accurately control the lights.

16:30.000 --> 16:33.000
And again, when you've got something that can measure what the lights are putting out,

16:33.000 --> 16:36.000
that gives us a good starting point.

16:36.000 --> 16:40.000
A flat panel LED, almost like the ones in the ceiling here,

16:40.000 --> 16:43.000
are great for the lens shading.

16:43.000 --> 16:46.000
You need to be able to have something bright, flat.

16:46.000 --> 16:48.000
Ideally, you can change the color temperature.

16:48.000 --> 16:52.000
And again, on Amazon or AliExpress, you can get one that can change the color temperature.

16:52.000 --> 16:54.000
They're usually quite cheap.

16:54.000 --> 16:57.000
At home, I've got a box that costs me 800 pounds.

16:57.000 --> 16:59.000
That's if you want to go further.

16:59.000 --> 17:00.000
It's big, it's bulky.

17:00.000 --> 17:02.000
No one wants to buy that.

17:02.000 --> 17:04.000
And of course, if you're really going for full commercial here,

17:04.000 --> 17:06.000
then you're going to be spending hundreds of thousands of pounds.

17:06.000 --> 17:09.000
That's not what we're trying to achieve here.

17:09.000 --> 17:15.000
This is kind of my next problem.

17:15.000 --> 17:23.000
As I capture more pictures and more data, it grows.

17:23.000 --> 17:29.000
And when I finally succeed in finding lots of minions to capture all these pictures for me as well,

17:29.000 --> 17:34.000
we're going to have 500 megabytes per sensor-ish.

17:34.000 --> 17:39.000
So if there's three cameras on a phone, we're going to have up to two gigabytes per device.

17:39.000 --> 17:44.000
And I'm already panicking that there's 100 devices or more that we're going to be supporting.

17:44.000 --> 17:46.000
So we're going to have lots of data.

17:46.000 --> 17:50.000
And we need to store that somewhere, and I want to open data,

17:50.000 --> 17:53.000
and I want it shareable so that anyone can work with it.

17:53.000 --> 17:55.000
And...

17:57.000 --> 17:59.000
Did I miss a bit? Sorry.

18:02.000 --> 18:10.000
Yeah, so more than that: there's one of me, or a very small team, with lots of devices.

18:10.000 --> 18:13.000
And if you were Qualcomm making a real phone,

18:13.000 --> 18:15.000
you're going to have 100 people per device,

18:15.000 --> 18:18.000
whereas we're in the opposite spectrum.

18:18.000 --> 18:21.000
We want to support every device with low resources.

18:21.000 --> 18:23.000
So I really want to make as much of this open as possible.

18:23.000 --> 18:25.000
I want all the infrastructure to be open.

18:25.000 --> 18:27.000
I want all the data to be reusable.

18:27.000 --> 18:32.000
And actually, that's really important to me because I don't want...

18:32.000 --> 18:34.000
When we capture this data,

18:34.000 --> 18:36.000
We're still working on the tuning tools.

18:36.000 --> 18:40.000
So in six months, we might have the tuning process improved.

18:40.000 --> 18:44.000
We can't go out and capture the images again on 100 devices.

18:44.000 --> 18:46.000
We need to store that, save it and reuse it.

18:46.000 --> 18:48.000
The data in the picture is not going to change.

18:48.000 --> 18:51.000
We need to just make it reusable.

18:51.000 --> 18:55.000
And as much as possible, I want it to be easy.

18:59.000 --> 19:02.000
So, yeah, there's some tricky parts.

19:02.000 --> 19:08.000
I did want to highlight that if you've got a calibrated light box,

19:08.000 --> 19:10.000
the Snackle is going to disagree.

19:10.000 --> 19:12.000
I use both at the moment.

19:12.000 --> 19:14.000
I've got the calibration from my box,

19:14.000 --> 19:18.000
but I am actually currently using what the Snackle reports.

19:18.000 --> 19:20.000
What it tells me.

19:20.000 --> 19:23.000
I want to evaluate how that will work for the devices.

19:23.000 --> 19:27.000
It can be fiddly, but that's just the mechanics.

19:27.000 --> 19:29.000
We can learn that, we can build on that.

19:29.000 --> 19:31.000
And as long as the color chart's visible,

19:31.000 --> 19:33.000
even the environment shouldn't matter.

19:33.000 --> 19:37.000
We want lots of scenes and environments in there, to have realistic environments.

19:37.000 --> 19:39.000
Realistic shots.

19:41.000 --> 19:43.000
I'm going to not dwell on that.

19:43.000 --> 19:45.000
One important thing I wrote here,

19:45.000 --> 19:49.000
don't capture the light in your shots.

19:49.000 --> 19:54.000
You don't want to have the illuminant in the picture you take.

19:54.000 --> 19:55.000
That's really important.

19:55.000 --> 19:57.000
You only want the reflection.

19:57.000 --> 19:59.000
If you get light in the shot, it's going to saturate.

19:59.000 --> 20:01.000
And that's not what we want.

20:01.000 --> 20:04.000
I actually do most of my tuning at night.

20:04.000 --> 20:06.000
I go into the office with all lights off.

20:06.000 --> 20:07.000
And then I've only got one light.

20:07.000 --> 20:11.000
I don't worry about the outside sunlight or anything like that.

20:11.000 --> 20:14.000
So,

20:17.000 --> 20:19.000
discussing this with a few people,

20:19.000 --> 20:22.000
it was like, how do we get started, where do we put the data?

20:22.000 --> 20:24.000
As a developer, I'm used to git.

20:24.000 --> 20:28.000
I've got control of the camera namespace on freedesktop.org.

20:28.000 --> 20:30.000
So, I've made a git repository.

20:30.000 --> 20:32.000
So, the tunings I've got are up there.

20:32.000 --> 20:34.000
In the tuning repository.

20:34.000 --> 20:37.000
Which is a starting point.

20:37.000 --> 20:38.000
It's easy for me.

20:38.000 --> 20:41.000
I can just commit photos to it on a branch.

20:41.000 --> 20:45.000
I'm using Git LFS, which means that if you clone the repository,

20:45.000 --> 20:47.000
you get main and it's empty.

20:47.000 --> 20:49.000
So, you're not going to clone all this data.

20:49.000 --> 20:51.000
But if you check out onto a branch,

20:51.000 --> 20:56.000
like the Fairphone 5, Git LFS will only copy and download the artifacts

20:56.000 --> 20:57.000
that are used on that branch.

20:57.000 --> 20:59.000
So, that's a good starting point.

20:59.000 --> 21:04.000
But it still doesn't scale to 100 devices and hundreds of gigabytes of data.

21:04.000 --> 21:10.000
I can't imagine wanting to fork that to make a pull request.

21:10.000 --> 21:13.000
And I can't see how it works.

21:13.000 --> 21:16.000
So, I'm kind of at the point of trying to work out how to do it next.

21:16.000 --> 21:18.000
And this is where I want help.

21:18.000 --> 21:19.000
I'm not saying this is the answer.

21:19.000 --> 21:21.000
But if we have something like Google photos,

21:21.000 --> 21:24.000
where you put data into a bucket and a web interface,

21:24.000 --> 21:27.000
maybe that's a solution.

21:27.000 --> 21:35.000
So, I've come across an open source Google Photos solution called Immich.

21:35.000 --> 21:37.000
I'm not saying this is what we need to do,

21:37.000 --> 21:39.000
but it's like this is possibly a way we could do it.

21:39.000 --> 21:43.000
If we can take existing tools, even, and extend that.

21:43.000 --> 21:44.000
That's what we're working with.

21:44.000 --> 21:45.000
We're working with photo albums.

21:45.000 --> 21:48.000
We want to take a collection of photos,

21:48.000 --> 21:52.000
grouped together, but then be able to choose which ones we like

21:52.000 --> 21:56.000
and add metadata to these images.

21:56.000 --> 22:00.000
We're already handling the back end of how to turn these images into

22:00.000 --> 22:01.000
the tuning data.

22:01.000 --> 22:03.000
But as the libcamera team,

22:03.000 --> 22:06.000
we don't have the expertise to do these bits.

22:06.000 --> 22:10.000
I could imagine something that, in that

22:10.000 --> 22:13.000
front end, might let you change the parameters in real time.

22:13.000 --> 22:15.000
Now that we've got the software ISP on the GPU,

22:15.000 --> 22:20.000
I could imagine that could even be built in.

22:20.000 --> 22:24.000
That's low.

22:24.000 --> 22:27.000
I could imagine a system where once you open this album,

22:27.000 --> 22:30.000
you choose the picture you want to investigate.

22:30.000 --> 22:33.000
It might load up a page and then give you some sliders and parameters,

22:33.000 --> 22:36.000
so that you can fix things or adjust it,

22:36.000 --> 22:39.000
and then feed that back into that album's metadata

22:39.000 --> 22:41.000
to fix the tuning for that device.

22:41.000 --> 22:44.000
What I'm looking for here is how can we get other people

22:44.000 --> 22:46.000
able to say, I've got this device.

22:46.000 --> 22:47.000
It's not looking right.

22:47.000 --> 22:48.000
Let me fix it.

22:48.000 --> 22:49.000
Not have to.

22:49.000 --> 22:51.000
Sorry, let the person fix it.

22:51.000 --> 22:52.000
Not me.

22:52.000 --> 22:57.000
So that we can open up that process of making the pictures look good.

22:57.000 --> 23:01.000
One thing I think would be really important there is that

23:01.000 --> 23:04.000
there's this magic term, delta E.

23:04.000 --> 23:09.000
It's the difference between what we expect that color to look like on the chart

23:09.000 --> 23:13.000
versus what it numerically actually is; you sum those up and take an average.

23:13.000 --> 23:15.000
We want that number to be less than 5.

23:15.000 --> 23:18.000
If it's 10, it's visible; if it's 5,

23:18.000 --> 23:21.000
it's not so perceivable by the human eye.

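NOTE
The sum-and-average described above corresponds to the classic CIE76 delta E. A sketch with made-up Lab values; real tools often use the more elaborate CIEDE2000 formula instead:
```python
import numpy as np
def mean_delta_e_76(lab_measured, lab_reference):
    """CIE76 delta E: Euclidean distance between measured and expected
    colours in Lab space, averaged over the chart patches."""
    d = np.asarray(lab_measured, float) - np.asarray(lab_reference, float)
    return float(np.mean(np.linalg.norm(d, axis=-1)))
# Two made-up patches: measured values versus the chart's reference Lab.
measured = [[52.0, -1.0, 1.0], [38.0, 12.0, 15.0]]
reference = [[50.0, 0.0, 0.0], [40.0, 10.0, 14.0]]
score = mean_delta_e_76(measured, reference)  # well under 5
```
A single number like this is what lets the tooling say objectively that one set of parameters beats another.
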
23:21.000 --> 23:25.000
So we've got an actual scientific metric that we can say,

23:25.000 --> 23:27.000
these parameters are better in these conditions.

23:27.000 --> 23:30.000
So we've got something we can build into the system to say,

23:30.000 --> 23:33.000
this is better, this is worse.

23:34.000 --> 23:38.000
And that's where, if you have people to help,

23:38.000 --> 23:40.000
now that we've got these devices,

23:40.000 --> 23:41.000
take pictures.

23:41.000 --> 23:44.000
If we can get people to get hold of the

23:44.000 --> 23:47.000
color charts and the colorimeter,

23:47.000 --> 23:49.000
not everyone has to have those.

23:49.000 --> 23:51.000
Maybe you've got a group of friends and you share

23:51.000 --> 23:55.000
one set or move them around or we might be able to post those

23:55.000 --> 23:57.000
and get captures that way.

23:57.000 --> 24:01.000
But ideally, it has to be a raw picture.

24:01.000 --> 24:04.000
We've got to have that raw data out.

24:04.000 --> 24:08.000
And what I've been doing so far is taking a raw

24:08.000 --> 24:10.000
and a jpeg on an Android device.

24:10.000 --> 24:12.000
So there, actually, I'm kind of cheating.

24:12.000 --> 24:14.000
I'm saying, what's this image look like?

24:14.000 --> 24:18.000
What does the real stack think it should look like?

24:18.000 --> 24:22.000
So I can kind of have like a target.

24:22.000 --> 24:26.000
And I think if we can do that as well, that would be helpful.

24:26.000 --> 24:28.000
If anyone wants to contribute, as I said,

24:28.000 --> 24:30.000
there's lots of things that can be done here.

24:30.000 --> 24:35.000
I need help with how to store it, how to share it,

24:35.000 --> 24:37.000
how to build this interface.

24:37.000 --> 24:40.000
Everything here is like, I don't know the answers.

24:40.000 --> 24:43.000
Yeah, I'm looking for support.

24:46.000 --> 24:48.000
I'm very out of time.

24:48.000 --> 24:49.000
I want to give a shout-out.

24:49.000 --> 24:50.000
Slidev.

24:50.000 --> 24:53.000
This slides stack is open source and I've loved it.

24:53.000 --> 24:54.000
It's brilliant.

24:54.000 --> 24:57.000
My travel is sponsored by Ideas on Board.

24:58.000 --> 25:01.000
And this is all about the libcamera project.

25:01.000 --> 25:04.000
If you want to get involved, come to the IRC or

25:04.000 --> 25:07.000
Matrix channels, mail the list or find us.

25:07.000 --> 25:10.000
Hopefully there's enough of us that stand out in the

25:10.000 --> 25:11.000
hallways now.

25:11.000 --> 25:12.000
Thank you very much.

25:12.000 --> 25:15.000
Thank you.

