WEBVTT

00:00.000 --> 00:19.880
So, welcome. You can hear me? Okay. So, welcome, I'm David Bernard, the creator of CDviz

00:19.880 --> 00:28.720
and a member of the CDEvents group, the topic of today's session, at least partly. I'm an open

00:28.720 --> 00:35.720
source contributor since several years; maybe some of you already use one of my tools.

00:35.720 --> 00:46.120
No? Okay, no problem. So, the origin story: starting in 2023, at a FinTech

00:46.120 --> 00:52.720
scale-up, my team worked on a platform. The purpose of the platform was to accelerate software

00:52.720 --> 01:02.120
development, for software delivery but also machine learning model delivery. So, at the end

01:02.120 --> 01:09.840
of the year, we asked our customers: what do you need for the next

01:09.840 --> 01:18.120
year, to prepare the roadmap? And as part of this roadmap, one of the features

01:18.120 --> 01:25.120
was having more visibility on deployments: being able to see which version is at which stage,

01:25.120 --> 01:32.560
UAT stage, production, maybe development, and to have visibility on the history. This

01:32.560 --> 01:39.040
request also came from product managers, just to know, when they write reports: hey, what

01:39.040 --> 01:46.400
is the current version of the product? So, for my evaluation, I searched for a solution,

01:46.400 --> 01:54.840
and I found none. This was confirmed by some discussions on the CD Foundation channel, but also

01:54.840 --> 02:05.600
by a streamer, and we know that when a streamer says something, it's often true. And the answer

02:05.600 --> 02:12.600
is often: build it yourself. On the CD Foundation channel, that's what was mentioned. So, we

02:12.600 --> 02:23.080
evaluated the cost versus the value: it was low priority, and not a cheap solution.

02:23.080 --> 02:31.760
So then, in 2024, my department was acquired by another company before we started working

02:31.760 --> 02:37.440
on this topic. It was not a priority, but the problem was still present. So, I decided to leave the

02:37.440 --> 02:47.160
company and start working full-time on this open-source solution. But as I left the company,

02:47.160 --> 02:53.560
I also need money. Spoiler: money is not here yet, but maybe with your help and the help

02:53.560 --> 03:01.560
of FOSDEM (thank you!) and of the devroom organizers, whom you can applaud for their work, maybe

03:01.560 --> 03:07.920
it could change. So, the problem: often you start with an idea, a bug that shows

03:07.920 --> 03:17.000
up. You create a ticket, start coding, testing, building, CI, it's fine, you publish to your

03:17.000 --> 03:24.160
package manager, you deploy into the first environment, you test it, and if it's okay,

03:24.160 --> 03:30.920
you promote to the next one. There are multiple ways to promote: maybe you publish to another

03:30.920 --> 03:38.640
artifact repository, etc., etc. And this workflow excludes the case of libraries, where

03:38.640 --> 03:47.200
you stop at publishing and they are injected into another project. But to achieve these goals, we use a

03:47.200 --> 03:54.320
ton of tools, each with its own dashboard. Multiple dashboards: one dashboard for Jenkins,

03:54.320 --> 04:03.200
maybe one dashboard for GitHub, one for ArgoCD; no unified view, no view of the flow

04:03.200 --> 04:15.360
across all those tools. So, that's currently the main issue, as I mentioned: how

04:15.360 --> 04:20.840
can I correlate the versions of the stuff between each stage? What is the deployment frequency?

04:20.840 --> 04:28.000
It could be extracted, but, sorry, what is the deployment frequency

04:28.000 --> 04:41.800
of each deployment? It starts to become harder. So, what is the goal? The goal is

04:41.800 --> 04:47.320
to have a solution that integrates into your existing workflow, not to force you to switch

04:47.320 --> 04:52.080
to another tool in which you would have to redesign everything. It's

04:52.080 --> 04:58.600
very important to observe, to be able to see where my software is at every stage of its life cycle,

04:58.600 --> 05:07.560
in the past but also currently. And when I have this capability to observe, maybe

05:07.560 --> 05:14.720
I could use this capability to react and to drive the deployment, without maybe using a

05:14.720 --> 05:26.360
big orchestrator that could be a blocker. So, as part of my evaluation, I also joined

05:26.360 --> 05:33.240
the CD Foundation and started to contribute to CDEvents. Why CDEvents? It's part

05:33.240 --> 05:40.480
of the CD Foundation, which is part of the Linux Foundation. Multiple versions

05:40.480 --> 05:49.880
have already been released, and there is still a roadmap. The last version was at the end of last year.

05:49.880 --> 05:57.720
And there is a fix release coming, maybe this week or next week.

05:57.720 --> 06:10.440
And I bet on this standardization. But in 2023, CDEvents had some

06:10.440 --> 06:20.440
limitations: minimal tool adoption (you understand, it's young), no production deployments

06:20.440 --> 06:30.680
using it as far as I know, and the specification was not stable. And for an existing tool,

06:30.680 --> 06:37.280
switching to this work-in-progress solution, when each one already has its own format,

06:37.280 --> 06:46.720
it's not pragmatic. But with hope, and by being part of it, we could make a better future.

06:46.720 --> 06:58.680
So I decided to go with it. So, what is CDEvents? CDEvents is mainly a set of JSON schemas

06:58.760 --> 07:12.760
to specify every part of a software life cycle. An event is composed of three blocks. The context

07:12.760 --> 07:19.320
carries information about the event itself: an ID for the event, the timestamp when the event

07:19.320 --> 07:24.840
occurred, and the type. The type is important because it defines the content of the subject,

07:24.920 --> 07:32.600
the second part, the important part. The top-level fields of the subject are always the same:

07:33.800 --> 07:40.440
the ID and the content, plus an optional source and type, which I will not talk about.

07:42.360 --> 07:48.920
And the content of the subject is defined by the context type; that's why it's important.

07:48.920 --> 07:56.840
As you can see, the type is namespaced: it starts with dev.cdevents for normalized events,

07:56.840 --> 08:05.880
and it's followed by what we name the subject, in this case service, followed by the predicate,

08:05.880 --> 08:14.360
in this case, deployed, and the version of the type. We also have the spec version used,

08:15.240 --> 08:23.800
and as CDEvents wants to be extensible, we are allowed to have another optional part

08:23.800 --> 08:32.840
named customData, in which you can put whatever you want. It is important because it's a place

08:32.840 --> 08:41.640
where you can put information not available as part of the subject, at least today. And it's also

08:41.720 --> 08:49.400
extensible because you can create your own types, just don't use the dev.cdevents namespace, and follow the same pattern.

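To make the event structure concrete, here is a sketch (my own illustration, values are hypothetical, and field names follow my reading of the CDEvents spec) of a service deployed payload and how its namespaced type decomposes:

```python
import json

# A minimal CDEvents "service deployed" payload (hypothetical values).
# The context describes the event itself; the subject's content shape is
# defined by the namespaced type "dev.cdevents.service.deployed.<version>".
event = {
    "context": {
        "version": "0.4.1",                      # spec version used
        "id": "271069a8-fc18-44f1-b38f-9d70a1695819",
        "source": "/staging/deployer",           # who emitted the event
        "type": "dev.cdevents.service.deployed.0.1.1",
        "timestamp": "2025-01-30T10:15:00Z",
    },
    "subject": {
        "id": "my-service",
        "type": "service",
        "content": {                             # depends on context.type
            "environment": {"id": "staging"},
            "artifactId": "pkg:oci/my-service@sha256:abc123",
        },
    },
    "customData": {                              # free-form extension point
        "team": "platform",
    },
}

# The type decomposes as: dev.cdevents / subject / predicate / type version.
_, _, subject, predicate, version = event["context"]["type"].split(".", 4)
print(subject, predicate)  # service deployed
print(len(json.dumps(event)) > 0)
```

The same decomposition is what lets a consumer route or filter events without knowing every payload shape in advance.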
08:52.120 --> 08:58.920
In the context, there are also some optional fields, like links and chainId, to help correlate

08:58.920 --> 09:07.080
events with each other. So if we go back to the previous slide, this time with subjects and predicates,

09:07.800 --> 09:14.280
you can see that we have ticket, change, branch, incident, taskRun, testCaseRun,

09:15.080 --> 09:18.280
and for each we have some predicates, I could say actions.

09:21.240 --> 09:26.680
And more of them. So, with this definition of events,

09:27.400 --> 09:38.280
I started to work on CDviz. So what do you have in your company today? You have some tools

09:38.280 --> 09:47.880
used in your software development, at the bottom, and some of them already emit CDEvents,

09:47.880 --> 09:54.040
but maybe not the latest version of CDEvents; each tool can use a different version of CDEvents.

09:54.280 --> 10:03.160
At the top, you have your enterprise dashboards, maybe used by your platform engineers,

10:03.160 --> 10:11.960
the DevOps teams, maybe by the C-levels, maybe by data analysts. And maybe you use several of them

10:13.640 --> 10:19.240
in your company. And as I mentioned, the first goal is to have visibility about

10:19.880 --> 10:30.520
what is running. So we start from the dashboard part. Oops, too fast. So, to be able to show

10:30.520 --> 10:38.680
data, we need to store data. So for the start, I went with

10:39.320 --> 10:45.960
PostgreSQL, because it's flexible and I can test several approaches with it.

10:45.960 --> 10:52.440
And I'm pretty happy with it. I enabled TimescaleDB, because events are time-series data,

10:52.440 --> 11:01.000
so it brings a lot of features and more. But the real core part is the collector.

11:01.800 --> 11:08.440
The purpose of the collector is to retrieve information from CDEvents providers

11:08.680 --> 11:17.320
and non-CDEvents providers. And as we have this collector available to push data

11:17.320 --> 11:24.920
into the database, we can also use it to send CDEvents to a third-party system:

11:26.120 --> 11:32.840
maybe a workflow tool, maybe a queue system; and on the other side of the queue system, you have another

11:32.840 --> 11:40.840
collector. You can create a kind of topology, because you may want to aggregate from different

11:40.840 --> 11:44.600
collectors, each one integrating some tools.

11:47.160 --> 12:01.240
So, inside the CDviz collector. The collector is like an ETL: extract, transform, load. The difference is that extract,

12:01.320 --> 12:05.560
transform, and load are all part of the source. It can take information from

12:07.320 --> 12:13.560
various sources: HTTP webhook, but also HTTP server-sent events,

12:14.840 --> 12:22.840
file system or bucket; Kafka is already supported. But most of the tools you want to integrate

12:23.720 --> 12:31.800
use a webhook, use HTTP today, so this is the main part. And at the output of the source,

12:32.520 --> 12:40.600
we have CDEvent objects pushed into the queue, and out of the queue they are dispatched to a bunch of

12:41.640 --> 12:48.680
sinks. A sink could be a log, could be a file, could be the database, could be HTTP, could be Kafka,

12:49.080 --> 12:59.080
and more. The collector is provided as a CLI, as a Docker image, as a Kubernetes deployment,

13:00.040 --> 13:07.800
but also as a GitHub Action. Why? Because in the CLI, you have some

13:07.800 --> 13:14.600
additional features, including a command to push events: you can do the transformation and send

13:14.600 --> 13:21.000
CDEvents via HTTP or a Kafka client directly. And it also allows you to use it locally,

13:21.000 --> 13:29.000
without having to deploy. Inside the source, as I mentioned, we have the extractor part,

13:29.720 --> 13:36.360
a set of transformers, a chain of transformers, and the loader. The purpose of the loader is

13:36.360 --> 13:44.280
to validate that it's a correct CDEvent that can be pushed to the queue. Each transformer

13:45.160 --> 13:55.720
can split the payload, log it, create new events, do a bunch of stuff. There are some

13:55.720 --> 14:02.440
hardcoded transformers if needed, but you write transformers using a language named Vector

14:02.520 --> 14:10.360
Remap Language (VRL), a language created and pushed by Datadog, used by Vector,

14:10.360 --> 14:18.920
their log aggregator. It is really powerful, and my repository includes example events and benchmarks

14:18.920 --> 14:29.400
to compare the various solutions. And the extractor can be used from a producer that

14:29.480 --> 14:35.720
pushes data, like a webhook, but you can also grab information by polling, for example

14:35.720 --> 14:43.720
looking at a bucket and saying: hey, there is a new report, there are new artifacts in the

14:43.800 --> 14:59.800
bucket, etc. So input can be file, file metadata, JSON, JSON Lines, CSV, and, next week, XML.

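The source pipeline described here (extract, a chain of transformers, a loader that validates before queueing) can be sketched in Python. This is not CDviz code, and the webhook payload shape is invented; in the real collector, the transform step would be written in VRL:

```python
import json
import uuid
from datetime import datetime, timezone

def extract(raw_body: str) -> dict:
    """Extract step: parse the incoming webhook body (JSON here)."""
    return json.loads(raw_body)

def transform(payload: dict) -> dict:
    """Transform step: map a tool-specific payload to a CDEvent.
    Plain Python stands in for VRL for illustration."""
    return {
        "context": {
            "version": "0.4.1",
            "id": str(uuid.uuid4()),
            "source": payload.get("source", "/unknown"),
            "type": "dev.cdevents.service.deployed.0.1.1",
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
        "subject": {
            "id": payload["service"],
            "type": "service",
            "content": {"environment": {"id": payload["env"]}},
        },
    }

def load(event: dict) -> dict:
    """Load step: validate minimal structure before pushing to the queue."""
    assert event["context"]["type"].startswith("dev.cdevents.")
    assert {"id", "type", "content"} <= event["subject"].keys()
    return event

# Hypothetical deploy notification arriving on a webhook:
body = '{"service": "billing", "env": "staging", "source": "/ci/deploy"}'
queue = [load(transform(extract(body)))]
print(queue[0]["subject"]["id"])  # billing
```

The point of the loader step is that only well-formed CDEvents ever reach the queue, so every sink downstream can trust the shape of what it receives.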
15:02.360 --> 15:06.360
I will explain why XML later. And the database is pretty simple.

15:06.360 --> 15:17.000
In the CDEvents language, this database is also known as an evidence store. It is simple because

15:17.000 --> 15:25.160
I want to allow enterprises to own the data and maybe to extend the schema as they want.

15:26.840 --> 15:35.400
So, to ingest data, you call a stored procedure. And what is done by this stored procedure is

15:35.400 --> 15:42.440
up to you; the default one, the provided one, just stores into a table that works like a data

15:42.440 --> 15:48.440
lake, where we have the raw JSON and some extracted metadata: the timestamp and the type,

15:48.440 --> 15:58.120
the subject and the predicate. And if you want, you can extend it with new views,

15:59.080 --> 16:05.560
materialized views, maybe compression, retention policies, etc., whatever your database and your

16:05.560 --> 16:18.040
knowledge allow. And on the dashboards, we have some key metrics, etc.

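A minimal stand-in for this default ingestion path, with SQLite playing the role of PostgreSQL/TimescaleDB; the table and column names are my guesses, not the actual CDviz schema:

```python
import json
import sqlite3

# Data-lake style table: raw payload plus a few extracted metadata columns.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE cdevents_lake (
        imported_at TEXT,
        payload     TEXT,   -- the raw CDEvent JSON, kept whole
        ts          TEXT,   -- extracted from the context
        subject     TEXT,   -- e.g. 'service'
        predicate   TEXT    -- e.g. 'deployed'
    )
""")

def store_cdevent(event: dict) -> None:
    """Plays the role of the default stored procedure: extract metadata,
    keep the raw JSON untouched so the schema can be extended later."""
    parts = event["context"]["type"].split(".")  # dev.cdevents.<subject>.<predicate>.<ver>
    db.execute(
        "INSERT INTO cdevents_lake VALUES (datetime('now'), ?, ?, ?, ?)",
        (json.dumps(event), event["context"]["timestamp"], parts[2], parts[3]),
    )

store_cdevent({
    "context": {"version": "0.4.1", "id": "x",
                "type": "dev.cdevents.service.deployed.0.1.1",
                "timestamp": "2025-01-30T10:15:00Z"},
    "subject": {"id": "billing", "type": "service", "content": {}},
})
row = db.execute("SELECT subject, predicate FROM cdevents_lake").fetchone()
print(row)  # ('service', 'deployed')
```

Keeping the raw JSON next to the extracted columns is what allows views and materialized views to be added later without re-ingesting anything.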
16:18.760 --> 16:35.880
My favorite, and where I started, is the custom dashboards. They are used to track artifact

16:35.880 --> 16:43.240
versions across the pipeline. So each vertical line is one version of an artifact.

16:44.120 --> 16:54.200
Each horizontal line is a stage: published, maybe tested, deployed as part of a service

16:54.200 --> 17:00.600
into an environment, etc. And with this information, in the table on the other side,

17:00.600 --> 17:06.120
you have access to the frequency of the events, the average duration of

17:06.120 --> 17:13.240
promotion between each stage, and the latest version at each stage.

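The average promotion duration and latest version per stage mentioned here can be derived from the stored events; a toy computation of my own, assuming time-ordered (version, stage, timestamp) rows:

```python
from datetime import datetime

# (version, stage, timestamp) rows as they might come out of the store.
events = [
    ("1.0.0", "published", "2025-01-01T10:00:00"),
    ("1.0.0", "deployed",  "2025-01-01T12:00:00"),
    ("1.1.0", "published", "2025-01-05T09:00:00"),
    ("1.1.0", "deployed",  "2025-01-05T10:00:00"),
]

def parse(ts: str) -> datetime:
    return datetime.fromisoformat(ts)

# Latest version seen at each stage (rows are already time-ordered here).
latest = {}
for version, stage, ts in events:
    latest[stage] = version

# Average published -> deployed promotion duration, in hours.
published = {v: parse(ts) for v, s, ts in events if s == "published"}
deployed = {v: parse(ts) for v, s, ts in events if s == "deployed"}
durations = [(deployed[v] - published[v]).total_seconds() / 3600
             for v in published if v in deployed]
avg_hours = sum(durations) / len(durations)

print(latest["deployed"])  # 1.1.0
print(avg_hours)           # 1.5
```

In the real dashboards this kind of aggregation would live in SQL views over the evidence store rather than in application code.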
17:16.600 --> 17:23.320
So, I'm pretty proud of it, but to be honest, I had to

17:23.320 --> 17:31.160
create it because I didn't find an existing one. Originally, I planned to make a live demo, but I bypassed the demo

17:32.040 --> 17:41.560
and captured the material directly. So in production, maybe I have a single line for an artifact

17:41.560 --> 17:50.600
that I don't manage, I just deployed it. That's fine. And sometimes,

17:50.600 --> 17:55.720
I have other bugs, because transformers need to be configured for the transformation. In this case,

17:55.800 --> 18:01.160
it's a conflict between kubewatch events, because I watch Kubernetes directly to know what is

18:01.160 --> 18:10.200
deployed, and ArgoCD: the naming conventions are not well aligned, hence the glitches in the graph.

18:11.320 --> 18:19.560
And this one is a mix of all the information, pretty unreliable. But you can also create dashboards

18:19.560 --> 18:30.680
about your pipelines: test pipelines, CI pipelines, etc. So you have another view of the time,

18:31.800 --> 18:39.640
the queue, the duration, the failures, and more that you can create. If you want to get started,

18:39.640 --> 18:46.200
I also provide a Docker Compose setup where you can play with it. With this Docker Compose, there are

18:46.200 --> 18:54.200
already a database, a collector, and a Grafana instance. And inside Grafana, there is a page

18:54.200 --> 19:03.400
where you have random data and a few forms to send service.deployed events and incidents.

19:03.400 --> 19:08.600
So it could be exactly as if you provided this application to your customers to say:

19:08.600 --> 19:15.480
this version is deployed, I have this incident in production. And the information is used to annotate

19:15.560 --> 19:27.560
the graph below. So, what worked? I bet early on CDEvents, and for part of it, I expect

19:28.760 --> 19:37.000
it will succeed at some point in time. I started simple, and I used some common tools,

19:37.960 --> 19:49.640
keeping things relevant and easy to extend: create your own views, etc. So it follows,

19:51.240 --> 19:55.960
and maybe later I will create extra metrics, etc., but you are free to do it yourself.

19:57.000 --> 20:02.840
Don't lose information. And because it's modular, users can keep what they want.

20:03.800 --> 20:10.840
If you want to just use the collector to collect data and store it in another database,

20:10.840 --> 20:17.800
it's fine. If you use another sort of database, it's fine. Of course, if you change the database,

20:17.800 --> 20:25.800
the dashboards will have to be aligned. So, users can customize. There is a bunch of stuff that

20:25.800 --> 20:38.680
didn't work. The adoption of CDEvents is not very wide yet. It's also the purpose of the collector to help:

20:39.320 --> 20:47.320
you can use it without tool adoption, but by using it without adoption, we lose part of the information.

20:48.280 --> 20:55.080
And a big part is how to correlate information. Because contrary to distributed

20:55.080 --> 20:59.800
tracing, where you have a trace ID that is propagated inside the process, and you have the

20:59.800 --> 21:07.240
trace ID outside, and then you can correlate, here it is not possible. So you have to create some rules

21:07.880 --> 21:14.520
to do it. So, the challenges: naming and namespaces, correlation, dashboard audiences, maybe you have

21:14.600 --> 21:21.160
DevOps, C-levels, developers that want different targets, etc. And the documentation and the marketing,

21:21.160 --> 21:31.960
which I'm working on. So, we are building on an emerging standard. So there are some

21:31.960 --> 21:39.480
promises and some problems where we need your help. On CDEvents, we need your feedback,

21:41.080 --> 21:45.160
and CDviz also needs your feedback, and, if you're happy, your money.

21:51.240 --> 22:00.760
So, what is coming on the roadmap for CDviz? More dashboards, dedicated to tests. They will

22:00.840 --> 22:07.720
come in Q1. It's also why XML will be added as input, to be able to read directly,

22:07.720 --> 22:16.360
for example, the JUnit reports, and to integrate this information. More sources, more default dashboards

22:16.360 --> 22:28.280
from your feedback, and integration of all feedback. And if it goes well, maybe also using CDviz,

22:28.280 --> 22:36.600
and especially the collector, to trigger your workflows, and to start to move toward event-driven CI/CD.

22:40.360 --> 22:45.320
So, building on an emerging standard is risky, but I expect it to be rewarding,

22:46.280 --> 22:53.960
and someone has to go first. CDEvents and CDviz are young, they don't fix all the

22:53.960 --> 23:01.800
problems, but you can start to use them today, because CDviz can also take care of,

23:01.800 --> 23:09.880
for example, transforming old CDEvents into the new CDEvents format with a transformer.

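Such an upgrade transformer could look like the sketch below; the field rename is invented for illustration, and real migrations depend on the actual spec changes between versions:

```python
import copy

def upgrade_cdevent(event: dict) -> dict:
    """Upgrade a CDEvent emitted with an older spec version to a newer one.
    The rename below is a made-up example of a breaking change."""
    out = copy.deepcopy(event)  # never mutate the stored original
    if out["context"]["version"].startswith("0.3"):
        out["context"]["version"] = "0.4.1"
        # Hypothetical migration: 'env' moved under content.environment.
        content = out["subject"]["content"]
        if "env" in content:
            content["environment"] = {"id": content.pop("env")}
    return out

old = {"context": {"version": "0.3.0",
                   "type": "dev.cdevents.service.deployed.0.1.1"},
       "subject": {"id": "billing", "type": "service",
                   "content": {"env": "prod"}}}
new = upgrade_cdevent(old)
print(new["context"]["version"])                 # 0.4.1
print(new["subject"]["content"]["environment"])  # {'id': 'prod'}
```

Running such a transformer in the collector chain means old emitters keep working unchanged while every sink sees only the new format.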
23:12.920 --> 23:18.920
So, we are looking for users, contributions, stars, feedback, comments, including

23:19.880 --> 23:26.760
why you think it's useless for you, what is missing, and what to improve. Thank you.

23:34.600 --> 23:42.680
We have two minutes, maybe two quick questions, or one. Thanks for the presentation,

23:42.680 --> 23:48.680
and happy to see that there's some thought about CI/CD. I have just one question about

23:48.680 --> 23:53.560
the standardization and the semantic conventions. Did you have the opportunity to talk

23:53.560 --> 24:02.920
to the OpenTelemetry people in your process? Yes, so, a recurrent question: do we work with the Open

24:02.920 --> 24:11.960
Telemetry CI/CD group? No. To be clear, people often say we should discuss; maybe we have, it's just that

24:12.280 --> 24:18.440
I don't participate, I'm not part of it. There were some discussions at some point. We look at what they

24:18.440 --> 24:23.720
are working on, and we work on another approach. Our goal is to have something more

24:23.720 --> 24:30.920
pragmatic and usable on our side. But we also use them as inspiration: the attributes

24:30.920 --> 24:40.600
they define, etc., as part of what I started to use in customData, just to look at it. Thank you.

