WEBVTT

00:00.000 --> 00:14.640
Yeah, FIR filter design basically. There's a little bit of math in this talk. I have graphs

00:14.640 --> 00:18.880
for almost everything. So if you want to, you can ignore the formulas and just

00:18.880 --> 00:24.560
look at the graphs and also the last half of the talk is practical examples and applications.

00:24.560 --> 00:32.160
So if you become lost or bored, that is a good point to rejoin. So, first, a

00:34.720 --> 00:40.880
quick show of hands? Who knows about the Parks-McClellan, also known as Remez, filter design algorithm?

00:42.160 --> 00:50.400
Okay, a few people. Next question: who does pip install pm-remez? You can substitute

00:50.480 --> 00:54.560
pip install with whatever is your favorite way of installing Python packages?

00:56.720 --> 01:03.280
Marcus, maybe? Yeah. Okay, so the goal of this talk is to get more people raising hands for

01:03.280 --> 01:08.960
both of these questions, especially the first one because the algorithm is good. The other one

01:08.960 --> 01:14.800
is, I think, a good way of using it in Python, but you can use your favorite implementation.

01:15.520 --> 01:24.320
So a quick recap on FIR design. If we want to design a low-pass filter, the filter is supposed

01:24.320 --> 01:30.800
to be one, which means letting the signal through, in the passband, and exactly zero in the stopband,

01:30.800 --> 01:38.000
which means killing all of the signal here. And of course that's mathematically impossible. So the

01:38.000 --> 01:45.280
best thing we can do is something like this, which is a practical example of a 64 tap FIR filter.

01:45.280 --> 01:51.920
We have 64 numbers and this is the frequency response we get. This is a design with a Hamming

01:51.920 --> 01:59.840
window using the firwin method. So this is something you might have done already. The two main

01:59.840 --> 02:07.680
differences from the ideal filter are the transition band, because we cannot have this steep jump

02:07.680 --> 02:14.160
from 1 to 0. And then also if we look at the passband and the stopband in more detail,

02:14.160 --> 02:21.360
you can zoom in. And then in here we can see that there is a ripple. So it's not exactly one.

02:21.360 --> 02:28.800
It wiggles with some error. And the stopband is normally plotted in decibels.

02:28.800 --> 02:33.600
So we can see it ripples about zero. And that means we are not cancelling signals. We are

02:34.560 --> 02:40.800
attenuating them by about 60 decibels. Okay, can we do better than something like this?
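
The window method described here can be sketched in a few lines of NumPy (a windowed-sinc low-pass; the 0.25 cutoff and the frequencies used for the measurement are illustrative choices, not the talk's exact numbers):

```python
import numpy as np

# Window method (a sketch): truncate the ideal sinc impulse response and
# taper it with a Hamming window. 64 taps, cutoff 0.25 (Nyquist = 0.5).
num_taps = 64
cutoff = 0.25
n = np.arange(num_taps) - (num_taps - 1) / 2
h = 2 * cutoff * np.sinc(2 * cutoff * n)   # ideal low-pass impulse response
h *= np.hamming(num_taps)                  # taper to control the ripple
h /= h.sum()                               # normalize to unity DC gain

# Measure the worst-case stopband level on a dense frequency grid.
H = np.fft.rfft(h, 4096)
freqs = np.fft.rfftfreq(4096)
atten_db = 20 * np.log10(np.max(np.abs(H[freqs > 0.35])))
```
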

02:40.800 --> 02:46.800
That is the question. So let's say I have a fixed number of taps because computation increases

02:46.800 --> 02:51.840
with the number of taps. So let's say that with the computation I can afford, I have 64 taps:

02:51.840 --> 03:00.480
can I do any better than this window method? And the idea is, so I'm always going to have an error,

03:01.440 --> 03:06.000
but what I want to do is, with this fixed number of taps, I want to find the FIR filter which

03:06.000 --> 03:16.080
has the lowest maximum error. What is this maximum error? So for example, in here for the blue one,

03:16.080 --> 03:20.720
which is the window method, the maximum error is this one. This one here is better: it has a lower error.

03:21.680 --> 03:28.720
But if the thing we want to optimize is the maximum error, the algorithm we need is this

03:28.720 --> 03:33.680
Parks-McClellan algorithm. That's basically the goal of the algorithm. And we can see that

03:33.680 --> 03:39.760
in orange, we basically get a much better filter with exactly the same number of taps. So that's the

03:39.760 --> 03:46.560
first thing I want to make clear in the talk: you should be using this. For most FIR design applications,

03:46.560 --> 03:52.160
it's the right algorithm to use. You just get much better filters. As you can see in here,

03:52.240 --> 04:03.200
there are no tricks; it's exactly the same number of taps. So, okay, this is kind of the mathy way of

04:04.080 --> 04:09.840
stating what the algorithm does; it's basically what I said. It finds the FIR filter with

04:09.840 --> 04:15.360
a fixed number of taps that minimizes the maximum error. And you can also put a weight function

04:15.360 --> 04:20.480
for the error, which is going to be useful in the practical applications. We will see a couple examples.
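
As a sketch of this minimax statement, SciPy's remez implementation (which comes up later in the talk) produces an equiripple design; with equal weights the passband and stopband errors come out equal. The band edges below are illustrative:

```python
import numpy as np
from scipy import signal

# Minimax design (a sketch): band edges in (start, stop) pairs, one
# desired gain per band. With no weights, the equiripple property makes
# the maximum error the same in both bands.
taps = signal.remez(64, [0, 0.2, 0.3, 0.5], [1, 0], fs=1.0)

w, H = signal.freqz(taps, worN=8192, fs=1.0)
passband_err = np.max(np.abs(np.abs(H[w <= 0.2]) - 1))
stopband_err = np.max(np.abs(H[w >= 0.3]))
```
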

04:23.040 --> 04:32.240
A technical detail: this is only supposed to work for real filters. It actually needs to have real

04:32.240 --> 04:40.480
taps, and the impulse response, that is the taps, needs to be either symmetric or anti-symmetric,

04:40.480 --> 04:46.800
basically even-symmetric or odd-symmetric. That is equivalent to the filter having linear phase,

04:47.440 --> 04:53.040
which is the same as having constant group delay. So unless you are doing some dispersion

04:53.040 --> 04:58.720
compensation or dispersion cancellation, these are the types of filters you are going to work with.

04:58.720 --> 05:04.400
If you need to do some dispersion, then this is not the right algorithm. It just doesn't work.
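
The linear-phase property is easy to check numerically. This sketch builds an even-symmetric set of taps (random values, just for illustration) and verifies that, after undoing the constant group delay of (N - 1)/2 samples, the response is purely real:

```python
import numpy as np

# Even-symmetric taps imply linear phase, i.e. constant group delay.
rng = np.random.default_rng(0)
half = rng.standard_normal(16)
h = np.concatenate([half, half[::-1]])   # h[n] == h[N - 1 - n], N = 32
N = len(h)

f = np.fft.rfftfreq(1024)
H = np.fft.rfft(h, 1024)
# Multiplying by exp(+j*2*pi*f*(N-1)/2) removes the linear phase term;
# what remains should have zero imaginary part.
residual = np.max(np.abs((H * np.exp(2j * np.pi * f * (N - 1) / 2)).imag))
```
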

05:07.840 --> 05:15.280
So let's take a detour on polynomial approximation. Let's say I have this Gaussian function in blue.

05:15.920 --> 05:22.480
And I want to find the polynomial, which approximates this function. For simplicity,

05:22.480 --> 05:27.600
I want to find a degree five polynomial, just because if I had a higher degree,

05:28.400 --> 05:34.800
the plot would get messier. A degree-five polynomial has six coefficients. So with these six numbers,

05:34.800 --> 05:39.120
I want to approximate this Gaussian function, which is just an example. It could be whatever

05:39.120 --> 05:47.520
continuous function we want to have here. And there is this theorem by Chebyshev,

05:47.520 --> 05:56.400
which says that my polynomial is the best uniform approximation to this function if there is this

05:57.520 --> 06:03.840
equioscillation property, and I'm going to explain what that means with a diagram,

06:04.640 --> 06:11.520
because it's easier to see than the formal math you have up there. So to come up with

06:11.520 --> 06:18.320
this polynomial, what I've done is choose interpolation nodes, in green. These are the six

06:18.320 --> 06:25.760
green points. And then I compute the unique polynomial, which interpolates the function. It basically

06:25.760 --> 06:30.720
has the same value as the function in the green points. And you can see that the two

06:31.680 --> 06:36.800
blue and orange plots cross each other at the green points. That's how we come up with the

06:36.800 --> 06:46.800
polynomial. And then we look at the local extrema of the error. So that's here, here,

06:47.440 --> 06:58.400
here: these red points. That's where the orange and blue traces are locally furthest apart.

06:58.480 --> 07:03.600
Right, it's an extremum: a local maximum or local minimum of the error.

07:04.640 --> 07:10.960
And what happens here, you might notice here the error is pretty large, here the error is pretty small,

07:10.960 --> 07:18.320
right? So one thing which is intuitive: maybe I can wiggle the coefficients of my

07:18.320 --> 07:26.000
polynomial to try to make this thing go up and so the error decreases. And because nothing is free

07:26.080 --> 07:32.080
in life when you are trying to optimize for something, then I would have to pay with something else,

07:32.080 --> 07:38.800
which is probably that the error here grows. So if I try to push this red thing up, the other red

07:38.800 --> 07:46.560
thing would go down. Okay, that's fair. But I mean, if I want to minimize the maximum error,

07:46.560 --> 07:52.480
that's the right thing to do, right? I want to basically push this up. This will automatically

07:52.480 --> 07:59.200
push down, until both of them have the same error. So that's kind of the idea of this theorem.

07:59.840 --> 08:05.520
And then the question is, I'm mentioning I need to push these things, so how do I actually do it?

08:07.600 --> 08:14.560
So the thing is basically we start with points on this interval in which we want to do the

08:14.560 --> 08:20.640
approximation. They can be any points; you can take them equally spaced or anything.

08:21.600 --> 08:29.280
We solve this linear system. And here the thing is, this term is supposed to be the error.

08:30.000 --> 08:35.680
And then we have this plus and minus one because the error is supposed to alternate. So if I go back,

08:35.680 --> 08:42.720
you can notice that because polynomials are wiggly things, they do not want to follow a Gaussian curve

08:42.800 --> 08:51.600
or anything else. They go above, below, and so on and so forth. So if I want these distances to

08:51.600 --> 08:57.520
be all the same, what I want to do is basically something like this. I want to find these B

08:58.080 --> 09:06.240
coefficients of a polynomial so that it goes up and down around the function, which is g here,

09:06.560 --> 09:13.280
with some error, which is E. So that is the linear system I need to solve.

09:14.320 --> 09:23.200
There are m plus two equations and m plus two unknowns. So there's a single solution. We solve

09:23.200 --> 09:30.400
it. We have our polynomial, but then the thing that happens is, I want these points to be the error

09:30.400 --> 09:37.520
extrema. But in general they are not going to be. So we need to find the local extrema of the error.

09:38.400 --> 09:45.680
And then we replace the m plus two points that we started with by these local extrema.

09:46.320 --> 09:54.720
And then we basically iterate. Okay, this is best explained with just the plots of the thing.

09:54.800 --> 09:59.840
So I started with the thing I had before. And as I said, I want to push this thing up.

10:00.720 --> 10:07.840
I want to push this thing down. So what I do is I compute this function,

10:08.720 --> 10:17.760
which at the green nodes is supposed to go up and down, up and down. And it's probably not

10:18.720 --> 10:24.400
clear from this picture, but these distances at the green points

10:25.280 --> 10:29.840
are exactly the same, because that's the linear system I'm solving for.

10:30.400 --> 10:37.440
But then what happens is if I look at the error extrema, they are not exactly at the green points.

10:37.440 --> 10:44.880
You can see them in red again. And for example, this one is actually at a green point.

10:45.840 --> 10:52.080
But this one is slightly different and the error is slightly larger in here.

10:53.040 --> 11:01.360
Okay, that's fine. So what we do now is we basically iterate. We put green points where we had red points.

11:01.360 --> 11:08.880
We repeat the same thing. And now the next thing we get is basically the green points on the red points.

11:08.880 --> 11:16.560
Now they are very, very similar. You cannot tell them apart in this plot. And so we can see now

11:17.280 --> 11:23.520
all of these distances are the same. So that's the thing we wanted to have. We wanted to have

11:23.520 --> 11:31.360
a degree-five polynomial which best approximates this Gaussian curve in the uniform sense.
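
The whole exchange loop described above can be sketched in NumPy. The function, degree, interval, and grid density here are illustrative choices, not the talk's exact example:

```python
import numpy as np

# A minimal Remez exchange (a sketch): find the degree-5 minimax
# polynomial for f(x) = exp(-x^2) on [0, 2].
f = lambda x: np.exp(-x**2)
deg = 5
a, b = 0.0, 2.0
grid = np.linspace(a, b, 4000)       # dense grid used to locate extrema
nodes = np.linspace(a, b, deg + 2)   # initial reference: deg + 2 points

for _ in range(20):
    # Solve for the coefficients and the levelled error E:
    #   p(x_i) + (-1)^i * E = f(x_i)  at each reference point x_i.
    V = np.vander(nodes, deg + 1, increasing=True)
    signs = (-1.0) ** np.arange(deg + 2)
    sol = np.linalg.solve(np.column_stack([V, signs]), f(nodes))
    coeffs, E = sol[:-1], sol[-1]

    # Locate the local extrema of the error on the dense grid: one per
    # run of constant sign, keeping the deg + 2 largest alternating ones.
    err = np.polyval(coeffs[::-1], grid) - f(grid)
    runs = np.split(np.arange(len(grid)),
                    np.where(np.diff(np.sign(err)))[0] + 1)
    extrema = np.array([r[np.argmax(np.abs(err[r]))] for r in runs])
    while len(extrema) > deg + 2:    # drop the weakest end extremum
        extrema = (extrema[1:] if abs(err[extrema[0]]) < abs(err[extrema[-1]])
                   else extrema[:-1])
    new_nodes = grid[extrema]
    if np.allclose(new_nodes, nodes, atol=1e-9):
        break                        # the reference has stabilized
    nodes = new_nodes

max_err = np.max(np.abs(np.polyval(coeffs[::-1], grid) - f(grid)))
```

At convergence the maximum error on the grid equals the levelled error |E|, which is the equioscillation property from the theorem.
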

11:32.000 --> 11:37.280
So this is kind of the recipe. But now you might be saying, hey, this is the DSP room.

11:37.280 --> 11:43.680
I've come here to learn about FIR filters. What does this thing have to do with FIR filters at all?

11:45.920 --> 11:51.040
Unfortunately, I don't have like a picture for this, but I want to convince you really quickly

11:51.680 --> 11:58.080
that basically computing an FIR filter and doing this polynomial approximation

11:58.080 --> 12:05.920
are kind of equivalent, because of some easy math. So I'm going to do it for the

12:05.920 --> 12:13.120
odd-length FIR with even symmetry. That's the easiest case, but the other ones are similar.

12:13.680 --> 12:21.840
And then the thing is, so what is the frequency response of the FIR filter? It is basically this thing.

12:21.840 --> 12:27.920
It's the Fourier transform of the coefficients. And here there is some symmetry because I'm

12:27.920 --> 12:34.480
saying that, for example, a0 is the same as a2n, right? The first coefficient is the same as the

12:34.480 --> 12:40.720
last and so on and so forth. So what I can do to exploit this symmetry is to take out this thing as

12:40.720 --> 12:48.640
a common factor. And of course I need to put it back in there. I'm also rewriting my summation index

12:49.200 --> 12:55.600
so that the symmetry between the coefficients is more obvious. What happens is the minus n is

12:55.600 --> 13:03.600
paired with the plus n and so on and so forth. And then what happens, for example, with the first one

13:03.600 --> 13:10.000
and the last one, because they now have the same exponential, but with an opposite sign,

13:10.000 --> 13:17.920
I can bring them together. I can use Euler's formula, and the thing I get is a cosine function.

13:17.920 --> 13:24.240
So basically I get this b0 times the cosine of something, blah, blah. So I get a bunch of cosines

13:25.040 --> 13:30.880
and these bk coefficients, which are basically two times the a's, except for the one in the

13:30.880 --> 13:38.800
middle where you just have that one because you don't have two things to put together. So next thing,

13:38.800 --> 13:45.600
we can do a change of variables: this cosine we call x. And then what happens is this function

13:45.600 --> 13:54.160
here can be written as a polynomial of degree n in x. And the thing you need to remember is

13:55.120 --> 14:01.040
that here I actually don't have powers of x. I don't have powers of cosine. I have cosine of

14:01.040 --> 14:08.880
something with an n or a k inside, but we can write that as powers of the cosine function. For example,

14:08.880 --> 14:16.560
for k equals two, so that's the cosine of twice the angle, you might remember this formula.

14:16.560 --> 14:22.400
And then, because of this, now I have powers of cosine. And now I can make the change,

14:22.400 --> 14:30.160
like cosine equals x, blah, blah, blah. And you can do this for every k. And so that's basically what happens.
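
This change of variables can be checked numerically (the bk values below are made up for illustration):

```python
import numpy as np

# cos(k*w) is a polynomial of degree k in x = cos(w) -- the Chebyshev
# polynomial T_k. For k = 2 it is the double-angle formula mentioned
# above: cos(2w) = 2*cos(w)^2 - 1.
w = np.linspace(0, np.pi, 1000)
x = np.cos(w)

# So a zero-phase response sum_k b_k * cos(k*w) becomes a polynomial in x.
b = np.array([0.5, 0.3, 0.2])
direct = sum(bk * np.cos(k * w) for k, bk in enumerate(b))
as_poly = b[0] + b[1] * x + b[2] * (2 * x**2 - 1)
```
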

14:30.160 --> 14:38.000
For any FIR filter like this, you can write the frequency response as a polynomial by using this

14:38.000 --> 14:44.320
change of variables. And so you can use techniques for polynomial approximation to design FIR

14:44.320 --> 14:55.360
filters. Okay, so open source implementations of these things. Here I should first say that

14:55.360 --> 15:03.760
the first implementation of this comes from Parks, McClellan and maybe Rabiner. I don't remember;

15:03.760 --> 15:09.600
there are a couple of papers from the 70s. In one of them it's just Parks and McClellan;

15:09.680 --> 15:15.680
in the other one there's also Rabiner. In one of those papers, they give you the Fortran code

15:15.680 --> 15:21.200
for this. The paper is a description of the algorithm and then, annexed, Fortran code.

15:21.200 --> 15:28.960
And I don't know about the license; in the 70s, people were not so worried about licenses for these things.

15:28.960 --> 15:34.400
And anyhow, it's a mathematical algorithm, written down in Fortran.

15:35.040 --> 15:41.840
So what happens these days? If you are using SciPy, the remez algorithm which is in there

15:42.400 --> 15:48.800
is the C translation of this original Fortran code. And that means it's horrible. You can look at it:

15:48.800 --> 15:55.040
like the names of the variables are the names which were reasonable to put on punch cards for code

15:55.040 --> 16:04.240
and so on. So okay, horrible, unreadable code. Then, when you want to choose, in

16:04.960 --> 16:09.760
your filter design, how the response function is going to look, how the weight function is

16:09.760 --> 16:15.040
going to look, these are piecewise constant. So for example, for the low-pass filter

16:15.040 --> 16:21.040
I gave, you have one in the passband and zero in the stopband. And that is fine for basic

16:21.040 --> 16:27.040
filters, but we will see some example use cases where we want to have custom functions for those.

16:28.000 --> 16:33.520
So can you do that with SciPy? No. And what is the main reason for that? Well,

16:33.520 --> 16:40.160
in the 70s paper, the authors were like: yes, we are shipping you some, I think they're called routines

16:40.160 --> 16:47.440
in Fortran, it doesn't even have functions. So we are giving you some very simple routines to

16:48.240 --> 16:54.080
define the constant response and weights, in case you need them. And they were aware that for some

16:54.080 --> 17:00.560
use cases you need to put in here a function which is not constant: just modify the code,

17:00.560 --> 17:05.520
which was a reasonable thing to do with Fortran in the 70s. But these days, you know, we have

17:05.520 --> 17:11.840
closures, lambda functions and stuff in programming languages. And so we should be able to put

17:11.840 --> 17:17.680
in there any function we want, and we will see examples. Then the license is the BSD license,

17:18.560 --> 17:21.360
and, as I'm going to explain in more detail in a couple of slides,

17:22.240 --> 17:27.280
sometimes the algorithm doesn't converge. So you might give the algorithm a problem

17:27.280 --> 17:35.440
and it just fails in whatever surprising way. Then we have GNU Radio or Octave.

17:35.440 --> 17:41.840
And there is an "or" here because basically the GNU Radio implementation is a copy and paste

17:41.920 --> 17:51.200
from Octave, and it hasn't been changed in maybe a decade. More like two decades. Okay, so

17:53.040 --> 17:59.360
it was written from scratch in the 90s. There are comments saying there are things which are wrong.

18:00.320 --> 18:07.440
It is like the author of the code says: this routine doesn't make any sense, I should fix it. And

18:07.520 --> 18:14.400
the comment is from the 90s and no one has fixed it. Then, the same piecewise-constant response

18:14.400 --> 18:21.280
and weights. It is GPL 2.0 or later. That's not only because of GNU Radio, it's because that's

18:21.280 --> 18:29.280
the Octave license. And it has lots of convergence issues, no wonder, because there are things

18:29.280 --> 18:35.680
which are wrong and the author knew it. And there are also nicer, newer things. So for example,

18:35.680 --> 18:43.120
there is this implementation by S. Filip: basically, he wrote a paper about how to implement

18:43.120 --> 18:50.240
these things better, and then he has an open-source C++ implementation demonstrating how this

18:50.240 --> 18:57.520
works. So this is much nicer. It is modern C++. It uses the Eigen library, so that means you can

18:57.520 --> 19:07.040
put in there any numerical types you want and it will work. It uses GMP or MPFR for arbitrary

19:07.040 --> 19:13.440
numerical precision, which is important to solve some of the trickiest filter design problems.

19:14.240 --> 19:21.200
Again, unfortunately, piecewise constant response and weights. I think this would be easy to change.

19:21.200 --> 19:28.320
It's just like an API thing. You could put in there a C++ lambda function. It should work.

19:29.600 --> 19:35.680
This used to be GPL 3.0 or later, but it switched to BSD in April 2024.

19:37.440 --> 19:43.600
Because this is the guy who knows how to write these things properly. This has good convergence.

19:43.600 --> 19:50.800
It also has good runtime. It runs pretty fast and all the numerical methods,

19:50.800 --> 19:56.000
all the tricks, are in a paper. You can go to the GitHub repository and the paper is linked in there.

19:58.400 --> 20:06.080
And finally, the thing I'm presenting here: there is this pm-remez. This is a Rust implementation of

20:06.080 --> 20:14.080
the Remez algorithm. It's based on ideas from S. Filip, so basically based on ideas

20:14.080 --> 20:24.160
from this good C++ implementation. It has a Python package. So you can just

20:24.160 --> 20:29.600
pip install pm-remez. If you want to do some filter design, I don't know, in a Jupyter notebook,

20:29.600 --> 20:37.760
on the command line or something. You can put in there arbitrary responses and weights. You can give it

20:37.760 --> 20:44.560
a Rust function. You can give it a Python function, anything which is a Python callable.

20:44.560 --> 20:50.320
You can pass it in. It's permissively licensed. So the main reason I started implementing

20:50.320 --> 20:57.600
this thing was that the S. Filip library was GPL and I needed something permissively licensed for

20:57.600 --> 21:04.640
my SDR. It just happened that right when I finished this, he changed the license. I think it was

21:04.640 --> 21:12.800
a pure coincidence because I was working on this and I only published the thing when I

21:12.800 --> 21:19.840
had it completed and he switched, I think, a few days before I published the thing. So just

21:19.840 --> 21:24.800
a total coincidence. And it comes with very good documentation. That's also something which is

21:25.600 --> 21:31.360
the thing we are going to look at next. Well, actually not next, but in a couple slides.

21:32.320 --> 21:40.480
So why are some of these implementations problematic when it comes to convergence issues?

21:41.920 --> 21:49.040
Well, the thing is, if you have many taps, you have many of these interpolation

21:49.040 --> 21:55.920
points, you have tiny errors, and all of this can be tricky for floating-point calculations,

21:56.000 --> 22:03.040
especially depending on how you do these calculations. There are some techniques you can use to improve

22:03.040 --> 22:10.080
things. One of them is something called barycentric Lagrange interpolation, which is a way of doing

22:10.080 --> 22:16.960
polynomial interpolation that basically reduces numerical error. The other one is Chebyshev

22:16.960 --> 22:25.440
proxy root finding. That's actually a smart thing, because when I have been plotting things in the

22:25.520 --> 22:30.800
previous slides, I have been saying: we now need to search for the local extrema of the error,

22:30.800 --> 22:36.560
blah, blah, blah, I'm plotting red points on these things. But the question is, how do you find

22:36.560 --> 22:43.680
the maximum of a function? And there are a whole bunch of ways to do it. In those plots, I was

22:43.680 --> 22:50.640
basically cheating. I was plotting like a thousand points on the plot and taking the point where

22:50.720 --> 22:55.680
it has the maximum value. It's not really the maximum, but it's just very close to it. It's the

22:55.680 --> 23:02.720
maximum in my plot. And some of the implementations, like SciPy and GNU Radio, that's what they do.

23:02.720 --> 23:08.480
They take a fine grid of points, they compute the error on that grid, and then off you go. The

23:08.480 --> 23:14.560
point where you get the maximum value, that's supposed to be the local extremum. And

23:14.560 --> 23:18.800
surely when you have many, many points, that's going to run into trouble. So this

23:18.960 --> 23:25.440
Chebyshev proxy root finding is a clever thing. You approximate your function in a small interval

23:25.440 --> 23:33.840
with a polynomial. And then you use clever math tricks for finding roots of the derivative

23:33.840 --> 23:40.240
of that polynomial, which is also a polynomial. And so the roots are, you know, where the function

23:40.240 --> 23:46.080
is flat and so has its local extrema. And the thing is, finding zeros of a polynomial can be done

23:46.080 --> 23:53.280
as an eigenvalue problem. And so you know there are all these numerical methods to solve eigenvalue

23:53.280 --> 24:00.480
problems. And of course you're going to use a higher precision numerical implementation. That

24:00.480 --> 24:07.920
also helps. So that's the reason why there are better and worse implementations of this algorithm.
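
A sketch of Chebyshev-proxy extremum finding using NumPy's Chebyshev class (the function and proxy degree are illustrative choices; `roots()` internally solves the companion-matrix eigenvalue problem mentioned above):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Approximate the function on the interval by a Chebyshev polynomial,
# then take the roots of its derivative: those are the local extrema.
f = lambda x: np.sin(5 * x) * np.exp(-x)
a, b = 0.0, 2.0

xk = C.chebpts1(21) * (b - a) / 2 + (a + b) / 2   # Chebyshev sample points
proxy = C.Chebyshev.fit(xk, f(xk), deg=20, domain=[a, b])

crit = proxy.deriv().roots()
crit = crit[np.isreal(crit)].real                 # keep the real roots...
crit = crit[(crit >= a) & (crit <= b)]            # ...inside the interval
```

For this f the true critical points solve tan(5x) = 5, so the proxy roots can be checked against (arctan 5 + k pi)/5.
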

24:07.920 --> 24:16.000
So now let's do some practical FIR design. All of the code I'm showing is, in more detail,

24:16.160 --> 24:22.560
in the Python package documentation. So you can go in there and you actually have the code

24:22.560 --> 24:31.280
for the plots and everything. So very simple decimation. Let's say I want to decimate by two or by

24:31.280 --> 24:38.720
any other number. To do that without aliasing, I need to design a low-pass filter whose

24:38.720 --> 24:45.120
cutoff frequency is basically going to be the new Nyquist frequency after decimation. Okay,

24:45.120 --> 24:52.000
fine. It also needs to have some transition bandwidth because it cannot go immediately from

24:52.000 --> 24:59.440
passband to stopband. That is another design parameter. And that is usually given as a percentage of the

24:59.440 --> 25:04.880
output bandwidth. So in this case, 0.2 means that 20% of the output bandwidth is going

25:04.880 --> 25:10.160
to be transition bandwidth. So basically there is part of the spectrum that will not work properly

25:10.160 --> 25:14.960
for your application. Your signal is supposed to be like in the center 80% if you do this.

25:14.960 --> 25:23.520
And then the number of taps, for example 35. So you basically use this code and you get

25:23.520 --> 25:29.680
something like this, which is the thing you wanted to have. Depending on the signals you're

25:29.760 --> 25:35.200
supposed to have, depending on the application, you might need better attenuation. Minus 60 dB,

25:35.200 --> 25:43.200
which is the thing we're getting here is reasonable in many cases. And also a question you might ask

25:43.200 --> 25:50.080
is: well, I don't know what number of taps I'm supposed to use to get something like this. So

25:50.080 --> 25:57.280
you can try. There are also formulas like there's an approximate formula which will give you a

25:57.280 --> 26:02.560
hint for the number of taps. And then you can try with that and you can even iterate. You know,

26:02.560 --> 26:08.240
if the thing doesn't satisfy my criteria, I put one more tap and try again, and so on and so

26:08.240 --> 26:15.840
forth. And you have the minimal number of taps for your design. How can we improve this design?
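
A sketch of this decimate-by-two design with scipy.signal.remez, assuming the transition band is centered on the new Nyquist frequency (conventions differ between libraries, so the exact band-edge placement is an assumption):

```python
import numpy as np
from scipy import signal

fs = 1.0
decim = 2
trans = 0.2                         # transition as a fraction of output bandwidth
out_nyq = fs / (2 * decim)          # the new Nyquist frequency after decimation
pass_edge = out_nyq * (1 - trans)   # usable passband: the center 80%
stop_edge = out_nyq * (1 + trans)   # assumed: transition centered on out_nyq
taps = signal.remez(35, [0, pass_edge, stop_edge, fs / 2], [1, 0], fs=fs)

w, H = signal.freqz(taps, worN=8192, fs=fs)
stop_atten_db = 20 * np.log10(np.max(np.abs(H[w >= stop_edge])))
```
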

26:15.840 --> 26:23.520
So the thing is, because of this equiripple property of the error, with the previous design

26:23.520 --> 26:29.200
we are getting the same error on the passband and on the stopband. But oftentimes this is not the

26:29.200 --> 26:34.000
thing we want to do. Because in the passband, it depends on the modulation and stuff,

26:34.000 --> 26:40.640
but usually a 1% error is fine. In the stopband it depends on interfering signals. So for example,

26:40.640 --> 26:49.440
you might need to have more attenuation in the stopband or you might want to have a lower number of

26:49.440 --> 26:55.520
taps or something like that. So how can we achieve different errors in the stopband and

26:55.520 --> 27:01.040
the passband? That's where the weight comes in. And basically, I have a rule of thumb here:

27:01.040 --> 27:07.040
if you want 1% passband ripple, then the weight depends on how much stopband attenuation you want.

27:07.040 --> 27:13.920
For example, here I think I have a weight of 10. And so I get my 1% error. So this is

27:14.880 --> 27:22.400
worse than in the previous plot, but I still get my 60 dB stopband rejection. So that's the thing.
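
A sketch of this weighting with scipy.signal.remez: since minimax makes weight times error equal across bands, a weight of 10 on the stopband makes the stopband error about ten times smaller than the passband ripple (the band edges and tap count here are illustrative):

```python
import numpy as np
from scipy import signal

# Weight 1 on the passband, 10 on the stopband: trade passband ripple
# for stopband attenuation.
taps = signal.remez(45, [0, 0.2, 0.3, 0.5], [1, 0], weight=[1, 10], fs=1.0)

w, H = signal.freqz(taps, worN=8192, fs=1.0)
passband_err = np.max(np.abs(np.abs(H[w <= 0.2]) - 1))
stopband_err = np.max(np.abs(H[w >= 0.3]))
```
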

27:22.400 --> 27:29.120
You can play with the weights here. And then, another thing you can do, and Fred Harris proposes

27:29.120 --> 27:36.400
that everyone do this: you can have this 1/f response on the stopband. So you can see it here.

27:36.960 --> 27:45.440
It's no longer flat, it's decreasing. And the intention of doing this is that when all of these

27:45.440 --> 27:52.320
stopband aliases back into the passband because you are decimating. Here it doesn't matter much, but

27:52.320 --> 27:58.080
let's say you are decimating by 100. You have 100 copies of the noise floor which go on top of

27:58.160 --> 28:05.680
your signal. So if you do not do this thing, then you have 100 times this minus 60 decibels and that's

28:05.680 --> 28:10.080
going to increase the noise floor you get at the output. So by doing this, you have

28:10.080 --> 28:15.760
this sum of things which are decreasing, and it basically converges to a constant value rather than

28:16.720 --> 28:23.920
increasing as you increase the decimation. So that's the reason why it's useful. And we can

28:23.920 --> 28:30.480
design this thing by saying: at the beginning of the stopband the weight is going to be W, at the end

28:30.480 --> 28:36.400
of the stopband the weight is going to be W plus something larger. So the weight is supposed to be a linear

28:36.400 --> 28:44.480
function. It has these two values at the endpoints, and we can define this with a triple.
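
scipy.signal.remez cannot take such a weight function directly, so as a rough sketch of the same idea the ramp can be approximated by a staircase of stopband sub-bands with increasing constant weights (pm-remez accepts the ramp itself; all numbers here are illustrative, and the small guard gaps just keep the band edges strictly increasing):

```python
import numpy as np
from scipy import signal

n_steps = 4
edges = np.linspace(0.3, 0.5, n_steps + 1)
gap = 0.004
bands, desired, weights = [0.0, 0.2], [1], [1]
for i in range(n_steps):
    lo = edges[i] + (gap if i > 0 else 0)
    hi = edges[i + 1] - (gap if i < n_steps - 1 else 0)
    bands += [lo, hi]
    desired.append(0)
    weights.append(10 * (i + 1))   # weight ramps from 10 up to 40
taps = signal.remez(55, bands, desired, weight=weights, fs=1.0)

# The more heavily weighted end of the stopband gets a smaller error.
w, H = signal.freqz(taps, worN=8192, fs=1.0)
first = np.max(np.abs(H[(w >= 0.30) & (w <= 0.34)]))
last = np.max(np.abs(H[(w >= 0.46) & (w <= 0.50)]))
```
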

28:44.480 --> 28:48.880
The API will understand that this is supposed to be a linear function having these two

28:49.760 --> 28:59.920
values on the end points. What else? We can basically use all of these techniques to make a

28:59.920 --> 29:07.920
PFB prototype filter and I'm not going to spend much time in the interest of the other examples explaining

29:07.920 --> 29:15.760
what the PFB is. The only thing I'm going to say is that because a PFB can have many many

29:15.760 --> 29:23.920
channels here's an example for 248 channels and then you also have like taps per channel. So this is

29:23.920 --> 29:30.240
six taps per channel which is the lowest minimum number of taps per channel to get anything

29:30.240 --> 29:37.040
slightly reasonable. We basically have 6,000 blah blah blah points and this is quite a lot. This is

29:37.040 --> 29:43.840
like close to the point where the algorithm breaks. If you want to have like for example 4,000 and 96

29:43.840 --> 29:49.520
points it will break but then what do you do? You design it for lower number of channels and then you

29:49.520 --> 29:54.320
do linear interpolation of the taps. When you have these many taps they are like a nice continuous

29:54.320 --> 30:03.120
function and you can just interpolate the taps to get a larger number of channels. So that's the trick

30:03.120 --> 30:11.440
you need to know about. My favorite example is this one. CI C filters, maybe some of you are familiar

30:11.520 --> 30:17.760
with these. Desimation filters, they are great because they only use summations. They do not need

30:17.760 --> 30:24.000
multiplications. So if you are RFPJ or something which doesn't know how to multiply or it's very expensive

30:24.000 --> 30:30.960
to multiply, these are ideal. The issue though is that the frequency response is crap. It's the great

30:30.960 --> 30:39.120
thing here. So it droops. It only has good attenuation in here and here, in the notches.
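
The CIC magnitude response, its passband droop, and its notches can be computed directly (the stage count N and decimation R below are illustrative choices):

```python
import numpy as np

# N stages of a length-R boxcar sum give the CIC magnitude response
# |H(f)| = |sin(pi*f*R) / (R*sin(pi*f))|^N, which droops across the
# passband and has notches at multiples of f = 1/R.
N = 4                               # number of integrator/comb stages
R = 10                              # CIC decimation factor
f = np.linspace(1e-6, 0.5, 2000)    # normalized input frequency
H = np.abs(np.sin(np.pi * f * R) / (R * np.sin(np.pi * f))) ** N

droop_db = 20 * np.log10(H[np.argmin(np.abs(f - 0.4 / R))])  # droop at 80% of output Nyquist
notch = H[np.argmin(np.abs(f - 1 / R))]                      # depth of the first notch
```
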

30:40.000 --> 30:47.440
Here the attenuation, minus 50, is bad. So what do you do? You use a combination of a CIC filter, which

30:47.440 --> 30:54.400
does the bulk of the decimation, and then you put a decimating FIR after it, usually decimating

30:54.400 --> 31:03.360
by 4 or by 3, to make things nicer. And by "make things nicer", what you can do is, if you design the

31:03.360 --> 31:09.600
FIR filter properly, you can compensate for the droop that the CIC filter has in the

31:10.960 --> 31:15.920
stopband. Sorry, in the passband. So you can see this here. Basically this is drooping,

31:16.720 --> 31:24.560
and it's easy to see in here. And this actually has some gain so that when we have the combination

31:24.560 --> 31:30.320
it is actually flat. And let's see how we are doing in time. We are doing pretty well.

31:31.280 --> 31:37.680
So, something which is not that well known, but which you see once you do the mathy stuff,

31:38.240 --> 31:49.760
is that the design I need for a compensation filter depends on my D_CIC, which is the decimation

31:49.760 --> 31:57.520
factor I apply on the CIC. Why is this relevant? The reason this is relevant is that when you

31:57.600 --> 32:03.760
build this architecture, where you have a CIC filter followed by a compensation filter, it is very

32:03.760 --> 32:10.160
easy to change the decimation factor on the CIC filter just by changing when you look at the

32:10.160 --> 32:18.560
output. Basically you can be decimating by 10, and if you look at every other sample then suddenly

32:18.560 --> 32:25.600
you are decimating by 20. For example, this is how a USRP or many other SDRs which perform

32:25.680 --> 32:31.760
decimation on the FPGA work. That is the reason why your USRP is able to do many different

32:31.760 --> 32:39.840
decimation ratios, just by changing this D_CIC, the CIC decimation ratio. But formally the

32:41.040 --> 32:47.120
transfer function we need for this compensation filter depends on D_CIC, which is bad, but

32:47.280 --> 32:55.440
luckily it's more or less the same thing as this, which is like the small-angle approximation of

32:55.440 --> 33:02.800
the same function. So we get something independent of D_CIC. That is nice, for example, if you want to

33:02.800 --> 33:08.800
build a USRP, because people are going to change the CIC decimation and you do not want to

33:09.360 --> 33:16.480
design a filter for each of them, load them into the FPGA, blah blah blah. So how can we compute

33:17.120 --> 33:24.720
this filter with pm-remez? In this case we have a lambda function, because the transfer function

33:24.720 --> 33:31.040
we need is the inverse of this thing. So that's fine, we can have lambda functions in Python and

33:31.040 --> 33:38.000
we can put it there as the desired response, and that's how we get our nice passband shape which

33:38.000 --> 33:46.480
compensates the CIC filter droop. So that's for a particular D_CIC, and if you want to have compensation

33:46.560 --> 33:51.840
independent of D_CIC it's the other formula. So just a different lambda function, and this is basically

33:51.840 --> 33:58.080
how it looks. The passband is a bit narrower, because we have made a small-angle approximation in

33:58.080 --> 34:05.120
that formula. Now this is no longer flat but in many cases the error is small enough that it doesn't matter.
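
For reference, a rough sketch of this kind of droop compensation. pm-remez can take the inverse CIC response directly as a function; scipy's `signal.remez` only accepts piecewise-constant desired values, so here we fake that by sampling the inverse droop over a few passband sub-bands (the numbers, R = 10, N = 4 stages, and the band edges, are all made-up examples):

```python
import numpy as np
from scipy import signal

R, N = 10, 4   # hypothetical CIC decimation ratio and number of stages
numtaps = 48
f_stop = 0.3   # compensator stopband edge, cycles/sample at the CIC output rate

def cic_droop(f):
    """CIC magnitude response referred to the decimated output rate (f > 0)."""
    f = np.asarray(f, dtype=float)
    return (np.sin(np.pi * f) / (R * np.sin(np.pi * f / R))) ** N

# Passband sub-bands with small don't-care gaps; the desired value on each
# sub-band is the inverse droop sampled at its center.
slices = [(0.0, 0.045), (0.055, 0.095), (0.105, 0.145), (0.155, 0.2)]
bands, desired = [], []
for lo, hi in slices:
    bands += [lo, hi]
    desired.append(1.0 / cic_droop(0.5 * (lo + hi)))
bands += [f_stop, 0.5]
desired.append(0.0)

taps = signal.remez(numtaps, bands, desired, fs=1.0)
```

Multiplying the magnitude response of these taps by the droop should come out close to 1 across the passband, which is the flat combined response in the plot.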

34:05.360 --> 34:15.600
What else? There is this thing called a Hilbert filter and different people call different

34:15.600 --> 34:23.200
filters a Hilbert filter. If you take GNU Radio, this is not exactly the thing which is inside

34:23.200 --> 34:30.800
the Hilbert filter block, but this is the main building block. So the idea is: we have complex signals, and

34:30.800 --> 34:36.240
because they are complex they have both positive and negative frequencies. We want to do something

34:36.240 --> 34:43.440
which looks rather stupid, which is a frequency response of i, meaning a plus 90 degree phase shift, on

34:43.440 --> 34:51.040
positive frequencies, and minus i, meaning a minus 90 degree phase shift, on negative frequencies.

34:51.040 --> 34:57.360
Why would we do anything like that? The reason is if you have this filter as a building block you

34:57.440 --> 35:03.200
can, for example, cancel negative frequencies, which is the thing that the GNU Radio Hilbert

35:03.200 --> 35:11.520
filter block does, basically using this, and there's a plus one somewhere, or plus i.

35:12.960 --> 35:22.000
So this is another example where other implementations of the Parks-McClellan algorithm can be

35:22.000 --> 35:27.840
awkward to use, because they consider this thing a special case. But here it's just a

35:27.840 --> 35:33.920
general case. The only things you need to remember are an odd number of taps and an odd impulse-response

35:33.920 --> 35:41.760
symmetry; that's how you get your imaginary frequency response. And then you need to design an all-

35:41.760 --> 35:46.960
pass filter, because ideally you want to have this on the whole passband. Of course that's

35:46.960 --> 35:52.000
impossible, because you cannot jump immediately from plus i to minus i at frequency

35:52.000 --> 36:00.640
equals zero, but we shall try. So then, what changes? Not much. N needs to be odd, I don't remember what

36:00.640 --> 36:06.160
I used here. You need to specify that the symmetry of the filter is odd, because by default it's

36:06.160 --> 36:14.800
going to be even. And then you have this transition bandwidth, which is basically the same as before.

36:14.880 --> 36:20.960
And so we get the nice response we expected, which is basically flat in most of our frequency

36:20.960 --> 36:27.520
range, and then, because you need to change to minus i, we get a transition region.
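
A minimal sketch of such a design. Here scipy's `signal.remez` stands in for pm-remez; scipy happens to have the Hilbert case built in as `type='hilbert'`, which is exactly the odd-symmetry option the talk describes:

```python
import numpy as np
from scipy import signal

# Odd N; together with odd symmetry this gives a purely imaginary response.
numtaps = 65

# Desired: all-pass over as much of the band as possible, leaving transition
# gaps near DC and Nyquist where jumping between +i and -i is impossible.
taps = signal.remez(numtaps, [0.05, 0.45], [1.0], type='hilbert', fs=1.0)
```

The resulting taps are antisymmetric (odd impulse-response symmetry), and the magnitude response is flat over the 0.05 to 0.45 band, with the unavoidable transition regions at the band edges.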

36:30.080 --> 36:39.520
Other more complex applications: these are in my blog, you can search for them. They show that this is

36:40.160 --> 36:47.920
quite general and useful. So there's something called a band-edge filter, and this is available in

36:47.920 --> 36:58.160
GNU Radio; the block is called FLL Band-Edge. I think the main reason to use this filter is to find

36:58.160 --> 37:04.960
the frequency of a signal, and it's usually used in a closed-loop way, so you basically have a

37:04.960 --> 37:13.120
frequency-locked loop. That's how you tune in frequency to a signal. Interestingly, you can also use it

37:13.120 --> 37:22.080
for symbol clock recovery, but that's like fred harris's very clever stuff. So how does this

37:22.080 --> 37:29.200
filter work? This is supposed to be for RRC waveforms, that's root-raised-cosine waveforms,

37:29.280 --> 37:37.840
like the thing I have in gray there. The ideal thing you would want to have is this

37:39.680 --> 37:48.800
dashed green trace, which is the derivative of the gray thing, basically.

37:49.840 --> 37:57.680
And you can notice that, because this thing has a sharp cutoff here, this derivative jumps

37:57.760 --> 38:03.680
straight from plus one to zero, so we already know we are not going to be able to get this with

38:03.680 --> 38:10.080
any kind of filter. We can make an approximation, and the usual approximation, which is what

38:10.080 --> 38:15.600
fred harris normally uses and what's also in GNU Radio, is this blue thing, which is a half-

38:15.600 --> 38:22.080
sine window function. This can be explicitly constructed in the time domain, and it's nice.

38:22.160 --> 38:28.560
The issue though is: what happens if there is an adjacent signal in here? Oh, that's problematic,

38:28.560 --> 38:35.520
because suddenly I'm ingesting part of that signal into my frequency error estimator.

38:36.160 --> 38:43.040
So by using pm-remez we can tell Remez: hey, try to find the thing which is as close as possible

38:43.600 --> 38:48.720
to this green thing, which is the ideal. And of course the thing will say: yes, yes, but leave me some

38:48.720 --> 38:54.720
gap where I try to go very quickly from plus one to zero. And so you leave it a

38:54.720 --> 39:01.200
transition bandwidth, but you can do something like this. And depending on the location, you

39:01.200 --> 39:09.440
pay, for example, by having worse stopband attenuation. I would say nothing is free,

39:09.440 --> 39:14.160
right? But on some occasions this pm-remez design might be better.

39:14.240 --> 39:24.080
And finally, I picked out polyphase filter banks. The thing with polyphase filter banks is that

39:24.080 --> 39:30.400
they are like an FFT with a window, but the window is an FIR filter rather than just a multiplication by

39:30.400 --> 39:37.520
a number, and because you have all this flexibility of FIR filters you can make things

39:37.520 --> 39:40.880
with nice properties. So for example let's say you want to do

39:41.200 --> 39:47.360
spectrum estimation: you want to measure power, and you might want to measure power at different

39:47.360 --> 39:55.200
resolutions and things like that, so you can do a design for that. The idea of this is, if you look

39:55.200 --> 40:03.840
here at how the frequency responses for the bins are designed, they basically overlap in such a way

40:03.840 --> 40:12.960
that they add to something which is flat. So for example, here I am adding

40:12.960 --> 40:22.240
bin zero and bin two, which are this blue and green thing, and if I add them it's also flat. So this is

40:22.240 --> 40:29.920
well adjusted, and what this gives me is: now I can measure power with a flat frequency response on a range

40:30.000 --> 40:38.400
which is twice as coarse as my channel spacing, by adding every other bin. So that's the key idea

40:38.400 --> 40:45.040
here: you have these nice tools, rather than just doing an FFT and maybe using the

40:45.040 --> 40:50.880
flat-top window. It is kind of nice, but not when you ask, for example, what is the

40:50.880 --> 40:57.760
summed power over a frequency range. You can make it way better with this, and the design

40:57.760 --> 41:02.960
trick here, the thing you have, is the freedom of where to put the cutoff frequency. If you set it

41:02.960 --> 41:09.040
just right, and you can set up an optimization problem where you search for it, you can

41:09.040 --> 41:18.320
achieve this effect where they are perfectly flat. And that's it. Questions?
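
As a sketch of that optimization (everything here, channel count, tap count, transition width, is a made-up example, and scipy's `signal.remez` stands in for pm-remez): slide the prototype's cutoff until the powers of bins two channels apart sum to something flat.

```python
import numpy as np
from scipy import signal, optimize

nchan = 8              # hypothetical number of channels
numtaps = 8 * nchan    # prototype filter length
spacing = 1.0 / nchan  # channel spacing in cycles/sample
tw = 0.06              # fixed transition width of the prototype

def prototype(edge):
    """Lowpass prototype with passband edge `edge` and fixed transition width."""
    return signal.remez(numtaps, [0, edge, edge + tw, 0.5], [1, 0], fs=1.0)

def summed_power_ripple(edge):
    """Peak-to-peak ripple of |H0(f)|^2 + |H2(f)|^2 between bins 0 and 2."""
    taps = prototype(edge)
    f = np.linspace(0.0, 2 * spacing, 257)
    _, h0 = signal.freqz(taps, worN=2 * np.pi * f)                        # bin 0
    _, h2 = signal.freqz(taps, worN=2 * np.pi * np.abs(f - 2 * spacing))  # bin 2
    return np.ptp(np.abs(h0) ** 2 + np.abs(h2) ** 2)

# Search for the cutoff that makes the summed power flattest.
res = optimize.minimize_scalar(summed_power_ripple,
                               bounds=(0.75 * spacing, 0.95 * spacing),
                               method='bounded')
best_edge = res.x
```

The optimum pushes the crossover between every-other-bin responses toward the half-power point, which is what makes the summed power flat; a naive symmetric crossover at half amplitude leaves a deep dip between bins.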

41:27.760 --> 41:54.320
(inaudible audience question)

41:54.320 --> 42:04.480
Yes, so the question is about gr_filter_design, where you can design filters, but not with the

42:04.480 --> 42:10.160
Remez algorithm, and phase response, whether this is something you can tune with this algorithm.

42:10.160 --> 42:16.920
So the answer, I would say, is no: you do not have any direct control over phase response. It's

42:17.920 --> 42:26.920
basically baked into the algorithm that you are approximating a real transfer function.

42:26.920 --> 42:32.920
And so the frequency response you get is going to be real, and the phase response is going to be what it is.

42:46.920 --> 42:58.920
Okay, yeah, the question is whether my algorithm starts with equispaced points,

42:58.920 --> 43:02.920
and if there is a better choice, what is the reason.

43:02.920 --> 43:10.920
So my implementation begins with equispaced points on the x domain, which is not the same thing

43:10.920 --> 43:14.920
as the frequency domain; they are actually Chebyshev points on the frequency domain.

43:14.920 --> 43:20.920
There are smarter choices in some cases, and the paper by Filip talks about it.

43:20.920 --> 43:26.920
But for simplicity, I did the thing which works nicely in most of the cases.

43:26.920 --> 43:34.920
And the only improvement you can gain by choosing the points more cleverly is runtime rather than convergence.

43:40.920 --> 43:50.920
Thank you very much.

