WEBVTT

00:00.000 --> 00:08.000
Travis is going to tell us more about it, so please welcome him.

00:08.000 --> 00:09.000
Yeah, thank you.

00:09.000 --> 00:10.000
Good morning.

00:10.000 --> 00:12.000
Thank you for making it to the early talk.

00:12.000 --> 00:13.000
I'm Travis.

00:13.000 --> 00:16.000
I'm the Trust and Safety tech lead for the Matrix.org Foundation.

00:16.000 --> 00:19.000
We're going to talk a little bit about community moderation.

00:19.000 --> 00:22.000
But first, we need to talk about how Matrix works.

00:22.000 --> 00:26.000
So this is a slightly different diagram than what you might be familiar with.

00:26.000 --> 00:29.000
If you've been to a Matrix talk before: we have servers in the middle here,

00:29.000 --> 00:32.000
and clients on the outside.

00:32.000 --> 00:34.000
We transmit events.

00:34.000 --> 00:36.000
We're also going to call them messages at the same time.

00:36.000 --> 00:38.000
This is a full mesh distribution.

00:38.000 --> 00:43.000
So any time that say a client on server A wants to send a message.

00:43.000 --> 00:50.000
It goes through their server to the other servers and then through to their clients as well.

00:50.000 --> 00:57.000
This, yeah, is just the full mesh that we have today in Matrix.
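
As a rough sketch of that full-mesh fan-out (illustrative Python with invented names, not the real Matrix federation API): the sender's homeserver delivers the event to its own clients, fans it out to every other server in the room, and each of those delivers it to their clients.

```python
# Toy model of full-mesh event distribution between homeservers.
class Homeserver:
    def __init__(self, name):
        self.name = name
        self.clients = []  # each "client" is just an inbox list here
        self.peers = []    # every other server in the room (full mesh)

    def send_event(self, event):
        # Deliver locally first, then fan out to every peer over federation.
        self.deliver_to_clients(event)
        for peer in self.peers:
            peer.receive_federated(event)

    def receive_federated(self, event):
        self.deliver_to_clients(event)

    def deliver_to_clients(self, event):
        for inbox in self.clients:
            inbox.append(event)

# Wire up four servers A-D in a full mesh, one client each.
servers = {name: Homeserver(name) for name in "ABCD"}
for s in servers.values():
    s.clients.append([])
    s.peers = [p for p in servers.values() if p is not s]

# A client on server A sends a message; every client on every server sees it.
servers["A"].send_event({"body": "hello"})
```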

00:58.000 --> 01:01.000
We have a little bit of a problem, though, when somebody wants to send something a little bit,

01:01.000 --> 01:05.000
let's say, not great for the room or for the community in general.

01:05.000 --> 01:07.000
So we have moderation bots.

01:07.000 --> 01:09.000
Like Draupnir, Meowlnir, and Mjolnir.

01:09.000 --> 01:12.000
That can reside on another server.

01:12.000 --> 01:17.000
So in this case, I'm using the little hammer to represent that moderation bot.

01:17.000 --> 01:20.000
It's on server B in this case.

01:20.000 --> 01:24.000
And the message is trying to originate from server D somewhere.

01:25.000 --> 01:28.000
So that message goes out just like any other message.

01:28.000 --> 01:31.000
The moderation bot will see it and try and clean it up.

01:31.000 --> 01:33.000
So this leads to a lot of arrows.

01:33.000 --> 01:35.000
This is what we call reactive tooling.

01:35.000 --> 01:39.000
So that moderation bot sends out a redaction to remove that event from the room.

01:39.000 --> 01:43.000
But unfortunately, everybody's already kind of seen it.

01:43.000 --> 01:45.000
So this is possible.

01:45.000 --> 01:48.000
There's lots of events flying around.
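
That reactive pattern can be sketched like this (a toy sketch, not the actual Mjolnir or Draupnir code): the bot only sees an event after it has already landed in the room, and cleans up after the fact with a redaction.

```python
# Toy reactive moderation bot: events reach the room first, then get checked.
BANNED_WORDS = {"buy-crypto-now"}  # hypothetical keyword list

def is_spam(event):
    return any(word in event.get("body", "") for word in BANNED_WORDS)

def on_room_event(timeline, event):
    # The event is already visible to everyone by the time the bot sees it.
    timeline.append(event)
    if is_spam(event):
        # A redaction strips the content for everyone, after the fact.
        event["redacted"] = True
        event["body"] = ""

timeline = []
on_room_event(timeline, {"body": "hello"})
on_room_event(timeline, {"body": "buy-crypto-now !!!"})
```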

01:48.000 --> 01:52.000
This has worked for most communities for quite a while.

01:52.000 --> 01:56.000
Mjolnir has existed for five-plus years at this point.

01:56.000 --> 01:59.000
A lot of the other bots have also existed for quite a while.

01:59.000 --> 02:01.000
Most communities use this model today.

02:01.000 --> 02:03.000
Where if you receive an event.

02:03.000 --> 02:05.000
Your moderation bot will check it.

02:05.000 --> 02:07.000
Remove it if needed.

02:07.000 --> 02:08.000
Ban people.

02:08.000 --> 02:09.000
That sort of stuff.

02:09.000 --> 02:12.000
There are also some server-side capabilities in there.

02:12.000 --> 02:15.000
If we're looking for a more proactive approach though,

02:15.000 --> 02:17.000
we can kind of switch to policy servers.

02:17.000 --> 02:19.000
So the diagram has changed a little bit.

02:20.000 --> 02:22.000
Server C, we've just gotten rid of its clients.

02:22.000 --> 02:25.000
It's just a whole policy server now.

02:25.000 --> 02:28.000
And server D, we're now calling its message the questionable message.

02:28.000 --> 02:35.000
We're not sure if it's good or bad or if it needs to be moderated in some capacity.

02:35.000 --> 02:38.000
And we're not even sure if we want to see it in the room.

02:38.000 --> 02:44.000
So the Foundation maintains a policy server implementation called policyserv.

02:44.000 --> 02:48.000
It's very creatively worded or named there.

02:49.000 --> 02:54.000
So when server D wants to send that message,

02:54.000 --> 02:57.000
it first checks with the policy server.

02:57.000 --> 03:05.000
Is this a message that is worthy of the room or not?

03:05.000 --> 03:08.000
So the policy server will respond.

03:08.000 --> 03:09.000
In this case it's saying it's great.

03:09.000 --> 03:10.000
It's fine.

03:10.000 --> 03:11.000
Sure.

03:11.000 --> 03:12.000
Whatever, post it.

03:12.000 --> 03:14.000
And you get your normal full message after that.

03:14.000 --> 03:17.000
That gets an additional signature attached to the event.

03:17.000 --> 03:22.000
So that way other servers and other clients potentially can see that the policy server has,

03:22.000 --> 03:25.000
in fact, checked that event and signed off on it.

03:25.000 --> 03:26.000
It's doing great.

03:26.000 --> 03:27.000
Send it to everybody.

03:27.000 --> 03:28.000
It's going well.
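
A minimal sketch of that check-and-sign flow (the real protocol is defined in MSC4284 and uses proper event signing; the HMAC here is just a stand-in, and all names are invented):

```python
import hashlib
import hmac

POLICY_KEY = b"policy-server-signing-key"  # stand-in for a real signing key

def policy_check(event):
    """Return a signed copy of the event if acceptable, else None."""
    if "spam" in event["body"]:
        return None  # rejected: the event never enters the room
    sig = hmac.new(POLICY_KEY, event["body"].encode(), hashlib.sha256).hexdigest()
    return dict(event, policy_signature=sig)

def verify(event):
    """Any other server (or client) can re-check the policy server's signature."""
    expected = hmac.new(POLICY_KEY, event["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(event.get("policy_signature", ""), expected)

approved = policy_check({"body": "hello room"})
rejected = policy_check({"body": "spam spam spam"})
```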

03:28.000 --> 03:31.000
The policy server, however, says don't.

03:31.000 --> 03:35.000
That's not content that I'm okay with for this room.

03:35.000 --> 03:38.000
Well, that's where the diagram stops.

03:38.000 --> 03:42.000
It doesn't ever reach the room at least hopefully.

03:42.000 --> 03:44.000
But in current room versions.

03:44.000 --> 03:48.000
Because matrix has room versions to sort of segment this functionality.

03:48.000 --> 03:54.000
There may be a case where that server just sends it anyways.

03:54.000 --> 03:58.000
And in that case, you go back to your layering of tooling.

03:58.000 --> 04:00.000
So you might still have a moderation bot.

04:00.000 --> 04:02.000
You might have a policy server at the same time.

04:02.000 --> 04:07.000
Your moderation bot will catch that event that server D has just sent anyways.

04:07.000 --> 04:09.000
The policy server said don't bother.

04:09.000 --> 04:11.000
And there it goes.

04:11.000 --> 04:14.000
The moderation bot will take care of it.

04:14.000 --> 04:19.000
So that is the rough architecture of how Matrix can work.

04:19.000 --> 04:22.000
At least in a public sort of room.

04:22.000 --> 04:26.000
But there's other tooling that's available for these types of communities.

04:26.000 --> 04:30.000
So in public rooms, or at least public communities.

04:30.000 --> 04:34.000
We consider communities to be a collection of rooms or an individual room.

04:34.000 --> 04:36.000
It could be a whole server.

04:36.000 --> 04:41.000
There's no specific technical definition of what is public.

04:41.000 --> 04:45.000
Obviously, rooms that, say, you can just join today

04:45.000 --> 04:48.000
are probably a little bit better for that.

04:48.000 --> 04:50.000
But you can use all sorts of different things.

04:50.000 --> 04:54.000
Like if you're in a room-based community: moderation bots, obviously, policy servers.

04:54.000 --> 04:55.000
Server ACLs.

04:55.000 --> 05:00.000
That's where you can prevent other servers from posting or joining the room.

05:00.000 --> 05:03.000
You might also have like non-technical things like code of conduct.

05:03.000 --> 05:05.000
Community guidelines.

05:05.000 --> 05:11.000
That sort of stuff to sort of guide people into a more social kind of moderation there.

05:11.000 --> 05:14.000
And obviously bans, redactions, silences.

05:14.000 --> 05:15.000
That sort of stuff.

05:15.000 --> 05:19.000
On the server side, we are looking at something called search redirection.

05:19.000 --> 05:30.000
So when somebody is looking for a room, they might be using keywords that aren't necessarily useful or productive for that particular server-based community.

05:30.000 --> 05:37.000
So you can just simply say, there are no results and redirect them elsewhere.

05:37.000 --> 05:42.000
This is primarily helpful for CSAM-style redirection.

05:42.000 --> 05:50.000
So we're trying to prevent searches for CSAM, and offering help for people, say, to go to stopitnow.org.uk.
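
Search redirection can be sketched like this (invented names and a placeholder keyword; stopitnow.org.uk is the kind of help resource being described):

```python
# Toy room-directory search that hides matches for blocked terms and
# redirects the searcher to a help resource instead.
BLOCKED_TERMS = {"blocked-term"}        # placeholder for the real keyword list
HELP_URL = "https://stopitnow.org.uk"   # redirection target

ROOM_DIRECTORY = {
    "#matrix:example.org": "Matrix HQ",
    "#blocked-term-room:example.org": "a room we do not want surfaced",
}

def search_rooms(query):
    if any(term in query.lower() for term in BLOCKED_TERMS):
        # Pretend there are no results, and point the searcher at help.
        return {"results": [], "redirect": HELP_URL}
    hits = [alias for alias in ROOM_DIRECTORY if query.lower() in alias]
    return {"results": hits, "redirect": None}
```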

05:51.000 --> 05:54.000
Hasher Matcher Actioner, or HMA, is from Meta.

05:54.000 --> 06:03.000
It's an open source thing that allows you to basically just dump a bunch of content hashes into a giant server somewhere.

06:03.000 --> 06:06.000
And you can upload more hashes to see if it matches.

06:06.000 --> 06:10.000
And then you can take action against them, hence the name.
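
The hash-match-act loop can be sketched like this (a toy using a plain SHA-256 over bytes; the real HMA works with perceptual hashes such as PDQ, which survive re-encoding and resizing):

```python
import hashlib

HASH_BANK = set()  # hashes of known-bad content, contributed by partners

def add_to_bank(content: bytes):
    HASH_BANK.add(hashlib.sha256(content).hexdigest())

def check_and_act(content: bytes) -> str:
    """Hash the upload, match it against the bank, and act on a match."""
    digest = hashlib.sha256(content).hexdigest()
    return "blocked" if digest in HASH_BANK else "allowed"

add_to_bank(b"known-bad-image-bytes")
```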

06:10.000 --> 06:14.000
We are currently in the process of deploying this on Matrix.org.

06:14.000 --> 06:19.000
Not necessarily for the whole server, but at least for our community-based rooms.

06:19.000 --> 06:27.000
So whenever an image is posted, we can actually start to see, like, does it match certain types of content.

06:27.000 --> 06:30.000
Do we want that content in our room? Probably not.

06:30.000 --> 06:33.000
And we can just prevent it from happening.

06:33.000 --> 06:36.000
Either through policy servers or moderation bots, or both.

06:36.000 --> 06:40.000
And again, you can continue going through a lot of the sort of social,

06:40.000 --> 06:43.000
moderation-style techniques.

06:43.000 --> 06:46.000
You have terms of service, that sort of stuff.

06:46.000 --> 06:49.000
You also have your reporting tools, which might lead into more of the ROOST tooling,

06:49.000 --> 06:52.000
which I believe they have a talk after me on that.

06:52.000 --> 06:57.000
So I will leave them to talk about, I believe, Osprey and Coop.

06:57.000 --> 07:01.000
And so, yeah, Coop is their investigations tool, Osprey is a rules-based engine.

07:01.000 --> 07:06.000
Other way around, I'm being told, I'll update the slide.

07:06.000 --> 07:11.000
But yeah, the rules-based engine is very similar to policy servers in Matrix.

07:12.000 --> 07:19.000
If your community is not quite public, but pretty public, it might be called a near-public community.

07:19.000 --> 07:23.000
So this is where you might have the same sort of tooling from before,

07:23.000 --> 07:27.000
but you might also consider something like a join gate, or join rules,

07:27.000 --> 07:33.000
where the idea is, as people are trying to join your room,

07:33.000 --> 07:40.000
you are simply telling them, hey, you need to, like, agree to the community guidelines.

07:40.000 --> 07:45.000
You need to pass a captcha or something before you can join.

07:45.000 --> 07:50.000
And that can limit a little bit of the sort of join flooding that can happen in some of these communities.
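
A join gate of that kind might look roughly like this (invented names, not a real Matrix API): require agreement to the guidelines, and rate-limit joins to blunt join floods.

```python
from collections import deque

class JoinGate:
    def __init__(self, max_joins_per_window=3, window_seconds=60):
        self.max_joins = max_joins_per_window
        self.window = window_seconds
        self.recent = deque()  # timestamps of recently accepted joins

    def try_join(self, now, agreed_to_guidelines):
        if not agreed_to_guidelines:
            return "denied: accept the community guidelines first"
        # Drop joins that have fallen out of the rate-limit window.
        while self.recent and now - self.recent[0] > self.window:
            self.recent.popleft()
        if len(self.recent) >= self.max_joins:
            return "denied: too many joins right now, try again shortly"
        self.recent.append(now)
        return "joined"

gate = JoinGate()
results = [gate.try_join(t, True) for t in (0, 1, 2, 3)]
```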

07:50.000 --> 07:54.000
You also have power levels; these are your moderators, your admins, those sorts of things

07:54.000 --> 07:59.000
that allow you to expand upon your actual capabilities in that room.

07:59.000 --> 08:06.000
Obviously, topics, pinned events, that sort of stuff are where you can describe a lot of those sorts of rules within your community.

08:06.000 --> 08:10.000
And again, that's more of that social moderation sort of stuff.

08:10.000 --> 08:15.000
Invite links, we're working on it, but that's the idea of, like, hey, you can't quite join yet.

08:15.000 --> 08:18.000
You need to have a specific link to be able to join.

08:18.000 --> 08:22.000
Obviously, on the server side, you have rate limits, anti-spam webhooks,

08:22.000 --> 08:29.000
that sort of stuff that can actually help bring more of that tooling in.

08:29.000 --> 08:38.000
As we continue down the sort of four categories that we at least consider for moderation or community types, we have near private.

08:38.000 --> 08:44.000
This is where you start getting into those, like, more exclusive, not quite everybody can join,

08:44.000 --> 08:47.000
but, like, somebody might be able to get in with an invite.

08:47.000 --> 08:51.000
These might be encrypted, they might not be. Moderation bots come up again,

08:51.000 --> 08:55.000
because moderation bots are just useful for managing your different communities,

08:55.000 --> 08:58.000
because a lot of these bots support changing those power levels,

08:58.000 --> 09:01.000
changing who's a moderator, who's an admin, that sort of stuff.

09:01.000 --> 09:04.000
Some of them allow you to synchronize that as well.

09:04.000 --> 09:09.000
Join gates also continue to sort of make an appearance here, because, again,

09:09.000 --> 09:13.000
you can use that to limit who is able to actually join that room.

09:13.000 --> 09:16.000
Same thing on the server side, you start getting into registration tokens,

09:16.000 --> 09:21.000
very specific invite links, just to be able to create an account on that server.

09:21.000 --> 09:27.000
And, again, tooling comes in and starts to make that a little bit easier,

09:27.000 --> 09:31.000
so you can start to manage that community, start to see what's kind of happening,

09:31.000 --> 09:35.000
and making sure that it's as safe as it can be.

09:35.000 --> 09:42.000
Obviously, we also have private as the end of the four taxonomies here.

09:42.000 --> 09:47.000
It's very similar to near private: you're going to want probably encryption,

09:47.000 --> 09:52.000
the moderation bots, power levels, registration tokens, invite links, that sort of stuff.

09:52.000 --> 09:57.000
ESS Pro is also where you're going to start to get some more of that admin functionality

09:57.000 --> 10:02.000
to manage that sort of community, suspend accounts, that sort of stuff.

10:02.000 --> 10:09.000
Of course, it would not be a FOSDEM talk unless we had an actual demo.

10:09.000 --> 10:16.000
So, if you'll forgive me for using Windows, I'm just going to start a bot here.

10:16.000 --> 10:33.000
So, this is essentially what like a moderation view might look like in a just a standard sort of community here.

10:33.000 --> 10:41.000
I'm just going to make sure that I'm muted, so that way Element does not send notifications to the live stream.

10:41.000 --> 10:45.000
So, you'll have a moderation internal room typically.

10:45.000 --> 10:50.000
This is where your tooling might sit. So, apologies if any of this breaks.

10:50.000 --> 10:53.000
I am on the FOSDEM Wi-Fi, testing in production.

10:53.000 --> 10:58.000
So, this is going great.

10:58.000 --> 11:03.000
But yeah, deep, deep do.

11:03.000 --> 11:06.000
We'll just refresh Element, and hope for the best.

11:06.000 --> 11:10.000
All right, do I have at least a policy server?

11:11.000 --> 11:13.000
Cool, I have a policy server.

11:13.000 --> 11:16.000
Well, that's all the things that I really need.

11:16.000 --> 11:22.000
So, we will hope that that is working, but yeah.

11:22.000 --> 11:27.000
So, conveniently, there's some spam in my general chat here.

11:27.000 --> 11:34.000
It looks like what they're posting is pretty similar.

11:34.000 --> 11:37.000
The accounts are pretty similar as well.

11:37.000 --> 11:43.000
So, we will first, oh, there's my moderation bot.

11:43.000 --> 11:45.000
All right, we're back.

11:45.000 --> 11:50.000
So, yeah, we're first just going to see, like, oh, I have a keyword filter currently in my policy server.

11:50.000 --> 11:55.000
There's a bunch of different filters available.

11:55.000 --> 12:01.000
Just to make sure I just see like there's a lot that you can do with a policy server.

12:01.000 --> 12:02.000
This is our implementation.

12:02.000 --> 12:06.000
This is not all of the things. You can enable different trusted sources.

12:06.000 --> 12:07.000
You can disable media.

12:07.000 --> 12:09.000
You can enable media for some people.

12:09.000 --> 12:14.000
You can check message density, length, all that sort of stuff.

12:14.000 --> 12:17.000
We're just doing the simple word.

12:17.000 --> 12:25.000
We're just going to change the keywords to ban our particular type of spam.

12:25.000 --> 12:29.000
That's going on here, especially as it's getting quite high in the numbers there.

12:29.000 --> 12:31.000
It says it's going to take a minute.

12:31.000 --> 12:32.000
I'm going to hope that it's not.

12:32.000 --> 12:37.000
And if I switch conveniently over to my little spam bot script here,

12:37.000 --> 12:41.000
you can see that it's starting to actually get rejected when it's trying to send these messages

12:41.000 --> 12:42.000
So earlier,

12:42.000 --> 12:43.000
It was very successful.

12:43.000 --> 12:48.000
And now it's just getting told, hey, that was rejected as probable spam.

12:48.000 --> 12:54.000
That's, you know, that message has been denied by the policy server.
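
From the sending side, that looks roughly like this (the exact wording is illustrative, though M_FORBIDDEN is a real Matrix error code for rejected sends):

```python
def send_message(policy_ok):
    """Stand-in for a /send request; returns the parsed server response."""
    if policy_ok:
        return {"event_id": "$abc123"}  # accepted: the event gets an ID
    return {
        "errcode": "M_FORBIDDEN",
        "error": "Message rejected as probable spam",
    }

accepted = send_message(policy_ok=True)
rejected = send_message(policy_ok=False)
```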

12:54.000 --> 12:58.000
Which also means that in theory, if I go back to element here,

12:59.000 --> 13:01.000
the messages have stopped.

13:01.000 --> 13:04.000
We've been waiting about a minute here.

13:04.000 --> 13:07.000
It's been about a minute since the last message.

13:07.000 --> 13:08.000
The messages are done.

13:08.000 --> 13:12.000
But I still have some number of messages to clean up.

13:12.000 --> 13:18.000
This is where I might use my moderation bot to sort of get rid of those users.

13:18.000 --> 13:22.000
So I'm going to ban those users.

13:22.000 --> 13:24.000
I'm going to ban all of them.

13:25.000 --> 13:31.000
And then I'm also going to use a feature of a lot of these moderation bots to both ban and

13:31.000 --> 13:33.000
redact all of their events.

13:33.000 --> 13:37.000
So if I use the ban reason of "spam" here,

13:37.000 --> 13:43.000
that should cause the auto-redaction to happen.

13:43.000 --> 13:46.000
So again, we're just going to hope.

13:46.000 --> 13:51.000
Oh, yeah, that's great.

13:52.000 --> 14:01.000
Apparently I did not set the default list in my moderation bot here, which is great prep.

14:01.000 --> 14:06.000
So we want to just try that again.

14:06.000 --> 14:07.000
There we go.

14:07.000 --> 14:09.000
So yeah, it's banned a couple of users there.

14:09.000 --> 14:13.000
If we peek through the spoilers, it looks like it got both of the spam accounts that happen

14:13.000 --> 14:14.000
to be there.

14:14.000 --> 14:18.000
And if I go back to the general chat, it's done the thing.

14:18.000 --> 14:19.000
That's been removed.

14:19.000 --> 14:25.000
And you know, Element is just backfilling to a few days ago when I started setting this up.

14:25.000 --> 14:28.000
So yeah, that is.

14:28.000 --> 14:31.000
Thank you.

14:31.000 --> 14:35.000
So that is moderation tooling and action very briefly.

14:35.000 --> 14:41.000
We will continue on to the last couple slides here before going to questions.

14:41.000 --> 14:45.000
So next up for the foundation is we're hoping to actually finish the FCP on

14:46.000 --> 14:50.000
MSC4284. FCP is final comment period.

14:50.000 --> 14:53.000
It's currently in review with the spec core team.

14:53.000 --> 14:58.000
So hopefully in the next few weeks, we will see it actually get accepted into the spec.

14:58.000 --> 15:10.000
And then it's a matter of actually converting that 700-line MSC into real spec for everybody else to implement and get it stable and all that fun stuff.

15:11.000 --> 15:17.000
We're also hoping to actually implement that in a later room version, making those signatures required from the policy servers.

15:17.000 --> 15:20.000
That way we don't necessarily need to layer the tooling as much.

15:20.000 --> 15:28.000
And we can instead just require that, hey, if you don't have a signature on that event, it just can't be sent.

15:28.000 --> 15:30.000
It's not accepted into the room.

15:30.000 --> 15:31.000
It doesn't exist.
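
In other words, the authorization rule in that later room version might look like this sketch (names invented; the real rule will be whatever the accepted MSC specifies):

```python
def accept_into_room(event, room_requires_policy_signature=True):
    """Event authorization check: no policy-server signature, no entry."""
    if room_requires_policy_signature and "policy_signature" not in event:
        return False  # never enters the room, so there is nothing to redact
    return True
```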

15:31.000 --> 15:37.000
We're also going to continue exploring a lot of the open source tooling that's available, primarily through ROOST,

15:37.000 --> 15:40.000
where the Foundation is one of the partners.

15:40.000 --> 15:46.000
So yeah, we've currently got Osprey most of the way into our staging environment.

15:46.000 --> 15:53.000
We are working on trying to figure out how we can integrate Coop with a lot of our stuff.

15:53.000 --> 15:56.000
HMA is currently in progress.

15:56.000 --> 16:03.000
We're hoping that it'll actually help us turn media back on in our rooms because we've disabled media for close to a year now.

16:03.000 --> 16:11.000
And we would like to stop seeing screenshots showing up in our action log on the foundation side.

16:11.000 --> 16:18.000
So we know it's important for just doing support and trying to get that support.

16:18.000 --> 16:23.000
So yeah, we want to be able to turn that media back on.

16:23.000 --> 16:27.000
We also want to support something that we're calling server-centric communities in policyserv.

16:27.000 --> 16:33.000
So these are the ideas of putting join gates and search redirection possibly through policyserv.

16:33.000 --> 16:44.000
That's not necessarily to say that policy servers in general will support these features, but at least the Foundation's implementation will experiment with that and see how it goes.

16:44.000 --> 16:53.000
Finally, if you want to actually set up a policy server for your community on Matrix, you can follow these six lovely commands.

16:54.000 --> 16:59.000
Essentially, you just invite the bot here to an encrypted room.

16:59.000 --> 17:02.000
Set up your community, start to apply for some rooms.

17:02.000 --> 17:05.000
That'll send some notifications to us on the Trust and Safety team.

17:05.000 --> 17:15.000
We will most likely approve it assuming there's nothing terribly concerning in that application.

17:15.000 --> 17:17.000
Once we approve it, it should be fine.

17:17.000 --> 17:21.000
You can set up all of your filters ahead of the approval as well.

17:21.000 --> 17:27.000
If you have any questions or support or anything like that, just join our room on Matrix, and we will be happy to help.

17:27.000 --> 17:32.000
Other than that, questions, concerns, complaints?

17:32.000 --> 17:35.000
What can users do if their messages get blocked?

17:35.000 --> 17:47.000
Because, you know, the algorithms on social media often filter out stuff that's okay, that's not bad.

17:47.000 --> 17:56.000
Yeah, so the question is, what can users do if they get something blocked that maybe shouldn't have been blocked, is that correct?

17:56.000 --> 18:11.000
Yeah, so a lot of these, like with policy server in particular, a lot of the filters are static analysis, so it might falsely flag an event.

18:12.000 --> 18:22.000
And so there are some things that, like, particularly moderators and community admins can do to adjust those filters to allow those events.

18:22.000 --> 18:30.000
But unfortunately, like, if the policy server says like that event can't be sent, it does just get rejected.

18:30.000 --> 18:35.000
Same thing with the moderation bot: if a moderation bot redacts an event, it is redacted.

18:35.000 --> 18:43.000
You can try resending the event. In our Foundation rooms, if you try to send an event that was rejected by the policy server,

18:43.000 --> 18:52.000
it typically puts you on a few minutes' silence just to make sure that any sort of follow-on events don't go through.

18:52.000 --> 19:03.000
So you can try again in a few minutes, try adjusting the content in particular, just to get around some of those other, I guess, more strict filters.

19:03.000 --> 19:06.000
Mm.

19:06.000 --> 19:07.000
Other questions?

19:07.000 --> 19:09.000
I'll go over there.

19:09.000 --> 19:16.000
So maybe the elephant in the room here: haven't we now introduced a single point of failure in our lovely federated network,

19:16.000 --> 19:18.000
if you're using a policy server?

19:18.000 --> 19:23.000
Yeah, so the question is, elephant in the room, is there a single point of failure?

19:23.000 --> 19:24.000
Yes and no.

19:24.000 --> 19:31.000
The policy server, when you configure it in a room, is completely optional.

19:31.000 --> 19:36.000
You don't have to use it. A lot of communities might find that they don't need to use it.

19:36.000 --> 19:42.000
And the policy server itself is not allowed to check its own configuration in that room.

19:42.000 --> 19:52.000
So if the policy server goes down and your community finds that it can't communicate anymore, you can just remove the policy server from the room and then your communication is restored.

19:52.000 --> 19:55.000
That's also important when you have layer tooling.

19:55.000 --> 20:02.000
So that way your moderation bots can still take care of things. It might be a little bit more manual, it might take a little bit more time.

20:02.000 --> 20:13.000
But you can still use the existing tooling to ensure that your community is kept safe there.

20:13.000 --> 20:21.000
It does put a little bit of strain on the policy server itself, because obviously if you have a large room,

20:21.000 --> 20:36.000
say Matrix HQ with a couple thousand servers, then all of those servers will have to go to your policy server to see if their events can be posted to the room.

20:36.000 --> 20:43.000
But again, policy servers are intentionally designed to support that large amount of traffic.

20:43.000 --> 20:46.000
I believe there were other questions over here.

20:46.000 --> 20:47.000
Matthew.

20:47.000 --> 20:54.000
Yeah, I have two questions. Firstly, can you have multiple policy servers?

20:54.000 --> 20:55.000
Okay.

20:55.000 --> 21:01.000
And then second question, what happens if I email abuse at matrix.org? Is it a black hole or not?

21:01.000 --> 21:13.000
Fair enough. So yeah, first question is, can you have multiple policy servers? Currently, no. There are some complexities around trying to figure out what that means.

21:13.000 --> 21:26.000
If you have multiple policy servers or multiple layers of tooling, then you have the question of whether basically the least or the most restrictive configuration wins.

21:26.000 --> 21:37.000
So when your policy server, let's say you have one policy server that allows basically anything, and then you have another policy server that allows basically nothing.

21:37.000 --> 21:46.000
Which one should win is kind of the question. So we're kind of deferring that to a later MSC at this point.

21:46.000 --> 21:49.000
But it is something that we are thinking about.

21:49.000 --> 21:56.000
I believe the second question was, what happens when you email abuse at matrix to org, which is our trust and safety email address?

21:56.000 --> 22:00.000
Is it a black hole? Hopefully not.

22:00.000 --> 22:08.000
We do investigate as many reports as we can. We don't necessarily reply to all of those reports.

22:08.000 --> 22:14.000
But if you do get the auto responder email, then it is in our inbox. We are seeing it. It has a ticket number.

22:14.000 --> 22:18.000
We will take action against it if necessary.

22:18.000 --> 22:29.000
It just might take us a little bit of time. We receive a lot of reports every week, and we have a whole team that is responsible for ensuring that those reports are taken care of

22:29.000 --> 22:33.000
and go through the various stages and that sort of stuff.

22:33.000 --> 22:42.000
You might also not see that the action taken on your report, or you might not see the action taken against the subject of your report,

22:42.000 --> 22:48.000
because we might suspend an account, block it, deactivate it, that sort of stuff.

22:48.000 --> 22:58.000
We might also remove rooms. We might remove media, and we might not necessarily advertise that we have done that to the outside world.

22:58.000 --> 23:01.000
Any other questions?

23:01.000 --> 23:02.000
Nuxie?

23:02.000 --> 23:10.000
Yes. Is it intentional that you lose the availability of the room when the policy server goes down?

23:10.000 --> 23:20.000
So, yeah, once again, the question was, is it intentional that you lose availability of the room when the policy server goes down?

23:20.000 --> 23:29.000
So, again, like, the policy server is not allowed to check its own configuration in the room, so you can just remove the policy server from the room.

23:29.000 --> 23:38.000
There are two ways of doing that. You can just change the configuration to not have a policy server, or you can remove its users from the room.

23:38.000 --> 23:39.000
Yeah.

23:39.000 --> 23:43.000
I'm not asking whether you can get out of the situation.

23:43.000 --> 23:48.000
I'm asking whether it's intentional, like, fail-safe by design?

23:48.000 --> 23:50.000
Yeah.

23:50.000 --> 23:55.000
So, the question is, I guess, whether or not it's intentionally fail-safe.

23:55.000 --> 24:04.000
Currently, yes. We do intend that the expected behavior is when a policy server goes down, that all messages are effectively rejected,

24:04.000 --> 24:10.000
because you do not have those signatures. You do not have a way to get those signatures from that policy server.

24:11.000 --> 24:17.000
So, policy servers, in general, are expected to have a high availability component to them.

24:17.000 --> 24:25.000
At the Matrix.org Foundation, we have a significantly high-availability setup for our instance of policyserv,

24:25.000 --> 24:34.000
just to make sure that it can support a lot of those communities in those sorts of environments.

24:34.000 --> 24:37.000
How will they notice that a policy server is down?

24:37.000 --> 24:42.000
I don't know, just noticing that they're almost too required.

24:42.000 --> 24:49.000
So, how will a, I guess, a community understand if the policy server is down?

24:49.000 --> 24:54.000
Hopefully there are status pages, or, ideally, they just have a high uptime.

24:54.000 --> 25:01.000
But aside from that, a lot of users might report that, like, hey, I can't send messages,

25:01.000 --> 25:06.000
or the policy server does support webhooks. I didn't demonstrate it here.

25:06.000 --> 25:13.000
But it can send webhooks back to your community to say, like, hey, this message was rejected under these filters for this reason.

25:13.000 --> 25:16.000
It was sent at this time, all that sort of stuff.
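
A rejection webhook of that kind might carry a payload like this (field names are invented; policyserv's actual schema may differ):

```python
import json

def build_rejection_webhook(room_id, sender, filter_name, reason, ts):
    """Build the JSON body a policy server might POST to a community's hook."""
    return json.dumps({
        "room_id": room_id,
        "sender": sender,
        "filter": filter_name,  # which filter fired
        "reason": reason,       # why the event was rejected
        "rejected_at": ts,      # when it was rejected
    })

hook = build_rejection_webhook(
    "!room:example.org", "@spammer:example.org",
    "keyword", "matched banned keyword", 1700000000)
```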

25:16.000 --> 25:22.000
So, a community might notice that they're getting a lot of notifications all of a sudden, where people are

25:22.000 --> 25:26.000
sending completely benign messages and being told, hey, it's not quite working,

25:26.000 --> 25:30.000
and that might be an indication of a filter failure, that sort of stuff.

25:30.000 --> 25:35.000
So, you might notice a lot more activity on the sort of failure front.

25:35.000 --> 25:40.000
And particularly, like, your community admins might no longer be able to post,

25:40.000 --> 25:46.000
and that would be a very clear sign of, like, hey, wait a minute, it's not working.

25:46.000 --> 25:50.000
So, that would be the general expectation.

25:50.000 --> 25:53.000
Are there any questions in the chat?

25:53.000 --> 26:00.000
There's one from BGT level, who asks: you had a script that was sending spam messages.

26:00.000 --> 26:04.000
After it was getting blocked, it said the policy server

26:04.000 --> 26:06.000
is blocking your event as a possible spam.

26:06.000 --> 26:10.000
How is that communicated? Is it a redaction reason sent to the client?

26:10.000 --> 26:16.000
So, the question is, I guess, from the chat: in the script,

26:16.000 --> 26:21.000
is the redaction reason being sent to the client?

26:21.000 --> 26:25.000
And it looks like it's, you know, it's been running this whole time,

26:25.000 --> 26:31.000
and has been being told, well, it's not in the room anymore.

26:31.000 --> 26:39.000
With the policy server, when it is rejecting an event,

26:39.000 --> 26:44.000
the client in this case, like it happens to be a script, so it just receives the error message directly,

26:44.000 --> 26:48.000
but there is no redaction reason, because the event has not made it to the room.

26:48.000 --> 26:53.000
It has not joined the DAG, it has not been accepted into the room.

26:53.000 --> 26:58.000
So, the client just receives an error, currently on element,

26:58.000 --> 27:01.000
that error is a mere "message failed to send."

27:01.000 --> 27:03.000
We are working on improving that.

27:03.000 --> 27:09.000
We want to expose more of the types of harm that the policy server

27:09.000 --> 27:15.000
has found in the message, so we can better communicate how you can correct that behavior.

27:15.000 --> 27:20.000
But again, it's not actually sent to the room.

27:20.000 --> 27:23.000
Yeah.

27:23.000 --> 27:31.000
So, did you consider not telling the sender

27:31.000 --> 27:35.000
that these messages have been rejected, so they think it's been successful?

27:35.000 --> 27:41.000
Question being, did we consider not telling the sender that the message has been rejected?

27:42.000 --> 27:44.000
Yes, we did consider it.

27:44.000 --> 27:49.000
We have found that ineffective in some of our other experiments,

27:49.000 --> 27:55.000
particularly shadow banning at registration, where most people,

27:55.000 --> 28:02.000
well, most spammers will figure out pretty quickly that their messages are being sent,

28:02.000 --> 28:06.000
and just not actually making it to other users.

28:07.000 --> 28:11.000
There are also some complexities in how that might work at the DAG level,

28:11.000 --> 28:17.000
because what we're also trying to prevent here is the room itself,

28:17.000 --> 28:20.000
becoming larger and larger and larger, filled with spam,

28:20.000 --> 28:22.000
and then you're having to persist that.

28:22.000 --> 28:27.000
And so it's ideally a situation where you can just prevent it from joining,

28:27.000 --> 28:33.000
or being accepted into the room at all, and then it becomes just,

28:33.000 --> 28:37.000
you just don't have to store the events, particularly during large volumes,

28:37.000 --> 28:39.000
spam waves, and that's sort of stuff.

28:39.000 --> 28:41.000
Then, questions?

28:41.000 --> 28:42.000
Yeah.

28:42.000 --> 28:46.000
What about media? Is it only text you check, or

28:46.000 --> 28:52.000
also media, so it doesn't have explicit content in there?

28:52.000 --> 28:57.000
Question is, does it also check media on the policy server?

28:57.000 --> 28:58.000
Yes, they can.

28:58.000 --> 29:01.000
So our policy server implementation,

29:02.000 --> 29:05.000
does have the capability of detecting media types,

29:05.000 --> 29:10.000
and scanning that media with HMA currently,

29:10.000 --> 29:15.000
and then, yeah, you can do all sorts of fun stuff with that.

29:15.000 --> 29:21.000
Our current configuration in the Foundation's public rooms is to just detect the media type,

29:21.000 --> 29:24.000
and just say no, it doesn't scan it, it doesn't do anything,

29:24.000 --> 29:27.000
it just, it just outright blocks it,

29:27.000 --> 29:32.000
but we're hoping to relax that restriction through various trusted sources,

29:32.000 --> 29:39.000
so we can see that, like, oh, this event is probably from a trusted source.

29:39.000 --> 29:41.000
We're going to scan it with HMA anyways,

29:41.000 --> 29:46.000
just check it against some known content banks, and just see, like, oh, it's,

29:46.000 --> 29:52.000
it's not there, great, it's probably okay to sort of let it go into the room,

29:53.000 --> 30:01.000
but we will see, or yeah, we won't rely on that.

30:01.000 --> 30:04.000
I believe I have 30 seconds.

30:04.000 --> 30:06.000
We're going to just call it there.

30:06.000 --> 30:10.000
If you have further questions, I'll probably be hanging around the Matrix stand,

30:10.000 --> 30:14.000
or just drop by our rooms on the Foundation side,

30:14.000 --> 30:18.000
and we will aim to answer them as quickly as we can.

30:18.000 --> 30:23.000
Thank you.

