WEBVTT

00:00.000 --> 00:14.000
Okay, hello, welcome to Stop Reinventing in Isolation: Bringing Open Source to Trust and Safety Infrastructure.

00:14.000 --> 00:18.000
First, I want to start out with a brief content warning.

00:18.000 --> 00:22.000
We're not going to be talking like in detail about certain things, but we are going to be

00:22.000 --> 00:27.000
talking abstractly about some of the sorts of harms that we encounter in trust and safety,

00:27.000 --> 00:32.000
including things like child abuse, violent content, self-harm, unwanted content, and other harms.

00:32.000 --> 00:35.000
So I just want to give people a chance to understand that's what we're going to be, kind of,

00:35.000 --> 00:40.000
abstractly talking about, in case that is sensitive for you.

00:40.000 --> 00:48.000
So this is a spoiler for what we're going to talk about, but first, who are we?

00:48.000 --> 00:53.000
We are ROOST. We're a nonprofit building open source tools for online safety.

00:53.000 --> 00:58.000
We are about one year old; we were formed about a year ago.

00:58.000 --> 01:03.000
And yeah, each of those words I like to always point out, oh, it's not full screen anymore.

01:03.000 --> 01:04.000
That's okay.

01:04.000 --> 01:05.000
It's fine.

01:05.000 --> 01:09.000
Each of those words is actually really important and really core to what we do.

01:09.000 --> 01:12.000
So there's a reason that each of those is in our name.

01:12.000 --> 01:14.000
So, more specifically, who are we?

01:14.000 --> 01:22.000
I'm Cassidy, I'm the community manager at ROOST, and this is Ann, who is the CEO of ROOST.

01:22.000 --> 01:26.000
And now I'm going to hand this over to Ann for a trust and safety crash course.

01:26.000 --> 01:29.000
All right. Excellent. We got this. We know that.

01:29.000 --> 01:32.000
Somebody this morning said the best thing to do at FOSDEM is improvise.

01:32.000 --> 01:34.000
So I feel like we're, you know, we're doing well.

01:34.000 --> 01:38.000
Yeah, so like Cassidy said, I think, you know, we've had some talks this morning about trust and safety.

01:38.000 --> 01:43.000
How many people don't feel like they are familiar with what that generally entails?

01:43.000 --> 01:47.000
Oh, and I mean, that's pretty decent, but there's a solid group of hands down.

01:47.000 --> 01:55.000
So to give context for the rest of the demos and things we're going to show, I'm going to do a super speed run on what it is that we talk about and what I'm referring to.

01:55.000 --> 02:00.000
Because I think sometimes we use very abstract words to not actually directly mention the things.

02:00.000 --> 02:04.000
And I know you all are probably responsible for instances and servers and bridges.

02:04.000 --> 02:08.000
And so that's why we want to share a little bit more of the specifics of what this work entails and what we talk about.

02:08.000 --> 02:16.000
So you can think, oh, I haven't thought about that as something that could happen on my platform or I haven't investigated enough of how might I prevent that.

02:16.000 --> 02:26.000
So, the super crash course. To frame things before we get into specific harms, you can just imagine... and Cassidy picked cats.

02:26.000 --> 02:33.000
So if you are a cat person, you can take it up with him.

02:33.000 --> 02:38.000
So, you know, think about it like: here's a thing that we've decided is a harm.

02:38.000 --> 02:41.000
This is not a good thing for our organization, our community.

02:41.000 --> 02:48.000
You might want to think about, how am I going to identify that? How am I going to block that? How am I going to watch what might be going on?

02:48.000 --> 02:52.000
To then decide: now it has hit a threshold where I need to block it.

02:52.000 --> 02:57.000
Someone earlier was talking about the difference between reactive and proactive detection.

02:57.000 --> 03:00.000
How are you going to find these cats in the first place?

03:00.000 --> 03:03.000
And, you know, cats are crafty little adapters.

03:03.000 --> 03:06.000
Look at how they went from big cheetahs to cute little things.

03:06.000 --> 03:12.000
You're going to have to similarly follow the evolution of a harm type.

03:12.000 --> 03:20.000
Our colleagues are incredibly brilliant, and they developed a framework around this to help you remember how this workflow works.

03:20.000 --> 03:26.000
What does this look like? They called it the DIRE framework, which is a little cheesy, because the situation is dire.

03:26.000 --> 03:29.000
But it is a very helpful memory tool.

03:29.000 --> 03:35.000
So you start at detection, like we were just talking about: you know, I need to be able to identify what's going on.

03:35.000 --> 03:43.000
I need to be able to investigate it, and not just find all the instances, but find the underlying pattern, you know, the question behind the question of,

03:43.000 --> 03:47.000
where are all these instances coming from? Why are they coming from the same place?

03:47.000 --> 03:50.000
Or maybe the same users, the same IP addresses.

03:50.000 --> 03:56.000
Things like that. You're then still going to need to review all those individual things and figure out how you're going to take action

03:56.000 --> 04:01.000
and enforce that, including things like, someone asked a great question about what appeals look like.

04:01.000 --> 04:06.000
What do audits look like? How do we know, reviewing as a group looking back,

04:06.000 --> 04:10.000
that we've made good decisions, or that actually this is a place where maybe, upon reflection,

04:10.000 --> 04:16.000
we should have made a different choice? Now we have those audit logs to enable that kind of self-reflection.
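
NOTE
A minimal Python sketch of the DIRE loop (Detect, Investigate, Review, Enforce) just described. Purely illustrative: this is not code from any ROOST project, and every name in it is hypothetical.
from collections import defaultdict
audit_log = []  # every decision is recorded, enabling appeals and retrospectives
def detect(event):
    # Detect: flag events matching a signal we have decided is a harm
    return "cat manifesto" in event["text"].lower()
def investigate(flagged):
    # Investigate: look for the pattern behind the hits, e.g. a shared IP
    by_ip = defaultdict(list)
    for e in flagged:
        by_ip[e["ip"]].append(e)
    return {ip: evs for ip, evs in by_ip.items() if len(evs) > 1}
def review_and_enforce(event, decision):
    # Review + Enforce: act, and write the decision to the audit log
    audit_log.append({"user": event["user"], "decision": decision})
events = [
    {"user": "u1", "text": "Read the cat manifesto!", "ip": "203.0.113.7"},
    {"user": "u2", "text": "lunch anyone?", "ip": "198.51.100.2"},
    {"user": "u3", "text": "THE CAT MANIFESTO, part 2", "ip": "203.0.113.7"},
]
clusters = investigate([e for e in events if detect(e)])
for ip, evs in clusters.items():
    for e in evs:
        review_and_enforce(e, f"remove: coordinated posting from {ip}")
print(audit_log)  # two removals, both traced back to 203.0.113.7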

04:16.000 --> 04:20.000
The other thing to keep in mind is that, you know,

04:21.000 --> 04:29.000
regulation around digital platforms is rapidly changing and depending on what jurisdiction your community is in,

04:29.000 --> 04:34.000
there might be laws where you are required to have things like an appeals process and audit logs.

04:34.000 --> 04:40.000
There might be laws around the types of material you are required to take down or to take certain action on.

04:40.000 --> 04:44.000
So in particular, we're talking about things like child sexual abuse material.

04:44.000 --> 04:46.000
If you're not familiar with that term,

04:46.000 --> 04:53.000
colloquially you might hear it called child pornography, but we use "child sexual abuse material" to distinguish it from situations involving consent,

04:53.000 --> 04:56.000
because a child obviously cannot consent.

04:56.000 --> 05:02.000
So that's the harm type CSAM. The same goes for terrorist and violent extremist content: in different jurisdictions,

05:02.000 --> 05:07.000
there are certain things that are defined as such, and you might be required to take action.

05:07.000 --> 05:13.000
This is a rapidly changing space. It's a little overwhelming, but it is something to keep in mind.

05:15.000 --> 05:21.000
The problem with all this is that everybody is trying to do essentially what is the same thing.

05:21.000 --> 05:26.000
Nothing about this, in our view, should really be a product differentiator of,

05:26.000 --> 05:31.000
you know, you come to my platform and the difference is you're not drowning in violent content and child abuse.

05:31.000 --> 05:34.000
That is not the state of the world.

05:34.000 --> 05:41.000
If we think about our digital spaces as public goods, as a very important part of what society means today,

05:41.000 --> 05:45.000
then safety also needs to be equally available to everyone.

05:45.000 --> 05:50.000
That's a really big idea behind ROOST's founding. But you can think about this in a couple of different ways:

05:50.000 --> 05:55.000
we're all trying to figure out what things we should be detecting, and how we're going to take action.

05:55.000 --> 05:57.000
What do these tools look like?

05:57.000 --> 06:04.000
And in trust and safety, for a long time, you know, people have been relying on existing vendor tools, proprietary tools.

06:04.000 --> 06:09.000
But what it means is that if someone says, well, I can't afford that, or I can't access that tool,

06:09.000 --> 06:15.000
they're sitting there writing very basic content review dashboards over and over and over again, rather than saying,

06:15.000 --> 06:18.000
I know that there's this particular group active on my platform.

06:18.000 --> 06:23.000
I really just want to dive into taking care of that; but instead I've got to start back with a very basic review panel.

06:23.000 --> 06:27.000
It's not a great use of time.

06:27.000 --> 06:36.000
So this is just a brief flash I want to put on the screen here, not to overwhelm anyone, but as you look at it,

06:36.000 --> 06:39.000
you might see something and say, oh, gosh, I'm not familiar with that.

06:39.000 --> 06:47.000
I might want to jot that down and think about doing a little bit more learning, figuring out: is that something that could happen on my platform,

06:47.000 --> 06:52.000
or if you go on to do further reading, these are just common terms that you might see come up.

06:52.000 --> 06:55.000
So we wanted to share the acronyms. In trust and safety,

06:55.000 --> 07:04.000
there's a lot of shorthand, there are a lot of acronyms, and we wanted to give folks a quick moment to think about the different things that might be happening.

07:04.000 --> 07:07.000
And speaking of learning: the last little part of our crash course.

07:07.000 --> 07:13.000
Rather than listing every single resource out there, because I think there are a ton of really wonderful and helpful things,

07:13.000 --> 07:20.000
I figured I would list maybe a "first stop" shop; you may be familiar with the phrase "one-stop shopping."

07:20.000 --> 07:28.000
A first stop: here are two great free resources from the Trust & Safety Professional Association that, if you were to look them up,

07:28.000 --> 07:32.000
you would probably find something and go, oh, I'm going to pull at that thread a little more.

07:32.000 --> 07:39.000
I'm going to go research more about this. It'll take you down some great rabbit holes, but I figured that is a good first stop for folks.

07:39.000 --> 07:48.000
So with that, I'm going to hand it back to Cassidy for exactly what we're doing.

07:48.000 --> 07:57.000
So yeah, let me talk a little bit about our nonprofit and open source approach and why we think that's the right way to do it for trust and safety.

07:57.000 --> 08:04.000
So yeah, the status quo, as the crash course touched on a little, is that trust and safety is kind of broken in the industry right now.

08:04.000 --> 08:09.000
Smaller platforms or newer protocols might not know that they need these tools.

08:09.000 --> 08:16.000
There aren't a lot of open resources that are easy to find, where you would just intuitively learn that you need these tools.

08:16.000 --> 08:24.000
And oftentimes industry tools can be unaffordable or inaccessible, depending on, you know, how big your organization is or what type of platform you are.

08:24.000 --> 08:37.000
And even when you can get access to those proprietary tools, a lot of times there are lengthy application and approval processes that are just really hard to follow through with when you're trying to actively, you know, combat abuse on your platform.

08:37.000 --> 08:42.000
And again, we talked about reinventing basic functionality instead of focusing on specific harms.

08:42.000 --> 09:02.000
But we're also seeing generative AI and LLMs accelerating the production and spread of harms. That's a very real thing we're hearing from platforms: moderators are already outnumbered by spam bots, and now the people doing the spamming are being given tools that can just rapidly generate more and more content.

09:02.000 --> 09:07.000
And in the trust and safety industry, openness has historically been kind of taboo.

09:07.000 --> 09:12.000
There's a sort of sense that if we talk about how we do trust and safety, then bad actors will figure out how to get around our protections.

09:12.000 --> 09:23.000
But as I hope we all know, security through obscurity is not really a valid approach; still, that mentality remains common in trust and safety.

09:23.000 --> 09:25.000
So what do we believe?

09:25.000 --> 09:29.000
We believe that the missing piece in all of this is really tangible tools.

09:29.000 --> 09:41.000
Actual tools, actual code that platforms can use; not more proprietary offerings, but actually building tools that can be used.

09:41.000 --> 09:49.000
And these tools can, and actually must, be open source: so that they are accessible, so that they are modifiable, so that they are auditable,

09:49.000 --> 09:57.000
and so that they're part of a public safety commons, so that we can actually all build these things and make them together.

09:57.000 --> 10:07.000
Which, again, I feel like this audience understands; but these are the sorts of things that are oftentimes kind of a novel concept in the trust and safety world.

10:07.000 --> 10:10.000
So ROOST, from the start, has been very open source.

10:10.000 --> 10:18.000
We're all in on open source: both in licensing, so it's technically open source, but also in the open source development model, community-driven development.

10:19.000 --> 10:39.000
If you've been here for a couple of the other talks this morning, there have been lots of mentions of HMA (Hasher-Matcher-Actioner), which originally came out of Meta. We've kind of taken on the community aspect of HMA, so we run the office hours and help people get together and contribute and collaborate on HMA.

10:39.000 --> 10:49.000
We also host something called the ROOST Model Community, which explores ways to use and improve open-weight models in trust and safety, using some of the best automation tooling.

10:49.000 --> 11:07.000
And ROOST itself, we really strive to be open source in every way possible: things like our roadmap, our community documentation, and our governance documents are openly available, and actually open for feedback from the community as well, because we really want to show that open source is the way to go and transparency is key.

11:08.000 --> 11:18.000
As far as specific projects we're working on, there's a project called Coop, which is actually going to be released very soon; it's a review console that we acquired and are actively working to open source.

11:18.000 --> 11:28.000
And then Osprey, which is an automated rules engine that actually originated at Discord, and that's what we're going to talk about a little bit more here. Which is still me, yes.

11:28.000 --> 11:37.000
So, Osprey. We like to say, and Julia kind of coined this term:

11:37.000 --> 11:44.000
"Automate the obvious and investigate the ambiguous." So think about how that works with trust and safety and what Osprey is.

11:44.000 --> 11:55.000
It's an automated rules engine. Again, it originated at Discord; this was something Discord actually built internally for their trust and safety tooling, and we open sourced it at ROOST,

11:55.000 --> 12:07.000
in partnership with Discord. And it's built for incident response: when you have an incident happening on your platform, it's how you can actually go in, respond to it, and adapt your approach.

12:07.000 --> 12:20.000
It's self-hosted: it's open source, and you host it on your own infrastructure. It's not a service we're running; it's a tool you host. It's highly scalable, with built-in load balancing, and it ingests tons of metadata.

12:20.000 --> 12:33.000
So events come in with not just the actual content of a post or a message: you can have image or video hashes for hash matching, you can have the actual text content, but also things like IP addresses and headers,

12:33.000 --> 12:43.000
basically all of the metadata so that you can then perform investigations to actually dig in and find the patterns of attack.
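
NOTE
An illustration of the kind of event payload a platform might feed into an Osprey-style engine. The field names here are invented for the example; they are not Osprey's actual schema.
event = {
    "action": "message_create",
    "user_id": "12345",
    "text": "FREE GIVEAWAY!! click here",
    "media_hashes": ["f0e1d2c3..."],  # perceptual hashes for hash matching
    "ip": "203.0.113.7",
    "headers": {"user-agent": "Mozilla/5.0 ..."},
    "account_created_at": "2026-01-31T09:12:00Z",
}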

12:44.000 --> 12:55.000
So when is it used? Going back to the cat analogy: let's say you have an emerging incident happening on your platform involving cats, which you've decided are evil and are not going to be allowed on your platform.

12:55.000 --> 13:07.000
Maybe there's word of a cat manifesto being disseminated on your platform; maybe accounts are linking to a cat-related site in their bio and spamming people to make them look at cats.

13:07.000 --> 13:20.000
Or maybe accounts are joining public channels and just spamming tons of photos of cats. Some of these might sound familiar if you're aware of things that are happening in the decentralized space.

13:20.000 --> 13:35.000
So Osprey helps in these incidents, for example by helping determine the jurisdiction of the incident to aid in reporting. Let's say you have something being posted and you want to know which local authorities you need to report it to.

13:35.000 --> 13:42.000
Like: this is illegal content, we need to report it somewhere. Where does that go? Where is this event happening?

13:42.000 --> 13:50.000
And also, you know, you can search for that cat manifesto, its title or key phrases, to create new labels and rules to automate things.

13:50.000 --> 13:58.000
And you can also perform really deep technical analysis and investigation using all that metadata we've mentioned.
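
NOTE
A sketch of the "key phrases become a rule" idea: a hypothetical rule that labels events quoting the (fictional) cat manifesto. This is plain Python pseudocode for the concept, not Osprey's actual rule language.
KEY_PHRASES = ("cat manifesto", "nine lives, one cause")
def label_event(event):
    labels = set()
    if any(p in event["text"].lower() for p in KEY_PHRASES):
        labels.add("cat_manifesto")  # downstream automations can act on this label
    return labels
print(label_event({"text": "Have you read the Cat Manifesto?"}))  # {'cat_manifesto'}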

13:58.000 --> 14:03.000
So we're going to do a demo, the fun part; whether it works on the FOSDEM Wi-Fi, we'll see.

14:03.000 --> 14:07.000
So, this should have been running in the background the whole time, so we have lots of spamming cats.

14:07.000 --> 14:15.000
Oh, how are we going to display this? Hmm, I'll drag it over; it's going to be fun to drag it.

14:15.000 --> 14:18.000
Do you want the mic?

14:18.000 --> 14:23.000
Okay, so this is Osprey.

14:27.000 --> 14:32.000
This is the dashboard for it.

14:32.000 --> 14:37.000
So you can see there's tons and tons of metadata coming in.

14:37.000 --> 14:43.000
The way you perform an investigation is you write a query.

14:43.000 --> 14:49.000
And so in the top left, there's the query; this is using a special query syntax.

14:49.000 --> 14:53.000
Some made-up language: SML. Great.

14:53.000 --> 14:56.000
It's SQL-ish, yeah.

14:57.000 --> 15:04.000
So we can have these different rules that are defined, actually, by the Osprey deployment running on your platform.

15:04.000 --> 15:07.000
You can define the rules that you have implemented.

15:07.000 --> 15:08.000
We're good.

15:08.000 --> 15:09.000
Okay.

15:09.000 --> 15:14.000
So we're going to search... we're saying there's an attack, with, like, scammers or spam bots or something.

15:14.000 --> 15:25.000
So we're going to search for a suspicious display name, which again, this is a rule that we've created on our platform that is implemented in the code.

15:25.000 --> 15:30.000
And we're going to see if the query runs on the Wi-Fi.

15:30.000 --> 15:34.000
It's good. Hey, something's happening.

15:34.000 --> 15:40.000
And yeah, so we're performing just a really simple check: hey, does it match this specific rule that we've created ahead of time?

15:40.000 --> 15:44.000
And you can see, yes, there are, there are posts coming in that match this rule.

15:44.000 --> 15:47.000
And here's the time chart of these different posts.

15:47.000 --> 15:53.000
You can break it down by day, hour, minute, et cetera.

15:53.000 --> 15:56.000
You can then also, like, drill down.

15:56.000 --> 16:05.000
So you say, okay, these were things that were flagged using this rule, but we want to drill down even further and check out the display names that were actually flagged.

16:05.000 --> 16:11.000
So we have, you know, users on our platform with display names like "free giveaway" and "DM me now" and "click here."

16:11.000 --> 16:22.000
These were flagged by the rule, but we're coming in and kind of confirming and labeling them maybe more specifically to aid in other automations.
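
NOTE
The drill-down just shown, expressed in plain Python over a handful of fake events: filter to events where the rule fired, then group by display name. Field and rule names are hypothetical; the demo itself uses Osprey's SQL-ish query language.
from collections import Counter
events = [
    {"display_name": "free giveaway", "rules": ["suspicious_display_name"]},
    {"display_name": "DM me now", "rules": ["suspicious_display_name"]},
    {"display_name": "alice", "rules": []},
]
flagged = [e for e in events if "suspicious_display_name" in e["rules"]]
print(Counter(e["display_name"] for e in flagged))
# Counter({'free giveaway': 1, 'DM me now': 1})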

16:22.000 --> 16:25.000
So, do all of these have labels?

16:25.000 --> 16:28.000
Yeah, here's one that got... no.

16:28.000 --> 16:31.000
That one shouldn't have had the label, that's fine.

16:31.000 --> 16:34.000
But you can go in and add a label.

16:34.000 --> 16:41.000
Let's say we're doing an investigation and you wanted to say, this one is, you know, a crypto scam, or this one is

16:41.000 --> 16:46.000
some other kind of scam; you can label it more specifically that way.

16:46.000 --> 16:47.000
The ends are also spam.

16:47.000 --> 16:50.000
Yeah.

16:50.000 --> 16:51.000
Oh, yes.

16:51.000 --> 16:53.000
And it supports expiration of these labels.

16:53.000 --> 16:57.000
So these labels can support enforcing different rules on your platform.

16:57.000 --> 17:00.000
And you can set an expiration.

17:00.000 --> 17:03.000
So maybe they'll be shadow banned for a certain amount of time,

17:03.000 --> 17:07.000
and then once that label expires, their posts would come through again.
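
NOTE
A minimal sketch of an expiring label, e.g. a time-boxed shadow ban: enforcement checks whether the label is still live, and once it lapses, posts flow again. Names are hypothetical; Osprey's real label model may differ.
from datetime import datetime, timedelta, timezone
def shadow_ban(days):
    return {"name": "shadow_ban",
            "expires_at": datetime.now(timezone.utc) + timedelta(days=days)}
def should_suppress(labels):
    now = datetime.now(timezone.utc)
    return any(l["name"] == "shadow_ban" and now < l["expires_at"] for l in labels)
user_labels = [shadow_ban(days=7)]
print(should_suppress(user_labels))  # True for the next 7 days, then False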

17:07.000 --> 17:08.000
Yeah.

17:08.000 --> 17:13.000
And you can see the somewhat overwhelming amount of metadata here that we're pumping in.

17:13.000 --> 17:16.000
Oh, that opened in a new tab.

17:16.000 --> 17:18.000
Optimizing.

17:19.000 --> 17:21.000
Yeah, this tool originated at Discord.

17:21.000 --> 17:23.000
So some of the loading messages are very fun.

17:23.000 --> 17:28.000
And when you delete a chart, you eat the chart, which, yeah, you know, it's fun.

17:28.000 --> 17:31.000
Whether we're going to keep "eat it" or not is open for comment.

17:31.000 --> 17:35.000
Yeah, eating the chart, yeah.

17:35.000 --> 17:37.000
It's an open source project.

17:37.000 --> 17:38.000
We might change it.

17:38.000 --> 17:41.000
So let's say you found a pattern here.

17:41.000 --> 17:45.000
Say a lot of the content is coming from the same IP address.

17:45.000 --> 17:49.000
So you can see that, and then you could drill down even further in

17:49.000 --> 17:53.000
Osprey to filter it by that IP address.

17:53.000 --> 17:55.000
So you could do the IP address equals.

17:55.000 --> 18:00.000
Yeah, it's so hard to... this is not the greatest demo environment.

18:00.000 --> 18:01.000
Oh, yes.

18:01.000 --> 18:02.000
It's a string.

18:02.000 --> 18:03.000
Yep.

18:03.000 --> 18:04.000
And then string.

18:04.000 --> 18:05.000
So you match that.

18:05.000 --> 18:10.000
Again, this is using the SML markup.
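
NOTE
The filter being typed in the demo is just a string-equality match on a metadata field. The first line guesses at the SQL-ish SML shape (illustrative only, not verified Osprey syntax); the rest shows the same filter in plain Python.
query = 'ip_address == "203.0.113.7"'  # hypothetical SML-style filter
events = [
    {"ip": "203.0.113.7", "text": "cat spam"},
    {"ip": "198.51.100.2", "text": "hello"},
]
matches = [e for e in events if e["ip"] == "203.0.113.7"]
print(len(matches))  # 1 -- drill down further from here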

18:10.000 --> 18:14.000
I'm looking at our time; we should accelerate the demo

18:14.000 --> 18:17.000
so we can do the rest of our content, because we started a little late.

18:17.000 --> 18:19.000
And we should answer questions.

18:19.000 --> 18:20.000
Yeah.

18:20.000 --> 18:21.000
Yeah.

18:21.000 --> 18:24.000
So it's a little complicated to demo, because there's so much going on here.

18:24.000 --> 18:30.000
But the idea is you can really drill down and investigate specific attacks and find patterns.

18:30.000 --> 18:35.000
And then you can go back and you can save these queries so that you can, you know,

18:35.000 --> 18:37.000
investigate and follow up on these investigations later.

18:37.000 --> 18:42.000
Every action that you perform in here, from labeling to any enforcement you take,

18:42.000 --> 18:45.000
is added to an audit log as well.

18:45.000 --> 18:50.000
So that helps with, you know, reporting and... yeah.

18:50.000 --> 18:52.000
I think we switch back to the,

18:52.000 --> 18:53.000
What's it?

18:53.000 --> 18:57.000
We're happy to demo more one-on-one if you find us after the talk as well,

18:57.000 --> 19:03.000
to, like, go through a more specific scenario, too.

19:03.000 --> 19:04.000
Okay.

19:04.000 --> 19:06.000
And we're going to find this slide back here.

19:06.000 --> 19:07.000
Great.

19:08.000 --> 19:11.000
That's a backup video.

19:11.000 --> 19:13.000
See, I'll just click the next one.

19:13.000 --> 19:14.000
Yes.

19:14.000 --> 19:15.000
All right.

19:15.000 --> 19:18.000
So, something important to mention:

19:18.000 --> 19:20.000
Osprey is production ready.

19:20.000 --> 19:25.000
It's actually being used in production today at both Discord and Bluesky.

19:25.000 --> 19:31.000
At Bluesky, they have reported that they're processing over 45 million events per day with Osprey,

19:32.000 --> 19:38.000
with over 100,000 daily enforcement actions being taken from just their first set of rules,

19:38.000 --> 19:40.000
kind of the first pass of,

19:40.000 --> 19:44.000
let's set up these rules and see how we can actually, uh, implement Osprey.

19:44.000 --> 19:46.000
So it's being used in production there.

19:46.000 --> 19:49.000
And at Discord: again, this tool originated at Discord before it was open sourced,

19:49.000 --> 19:53.000
and now they've reintegrated the open source version at Discord.

19:53.000 --> 19:56.000
And they handle orders of magnitude more events than that

19:56.000 --> 19:57.000
being processed every day.

19:57.000 --> 20:00.000
So it's truly scalable, used in production, and production ready.

20:01.000 --> 20:04.000
It's also being used by matrix.org right now.

20:04.000 --> 20:06.000
They're using it, I think, in staging right now,

20:06.000 --> 20:09.000
but they're bringing, you know, these trust and safety tools to

20:09.000 --> 20:12.000
the open source decentralized network.

20:12.000 --> 20:15.000
And importantly, it's open source.

20:15.000 --> 20:16.000
So you can get involved.

20:16.000 --> 20:21.000
I want to hammer that point home, because that's kind of why we're all here, right?

20:21.000 --> 20:23.000
We can't do this alone.

20:23.000 --> 20:29.000
We can do a lot, but we need all the help that we can get.

20:29.000 --> 20:31.000
Come talk to us, come find us.

20:31.000 --> 20:33.000
You can find all the links on our website.

20:33.000 --> 20:35.000
The code is all up on GitHub.

20:35.000 --> 20:36.000
We have a Discord channel.

20:36.000 --> 20:39.000
We know we're in the decentralized communications room.

20:39.000 --> 20:42.000
You can come yell at us about that on our Discord.

20:42.000 --> 20:48.000
You can try using Osprey or any of the other open source projects

20:48.000 --> 20:51.000
we work on or support, like HMA or the ROOST Model Community.

20:51.000 --> 20:52.000
And share your feedback with us.

20:52.000 --> 20:56.000
Feedback is actually a super valuable way to get involved.

20:56.000 --> 20:58.000
Just like: hey, I have a platform that's doing this.

20:58.000 --> 20:59.000
We tried implementing it.

20:59.000 --> 21:00.000
Here's what we learned.

21:00.000 --> 21:02.000
That's super valuable for us to know.

21:02.000 --> 21:04.000
It tells us what to focus on, what to build next.

21:04.000 --> 21:06.000
You can also join our office hours.

21:06.000 --> 21:08.000
Every couple of weeks,

21:08.000 --> 21:10.000
we have office hours for the different open source projects.

21:10.000 --> 21:14.000
And so you can come in and chat with us there and get help implementing things.

21:14.000 --> 21:17.000
And you can also help us by building integrations,

21:17.000 --> 21:23.000
like the Matrix.org Foundation has been doing with Matrix, which is super valuable.

21:23.000 --> 21:27.000
And that is all of our talk, but we'd love to take a couple questions if there's time.

21:27.000 --> 21:30.000
That's a great question.

21:30.000 --> 21:34.000
The question was: specifically for Matrix, would this be a policy server implementation?

21:34.000 --> 21:38.000
Which is probably a good question, actually, for Travis, but, yep.

21:38.000 --> 21:39.000
But yes, probably.

21:39.000 --> 21:44.000
It would be implemented at the policy server level.

21:44.000 --> 21:46.000
And we can take more questions, if there's time.

21:56.000 --> 21:57.000
Yeah.

21:57.000 --> 22:04.000
Does it make sense to use it also for fewer events, like, maybe, for smaller projects?

22:04.000 --> 22:10.000
So the question is: would it make sense to use Osprey for fewer events, rather than at the scale of millions of events?

22:10.000 --> 22:14.000
Um, yeah. With Osprey, we'd say...

22:14.000 --> 22:15.000
I mean, you can take that.

22:15.000 --> 22:16.000
Yeah.

22:16.000 --> 22:22.000
I mean, I think that would greatly depend on, you know, your team's comfort level with

22:22.000 --> 22:23.000
a tool like that, and their ability.

22:23.000 --> 22:28.000
I think it's really helpful if you have any background in data analysis or

22:28.000 --> 22:29.000
investigations.

22:29.000 --> 22:32.000
It's a very helpful kind of automation tool in that way.

22:32.000 --> 22:36.000
If your event load is very small and your team is very new to that,

22:36.000 --> 22:38.000
I could see it being a little overwhelming.

22:38.000 --> 22:43.000
But I think if you get comfortable with the query language, it's really helpful.

22:43.000 --> 22:44.000
All right.

22:44.000 --> 22:48.000
I know we like to say, like, Osprey is one of those tools that you don't know you need it until

22:48.000 --> 22:49.000
you need it.

22:49.000 --> 22:51.000
It oftentimes feels like way overkill, and you think you don't need it.

22:51.000 --> 22:52.000
It's like, oh, yeah, yeah,

22:52.000 --> 22:53.000
we'll handle that when we get there.

22:53.000 --> 22:56.000
And then your platform scales to a point where you get a big attack.

22:56.000 --> 22:59.000
And then it would have been really good to have had something like Osprey.

22:59.000 --> 23:02.000
So it is kind of a balance of that.

23:02.000 --> 23:05.000
Any questions?

23:05.000 --> 23:06.000
Yes.

23:06.000 --> 23:07.000
Okay.

23:07.000 --> 23:11.000
Do you guys have any account-level fraud tooling?

23:11.000 --> 23:14.000
Do we have any account level fraud tooling?

23:14.000 --> 23:16.000
Um, yeah.

23:17.000 --> 23:21.000
Yeah. So I know the examples we gave were things like CSAM and TVEC,

23:21.000 --> 23:24.000
But a lot of folks are thinking about Osprey also for fraud signals.

23:24.000 --> 23:28.000
Because you might look at, and you can define this in Osprey,

23:28.000 --> 23:32.000
what time the account was created and how quickly it got to a first post.

23:32.000 --> 23:37.000
Is this account, you know, linked to an IP address that has created 10 other accounts within a specific amount of time?

23:37.000 --> 23:39.000
These are all very user defined.

23:39.000 --> 23:42.000
So yes, I think there is, definitely, for fraud and account creation.

23:42.000 --> 23:43.000
Yeah.

23:44.000 --> 23:51.000
It's important to note that all of the metadata coming into Osprey is up to the platform to define.

23:51.000 --> 23:57.000
So if IP addresses don't make sense for your platform, then you wouldn't be feeding IP addresses into it.

23:57.000 --> 24:03.000
So all of the metadata that's coming in is actually up to the platform to implement and define.
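
NOTE
The fraud signals just described, as illustrative checks: time from account creation to first post, and how many accounts share the creating IP. Field names are hypothetical; in Osprey these would be rules you define over whatever metadata your platform chooses to send.
from datetime import datetime
def seconds_to_first_post(account):
    created = datetime.fromisoformat(account["created_at"])
    first = datetime.fromisoformat(account["first_post_at"])
    return (first - created).total_seconds()
def looks_fraudulent(account, accounts_per_ip):
    fast_poster = seconds_to_first_post(account) < 60       # posted within a minute
    busy_ip = accounts_per_ip.get(account["ip"], 0) > 10    # IP made >10 accounts
    return fast_poster or busy_ip
acct = {"created_at": "2026-01-31T09:00:00", "first_post_at": "2026-01-31T09:00:20",
        "ip": "203.0.113.7"}
print(looks_fraudulent(acct, {"203.0.113.7": 12}))  # True: fast first post and a busy IP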

24:03.000 --> 24:04.000
Yeah.

24:04.000 --> 24:06.000
Oh, go ahead.

24:06.000 --> 24:11.000
What are your self-hosting requirements like, and what are your deployments like right now?

24:11.000 --> 24:15.000
What are your deployments like?

24:17.000 --> 24:19.000
That's a good question.

24:19.000 --> 24:21.000
Are you pulling it up?

24:27.000 --> 24:28.000
I don't remember where that is.

24:28.000 --> 24:29.000
Yeah.

24:29.000 --> 24:30.000
I think Haley has a good one.

24:30.000 --> 24:31.000
Okay.

24:31.000 --> 24:32.000
So yeah.

24:32.000 --> 24:39.000
Part of it is, you know, with the horizontal scaling that Osprey can do, it depends on how many events you're processing.

24:39.000 --> 24:43.000
That said, the demo we were running, you can run it on a laptop.

24:43.000 --> 24:47.000
So it's not like this massive, massive thing

24:47.000 --> 24:53.000
that you necessarily need a huge server for, but it really does scale with the number of events.

24:53.000 --> 24:54.000
Yeah.

24:54.000 --> 24:55.000
Did you pull it up?

24:55.000 --> 24:57.000
I didn't find it, but here's my promise.

24:57.000 --> 25:03.000
So Bluesky has actually created a wonderfully helpful architecture diagram, which they posted, of how they got it set up.

25:03.000 --> 25:06.000
So you can see, you know, a general outline of all the different requirements.

25:06.000 --> 25:09.000
I will find it and put it on my Bluesky.

25:09.000 --> 25:10.000
And our time is up.

