WEBVTT

00:00.000 --> 00:13.000
So, up next we have Neil, who is going to talk about online safety laws and the learnings from

00:13.000 --> 00:15.000
a lot of pro bono work.

00:15.000 --> 00:24.000
So, Neil, over to you.

00:24.000 --> 00:31.000
Can you hear me at the back? Is my mic working?

00:31.000 --> 00:34.000
It is working, but can you hear me at the back?

00:34.000 --> 00:37.000
Okay, thank you, thumbs up.

00:37.000 --> 00:40.000
Put it back on.

00:54.000 --> 00:59.000
Right.

00:59.000 --> 01:02.000
Hello, I'm Neil.

01:02.000 --> 01:04.000
Some of you, I know.

01:04.000 --> 01:06.000
Some of you, I don't know.

01:06.000 --> 01:08.000
I'm a lawyer, I'm afraid.

01:08.000 --> 01:11.000
Look, nobody's perfect.

01:11.000 --> 01:16.000
I would normally say, I'm not your lawyer, but looking around, I actually am your lawyer for

01:16.000 --> 01:17.000
some of you here.

01:17.000 --> 01:22.000
So, for those of you where I'm not your lawyer, I'm not your lawyer. For those of you where I am your lawyer, I am still your lawyer

01:22.000 --> 01:24.000
at the end of this as well.

01:24.000 --> 01:25.000
Okay.

01:25.000 --> 01:31.000
So, I run a small law firm and I run it almost entirely on free software.

01:31.000 --> 01:38.000
And I spent a huge amount of time over the last 12, 18 months looking at online safety laws and

01:38.000 --> 01:40.000
free software communities.

01:40.000 --> 01:46.000
So, that's looking both at people who run free software in a way that

01:46.000 --> 01:49.000
brings it within scope of online safety laws.

01:49.000 --> 01:54.000
But also people who develop free software, which when people run it, brings them in scope of online

01:54.000 --> 01:55.000
safety laws.

01:55.000 --> 02:00.000
So, I'm talking today, predominantly, from the point of view of people running these services,

02:00.000 --> 02:05.000
but it also has an impact on people developing these services, developing the software that

02:05.000 --> 02:06.000
powers these services.

02:06.000 --> 02:10.000
What can be built into things to help with some of this?

02:10.000 --> 02:17.000
So, I'm talking about online safety laws, and I'm an English lawyer, I can only advise on English

02:17.000 --> 02:22.000
law, but other places are thinking about online safety laws as well.

02:22.000 --> 02:28.000
So, although my focus is on the UK's Online Safety Act, there are online safety laws in Australia,

02:28.000 --> 02:33.000
there are online safety laws in some states in the USA, and there's an awful lot of

02:33.000 --> 02:38.000
conversation about other online safety laws in other countries as well.

02:38.000 --> 02:41.000
And they're not all the same.

02:41.000 --> 02:45.000
They're not all the same, so if you're running a project that's available globally,

02:45.000 --> 02:48.000
how do you think about compliance?

02:48.000 --> 02:53.000
What I'm going to do, I'm going to paint a background picture first, not quite how we got to here,

02:53.000 --> 02:55.000
but this is where we are.

02:55.000 --> 02:59.000
And then talk through some of the considerations, some of the mitigations,

02:59.000 --> 03:06.000
some of which I've spoken about with others here over the last year or so: what can you do about some of these things?

03:06.000 --> 03:13.000
But your starting question is, okay, all these countries have got or are developing online safety laws.

03:13.000 --> 03:16.000
That's nice, so what?

03:16.000 --> 03:23.000
Unfortunately, much as it would be great to be able to care only about the laws that apply in your own country,

03:23.000 --> 03:28.000
Lots of these laws purport to have extraterritorial effect.

03:28.000 --> 03:33.000
In other words, they claim to apply to people outside their country,

03:33.000 --> 03:38.000
even where the servers are outside the country as well.

03:38.000 --> 03:43.000
Now, I say purport, because what the law says, and what can be enforced,

03:43.000 --> 03:46.000
aren't necessarily the same thing.

03:46.000 --> 03:48.000
So I take an example from the UK.

03:48.000 --> 03:56.000
The UK telecoms regulator has just imposed a £1 million fine for breach of the online safety laws.

03:56.000 --> 04:01.000
They've imposed this fine against a provider based overseas,

04:01.000 --> 04:07.000
who has never engaged with any of the regulator's correspondence.

04:07.000 --> 04:11.000
Hands up if you think that fine is going to be paid.

04:11.000 --> 04:16.000
And I don't know whether you're an optimist or a pessimist, but yes, either way.

04:16.000 --> 04:22.000
There may be a purpose to imposing that fine, to simply say, look, we can.

04:22.000 --> 04:29.000
But to me, the chances of actually being able to get that fine paid have got to be really rather limited.

04:29.000 --> 04:33.000
But these laws are claiming to have extraterritorial effect.

04:34.000 --> 04:43.000
We'll talk a bit later, perhaps, about assessing the risk as to whether you really need to be worried about the extraterritorial effect of somebody else's laws.

04:43.000 --> 04:47.000
So you've got a starting point, the laws aren't always the same in all the countries.

04:47.000 --> 04:52.000
You've got the second point that sometimes these laws say they apply to people outside the country,

04:52.000 --> 04:57.000
they may apply to you, even though they are the laws of a completely different country.

04:57.000 --> 05:03.000
And they vary in terms of their scope. Okay, who here uses Nextcloud?

05:03.000 --> 05:09.000
Jitsi? Mastodon? Who follows me on Mastodon?

05:09.000 --> 05:13.000
I'm so sorry for that.

05:13.000 --> 05:20.000
Loads and loads of everyday services are brought within scope. Okay, different countries take different approaches.

05:20.000 --> 05:24.000
So when I say brought within scope, I'm thinking from a UK point of view.

05:24.000 --> 05:30.000
The Online Safety Act encompasses services which lots of FOSS projects use.

05:30.000 --> 05:37.000
The Australian approach is to name specific services and to apply the rules to those.

05:37.000 --> 05:39.000
That's a fundamentally different approach.

05:39.000 --> 05:43.000
So you've got different countries adopting it in completely different ways.

05:43.000 --> 05:46.000
You may recognize some of these services.

05:46.000 --> 05:52.000
And while you're reading that, I'm going to try and re-attach this.

05:52.000 --> 06:02.000
Can you hear me at the back still?

06:02.000 --> 06:07.000
Sort of, yes. Well, judging by the fact that you did anything at all, you can sort of hear me.

06:07.000 --> 06:10.000
Okay, so a whole load of things brought within scope.

06:10.000 --> 06:15.000
And much of the work I've done over the last year has been about these kind of things.

06:15.000 --> 06:20.000
A free software project that wants to have a forum for its users.

06:20.000 --> 06:26.000
That wants to have an online tool for collaborative generation of translations.

06:26.000 --> 06:30.000
This is not the kind of thing that you hear about in Parliament.

06:30.000 --> 06:34.000
This is not the kind of thing that comes up in online safety discourse. Hands up,

06:34.000 --> 06:37.000
anyone who's been reading the Arch wiki?

06:37.000 --> 06:48.000
And nevertheless, wikis could be in scope: a service that allows users to see what other users have written in Bash.

06:48.000 --> 06:50.000
All sorts of things.

06:50.000 --> 06:54.000
So we've got a plethora of different laws in different countries.

06:54.000 --> 06:58.000
We've got countries that say that their laws apply to you even if you're not in those countries.

06:58.000 --> 07:06.000
And we've got different scopes of those laws, some of which cover services that most of you will use multiple times a day.

07:06.000 --> 07:10.000
So what? Okay.

07:10.000 --> 07:12.000
We've talked about how we got to here.

07:12.000 --> 07:16.000
This is some of my thinking on what, perhaps, we can do about this.

07:16.000 --> 07:20.000
Now, inevitably, what you do will depend on your own situation,

07:20.000 --> 07:24.000
which is another way of saying the lawyer's standard answer of: it depends.

07:24.000 --> 07:28.000
Okay? These are things that I've drawn on in the last year or so.

07:28.000 --> 07:32.000
And the first one, can I just ignore it?

07:32.000 --> 07:36.000
Can I let it all go away?

07:36.000 --> 07:44.000
The answer is potentially yes, but it will be sensible to approach it in a pragmatic way.

07:44.000 --> 07:50.000
And what is the genuine risk of enforcement activity?

07:50.000 --> 07:54.000
Are you doing the kind of things

07:54.000 --> 07:57.000
where a regulator is going to be interested in you?

07:57.000 --> 08:00.000
Because it's one thing for the law to say it does apply to you.

08:00.000 --> 08:04.000
And it's quite another thing to find yourself a regulatory target.

08:04.000 --> 08:13.000
Now, that's something that's much easier to assess once a law has been on the statute books for a while, with experience.

08:13.000 --> 08:18.000
You can see some regulators publish regulatory priority lists.

08:18.000 --> 08:24.000
You can get a sense from public discourse of the kind of services that people care about.

08:24.000 --> 08:33.000
And you can make an assessment that, you know, maybe the forum for discussion predominantly among contributors to a particular software project

08:33.000 --> 08:39.000
probably isn't as high up the list as some very large popular online platform.

08:39.000 --> 08:45.000
There's still an element of risk in that, but you could make an assessment that that risk is relatively low.

08:45.000 --> 08:53.000
You can make an assessment that the risk at that point in time is relatively low and keep watching and see how things develop.

08:53.000 --> 08:59.000
But you may very well come to the conclusion that yes, there are online safety laws in various countries.

08:59.000 --> 09:04.000
There are various laws that require different things in different countries.

09:04.000 --> 09:09.000
And your plan is to do absolutely nothing at least for now.

09:09.000 --> 09:12.000
That's a consideration, okay?

09:12.000 --> 09:18.000
So that's thinking about things from what risk does the regulation present to you?

09:18.000 --> 09:20.000
What's the realistic risk of enforcement?

09:20.000 --> 09:25.000
You could also look at this from a perspective of your own services,

09:25.000 --> 09:30.000
or if you're developing software, the risks that someone running your software would face.

09:30.000 --> 09:34.000
What actual risks does your software or does your service pose?

09:34.000 --> 09:37.000
Can you try and mitigate that?

09:37.000 --> 09:40.000
There are different ways of doing that.

09:40.000 --> 09:43.000
I think a risk assessment makes sense.

09:43.000 --> 09:50.000
There are lots of different ways of doing risk assessments and different laws may require you to consider different things,

09:50.000 --> 09:53.000
or to do your risk assessments slightly differently.

09:53.000 --> 10:01.000
From my point of view, have some form of risk assessment. I know some of you in this room have done online safety-related risk assessments,

10:01.000 --> 10:05.000
because I've read some of your online safety-related risk assessments.

10:05.000 --> 10:10.000
I know how to party, and they are not all the same.

10:10.000 --> 10:19.000
But what they have each done is shown that someone has engaged with the subject matter, that they've tried to think through the risks that apply to their service.

10:19.000 --> 10:25.000
So, while some regulators may say you should do it this way, and some laws say, no, you should do it this way,

10:25.000 --> 10:32.000
if you see it simply as a tool to think through risks in a structured way, great.

10:32.000 --> 10:37.000
And okay, different laws, different regulators may require different things.

10:37.000 --> 10:44.000
But, if you are working on a free software project, in a lot of cases this will be volunteer labor.

10:44.000 --> 10:47.000
This will be volunteer time, and this will be volunteer skills.

10:47.000 --> 10:54.000
Maybe you don't have the resources to do every single different risk assessment, in every single different way.

10:54.000 --> 11:01.000
Maybe in an ideal world you could, but if a regulator does come knocking and you've done something, hey, you've done something.

11:01.000 --> 11:06.000
You've got something to show here that you didn't completely brush it to one side.

11:06.000 --> 11:08.000
You weren't saying I've done nothing at all.

11:08.000 --> 11:15.000
You're saying I've tried to engage with this in a way that's appropriate to our scope, our scale, and our services.

11:15.000 --> 11:19.000
Will that sort of look, we've got something, will that always fly?

11:19.000 --> 11:26.000
No, nothing will always fly, but it seems like a sensible way to try and approach it.

11:26.000 --> 11:32.000
And there are some things that are going to come up almost irrespective of the country.

11:33.000 --> 11:41.000
If you run a service that particularly aims at having child users, you are likely to be a high risk service.

11:41.000 --> 11:49.000
And there'll be many people who run services that don't aim to have child users, but nor do they completely discourage them.

11:49.000 --> 11:59.000
If you run a wiki for a niche Linux or BSD project, you probably do have child users, but you're probably not encouraging child users.

12:00.000 --> 12:12.000
Is what you're doing massively attractive to children, where you know you're going to get loads of them, you know you've got loads of them? Or is it the kind of thing where you may have some, but it's probably not particularly attractive to them?

12:12.000 --> 12:22.000
And services which involve pornography. So, alongside the work I do with free software, the other part of my pro bono work is predominantly for sex workers.

12:22.000 --> 12:45.000
Pornography comes up quite a lot in that context. In the free software world, I'm going to park that for now, but if you find that you have a forum or other community where people are regularly posting pornographic images, or other pornographic content, that will be something to consider as part of the risk assessment under almost any online safety law.

12:45.000 --> 12:58.000
Child sexual abuse material, so whether that's a potential for grooming, or a potential for image distribution or video distribution: again, high priority, as is the terrorist bucket.

12:58.000 --> 13:14.000
Unfortunately, terrorism means different things to different people, and trying to determine what is terrorist content may vary by country. But if you were to think through those four things for most services, you are covering things that most countries are likely to have at the top of their priority list.

13:14.000 --> 13:23.000
Others will have other things too; that's not exhaustive. Those are the things that I've seen come up time and time again in different countries.

13:23.000 --> 13:31.000
So that's if you want to take an approach of what do we need to do to at least try and be compliant?

13:31.000 --> 13:37.000
What if you wanted to do slightly more than that? What if you actually wanted to do some good with this process?

13:37.000 --> 13:51.000
Could you move beyond compliance, move from a mindset of what do we need to do to keep the regulator off our backs to actually, how could we use this to keep our users our community safer?

13:51.000 --> 14:00.000
And can you create a better, safer experience for all users, but especially your more vulnerable users?

14:00.000 --> 14:09.000
And when you start thinking through the online safety laws from that perspective, how can you do better, how can you build a better, richer, stronger, more inclusive community?

14:09.000 --> 14:16.000
You start seeing things slightly differently, and it moves away from being a regulatory tick box exercise into a, oh just a minute.

14:16.000 --> 14:23.000
Yeah, that could be used in a harmful way, but it's still a useful feature, what could we do about it?

14:23.000 --> 14:34.000
And how can you lessen the risk of either the abusive use or the misuse of the services that you're operating for your community or the software that you're developing?

14:34.000 --> 14:42.000
Okay, so: that's how we got to here, and some of the considerations. What tangibly could be done?

14:42.000 --> 14:50.000
I'm going to split this into three, but fundamentally the mitigations that you deploy will depend on the risk that you've identified.

14:50.000 --> 14:54.000
It's very hard to try and mitigate something where you haven't already considered it to be a risk.

14:54.000 --> 15:01.000
So I come back to that idea of a risk assessment, and if you are going to mitigate those risks, write them down as well.

15:02.000 --> 15:14.000
Because then you can show, if somebody comes knocking: look, not only have we thought about it, we've thought about what's an appropriate, proportionate solution that's available within our budget, time, resources and whatever.

15:14.000 --> 15:22.000
We can show we're thinking about these things in a way that a regulator is likely to see as the mark of a mature organisation, a mature community.

15:23.000 --> 15:29.000
So, your mitigations are based on your risks: you've thought about them, you've started to document them.

15:29.000 --> 15:39.000
The first thing I talk about is something that Karen talked about as she opened the room today, setting standards and upholding them.

15:39.000 --> 15:43.000
Clear, enforced rules or codes of conduct.

15:44.000 --> 15:50.000
Accessible to everyone in plain language, there's no secret code of conduct.

15:50.000 --> 15:52.000
There's no convoluted legalese.

15:52.000 --> 15:59.000
It simply sets out these are the standards of behaviour that are expected, and that they apply to everyone.

15:59.000 --> 16:05.000
Now, this is an area where I've seen, in numerous contexts and also in free software projects,

16:06.000 --> 16:11.000
that the rules may not apply to everyone equally.

16:11.000 --> 16:16.000
It's very difficult to enforce a framework where some people get special treatment and others don't.

16:16.000 --> 16:22.000
But I'd start with clear behaviours: this is what we expect, this is what happens if you don't.

16:22.000 --> 16:28.000
The second bucket is reactive, dealing with harm that's already happened.

16:28.000 --> 16:33.000
Now, I'd hope, in your risk assessment, you've already identified that these are relatively low risk harms,

16:33.000 --> 16:37.000
because if you've got high risk harms, you don't want to be dealing with those after they've happened,

16:37.000 --> 16:41.000
because it means that someone must have suffered through them by that point.

16:41.000 --> 16:47.000
So, when you look at features in a social media application, for example, to block somebody or to mute someone,

16:47.000 --> 16:50.000
yes, you can use them, proactively.

16:50.000 --> 16:54.000
But chances are most people do it after they've already suffered the conduct in question.

16:54.000 --> 16:57.000
They're typically reactive features.

16:58.000 --> 17:00.000
Some obvious things come up.

17:00.000 --> 17:03.000
Who does a user go and talk to if they have a problem with something?

17:03.000 --> 17:06.000
Not all services have an easy, obvious report button.

17:06.000 --> 17:10.000
If they do, great, as long as it's backed up by action.

17:10.000 --> 17:14.000
But if that isn't there, and if you can't introduce it because the service doesn't have that feature,

17:14.000 --> 17:17.000
is it clear and prominent who they should go and talk to?

17:17.000 --> 17:20.000
Who do they need to report something to?

17:20.000 --> 17:25.000
Does the person to whom they report stuff know what they need to do?

17:26.000 --> 17:29.000
Who handles the reports? How do they handle it?

17:29.000 --> 17:34.000
Are the sanctions, the consequences, documented somewhere, so that someone who's acting as that moderator,

17:34.000 --> 17:39.000
someone who's acting in that role of investigating whatever, they know what they need to do,

17:39.000 --> 17:44.000
and they can apply those standards consistently, repeatably and easily.

17:44.000 --> 17:50.000
That's the one I see most organisations slip up on: they've kind of thought about it.

17:50.000 --> 17:54.000
They've got a code of conduct, but a report comes in, and then there's a panic.

17:55.000 --> 17:58.000
And there's a panic, because they're not sure who's dealing with it, who's already dealt with it,

17:58.000 --> 18:03.000
who's commented on the forum thread about it. It doesn't need to be massively difficult.

18:03.000 --> 18:07.000
You're not trying to write your own procedural rules akin to a court's, just something less, you know,

18:07.000 --> 18:13.000
just something clear and simple that you can apply equally.

18:13.000 --> 18:17.000
But this is where something bad has already happened, someone is reporting it,

18:17.000 --> 18:22.000
you're going in to stop it, to put out the fire that's already started.

18:22.000 --> 18:25.000
What could you do to stop that in the first place?

18:25.000 --> 18:29.000
Where you're really looking at these high risk harms, it's: what can you do proactively,

18:29.000 --> 18:31.000
trying to prevent the harm from happening?

18:31.000 --> 18:35.000
So, again, it will depend on the service in question.

18:35.000 --> 18:41.000
An awful lot of the harm that someone sees comes from services where anyone can use them,

18:41.000 --> 18:45.000
just simply sign up, off you go, you can start posting.

18:45.000 --> 18:50.000
There are good reasons for making things easier to use, for reducing barriers to entry.

18:50.000 --> 18:55.000
I'm not saying never do it, but when you've assessed the risk of context of your service,

18:55.000 --> 19:00.000
maybe having some form of manual approval step before people can start using them,

19:00.000 --> 19:03.000
could reduce the risk, but still make your service useful.

19:03.000 --> 19:06.000
Could users have different functionality?

19:06.000 --> 19:12.000
Does every user need the ability to send an encrypted direct message to every other user,

19:12.000 --> 19:14.000
the moment that they sign up?

19:14.000 --> 19:19.000
Do they need to be able to send an encrypted direct message containing an image to everyone?

19:19.000 --> 19:22.000
When they've just signed up?

19:22.000 --> 19:25.000
The vector for abuse is very obvious.

19:25.000 --> 19:29.000
There could be good reasons, but there are also very obvious abuse vectors.

19:29.000 --> 19:34.000
So, can you differentiate based on user time, duration of use of the service,

19:34.000 --> 19:36.000
any number of different facets?

19:36.000 --> 19:40.000
The third one, I've written it, I feel it's a little pointed,

19:40.000 --> 19:44.000
but kind of with reason: I love Jitsi, I use Jitsi.

19:44.000 --> 19:47.000
I do look at Jitsi instances.

19:47.000 --> 19:53.000
Jitsi, for those who don't know, is a fantastic free software video conferencing service.

19:53.000 --> 19:58.000
It runs in your browser, you don't need a client for it.

19:58.000 --> 20:02.000
You join using a meeting name, somebody else joins using a meeting name,

20:02.000 --> 20:05.000
you have a video chat.

20:05.000 --> 20:12.000
A fantastic tool for all sorts of things, including, unfortunately, CSAM content.

20:12.000 --> 20:16.000
A fantastic tool for having a meeting to which someone else joins

20:16.000 --> 20:19.000
and starts broadcasting pornography or CSAM content.

20:19.000 --> 20:22.000
So, if you run a Jitsi or other video conferencing service,

20:22.000 --> 20:25.000
one of my first thoughts from a risk assessment point of view

20:25.000 --> 20:29.000
is: do you really need anyone to be able to start a conference?

20:29.000 --> 20:32.000
Could you look at locking it down a bit?

20:32.000 --> 20:36.000
The last thing I'd touch on is community knowledge sharing.

20:36.000 --> 20:39.000
Not everything I've done with the Online Safety Act,

20:39.000 --> 20:40.000
but an awful lot of it.

20:40.000 --> 20:44.000
I've put online, at onlinesafetyact.co.uk: risk assessments,

20:44.000 --> 20:47.000
sample terms and conditions, all sorts of things,

20:47.000 --> 20:50.000
because these frameworks are onerous.

20:50.000 --> 20:54.000
They are challenging, particularly if you're not used to dealing with them.

20:54.000 --> 20:58.000
The more we can pool knowledge about how other services have done it,

20:58.000 --> 21:02.000
the more we can build up that baseline and get to a good place.

21:02.000 --> 21:07.000
Now, there may also be scope for knowledge sharing about bad servers, bad actors,

21:07.000 --> 21:08.000
problems and things.

21:08.000 --> 21:11.000
When you start sharing information about individuals,

21:11.000 --> 21:15.000
you might also want to think from a data protection point of view.

21:15.000 --> 21:19.000
But thank goodness, I'm not doing a data protection talk this afternoon.

21:19.000 --> 21:21.000
Just to flag: there may be value in doing that.

21:21.000 --> 21:23.000
It may be perfectly legitimate to do that.

21:23.000 --> 21:26.000
You may find a perfectly good legal way of doing it.

21:26.000 --> 21:31.000
There will be data protection considerations when you start sharing information about individuals.

21:31.000 --> 21:34.000
Now, the other thing that I pondered talking about,

21:34.000 --> 21:39.000
but decided, in 30 minutes, I'm not going to get to: age verification.

21:39.000 --> 21:43.000
It's the topic that's coming up in all sorts of different countries.

21:43.000 --> 21:46.000
Again, not all asking for the same thing.

21:46.000 --> 21:48.000
Should it be device-based?

21:48.000 --> 21:50.000
Should it be browser-based?

21:50.000 --> 21:55.000
I think the one that worries me the most is the idea that before one visits a pornography site,

21:55.000 --> 21:58.000
you should turn on your camera and prove that you are an adult.

21:58.000 --> 22:01.000
This doesn't seem particularly well thought through,

22:01.000 --> 22:04.000
but that's the level of the dialogue in some places at the moment.

22:04.000 --> 22:07.000
So, I won't talk through that.

22:07.000 --> 22:12.000
I think it's an issue that will come up, and we will need to deal with it in the free software community.

22:12.000 --> 22:17.000
But with three or four minutes remaining, what questions do you have?

22:17.000 --> 22:30.000
Thank you.

22:30.000 --> 22:33.000
So, since you've touched upon this already,

22:33.000 --> 22:37.000
say someone is launching a GoToSocial instance,

22:37.000 --> 22:41.000
or say someone wants to launch more than a single user server,

22:41.000 --> 22:47.000
or a Matrix server, or anything else that's a chatroom server, anything.

22:47.000 --> 22:49.000
What would you recommend to do?

22:49.000 --> 22:53.000
So, the question related to a Fediverse server,

22:53.000 --> 22:56.000
was it a single user server, did you say?

22:56.000 --> 22:59.000
So, I have a single user Mastodon server.

22:59.000 --> 23:02.000
Some people will say that's still one user too many.

23:02.000 --> 23:04.000
It comes back to risk.

23:04.000 --> 23:07.000
I may have done a risk assessment or not.

23:07.000 --> 23:11.000
I don't think many online safety rules are designed to deal with someone's own speech

23:11.000 --> 23:14.000
in the same way as what they write on their own blog.

23:14.000 --> 23:18.000
The way some Fediverse servers work means other users may be able to

23:18.000 --> 23:21.000
view your posts, maybe effectively relay through your server,

23:21.000 --> 23:23.000
you may be able to search someone else's posts.

23:23.000 --> 23:26.000
I've locked all that functionality down on my server.

23:26.000 --> 23:30.000
People can only see what it is that I post for better or worse.

23:30.000 --> 23:33.000
And at that point, I think I'm quite comfortable with the risk.

23:33.000 --> 23:36.000
If I were to run, say, a Matrix server or an XMPP server

23:36.000 --> 23:41.000
for our family, as I do, I haven't bothered with a risk assessment for that.

23:41.000 --> 23:44.000
If I were to allow other people to sign up for it,

23:44.000 --> 23:47.000
I would be thinking far more carefully about it.

23:47.000 --> 23:48.000
Okay.

23:48.000 --> 23:49.000
Hi.

23:49.000 --> 23:52.000
I know you're British, but sorry.

23:52.000 --> 23:57.000
Do you happen to be able to shed any light on the upcoming,

23:57.000 --> 23:59.000
any upcoming European laws?

23:59.000 --> 24:01.000
I'm a complete layman, I have no idea

24:01.000 --> 24:03.000
which kinds of laws are coming up.

24:03.000 --> 24:06.000
And the names of these laws, and where to find more information about them

24:06.000 --> 24:10.000
in this notoriously opaque process.

24:10.000 --> 24:15.000
So, are there any other European laws coming up to deal with this?

24:15.000 --> 24:18.000
I don't know if there are specific legislative proposals.

24:18.000 --> 24:19.000
Others might.

24:19.000 --> 24:23.000
What I'm seeing is a general direction of travel along this kind of lines.

24:23.000 --> 24:26.000
I don't have a specific citation for you, I'm afraid.

24:27.000 --> 24:28.000
Okay.

24:32.000 --> 24:35.000
Talk tomorrow about it at 12.

24:35.000 --> 24:36.000
Okay.

24:36.000 --> 24:38.000
Sorry, I wasn't saying you should talk tomorrow about it at 12,

24:38.000 --> 24:41.000
but there is a talk tomorrow about it at 12.

24:41.000 --> 24:42.000
Matthias.

24:42.000 --> 24:43.000
Hi.

24:43.000 --> 24:45.000
I'm setting up a hack space.

24:45.000 --> 24:49.000
And instead of having it be that members can visit websites

24:49.000 --> 24:52.000
over the internet, I was thinking about using Tor hidden services.

24:52.000 --> 24:56.000
And having DNS point to these IPv6 addresses and onion addresses.

24:56.000 --> 25:00.000
And this way, no one on the internet can visit my websites.

25:00.000 --> 25:04.000
Unless they have, well, yeah, if I put it in DNS, they would know.

25:04.000 --> 25:06.000
But the idea is I would share the hidden service with people.

25:06.000 --> 25:08.000
Physically, maybe on a piece of paper.

25:08.000 --> 25:10.000
And now they can visit the website over Tor.

25:10.000 --> 25:14.000
Do you think that that is a valid solution to some of these problems?

25:14.000 --> 25:17.000
So, is running something as a Tor hidden service

25:17.000 --> 25:20.000
a way of avoiding regulatory risk?

25:20.000 --> 25:22.000
I run Tor hidden services.

25:22.000 --> 25:25.000
All of my websites and stuff are behind Tor hidden services.

25:25.000 --> 25:26.000
No, not fundamentally.

25:26.000 --> 25:28.000
If you've got users, you've got users.

25:28.000 --> 25:32.000
You might be able to fend off basic regulatory investigations.

25:32.000 --> 25:36.000
But there are a lot of people also using Tor hidden services for

25:36.000 --> 25:40.000
stuff that regulators and law enforcement agencies know perfectly well about.

25:40.000 --> 25:42.000
So, do it because it's good for privacy.

25:42.000 --> 25:44.000
Do it because it's good for security.

25:44.000 --> 25:46.000
Probably not a great mechanism for avoiding regulatory risk.

25:46.000 --> 25:50.000
Please all chime in with a big round of applause for Neil.

