Let me start by saying that all of us at Facebook stand with the victims, their families and everyone affected by the horrific terror attack in New Zealand.
In the aftermath of such awful acts it's more important than ever that we stand against hate and violence.
I'm here to tell you today that at Facebook we continue to make that a priority in everything we do. Facebook's mission is to give people the power to build communities and to bring the world closer together.
Our users share billions of pictures, stories and videos about their lives and their beliefs every day, and that diversity of viewpoints, expression and experience highlights much of what is best about Facebook.
But as we give people voice, we want to make sure that they're not using that voice to hurt others.
Facebook embraces the responsibility of making sure that our tools are used for good.
And we take that responsibility seriously.
There is no place for terrorism or hate on Facebook.
We remove any content that incites violence, bullies, harasses or threatens others. That's why we have long-standing policies against things like terrorism and hate, and why we've invested so heavily in safety and security in the past few years. We are investing in both our human and technological capabilities.
And Facebook now employs more than 30,000 people across the globe who are focused on safety and security.
Our rules have always been clear that white supremacists are not allowed on the platform, under any circumstance.
In fact, we've banned more than 200 white supremacist organizations under our Dangerous Organizations Policy.
And last month we extended that policy to include a ban on all praise, support and reciprocation for white nationalism and white separatism.
We see these ideologies as inextricably linked to white supremacy and to violence more generally.
The internet has been a force for creativity, learning and access to information.
At Google, supporting this free flow of ideas is core to our mission to organize the world's information and make it universally accessible and useful.
This openness has democratized how stories, and whose stories, get told.
It's created a space for communities to tell their own stories and it's created a platform where anyone can be a creator and can succeed.
Around 2 billion people come to YouTube every month.
And we see over 500 hours of video uploaded every minute, making it one of the largest living collections of human culture ever assembled in one place.
We are deeply troubled by the recent increase in hate and violence in the world, particularly the acts of terrorism and violent extremism in New Zealand. We take these issues seriously and want to be part of the solution.
Over the past two years, we've invested heavily in machines and people to quickly identify and remove content that violates our policies against incitement to violence and hate speech.
I'd like to briefly outline how these processes work at YouTube.
First, YouTube's enforcement system starts at the point at which a user uploads a video.
If it's flagged by our systems as a video that may violate our policies, it's sent to humans for review.
If they determine that it violates our policies, they remove it, and the system makes a digital fingerprint of the video so that it cannot be uploaded again.
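The flow just described, human review followed by a fingerprint that blocks re-uploads, can be sketched in outline. This is purely illustrative: the class and method names are invented, and a real system would use a perceptual hash that survives re-encoding, rather than the exact SHA-256 digest used here as a stand-in.

```python
import hashlib


class UploadScreener:
    """Illustrative sketch of fingerprint-based re-upload blocking.

    A production system would use a robust perceptual hash; an exact
    SHA-256 digest stands in for that here.
    """

    def __init__(self):
        self.blocked_fingerprints = set()

    def fingerprint(self, video_bytes: bytes) -> str:
        # Stand-in for a robust video fingerprint.
        return hashlib.sha256(video_bytes).hexdigest()

    def block(self, video_bytes: bytes) -> None:
        # Called once human review confirms a policy violation.
        self.blocked_fingerprints.add(self.fingerprint(video_bytes))

    def allow_upload(self, video_bytes: bytes) -> bool:
        # Reject any upload whose fingerprint matches known bad content.
        return self.fingerprint(video_bytes) not in self.blocked_fingerprints
```

The key property is that once one copy is reviewed and blocked, every later byte-identical upload is rejected automatically, without another human review.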
Second, we rely on experts to find videos the algorithms might be missing.
Some of these experts are on our in-house intel desk, which proactively looks for new trends and content that may violate our policies.
We also allow expert NGOs and governments to notify us of bad content in bulk through our Trusted Flagger program.
We reserve the final decision on whether to remove a video that gets flagged by any of these entities, but we benefit immensely from their expertise.
Finally, we go beyond enforcing our policy by creating programs to promote counter speech.
Examples of this work include our Creators for Change program, which supports YouTube creators who are tackling issues like extremism and hate by building empathy and acting as positive role models. In addition, Google's Jigsaw group has developed the Redirect Method, which uses targeted ads on YouTube videos to disrupt online radicalization.
It's important to note that hate speech removal can be particularly complex compared to other types of content.
Hate speech, because it often relies on spoken rather than visual cues, is sometimes harder to detect than some forms of branded terrorist propaganda. On the opposite end, over-aggressive enforcement can inevitably silence voices that are using the platform to make themselves heard on these important issues. Often in these cases we find that content sits in a gray area: it comes right up against the line. It may be offensive, but it does not violate YouTube's policies against incitement to violence and hate speech.
When this occurs, we've built a policy to drastically reduce the video's visibility by making it ineligible for ads, removing its comments, and excluding it from our recommendation system.
In particular, we understand the issues around YouTube's recommendation system may be top of mind. This is why, several months ago, we also updated our recommendation systems to begin reducing recommendations of borderline content, or content that can misinform users in harmful ways.
In conclusion, I'd like to end where I began: Google builds its products for all users, from all political stripes, around the globe.
The long term success of our business is directly related to our ability to earn and maintain the trust of our users.
We have a natural and long term incentive to make sure that our products work for users of all viewpoints.
That's why hate speech and violent extremism have no place on YouTube.
Does Instagram have the same standards as Facebook? For the most part, we apply our community standards across Instagram too.
There are certain things where there are differences, but for the most part, the community standards apply as well.
Well, I'm told that I can have this screenshot at the back.
Here, you have someone that's calling to crush the United States under their feet, etc. That was reported, and within a minute the report came back from Instagram that there's no problem here. Basically, these aren't the droids you're looking for, just move on.
I'm really curious.
If you're gonna enforce these standards, why are they so quickly, and erroneously, enforced against people like my friends Diamond and Silk?
I asked them recently when I saw them, are you still having trouble with Facebook?
And they said, now anytime we say something nice about Donald Trump, we spend forever just trying to prove that we're not a Russian robot. [LAUGH]
And here, you have people that, as a result of their misunderstanding of their own religion, they want to crush the United States.
They think of us as the big Satan.
Israel is the little Satan.
And I would just encourage you to take a look at that, and at why someone who wants to destroy the United States and kill everyone in this room gets a pass when others don't. So I would welcome any explanation you can find for that.
Thank you, Congressman, I'm not familiar with that exact example-
I know, it just happened.
I'd be happy to go back to my team to make sure we look at and review that.
Any calls to violence that target people based off their nationality, their ethnicity, [CROSSTALK]
Well, I know what it's supposed to be.
We would remove it, I just unfortunately am not familiar with that case, but it does go against our principles.
Ms. Walden, many white nationalists have used misinformation and propaganda to radicalize social media users. How is YouTube working to stop the spread of far-right conspiracies intent on skewing users' perceptions of fact and fiction?
Most recently, we have made updates to our recommendation algorithm so that content that's on the borderline is not pushed out through our recommendation system.
So, content that violates our guidelines, our hate speech guidelines, which prohibit anything that promotes or incites violence against individuals or groups, or promotes hatred against individuals or groups based on their attributes, such as their ethnicity: all of that content violates our community guidelines and is removed.
The content that's on the border is content that we no longer include in our recommendation algorithm, and it can also be demonetized, its comments disabled, etc.
So we do our best to ensure that content that's on the border isn't fully distributed across the platform.
All right, well, Facebook has worked to stop the spread of the New Zealand video on its platform. Three days later, the video was still spreading freely on WhatsApp, Facebook's encrypted messaging service. By design, WhatsApp does not have a way of tracking or preventing the spread of videos like the New Zealand video. What's Facebook doing to fix this issue and prevent WhatsApp from being used to spread hate speech?
As you mentioned, on Facebook and Instagram we took immediate action against that video.
Once we were made aware, we were able to remove the video within ten minutes. We were able to leverage our artificial intelligence by uploading the video and producing visual fingerprints, as Ms. Walden explained earlier, to prevent additional uploads. Of roughly 1.5 million attempted uploads in the first 24 hours, we prevented 1.2 million at upload and found and removed 300,000 additional uploads of that video. It was a very swift response.
To your question about WhatsApp: WhatsApp has its own policies that apply to that content, and it works with law enforcement, as it does often. So my question to the internet platforms represented here today is this.
I don't think you can be both.
You can't be a neutral platform and at the same time exercise editorial control over content.
So the question very simply is which are you?
Are you a neutral forum?
Or are you an editorial publication responsible for your content?
Mr. Potts, Ms Walden, which is it?
Thank you, Congressman, first and foremost, Facebook is a tech company.
We are not a publisher in that sense: we are not a content creator, we do not edit content, although we do moderate content under our community standards.
After hearing your discussion, I think those are many of the issues that we wrestle with: giving people the ability to have a voice on the platform while also balancing safety.
We favor more speech, we want to give people a voice, but we have to draw the line somewhere. We feel that by drawing lines around things like calls to violence, and even some things that are more egregious, child pornography for example, and not having that on the platform, we give the platform to more people so that they can share their voice.
So it's a constant tension that we wrestle with daily; my teams wrestle with it all the time. We try to strike that balance.
It's a hard one.
We know that there are many opinions.
We want ideas from across the whole spectrum to be fostered on the platform, but again, it is a difficult discussion. We want to be a place where anyone can come and share their ideas, diverse opinions about their politics, even things that are controversial or offensive.
Our community guidelines are politically neutral.
And YouTube is a place where users are uploading content.
So the community guidelines are in place to make sure that we are creating a free and open platform for users to upload their own content.
But they're also in place to ensure that that's happening free from hate, from violence and harassment on the platform.
Let me ask the tech companies, cuz you all did say that you would inform law enforcement.
When you find bad users, do you talk to each other at all?
So that all the platforms know: if you identify somebody, will you then alert Google and Twitter and Facebook and Instagram and everybody?
Do you all coordinate at all?
We do have strong industry partnerships. One is the GIFCT, the Global Internet Forum to Counter Terrorism.
So in a case like New Zealand, for instance, when we became aware of that, our first priority was to work with the New Zealand law enforcement, which we did.
We sent some of our trust and safety officials on the ground to be a resource for law enforcement.
But one of the next steps we took was to upload the images into our AI and designate it as a terrorist attack, and then to work with companies like Microsoft, Twitter, Google, Snapchat and others, sharing it across the board so they could also be on the lookout and activate their systems to prevent it.
I can just reiterate that the Global Internet Forum to Counter Terrorism is the body that the four companies founded.
And in the context of New Zealand, it is the way in which we used hashes to ensure that we were minimizing the distribution of that video.
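The hash-sharing arrangement the witnesses describe can be sketched as a shared lookup table that each member platform consults at upload time. All names here are hypothetical, and the exact-match digest stands in for the robust perceptual hashes an industry database would actually use.

```python
import hashlib


class SharedHashDatabase:
    """Hypothetical sketch of a cross-company shared-hash database.

    Not the actual GIFCT implementation: a real database stores
    perceptual hashes, and an exact SHA-256 digest stands in here.
    """

    def __init__(self):
        self.hashes = {}  # digest -> label, e.g. "terrorist content"

    def contribute(self, content: bytes, label: str) -> str:
        # One member platform flags content and shares its fingerprint.
        digest = hashlib.sha256(content).hexdigest()
        self.hashes[digest] = label
        return digest

    def lookup(self, content: bytes):
        # Any member platform can check an upload against the shared set.
        return self.hashes.get(hashlib.sha256(content).hexdigest())


def platform_should_block(db: SharedHashDatabase, upload: bytes) -> bool:
    # Each member consults the shared database at upload time.
    return db.lookup(upload) is not None
```

The point of the shared database is that a fingerprint contributed by one company lets every other member block the same content without independently discovering it.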
There have long been close partnerships between the companies working on issues around hate and violent extremism and terrorism.
And we find that that really enhances our ability to learn from one another.
In the ways that we're tackling these problems that are unique on our individual platforms.
Well, thank you, and I would just encourage you all to figure it out, because you don't want us to figure it out for you.
So thank you, and with that, I yield back, Mr Chairman.