YouTube is getting tougher on terrorist propaganda videos.
Tech companies have been criticized for not doing enough to stop extremist content from being shared online.
In light of this, Google announced four new steps the company is going to take to stop terrorist messages from spreading on YouTube.
First, it'll use more artificial intelligence and machine learning so that the system can spot this content faster.
Second, it'll expand YouTube's trusted flagger program.
These super flaggers have the power to flag up to 20 videos at once for YouTube staff to review.
There are 63 organizations with that power now, including many police units.
But Google is adding another 50 non-government groups with expertise in areas like hate speech, self-harm, and terrorism.
Third, YouTube is taking a new approach to dealing with videos that fall into a gray zone.
Maybe a video doesn't clearly violate a policy, but it contains offensive viewpoints regarding religion.
Now, when that happens, YouTube will put up warnings, make the video harder to find, and cut off commenting and liking.
And fourth is something called the Redirect Method.
To fight back against radicalization, Google targets advertisements and videos carrying anti-terrorist messages at potential ISIS recruits, in hopes of changing their minds about joining.
This will be implemented more broadly across Europe, but it's already in use in the US.
Just last week, Facebook also made a similar announcement talking about how it uses AI to better spot and flag terrorist propaganda on the social network.
Google says it's working with Facebook, along with other tech companies, to address terrorism concerns.
I am Bridget Carey.
For more information, head to cnet.com.