Instagram is taking action against bullying on its platform.
On Tuesday, the Facebook-owned photo sharing network rolled out a machine-learning tool that detects bullying in photos and captions. If the AI tool deems a photo unkind or unwelcome, it will send the photo to Instagram's community operations team for further review, according to a blog post.
The AI tool detects bullying in captions and comments, such as attacks on someone's appearance or well-being, according to an Instagram spokesperson. It also compares, ranks and rates images and captions to find bullying in photos, such as a split-screen image where a person is compared to another person in a negative way.
Instagram also introduced a bullying comments filter for live videos. The filter can detect and block offensive words during a live stream. Instagram launched the filter in May for comments on photos and videos in Feed, Explore and Profile.
Facebook and Twitter have launched similar initiatives to limit bullying on their platforms. Last October, Twitter set a timeline for removing content such as nudity and hateful imagery from its platform. Earlier this month, Facebook added tools that let users hide or delete multiple comments at once and report bullying or harassment on behalf of a friend or family member.
Instagram also added a kindness camera effect with teen author and actor Maddie Ziegler. If you follow Ziegler, you get the effect automatically. In selfie mode, your face is covered in hearts, and you can tag someone to support, according to the blog post. If you switch to the rear camera, you see the word "kindness" in different languages. If you don't follow Ziegler on Instagram, you can see someone using the effect and tap "try it" to add the filter to your camera.
First published on Oct. 9, 8:18 a.m. PT.
Update, 10:19 a.m. PT: Adds information provided by an Instagram spokesperson.