Instagram has started to crack down on the spread of misinformation and fake news on its platform by rolling out a new feature that detects forged photographs.
According to the Facebook-owned social network, its newly-unveiled system leverages “a combination of feedback from our community and technology” to identify which photos should be passed onto independent third-party fact-checkers.
If those fact-checkers decide that a photo is fake, it will be hidden behind a warning message before anyone can view it.
This prevents the image from being seen inadvertently: viewers have to click through a few warnings to actually reach the original image.
Not only that, photos deemed "factually inaccurate" by Instagram are removed entirely from the Explore tab, and they won't appear in hashtag search results either.
The new policies are intended to crack down on online propaganda and misinformation. The features rely on third-party fact-checkers, who have begun targeting "heavily doctored" images by photographers and digital artists and labeling them as false information.
According to a report from PetaPixel, an algorithm Instagram introduced in December to mitigate the spread of fake images has been flagging some non-threatening content created and/or altered by digital artists.
Instagram says it is standing by the decision to filter out the content in this case, telling The Verge: “We will treat this content the same way we treat all misinformation on Instagram.”
Following an outcry from digital artists over the move, Facebook clarified that an Instagram user's photos will not be taken down simply for being photoshopped; instead, they will be given a label after being rated by a fact-checker.