Ido Shamun
Image moderation system in minutes

Every user-generated content platform needs some sort of content moderation system to make sure that the content is appropriate and respectful; otherwise you might get some serious negative feedback from your users (talking from experience 😡).
In this post I would like to talk specifically about image moderation and how easy it is to build a system that rejects NSFW images from your application. 🙈

Google Cloud Vision

Enables developers to understand the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API.

I will be using the Cloud Vision API to automatically detect inappropriate images, powered by SafeSearch. SafeSearch rates your image by the likelihood of the following: adult, spoof, medical, violence and racy. In our case (NSFW), adult, violence and racy are probably the metrics we are looking for. You can try the API for free to see what it's like here.
Of course, there are many alternatives to Cloud Vision, but this is my favorite.

Server-side

We will be using Node to write our moderation code and the @google-cloud/vision package.
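If you haven't installed the package yet, it is available on npm. Note that the client also needs Google Cloud credentials; typically you point the GOOGLE_APPLICATION_CREDENTIALS environment variable at a service account key file.

npm install @google-cloud/vision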

First, we have to initialize our annotator so we can use it later on:

const vision = require('@google-cloud/vision');
const client = new vision.ImageAnnotatorClient();

Next, let's say a user wants to upload an image to our server and we would like to reject it if it is NSFW:

const veryLikely = detection => detection === 'VERY_LIKELY';

const likelyOrGreater = detection =>
  detection === 'LIKELY' || veryLikely(detection);

const moderateContent = url =>
  client.safeSearchDetection(url)
    .then(([result]) => {
      const detections = result.safeSearchAnnotation;
      return (
        likelyOrGreater(detections.adult) ||
        likelyOrGreater(detections.violence) ||
        veryLikely(detections.racy)
      );
    });

Our moderateContent function takes a url as a parameter (it can actually also accept a buffer); this url points to either a local image file or a remote one. The function returns a Promise which resolves to true if the content has to be rejected, or false otherwise. It makes only one third-party call, to the Cloud Vision API, to run a SafeSearch detection on the provided image. SafeSearch ranks the image with the following rankings:
UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, and VERY_LIKELY.
I set the threshold to "likely" or greater for adult and violence, and to "very likely" for racy; obviously you can set your threshold to whatever you want.
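
To make the thresholds concrete, here is roughly what result.safeSearchAnnotation looks like. The values below are made up for illustration; with the thresholds above, this particular image would be rejected because racy is VERY_LIKELY.

// Illustrative example of result.safeSearchAnnotation (values are hypothetical)
const detections = {
  adult: 'POSSIBLE',
  spoof: 'UNLIKELY',
  medical: 'VERY_UNLIKELY',
  violence: 'UNLIKELY',
  racy: 'VERY_LIKELY', // veryLikely(detections.racy) === true, so the image is rejected
};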

Using the moderateContent function, our server can decide whether to proceed with the provided image or to reject it, for example with a 400 error code.
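
Here is a minimal sketch of what that could look like, assuming an Express server where the client sends the image URL in a JSON body. The route and payload shape are made up for illustration; adapt it to however your uploads actually arrive (file buffer, signed URL, etc.).

const express = require('express');
const app = express();
app.use(express.json());

app.post('/images', async (req, res) => {
  try {
    const rejected = await moderateContent(req.body.url);
    if (rejected) {
      // The image crossed our SafeSearch thresholds
      return res.status(400).json({ error: 'Image rejected by moderation' });
    }
    // Safe to continue: store the image, create a DB record, etc.
    return res.status(201).json({ ok: true });
  } catch (err) {
    // The Cloud Vision call failed; decide whether to fail open or closed
    return res.status(500).json({ error: 'Moderation check failed' });
  }
});

app.listen(3000);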

I hope that now you understand how easy it is to implement a content moderation system; all you need is a few lines of code and a Google Cloud account.

Good luck, and let me know how it goes in the comments below :)

Top comments (3)

Vesi Staneva

Cool post & an interesting way to solve the Content Moderation problem. :)
Actually, my team just completed an open-source Content Moderation Service built with Node.js, TensorFlowJS, and ReactJS that we have been working on over the past weeks. We have now released the first part of a series of three tutorials - How to create an NSFW Image Classification REST API - and we would love to hear your feedback (no ML experience needed to get it working). Any comments & suggestions are more than welcome. Thanks in advance!
(Fork it on GitHub or click 🌟 star to support us and stay connected 🙌)

Ido Shamun

That's super awesome! Thanks for letting me know and good luck :)

Vesi Staneva

Thank you very much for your kind words! Keep up the good work :)