The Moderation model stands out as something that could be useful in a Telegram bot. Owners of SFW (Safe For Work) groups could have a bot check photos posted to the group and trigger some sort of action if a photo meets certain criteria. The model classifies photos into these categories:
- explicit
- suggestive
- drug
- gore
Like the other models, each category comes with a confidence score. For instance, one photo came back with a confidence of 92.65% for "suggestive".
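To make that concrete, here is a minimal sketch of what a result like that could look like in code. The response shape and these particular numbers are assumptions for illustration, not the API's actual output format:

```python
# Hypothetical moderation result: category -> confidence.
# The real API's response shape will differ; this is illustrative only.
result = {
    "explicit": 0.0132,    # made-up numbers, except "suggestive",
    "suggestive": 0.9265,  # which is the 92.65% example from above
    "drug": 0.0008,
    "gore": 0.0003,
}

# Report the highest-confidence category, e.g. "suggestive: 92.65%".
top = max(result, key=result.get)
print(f"{top}: {result[top]:.2%}")
```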
A score like that wouldn't trigger an automatic action from the bot in most groups. Right now this is experimental: I have it running in an SFW group, but it only messages the group's owner and moderators if it sees something above the threshold. I won't name the group, because I don't want people joining this SFW group just to test the limits.
In the future, a group owner might add RainRatBot, then specify a policy they'd like applied. For instance, they might say, "Delete any photo that is 'explicit' with 99%+ likelihood, then message the moderators if any of the categories are 90%+."
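As a rough sketch of how a policy like that might be evaluated, assuming the same category-to-confidence mapping as above (the `delete_photo` and `notify_moderators` callbacks are hypothetical placeholders for whatever the bot would actually do):

```python
DELETE_THRESHOLD = 0.99  # delete if "explicit" is at least this likely
NOTIFY_THRESHOLD = 0.90  # notify moderators if any category reaches this

def apply_policy(scores, delete_photo, notify_moderators):
    """Apply the example policy to one photo's moderation scores.

    scores: dict mapping category name -> confidence (0.0 to 1.0).
    delete_photo / notify_moderators: callbacks supplied by the bot.
    """
    if scores.get("explicit", 0.0) >= DELETE_THRESHOLD:
        delete_photo()
    flagged = {cat: conf for cat, conf in scores.items()
               if conf >= NOTIFY_THRESHOLD}
    if flagged:
        notify_moderators(flagged)

# Example: an "explicit" score of 99.5% both deletes and notifies.
apply_policy(
    {"explicit": 0.995, "suggestive": 0.40, "drug": 0.01, "gore": 0.0},
    delete_photo=lambda: print("deleting photo"),
    notify_moderators=lambda flagged: print("flagging to mods:", flagged),
)
```

Each group could set its own thresholds, or skip automatic deletion entirely and only get notifications, the way my test group works today.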
To get a feel for what the categories and confidences translate to for things that might be posted to your group, I created a group the bot treats specially: it runs each posted photo past the different models and tells you what the results were. If you are a group owner, message me at @RainRattie to get the link. (Only join if you are 18+ and okay with seeing adult material.)