In other words, if your behavior is considered "troll-like" by Twitter, it will be harder for other users to find your posts on the platform.
Twitter concedes that "some troll-like behavior is fun, good and humorous", but says that there are some accounts and tweets that are "behaving in ways that distort the conversation" without actually violating any policies.
Twitter will begin using a wider range of signals to rank tweets in conversations and searches, hiding more replies that are likely to be abusive, the company said today.
"There are many new signals we're taking in, most of which are not visible externally", said Del Harvey, vice president of trust and safety, and David Gasca, director of product management for health, in a blog post titled "Serving Healthy Conversation". Twitter will look for signals such as unconfirmed email addresses, multiple accounts opened by the same user, and accounts that repeatedly tweet at users who don't follow them.
"Some of these accounts and Tweets violate our policies, and, in those cases, we take action on them", they continued.
For Twitter, this means combining code-based rules, human review, and machine learning, all of which will help organize and present content to the user in a purportedly healthier way, in areas such as search and conversations.
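Twitter has not published how these signals are weighted, but the general idea of combining behavioral signals into a score that decides which replies to hide can be sketched as follows. This is a purely hypothetical illustration: the signal names, weights, and threshold are all invented, not Twitter's actual system.

```python
# Hypothetical sketch of signal-based reply ranking, loosely modeled on the
# behavior described above. All signal names and weights are invented.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    email_confirmed: bool      # unconfirmed email addresses are one cited signal
    linked_accounts: int       # multiple accounts opened by the same user
    unsolicited_mentions: int  # repeated tweets at accounts that don't follow back

def troll_score(s: AccountSignals) -> float:
    """Combine behavioral signals into one score; higher means more troll-like."""
    score = 0.0
    if not s.email_confirmed:
        score += 1.0
    score += 0.5 * max(0, s.linked_accounts - 1)
    score += 0.25 * s.unsolicited_mentions
    return score

def rank_replies(replies, threshold=1.5):
    """Sort replies from least to most troll-like; hide those above the threshold."""
    visible, hidden = [], []
    for text, signals in sorted(replies, key=lambda r: troll_score(r[1])):
        (hidden if troll_score(signals) > threshold else visible).append(text)
    return visible, hidden
```

The key design point the article describes is that the hidden replies are down-ranked rather than deleted: nothing here removes content, it only moves low-scoring replies behind an extra click.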
No doubt attention-seeking trolls will be hopping with rage and crying censorship over the latest development, but Twitter said that early testing of the new tools in various markets around the world shows that keeping the negative commentary out of sight is having a positive impact.
It said it had deleted or added warnings to about 29 million posts that had broken its rules on hate speech, graphic violence, terrorism and sex, during the first three months of the year.
"This is only one part of our work to improve the health of the conversation and to make everyone's Twitter experience better", the post continued. "There will be false positives and things that we miss; our goal is to learn fast and make our processes and tools smarter."
The company says it will hide replies that its systems judge likely to be abusive or disruptive behind a "Show more replies" prompt, so they remain accessible but require an extra click to view.