Social media algorithms and AI-generated content from malicious users have considerably aggravated the problems of online misinformation and disinformation. Misleading information erodes trust in an organization and can cause irreparable harm to its reputation. Organizations must therefore respond to these attacks with lightning speed (ideally within minutes) to mitigate the impact.

The following best practices can help stanch the spread of mis-/disinformation.

The first is an obvious one: organizations must become more vigilant and constantly scan their online environment for suspicious content. The Observatory on Social Media at Indiana University (https://osome.iu.edu/), for one, provides free online tools such as BotAmp to detect unusual activity on Twitter. The availability of free online programs and commercial products should make it more practicable for organizations to scan their online environment diligently.
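As a rough illustration of what "scanning for unusual activity" can mean in practice, the sketch below flags an hour whose mention count is a statistical outlier relative to recent history. The function name, data shape, and z-score threshold are illustrative assumptions, not part of BotAmp or any other tool.

```python
from statistics import mean, stdev

def spike_alert(hourly_counts, threshold=3.0):
    """Flag the latest hour if its mention count is a statistical
    outlier relative to the preceding hours (simple z-score test).

    hourly_counts: mention counts, oldest first; the last entry is
    the hour under scrutiny. Returns True when an alert should fire.
    """
    history, latest = hourly_counts[:-1], hourly_counts[-1]
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest > mu  # flat baseline: any rise is notable
    return (latest - mu) / sigma > threshold

# A quiet baseline followed by a sudden burst of mentions fires an alert:
print(spike_alert([12, 9, 11, 10, 13, 11, 240]))  # True
print(spike_alert([12, 9, 11, 10, 13, 11, 14]))   # False
```

A real monitor would feed this from a social media API and tune the threshold to its own baseline noise, but the core idea (alert on deviation from recent history, not on absolute volume) is the same.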

Second, caution must be exercised when liking, commenting, or replying on social media, because AI-driven disinformation makes it harder to distinguish credible sites from rogue ones, and an organization can unwittingly become part of a disinformation campaign. Some of the free and commercially available detection tools help mitigate the impact of unsophisticated bot attacks; however, not all disinformation campaigns involve bots. More advanced programs might provide insights into the dynamics of a campaign and identify some of the accounts at its center, but it would be optimistic to assume that all of them could be identified. Hence, it is prudent to exercise caution.

Third, and this might be contentious, temporarily turning off comments during a suspected disinformation campaign serves better than waiting to confirm the veracity of posts or comments before blocking them, as speed is of utmost importance given the exponential nature of the online diffusion of disinformation. The public should be redirected to the company's official website or channels of communication, where they can obtain accurate and updated information.

Fourth, organizations should focus their efforts on building and protecting their credibility. At the highest level of abstraction, AI-driven approaches use data labels to classify social media posts as credible or deceptive. User credibility is one of the factors considered in creating these data labels. Organizations that score higher on credibility measures would fare better during a disinformation campaign.
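To make the role of credibility concrete, here is a deliberately simple sketch of how an account-level credibility score might be blended with a content-risk score when assigning a label. The function, weights, and cutoff are hypothetical and uncalibrated; production systems learn such parameters from labeled data rather than hard-coding them.

```python
def label_post(account_credibility, content_risk, cred_weight=0.6):
    """Toy scoring rule: blend an account credibility score
    (0 = unknown/rogue, 1 = well-established) with a content-risk
    score (0 = benign, 1 = highly deceptive) into a single label.
    Weights and the 0.5 cutoff are illustrative assumptions.
    """
    score = cred_weight * (1 - account_credibility) \
            + (1 - cred_weight) * content_risk
    return "deceptive" if score > 0.5 else "credible"

# The same borderline content is treated differently depending on
# who posted it -- which is why credibility is worth protecting:
print(label_post(account_credibility=0.9, content_risk=0.6))  # credible
print(label_post(account_credibility=0.1, content_risk=0.6))  # deceptive
```

The point of the sketch is the asymmetry: an organization with a strong credibility record gets the benefit of the doubt from such classifiers, while a low-credibility account posting identical content is flagged.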

Fifth, organizations should consider hiring their own domain experts to assist in fact-checking or in identifying information for fact-checking. Sometimes the misinformation can emanate from within the organization. As more and more organizations switch to chatbots, it has been observed that some have the potential to turn rogue. Recently, Tessa, the chatbot for the National Eating Disorder Association, went rogue and started giving advice that was unsafe and outside its designated role. Given the limitations of both AI-based and human-based fact-checking on their own, a hybrid defense system is required, one that leverages the scalability of AI to efficiently process extremely large amounts of data and the expertise of humans in checking the accuracy of statements. This hybrid method improves not only speed but also the accuracy of outcomes, and it enhances trust in the organization.
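The hybrid division of labor described above can be sketched as a triage step: the AI scorer acts on high-confidence cases immediately and routes uncertain ones to a human fact-checking queue. The function name, thresholds, and sample posts below are illustrative assumptions, not a reference implementation.

```python
def triage(posts, auto_threshold=0.95, review_threshold=0.6):
    """Hybrid AI-human triage sketch.

    posts: list of (text, p_false) pairs, where p_false is a model's
    confidence that the post is false. High-confidence cases are
    acted on automatically; uncertain ones go to human experts.
    Returns (auto_flagged, human_queue, cleared).
    """
    auto_flagged, human_queue, cleared = [], [], []
    for text, p_false in posts:
        if p_false >= auto_threshold:
            auto_flagged.append(text)   # AI is confident: act immediately
        elif p_false >= review_threshold:
            human_queue.append(text)    # uncertain: escalate to an expert
        else:
            cleared.append(text)        # low risk: no action needed
    return auto_flagged, human_queue, cleared

flagged, queue, cleared = triage([
    ("CEO arrested, stock worthless", 0.98),
    ("Product recall rumored in forums", 0.72),
    ("Quarterly earnings call on Friday", 0.05),
])
print(flagged)   # ['CEO arrested, stock worthless']
print(queue)     # ['Product recall rumored in forums']
print(cleared)   # ['Quarterly earnings call on Friday']
```

The design choice is that humans only see the middle band: the AI handles the volume at both extremes, which is what makes the hybrid approach both fast and accurate.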
