In an increasingly digital world, fake news publishers can make a huge social impact, and large profits, through the spread of disinformation. Accordingly, this problem has grown and will continue to grow. The spread is compounded by our very human tendency to engage with inflammatory content and to share it before we’ve had time to fact-check and verify.
The spread of disinformation is problematic on a number of levels. It can damage a brand’s image, spread harmful or misleading medical information, as we’ve seen throughout COVID-19, or even undermine democracy itself, as was seen in the 2016 US elections. Ultimately, to combat misinformation, organizations need to be equipped with the right tools and to understand both what they’re looking for and the reasons misinformation is spread.
The High Cost of Fake News
There are serious potential ramifications to the unchecked proliferation of misinformation, which can impact both B2C and B2B organizations. For example, a competitor or a disgruntled customer or employee could hire or create a fake news publisher to damage your brand image, whether for revenge or to gain a competitive market advantage.
These adversarial news generation sites could easily generate a huge amount of very believable content, syndicate it across a number of channels, and promote it heavily through social media, potentially with the help of bots. Overwhelmed companies would face a significant challenge in developing a response to counteract this bad “press,” and targeted organizations would need real-time, actionable data at their fingertips.
How Do You Spot a Bot?
Anonymity
Real people sharing real stories will have full accounts, normally with a photo of themselves. They will have friends, followers, and family, and will engage largely with their friends’ content. The opposite is true of bots. Bots, by their very nature, don’t have identities, which often makes bot accounts appear highly anonymous.
This anonymity might show in the scant information they share, or in a generic profile picture such as a well-known landmark.
Activity
The frequency of an account’s postings, as well as how successful those posts are, is a good indicator of a bot. For example, you might come across an account with only one post and no followers, yet that post has thousands of shares.
Content
The people that create bots have an agenda, whether that’s to drive traffic to a website, generate income, or spread political disinformation. Whatever their reason, the bots will be used to achieve it, which means all their posts will share a common theme, such as inflammatory political content.
Stolen Photos
It’s not uncommon for bots to steal profile pictures. A quick test is to run the profile picture through Google’s reverse image search to find the real owner of the image.
Related: Responding to Global Crises like COVID-19 with Increased Situational Awareness
A quick checklist for botnet detection
Bot accounts used in one network or campaign usually have several of the features listed below in common:
Multiple accounts with similar names or handles;
Accounts were created on the same date;
Each account is posting to the same sites, or even the exact same links;
The same phrasing or grammatical error appears across the accounts;
They all follow each other and/or share each other’s posts;
They use the same tool for link shortening;
The bios have similarities;
Profile pictures are generic or identifiably not them (easily searchable through Google).
Of course, just because some accounts have similarities doesn’t mean they are all bots. However, it should certainly raise suspicion, especially if you have four or five accounts exhibiting several of these signs.
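The checklist above can be turned into a rough, automatable heuristic. The sketch below is a toy Python scorer over a simplified, hypothetical account record (handle, creation date, shared links, bio); real platforms expose different fields, and a production system would need many more signals:

```python
from dataclasses import dataclass, field

# Hypothetical account record; the field names are illustrative,
# not any social platform's actual API.
@dataclass
class Account:
    handle: str
    created: str                       # ISO date the account was created
    links_shared: set = field(default_factory=set)
    bio: str = ""

def cluster_suspicion_score(accounts):
    """Count how many checklist signals a group of accounts shares.

    Higher scores mean more coinciding features (same creation date,
    same links, similar handles, identical bios).
    """
    score = 0
    # Signal: all accounts were created on the same date
    if len({a.created for a in accounts}) == 1:
        score += 1
    # Signal: every account pushes at least one identical link
    if set.intersection(*(a.links_shared for a in accounts)):
        score += 1
    # Signal: similar handles (same alphabetic stem once digits are stripped)
    stems = {"".join(ch for ch in a.handle if not ch.isdigit()) for a in accounts}
    if len(stems) == 1:
        score += 1
    # Signal: identical, non-empty bios
    if len({a.bio for a in accounts}) == 1 and accounts[0].bio:
        score += 1
    return score
```

A cluster like `newsfan01` and `newsfan02`, created the same day, pushing the same link with identical bios, would score the maximum of 4, while unrelated accounts would score near 0.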
Fake Accounts vs. Account Takeovers
We outlined above a few of the tell-tale signs of a bot. An additional tactic commonly used to amplify the distribution of fake or inflammatory content is the account takeover.
In this approach, botnet operators perform credential stuffing attacks on social media accounts and then use the accounts they gain access to, spreading information through direct messages or shared content. Additionally, a compromised account could expose sensitive information, and executives or entire organizations could suffer reputational damage or financial loss.
Standard security protocols, such as having unique passwords for all your online accounts, should help individuals avoid becoming victims of these tactics.
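One concrete precaution is checking whether a password has already appeared in a known breach. Lookup services such as Have I Been Pwned’s Pwned Passwords API use a k-anonymity scheme: you send only the first five characters of the password’s SHA-1 hash, so the password itself never leaves your machine. The sketch below only performs the local hash split; the network lookup is described in a comment rather than executed:

```python
import hashlib

def hibp_prefix_suffix(password: str):
    """Split a password's SHA-1 hash for a k-anonymity range lookup.

    Only the five-character prefix would be sent over the network; the
    full password (and even its full hash) stays local.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# A caller would then fetch https://api.pwnedpasswords.com/range/<prefix>
# and check whether <suffix> appears among the returned hash suffixes;
# a match means the password has appeared in a known breach.
```

Because the service only ever sees a hash prefix shared by hundreds of other passwords, this check can be run safely even on passwords still in use.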
The Importance of Verifying Information
The best way to check the accuracy of a source is to check it against another source.
However, this raises another question: what if those other sources, the ones supposed to independently verify the truth, are working with the information source you’re fact-checking? Or what if the facts in the source are largely correct, but the story is spun to support one side of an argument? This might ring of scepticism and conspiracy; however, it is a point worth making: with whom do you place your faith, and at what point do you stop questioning the validity of information?
Identifying Click-bait
Click-bait titles are purposefully crafted to evoke a powerful response from readers, encouraging people to share the post, often without even reading the text. Less reputable news sites are occasionally guilty of this tactic, twisting the truth in their titles to get a response and increase their reach. However, it is also a tactic employed by botnet operators to maximise the reach of fake news. Signs that this might be the case are as follows:
Does it evoke a strong emotional reaction?
Is the story utterly ridiculous - or does it perfectly confirm your beliefs?
Are you going to spend money because of it?
Does it make you want to share it?
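Some of these signs can be approximated, very roughly, in code. The sketch below is a toy rule-based check; the trigger phrases and thresholds are illustrative guesses, not a validated classifier:

```python
import re

# Illustrative trigger phrases only; a real system would use a much
# larger, curated list or a trained model.
TRIGGER_PHRASES = [
    "you won't believe", "shocking", "outrage", "destroys",
    "the truth about", "what happened next", "doctors hate",
]

def clickbait_signals(title: str):
    """Return the list of simple click-bait signals a headline trips."""
    signals = []
    lowered = title.lower()
    # Emotionally loaded stock phrasing
    if any(phrase in lowered for phrase in TRIGGER_PHRASES):
        signals.append("emotional trigger phrase")
    # Exclamatory punctuation aimed at provoking a reaction
    if "!" in title:
        signals.append("exclamatory punctuation")
    # Two or more fully capitalised words often signal manufactured urgency
    if len(re.findall(r"\b[A-Z]{3,}\b", title)) >= 2:
        signals.append("heavy capitalisation")
    return signals
```

A headline that trips several signals at once is worth pausing over before you share it.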
What’s the Bigger Context?
Understanding the context behind a piece of news can help you determine how much of the story, if any, is true, and lead you to a better understanding of the publisher’s end goal.
Who’s providing the information?
What’s the scale of the story?
If there’s an “outrage,” are people actually upset?
How do different news outlets present the same story?
Understand their Angle
Just because something is misleading or even incorrect doesn’t mean it’s without use, especially in a security context. In fact, understanding the reason behind the content might give insight into potentially harmful tactics targeting your organization and better allow you to create an effective response.
When determining their angle, ask the following questions:
Are important facts getting left out or distorted?
What’s the larger narrative?
What if you are actually wrong? Your previous opinion on a subject might have been formed by a different piece of fake news.
Why did they share this story?
Determining Truth from Fiction Online with Signal OSINT
How companies utilize technology and adapt to the shifting threat landscape will determine how effectively they are able to mitigate the threat of disinformation.
Signal enables organizations to monitor and manage large amounts of data from a plethora of different data sources across the surface, deep, and dark web. This, paired with advanced filters and boolean logic means that security teams are empowered to identify disinformation, discover patterns and botnets, and practically respond to these potential and evolving threats.
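To make the idea of boolean filtering concrete, here is a minimal, generic sketch of AND/OR/NOT keyword matching over a stream of posts. This is purely illustrative and does not reflect Signal’s actual query syntax or API:

```python
# Generic boolean keyword filter over a stream of posts; the query
# terms below are made up for illustration.
def matches(post: str, all_of=(), any_of=(), none_of=()):
    """True if the post satisfies a simple AND / OR / NOT keyword query."""
    text = post.lower()
    return (
        all(term in text for term in all_of)            # AND: every term present
        and (not any_of or any(term in text for term in any_of))  # OR: at least one
        and not any(term in text for term in none_of)   # NOT: none present
    )

posts = [
    "New miracle cure for COVID-19, doctors stunned",
    "Health ministry publishes updated COVID-19 vaccination schedule",
]

# Flag posts about COVID-19 that push "cure"/"miracle" claims,
# excluding official-ministry coverage.
flagged = [
    p for p in posts
    if matches(p, all_of=("covid",), any_of=("cure", "miracle"), none_of=("ministry",))
]
```

Layering filters like these over many sources is what lets analysts narrow a flood of posts down to the handful worth human review.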
Additionally, Signal enables security teams to detect data leaks. Leaked data may be used in credential stuffing attacks and poses a severe security risk. Identifying data leaks early is essential for mitigating the threat of credential stuffing and, in this case, preventing harmful misinformation from being spread through or by an organization’s workforce.