6 Common Social Engineering Tactics and How to Prevent Them

Social engineering is an attempt by attackers to fool or manipulate others into surrendering access details, credentials, banking information, or other sensitive data. Once access is obtained, the goal is usually financial gain.

Recently, for example, Twitter was subject to a high-profile social engineering attack. Attackers manipulated several Twitter employees to gain access to the platform's admin accounts. With those admin privileges, they posted a tweet saying “All Bitcoin sent to our address below will be sent back to you doubled!” on a number of celebrity and company profiles, including Apple, Bill Gates, Elon Musk, and Joe Biden.


Twitter shut the attack down quickly but not before the attackers got away with an estimated $120,000 USD worth of Bitcoin.

Social engineering is a creative strategy for attackers to exploit human emotion and ego, generally for financial reward. It often forms part of other attack strategies as well, such as ransomware.

In this article, we take a look at some of the more common forms and tactics of social engineering, and explore how an organization can protect itself from such an attack.

What are the stages of a social engineering attack?

In general, social engineering attacks are implemented in three stages.

  1. Research. Attackers perform research to identify potential targets and to determine which strategies are likely to work best against them. They will typically collect data from company websites, LinkedIn, and other social media profiles, and potentially even in person.

  2. Planning. Once the attackers know who they will be targeting and have an idea of the target's potential weaknesses, they put together a strategy that is likely to work: the specific approach and messages designed to exploit the target's individual weaknesses. Sometimes discussions of these plans can be found on darknet forums.

  3. Implementation. Execution of the prepared strategy often begins with messages sent through email, social media, or another messaging platform. Depending on the approach, the entire process may be automated and target a broad number of individuals, or the attacker may interact personally with the victim. Generally, the aim is to gain access to private accounts, uncover banking or credit card details, or install malware.

6 of the Most Common Social Engineering Attack Strategies

1. Phishing and Spear Phishing.

Phishing messages are designed to get a victim's attention with an alarming or curious message. They work on emotional triggers and often masquerade as well-known brands, making it seem like the messages come from a legitimate source.

Most phishing messages carry a sense of urgency, causing the victim to believe that something negative will happen if they don't surrender their details. For example, an email might pose as a fraud notice from the victim's bank, asking them to log into their account immediately; the link, however, leads to a fake login page.

Spear phishing is similar but takes a more targeted, individualized approach.
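
To illustrate the kind of automated check a security team might layer on top of user awareness, here is a minimal sketch in Python that flags emails whose display name claims a well-known brand while the sender domain doesn't match that brand's legitimate domains. The brand-to-domain mapping and the example addresses are assumptions for illustration, not a complete phishing filter.

```python
from email.utils import parseaddr

# Hypothetical mapping of brand names to domains they legitimately send from.
KNOWN_BRANDS = {
    "apple": {"apple.com", "email.apple.com"},
    "twitter": {"twitter.com"},
    "paypal": {"paypal.com", "e.paypal.com"},
}

def looks_like_brand_spoof(from_header: str) -> bool:
    """Flag messages whose display name claims a brand the sender domain doesn't match."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    for brand, legit_domains in KNOWN_BRANDS.items():
        if brand in display_name.lower() and domain not in legit_domains:
            return True  # brand named in display name, but sent from an unrelated domain
    return False

print(looks_like_brand_spoof("Apple Support <security@apple-verify.example>"))  # True
print(looks_like_brand_spoof("Apple Support <noreply@email.apple.com>"))        # False
```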


2. Baiting.

A baiting attack generally pretends to offer something that the victim would find useful, for example, a software update. However, instead of a useful update or new software, it is, in fact, a malicious file or malware. 

3. Scareware. 

Playing on the target's fear, this approach seeks to persuade the target that malware is already installed on their computer, or that the attacker already has access to their email account. The attacker then persuades the target to pay a fee to remove the malware.

4. Pretexting.

In a pretexting attack, the attacker creates a fake identity and uses it to manipulate victims into providing private information. For example, the attacker might pretend to be part of a third-party IT service provider, then ask for the user's account details and password in order to assist them with a problem.

5. Quid Pro Quo. 

Similar to baiting, a quid pro quo attack promises to perform an action that will benefit the target. For example, an attacker might call an individual at a company who has an open technical support inquiry and pretend to help them. However, instead of actually helping, they get the individual to compromise the security of their own device.

6. Tailgating.

Tailgating is a physical form of social engineering that enables criminals to gain access to a building or secure area. For example, the criminal follows behind someone authorized to access an area and, assuming an air of innocence, asks them to simply hold the door.


How to Prevent Social Engineering

One of the key reasons social engineering is so difficult to protect against is the variety of ways it can be implemented. Attackers can be incredibly creative, which makes social engineering attacks very hard to spot. Additionally, security professionals have to contend with skilful manipulation of the human ego.

Social engineering attacks exploit human behaviour. They target people's fears or concerns, often with messaging centred on urgency, encouraging victims to act immediately before they realize they are part of a social engineering attack. Key to prevention, then, is remaining suspicious of unexpected emails, voicemails, and instant messages through platforms such as Facebook.

Additionally, security teams need to stay ahead of the attackers and be aware of each variation of a particular social engineering attack. Using OSINT tools, for example, they can learn about the messaging and strategies currently in use, as well as potential exploits, allowing them to take action to mitigate evolving and emerging threats.

Increased awareness and vigilance, though, are only the first step. These attacks are common because they are effective, and they are effective because they take advantage of inherently human traits. Changing this human behaviour doesn't happen overnight. An internal education strategy needs to be put in place to regularly inform and teach employees about current social engineering strategies, reducing the potential for any employee to fall prey to one. In these ways, security professionals can mitigate the risks that surround social engineering attacks.

Fighting Disinformation: How to Detect Bots and Determine Fake News

In our increasingly digital world, the proliferation of disinformation poses a serious threat to organizations. To combat misinformation, companies need the right tools and information.

In an increasingly digital world, there is scope for fake news publishers to make a huge social impact, as well as large profits, through the spread of disinformation. Accordingly, this is a problem that has grown and will continue to grow. The spread is compounded by our very human nature, which compels us to engage with inflammatory content and often share it before we've had time to fact-check and verify.

The spread of disinformation is problematic on a number of levels: it can damage a brand's image, spread harmful or misleading medical information, as we've seen throughout COVID-19, or even undermine democracy itself, as was seen in the 2016 US elections. Ultimately, to combat misinformation, organizations need to be equipped with the right tools and understand both what they're looking for and the reasons misinformation is spread.

The High Cost of Fake News

There are serious potential ramifications to the unchecked proliferation of misinformation, which can impact both B2C and B2B organizations. For example, a competitor, a disgruntled customer, or an employee could hire or create a fake news publisher to damage your brand image, whether for revenge or to gain a competitive market advantage.

These adversarial news generation sites can easily generate a huge amount of very believable content, syndicate it across a number of channels, and promote it heavily through social media, potentially through the use of bots. Overwhelmed companies would face a significant challenge in developing a response to counteract this kind of bad “press”, and targeted organizations would need real-time, actionable data at their fingertips.

How do you Spot a Bot?

Anonymity

Real people sharing real stories will have full accounts, normally with a photo of themselves. These people will have friends, followers, and family, and will engage largely with their friends' content. The opposite is generally true for bots. Bots, by their very nature, don't have real identities, which often results in bot accounts appearing highly anonymous.

This could be evidenced by the lack of information they share, or by a generic profile picture such as a well-known landmark.

Activity

The frequency of an account's posts, as well as how successful those posts are, can be a good indicator of a bot. For example, you might come across an account with only one post and no followers, yet that post has thousands of shares.
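
As a rough illustration, this kind of engagement anomaly can be expressed as a simple threshold check. The sketch below is a heuristic only; the thresholds are made up and would need calibrating against real platform data.

```python
def suspicious_engagement(post_count: int, followers: int, total_shares: int) -> bool:
    """Flag accounts whose content spreads far wider than their audience size explains,
    for example a single post with thousands of shares from an account with no followers."""
    if post_count == 0:
        return False
    shares_per_post = total_shares / post_count
    # Thresholds are illustrative only; real tooling would calibrate them per platform.
    return shares_per_post > 1000 and followers < 10

print(suspicious_engagement(post_count=1, followers=0, total_shares=4000))  # True
```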

Content

The people who create bots have an agenda, whether that's to drive traffic to a website, generate income, or spread political disinformation. Whatever their reason, the bots will be used to achieve it, which means their posts will share a common theme, such as inflammatory political content.

Stolen photo

It’s not uncommon for bots to steal profile pictures. A quick test is to run their profile picture through a reverse image search, such as Google Images, to find the real owner of the image.
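
Part of that check can be automated. The sketch below uses the Pillow and ImageHash Python libraries (an assumption about tooling, not something prescribed by this article) to compare a profile picture against images you have already identified as stock or stolen, using a perceptual hash that tolerates resizing and re-compression.

```python
from PIL import Image
import imagehash  # pip install Pillow ImageHash

def is_reused_image(profile_pic_path, known_image_paths, max_distance=5):
    """Return True if the profile picture is a near-duplicate of a known image.

    A small Hamming distance between perceptual hashes suggests the picture is
    a copy of an image we have already seen, even if resized or re-compressed."""
    candidate = imagehash.phash(Image.open(profile_pic_path))
    for path in known_image_paths:
        if candidate - imagehash.phash(Image.open(path)) <= max_distance:
            return True
    return False
```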

Related: Responding to Global Crises like COVID-19 with Increased Situational Awareness

Things might appear real at a glance but prove to be fake on closer inspection.

A quick checklist for botnet detection

Bot accounts used in one network or campaign usually have several of the features listed below in common:

  • Multiple accounts with similar names or handles;

  • Accounts were created on the same date;

  • Each account is posting to the same sites, or even the exact same links;

  • The same phrasing or grammatical errors appear across the accounts;

  • They all follow each other and/or share each other's posts;

  • They use the same tool for link shortening;

  • The bios have similarities;

  • Profile pictures are generic or identifiably not them (easily searchable through Google).

Obviously, just because some accounts have similarities doesn't mean they are all bots. However, it should certainly raise suspicion, especially if you find four or five accounts with several of these signs.
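
This checklist lends itself to simple automation. The sketch below is a rough heuristic that scores a group of accounts against three of the signals above: creation dates clustered on the same day, near-identical handles, and the same links shared by multiple accounts. The account structure is a made-up example, not any particular platform's API.

```python
from collections import Counter
from difflib import SequenceMatcher

# Each account is a plain dict for illustration; a real tool would pull these
# fields from a platform API or an OSINT collection pipeline.
accounts = [
    {"handle": "newsfan_2041", "created": "2020-06-01", "links": {"bit.ly/abc"}},
    {"handle": "newsfan_2042", "created": "2020-06-01", "links": {"bit.ly/abc"}},
    {"handle": "jane.doe",     "created": "2015-03-12", "links": {"example.com/post"}},
]

def botnet_signals(accounts):
    """Count how many of the checklist signals a group of accounts shares."""
    signals = 0

    # 1. Multiple accounts created on the same date.
    if max(Counter(a["created"] for a in accounts).values()) > 1:
        signals += 1

    # 2. Similar handles (high character overlap between some pair of accounts).
    handles = [a["handle"] for a in accounts]
    if any(SequenceMatcher(None, h1, h2).ratio() > 0.8
           for i, h1 in enumerate(handles) for h2 in handles[i + 1:]):
        signals += 1

    # 3. The same links posted by more than one account.
    all_links = [link for a in accounts for link in a["links"]]
    if max(Counter(all_links).values(), default=0) > 1:
        signals += 1

    return signals

print(botnet_signals(accounts))  # 3 for the sample accounts above
```

The more of these signals a cluster of accounts shares, the more closely it deserves manual review.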

Fake Accounts vs. Account Takeovers

We outlined above a few of the tell-tale signs of a bot. An additional tactic commonly used to amplify the distribution of fake or inflammatory content is the account takeover.

In this approach, botnet operators perform credential stuffing attacks on social media accounts and then use the accounts they gain access to in order to spread content through direct messages or shared posts. Additionally, a compromised account could expose sensitive information, and executives or organizations as a whole could suffer reputational damage or financial loss.

Standard security protocols, such as having unique passwords for all your online accounts, should help individuals avoid becoming victims of these tactics. 
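
Organizations can also check whether a password already appears in known breach corpuses before it can be exploited for credential stuffing. The sketch below queries the public Pwned Passwords range API, one example of such a service (not something specific to any tool mentioned here); only the first five characters of the password's SHA-1 hash are ever sent.

```python
import hashlib
import urllib.request

def password_in_known_breach(password: str) -> bool:
    """Check a password against the Pwned Passwords range API (k-anonymity model).

    Only the first five hex characters of the SHA-1 hash leave the machine,
    so the password itself is never transmitted."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    return any(line.split(":")[0] == suffix for line in body.splitlines())

if password_in_known_breach("password123"):
    print("This password appears in known breaches; do not reuse it.")
```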

The Importance of Verifying Information

The best way to check the accuracy of a source is to check it against another source.

However, this does raise another question: what if those other sources, the sources which are supposed to independently verify the truth, are working with the information source you're fact-checking? Or what if the facts in the source are largely correct but the story is spun to support one side of an argument? This might ring of scepticism and conspiracy, but it is a point worth making: with whom do you place your faith, and at what point do you stop questioning the validity of information?

Identifying Click-bait

Click-bait titles are purposefully crafted to evoke a powerful response from readers, encouraging people to share the post, often without even reading the text. Less reputable news sites are occasionally guilty of this tactic, twisting the truth in their titles to get a response and increase their reach. However, it is also a tactic employed by botnet operators to maximise the reach of fake news. Signs that this might be the case include:

  • Does it evoke a strong emotional reaction?

  • Is the story utterly ridiculous - or does it perfectly confirm your beliefs?

  • Are you going to spend money because of it?

  • Does it make you want to share it?

What’s the Bigger Context?

Understanding the context behind a piece of news can help you determine how much, if any, of the story is true, as well as lead you to a better understanding of the publisher's end goal.

  • Who’s providing the information?

  • What’s the scale of the story?

  • If there’s an “outrage,” are people actually upset?

  • How do different news outlets present the same story?

Understand their Angle

Just because something is misleading or even incorrect doesn't mean it's without use, especially in a security context. In fact, understanding the reason behind the content might give insight into potentially harmful tactics targeting your organization and better allow you to create an effective response.

When determining what their angle is, ask the following questions:

  • Are important facts getting left out or distorted?

  • What’s the larger narrative?

  • What if you are actually wrong? Your previous opinion on a subject might have been formed by a different piece of fake news.

  • Why did they share this story?


Determining Truth from Fiction Online with Signal OSINT

How companies utilize technology and adapt to the shifting threat landscape will determine how effectively they are able to mitigate the threat of disinformation.

Signal enables organizations to monitor and manage large amounts of data from a plethora of different data sources across the surface, deep, and dark web. This, paired with advanced filters and boolean logic, means that security teams are empowered to identify disinformation, discover patterns and botnets, and practically respond to these potential and evolving threats.

Additionally, Signal enables security teams to detect data leaks. Leaked data may be used in credential stuffing attacks and poses a severe security risk. Identifying data leaks early is essential for mitigating the threat of credential stuffing and, in this case, preventing harmful misinformation from being spread through or by an organization's workforce.
