7 Growing Cybersecurity Threats Professionals are Increasingly Worried About
We take a look at 7 of the growing concerns that cybersecurity and infosec professionals have as the trend towards digitization continues at an increasingly explosive pace.
The new software and systems employed across an organization create new attack vectors for threat actors and new data security concerns. On top of that, as these digital systems replace once-manual tasks, additional complications arise from potential user error; for example, an employee might make private data public without even realising it.
In this article, we take a look at 7 of the growing concerns that cybersecurity and infosec professionals have as this trend towards digitization continues at an increasingly explosive pace.
1. Unintentional Data Exposure
“To err is human,” as Alexander Pope famously wrote. We all make mistakes, and to combat this we have progressively leveraged more technology across industries to automate processes and reduce the potential for human error. However, technology can’t prevent our every mistake, and paradoxically, this use of technology increases the amount of data we as people and organizations produce and store in our systems. Hackers are aware of this and continue to find creative ways to exploit human weakness through strategies such as complex phishing campaigns.
On top of this, the adoption and rapid development of hardware (phones, for example) means many people conduct work from their personal mobile devices. The move towards working from home driven by the COVID-19 pandemic has furthered this blending of work and personal devices, as well as increasing the amount of work done over unsecured networks.
2. Adoption of AI into Malware for Scale and Evasion
Attacks that deny service can take a variety of forms, from malware to DDoS, and have huge financial implications for an organization. In 2017, for example, shipping giant Maersk had its IT systems taken down by a vicious piece of malware called NotPetya, costing the company an estimated $300 million.
These attacks might be driven by political motives, financial gain, or something else entirely. Over the last few years the tactics have evolved: threat actors have adopted new technologies and strategies that allow them to increase the scale of their attacks and to more effectively evade increasingly complex security protocols.
One growing concern is the adoption of AI into these attacks. AI can be used in a variety of ways, such as increasing the effectiveness of phishing campaigns. One proof of concept, DeepLocker, was developed by IBM Research. DeepLocker hides its malicious payload inside benign carrier applications, such as video-conferencing software, to avoid detection by most antivirus and malware scanners, and then uses facial recognition to identify its specific target and launch the payload.
How AI is used could completely change the way information security and cybersecurity professionals need to adapt and respond to threats.
3. Financial Fraud
Financial fraud off the back of data breaches is nothing new. However, it remains a problem today and will for the foreseeable future. Data breaches at large organizations, whether or not they are related to your organization, can easily lead to new attack vectors against your company.
There is a huge amount of Personally Identifiable Information (PII) for sale on the dark web. This data can be used in a number of ways, from credential stuffing to identifying high-value targets and refining spear-phishing campaigns.
4. Third-Party Integrations
Organizations often spend a huge amount of time and money ensuring their internal cybersecurity practices are excellent, yet it only takes one breach to undermine that investment. A successful ransomware attack, for example, could cost an organization tens of millions of dollars, without even considering the reputational damage that accompanies the financial loss.
However, as was seen with the 2020 SolarWinds breach, it doesn’t matter how well educated your staff are, how up to date your firewalls are, or how alert your security teams are if your third-party integrations have weaknesses.
5. Increasing Amounts of Sensitive Data Collected Through IoT Devices
Internet of Things (IoT) devices are beginning to infiltrate every level of our lives. From mobile robots and inventory tracking to personal assistants, connected speakers, and smart TVs, these devices seek to automate and simplify our lives.
However, what many people don’t realize is that these devices are often insecure by design and offer attackers new opportunities. Additionally, the terms and conditions around data sharing and usage for many of these devices lack transparency, and by utilizing the technology an organization makes it increasingly difficult to know and control what data is going out.
Finally, while a vendor may recommend applying new firmware updates, these are often not installed until a device starts misbehaving and someone updates it to troubleshoot the issue. In the meantime, known vulnerabilities remain unpatched, which can lead to serious security compromises.
6. Rise of Fake Online Personas
This threat can have a direct and dramatic impact on an organization’s reputation and on the physical security of employees. By creating and leveraging fake or phantom social profiles, threat actors can manufacture trending news and information, promote poor products, or push lies and deceptions to further an agenda.
The applications for these kinds of campaigns are vast, affecting everything from national elections to company sales and share prices, and there is currently no system in place to efficiently identify false profiles and counter the purposeful spread of misinformation in this way.
7. Shortfall of Professionals
The final security risk on the list is the continued shortage of skilled security workers. As cybersecurity threats evolve, and areas such as information security become more important for organizational security, increasing numbers of skilled and trained professionals will be needed.
Final Words
Many people are now desensitized to the fact that their data is shared online, whether through breaches or loose company policies. Feeling that their privacy cannot be regained, they often become careless about protecting it further. Add to this the constant evolution of cybersecurity threats, and the challenge for cybersecurity professionals looks like a tough one.
To ensure organizational security, companies need a combined response that includes continuous education of employees, restricted access, and multi-factor authentication. This needs to be paired with a skilled security team armed with the necessary knowledge and tools, such as OSINT software.
Security professionals need to be able to gather real-time data on emerging threats and proactively implement an effective response.
5 Ways AI is Subtly Shaping the World as we Know it
AI is shaping our world in numerous ways from targeted ads to rapidly advancing facial recognition applications and even AI-generated malware.
Artificial Intelligence (AI) describes technologies that can make informed, non-random decisions algorithmically. It has many current and potential applications and represents the current pinnacle of humanity’s ceaseless drive towards greater and greater efficiency. With particular regard to OSINT, it enables humans to collect, analyze, and interpret huge data sets, sets so large it would be entirely unfathomable to even approach them without machine assistance.
Everyone knows AI is shaping their world in one way or another. But often the changes are subtle, gradual and go unnoticed. Very few of us know what actually goes on behind the steel doors of the big tech companies like Alphabet, Facebook, and Apple. And yet we interact with their AI systems on a daily basis and those systems have huge power over our lives. In this article, we take a look at some of the key ways AI is being used today and how it will become increasingly important as our technologies improve.
5 Ways AI is Shaping the World
1. Improving and optimising business processes
The very first robots in the workplace were all about automating simple manual tasks. That was the age of factories and production lines. Today, though, it’s not manual tasks that robots are taking over. Instead, software-based robots are taking on repetitive tasks carried out on computers.
Initially, this was limited to automating simple repetitive tasks, such as “send follow up email 2 if no response after 3 days”. This has already reduced admin workloads and improved business operational efficiency immeasurably. The next step is the use of AI technologies to further alleviate some of the more labour-intensive ‘intelligent’ tasks, such as data gathering, aggregation, and analysis, leaving people to spend more time on complex, strategic, creative, and interpersonal tasks.
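As a simple illustration of this kind of rule-based automation, a follow-up rule might look something like the sketch below (the contact records, field names, and three-day threshold are hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical contact records; in practice these would come from a CRM.
contacts = [
    {"email": "a@example.com", "last_contacted": datetime(2020, 11, 2), "replied": False},
    {"email": "b@example.com", "last_contacted": datetime(2020, 11, 5), "replied": True},
]

FOLLOW_UP_AFTER = timedelta(days=3)

def needs_follow_up(contact, now=None):
    """Apply the rule: send follow-up email 2 if no response after 3 days."""
    now = now or datetime.utcnow()
    return not contact["replied"] and now - contact["last_contacted"] >= FOLLOW_UP_AFTER

for contact in contacts:
    if needs_follow_up(contact):
        print(f"Queue follow-up email 2 for {contact['email']}")
```

Rules like this are fixed and hand-written; the AI step described above replaces the hand-written condition with a model that decides what to do based on the data itself.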
2. More personalization will take place in real-time
Big tech companies are already using data to personalize services. Google Discover, for example, is a feed built on a complex algorithm that reads your online history and tailors the news feed to your particular interests. Other big tech examples are Spotify and Netflix, which use AI to suggest relevant media based on your historical behaviour.
This technology is constantly evolving and is probably one of the most noticeable in our day-to-day lives. The end goal is a system which can almost perfectly predict your desires and needs, an outcome none of us are likely to protest against. On the other side of the same coin, though, is the use of that very same data to target individuals with hyper-relevant ads. This practice can often seem intrusive and is one of the driving forces behind the adoption of VPNs.
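A minimal sketch of the idea behind such recommendations is item-to-item similarity over viewing history; the titles and watch data below are invented, and real systems are vastly more sophisticated:

```python
import numpy as np

# Rows = users, columns = titles; 1 means the user watched the title.
titles = ["Drama A", "Drama B", "Comedy A", "Comedy B"]
history = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
])

def recommend(user_index, top_n=2):
    """Score unseen titles by cosine similarity to titles the user has watched."""
    norms = np.linalg.norm(history, axis=0)
    norms[norms == 0] = 1.0
    similarity = (history.T @ history) / np.outer(norms, norms)  # title-to-title similarity
    seen = history[user_index].astype(bool)
    scores = similarity[:, seen].sum(axis=1)
    scores[seen] = -np.inf  # never re-recommend something already watched
    return [titles[i] for i in np.argsort(scores)[::-1][:top_n]]

print(recommend(0))  # for this toy data, "Comedy B" ranks above "Comedy A"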
3. AI in the creative space
Some things are still, even in 2020, better handled by humans. That being said, AI technologies are now beginning to encroach on creative spaces. Scorsese’s The Irishman is one example, in which Robert De Niro was de-aged on screen using AI technology.
There are additional uses though, for example, AI is being used to edit video clips for the purposes of spreading misinformation, and often these edits are incredibly hard to spot. This has led to a new sector of cybersecurity which requires AI technology to spot AI-generated or edited video and audio files.
4. Increasing AI in Cybersecurity
Even as growing volumes of data drive the development of AI, they simultaneously open up new avenues of exploitation for threat actors. For example, AI can be used to create and automate targeted ‘intelligent’ phishing campaigns. AI-supported cyberattacks, though, have the potential to go much further. As such, increasingly advanced AI is needed to combat the evolving cyber threat landscape.
Related: How Machine Learning is Changing Modern Security Intelligence
5. AI learning to perfectly emulate humans
Anyone who keeps an eye on the work that Google is doing will know about its 2019 search update, BERT: a natural language processing (NLP) framework designed to better understand context and intertextual reference, so that Google can correctly identify both the searcher's intent and the intent behind the content it indexes.
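To give a feel for what this kind of contextual language modelling looks like in practice, the short sketch below queries a pretrained BERT model through the open-source Hugging Face transformers library. It is an illustration of masked-word prediction, not a description of Google's internal search systems:

```python
# pip install transformers torch
from transformers import pipeline

# Load a pretrained BERT model for masked-word prediction.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT scores candidate words using the words on both sides of the mask,
# which is what lets it capture context rather than just matching keywords.
for sentence in [
    "The bank approved my [MASK] application.",
    "We had a picnic on the river [MASK].",
]:
    best = fill_mask(sentence)[0]
    print(f"{sentence} -> {best['token_str']} ({best['score']:.2f})")
```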
One of the key challenges facing AI right now is idiomatic or referential speech: language that carries more depth of meaning, for example, determining the significance of the concept of a mother, or understanding a phrase like “six feet under”. Our current research and development project at Signal is one example of the practical applications of overcoming this challenge. It involves using machine learning to enable our software to understand the intent behind text, even when it is ‘hidden’ behind challenging language like idioms, in order to more accurately identify threats.
As these natural language processes advance, so too will conversational AI bots, to the point where, because of the range and complexity of their answers, you would be forgiven for mistaking them for human.
The Future of AI and what that means for OSINT
Artificial Intelligence, machine learning, and automation have already revolutionized intelligence gathering. With OSINT tools like Signal, security teams and intelligence agents can effectively and efficiently monitor the open, deep, and dark web, setting up customized alerts based on searches that leverage boolean logic. Machine learning takes this intelligence to the next level: it allows vast amounts of data to be collected and aggregated, and the irrelevant hits to be essentially culled, supplying the security team with actionable, relevant intelligence.
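As a rough sketch of what a boolean alert might look like under the hood, the example below matches posts that contain a company term AND a threat term AND NOT a known false-positive phrase. The query structure, terms, and posts are invented for illustration and are not Signal's actual query syntax:

```python
# Hypothetical boolean alert: (company AND threat) AND NOT exclude.
COMPANY_TERMS = {"acme corp", "acmecorp"}
THREAT_TERMS = {"bomb", "attack", "leak"}
EXCLUDE_TERMS = {"bath bomb"}

def matches_alert(post: str) -> bool:
    """Evaluate the boolean query over lowercased post text."""
    text = post.lower()
    return (
        any(term in text for term in COMPANY_TERMS)
        and any(term in text for term in THREAT_TERMS)
        and not any(term in text for term in EXCLUDE_TERMS)
    )

posts = [
    "Planning an attack on AcmeCorp HQ tomorrow",
    "AcmeCorp bath bomb giveaway this weekend",
]
print([matches_alert(p) for p in posts])  # [True, False]
```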
Humans play an essential role in this new intelligence lifecycle: defining the search terms to match security strategies, analysing the data the system feeds back, reassessing the searches based on the new evidence, and implementing appropriate responses. This is a key role that will no doubt evolve as the technology becomes more accurate, reducing inefficiencies in the process.
How Machine Learning is Changing Modern Security Intelligence
Today, AI and machine learning enable both attackers and defenders to operate at new magnitudes of speed and scale. Security teams need to leverage the power of machine learning and automation if they want to stand a chance of mitigating threats.
A key challenge facing modern security teams is the explosion of new potential threats, both cyber and physical, and the speed with which new exploits are taken advantage of. Additionally, in our globalized world threats can emerge from innumerable sources and manifest as a diverse range of hazards.
Because of this, security teams need to efficiently utilize automation technology and machine learning to identify threats as or even before they emerge if they want to mitigate risks or prevent attacks.
Artificial Intelligence in the Cyber Security Arms Race
Today, AI and machine learning play active roles on both sides of the cybersecurity struggle, enabling both attackers and defenders to operate at new magnitudes of speed and scale.
When thinking about the role of machine learning in corporate security and determining the need for it, you first have to understand how it is already being used for adversarial applications. For example, machine learning algorithms are being used to run massive spear-phishing campaigns. Attackers harvest data through hacks and open-source intelligence (OSINT) and then deploy ‘intelligent’ social engineering strategies with a relatively high success rate. Often this can be largely automated, which ultimately allows previously unseen volumes of attacks to be launched with very little effort.
Another key example, and a strategy that has been growing in popularity as the technology evolves to become both more effective and harder to detect, is the deepfake attack, which uses AI to mimic a person’s voice and appearance in audio and video files. This is a relatively new branch of attack in the spread of disinformation and can be harnessed to devastating effect; for example, there are serious fears about the influence deepfakes could have on significant future political events such as the 2020 US Presidential Election.
These are just two of the more obvious strategies currently being implemented in a widespread fashion by threat actors. AI-supported cyberattacks, though, have the potential to go much further. IBM’s DeepLocker, for example, demonstrates an entirely new class of malware in which AI models are used to disguise a malicious payload inside a benign carrier application, to be launched only when specific criteria are met, such as facial recognition of its target.
Managing Data Volumes
One of the primary and most critical uses of AI for security professionals is managing data volumes. In fact, in Capgemini’s 2019 cybersecurity report, 61% of organizations acknowledged that they would not be able to identify critical threats without AI because of the sheer quantity of data that needs to be analyzed.
“Machine learning can be used as a ‘first pass’, to bring the probable relevant posts up to the top and push the irrelevant ones to the bottom. The relevant posts for any organization are typically less than 0.1% of the total mass of incoming messages, so efficient culling is necessary for the timely retrieval of the relevant ones.” - Thomas Bevan, Head Data Scientist at Signal.
Without the assistance of advanced automation software and AI, it becomes impossible to make timely decisions or to detect anomalous activity. As a result, organizations that don’t employ AI and automation for intelligence gathering often miss critical threats, or only discover them when it’s too late.
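As a minimal sketch of the ‘first pass’ idea described above, the example below trains a small text classifier (here using scikit-learn, with an invented training set) to score incoming posts by probable relevance, so that the bulk of the noise can be culled before human review:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = relevant to the security team, 0 = noise.
train_posts = [
    "threatening to hurt staff at the store tonight",
    "data breach dump for sale, credentials included",
    "great deals on shoes this weekend",
    "loved the new album, on repeat all day",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_posts, train_labels)

incoming = [
    "selling a fresh credential dump from the breach",
    "who else is excited for the weekend sales",
]
scores = model.predict_proba(incoming)[:, 1]  # probability of relevance

# First pass: push probable-relevant posts to the top, cull the rest.
ranked = sorted(zip(scores, incoming), reverse=True)
for score, post in ranked:
    print(f"{score:.2f}  {post}")
```

In production the same principle applies at a vastly larger scale, with far richer models and training data.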
Signal OSINT and Machine Learning
The Signal OSINT platform uses machine learning and automation techniques to improve data collection and aggregation. The platform allows you to create targeted searches using Boolean logic, but it is our machine learning capabilities that allow us to go beyond Boolean keyword searches.
“By recognising patterns in speech and relations between commonly used words, one can find examples of relevant posts even without keywords. While phrases like ‘I’m gonna kill the boss’ can be picked up by keywords easily, keyword searches alone struggle with more idiomatic speech like, ‘I’m gonna put the boss six feet under’, and will incorrectly flag posts like ‘Check out the new glory kill animation on the final boss’. Machine learning, given the right training data, can be taught to handle these sorts of examples,” says Thomas Bevan.
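The toy example below illustrates the gap Bevan describes: a naive keyword filter misses the idiom and false-flags the video game post, while a learned classifier can weigh the whole sentence. The zero-shot model and candidate labels are our own illustrative choices, not Signal's production approach:

```python
# pip install transformers torch
from transformers import pipeline

posts = [
    "I'm gonna kill the boss",
    "I'm gonna put the boss six feet under",
    "Check out the new glory kill animation on the final boss",
]

# Naive keyword filter: catches the literal phrase, misses the idiom,
# and false-flags the video game post.
keyword_hits = ["kill" in p.lower() for p in posts]
print("keyword filter:", keyword_hits)  # [True, False, True]

# Learned classifier: scores each post against candidate labels using a
# pretrained natural language inference model (zero-shot classification).
classifier = pipeline("zero-shot-classification")
for post in posts:
    result = classifier(post, candidate_labels=["violent threat", "video games", "harmless"])
    print(f"{result['labels'][0]:>15}  {post}")
```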
Signal continuously scans the surface, deep, and dark web, and has customizable SMS and email alert capability so that security teams can get real-time alerts from a wide array of data sources such as Reddit, 4chan, and 8kun. Additionally, Signal allows teams to monitor and gather data from dark web sources that they would otherwise be unable to access, whether for security reasons or because of captive portals.
Finally, the software allows users to analyze data across languages and translate posts for further human analysis. There are additional capabilities, such as our emotional analysis tool Spotlight, which can help indicate the threat level based on language indicators.
Complementing AI with Human Intelligence
In order to stay ahead of this rapidly evolving threat landscape, security professionals should use a layered approach: one that pairs the strategic advantage of machine learning, parsing through vast quantities of new data, with human intelligence to make up for the current limitations of AI technology.
Machines have been at the forefront of security for decades now. Their role, though, is evolving as they take on more of the heavy lifting, allowing analysts and security professionals to work through hyper-relevant data efficiently.