Should you be worried about false negative insider threats?
By Veriato - March 26, 2020
The consistent rise in Insider Threat-related incidents has led to a growing focus and investment in proactively detecting these threats. According to reports, 60% of organizations discovered one or more insider attacks last year, and 90% admitted that they felt vulnerable to insider attacks. Reports also show that it takes an average of over two months to contain an insider attack. Furthermore, incidents that take more than 90 days to contain cost about $14 million compared to $7 million when detected within 30 days. Nonetheless, one of the biggest concerns is still that insider threats often go undetected. Trusted employees have better chances of circumventing security controls, covering their tracks, and getting away with the attack.
In response, organizations incorporate technology that can help detect malicious activities. However, a challenge for the industry has been that the majority of alarms triggered are deemed false positives. A false positive is when an alarm has been triggered, but there is no threat. This results in a significant amount of time spent by investigators to determine if the risk is a valid concern or not. While this is a challenge many companies should focus on fixing, there is an even greater concern that's often neglected: the dangers of false negatives.
Incidents that take more than 90 days to contain cost almost $14 million, compared to $7 million when detected within 30 days. – Ponemon Institute
What is a false negative?
As the name implies, a false negative is the opposite of a false positive. Where a false positive is like the fire alarm ringing when there is no actual fire, a false negative is like the alarm remaining silent while a building goes up in flames. In other words, false negatives occur when detective tools deployed by an organization fail to alert on real threats.
Often, these false negatives are the work of ill-intentioned employees deliberately evading detection. However, the belief that insider threats are carried out only by malicious employees has been countered by recent cases in which both careless and non-malicious employees caused a large percentage of insider attacks. Sometimes employees do not even know that they are being exploited or leveraged to carry out an attack on their organization. Because they are authorized users, in many cases they go undetected as well.
Why are false negatives a problem for businesses?
False negatives are a problem for businesses for several reasons.
Increasing attack sophistication: It's concerning when you find out that an outsider has gained access to a trusted user's credentials or account and is engaging in activity that generally appears normal. How do you know it's an imposter? This is one of the largest gray areas, and one of the greatest risks, when considering false negatives and insider threats.
Ineffective use of an already limited security budget: Although there have been improvements in how organizations detect insider threats while reducing false positives, rule-based solutions like traditional Data Loss Prevention don't fare well against false negatives. As a result, vulnerabilities are exploited and left undetected until the breach occurs and is discovered later on. Despite spending millions of dollars to deploy these tools, organizations still have to deal with massive breaches that originate from their employees and trusted third parties.
The insider advantage: As previously mentioned, insiders often have legitimate access to systems, insider knowledge, and more. They can be difficult to predict and detect without the right technology and processes.
Balancing security with usability: Security is not meant to be a hindrance to operations. Another challenge with trying to manage false negatives is that when organizations become too rigid and try to lock everything down based on limited rules, they can disrupt legitimate actions. Consider the growing adoption of remote workforces. Imagine shutting down an employee's account access because they logged in from a strange location, while they are in the middle of closing a critical sale for your company in a new territory. This could easily be a false positive, where the employee is legitimately working in a new region, and their work has now been disrupted.
Conversely, imagine that "strange location" being an intentional decoy that a malicious employee is using to throw the security team off the scent of the real crime they are conducting elsewhere. This might end up being a false negative, making context critical in either case. The best programs can catch the right incidents, from the right people, at the right time, without disrupting business as usual.
Examples of false negative insider threats within organizations
It is essential to understand the different types of insider threats in order to better prevent these attacks. According to an Interset report, 36% of insider attacks came from malicious employees, 34% from external attackers, and 30% were accidental and unintentional mishaps. Here are examples of the missed threats in action:
- Email Oversight: Consider a malicious phishing email or file that made it to a user's inbox. It might have gone through many firewalls, intrusion prevention systems (IPS) or intrusion detection systems (IDS), anti-virus applications, and more that each failed to move the risky files to quarantine. At each tollgate, the system failed to detect a threat and let the message through. Eventually, the user clicks on the link or file and introduces a threat to your organization, like ransomware. This was a false negative.
- Internal Credential Abuse: Malicious insiders often bypass detection systems by exploiting or impersonating a privileged employee account or elevating their own to carry out an attack. Traditional systems may fail to flag this as an anomaly if, according to the rules configured, the user should have the required privilege or access. This, again, could result in a false negative insider threat detection.
- External Credential Theft: A malicious hacker can leverage compromised employee or third-party vendor credentials to access your system. If you are checking standard attributes like location and time of login, but they are sitting in the parking lot at your headquarters during business hours, it may seem like regular activity and go undetected.
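The external credential theft example above hinges on checking standard attributes like location and time of login. As a simple illustration (not any product's implementation), a naive "impossible travel" check can flag a login pair whose implied travel speed is implausible; the 900 km/h threshold and the coordinates below are assumptions chosen for the sketch:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(prev_login, new_login, max_speed_kmh=900):
    """Flag a login pair whose implied travel speed exceeds a plausible limit.

    Each login is (timestamp_in_hours, lat, lon). 900 km/h roughly matches
    commercial air travel; the threshold is an assumption to tune.
    """
    dt = new_login[0] - prev_login[0]
    dist = haversine_km(prev_login[1], prev_login[2], new_login[1], new_login[2])
    if dt <= 0:
        return dist > 0  # simultaneous logins from two different places
    return dist / dt > max_speed_kmh

# A login from New York followed one hour later by one from London is flagged;
# the same pair ten hours apart is plausible air travel and is not.
ny = (0.0, 40.71, -74.01)
london_1h = (1.0, 51.51, -0.13)
london_10h = (10.0, 51.51, -0.13)
```

Of course, as the example notes, an attacker sitting in the parking lot defeats this check entirely, which is exactly why rule-based attributes alone produce false negatives.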
36% of insider attacks were from malicious employees, 34% were external attackers, and 30% were accidental and unintentional mishaps.
How to reduce false negatives in your company with machine learning and AI-based solutions
Machine learning and artificial intelligence (AI) make it possible to encode human expertise into processes carried out by machines. With the rise in cyber-attacks, it's nearly impossible for analysts and human operators to stay on top of the volume of incidents and threats. It would take the best security analysts considerable time, years in some cases, to scan through terabytes of user activity data collected from various tools. Machine learning and AI-based technology have proven able to sift through and analyze the same information in record time.
Machine learning (ML) is a rapidly growing field, often summarized as the capacity for technology to adapt to patterns or trends and predict future events or activities. A machine's ability to repeat these steps with little or no supervision comes from interconnected layers of algorithms known as "neural networks," which continually adapt to a pattern. Depending on the complexity of a task, several algorithms can be combined, and the process repeated until the machine accomplishes the desired results.
For false negatives in insider threat detection, deep learning (the expansion of neural networks through big data and larger networks) plays a major role in the grouping, timely analysis, and detection of these threats. A simple example is using psycholinguistics to detect patterns and traits in employee behavior. Actively monitoring and collecting information about an employee enables timely analysis of the user's behavior and how it deviates from the baseline. Unsupervised learning has also proven to be an excellent model for curbing false negatives, as it helps discover random and unknown patterns in large sets of data or files. This reduces false positives and enhances the ability of technology to highlight anomalies more accurately, focusing detection on real threats.
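A toy sketch of the unsupervised idea, assuming nothing about any specific product: score each observation by its distance to its nearest neighbors, with no labels required. Points far from every cluster of "normal" activity score highest. The upload figures below are invented for the example:

```python
def knn_anomaly_scores(points, k=3):
    """Score each point by its mean distance to its k nearest neighbors.
    No labels are needed, which is what makes the approach unsupervised."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(abs(p - q) for j, q in enumerate(points) if i != j)
        scores.append(sum(dists[:k]) / k)
    return scores

# Daily megabytes uploaded by one user; one day stands far outside the rest.
uploads = [10, 12, 11, 9, 13, 10, 250]
scores = knn_anomaly_scores(uploads)
most_anomalous = scores.index(max(scores))  # the 250 MB day
```

Production systems work over many features at once (files touched, logins, destinations) rather than a single number, but the principle of surfacing the outlier without a predefined rule is the same.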
Fine-tuning is another advantage of incorporating AI to catch false negatives. Over time, the model can transfer knowledge and incorporate large sets of data from previous modeling, allowing it to build on earlier analysis and perform similar but continually improving detection. These advancements let analysts focus on the most essential threats and respond in a timely fashion. They also free up time for enhancing and testing algorithms to improve their predictive power.
One machine-learning backed method making a difference - User Behavior Analytics
Before recent advancements, insider threat detection relied on various traditional rule-based applications (e.g., monitoring an employee's chat or social media accounts, tracking their geolocation, collecting logs of user activity on their workstations). However, these were analyzed individually, endpoint by endpoint, with no integration or central coordination. Next-generation solutions that incorporate AI and machine learning have not only streamlined this process but also made it more resilient and secure. User Behavior Analytics (UBA) merges all of those capabilities together to apply contextual analysis to user activity, making a world of difference for security teams.
How user behavior analytics helps:
With UBA, you can identify and create a baseline of activities that are considered normal and are part of a user's tasks and permissions. For example: what kinds of files is an employee allowed to access? What locations are considered "usual" for an employee to physically or remotely connect to the network from? Is an employee sending an unusual number of emails to external contacts?
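A minimal sketch of such a baseline, assuming a single per-user metric (external emails sent per day) and a simple three-sigma rule; the history numbers and threshold are assumptions for illustration, not a vendor's model:

```python
from statistics import mean, stdev

def build_baseline(history):
    """Baseline of one per-user metric, e.g., external emails sent per day."""
    return {"mean": mean(history), "stdev": stdev(history)}

def is_unusual(baseline, observed, n_sigma=3):
    """Flag an observation more than n_sigma deviations above the baseline."""
    return observed > baseline["mean"] + n_sigma * baseline["stdev"]

# Thirty days of history: roughly 4-6 external emails a day.
history = [4, 5, 6, 5, 4, 5, 6, 4, 5, 5] * 3
baseline = build_baseline(history)
```

A day with 40 external emails trips the check; a day with 6 does not. Real UBA baselines cover many such metrics per user and per peer group, and weigh them together rather than alerting on any one in isolation.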
From there, you can continuously analyze employees' activities and actions. In doing this, it is crucial to create an insider threat detection scheme, such as a digital fingerprint, that will map out deviations from the baseline and predict future threats. Look out for indicators of compromise (IOCs) such as file modification, massive port scanning, or elevated privileges on regular users' accounts, so that you can apply deeper context through machine learning algorithms. It is important to note that not all insider threats originate from the employee. Some come from external attackers who leverage an employee's privileges and access. AI can help provide early detection and prevent such an attack from being skipped as a false negative.
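One simple way to picture combining IOCs with context is a weighted risk score: a lone low-weight indicator is logged, while several together cross an alert threshold. The indicator names, weights, and threshold below are hypothetical, chosen only to illustrate the idea:

```python
# Assumed indicator weights -- illustrative only, not a product's scoring model.
IOC_WEIGHTS = {
    "off_baseline_file_access": 3,
    "mass_port_scan": 5,
    "privilege_elevation": 4,
    "unusual_login_location": 2,
}

def risk_score(observed_indicators):
    """Sum the weights of the indicators seen for one user session."""
    return sum(IOC_WEIGHTS.get(i, 0) for i in observed_indicators)

def triage(observed_indicators, alert_threshold=6):
    """A lone low-weight indicator stays below the threshold; several
    together cross it, so combinations of context drive the alert."""
    return "alert" if risk_score(observed_indicators) >= alert_threshold else "log"
```

So `triage(["unusual_login_location"])` merely logs, while port scanning combined with privilege elevation alerts. ML-based systems learn these weights and interactions from data instead of hand-tuning them, which is what lets them catch combinations a static rule set would miss.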
Next, intelligently analyze whether an activity is both an anomaly and malicious. Not all anomalous behaviors are real threats, making this a critical step in reducing both false positives and false negatives. Many tools have failed to effectively differentiate the two. The rate of inaccurate evaluation decreases as AI helps to detect sophisticated threats, prioritize them, and predict future threats before they result in a breach.
Lastly, alert on and discuss mitigation of any threat identified throughout the ongoing process. Be sure to provide detailed information on the measures taken to arrive at a threat finding. This can be incorporated into future insider threat detection.
Other ways to address false negatives
In addition to using AI for early threat detection, organizations can further reduce false negatives by focusing on visibility, access, and employee reporting programs.
- Network visibility: Know what is on your network, from devices to users.
- Identity and Access Management (IAM): Incorporating an IAM framework to control user privileges and access can help reduce the threat posed by a rogue employee. Also consider deactivating dormant accounts, enforcing password rotation, requiring multi-factor authentication, enabling Single Sign-On (SSO), segregating employees' duties, and alerting the security team to non-compliance. Deploy Privileged Access Management (PAM) solutions for employees with administrative privileges, isolate privileged accounts and entitlements from system accounts, and enforce temporary access management (TAM) so that access is granted only for the time required and revoked afterward.
- Employee reporting program: You'll never be able to catch every single false negative, but what technology misses, people might catch. Your employees can quickly notice suspicious activities from their co-workers. Thus, working with your employees to identify and combat these threats can yield positive outcomes. Some companies incentivize engagement by offering perks and bonuses for employees to report suspicious activities.
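The temporary access management idea in the IAM bullet above can be sketched as a time-boxed grant that is valid only inside its window and effectively self-revoking afterward. This is a hypothetical illustration, not a specific PAM product's API; the user, resource, and times are invented:

```python
from datetime import datetime, timedelta

class TemporaryGrant:
    """Time-boxed access grant: valid only inside its window, then
    self-revoking. A sketch of the TAM idea, not a real PAM API."""

    def __init__(self, user, resource, duration_hours, now=None):
        self.user = user
        self.resource = resource
        self.start = now or datetime.utcnow()
        self.end = self.start + timedelta(hours=duration_hours)

    def is_active(self, at=None):
        """Access checks consult the window instead of a standing privilege."""
        at = at or datetime.utcnow()
        return self.start <= at < self.end

# A four-hour grant issued at 09:00: valid at 10:00, expired by 14:00.
t0 = datetime(2020, 3, 26, 9, 0)
grant = TemporaryGrant("jdoe", "prod-db", duration_hours=4, now=t0)
```

Because the privilege expires on its own, a stolen or abused account loses its elevated access even if revocation is never explicitly triggered, shrinking the window in which a false negative can do damage.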
No organization is immune to insider threat attacks and the challenge of overcoming false negatives. However, by introducing user behavior analytics and machine learning-based insider threat detection technology, the risk can be considerably reduced. These next-generation concepts enable early threat detection, fine-tuning of models to learn and predict future threats, and better accuracy over traditional tools and processes. Ultimately, electing to skip out on user behavior analytics capabilities increases the risk of threats remaining undetected. These threats, when exploited, can have significant damage and financial impact on the organization. To learn more about how you get ahead of dangerous false-negative threats, check out Veriato Cerebral.