Insider Threats Part III: Advanced Analytic Techniques and the Challenges of Implementation


Wednesday, October 18th, 2017 - 12:55
This blog concludes the series by discussing the advanced analytic techniques key to conducting a whole-person continuous evaluation of an employee, as well as the challenges of implementing this type of Insider Threat system.

The first two blogs in this Insider Threats series presented an overview of Insider Threat policies and key organizations and an assessment of current and recommended strategies for mitigating Insider Threats, respectively. 

Five Advanced Analytic Techniques to Mitigate Insider Threats

The continuous evaluation process for predicting Insider Threats suggested in Carnegie Mellon’s 2015 Software Engineering Institute report and in other sources—and required by the National Insider Threat Task Force, National Background Investigations Bureau, and Department of Defense Insider Threat Management and Analysis Center—demands the application of advanced techniques to achieve the desired whole-person risk rating.  Aside from the network analysis tools described previously, Insider Threat continuous evaluation programs will require some or all of the following techniques to effectively predict Insider Threats (as described by an April 2017 Intelligence and National Security Alliance report):

  • Linguistic Analytics: Uses the data a person generates through blogs, tweets, forum posts, and email to produce a score, relative to a sample population, along a spectrum of cognitive and social characteristics.  For Insider Threat analysis, this technique can be used to compare an employee’s current behaviors with a previous baseline. 
  • Natural Language Processing (NLP): A computational method that allows a computer to interpret human language as it is spoken or written.  In the context of Insider Threat analysis, NLP will be necessary to correctly infer what an employee means in a text or email. 
  • Data Mining: Tools and techniques for extracting information from the myriad social media sites (Twitter, Facebook, Instagram, Tumblr, etc.). 
  • Machine Learning: Techniques such as Naïve Bayes, Support Vector Machines, Principal Component Analysis, Neural Networks, Deep Learning, and many others that can be applied to mined data to identify events as they occur. 
  • Sentiment Analysis: An application of machine learning in which the words used in the months preceding or following a life event are captured and used to develop an individual profile for detecting future life events, and the corresponding stress, based upon the words used (even if the event itself is never mentioned).
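
To make the baseline-comparison idea behind linguistic and sentiment analytics concrete, the sketch below is a hypothetical illustration, not drawn from any cited program: it builds word-frequency profiles from an employee’s historical and recent writings and flags divergence using cosine similarity.  The function names and the 0.5 threshold are assumptions for illustration only.

```python
from collections import Counter
import math

def word_profile(texts):
    """Build a normalized word-frequency profile from a list of texts."""
    counts = Counter(w.lower() for t in texts for w in t.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p[w] * q.get(w, 0.0) for w in p)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    if norm_p == 0 or norm_q == 0:
        return 0.0
    return dot / (norm_p * norm_q)

def drift_flag(baseline_texts, recent_texts, threshold=0.5):
    """Flag when recent language diverges from the historical baseline.
    The 0.5 threshold is an illustrative assumption, not an operational value."""
    sim = cosine_similarity(word_profile(baseline_texts),
                            word_profile(recent_texts))
    return sim < threshold, sim
```

In a real program the profiles would use far richer features than raw word frequencies, but the core operation—scoring current behavior against a stored baseline—has this shape.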

In addition, recent advances in the broader artificial intelligence and cognitive computing fields will likely have impacts for Insider Threat analysis. 
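
The machine-learning bullet above names Naïve Bayes among the applicable techniques.  As a hedged sketch of how such a classifier could label text, the minimal multinomial Naïve Bayes below assigns the more likely of two labels to a short message; the class names and training phrases are illustrative assumptions, not real training data.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Minimal multinomial Naive Bayes for labeling short texts,
    e.g. 'concerning' vs. 'routine'. Illustrative only."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # per-label word counts
        self.label_counts = Counter()            # per-label document counts
        self.vocab = set()

    def train(self, samples):
        """samples: iterable of (text, label) pairs."""
        for text, label in samples:
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.label_counts[label] += 1
            self.vocab.update(words)

    def predict(self, text):
        """Return the label with the highest log-probability."""
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best, best_lp = None, float("-inf")
        for label in self.label_counts:
            lp = math.log(self.label_counts[label] / total)
            n = sum(self.word_counts[label].values())
            v = len(self.vocab)
            for w in words:
                # Laplace smoothing handles words unseen for this label
                lp += math.log((self.word_counts[label][w] + 1) / (n + v))
            if lp > best_lp:
                best, best_lp = label, lp
        return best
```

Operational systems would combine many such models over far larger feature sets, but the principle—learning word-label statistics from past data and scoring new text against them—is the same.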

Five Challenges to Automating Insider Threat Evaluations

Though big data analytics and the whole-person concept of continuous evaluation can enable organizations to identify potential Insider Threats more quickly and automatically, such evaluations come with several related challenges. 

  • Information Access: To conduct whole-person Insider Threat analysis, the automated system must have regular access to personal information.  For example, using the tools described above to analyze an employee’s Facebook postings implies unimpeded access to that employee’s profile.  Without the required access, the organization cannot perform this type of analysis. 
  • IT Overhead: A related challenge is the information technology overhead required to store the data and run these advanced analytic techniques.  Continually capturing, storing, and analyzing the trove of data collected for a single employee is no small task, let alone for tens or hundreds of thousands of employees. 
  • Baseline Analysis: The previous challenge relates directly to the third: the need for a historical baseline of an employee’s behaviors and actions against which to compare current indicators.  Without the ability to identify long-term behavioral outliers, an automated system cannot distinguish true risks from the noise generated by an employee’s everyday activities.  Training the system will require vast data stores and significant model tuning, tasks that demand substantial IT resources and human expertise. 
  • Acting on Data: The next challenge concerns the administration of the organization’s Insider Threat program: how does the organization react to potential threat information?  Without a comprehensive plan for acting upon identified risks, the organization will be ill-prepared to prevent a threat from endangering the organization or to develop mitigation strategies that help an at-risk employee get off a dangerous course.  Perhaps most importantly, organizations must acknowledge that an analytic indicator or increased risk rating is not, by itself, a definitive sign of a pending Insider Threat attack.  Because such automated systems can produce high false-positive rates, an indicator of anomalous behavior, or a collection of indicators, should be considered carefully and acted upon within the parameters of the organization’s Insider Threat program. 
  • Statutory Limitations: These challenges assume that any automated Insider Threat detection system adheres to the statutory requirements and limitations governing data collection and use, policy limitations that are still undergoing review and modification.  Because such data collection can be perceived as invasive, government agencies must balance their Insider Threat monitoring against the privacy rights of their employees. 
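
The “Acting on Data” point—that a single indicator should not, by itself, trigger escalation—can be sketched as a simple triage rule.  The indicator names, weights, and thresholds below are purely illustrative assumptions, not operational values from any program.

```python
# Hypothetical sketch: aggregate anomaly indicators into a whole-person
# risk score, escalating only when several indicators co-occur.

INDICATOR_WEIGHTS = {
    "after_hours_access": 0.2,
    "bulk_download": 0.35,
    "sentiment_shift": 0.25,
    "policy_violation": 0.2,
}

REVIEW_THRESHOLD = 0.4   # queue for analyst review
ACTION_THRESHOLD = 0.7   # escalate per program procedures

def risk_score(indicators):
    """Weighted sum of active indicators, each scored 0.0-1.0."""
    return sum(INDICATOR_WEIGHTS[name] * strength
               for name, strength in indicators.items())

def triage(indicators):
    """Map a score to a response tier; note that no single indicator
    at full strength reaches the escalation threshold on its own."""
    score = risk_score(indicators)
    if score >= ACTION_THRESHOLD:
        return "escalate"
    if score >= REVIEW_THRESHOLD:
        return "analyst_review"
    return "monitor"
```

The design choice worth noting is that the strongest single indicator (0.35) falls below even the review threshold, encoding the false-positive caution discussed above directly into the triage logic.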

Conclusion

Insider Threats pose a significant danger to government—and commercial—organizations.  The access and trust afforded to employees, while necessary for mission accomplishment, expose organizational vulnerabilities to malicious insiders.  Three government organizations—the National Insider Threat Task Force, the National Background Investigations Bureau, and the Department of Defense Insider Threat Management and Analysis Center—are key to achieving the goal of detecting and preventing Insider Threat attacks.  These organizations are guiding efforts to shift Insider Threat programs from current, IT-based efforts to automated, whole-person risk-rating systems.  The automated systems will enable organizational Insider Threat programs to quickly identify and react to the indicators of potential Insider Threats, thus mitigating their effects or preventing them altogether.  These automated systems will require the implementation of advanced analytic techniques to discern a potential Insider Threat from benign employee behavior.  The challenges of implementing such systems are many—not the least of which are the significant technological requirements and statutory limitations. 

Though this blog series focused on the automated detection of potential Insider Threats, a key component of any Insider Threat program is the employee.  A common theme amongst the research conducted for this three-part blog series has been the ever-increasing importance of engaging all employees in the organization’s mission to prevent Insider Threats from harming the organization.  Just as ongoing cyberattacks require all employees to be vigilant in their network activity, so too does the potential for Insider Threats demand that same vigilance.  As a recent IBM Center blog noted, cybersecurity must be “a positive part of the culture—an integral element of an organizational standard way of operating, not a separate silo.”  This premise is equally true regarding Insider Threats. 


Disclaimer:  The ideas and opinions presented in this paper are those of the author and do not represent an official statement by the U.S. Department of Defense, U.S. Army, or other government entity.