
AI and its impact on law firm cybersecurity

The rise of artificial intelligence has a number of implications within the legal community. Apart from its impact on operational tasks and its potential for increasing efficiency, AI will likely feature as an element of in-house cybersecurity policies and practices. 

Efforts to counteract cyber threats are becoming as sophisticated as the technologies we use on a daily basis. Law firms are especially at risk of cybercrime due to the sensitive information they create and store. Personally identifying information, data relating to litigation, and client communications are only some of the data types a typical law firm stores and accesses daily. This data makes law firms prime targets for a variety of threat actors, especially when paired with less-than-ideal security standards. In the face of ever-expanding threats, many organizations are turning to artificial intelligence to assist in their cybersecurity initiatives. At a time when budgeting for security is often not treated as a priority, it is significant that almost half of the enterprises surveyed in a recent study by the Capgemini Research Institute (Reinventing Cybersecurity with Artificial Intelligence) say their budgets for cybersecurity AI will increase by an average of 29 percent in Fiscal Year 2020.1

As organizations grow and embrace new technologies, their risk of data breaches and cyber events increases. More employees, more devices, and trends toward BYOD (bring your own device) policies, cloud infrastructures, and remote work all make for potential sources of vulnerability. The Internet of Things also creates a much wider zone in which cybercriminals can act. With this pattern in mind, organizations have to consider what the best course of action will be when, not if, they are attacked. Law firms use these technologies too, often managing them with convenience and ease of use as the top priorities.

Focusing on detection

As set forth in the Capgemini study, at this point enterprises are largely turning to AI solutions for the purposes of detection. As cyber events come to seem increasingly inevitable, organizations are facing the fact that early detection may be the best course of action. The sooner a cyber event is detected, the sooner it can be mitigated. Speedy mitigation helps organizations keep the costs associated with breaches as low as possible by ensuring that threat actors have less time to exploit vulnerabilities and exfiltrate data.

But as AI is implemented over time, it will also be beneficial in creating proactive solutions, both predictive and responsive.2 These methods will undoubtedly spur new policies as organizations learn to use AI to its fullest potential. 

It remains to be seen how AI will be incorporated into each facet of a cybersecurity policy, but given its reactive and proactive potential, it will most likely remain an instrumental component of a strong security program. Reduced attack times mean a reduction in the financial, operational, and reputational risks that organizations face from cyber events. AI's adoption may also evolve into a requirement of cyber insurance policies, along with regularly scheduled risk assessments. 

The human element

With IT professionals increasingly overburdened, the use of AI to bolster security efforts helps to minimize human error. But the human component can never be completely removed. False positives and issues brought about by insufficient data will still need to be monitored and assessed by security professionals. In spite of its myriad benefits, especially within settings where confidential data is at stake, AI will never be a foolproof safety net. The complexities of developing security cultures, creating proactive strategies, and navigating the intricacies of public response and mitigation strategies are still issues that will require human attention. 

Early detection, network intrusion scanning, email attack surveillance, and user behavior analysis are just some of the ways that AI is being used to strengthen security.3 Given this multitude of functions, many experts believe that AI will also be put to use by cybercriminals, with large-scale cyberattack campaigns a primary concern. As these issues materialize, they will require the expertise of security professionals to create sustainable solutions. The defensive capabilities of AI will be needed to counteract the ways in which it can be wielded aggressively by bad actors. This technology is yet another instance in which organizations and security professionals alike must balance security with convenience and ease of use, while acknowledging that no security measure will ever be a “cure-all.” 
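At their core, techniques like user behavior analysis often come down to statistical anomaly detection: establishing a baseline of normal activity and flagging sharp deviations for a human analyst to review. The sketch below is purely illustrative and is not drawn from any particular security product; the login counts, the z-score threshold, and the function name are all invented for the example.

```python
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's activity if it deviates more than z_threshold
    standard deviations from the user's historical baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return today != mu  # any change from a constant baseline is notable
    return abs(today - mu) / sigma > z_threshold

# Hypothetical daily login counts for one user over two weeks
baseline = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 5, 3, 4, 5]
print(is_anomalous(baseline, 5))   # a typical day
print(is_anomalous(baseline, 40))  # a sudden spike worth a closer look
```

Even a toy example like this shows why the human element remains essential: the flagged spike might be a compromised account, or simply an attorney preparing for trial. A model can surface the anomaly, but only a person can judge its meaning.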

The legal community is undoubtedly tasked with maintaining the highest of standards in regard to protecting client data. As AI continues to shape the ways in which law firms conduct business, it is critical to stay apprised of its equally important role in reinforcing security postures.


MARK LANTERMAN is CTO of Computer Forensic Services. A former member of the U.S. Secret Service Electronic Crimes Taskforce, Mark has 28 years of security/forensic experience and has testified in over 2,000 matters. He is a member of the MN Lawyers Professional Responsibility Board.  



Notes

1 https://www.forbes.com/sites/louiscolumbus/2019/07/14/why-ai-is-the-future-of-cybersecurity/#1322c0a4117e 

2 Id.

3 https://resources.infosecinstitute.com/ai-in-cybersecurity/#gref