How resilient are your RPA bots really?
The recent CrowdStrike incident is a timely reminder that lightning does strike. In today’s highly digitalized and connected world, digital resiliency should be a key tenet of every organization’s business continuity plans. That starts with identifying and evaluating critical IT systems that are vulnerable to outages and disruptions and whose failure would have significant operational impact.
In recent years, many companies have embraced Robotic Process Automation (RPA) as a means to automate mundane, menial tasks, allowing their employees to focus on higher-value work. While RPA’s ability to improve productivity and reduce costs is beyond doubt, not enough attention is being paid to its inherent risks.
Indeed, RPA bots’ ability to perform thousands of read, write, and delete actions at high speed poses unique risks to enterprises’ systems and data. For one, this can make it difficult to identify logic and processing errors—and their associated consequences—before serious damage is done.
Back at the start of 2024, we made the following bold prediction:
The proliferation of citizen-developed bots is raising concerns around governance, compliance and security. Citizen developers, while well-intentioned, may inadvertently introduce logic flaws and security vulnerabilities that compromise entire systems. Meanwhile, the rise of shadow IT is reducing IT and security teams’ visibility into the threat vectors lurking within the organization. We expect the outbreak of a major RPA security breach to be the trigger for an industry rethink on how to discover, mitigate and monitor such threats.
Case in point: The General Services Administration’s (GSA) Office of Inspector General (OIG) recently released a report recommending that GSA strengthen the security of its RPA program. Its findings included:
GSA’s RPA program did not comply with its own IT security requirements to ensure that bots are operating securely and properly
GSA did not consistently update system security plans to address access by bots
GSA’s RPA program did not establish an access removal process for decommissioned bots, resulting in prolonged, unnecessary access that placed GSA systems and data at risk of exposure
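The last finding above—decommissioned bots retaining live access—is straightforward to detect programmatically. As a minimal sketch (the account names, fields, and inventory format below are hypothetical; in practice the data would come from your RPA platform’s API or an identity-management export), one could periodically flag bots whose decommission date has passed but whose credentials remain active:

```python
from datetime import date

# Hypothetical inventory of bot service accounts. In a real deployment this
# would be pulled from the RPA platform or an IAM export, not hard-coded.
BOT_ACCOUNTS = [
    {"name": "bot-invoice-ocr", "decommissioned_on": date(2024, 1, 15), "access_active": True},
    {"name": "bot-hr-onboarding", "decommissioned_on": None, "access_active": True},
    {"name": "bot-legacy-report", "decommissioned_on": date(2023, 6, 1), "access_active": False},
]

def stale_bot_access(accounts, today=None):
    """Return names of bots that were decommissioned but still hold active access."""
    today = today or date.today()
    return [
        a["name"]
        for a in accounts
        if a["decommissioned_on"] is not None
        and a["decommissioned_on"] <= today
        and a["access_active"]
    ]

# Example: only bot-invoice-ocr is both decommissioned and still active.
print(stale_bot_access(BOT_ACCOUNTS, today=date(2024, 7, 1)))
```

Running such a check on a schedule, and feeding the results into an access-revocation workflow, is one simple way to close the gap the OIG identified.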
We believe this is merely the tip of the iceberg, and more needs to be done to enhance the security posture of RPA bots.
If you would like to receive implementable recommendations on how to improve the security and resilience of your RPA program (including vendor-specific ones), get in touch with us now for a complimentary discussion.