According to a PwC survey, 73% of U.S. companies have adopted AI in at least some areas of their business. That percentage is expected to grow as artificial intelligence transforms businesses across sectors.
MSPs are racing to adopt the technology, which promises to yield business value in the form of increased efficiency, reduced costs, improved customer experience, and data-driven decision making.
While AI technologies can unlock tremendous business value, they also pose risks in areas such as privacy, copyright infringement, misinformation, and cybersecurity vulnerabilities.
As the technology gains widespread adoption, policymakers concerned about how AI systems collect and use data are working to enact laws and regulations aimed at ensuring data privacy, security, and accountability. Many U.S. states have already enacted AI laws, and more are expected to do so this year.
Amid this regulatory complexity and uncertainty, MSPs must learn to harness AI safely and securely to comply with evolving state laws and avoid costly consequences such as fines and penalties, loss of customer trust, and reputational harm.
The Patchwork of State Laws
In the 2023 legislative session, at least 25 states introduced AI bills, and 18 states and Puerto Rico adopted resolutions or enacted legislation, according to the National Conference of State Legislatures.
Some legislation, like the California Privacy Rights Act (CPRA), impacts AI with limitations on data retention, data sharing, and use of sensitive personal information. Laws enacted in states such as Colorado, Connecticut, Virginia, and Utah include a provision giving consumers “the right to opt-out of profiling in furtherance of automated decisions.”
The patchwork of state and local laws is expected to grow in 2024. According to the LexisNexis State Net legislative tracking system, 89 bills referring to “artificial intelligence” were pre-filed or introduced in 20 states as of Jan. 11, adding to the more than 100 AI bills that carried over from last year. LexisNexis notes most of these new measures “seek to study, regulate, outlaw, or okay critical aspects of the technology’s use in society.”
Getting Compliance Ready
While many MSPs rush to deploy AI into their offerings and operations, some are concerned about the privacy and data security risks associated with the technology. A recent survey by Gartner found that generative AI adoption is the top-ranked issue for legal, compliance, and privacy leaders for the next two years.
That concern is well founded considering most companies aren’t regulating the use of this technology. According to a KPMG survey, only 6% of organizations reported having a dedicated team in place for evaluating risk and implementing risk mitigation strategies as part of their overall generative AI strategy. Another 25% of organizations said they are putting risk management strategies in place but that the work remains in progress.
Best Practices for Mitigating AI Risk
As AI is increasingly integrated across operations, MSPs will need to adopt best practices that can help keep them compliant with an evolving regulatory landscape.
First, organizations should remain up to date on evolving state AI laws and understand how to deploy AI in alignment with applicable existing and new regulatory frameworks.
MSPs should also develop comprehensive employee AI usage policies that address which tools are permissible in the organization and how employees are allowed to use them. Safe AI usage training should keep employees updated on AI cyber risks, educate them on safe and unsafe AI usage practices, and emphasize the importance of complying with AI regulations.
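To make such a policy more than a document, some MSPs may choose to encode it in internal tooling. The following is a minimal, hypothetical Python sketch of an AI tool allowlist check; the tool names, data classes, and policy structure are illustrative assumptions, not a standard or any specific product’s API.

```python
# Hypothetical sketch: enforcing an AI usage policy as a deny-by-default
# allowlist. Tool names and data classes are illustrative assumptions.

APPROVED_AI_TOOLS = {
    # approved tool -> data classes employees may submit to it
    "internal-chatbot": {"public", "internal"},
    "code-assistant": {"public"},
}

def is_usage_permitted(tool: str, data_class: str) -> bool:
    """Permit use only if the tool is approved AND the data class
    is allowed for that tool under the written usage policy."""
    allowed = APPROVED_AI_TOOLS.get(tool)
    return allowed is not None and data_class in allowed

# Unapproved tools and sensitive data classes are denied by default.
print(is_usage_permitted("internal-chatbot", "internal"))  # True
print(is_usage_permitted("public-llm", "customer_pii"))    # False
```

A deny-by-default structure like this mirrors the policy goal: any tool or data use not explicitly permitted is out of bounds.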
Carefully vetting external AI tools and programs to understand how data is collected, used, and stored is another critical best practice. Due diligence should determine whether the tool uses encryption, whether data is anonymized, and whether the tool complies with applicable state AI laws and other privacy regulations.
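One lightweight way to keep that due diligence consistent across vendors is a structured assessment record. Below is a hypothetical Python sketch capturing the checks named above (encryption, anonymization, regulatory review); the class and field names are assumptions for illustration only.

```python
# Hypothetical sketch: a structured record of AI vendor due diligence.
# Field names are illustrative assumptions, not an industry standard.

from dataclasses import dataclass, field

@dataclass
class AIVendorAssessment:
    vendor: str
    encrypts_data_in_transit: bool
    encrypts_data_at_rest: bool
    anonymizes_data: bool
    state_regs_reviewed: list[str] = field(default_factory=list)

    def open_issues(self) -> list[str]:
        """List failing or unanswered checks before approving the tool."""
        issues = []
        if not (self.encrypts_data_in_transit and self.encrypts_data_at_rest):
            issues.append("encryption gaps")
        if not self.anonymizes_data:
            issues.append("data not anonymized")
        if not self.state_regs_reviewed:
            issues.append("no state-regulation review on file")
        return issues

# Example: an assessment with an outstanding anonymization question.
assessment = AIVendorAssessment("ExampleAI", True, True, False, ["CPRA"])
print(assessment.open_issues())  # ['data not anonymized']
```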
Wrapping Up
State policymakers across the U.S. have enacted or are considering legislation that puts guardrails on the use of AI. AI adopters will need to navigate this increasing regulatory complexity by integrating responsible AI governance into existing privacy programs and compliance efforts. To achieve this, organizations should develop acceptable AI usage policies, provide regular employee training, vet external AI tools for security and compliance, and create a governance team to oversee responsible AI usage.
In this way, MSPs can unlock the business value of AI while mitigating data security and privacy risks and remaining compliant with evolving regulations.
Anurag Lal is president and CEO of NetSfere. He has more than 25 years of experience in technology, cybersecurity, ransomware, broadband, and mobile security services.