AI and Cybersecurity: Comprehensive Insights from the April Workshop


A gray-themed background with rounded rectangle shapes. The text reads

On April 2, the Information Services Support Group (ISSG) hosted a highly informative workshop on AI and Cybersecurity in collaboration with the Minnesota Bureau of Criminal Apprehension (BCA) and SHI. The event brought together experts to discuss the latest trends, risks, and best practices in the field of AI and cybersecurity.

In the spirit of the presentations shared on April 2, this article was created with Microsoft Copilot, using notes from MnCCC staff and the presentations from the BCA and SHI. The content was reviewed by MnCCC staff.

Click here to view the video from the workshop.

Presentation Highlights

AI Risks and CJIS Data

The workshop kicked off with a presentation by Jodie Monette, the CJIS Systems Agency Information Security Officer for the Minnesota BCA. With over 30 years of experience in IT and a 20-year focus on cybersecurity, Jodie provided valuable insights into the risks associated with AI, particularly in the context of FBI CJIS data. She highlighted several key points:

  • Conventional Cybersecurity Risks: These include insecure code, plug-ins, and vulnerabilities in libraries.
  • Bias Risks: Issues with training data, data selection, and system prompts can lead to harmful biases.
  • Regulatory Risks: Exposure of sensitive data, such as HIPAA and IRS 1075, poses significant regulatory challenges.
  • Privacy and Intellectual Property Risks: AI can expose sensitive information and intellectual property, leading to reputational damage.
  • Risk Assessments: Jodie emphasized the importance of conducting thorough risk assessments, considering threats, vulnerabilities, and the likelihood of adverse events.

Jodie also discussed the FBI CJIS security policy, focusing on sections relevant to risk assessments, vulnerability monitoring, and scanning. She provided an overview of the OWASP Top 10 risks for large learning models (LLMs) and shared resources like the MITRE ATLAS for understanding AI adversarial threats.

M365 Copilot

The second presentation was delivered by the SHI Public Sector team, including Jim Daniels, Carrie Randolph, Sunni Groom, and Greg Rohleder. They discussed the various versions of Copilot and its applications in day-to-day operations. Key takeaways from their presentation included:

  • AI Adoption Curve: The team outlined the stages of AI adoption, from providing broad access to redesigning business processes and ensuring secure and compliant AI governance.
  • Use Cases for Copilot: Examples included improving procurement cycle times, summarizing meetings, creating PowerPoint presentations, and managing budgets in Excel.
  • Implementation and Data Flow: The team explained how Copilot integrates with M365 apps, accessing the graph and semantic index for pre- and post-processing of user prompts.

The SHI team also emphasized the importance of responsible AI principles, including accountability, transparency, fairness, reliability, safety, privacy, and inclusiveness. They highlighted the need for robust user enablement programs, technical readiness, and clear AI value drivers to ensure successful AI implementation.

Detailed Insights

AI Risks and CJIS Data

Jodie Monette's presentation delved into the multifaceted risks associated with AI, particularly when handling FBI CJIS data. She underscored the importance of understanding and mitigating conventional cybersecurity risks such as insecure code, plug-ins, and vulnerabilities in libraries. Additionally, she highlighted the critical issue of bias in AI systems, which can arise from training data, data selection, and system prompts. These biases can lead to harmful outcomes and must be rigorously tested and addressed.

Regulatory risks were another focal point, with Jodie discussing the exposure of sensitive data, including HIPAA and IRS 1075 information. She stressed the need for comprehensive risk assessments that consider threats, vulnerabilities, and the likelihood of adverse events. Jodie also provided an in-depth look at the FBI CJIS security policy, particularly sections related to risk assessments, vulnerability monitoring, and scanning.

Jodie introduced the OWASP Top 10 risks for large learning models (LLMs), which include issues such as prompt injection, sensitive information disclosure, and data model poisoning. She also shared resources like the MITRE ATLAS, a knowledge base of adversary tactics and techniques for AI systems. These resources are invaluable for understanding and mitigating the evolving threats to AI-enabled systems.

M365 Copilot

The SHI Public Sector team's presentation on M365 Copilot provided a comprehensive overview of its various versions and applications. They discussed the AI adoption curve, which involves providing broad access to AI, redesigning business processes to realize AI's value, and ensuring secure and compliant AI governance. The team shared practical use cases for Copilot, such as improving procurement cycle times, summarizing meetings, creating PowerPoint presentations, and managing budgets in Excel.

The implementation and data flow of Copilot were explained in detail, highlighting how it integrates with M365 apps and accesses the graph and semantic index for pre- and post-processing of user prompts. The SHI team emphasized the importance of responsible AI principles, including accountability, transparency, fairness, reliability, safety, privacy, and inclusiveness. They also stressed the need for robust user enablement programs, technical readiness, and clear AI value drivers to ensure successful AI implementation.

SHI also discussed security measures Copilot has in place, which organizations can use to understand the data and the sensitivity labels it should have. Sensitivity labels can encrypt data (group-based, time-based, user settings); watermarks, headers, and footage can help label data; protect data with containers, sites, and groups; apply labels automatically or recommend labels; and adjust sharing settings, like adjusting the default link type. One example is Restricted SharePoint Search, which is intended as a temporary solution to give you time to review and audit site permissions, while implementing robust data security solutions from Microsoft Purview and content management with SharePoint Advanced Management. Identify where things are overshared, turn on proactive audit and protection, discover oversharing risks, define group access within label, restrict Copilot access.

Key Takeaways

The workshop provided attendees with a comprehensive understanding of the risks and opportunities associated with AI and cybersecurity. Some of the key takeaways included:

  • Inventory AI: Identify where AI is being used in your organization and understand its data flow.
  • Security Controls: Implement security controls around AI, similar to those used for production code. Sensitivity labels are crucial for organizations to maintain control over data and data sharing.
  • Responsible AI: Follow responsible AI principles to ensure ethical and secure AI usage.
  • Human Change Management: Manage the human transformation with robust user enablement programs and clear communication.

Overall, the workshop was a valuable opportunity for professionals to learn from experts and gain insights into the evolving landscape of AI and cybersecurity. By staying informed and proactive, organizations can better navigate the challenges and opportunities presented by AI technologies.

Sources:

blogit.back_to_overview