Six Steps for Managing the Business Risks of AI

By: Jack Nichelson


As artificial intelligence continues to move from experimental to operational, its impact on the cybersecurity world is undeniable. 

Adoption is surging across business units; today, HR, marketing, finance and development teams are all embracing new tools at a pace that far outstrips traditional IT oversight. For security leaders, this raises urgent questions: Are we in control? Do we even understand the tools being introduced? And if we’re not at the table, how can we hope to shape policy or mitigate risk? 

At Inversion6, we’ve observed that many organizations are already behind the curve in understanding and governing AI usage within their walls. Even within cybersecurity and IT leadership, there’s a tendency to overestimate familiarity with AI systems. And when the people responsible for managing risk aren’t actively engaged in using the tools, they have limited influence on how AI is implemented. 

Here are a few practical ideas for building protections around all of this unbridled innovation without strangling its benefits. 

 

Awareness with a dose of humility 

One of the most important steps in managing AI risk is acknowledging how little many organizations truly know about the tools themselves.  

For example, many users don’t realize that a number of these tools are configured by default to retain data inputs for model training. Unless you take active steps to opt out, everything you give them goes right into the hive mind.  

I confess, I didn’t even know this myself when I first began testing these tools (that’s the humility part). That’s why I recommend all cybersecurity leaders begin by engaging with AI tools personally, not just to understand how they work, but to build credibility for the inevitable strategic discussions coming down the road. 

Let regulations set the tone for your risk framework 

The European Union’s Artificial Intelligence Act, adopted in 2024, is currently leading the charge toward comprehensive AI governance. Just as the GDPR is shaping the global future of data privacy, this legislation is poised to influence how organizations worldwide approach AI compliance and risk management. 

While these regulations can be complex, they also provide much-needed structure. At Inversion6, we encourage security leaders to use these frameworks as a helpful lens for understanding risk. 

Concepts like explainability, transparency and secure-by-design architecture are more than regulatory checkboxes: they can help set the tone and offer real insight into the potential vulnerabilities AI systems can introduce. 

Build better (not broader) policies 

Generic AI acceptable-use policies are not going to cut it for much longer. The future demands specificity, and organizations will soon need tailored guidelines on a variety of issues, including: 

  • Approved tools and use cases 

  • Data protection protocols (e.g., restrictions on entering IP or financial data into AI systems; see the sketch after this list) 

  • Human oversight requirements for AI-generated outputs 

  • Vendor vetting and subscription controls 

  • Expectations for transparency and ethical usage 
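To make the data-protection bullet concrete, here is a minimal sketch of a pre-submission filter that screens prompts for obviously sensitive content before anything reaches an external AI tool. The patterns, the `submit_to_ai_tool` wrapper and the commented-out send step are all illustrative assumptions, not a complete data-loss-prevention solution:

```python
import re

# Illustrative patterns only -- a real policy would be tuned to the
# organization's own data classification scheme and DLP tooling.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal only|trade secret)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

def submit_to_ai_tool(prompt: str) -> None:
    """Gate outbound prompts on the policy check before any API call."""
    hits = screen_prompt(prompt)
    if hits:
        # Block and record the violation; this also feeds the audit
        # trail discussed under oversight below.
        raise PermissionError(f"Prompt blocked by AI use policy: {', '.join(hits)}")
    print("Prompt passed policy screen; sending to approved AI tool...")
    # send_to_approved_tool(prompt)  # placeholder, not a real API

if __name__ == "__main__":
    try:
        submit_to_ai_tool("Summarize this CONFIDENTIAL merger memo for the board.")
    except PermissionError as err:
        print(err)
```

The value here is less the specific regexes than the chokepoint: routing AI traffic through a single governed path is what makes a policy like this enforceable and auditable.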

These guidelines should not be created in a vacuum. Cross-functional AI steering committees are important, and HR, marketing, development and finance leaders (often the earliest adopters of AI) need to be at the table from the start if you hope to achieve any sort of meaningful compliance. 

Make training and awareness foundational 

Training is not an optional add-on to AI governance; it’s a core pillar. At every level of the organization, users need to understand which tools are approved, how to validate outputs and which risks to avoid.  

Key training points include understanding hallucinations and bias in AI outputs, and spotting malicious AI activity such as deepfakes or AI-generated phishing. Bottom line: training improves literacy, and improved literacy is the best way to empower your employees to use these tools both effectively and responsibly. 

Rein in the sprawl through better oversight 

From Microsoft Copilot to customer service chatbots, AI is becoming a default feature in all sorts of tools. The result is rapid AI sprawl: fragmented, inconsistent deployments across multiple departments and platforms, with little to no central oversight.  

Not only does this undermine data security and governance, it also prevents the organization from learning collectively and improving AI performance across the enterprise.  

A good first step toward regaining control and centralized governance is a thorough, recurring audit of AI tools and usage across your organization; oversight starts with visibility. 
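As a hedged sketch of what such an audit might look like in practice, the snippet below tallies requests to known AI service domains from a web-proxy log export. The log format, column names and domain list are assumptions to adapt to your own environment:

```python
import csv
from collections import Counter

# Assumptions: a CSV export of web-proxy logs with "user" and "domain"
# columns, and an illustrative (and deliberately incomplete) domain list.
AI_DOMAINS = {
    "chat.openai.com",
    "copilot.microsoft.com",
    "gemini.google.com",
    "claude.ai",
}

def audit_ai_usage(log_path: str) -> Counter:
    """Tally which users are reaching which AI services."""
    usage = Counter()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            domain = row["domain"].strip().lower()
            # Match the domain itself or any subdomain of it.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                usage[(row["user"], domain)] += 1
    return usage

if __name__ == "__main__":
    for (user, domain), hits in audit_ai_usage("proxy_log.csv").most_common(10):
        print(f"{user:<20} {domain:<30} {hits:>6} requests")
```

Even a crude tally like this surfaces the shadow-AI footprint quickly, and rerunning it on a schedule turns a one-off discovery exercise into ongoing oversight.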

Step up your vendor management tools 

Traditional third-party risk questionnaires are simply not equipped to evaluate the unique challenges of AI vendors, many of whom are startups with limited compliance experience. 

Security leaders need to seek clear answers to some critical questions, including: 

  • What data types will this tool access, and how is it protected? 

  • What are the protocols for model training, updates and change management? 

  • How is output accuracy verified over time? 

This sort of rigorous vendor scrutiny can help you avoid poorly governed, high-risk AI tools, which can create messes that are difficult to clean up.  
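One way to make that scrutiny repeatable is to encode the questions above into a lightweight, reviewable checklist that every AI vendor must clear before procurement. The sketch below is a minimal illustration; the fields and flag logic are assumptions to tailor to your own risk appetite:

```python
from dataclasses import dataclass

@dataclass
class AIVendorAssessment:
    """Minimal AI-vendor due-diligence record (illustrative fields only)."""
    vendor: str
    data_types_accessed: list[str]
    data_encrypted_at_rest: bool
    trains_on_customer_data: bool
    change_management_documented: bool
    output_accuracy_monitored: bool

    def risk_flags(self) -> list[str]:
        """Map the core vendor questions to unresolved concerns."""
        flags = []
        if self.trains_on_customer_data:
            flags.append("vendor trains models on customer data")
        if not self.data_encrypted_at_rest:
            flags.append("no encryption of accessed data at rest")
        if not self.change_management_documented:
            flags.append("no documented model update/change process")
        if not self.output_accuracy_monitored:
            flags.append("no ongoing output accuracy verification")
        return flags

if __name__ == "__main__":
    candidate = AIVendorAssessment(
        vendor="ExampleAI Inc.",  # hypothetical startup vendor
        data_types_accessed=["support tickets", "customer emails"],
        data_encrypted_at_rest=True,
        trains_on_customer_data=True,
        change_management_documented=False,
        output_accuracy_monitored=False,
    )
    for flag in candidate.risk_flags():
        print("RISK:", flag)
```

A structured record like this also gives procurement and security a shared artifact to revisit whenever the vendor ships a new model version.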


As a CISO, my goal is not to block innovation, but to guide it. We should aim to become trusted advisors who help our organizations explore AI responsibly, not fearfully.  

AI isn’t going away, and organizations that learn to manage its risks through policy, training, governance and vendor oversight will be the ones best positioned to unlock its full potential. 

If we take the right steps, ask the right questions and embrace our role as stewards of this technology, we can enable this innovation without sacrificing security. 

Post Written By: Jack Nichelson
Jack Nichelson is a Chief Information Security Officer for Inversion6 and a technology executive with 25 years of experience in the government, financial and manufacturing sectors. His roles have included leading the transformation and management of information security, IT infrastructure and data management for organizations in numerous industries. Jack earned recognition as one of the “People Who Made a Difference in Security” by the SANS Institute and received the CSO50 award for connecting security initiatives to business value. Jack holds an Executive MBA from Baldwin-Wallace University, where he is an adviser for its Collegiate Cyber Defense Competition (CCDC) team. He is certified in the following: CISSP, GCIH, GSLC, CRISC, CCNP, CCDA, CCNA and VCP.
