November 16, 2023
By: Jack Nichelson

AI and Cyber security 101: Defining the Parameters


Unless you’ve been on a deserted island for the past year and a half, you already know that the onset of artificial intelligence (AI) has threatened to revolutionize industries far and wide. And while the gold rush is on to find and field generative AI solutions for all manner of business problems, it’s worth examining AI and cyber security further.

Like many sectors, cyber security has found substantial benefits from incorporating AI into day-to-day operations. But before discussing how to leverage AI in your cyber security processes, let’s dive into generative AI, large language models, and the steps organizations should take when first looking to explore everything AI has to offer. Just like with security as a whole, a strong foundation will serve you well.

Make Sure Your Business is Protected: Connect with our cyber security experts to get started on your tailored security solution today.  

What is AI? 

AI has become the catch-all term for applications that perform cognitive tasks typically associated with human minds or that traditionally required human input. These tasks run the gamut, from playing chess to communicating with customers online to driving cars. AI is broadly categorized as Narrow or General: Narrow AI is designed and trained for a particular task (like recommendation algorithms on streaming platforms), while General AI would have the ability to learn and apply knowledge across different domains but remains largely theoretical.

Subsets of AI include machine learning, deep learning, neural networks, and generative AI, which creates outputs based on prompts; the most prominent current example is OpenAI’s ChatGPT. It’s important to note, however, that current AI isn’t true artificial intelligence. AI can’t yet replace the pilot; it still needs an initial point of inception to produce anything. To borrow from self-driving cars: AI can keep the vehicle in motion, but it still needs human input to set it in motion.

What Are Large Language Models? 

Large Language Models (LLMs) are a class of artificial intelligence systems trained on massive text data sets to develop highly capable language processing. These models are typically built using deep learning techniques, specifically architectures like transformers. Trained on enormous volumes of textual data, including books, websites, and conversations, they can generate contextually relevant, human-like text, engage in dialogue, summarize content, translate languages, answer questions, and more.

LLMs are aptly named: they contain billions or even trillions of parameters and require immense computing power and data to develop. Again, ChatGPT is a clear example of both AI (as conventionally referenced) and an LLM; GPT-3, for instance, had 175 billion parameters. While not every AI system is a large language model, many of the AI tools developed and released in the current innovation boom are LLMs.
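
To make the prompt-and-response pattern concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers library, with the small, freely downloadable GPT-2 model standing in for the far larger commercial LLMs discussed above. It is an illustration only, not a recommendation of any specific tool.

```python
# Minimal text-generation sketch (assumes `pip install transformers torch`).
# GPT-2 is a small, public stand-in: it has roughly 124 million parameters,
# while GPT-3's 175 billion parameters at 16-bit precision would need on the
# order of 350 GB of memory just to load.
from transformers import pipeline

# Load a small pretrained language model for text generation.
generator = pipeline("text-generation", model="gpt2")

# The model only produces output in response to a prompt -- the human
# "point of inception" discussed above.
prompt = "A strong cyber security program starts with"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```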

What Are the Risks of Implementing AI? 

Because AI can access information, process it, and produce outputs at great speed, industries across the spectrum have folded it into their processes at a rapid pace over the last year. AI and cyber security are no exception.

However, using AI requires an honest examination of the pitfalls and problem areas that come with adding AI to your company’s toolkit. Without these considerations, many organizations have rushed to replace people with AI, often driven by budget concerns, and have paid a steep price for it.

Hallucinations 

AI hallucinations refer to instances when an LLM, most commonly a generative AI chatbot, perceives patterns or objects that are nonexistent or imperceptible to human observers and produces results that are wholly inaccurate or nonsensical. Errors and inaccuracies are always a risk when relying on AI-generated or LLM-produced answers or information. Remember: despite their impressive capabilities, LLMs have no real-world knowledge, so they are susceptible to making false claims and to reproducing bias inherent in the materials they are trained on.
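
One common mitigation, sketched below with purely hypothetical source names, is to refuse to act on a model’s answer unless it can be traced back to a trusted source, because the model itself cannot distinguish a confident hallucination from a fact.

```python
# Hypothetical guardrail sketch: treat an AI answer as unusable unless it can
# be tied back to at least one approved source. Source names are illustrative
# assumptions, not real systems.
APPROVED_SOURCES = {"internal-kb", "vendor-advisory-feed", "regulatory-guidance"}

def accept_answer(answer: str, cited_sources: list[str]) -> bool:
    """Reject AI output that cannot be traced to an approved source."""
    return any(src in APPROVED_SOURCES for src in cited_sources)

if __name__ == "__main__":
    print(accept_answer("Patch the affected servers immediately", ["vendor-advisory-feed"]))  # True
    print(accept_answer("This malware family self-destructs on Tuesdays", []))                # False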

Authenticity 

Generative AI can rapidly produce text based on what it has ingested, but it still has difficulty producing copy that conveys emotional depth or genuinely human-sounding language. This can cause lasting reputational damage for businesses that rely on AI to create public-facing statements. Genuine, authentic communication still has value, and failing to consider the impact of soulless, artificial-sounding language is a mistake, whether you’re relating sports results to readers or alerting your campus to a shooting incident.

Legal 

The use of AI tools is still in its infancy, and the potential for lawsuits stemming from its use is unknown territory. A host of ethical issues still surround AI use, including questions around bias, misinformation, privacy, and responsible use. But beyond those thorny ethical questions, legal trouble is a legitimate risk too. Because LLMs are built on pre-existing materials — published works, websites, and so on — there are copyright issues and questions about reproducing or repurposing others’ work that have yet to be settled.  

The First Steps of AI and Cyber security 

For many businesses, the choice of whether to use AI at all, and where, is being made for them as vendors deploy the functionality across their existing offerings, products, and platforms. Soon, AI-bolstered tools will be prevalent and nearly unavoidable. So how do companies begin to address AI and cyber security, or manage the risk inherent in implementing AI?

To successfully adopt AI in cyber security or any other use case, organizations will need to carefully consider the risks and benefits of AI and ensure that it is properly trained and tested before being deployed. Teams should consider building out contingency plans to respond to scenarios where AI starts to produce false positives. 
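
As one illustration of that kind of contingency planning, here is a hedged sketch in plain Python of a human-in-the-loop rule: alerts the AI scores below a confidence threshold are routed to a human analyst instead of being acted on automatically. The classifier, threshold, and field names are assumptions for illustration, not any specific product’s API.

```python
# Hypothetical sketch of a human-in-the-loop guardrail for an AI-assisted
# alert triage pipeline.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # below this, a human analyst reviews the alert

@dataclass
class Alert:
    description: str
    ai_verdict: str       # e.g. "malicious" or "benign"
    ai_confidence: float  # 0.0-1.0, as reported by the (assumed) AI classifier

def triage(alert: Alert) -> str:
    """Decide whether an AI verdict can be acted on automatically."""
    if alert.ai_confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-handle as {alert.ai_verdict}"
    # Low-confidence output is where false positives creep in, so fall back
    # to the contingency plan: human review.
    return "escalate to human analyst"

if __name__ == "__main__":
    print(triage(Alert("suspicious login from new country", "malicious", 0.97)))
    print(triage(Alert("unusual but known admin script", "malicious", 0.55)))
```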

Developing acceptable use policies for your users is a productive first step. Working with a proven cyber security managed services provider offers insight into what to consider when crafting such policies. A formalized policy also gives interested customers or clients transparency into how you’re using this technology and how their data may be affected.

Here are some sample policy statements that should give you an idea of the guidelines organizations should put in place to begin governing the use of AI in their operations. (See the sketch after this list for how a couple of these rules might be backed by simple technical controls.)

Artificial Intelligence Platforms — Company employees are responsible for proper use of platforms that utilize artificial intelligence (AI) and store information. These guidelines are designed to ensure the ethical and responsible utilization of AI technologies and safeguard the integrity, privacy, and security of the stored information. 

Approved AI Tools — Employees are only permitted to use AI tools that have been approved by the Company. Unauthorized tools should not be utilized. 

Confidentiality of Data — Employees must recognize that any data entered into AI tools may be accessible to others, even if the tool claims data protection. Exercise caution when entering sensitive information. 

Caution in Usage — Exercise discretion when utilizing AI tools and formulating queries. AI results can be misleading or inaccurate. Be aware of the potential risks of AI tools, such as the creation of fake news or propaganda, and take appropriate steps to mitigate these risks. 

Fact-Checking — Prior to incorporating AI tool results into business decisions, independently verify the accuracy and reliability of the information. Employees should always use AI tools with caution and cross-check their output. 

Compliance with Laws and Regulations — Employees must adhere to all applicable laws and regulations while using AI tools. This includes respecting intellectual property rights and avoiding impersonation, copyright infringement, and trademark infringement.

Reporting Violations — Employees are responsible for promptly reporting any known or suspected policy violations to their manager or IT department. 
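
To show how a couple of these statements, approved tools and confidentiality of data, might be backed by a simple technical control, here is a minimal sketch in Python. The tool names and data patterns are illustrative assumptions only, not a complete or recommended filter.

```python
# Hypothetical sketch backing the "Approved AI Tools" and "Confidentiality of
# Data" statements above with a basic check before a prompt leaves the company.
import re

APPROVED_AI_TOOLS = {"company-approved-chatbot", "internal-copilot"}  # illustrative names

# Very rough patterns for data that should never be pasted into an AI tool.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US Social Security numbers
    re.compile(r"\b\d{13,16}\b"),                # likely payment card numbers
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), # credentials pasted into prompts
]

def check_prompt(tool: str, prompt: str) -> str:
    """Apply the acceptable-use policy before a prompt is sent to an AI tool."""
    if tool not in APPROVED_AI_TOOLS:
        return f"BLOCKED: '{tool}' is not an approved AI tool"
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            return "BLOCKED: prompt appears to contain sensitive data"
    return "ALLOWED"

if __name__ == "__main__":
    print(check_prompt("internal-copilot", "Summarize our patching policy"))
    print(check_prompt("internal-copilot", "Customer SSN is 123-45-6789"))
    print(check_prompt("random-free-chatbot", "Draft a press release"))
```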

Secure Your Entire Digital Landscape: XDR protection provides comprehensive cyber security solutions for an organization’s entire digital landscape. Learn more today. 

Navigate AI and Cyber security Issues with Inversion6 

AI is still in its infancy, and figuring out how to use it to your advantage is a question every organization is wrestling with. You need expert assistance to guide your use of and experience with this technology, and soon we’ll be sharing more on AI and cyber security, including how to leverage it to improve your security efficiency.

Inversion6 provides the steady guidance required to find a secure path forward through the rapid onset of AI. With several decades of experience in finding, creating, and advising on information security solutions, we have the expertise to help your organization in a variety of ways:

  • We’ve long partnered with many cutting-edge solution providers

  • We have substantial experience working with businesses across many industries to solve complex challenges 

  • We actively work with AI in our internal processes and have developed strategies to leverage the technology for better efficiency and protection

From fractional CISOs and a Secure Operations Center for monitoring coverage to XDR services and autonomous penetration testing, we partner with you to protect your business at every level. 

AI and cyber security can offer up scary and intimidating terrain. Connect with our team today to learn how we can help. 

Post Written By: Jack Nichelson
Jack Nichelson is a Chief Information Security Officer for Inversion6 and a technology executive with 25 years of experience in the government, financial and manufacturing sectors. His roles have included leading transformation and management of information security and IT infrastructure, data management and more for organizations in numerous industries. Jack earned recognition as one of the “People Who Made a Difference in Security” by the SANS Institute and received the CSO50 award for connecting security initiatives to business value. Jack holds an Executive MBA from Baldwin-Wallace University, where he is an adviser for its Collegiate Cyber Defense Competition (CCDC) team. He is certified in the following: CISSP, GCIH, GSLC, CRISC, CCNP, CCDA, CCNA and VCP.
