Just Ask Artificial Intelligence (AI)—It’s About the Fundamentals
In Frank Herbert’s series of science fiction novels, beginning with Dune, the backstory includes a war between humans and “Thinking Machines.” After humanity’s victory (yay for humans!), computers with advanced “thinking” capabilities were banished from use as a threat to the species. This fictional conflict between humans and automatons lingers in the current public discourse around artificial intelligence; we should account for a real fear of the unknown as newer, commercially viable AI tools and services arrive.
In the novels, the fictional response to losing advanced computing tools was the Mentat: a human trained to perform massive, computer-like analysis and reasoning. Will today’s AI tools help us all fill the role of Mentat, or will the reverse happen, with our cognitive and reasoning skills fading as we hand those tasks to AI?
Let’s take a closer look.
Artificial Intelligence Fundamentals—Language
What is AI? Artificial intelligence has been around for decades, most recently under the name machine learning (ML), but in the past three years it has grown into something an average person with an Internet browser can access and use. Recall that the precursor to the Internet, the ARPANET, existed in the 1970s; it took decades to mature before widespread adoption in the 1990s.
Today’s AI has matured more rapidly because it rests on three things that were not around at the beginning of machine learning: improved mathematical and programming models that convert data into meaningful information, high-performance chipsets like those Nvidia produces for graphics processing (growth driven in part by the crypto mining craze), and large amounts of textual data available (almost) for free across the internet.
Machine learning and large language models, combined, now permit AI tools to perform tasks that a human can direct with “natural language.” Looking back to the early days of computer programming (for me, the 1970s), programmers had to direct computers in very structured ways, starting with what was called machine language. Human language, particularly English, is filled with nuance and inference that can produce ambiguous results; structured machine languages made digital systems operate predictably.
Now, with recent large language models, a human can provide input to an AI service in simple terms, or prompts, such as: “write” this or “compose” that. This has given rise to a new programming discipline for humans, called prompt engineering. The language needed is still structured, but significantly less complex than modern computing languages.
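To make the idea concrete, here is a minimal sketch of what “structured but simple” prompting can look like. The template fields (role, task, constraints) are an illustrative convention, not any particular vendor’s format; real AI services each have their own input shapes.

```python
# Sketch of prompt engineering: composing a structured natural-language
# prompt from parts. The template below is hypothetical, for illustration.

def build_prompt(role: str, task: str, constraints: list) -> str:
    """Assemble a structured prompt: persona, task, then constraints."""
    lines = [f"You are {role}.", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    role="a technical editor",
    task="Summarize the attached incident report in three bullet points.",
    constraints=["Use plain language", "Flag any missing data"],
)
print(prompt)
```

The point is not the code itself but the shape: the “program” is readable English, with just enough structure to keep the model’s output predictable.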
Computing power needed
Computing power needed for artificial intelligence is significantly greater than what the average corporation uses for its annual operations. According to the AI Now Institute, the computing power used to train AI models, measured in floating point operations (FLOP), has doubled every 9.9 months since 2015, compared to every 21.3 months between 1952 and 2015. This is clearly evident in the race by the big tech companies (Microsoft, Amazon, Meta, Google, Nvidia, and OpenAI) to increase their chip investments and cloud data center infrastructure to leverage AI’s potential. The same article notes that large-scale AI models use up to 100 times more computing power than other AI models. These demands mean you won’t have truly personal AI on your handheld for some time.
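A quick back-of-the-envelope calculation shows why those doubling periods matter. This sketch just applies exponential growth (2 raised to months elapsed over the doubling period) to the AI Now Institute figures cited above; the ten-year horizon is an arbitrary example, not a claim from the source.

```python
# Compare compute growth under the two doubling periods cited above:
# 9.9 months (AI era, post-2015) vs. 21.3 months (1952-2015 trend).

def growth_factor(months: float, doubling_period: float) -> float:
    """Total multiplication of compute after `months` elapse."""
    return 2 ** (months / doubling_period)

decade = 120  # months
print(f"10-year growth at  9.9-month doubling: {growth_factor(decade, 9.9):,.0f}x")
print(f"10-year growth at 21.3-month doubling: {growth_factor(decade, 21.3):,.0f}x")
```

Over a single decade, the faster doubling period compounds to roughly a few thousand times more compute, versus only about fifty times under the historical trend, which is why the chip and data center race looks the way it does.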
Data inflow
All the computing needed to power artificial intelligence with large language models also requires significant data input. Data is what trains the AI models and makes their output useful. It is also the Achilles’ heel of AI: bad input trains the model incorrectly, resulting in erroneous output (recall the adage “garbage in, garbage out” and think about the public problems with Google’s Gemini). I’m not seeing an AI version control feature, so the investment in building and training a model is apparently immutable; you can’t roll back to a previously known state. Companies like Google and Meta have large user populations and equivalently large data sets to use in their AI quest.
If you are tempted to play with online AI tools, remember that there are no “takesies-backsies”: whatever data you give a tool remains in its model, wherever that may be. Think carefully before pasting your resume into an AI tool and asking it to “find me an ideal job.” And remember, when companies tout their products as “AI-powered,” they currently mean that their own (or their customers’) data alone supports their models. Your AI features will only be as useful as the quality of the data used to train them.
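The “garbage in, garbage out” point can be shown with a deliberately tiny toy. The “model” below just memorizes the most common training label; the data sets are invented for illustration. Even this trivial learner shows how mislabeled input flips the output.

```python
# Toy illustration of "garbage in, garbage out": a model is only as
# good as its training labels. All data here is invented.
from collections import Counter

def train(labels):
    """A trivial 'model' that learns the most common training label."""
    return Counter(labels).most_common(1)[0][0]

clean = ["benign"] * 90 + ["malicious"] * 10
poisoned = ["malicious"] * 60 + ["benign"] * 40  # mislabeled inputs

print(train(clean))     # prints "benign"
print(train(poisoned))  # prints "malicious": bad data, bad model
```

Real models are vastly more sophisticated, but the principle scales: if the training data is wrong or skewed, no amount of compute fixes the output.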
Artificial Intelligence in the Cybersecurity Space
Cybersecurity defensive and analysis systems rely upon, and generate, loads of data. So much data that it’s often discarded (while adhering to retention requirements, of course!) as soon as we think it is no longer useful. One of the major problems for cybersecurity practitioners is finding patterns of malicious activity hiding in plain sight, within the regular noise of monitored network traffic, authentication data, and activity monitoring. The promise of AI in this domain is improved identification of threats, fewer false alarms, and perhaps automated responses that cut off attacks as they occur.
The fundamental artificial intelligence question that cybersecurity asks is this: Where did your data come from? If it came from your vendor, say Microsoft, you will want to specify the scope of the data used to train the AI assistance. Initially, it may be best if your own Microsoft tenant is the only data source. But if the AI capability is trained on data from all of Microsoft’s clients, you could potentially benefit from problems seen in IT environments unlike your own.
For What’s Next in Artificial Intelligence and Cybersecurity, Turn to Inversion6
We didn’t intend to perform a full summary of all the various flavors of generative AI tools available, but this Google Blog entry gives a nice overview for readers wanting to dig deeper. What you need to know is that AI requires massive computing and storage resources, needs clean data to work well, and, like all new innovations in IT, will mature as IT and cybersecurity professionals gain more experience from successes and failures.
It will also be useful for consumers to maintain a healthy skepticism toward AI-enabled services. In general, you can develop your Mentat skills by watching the technology’s progress, using AI with careful attention to your own reasoning, and being ready to “fail well” as it matures.
As cybersecurity experts and your comprehensive managed security services provider, Inversion6 works hard to stay current with the threats and breakthrough technologies of today. We’ve touched on artificial intelligence before and have even offered some early thoughts on how it can be implemented to augment security and resilience.
From our entire suite of security offerings to insight into new technologies, Inversion6 helps you form a proactive plan to better protect your organization. Connect with our team today to schedule a consultation.