
Why automation without oversight is the next cybersecurity challenge

How to solve AI's "blinking 12" problem.




Back at Christmas 1978, my grandparents gifted our family a videocassette recorder (VCR). For those of you born after… whenever, that’s a device that uses tapes to play and even record movies and TV shows. We barely knew what it was when we got it: This particular machine was one of those old “pop top” versions. We loved its authoritative “kethunk” as it popped open from the top, ready to receive the next tape. Still, finding tapes to rent was a bit of an adventure; the only rental option was a convenience store that had just three movies available: “Patton,” “The Sting” and “Close Encounters of the Third Kind.” So, we recorded a lot of TV shows.


But, to do that, we first had to program that VCR. The problem was that programming this thing wasn’t for the faint of heart; just setting the clock on the front of the machine to stop it from constantly blinking “12:00” was a true exercise in pain. I think a good project manager would struggle to tally the collective hours we spent fiddling with those settings, just to make that deceptively simple change. I remember the moment when, after our many fumblings, the number finally changed to the current time: We all let out a collective victory cry. When I went to school the next day, I heard stories about other families who simply put black tape over the blinking 12:00 on the LED display and settled for watching Mr. Spielberg’s semi-latest movie.

Our “blinking 12” AI problem

What does this story have to do with practical AI training? Quite a bit, actually. I suppose you’re thinking that I could have referred to something more cutting-edge than a VCR, and are wondering why I went full-frontal retro with this example. I did it because “blinking 12” isn’t my term; I didn’t make up the saying. And no, I’m not talking about the “12th man,” either (go Seattle Seahawks).

The “blinking 12 problem” refers to any situation where features or functions of a device, program or system go unused because the interface is difficult to use. In fact, the issue runs deeper than a poor user interface: It exists largely because the system’s developers failed to anticipate the level of interaction users would need to operate the technology. The same dynamic now applies to AI: To use AI effectively, we all need to interact much more deeply with our tech. Both traditional workers and our AI co-workers struggle to do this right now. Why? Because developers, designers and systems workers have rarely anticipated the reality that, in order to be productive, both “eaters” (the latest Silicon Valley term for humans) and AI agents need to understand nuances and exhibit a greater level of literacy than ever before.

I suppose some folks are thinking right now that AI is easy to use. After all, one of the smartest things generative AI providers have done is adopt Google’s simple text-box interface: All we have to do is type in a request, question or command, and things happen automagically. But truly productive use of AI requires more interaction than that. In fact, our CompTIA Workforce and Learning Trends report indicates that organizations in virtually every industry sector continue refining how workers and AI interact with each other.

AI and reducing friction

Like almost any automation technology, AI is designed to reduce friction and lower barriers to entry. I call this process “defrictioning.” For example, I recently conducted a penetration test where I needed some Python code to automate the scanning process. I had a GenAI tool create the code; I had to fix it, but GenAI still saved me some time. Another example of defrictioning comes from Europe, where I once spoke with a CIO in the health care field who told me how she used AI to enhance a disease-detection tool. Modeling disease agents with AI saved her project at least six months.
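To make “defrictioning” concrete, here is a minimal sketch of the kind of scan-automation helper I’m describing. It is not the actual script from that engagement: the hosts and ports below are hypothetical placeholders (drawn from the reserved TEST-NET-1 range), and, as with anything a GenAI tool produces, you would still want to review and fix it before trusting it.

# Minimal sketch of a Python scan-automation helper, similar in spirit
# to the GenAI-drafted script described above. Hosts and ports are
# illustrative; only scan systems you are authorized to test.
import socket
from concurrent.futures import ThreadPoolExecutor

TARGETS = ["192.0.2.10", "192.0.2.11"]  # hypothetical lab hosts (TEST-NET-1)
PORTS = [22, 80, 443, 8080]             # common service ports

def check_port(host, port, timeout=1.0):
    """Return (host, port, is_open) using a simple TCP connect test."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        is_open = sock.connect_ex((host, port)) == 0
    return host, port, is_open

def scan():
    # Fan the connect tests out across a small thread pool.
    with ThreadPoolExecutor(max_workers=16) as pool:
        jobs = [pool.submit(check_port, h, p) for h in TARGETS for p in PORTS]
        for job in jobs:
            host, port, is_open = job.result()
            print(f"{host}:{port} {'open' if is_open else 'closed'}")

if __name__ == "__main__":
    scan()

Even a toy like this illustrates the trade: the tool hands you a working skeleton in seconds, but the judgment calls – scope, authorization, timeouts, what “open” actually tells you – remain yours.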

The problem is that it’s easy to misuse AI to create sheer fiction during the “defrictioning” process. This happens when we humans find ways to steamroll over a process that requires more time and thought. The “copy and paste” model isn’t an effective way to interact with AI, or anything else, yet many of us are guilty of it. That’s not proper AI interaction; that’s avoiding interaction.

Substitution and AI implementation

I often hear about how AI has made vast improvements and creates wonderful code, usually from people who have never created any useful code in their lives. I experienced such a moment just last week with an academic who fancies himself a great programmer and seasoned systems administrator. He informed me and almost 200 guests that AI creates incredible, usable code that even he doesn’t have to check anymore. I was a bit surprised at this statement.

Sure, GenAI continues to make great progress, especially in the developer field. But I find that anyone who uses code without first validating it is willfully taking on technical debt – a malady that continues to vex developers, cloud engineers and cybersecurity workers alike. Accepting untested code – which means skipping security validation steps – is a particularly dangerous form of technical debt, and one I see in every industry sector.
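One lightweight way to pay down that debt is to treat AI-generated code like any other untrusted contribution and gate it behind tests before it ships. Below is a hedged sketch using Python’s standard unittest module; it assumes the hypothetical check_port helper from the earlier sketch has been saved in a module called scan_helper, and both names are placeholders of my own, not anyone’s real tooling.

# A minimal validation gate: before the AI-drafted scan helper goes
# anywhere near real work, exercise it against known-open and
# known-closed ports on localhost.
import socket
import unittest

from scan_helper import check_port  # hypothetical module from the sketch above

class CheckPortTest(unittest.TestCase):
    def test_detects_open_port(self):
        # Stand up a throwaway listener on an ephemeral localhost port.
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("127.0.0.1", 0))
        server.listen(1)
        port = server.getsockname()[1]
        try:
            _, _, is_open = check_port("127.0.0.1", port)
            self.assertTrue(is_open)
        finally:
            server.close()

    def test_reports_closed_port(self):
        # Bind and immediately release an ephemeral port; it should read closed.
        probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        probe.bind(("127.0.0.1", 0))
        port = probe.getsockname()[1]
        probe.close()
        _, _, is_open = check_port("127.0.0.1", port)
        self.assertFalse(is_open)

if __name__ == "__main__":
    unittest.main()

The specific assertions matter less than the habit: generated code never lands without a human-authored check of its behavior.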

Let me put a finer point on this: Whenever I’ve asked working sysadmins, cloud engineers, cybersecurity professionals and developers if they directly use the code that AI spits out, they all give me a funny look, then offer some variation on the following statement: “I never use AI directly in my work.” So, what’s the problem here? It has less to do with arrogant academicians and more to do with a problem I call “substitution,” a particularly insidious version of the “blinking 12” problem.

I call it “substitution” because someone eliminates a valid step and inserts something problematic in its place. It’s where naïve workers use AI to automate things they find either unimportant or impossible to do in the time they have. We’ve all engaged in this kind of substitution: Upon discovering a barrier to your work, how easy is it to say, “Hey, it’s simple to eliminate what someone else usually does, because AI now does it better”? This is more than mere disintermediation, where you eliminate a middle player to create efficiencies. You’re creating a weak substitute for a process that actually requires nuanced work.

The result? You end up with sloppy stuff: weak AI, poor cybersecurity and bad business. And no one needs sloppy tech today.

A few examples of substitution

At one company, I saw HR professionals asked to help train their HR upskilling software. Instead of doing it themselves, they hired a contractor – one who, as it turned out, had no experience creating upskilling programs. As a result, the company lost an opportunity to collect data and properly automate its training. This is a rather common “blinking 12” problem across industry sectors.

In one country, I saw the military engage a contractor to use GenAI to help vet and map industry certifications and courses to its standards. But the contractor wasn’t able to train the AI model given the available time and money, so the government went back to a manual process. To me, this is nothing short of putting black tape over AI’s blinking 12.

We’ve seen public examples of substitution, including Meta’s AI misadventure, as well as cases where organizations implement AI without the data labeling strategies that enable governance. In my own experience, I’ve seen organizations try to use AI to create an incident response policy rather than rely on experienced cybersecurity professionals.

In each of the above cases, organizations are discovering that their AI or tech implementations are incessantly flashing the same message: “Your blinking 12 problem is showing!”

Solving substitution with curiosity, communication

It’s up to us to properly train our workers – “eaters” and AI alike – in the essential skillsets needed to address the burden of work and avoid improper substitutions. For example, I know of a small boarding school in Australia that is deliberately adopting AI for its instructors, staff and students. They see the tremendous efficiencies that AI can create. But instead of using AI to create policies, they are working with industry leaders from various sectors, including retail, finance and consumer goods.

They are starting by adopting the ISO/IEC 42001:2023 AI management system standard. As part of this adoption, the IT and management teams are working with internal staff to create and vet governance and incident response documents. They are also writing documentation that outlines their AI strategy for students, along with AI acceptable use policies. Their progress has been slow but steady.

But it’s possible I’m getting ahead of myself here. Their success, like that of many other organizations I’ve seen in Europe, South America and Japan, isn’t because they have adopted a particular standard or framework. They’re succeeding because they are carefully liaising with staff, students and consultants to make sure they address how AI will affect workflows. So far, this school’s leaders haven’t made the mistake of ignoring input from people who work for a living. That would replace collective wisdom with theoretical ideas. That’s not transformation. That’s substitution.

The leaders I have spoken with have leveraged their ability to remain curious and ask tough questions. Right now, that’s the primary skill organizations are using to solve the problem. In fact, whenever I ask leaders in any industry sector to name the most critical skill that helps them implement AI, they use two words: curiosity and communication – two skills that go hand in hand, just as “eater” humans and their AI agents do. We’ve come a long way from the limited VCR interface. Now we’re engaging in iteration and interaction, using agents that help us complete tasks. These things will help us avoid the blinking 12 problem.