Janelle Shane on Giraffes, Rulers, Onesies and AI


Giraffes and Rulers and Onesies, Oh My!
AI guru Janelle Shane puts AI into perspective and tells us why chess isn’t like laundry



[Image: A human hand shakes a blue electronic hand coming out of a laptop. Credit: kiquebg from Pixabay]

AI. It might take your job. It might save the world. It might kill us all. It definitely generates the most disturbing hands ever. Industries from marketing to infrastructure are using the new tech, and leaders across the board are turning to it for insights – which hopefully don’t include “resistance is futile.” What’s the truth behind the hype?

Technologist Janelle Shane, who is delivering a keynote at SmartBrief’s upcoming AI Impact Summit, says that enthusiasts and worrywarts both need to take a breath. In her AI Weirdness blog and her book, “You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place,” she showcases the blind spots AI still has and digs into the strange situations those blind spots can produce. 

Strange world

In a SmartBrief interview, Janelle noted that algorithms are great at memorization and at picking probable answers – but “probable” doesn’t always mean “accurate.” Nor do algorithms necessarily memorize the most applicable data. 

Janelle describes one such amusing case on her blog. A New Jersey zoo has a baby giraffe born without spots, who’s as cute as you might expect. When Janelle asked two AI image recognition systems to describe the youngster, using a series of different questions, the responses she got ranged from the incoherent to the almost-human-sounding – but only one of them ever mentioned the lack of spots, and Janelle never got a repeat of that answer. Instead, the AI put in objects that weren’t there, described the giraffe as spotted or striped, and even said it was wearing a coat. 

Another head-scratcher comes out of a skin cancer recognition project at Stanford University. Dermatologists like to measure the spots and lesions they’re worried about, so many pictures of malignant lesions on the web also include rulers. When the image recognition model was trained to identify malignant tumors, it decided that the presence of a ruler itself meant a lesion was more likely to be malignant. That’s not what you want out of a diagnostic system.

Then there’s the creepy stuff, like the chatbot that hit on a reporter. What’s up with that?

Well, it ties back to one of the oldest computer science sayings: GIGO, or Garbage In, Garbage Out. AI is only as good as its data, and large language models scrape from a wide variety of sources. (One ChatGPT-based bot memorably revealed that it got phrases from a very niche type of fanfiction. Maybe don’t Google that.) As Janelle points out, some of those sources include science fiction stories, where sentient, lovestruck, potentially murderous robots are a staple.

“AI doesn’t make moral judgments or truth judgments,” Janelle says. 

That can lead to responses like the ones outlined above, to the 2016 incident where a Microsoft chatbot became racist after learning from the internet, and to other sticky situations in law, academia and journalism. ChatGPT has made up legal citations – and anyone who’s watched an episode of “Law and Order” can understand the immediate problem with that. 

Lawyers are also, understandably, not thrilled about having their names attached to documents they didn’t write and may not agree with. Neither are scientists, journalists or, really, anybody else whose reputation could suffer when a bot credits them with writing phantom content.

Kill all humans?

Physical danger is also becoming a problem. No, not from Skynet or AM, but from how closely AI can pass as a human without taking human needs into consideration. AI mental health apps raise a plethora of concerns. AI-generated mushroom guides or cookbooks can be lethal. A robot doesn’t know how human physiology works or, for example, why making tea with belladonna is a very bad idea. 

In some ways, the use of AI in inappropriate situations is a very old human problem. We have a tendency to reach for cheap substitutes even when they aren’t as good as the real thing and, in some cases, even when they’re dangerous. That’s how we got the FDA, and it’s why we now need caution and guardrails for AI. 

The sunny side

So, what is AI good for?

Narrow, well-defined problems, Janelle says. AI can beat humans at chess, where the potential scenarios are limited, but struggles with sorting laundry, where everything from stains to fabric to the hardness of the local water varies a lot. In practice, this means that AI isn’t a good idea for rating job interviews – one program would give an applicant higher scores if they had a bookcase in the background of their video – but it excels at tasks like predicting protein folding, where the problem is constrained and testing is built in.

AI’s very oddness also makes it a great generator of strange ideas. With the right prompt, Janelle produced baby onesies with adorably absurd designs like “Galaxies on Ice” and “Onions in Snow.” If you’re stuck for inspiration on your next creative project, you could do worse than feeding a prompt into AI and sorting through the results. Just keep in mind that the sorting is essential. 

Humans: Not obsolete yet 

Janelle’s research reveals that even the most advanced AI needs a team of people backing it up. Humans provide expert knowledge, empathy, creative development and moral judgment. We can even spot when a giraffe doesn’t have any spots. 

We may have our faults as a species, but the robots shouldn’t take over yet. 

Don’t miss Janelle’s talk at our AI Impact event!