GAI Is Going Well

Being somewhat inspired by Molly White who writes Web3 is going great. I’ve been tracking the more hilarious and often quite alarming unintended ( or purposeful in some cases ) consequences of using LLMs for over a year now.

There are folks looking for a quick fix , poorly if at all tested apps , and bad guys & good guys figuring out how to use it to their advantage or to protect their livelihood.

I’m saddened that the same old problems I’d started observing over a year ago are the same problems we are getting now and with ever more powerful & capable models the problem I feel will get worse before it gets better as folks ae rushing to take advantage of GAI powered tools yet not actually understanding ( or caring in many cases) how they work nor trying to mitigate for the adverse side effects.

What’s really frustrating is there are ways to help mitigate and plan for these unintended effects some of which I discussed here .

However it seems many folks are not even doing the minimum. Yes it’s understood that putting in place good testing processes, validating input and output, implementing security & privacy guardrails all costs both in terms of the skills required & money but what is that when weighed up compared to your reputation and in some cases your livelihood?

With governments around the world talking about and in some cases already introducing regulations around the use of AI the woes are only going to get worse if basic mitigations are not a default starting point .

Anyway enough of me grumbling about that as the evidence speaks for itself.

I’ll only include entries from Jan 2023 up to mid february 2024 when I wrote this post.

I avoided including too many research papers deciding to focus more on issues identified in the wild ( I may write a post about research in this area at some point but for now that is out of scope!).

I didn’t catch them all and this post would have been way longer if I had !

If any issues seemed to be just general bad practice such as password sharing etc I opted not to include.

Where I had read articles beyond a paywall I tried to not include them. If you want to continue to track these incidents then I suggest https://incidentdatabase.ai/ . It has case studies/reports of failures of deployed AI systems And is ( quoting) “dedicated to indexing the collective history of harms or near harms realised in the real world by the deployment of artificial intelligence systems”

Out of control chat bots

This issue is so old now the fact it’s still a thing and causing headlines I just keep going DOH! Whether it’s unintentional or a deliberate prompt injection attack the consequences are never great for whomever the chatbot belongs to

Cutting corners

Cutting corners by relying on AI to do your work for you probably will result in hallucinations and providing you with responses that seem accurate. As these examples show you also need to do your homework to ensure that what is being generated is actually factually accurate, reflects your principles and isn’t biased .

Folks who cut corners with an over reliance on chat bots & AI tools just don’t understand how LLMs work . Seems the warnings that accompany chat bots are just being ignored as a simple check of the output would maybe stop this behaviour? Both copilot and Gemini give you an option to validate the results so I am expecting ( well hoping ) the reports of this type of error to reduce quite a bit .

This is particularly sensitive. LLMs tend to be trained on web scale data and part of that leads to swallowing up data that the originators are not happy should be freely available to train AIs. This has led to a number of law suits that are still going through the courts . Govts are also struggling with what is fair use and isn’t.

Divulging stuff that wasn’t for public consumption

July 2023 Business insider listed companies who were concerned about leakage of private data and thus were restricting use of ChatGPT

Using for nefarious purposes

(Deep) Fakes

This is often harmful and there are multiple ways it can manifest . It can be annoying to folks looking for original artefacts. There’s the alarming intentional misdirection by the creation of fake videos and audios. It’s often used to harass and cause distress.

Bad guys love this and Govts are waking up to the consequences. It’s not new but it’s easier to do now .

I didn’t have a category for this one but I wanted to include it anyway.

Part II is here