NotebookLM and GAI Going Well

So, as some of you may be aware, I am a big fan of NotebookLM, and with the addition of being able to create an Audio Overview, as well as use YouTube videos as a source, what was already good became even better.

I’d tried the Audio Overview to create a podcast of what I have been working on in my day job. It really was very, very good, but I wanted to see if it could repeat the trick … tl;dr: it does.

You can use a maximum of 50 sources, and NotebookLM uses those as the grounding material to generate the content.

I wondered if I could use it to create a listenable podcast/audio review of some of my most recent additions to my GAI collection.

As I was going to use the URLs from my selection, it was nice to know that NotebookLM does respect sites that put up a "please don’t scrape me" notice (robots.txt), so I was unable to add as sources anything from Ars Technica, TechCrunch, The Guardian and The Verge, which, if you check my sources, make frequent appearances in my collection. (Sure, you could work around that if you really wanted to, but that would be unethical imho!)
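If you’re curious whether a site you want to add is likely to be blocked, you can check its robots.txt yourself before pasting the URL in. Here’s a rough sketch using Python’s standard library; the user agent string is my guess at the relevant crawler token rather than anything confirmed here, and the URLs are just placeholders.

```python
# Rough sketch: would a site's robots.txt allow this URL to be fetched?
# The user agent is an assumption (NotebookLM's crawler token may differ),
# and the example URLs are placeholders.
from urllib.robotparser import RobotFileParser
from urllib.parse import urlparse


def can_fetch(url: str, user_agent: str = "Google-NotebookLM") -> bool:
    """Return True if the site's robots.txt permits fetching the given URL."""
    parts = urlparse(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # downloads and parses robots.txt
    return parser.can_fetch(user_agent, url)


if __name__ == "__main__":
    for url in [
        "https://arstechnica.com/example-article/",      # placeholder URL
        "https://www.theguardian.com/example-article",   # placeholder URL
    ]:
        print(url, "->", "allowed" if can_fetch(url) else "blocked")
```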

The audio review it generated from a recent selection of 7 allowed sources from the "in the wild" collection was okay, and I am still trying to work out which of the sources had the Bluesky reference! Apparently Bluesky is Twitter’s cooler, calmer cousin, which I admit I kinda agree with! I guess wherever it got that from, it was being referenced as Twitter.

Next I thought, okay, let’s see how it gets on with the "research & opinions" collection. From this I felt the podcast was more compelling, and as NotebookLM is primarily a research tool it kinda makes sense that it would shine here. This time I added 11 sources, so it had more to go through and took slightly longer to generate the audio, but you can come back later, so time to get a cuppa!

One observation is that the arXiv PDF versions of papers work better as sources for this than the experimental HTML versions.

I also got it to generate a briefing doc using the same sources, as I hadn’t tried that before, and that’s pretty cool too. Not sure why it decided on October 2024 as the date of the briefing though (I generated it on Sat 28th Sept).

AI Developments & Concerns: Briefing Doc - October 2024
This briefing document analyses recent developments in Artificial Intelligence (AI), highlighting key themes, facts, and potential areas of concern arising from the provided source material.
Main Themes:
- AI-Generated Content in the Wild: We are witnessing the increasing emergence of AI-generated content outside of controlled environments. This includes malicious applications like malware, as well as more mundane instances like images and news reports.
- Ethical and Safety Concerns: The rise of AI-generated content brings forth a range of ethical and safety concerns. These include the potential for misuse, the spread of misinformation, and the difficulty of distinguishing between human-created and AI-generated content.
- Impact on Industries and Professions: The use of AI is beginning to impact various sectors, including journalism and cybersecurity. This raises concerns about job displacement and the need for new skills and training.

Key Findings:

AI-Generated Malware:
- Researchers at HP have identified an email campaign distributing malware via an AI-generated dropper script. ("AI-Generated Malware Found in the Wild")
- While this instance used AI for a relatively simple task, it indicates a potential trend toward more sophisticated AI-generated malware.
- Quote: "We've known for some time that gen-AI could be used to generate malware... But we haven't seen any definitive proof. Now we have a data point telling us that criminals are using AI in anger in the wild." - Alex Holland, HP.
- The ease of access to AI tools lowers the barrier to entry for cybercriminals.

AI-Generated Images in Search Results:
- Google Search has been displaying AI-generated images in response to certain queries, often misrepresenting them as real photographs. ("Google Serves AI Slop as Top Result for One of the Most Famous Paintings in History," "Google Serving AI-Generated Images of Mushrooms Could Have 'Devastating Consequences'")
- This has been observed with images of famous paintings and even species of mushrooms, raising serious concerns about misinformation.
- Misidentifying poisonous mushrooms based on AI-generated images could have "devastating consequences."
- While Google claims to be addressing these issues, the problem persists, highlighting the challenges of controlling AI-generated content online.

AI in Journalism:
- The Garden Island, a Hawaiian newspaper, has begun using AI-generated newscasters to produce video news segments. ("Historic Newspaper Uses Janky AI Newscasters Instead of Human Journalists")
- This move has been met with criticism from the newspaper's union, who view it as replacing human journalists with AI.
- Quote: "The journalists, to put it lightly, raised several concerns and advised against using AI avatars. Obviously, management decided to move forward." - Union representative.

Unexpected AI Behaviour:
- Users have reported instances where OpenAI's ChatGPT appears to initiate conversations unprompted, leading to speculation about new features or unexpected AI behaviour. ("OpenAI Says It's Fixed Issue Where ChatGPT Appeared to Be Messaging Users Unprompted")
- While OpenAI claims to have resolved the issue, attributing it to a technical glitch, it underscores the unpredictable nature of AI and the potential for unintended consequences.

Other Concerns:
- Google's AI search feature, "AI Overview," has been found to provide unsafe and inaccurate advice, such as suggesting users smear faeces on a balloon for potty training. ("Google's Search AI Makes Disgusting Recommendation for Parents of Toddlers")
- This highlights the limitations of current AI technology in understanding context and nuances in information.

Conclusion:
These recent incidents highlight the rapid evolution of AI and its increasing integration into various aspects of our lives. While AI presents numerous opportunities, it also introduces new challenges and risks that need to be carefully considered.

Recommendations:
- Increased research and development of methods to detect and mitigate the malicious use of AI.
- Development of ethical guidelines and regulations surrounding the creation and distribution of AI-generated content.
- Public education initiatives to raise awareness about the potential benefits and risks associated with AI.
- Fostering collaboration between technology companies, researchers, policymakers, and the public to ensure the responsible development and deployment of AI.

I also tried out NotebookLM's YouTube support on one of my old talks (from 2018). You can make up your own mind what you think. It made me smile anyway!

NotebookLM is aimed at individuals rather than group work. To get the same capabilities at scale, you can probably start from Building a Dynamic Podcast Generator Inspired by Google’s NotebookLM and Illuminate | by Sascha Heyer.
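The gist of that approach, as I understand it, is: get an LLM to turn your sources into a two-speaker dialogue script, then synthesise each turn with a different text-to-speech voice and stitch the audio together. Here’s a rough sketch of the stitching half using Google Cloud Text-to-Speech; the script content, speaker names and voice choices are placeholders of mine, not anything lifted from that article.

```python
# Minimal sketch of the "turn a dialogue script into audio" half of a
# NotebookLM-style podcast generator. Assumes google-cloud-texttospeech is
# installed and credentials are configured; voices and script are examples.
from google.cloud import texttospeech

# A dialogue script like the one you might prompt an LLM to produce from
# your sources: a list of (speaker, line) turns. Placeholder content.
SCRIPT = [
    ("host", "Welcome back! Today we're digging into AI-generated content in the wild."),
    ("guest", "Yes, everything from malware droppers to dodgy mushroom images in search results."),
]

VOICES = {
    "host": "en-US-Neural2-D",   # example voice names; any two distinct voices will do
    "guest": "en-US-Neural2-F",
}


def synthesise_turn(client: texttospeech.TextToSpeechClient, speaker: str, text: str) -> bytes:
    """Synthesise one dialogue turn as MP3 bytes using that speaker's voice."""
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(
            language_code="en-US", name=VOICES[speaker]
        ),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    return response.audio_content


def main() -> None:
    client = texttospeech.TextToSpeechClient()
    # Naively concatenating MP3 segments usually plays fine; reach for pydub
    # or ffmpeg if you want gapless, properly tagged output.
    with open("episode.mp3", "wb") as out:
        for speaker, line in SCRIPT:
            out.write(synthesise_turn(client, speaker, line))


if __name__ == "__main__":
    main()
```

Of course, the interesting part is the bit not shown here: prompting the LLM to write a script that is actually grounded in your sources, which is exactly what NotebookLM does so well.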

So yes, I am a fan of NotebookLM, and maybe you will be too. However, I am fully prepared for the day it may start to go off the rails. Today is not that day.