Thoughts on GenAI

02 Oct 2025

GenAI

I got called into a vendor meeting abruptly the other day (which is not unusual), but this time the vendor asked us to individually articulate how we feel about the current state of play with regard to Generative AI tools. The vendor was trying to gauge how hard to sell (their product is built on AI), but the introvert in me has been pondering the question for a few days now and I wanted to write my thoughts down - it will be interesting to see whether my opinion changes over time.

I think overall I would describe my outlook as cautiously optimistic. There are some amazing use cases for GenAI, but there are some serious dangers as well. The ability to generate paragraphs of text, images, videos, etc. all through a simple text prompt is nothing short of amazing. The level of automation is off the charts - it reminds me of the industrial revolution or the invention of the printing press and how those transformed the world in terms of productivity and information (respectively). But these things are, of course, a double-edged sword. There are concerns about the ecological impacts (water and power use), ethical concerns, privacy concerns, copyright infringement in training data, etc. So many facets to explore, so many rabbit holes to go down (at work we have been developing a '23 things' framework to explore all of these, which has been really interesting). But what I really wanted to point a magnifying lens at is how GenAI is being used day-to-day by most people: search and summary. Chatbots are another one, but I'll leave that for another day!

I read a recent post by Hugh Rundle regarding Google search and the role that description plays in information discovery. While libraries use controlled vocabularies to describe resources, the internet does not follow these rules, so Google had to create its own method of indexing everything. As soon as a system is put in place, people work out how it can be used to their advantage - and SEO is born! Then you start getting commercial links at the top of your search results. Now Google has implemented AI summaries, and this is part of what I wanted to expand upon. In the early days of search, you typed in a search string (and as Hugh points out, this may or may not have been what you were ultimately looking for) and got back a long list of results - links to websites. It was up to you to determine whether those results were relevant and which ones required further investigation - information literacy 101: Is it current? Is it relevant? What is its authority? Its accuracy? And what is the purpose of the website/information? With AI summaries, you are effectively outsourcing your information literacy to an algorithm, but the algorithm doesn't know your context and, as Vanderbilt University's research has succinctly summarised, GenAI tools have some limitations:

  • Incomplete: Most large language models cannot access the most recently published work
  • Inconsistent: Unable to replicate the same results, for the same question, over time (for me this is a serious limitation, especially when you are talking about automation)
  • Incoherent: Cannot provide provenance for where they source their information
  • Illogical: (makes me think of Spock!) Fail to solve problems that are trivial for humans or other 'simpler' software
  • Indulgent: Encourages confirmation bias and path dependency rather than critical thinking

I think some of these issues are becoming less pronounced over time - a lot of summaries will now link to the sites used to generate the overview. But as was recently uncovered in some of the GenAI tools used by information publishers (as per Aaron Tay's post on the issue), there are unknown guardrails or behaviours built into these tools that may be unknowingly steering you down a path to biased information. Even for small, 'simple' searches it can be an issue - you might be looking for the answer to something like 'What is the capital city of Australia?' but there is no guarantee you will get the correct answer. It reminds me of that BigPond internet - Great Wall of China ad that was on TV.

Given the huge volume of searches that occur every second of every hour of every day around the world, there are a lot of people relying on dodgy information to construct knowledge, goals and beliefs, and it highlights to me how essential information literacy and critical thinking are. I'm sure most people will look at the outputs critically at first, but over time I think you either tend to stop using the tool due to its inaccuracies, or you come to trust it as a source of information despite them - and that is the path to conspiracy theories and AI psychosis. That scares me. Given the rate and amount of information being generated by AI tools on the internet, it feels a bit like an unbalanced wheel, slowly getting more and more off balance until it eventually flies off the axle and the wagon of society comes crashing to a stop in a muddy puddle on the side of the road.

That's the pessimist in me - the optimist, on the other hand, loves how much time it saves me writing emails and reports (even if I have to double-check everything that is generated)!

← Home