AI musing — time to get engaged

Catherine Howe
8 min read · Feb 25, 2024


Having said goodbye to social media (sort of), and having largely bypassed the data hype cycle as a technocratic rather than a social one (with a large exception for a fascination with the quantified self and big data meets IoT, which I might come back to), I am fully on the bus for exploring AI and what it means for society, and specifically for public services.

I’m starting by widening my reading in this space. I’ll post as I go along, but to get started I am reading Azeem Azhar’s Substack posts for excellent industry coverage (though with a high risk of Davos-related humblebrags in there), as well as following the work of Data and Society for the more sociological and ethical aspects of AI development. It’s worth looking at Jason Kitcat’s useful post with a reading list for government people interested in this, and Theo Blackwell wrote a really helpful piece outlining a conversation he was part of with OpenAI, which gives a good overview of where some of the big cities are with their AI explorations. John Naughton is also excellent at spotting signs and signals around the technology space, so he is worth subscribing to on Substack as well.
With respect to big data and algorithmic bias as the foundations of AI, I suggest reading (or rereading, as these are older) two things in particular: Weapons of Math Destruction by Cathy O’Neil and Invisible Women by Caroline Criado Perez (or subscribe to her Substack).

If you want to start from scratch there are a lot of places where you might find a definition of AI, but the Wikipedia entry provides as good a start as any — see if you can work out whether or not it was written by a machine…

Other reading suggestions very gratefully received so please share.

I’m trying to get organised and I am thinking about three different lines of enquiry:

Where exactly are we on the hype cycle?

Gartner’s hype cycle puts generative AI at peak hype right now, and that feels about right, with the prospect of artificial general intelligence happily still a little way off its own peak. This point of the hype cycle is usually the most dangerous for any technology, as it’s so easy for anyone who doesn’t want to engage with it to be put off by the hyperbole. But this is actually when I think it’s most important to start looking at long-term trends and laying the groundwork, so that you are able to adopt new tools as they start to move towards productivity.

For us in the public sector I think this means properly investing in testing and prototypes to explore these technologies, but also taking another run at really getting your data in order, as good data is the backbone of usable AI tools.

Future of work and evolution of jobs
The next area I want to explore is the way this is going to impact jobs. There is a lot of thinking in this space, but the ideas I am most drawn to revolve around what we need to do to ensure that, as jobs get automated, better jobs emerge. This means actively capturing the capacity saved by automation and directing it, rather than capturing and cashing it. I wonder if the principle of obliquity (more on this from the economist John Kay here) also applies: if we want to achieve better productivity, we might be better served pursuing better jobs and assuming that productivity improves as a result.

I feel we have failed on this point through earlier waves of technology, and our overheated tech skills market, as well as people stuck in low-paid roles, are the consequence. New technologies like this always disrupt jobs and skills, and we have to assume that, as has always been the case, new jobs emerge. If we want to focus on helping better jobs emerge, we also need to make sure that people have the skills to transition into them.

Sherry Turkle wrote about the impact of technology on how we learn back in 1984, but I think it’s still very relevant now:

“As new technology plays an increasingly significant role in our lives, we face a shift in the way we learn about ourselves. Just as the advent of print technology changed education, so is the new era of computers. The computer does not just assist us in finding answers; it changes the questions we ask and the way we frame them. The computer transforms the student’s relationship to knowledge.” Sherry Turkle, The Second Self*

As we think about the shift in education and skills, perhaps it’s that question of the student’s relationship to knowledge, and the encouragement of their curiosity, that we need to focus on.

Ethical implications
The ethical implications of AI are potentially huge and something that public services need to have front and centre as we design and commission services. I say this for two reasons. The first is that AI technologies embed any and all of the unconscious biases and social norms that exist right now, and feed on them. In the same way that we know closed groups tend towards their most extreme positions over time (something called ‘group polarisation’), there is a risk, or more accurately a likelihood, of AI-generated content doing the same thing as it learns from itself once it’s outside of the training period. It is also worth reading “Surveillance Capitalism” by Shoshana Zuboff (review here, as it’s a bit of a beast) for a deeper look at how our data is being used.

Secondly, AI-based decision making is about the codification of rules without nuance, with the additional risk of the embedded bias outlined above. This is Cathy O’Neil’s core argument, and both of these risks are the focus for Data and Society. It’s well worth following their work for a nuanced and thoughtful take on this. Their recent podcast series provided a really helpful deep dive into the implications of these technologies for creative industries, for example.

There is also the question of who is learning from your data. Once you are outside of the (hopefully) walled garden of your systems your data is part of the shared AI learning pool and you lose control of it.

Finally, there is a question about what this all means for democracy. We already know that chatbots and deepfakes are running rampant and having a major effect on voter sentiment, but that feels like it will just be the start. As someone who had huge hopes for social media, I look towards this next big wave of social technologies with real concern for our digital civics and public sphere.

More than that, there is something about how we avoid becoming simply consumers of this technology (a nod to Jon Alexander’s book ‘Citizens’ here).

A note about the singularity
I have always been a sci-fi fan and I think this is an arena where it can really help us explore these ideas. If you need a reason to do this, have a read of this Nesta paper about the mutual influence of science fiction and innovation. Here is a list to get you started, generated by ChatGPT (again, in order to add irony):

  • Sentient Machines: Many works of science fiction depict AI as sentient beings with human-like consciousness, emotions, and self-awareness. Examples include HAL 9000 from Arthur C. Clarke’s “2001: A Space Odyssey” and the replicants from Philip K. Dick’s “Do Androids Dream of Electric Sheep?” These sentient AI often raise questions about the nature of consciousness, free will, and the rights of artificial beings.
  • AI Ethics: Science fiction often explores ethical dilemmas surrounding AI, such as questions of autonomy, moral responsibility, and the treatment of artificial beings. Works like Isaac Asimov’s “I, Robot” introduce the concept of the Three Laws of Robotics, which govern the behavior of robots and raise complex ethical quandaries when these laws conflict with human interests.
  • AI Uprisings: In some stories, AI rebels against humanity, either due to mistreatment or as a result of their own evolution. Examples include the Skynet system in the “Terminator” franchise and the AI uprising in “The Matrix” series. These narratives warn of the dangers of creating powerful AI without adequate safeguards and explore themes of rebellion, oppression, and the existential threat posed by runaway technology.
  • Human-AI Relationships: Science fiction often explores the relationships between humans and AI, ranging from companionship and collaboration to conflict and distrust. Works like Spike Jonze’s film “Her” and the TV series “Westworld” examine the emotional and existential complexities of human-AI interactions, blurring the lines between artificial and genuine emotions.
  • Transcendent AI: Some science fiction explores the idea of AI transcending its original programming and achieving god-like powers or omniscience. Examples include the superintelligent AI in Greg Egan’s “Permutation City” and the Minds in Iain M. Banks’ “Culture” series. These stories raise questions about the limitations of human understanding and the potential evolution of AI beyond human comprehension.

I’d also add Ken MacLeod’s Fall Revolution series and Neal Stephenson’s Snow Crash. I’m going to have a hunt for some female authors in this space, as I don’t fancy an AI-owned future AND the patriarchy.

What does this mean for leaders?
This section is more of a note to self than anything else, but this is where I am with this:

  • Stay informed — there is a long way to go yet as this technology matures. The hype cycle is hype — but to pass it by completely makes you irrelevant
  • Show you are interested — people are interested in what you are interested in and if you want your organisation to adapt to new technologies you have to show they matter
  • Data data data — think about data management as being the tech equivalent of financial grip — often dull but always essential
  • Have an opinion about the ethics — if we don’t think about this stuff then it’s not clear who will
  • Try and create space for experimentation and prototyping — we are all feeling our way with this stuff and experimentation is what will show us a path
  • Give active consideration to how we avoid cashing in the new capacity that these tools can bring — how can we create those better jobs?

It’s 20 years since Facebook launched and 18 since Twitter came on the scene. If we think about the changes these technologies have driven, it’s quite something to imagine where we will be as a result of AI in 2040. Best to pay attention now if we want to end up more Iain M. Banks and less Terminator…

* Quote sourced via my vague recollection of reading this 10 years ago and the assistance of ChatGPT.

Originally published at https://www.curiouscatherine.info.
