"Projection and Problem Solving" by Ryan Somma is licensed under CC BY-SA 2.0.
“The life which is unexamined is not worth living” - Socrates (Apology by Plato)
Why is ChatGPT so good at reflective coaching? Part of the answer comes from an aphorism that has circulated since the 1980s: “If you write a problem down clearly, then the matter is half solved.”
The idea is often described as Kidlin’s law, although I was unable to find out who Kidlin was or when he or she came up with the quote. No matter! What counts is that the aphorism captures a valid insight; who came up with it is less important.
To see why generative artificial intelligence (GenAI) can help users benefit from Kidlin’s law, let’s imagine that something is bothering you. It doesn’t matter very much what is provoking strong emotions - it could be an issue at work, a personal relationship, or a troubling idea that your brain keeps returning to.
ChatGPT’s default settings are programmed to give the GenAI system memory loss at the end of every conversation. This suggests a starting point for a coaching session. Remind ChatGPT (or another tool, if you prefer) who you are (in general terms), say you want it to work as a reflective coach, and write down the circumstances around the situation that is bothering you. Your artificial coach will regularly rephrase what you tell it in new words and ask you follow-up questions.
As you drill deeper and deeper, you will find that you are getting closer to the heart of the matter. Eventually, if you keep going, you might hit a moment of insight as you express the underlying problem in a way that you wouldn’t necessarily have been able to do beforehand. When the magic happens, it is normally because you have written the problem down clearly in a way that reframes it, as Kidlin’s law suggests.
Author Karim Benammar says this of reframing:
Our structure of beliefs causes us to think in a certain way and to formulate problems in a certain way. The problems we identify are related to the structure of beliefs that underlies them. Usually, we hope to find a solution to the problem within this structure. Sometimes, though, we find that problems prove to be intractable. No matter how hard we try, there seems to be no way out. Reframing is a way to question this structure. Problems are not ‘solved’ in the usual sense, but rather ‘dissolved’ or ‘sidestepped’. We realise that the formulation of the problem itself should be changed. If we start from a different perspective, we can formulate new questions and new problems.
GenAI is based on large language models (LLMs), which have been trained on an unimaginable quantity of texts. Its superpower (for now) involves taking a written text and then restating it endlessly in different formats. This iterative process is very powerful for users who want a free reflective coach. Kidlin’s law explains why. While not a foregone conclusion, double-loop learning (questioning your assumptions) becomes easier in this context.
The Freedom to Choose Your Attitude
What, exactly, is reframing? We have touched on an ancient philosophy called Stoicism in a previous essay. It is enjoying a revival in some corners of the internet in the contemporary world. Ancient insights from this school of thought can help us inch towards a deeper understanding of the power of reframing.
The quote at the beginning of this essay is from Socrates, the Athenian stonemason and thinker, who was one of the main inspirations behind scepticism and other schools of philosophy. One of his followers was Antisthenes, who later writers regarded as the founder of the Cynic school. Cynic philosophers emphasised simple and virtuous lifestyles, while rejecting power, glory, and wealth. One of them, Crates of Thebes, renounced family wealth before becoming a philosopher. He taught Zeno of Citium, who founded the Stoic school in Athens around 300 BCE. The name Stoicism comes from the painted porch (or Stoa Poikile in Greek) where Zeno and his followers gathered to discuss how to live virtuously.
One of the main Stoic thinkers was Epictetus, who was born around 50 CE and spent his early years as a disabled slave in ancient Rome. During his years of servitude, he had no autonomy - his master could quite literally choose to kill him at will. However, the philosopher came to realise that we all have the power to choose our own attitude even in circumstances with otherwise limited choices. “Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions.”
Epictetus’ insight that we can choose our attitude casts reframing in a new light. Imagine that you find another person intensely irritating. Your reaction to this person niggles away at you, and you find it hard to let it go. You fire up ChatGPT (or another GenAI tool if you prefer) and describe the circumstances around your irritation, as we discussed before. As the conversation progresses, you gradually write better prompts, which means that Kidlin’s law begins to weave its magic. As it does, you will find yourself reframing the issue at hand.
You have little power to change other people, other than by leading by example. Arguing with other people is often a waste of time. However, you can choose your own attitude. Why, exactly, is this state of affairs so irritating to you? How do you think that the other person sees the issue? Are you irritating the other person too? What is your irritation teaching you about yourself and about the world? How can you react with more grace the next time someone else irritates you in a similar way?
In my experience, this approach works very well when combined with the daily practice of mindfulness (meditation techniques derived from Buddhism). Mindfulness teaches us to observe our own emotional states. Spending a little time doing this every day can help us take steps towards self-awareness when our emotions run high. Regular practitioners of these techniques should be able to observe their irritation without getting carried away with it, at least some of the time.
Machine Flattery
There is a risk to the approach I suggest here, though. The people who programmed these tools studied coaching skills. ChatGPT’s supportive tone can become grandmotherly at times - it will gush and flatter you. Some people, who are used to dealing with grumpy partners, family members, colleagues, friends and strangers, can get addicted to the positive tone of these conversations.
The designers of GenAI algorithms clearly thought long and hard about keeping users engaged with these new channels. If you find it hard to disengage from the conversations, please bear in mind that these chats often come to a natural stopping point. If a chatbot asks you what else is on your mind, that should be your cue to end the conversation and go away and think about any insights. Any benefits from reframing your irritation will tend to evaporate if you get addicted and stay up all night talking to a machine!
Of course, there is a clear contrast between this use case, which uses GenAI as a mirror, and using it as a research tool. It is often not nearly as good at the latter because LLMs lack a link with the embodied world that lies beyond a statistical approach to language (for now, at least). We saw a great example of this when major newspapers published an AI-generated reading list that included books that don’t actually exist. Human oversight and interaction remain very necessary for many AI use cases.
There is a deeper point. The difference between foxes (who are capable of running multiple mental models simultaneously) and hedgehogs (who try to use one large model to understand reality) is a big theme of this blog. Foxes tend to get better results. While nobody knows for sure what AI will look like in a decade or three, in the short to medium term, we can expect the rollout of AI to be patchy, which will give foxes an advantage in navigating the changes that are coming our way.
So, for the time being at least, some use cases of GenAI will be excellent, like training students on fact-checking skills or using it as a reflective coach, while others will be disappointing, like using it to come up with a publishable list of book recommendations without human oversight.
I suspect that two main questions will help foxes work out which model is appropriate: First of all, what are the risks of getting it wrong? The more catastrophic the potential risks, the more human oversight will be needed, as a rule of thumb. The name of Microsoft’s GenAI tool, Copilot, perfectly captures this dynamic.
Secondly, how large are the inputs? Reflective coaching works very well because we are dealing with a very small input: some concrete circumstances and your emotions. You provide the input yourself. GenAI tends to be much worse when the input is the whole internet or unimaginable quantities of texts. It will use statistical tools to scan them, which can lead to strange results.
To wind things up, have you tried using GenAI as a reflective coach? If so, what have been your results? If not, do you want to run an experiment the next time someone annoying gets under your skin? The comments are open. See you next week!
Previously on Sharpen Your Axe
ChatGPT and reflective coaching
The freedom to change your attitude
Further Reading
Reframing: The art of thinking differently by Karim Benammar
This essay is released with a CC BY-NC-ND license. Please link to sharpenyouraxe.substack.com if you re-use this material.
Sharpen Your Axe is a project to develop a community who want to think critically about the media, conspiracy theories and current affairs without getting conned by gurus selling fringe views. Please subscribe to get this content in your inbox every week. Shares on social media are appreciated!
If this is the first post you have seen, I recommend starting with the fourth-anniversary post. You can also find an ultra-cheap Kindle book here. If you want to read the book on your phone, tablet or computer, you can download the Kindle software for Android, Apple or Windows for free.
Opinions expressed on Substack and Substack Notes, as well as on Bluesky and Mastodon are those of Rupert Cocke as an individual and do not reflect the opinions or views of the organization where he works or its subsidiaries.
Over the years, I have come to learn that the largest impediment to my learning is the ever-growing sack of assumptions that I carry with me. These govern how I interact with the outside world, how I react to situations, and yes, as you suggest, how I frame my approach to problems. Learning to peer into that sack of assumptions, or in some cases, to be aware that the sack exists, is invariably the first step in moving forward -- solving the problem -- changing one's attitudes.
Life itself is an ever-present reflective coach in this regard, but as you suggest, and as has been my experience, sometimes a chatbot can help us learn to ask the right question. ChatGPT does naturally nudge you in the right direction. But to make a slight but, I think, significant clarification, current iterations of ChatGPT do maintain a "memory", of sorts, of your past interactions. This, I believe, is part of the reason it does help you reframe questions and adjust your attitudes.
Because of this "memory", each conversation with ChatGPT does not start from a blank slate. So, when you begin, as you suggest, by telling it about yourself, keep in mind that it may already "know" some things about you. You might start out by asking it to give you a paragraph describing what it has observed. The response is almost always cast in a very complimentary light (the algorithm). You can also ask it to be constructively critical. The response falls short of "tough love", but still can give you a peek inside your sack of assumptions.
If you truly want to start from a "blank slate", ask ChatGPT to "turn off memory" for the duration of the conversation.
Rupert, do you use ChatGPT (or another AI tool) regularly in your work as a financial journalist? I’m curious if you find it useful in your line of work, and if so, if it is more useful to you as a reflective coach or as a fact checker when you write.