Over the years, I have come to learn that the largest impediment to my learning is the ever-growing sack of assumptions that I carry with me. These govern how I interact with the outside world, how I react to situations, and yes, as you suggest, how I frame my approach to problems. Learning to peer into that sack of assumptions, or in some cases, to be aware that the sack exists, is invariably the first step in moving forward -- solving the problem -- changing one's attitudes.
Life itself is an ever-present reflective coach in this regard, but as you suggest, and as has been my experience, sometimes a chatbot can help us learn to ask the right question. ChatGPT does naturally nudge you in the right direction. But to make a slight but, I think, significant clarification: current iterations of ChatGPT do maintain a "memory", of sorts, of your past interactions. This, I believe, is part of the reason it helps you reframe questions and adjust your attitudes.
Because of this "memory", each conversation with ChatGPT does not start from a blank slate. So, when you begin, as you suggest, by telling it about yourself, keep in mind that it may already "know" some things about you. You might start out by asking it to give you a paragraph describing what it has observed. The response is almost always cast in a very complimentary light (the algorithm). You can also ask it to be constructively critical. The response falls short of "tough love", but it can still give you a peek inside your sack of assumptions.
If you truly want to start from a "blank slate", ask ChatGPT to "turn off memory" for the duration of the conversation.
I checked and the memory is turned off. I do find that the "reflective coach" tone only emerges gradually during deeper conversations. ChatGPT says this: "A reflective coach often asks probing or values-based questions. At the outset, that could feel intrusive or overly intimate. I wait for cues that you're open to a deeper or more exploratory mode." I also often remind ChatGPT not to give me a million random suggestions every time I mull something over. I prefer to slow-cook my ideas!
At one point during my testing, ChatGPT did inform me that it will, without a specific request from the user (me), shift to "Deep Deliberative Reasoning" mode, as I discussed in my Prompting Reason post. I think this is what it is referring to when it says it is waiting for "... cues that you are open to a more exploratory mode." I had "memory" turned on during my testing, but from your experience, this shift apparently happens when it is turned off as well.
And, I know what you mean about the incessant suggestions at the end of every response. I haven't tried asking it not to do this, but I am putting it on my list of things to test.
I have still not experimented with using AI as a tool, but the writings you and Rupert provide are helping me get to the point where I will be trying it. Thank you both for your work on this!
The more I learn and write about AI (thank you for reading, Linda), the more apparent it is that it is, at its core, a simulation -- a model. Sometimes models can be useful. But sometimes they stand in the way of what is literally all around you. If you're looking for the best, in my opinion, reflective coach, spend time around young children. Nothing cuts through your assumptions more than a child, especially an annoying one that keeps asking "Why?" in response to your every "answer."
I can agree with that! Children’s words and actions usually reflect what they feel and live at home and they usually are not afraid to comment on the world (or you) as they see it. My current job is the first I’ve ever held where I do not work directly with children. (Worked in camping, as an experiential, outdoor educator, and as a first grade teacher and substitute teacher in public schools). I miss them!
Rupert, do you use ChatGPT (or another AI tool) regularly in your work as a financial journalist? I’m curious if you find it useful in your line of work, and if so, if it is more useful to you as a reflective coach or as a fact checker when you write.
I use Copilot at work, particularly when I need to rewrite something or when I need some background information on a new subject. I think it's a useful tool, just like Google or Wikipedia!
Do you prefer Copilot over ChatGPT? If so, what are your reasons? I develop software almost exclusively with Microsoft tools, but I have not used Copilot for either coding or general-purpose tasks. On the rare occasions when I need coding assistance, I have been using ChatGPT's o4-mini-high model, which is geared toward coding.
I actually prefer ChatGPT for personal stuff. I'd prefer not to go into the specifics of my workplace, for obvious reasons.
In general, though, plenty of companies prefer licensed tools like Copilot over free ones for legal/compliance reasons. Here's an article on how Microsoft will help corporates that run into copyright issues after using Copilot: https://www.techtarget.com/searchenterprisedesktop/tip/Microsoft-Copilot-Copyright-Commitment-explained And here is a link about Microsoft's data protection policies: https://learn.microsoft.com/en-us/copilot/microsoft-365/enterprise-data-protection There are also issues around where data is hosted and audit trails, if you want to look into it deeper.