Show me versus tell me.
That’s a longstanding consideration when you are trying to learn something new or aiming to figure out how to solve a vexing problem. If you happen to know someone who is already skilled in the matter at hand, do you want them to show you via examples or a demonstration how to get things done, or would you prefer instead to be told how to proceed via a set of laid-out instructions?
I dare say that some people would much prefer one of those options over the other. Some of us would readily welcome examples or a demonstration, namely the show-me approach. Others of us would instead want a crisply conveyed set of explicit instructions, essentially the tell-me approach. The choice seems like an arbitrary personal preference that depends upon the person receiving the needed guidance.
Anton Chekhov, the famous playwright, said this about telling versus showing: “Don’t tell me the moon is shining; show me the glint of light on broken glass.”
Voltaire, a noted writer and philosopher, indicated this about the act of telling: “The instruction we find in books is like fire. We fetch it from our neighbors, kindle it at home, communicate it to others, and it becomes the property of all.”
There you have it, the value of show-me and the value of tell-me, each proffering distinct advantages and particular disadvantages. They are in an endless battle with each other. Sometimes the show-me prevails and wins the honor badge. Not to be outdone, sometimes the tell-me reigns supreme. Back and forth the tussle endures.
In today’s column, I am going to continue my ongoing special series about prompt engineering and will be tackling the persistent dilemma of whether to use a show-me technique as your best-in-class prompting method or whether to use the tell-me technique instead. Generative AI entails the entry of prompts that are one way or another going to spur the generation of results or outputs based on your posed question or problem. You want to come up with prompts that are going to be most successful in getting the generative AI aimed toward your quest for a suitable and sensible response.
Here’s the million-dollar question about the show-me versus tell-me prompting conundrum:
- Should you enter a prompt that demonstrates to the generative AI an indication of what you want (show it), or should you enter a prompt that gives explicit instructions delineating what you want (tell it)?
Which way do you vote?
I will endeavor to enlighten you as to the tradeoffs involved, plus provide heady guidance for those wishing to further improve their prompt engineering prowess. Welcome to an in-depth inquisition into the “show me” versus “tell me” contentious riddle.
Before I dive into the crux of the debate, let’s make sure we are all on the same page when it comes to the keystones of prompt engineering and generative AI.
Prompt Engineering Is A Cornerstone For Generative AI
As a quick backgrounder, prompt engineering, also referred to as prompt design, is a rapidly evolving realm and is vital to effectively and efficiently using generative AI and large language models (LLMs). Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc., ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.
For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful looks at the latest in this expanding and evolving realm, including this coverage:
- (1) Practical use of imperfect prompts toward devising superb prompts (see the link here).
- (2) Use of persistent context or custom instructions for prompt priming (see the link here).
- (3) Leveraging multi-personas in generative AI via shrewd prompting (see the link here).
- (4) Advent of using prompts to invoke chain-of-thought reasoning (see the link here).
- (5) Use of prompt engineering for domain savviness via in-model learning and vector databases (see the link here).
- (6) Augmenting the use of chain-of-thought by leveraging factored decomposition (see the link here).
- (7) Making use of the newly emerging skeleton-of-thought approach for prompt engineering (see the link here).
- (8) Additional coverage including the use of macros and the astute use of end-goal planning when using generative AI (see the link here).
Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.
Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days:
- The use of generative AI can altogether succeed or fail based on the prompt that you enter.
If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything demonstrative related to your inquiry. Being demonstrably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses for suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try to help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.
AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).
There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.
With the above as an overarching perspective, we are ready to jump into today’s discussion.
Consider The Human Elements Of The Debate
Let’s start by considering how humans make use of the show-me style of dialogue versus the tell-me style of discussion. I do so with a bit of trepidation because I don’t want anyone to be led down the path of anthropomorphizing AI. In current times, AI is not sentient and should not be equated to the sentience of humans. I will do my best to reiterate that alert when we get into the generative AI details.
With that caveat, imagine that a friend of yours is a superb chef. You are interested in making a meal that you’ve never made before (one that you heard briefly about on the radio). You naturally go to your culinary artist pal and ask them if they can give you some pointers or guidance on how to cook the meal.
Easy-peasy.
Suppose that your cooking maestro says to you that they will show you how it is done. The two of you go to a fully stocked kitchen. The wonderment begins. With a lot of pizazz and flair, you watch in gaping amazement as the meal is put together. It was a sight to behold.
Now then, you might be the kind of person for whom this is perfect. Good for you. You closely observed what was happening. You memorized the moves. All in all, you seem to now know how to cook the desired meal. But, there are some wrinkles. Your kitchen is not as well-stocked. You’ll need to make various changes and substitutions. In that sense of things, the demonstration was more of an example of what to do. You still need to fit the show-me into the specifics of what will work for you.
Some people might say that this show-me method in this instance was useless for them. They might have been greatly entertained, but the sad thing is that they learned very little about how to cook the desired meal. They go back to their own kitchen and don’t have a clue of where to even get started. Somehow, there might be a more suitable means that the chef could have employed for them.
Consider the alternative approach of the tell-me.
We shall start the cooking tale over with a clean slate. You ask your friend the chef how to cook the meal that you have in mind. Your pal proceeds to write down step-by-step instructions. The instructions are handed to you. It would seem to be entirely straightforward and transparent. You ought to be able to walk into your kitchen and get the meal put together. As they say, a monkey could do it.
Unfortunately, some people might find the set of instructions to be quite wanting. Keep in mind that in this variant of the tale, we are pretending that no demonstration or show me has taken place. All that exists is a bunch of instructions on a sheet of paper. This can be daunting for some. They are not readily able to make the leap from a set of instructions to the actual act of cooking the meal. If they could have possibly seen the meal being prepared, they would have a much more tangible and visceral sense of what to do.
I trust that the cooking example helps to illuminate key facets of the show-me versus tell-me debate.
You will encounter some people in this world who insist that the only way to learn something or figure something out is by being given guidance of a show-me caliber. Forget about the use of instructions. Don’t need them. Just demonstrate what is needed and the rest will all fall into line.
The other side says the complete opposite. The only way to learn something or figure it out is by being given explicit instructions that meticulously indicate what needs to be undertaken. Forget about the demonstrations. Don’t need them. Just provide a detailed list of what is needed and the rest will all fall in line.
Perhaps this is akin to the famous feud between the Hatfields and the McCoys.
Some lessons to be extracted from the cooking story concerning show-me versus tell-me:
- The person giving the guidance might have a preference for show-me versus tell-me, thus the choice of the matter is already potentially decided before you even get underway.
- If you try to get the guiding person to switch to the other style that they don’t prefer, you might get watered-down guidance that doesn’t do you much good (such as a lousy demonstration or a flimsy set of instructions).
- Even if the person clings to their preferred style, they aren’t necessarily any good at it and you can end up with an underwhelming show-me or a sketchy set of tell-me instructions.
- The person obtaining the guidance might be better suited to one approach over the other yet be stuck getting the guidance in a style that doesn’t befit them.
- A demonstration or example can be right on, but there is also a chance the show-me can be confounding, incomplete, or a mess.
- A set of instructions can be right on, but there is also a chance that tell-me can be confounding, incomplete, or a mess.
- The circumstance of the question or problem being solved can be a significant factor in whether show-me versus tell-me might be better suited than the other style.
- In a given circumstance, the show-me might be faster and easier to convey, but then again, other circumstances might reveal that the tell-me is faster and easier to convey.
- Another aspect is the likely gap between the show-me and the application of the show-me, and the same goes for the potential gap between the tell-me and the application of the tell-me (such as ultimately cooking the meal based on whichever style had been exercised). This is an extrapolation problem.
- Etc.
In the case of people overall, can we truly and undisputedly proclaim that one of the two approaches is for sure always the perfect way to go?
I doubt it.
A myriad of factors come to play. What is the nature of the person giving guidance? What is the nature of the person receiving the guidance? What is the type of question or problem being solved? How much time is available to impart the guidance? And so on.
Now that we’ve covered the fundamentals of the show-me versus tell-me, we can shift gears and explore how this pertains to prompt engineering and generative AI.
Fasten your seat belts.
Show-Me Versus Tell-Me In Prompt Engineering
Let’s take a journey together.
A person enters a prompt into generative AI.
The prompt is supposed to spur the generative AI towards answering a pressing question or solving a problem that the person wants to have solved. If the prompt is afield of the situation, the generative AI is likely going to also go afield. You won’t get a suitable answer. This can be exasperating. Furthermore, if you are paying for your use of the generative AI, this can be costly in the sense that the more times you need to try a variety of (unsuccessful) prompts, one after another, the more precious dollars you chew up in doing so.
The smart thing to do is make use of recommended prompt engineering techniques that can increase the odds of devising a prompt that will be successful. Of the myriad of such techniques, you might wisely choose to compose a prompt as either a show-me (a demonstration or an example) or a tell-me (an instruction or documentation).
Either of these two styles can nudge the generative AI to home in on whatever your question or problem consists of.
Let’s explore the demonstration style of a prompt.
For a demonstration type of prompt, you need to come up with an example or demonstration of what you want to have undertaken. If I wanted generative AI to produce an essay about how to cook an omelet, I might provide a prompt describing how a frittata is cooked. The generative AI might be able to identify or extrapolate from that example and generate an essay about cooking omelets.
The showcasing of one example is usually known as a one-shot approach. I might opt to provide several examples to further the likelihood of a suitable extrapolation. This is typically referred to as a few-shot approach. By and large, you are upping your odds with a few-shot over just a one-shot. There is more content for the generative AI to leverage. A downside for you is that coming up with more than one example might be burdensome and you’d prefer to avoid having to concoct several.
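To make the one-shot versus few-shot distinction concrete, here is a minimal sketch in Python of how such show-me prompts might be assembled as plain text before being sent to a generative AI app. The helper function and example texts are hypothetical illustrations of the idea, not any particular vendor’s API:

```python
def build_shot_prompt(task: str, examples: list[str]) -> str:
    """Assemble a show-me style prompt: one or several worked examples
    (one-shot or few-shot) followed by the actual request."""
    parts = []
    for i, example in enumerate(examples, start=1):
        parts.append(f"Example {i}:\n{example}")
    parts.append(f"Now, please do the following:\n{task}")
    return "\n\n".join(parts)

# One-shot: a single demonstration (cooking a frittata) precedes the request.
one_shot = build_shot_prompt(
    "Explain how to cook an omelet.",
    ["To cook a frittata, whisk eggs, add fillings, and finish in the oven."],
)

# Few-shot: several demonstrations give the AI more to extrapolate from.
few_shot = build_shot_prompt(
    "Explain how to cook an omelet.",
    ["To cook a frittata, whisk eggs, add fillings, and finish in the oven.",
     "To cook scrambled eggs, whisk eggs and stir gently over low heat."],
)
```

The only difference between the two variants is how many demonstrations you bother to supply, which is exactly the one-shot versus few-shot tradeoff described above.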
Another consideration about demonstration style prompting is that coming up with relevant examples might be arduous for you. An example ought to be within the ballpark of whatever the question or problem is. Any entered example that is far afield can confound things. Suppose I provided a prompt about how to cook French fries and my question was how to cook an omelet, this seems dubious as a close enough example and the generative AI might not get my drift.
As you can plainly note, there are tradeoffs associated with the demonstration style of a prompt. Likewise, all manner of prompts will inherently have tradeoffs. There isn’t any perfect prompt per se. Generative AI is like a box of chocolates in that you never know what you’ll get. The aim is to compose prompts that will most likely get the generative AI close enough to your question or problem and be able to produce a suitably sufficient response.
Let’s explore the documentation style of a prompt.
The documentation style consists of listing out a set of instructions for the generative AI (this is sometimes referred to as a zero-shot approach, namely that there aren’t any examples that you provide with your instructions). Suppose I know how to generally cook just about anything, so I list the steps that I normally take when cooking overall. I then ask the generative AI how to cook an omelet. The generic list of how to cook will perhaps spur the generative AI enough for a specific elucidation of cooking omelets.
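A comparable sketch for the tell-me style shows that a zero-shot prompt carries instructions rather than examples. Again, the helper function and the instruction text are invented for illustration only:

```python
def build_instruction_prompt(instructions: list[str], question: str) -> str:
    """Assemble a tell-me style (zero-shot) prompt: a numbered list of
    instructions with no examples, followed by the question at hand."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(instructions, 1))
    return f"Follow these general steps:\n{numbered}\n\nQuestion: {question}"

# Generic cooking steps stand in for documentation; no demonstration given.
prompt = build_instruction_prompt(
    ["Gather the ingredients.", "Prepare the cookware.", "Apply heat as needed."],
    "How do I cook an omelet?",
)
```

Note that the prompt contains zero worked examples, which is precisely why this documentation style is dubbed zero-shot.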
Of course, things don’t always go as planned. It is conceivable that my provided instructions are afield of the matter underway. Suppose my instructions cover how to change the spark plugs on a car, having nothing to do with cooking whatsoever. The odds that those instructions will aid the generative AI in deriving an explanation of how to cook an omelet would seem remote.
The thing is, you never really know whether an afield documentation approach (the tell-me) or an afield demonstration (the show-me) will throw off the generative AI. Maybe so, maybe not. Surprisingly, at times the generative AI will take a seemingly non sequitur prompt and nonetheless still hit the bull’s eye. I would recommend that you not seek to test that irregular capacity and instead always aim to provide pertinent prompts.
You will in the long run be happier with what generative AI generates for you.
One additional significant risk that you always face with generative AI is that the generated response might contain oddball aspects. No matter what your prompt consists of, you essentially have no guaranteed means to avoid having something oddish appear. The generative AI can make an error and miscalculate at times, produce a falsehood, or emit what is known as an AI hallucination (this is verbiage that I disfavor, as mentioned earlier herein, since it is another kind of anthropomorphizing, but it has nonetheless become a popular phrase and refers to the possibility that the generative AI will make up facts or figures that are entirely fictitious).
The nature of your demonstration or documentation can undoubtedly egg on the generative AI toward producing an oddish result (note the omelet-related pun!). I mention this because any demonstration or documentation as a prompt should be mindfully composed. If you include extraneous aspects or purposely try to be funny or otherwise vary from a more serious and focused route, this tends to increase the chances that the generative AI will go afield.
Here are my eight tips about the show-me versus tell-me styles of prompting:
- (1) Will the type of problem or question lend itself to show-me versus tell-me?
- (2) Have you previously used one or the other on a similar problem or question?
- (3) Do you have one or the other already in hand?
- (4) Is one either more or less likely to aid the generative AI?
- (5) After trying one, review and consider whether to try the other one.
- (6) Can you bear the time and cost of doing both?
- (7) If you can do both, is it easier or harder to sequence or interleave them?
- (8) What lessons learned about show-me versus tell-me can you discern afterward?
You will observe in my suggested tips above that you do not necessarily need to conceive of this as a mutually exclusive situation. Some people argue vehemently that they will always only choose the demonstration style or always only choose the documentation style. This might be due to having tried both of the styles and eventually landing on a preference for one over the other. It could also be a haphazard or essentially random anchoring on their part.
I am an advocate of using the right style for the appropriate circumstance. It is the Goldilocks viewpoint. You don’t want to select a choice that is either too hot or too cold. You want whichever one is best for the situation at hand. Meanwhile, keep the other style in your back pocket and use it in conjunction as warranted.
Don’t fall for a false dichotomy on this.
In any case, we can still consider which of the two styles or approaches are best suited for a given circumstance or condition. That is a very useful matter to pursue. You might be familiar with the old saying about possessing only one tool such as a hammer. If the only tool you know or have is a hammer, the rest of the world looks like a nail. There will be a tendency to use the hammer even when doing so is either ineffective or counterproductive. Having familiarity with multiple tools is handy, and on top of this knowing when to use each such tool is even handier.
Speaking of using multiple tools and particularly in prompt engineering, you can use the show-me and the tell-me in conjunction with each other, plus, you can use one or both in conjunction with additional prompt engineering techniques. For example, I’ve covered at length the use of personas as a prompt design approach, see my discussion at the link here. You indicate to generative AI that it is to adopt a persona such as pretending to be a lawyer or a medical doctor.
You could enter a prompt that gets the generative AI to adopt a persona and also provides a demonstration and/or documentation associated with the question or problem to be solved. Or maybe you want to use the show-me or tell-me to aid in delineating the nature of the persona. For instance, suppose you want a persona of a medical doctor that is infused with an especially good bedside manner, you might indicate to the generative AI that you want it to do so and also depict an example or demonstration. This could be a scenario in which you concoct a medical doctor interacting with a patient and being extra attentive. The generative AI will hopefully determine from your example the degree of bedside manner that you want the pretend medical doctor to exhibit.
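The persona-plus-demonstration combination described above can likewise be sketched as simple prompt assembly. The helper and the bedside-manner snippet are hypothetical, meant only to show how a tell-me persona instruction and a show-me example can coexist in one prompt:

```python
def build_persona_prompt(persona: str, demonstration: str, task: str) -> str:
    """Combine a persona instruction (tell-me) with a demonstration of the
    desired style (show-me) in a single prompt."""
    return (
        f"Adopt the following persona: {persona}\n\n"
        f"Here is an example of the desired style:\n{demonstration}\n\n"
        f"Task: {task}"
    )

prompt = build_persona_prompt(
    "a medical doctor with an especially good bedside manner",
    "Doctor: I know this news is worrying. Let's walk through it together, "
    "one step at a time, and I will answer every question you have.",
    "Explain a treatment plan for a sprained ankle to a nervous patient.",
)
```

The ordering here puts the persona first so the demonstration is interpreted in light of it, though you could just as readily flip the segments.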
Now that I’ve covered the essence of the show-me versus tell-me, let’s next take a close look at some recent research on this intriguing and important strategy for your prompt engineering toolkit.
Research On Show-Me Versus Tell-Me
I will explore a recent research study entitled “Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models” by authors Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, and Tomas Pfister that was posted online on August 1, 2023. I’ll showcase the study via selected excerpts from the research paper. You are encouraged to read the full study and relish the numerous details and nuances underlying the thought-provoking research presented.
Let’s take a look at some key precepts of the research effort (excerpts from the paper):
- “Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool’s usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorically and invariably becomes intractable.”
- “Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation—descriptions for the individual tool usage—over demonstrations. We substantiate our claim through three main empirical findings on tasks across both vision and language modalities.”
I trust that you can discern that this is an analysis of the show-me versus tell-me, or more formally noted as the demonstration versus the documentation approaches for prompt engineering.
They refer to the task of getting generative AI to make use of tools. I believe this warrants my sharing some additional context to help get you up to speed. Please allow me a moment to do so.
I’ve previously covered that generative AI is greatly being expanded in terms of capabilities by allowing these AI apps to call out to other applications, see my discussion about APIs (application programming interfaces) and generative AI add-ons, at the link here and the link here. For example, you might ask generative AI to calculate a complex mathematical formula, for which intrinsically this is not within the structure of the generative AI to properly perform. The generative AI might then via an API or add-on access a full-blown online calculator that can far exceed whatever arithmetic features the generative AI already is built with.
Suppose that a generative AI app has access to dozens of apps. How will the generative AI discern which app to invoke in a given circumstance? You certainly don’t want to have to ask the user to indicate which external tool should be used. We want the generative AI to make that determination. The problem grows considerably as the number of available tools rises. Getting the generative AI to be informed about perhaps hundreds of externally accessible tools is going to be onerous, by and large.
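To give a feel for documentation-driven tool selection, here is a deliberately naive Python sketch. The tool names and documentation strings are invented, and the word-overlap scoring is a stand-in for the far richer pattern-matching a generative AI would perform over the same documentation:

```python
# A hypothetical registry of external tools, each with a short
# documentation string (the tell-me). All entries are invented.
TOOL_DOCS = {
    "calculator": "Evaluates complex mathematical formulas and arithmetic.",
    "image_editor": "Crops, resizes, and retouches image files.",
    "vm_manager": "Creates and deletes virtual machines in the cloud.",
}

def pick_tool(request: str, tool_docs: dict[str, str]) -> str:
    """Score each tool by word overlap between the user's request and the
    tool's documentation, returning the best match. This merely illustrates
    the idea of selecting a tool from its docs rather than from demos."""
    request_words = set(request.lower().split())
    def score(doc: str) -> int:
        return len(request_words & set(doc.lower().rstrip(".").split()))
    return max(tool_docs, key=lambda name: score(tool_docs[name]))

choice = pick_tool("evaluate a complex mathematical formula", TOOL_DOCS)
```

Even this crude matcher picks the calculator for a math request; the research discussed below examines whether LLMs can do the real version of this purely from tool documentation.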
One ardent belief is that you ought to feed examples or demonstrations of how to use a given tool and let the generative AI pattern-match based on those examples. By doing that kind of data training, you can later on presumably have the generative AI select and use the appropriate tool when relevant to do so. As with my earlier warnings, there is no guarantee that this will always ensure that the right tool is used at the right times, or even that the generative AI will opt to invoke a tool at all.
Remember, we are talking about a box of chocolates.
An alternative perspective is that rather than using examples or demonstrations, perhaps the more viable route would be to feed documentation about the tools into the generative AI. The idea is that the documentation that describes the tools can be pattern-matched and henceforth the AI app will choose a given tool and be able to utilize the tool when so needed. You won’t have an ironclad guarantee on that, but it seems a worthy shot.
I believe that brings you sufficiently up-to-speed.
Let’s get back to the research study.
For this particular research effort, the researchers sought to explore the few-shot demonstrations approach versus the zero-shot documentation-reading approach. They mainly used ChatGPT, though this could be done with other generative AI apps too. A general precaution: these kinds of analyses can usually apply to most generative AI apps, though please exercise due caution in generalizing, since sometimes a specific generative AI might be of such a different structure that you cannot readily apply the lessons gleaned from one AI to another.
The tools that they selected consisted of roughly two hundred tools available for cloud services in the Google Cloud Platform (GCP). A generative AI app can issue commands for the GCP via the use of the command-line interface (CLI) such as for creating a virtual machine, image editing, video tracking, etc.
How would you get a generative AI app such as ChatGPT data-trained on the capabilities of these approximately 200 tools and be able to have the AI adequately select a tool when needed, along with invoking the tool in a fashion that will produce usable results that come back to the generative AI for purposes of answering questions or solving problems?
Consider the show-me versus the tell-me.
Creating a ton of examples or demonstrations might take a lot of hard work. Perhaps feeding canned documentation would be a lot easier. But if the documentation doesn’t cut the mustard, you might have wasted time, effort, and cost by using the documentation for this purpose. Perhaps the examples would be a more prudent way to go. Round and round this merry-go-round goes.
It is indeed a conundrum.
The use of examples or demonstrations has often been considered the more advantageous avenue. That is a basic assumption worthy of digging into. We need a kick in the pants from time to time, shaking us up out of mindless ruts.
Here’s what the paper has to say:
- “LLMs are expected to find patterns within these demos and generalize them for new tasks.”
- “We argue that this reliance on demos in tool using is unnecessary in some cases, and might be even limiting. In fact, recent work finds that LLMs tend to be sensitive to demos, and carefully selecting demos is needed to avoid biasing or overfitting to a particular usage.”
- “Just as a craftsman doesn’t need to see a new tool being demonstrated and can instead discern their capabilities from reading a user manual for the tool, we seek to enable LLMs to learn how to use tools without seeing any demos.”
- “Our work provides an alternative to demonstrations: tool documentation (doc).”
The emphasis of the study is that the tell-me might be a solid option, while the show-me can have distinct disadvantages, including that the extrapolation or generalizations derived by the generative AI might be incorrect or otherwise faulty. Maybe this is less so with the documentation or tell-me approach. Due to space limitations here, I won’t be able to go into the nitty-gritty details of the research, though again I encourage you to read the full paper if you would like to see the experiments they performed.
Bottom-line, they concluded this:
- “Surprisingly, when provided with tool docs, LLMs’ zero-shot tool-using performance is on par or even better than their few-shot counterparts, showing that including docs is an effective way to sidestep the few-shot demos needed.”
They concluded that in this instance, based on their experimental design and a slew of other key assumptions, the zero-shot documentation approach (the tell-me) was found to be on par with and in some ways even better than the few-shot demonstrations approach (the show-me).
Does this mean you can forever hence loudly proclaim that the tell-me is the one and only way to always proceed?
Nope.
You would be making a big mistake proffering such a declaration. In this one study, the outcome came out that way. This helps to gently knock people on the head and awaken them to not fall into the trap of exclusively using the show-me and never leveraging the tell-me. The tell-me should always be on your dance card (along with the show-me). Period, end of story.
Conclusion
For readers who have seen some of my prior postings about advances in prompt engineering, you might recall that I often have been using as a base scenario a legal case involving two executives of a firm (Bob and Alice), wherein Bob seemingly breaches a fiduciary duty at the firm. I have tried out the various prompting strategies in that scenario.
I did the same with the show-me versus the tell-me. If there is reader interest, I can cover that in another posting.
The upshot was that the show-me versus the tell-me did not especially differentiate much in the generated response by the AI app for the legal scenario being used. I would attribute this to the fact that the scenario is one that the generative AI seems to be able to readily solve anyway, absent either a show-me or a tell-me.
This takes us back to my above-listed tips or suggestions. Make sure to consider the nature of the question or problem that you are seeking to have the generative AI tackle. A show-me might be more effective, or a tell-me might be more effective, or neither one might make a difference. I suppose at least I could conclude that in that legal scenario, they were both essentially equal in whatever impact they had.
Shifting gears, among humans, the debate over the human communication merits of show-me versus tell-me will endlessly occur. This makes sense since there are a zillion variables that determine whether a show-me or a tell-me is the optimum or best choice in any given situation. You can talk yourself blue in the face and be talking past someone else to justify why you opt toward one or the other approach.
For generative AI, make sure to consider the context of the situation and I urge that you keep your options open, including these:
- (1) Show-me (only): Demonstration (only)
- (2) Tell-me (only): Documentation (only)
- (3) Show-me and tell-me: Demonstration and documentation jointly
- (4) Show-me then tell-me: Demonstration first and then documentation
- (5) Tell-me then show-me: Documentation first then demonstration
- (6) Show-me and tell-me interleaved: Demonstration and documentation interleaved
- (7) Combined with other prompting strategies: Do any of the above and combine with personas, chain-of-thought, skeleton-of-thought, etc.
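The ordering options above can be sketched as one small prompt-assembly helper. Everything here is hypothetical scaffolding meant to illustrate the combinations, not a prescribed implementation:

```python
def compose_prompt(task: str, demonstration: str = "", documentation: str = "",
                   order: str = "show_then_tell") -> str:
    """Combine a show-me segment (a demonstration) and a tell-me segment
    (documentation) in either order, omitting whichever is absent. This
    covers the show-only, tell-only, and both-in-either-order options."""
    show = f"Example:\n{demonstration}" if demonstration else ""
    tell = f"Instructions:\n{documentation}" if documentation else ""
    if order == "tell_then_show":
        segments = [tell, show]
    else:  # default: demonstration first, then documentation
        segments = [show, tell]
    segments.append(f"Task: {task}")
    return "\n\n".join(s for s in segments if s)

# Tell-me first, then show-me (option 5 in the list above).
p = compose_prompt(
    "Explain how to cook an omelet.",
    demonstration="Here is how a frittata is cooked...",
    documentation="1. Gather ingredients. 2. Heat the pan. 3. Cook the eggs.",
    order="tell_then_show",
)
```

Leaving either argument empty yields the show-only or tell-only variants, while supplying both and toggling `order` covers the sequenced combinations.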
You will need to use your wits to ascertain which approach has the appropriate payoff for your generative AI usage. This isn’t an on-or-off choice. The show-me and the tell-me are a set of tools in your prompt engineering repertoire. Use them wisely.
A final remark for now.
Actor Tom Hanks in the movie Forrest Gump said this (spoiler alert): “Mama always had a way of explaining things so I could understand them.” The implied notion for our purposes here is that on-target answers sometimes benefit greatly from the show-me, sometimes from the tell-me, sometimes both, and sometimes neither, thus well illustrating that you have to employ a situationally suitable prompting technique in the right way at the right time.
“That’s all I have to say about that.”