A hint is all you need.
Well, sometimes.
We all make use of hints quite frequently in our daily lives. Perhaps you might drop a subtle hint to your boss that it is about time for you to get a raise. That’s a hint you avidly hope will pay off. Let’s try another hint-laden situation. While driving your car, a family member in the passenger seat drops a heavy-handed hint that you seem to be going a bit fast. Annoyed by the hint, you decide to pay it no mind.
And so on.
Robert Frost, the famous American poet, said this about hints (particularly when used in a family context): “The greatest thing in family life is to take a hint when a hint is intended, and not to take a hint when a hint isn’t intended.” It would seem that this sage advice applies to all manner of hints, going far beyond those of a familial nature.
Consider where and when we generally make use of hints. For example, per the above poetic adage, we shouldn’t find hints where none exist. Another rule is that we should at least entertain a hint when it is purposely proffered to us. A slew of rules of thumb and ad hoc policies exist about the do’s and don’ts associated with hints.
Let me drop a perhaps surprising hint on you, namely that hints are significant for AI too.
Allow me to elaborate.
In today’s column, I am continuing my popular series on the latest advances in prompt engineering, focusing in this discussion on the pragmatic role of including hints as an integral element of your use of generative AI. Infusing hints into prompts can be highly advantageous. I’ll explain why and how this is best undertaken. The formal name for this technique is Directional Stimulus Prompting (DSP).
A keystone of generative AI is that composing prompts is both a science and an art. If you do a lousy job of coming up with prompts, the odds are that the generative AI is going to generate answers for you that are underwhelming and potentially off-target. The happy face of this is that if you can compose good prompts, there is a genuine possibility of getting generative AI to produce extremely useful results for you.
Hints demonstrably come into this milieu.
Hints can play a substantial role when you are entering prompts into any and all generative AI apps, including those such as ChatGPT, GPT-4, Bard, and the like. A little-known yet superb technique for those who avidly practice prompt engineering best practices is to leverage hints as part of your prompting strategy. A hint can go a long way toward getting generative AI to provide you with stellar results.
I dare say that a lot of generative AI users do not realize that hints are vital and ought to be strategically leveraged in their prompts. That’s a shame. A darned shame. The use of hints when well-placed and well-timed can spur generative AI to emit better answers and attain heightened levels of problem-solving.
Yes, there is gold in those AI hills that can be found at the feet of proper prompting hints.
Hints ordinarily fall into the category of good prompting, though only if you know how to devise hints and make use of them properly. I say this because you can easily mess up when trying to give hints to generative AI. I don’t want to imply any anthropomorphizing of AI, but in a sense there is a real possibility of generative AI misconstruing your hints, ignoring your hints, or otherwise acting in ways that we think of as reserved for humans reacting to hints. Keep clearly in mind that the AI is working computationally and not in a sentient capacity.
A brief side tangent is worthwhile here.
Please know that the AI of today is not sentient. Period, end of story. Don’t fall for those banner headlines that seem to suggest that AI is sentient or on the verge of sentience. That’s just not so. The reason that generative AI might respond to hints in ways that seem human-like is entirely due to computational pattern-matching that has examined and attempted to mimic human writings. By having scanned tons and tons of human written materials on the Internet, a computational pattern-match of how we use words can come across as though the AI is acting in human-like ways.
Give serious and mindful consideration to the voluminous written material that has been computationally scanned and patterned to data-train the generative AI. Amidst that sea of text were undoubtedly vast quantities of hints and uses of hints. That content became fodder for computational pattern-matching.
My point is that generative AI can likely detect and respond to hints due to the many patterns analyzed and mimicked based on humans employing hints. Ergo, do not conflate this with sentience on the part of the AI. It is just an impressive and computationally immense capability shaped around human writing.
Back to our compelling interest in hints.
Let’s first cover some overarching essentials about hints and then smoothly shift into exploring ways to use hints when you devise your prompts. I will showcase numerous examples to get you into the mindset of regularly using hints as part of your prompt engineering prowess and personal prompting toolkit. In addition, we will examine some of the latest research on prompt engineering and especially advances associated with the use of hints or DSP when prompting.
I would like to hint to you that you should fasten your seatbelt for this heady discussion, but perhaps instead I will just come right out and tell you that this is going to be a wild ride and therefore alert you that it would be best to buckle up. Sometimes the direct path to communication is indeed best. Other times, the indirect or hinting path is best. The situation and circumstance at hand determine which direction is the best way to go.
Before I dive into my in-depth exploration of this vital topic, let’s make sure we are all on the same page when it comes to the foundations of prompt engineering and generative AI. Doing so will put us all on an even keel.
Prompt Engineering Is A Cornerstone For Generative AI
As a quick backgrounder, prompt engineering, also referred to as prompt design, is a rapidly evolving realm and is vital to effectively and efficiently using generative AI and large language models (LLMs). Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc. ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.
For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful explorations on the latest in this expanding and evolving realm, including this coverage:
- (1) Practical use of imperfect prompts toward devising superb prompts (see the link here).
- (2) Use of persistent context or custom instructions for prompt priming (see the link here).
- (3) Leveraging multi-personas in generative AI via shrewd prompting (see the link here).
- (4) Advent of using prompts to invoke chain-of-thought reasoning (see the link here).
- (5) Use of prompt engineering for domain savviness via in-model learning and vector databases (see the link here).
- (6) Augmenting the use of chain-of-thought by leveraging factored decomposition (see the link here).
- (7) Making use of the newly emerging skeleton-of-thought approach for prompt engineering (see the link here).
- (8) Determining when to best use the show-me versus tell-me prompting strategy (see the link here).
- (9) Gradual emergence of the mega-personas approach that entails scaling up the multi-personas to new heights (see the link here).
- (10) Discovering the hidden role of certainty and uncertainty within generative AI and using advanced prompt engineering techniques accordingly (see the link here).
- (11) Vagueness is often shunned when using generative AI but it turns out that vagueness is a useful prompt engineering tool (see the link here).
- (12) Prompt engineering frameworks or catalogs can really boost your prompting skills and especially bring you up to speed on the best prompt patterns to utilize (see the link here).
- (13) Flipped interaction is a crucial prompt engineering technique that everyone should know (see the link here).
- (14) Leveraging are-you-sure AI self-reflection and AI self-improvement capabilities is an advanced prompt engineering approach with surefire upside results (see the link here).
- (15) Know about the emerging addons that will produce prompts for you or tune up your prompts when using generative AI (see the link here).
- (16) Make sure to have an interactive mindset when using generative AI rather than falling into the mental trap of one-and-done prompting styles (see the link here).
- (17) Prompting to produce programming code that can be used by code interpreters to enhance your generative AI capabilities (see the link here).
- (18) Make sure to consider Target-Your-Response considerations when doing mindful prompt engineering (see the link here).
- (19) Additional coverage including the use of macros and the astute use of end-goal planning when using generative AI (see the link here).
- (20) Showcasing how to best use an emerging approach known as the Tree of Thoughts as a leg-up beyond chain-of-thought prompt engineering (see the link here).
Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.
Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days:
- The use of generative AI can altogether succeed or fail based on the prompt that you enter.
If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything of substance related to your inquiry. Being explicitly specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses on suitable ways to compose and utilize prompts has been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.
AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).
There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.
With the above as an overarching perspective, we are ready to jump into today’s discussion.
How Hinting Is Vital As A Best Practice For Prompt Engineering
Let’s first explore the nature of hints in general. Doing so will be handy and provide a suitable foundation for using hints as a prompting strategy for attaining top-tier results with generative AI.
A hint is typically an indirect way to communicate something that you want to say but are hesitant to say outright. The hesitancy might be due to potential discomfort. Perhaps gently dropping a hint will save face for you and likewise save face for the other party receiving the hint. Delivering a hint can carry greater aplomb than launching into some lengthy diatribe. We’ve all used hints in this manner, many times, in the customary course of our hectic lives.
A hint can also simply be a shortcut means of communicating. In that sense, hints can be a time saver if used suitably. Suppose you and a friend are playing doubles tennis. The easiest and fastest way to let your partner know that they should be watching for the tennis ball might consist of a quickly shouted hint. You don’t have the time or available breath to yell a full stream of exhortations and instructions.
There are additional reasons for hints to be used. A tutor might offer a subtle hint to a pupil who is supposed to be doing their homework, doing so for a decidedly beneficial purpose. The use of a hint in such an instance is intended to aid the student and serve as a spark toward them being able to solve a knotty problem largely on their own. It is the proverbial adage of nudging someone to learn how to fish rather than doing the fishing for them.
If you were to give a contemplative mindful moment to the inherent value of hints, I’m sure you would ascertain that they are at times the essential communicative grease that keeps the world smoothly running (when used proficiently).
Let’s agree then that generally a hint can do these two major things:
- Hints can be directional. A hint can be a directional clue for nudging or veering a targeted recipient toward some desired goal or path.
- Hints can be informational. A hint can convey information that is envisioned to aid a targeted recipient.
The usual approach consists of a twofer, namely that a provided hint is aimed to be both a directional indication and an informational indication. That being said, it is possible for a hint to be only one of the two. The context will be a factor, and so will the targeted recipient.
We have to acknowledge that a targeted recipient might completely miss a given hint. The hint can go over someone’s head. They don’t realize that a hint was provided to them. That can be exasperating to the person handing out the hint. The odds are that if an initial hint doesn’t do the trick, additional hints will need to be conveyed. Sometimes the hints do not score a touchdown, and the last resort consists of outright telling the recipient the whole story and giving up on the hinting gambit.
Russell Page, the legendary British landscape architect, said this about hints: “A discerning eye needs only a hint, and understatement leaves the imagination free to build its own elaborations.”
Keep that in mind when someone gives you a hint or when you share a hint with others.
Shifting gears, let’s delve into using hints with generative AI.
You can readily and advantageously use hints in your prompts as part of your prompt engineering toolkit and all-told acumen. The idea of using hints in a prompt is that you might want to clue the generative AI in on what you hope to get as an answer. Thus, a hint included in your prompt can once again do two things: be directional to the generative AI and also possibly be informational to the generative AI.
Here’s a recommended best practice. I prefer to make it abundantly clear that I am giving a hint within my prompt. This consists of always openly telling the AI app that I am providing a hint.
Contrast this practice with giving hints to people. If you are giving a hint to a human, we all know that the person might get quite upset at being given a hint. The person might feel belittled. They might go into a rage. All kinds of unfortunate reactions can occur.
Generally, when you explicitly tell generative AI that you are providing a hint, the AI app will readily embrace the hint. No outbursts. No protestations. As an aside, it is not happenstance that you won’t get a negative reaction. You see, since the generative AI was data-trained on vast volumes of human writing from the Internet, there would naturally be a strong statistical possibility of the AI responding negatively, mimicking the written efforts of humans. But the AI maker typically undertakes a pre-release filtering process to data-train the AI app to avoid being irksome in that way (a process often using RLHF, reinforcement learning from human feedback, see my discussion at the link here).
I’m not saying that you won’t ever get any pushback from generative AI when you provide a hint. It can happen. The chances are that any such negative feedback will be of a mild caliber. Perhaps the AI might indicate that the hint wasn’t clear enough to be useful. Or the hint might be deemed irrelevant. You will seldom get a response that says the hint was outrageously vacuous or otherwise stupid. Only a relatively unfiltered generative AI is likely to emit that kind of response, see my coverage about such AI at the link here.
Examples Of Hint-Oriented Best Practices When Using Generative AI
We will next explore a series of examples involving the use of hints in prompts.
First, let’s look at using a single hint and see how doing so can make a sizable difference in the answer that you might get from generative AI. I am going to ask generative AI about a potential foot race between a tortoise and a hare. Initially, I won’t include any hints at all. This is just a straight-ahead question for the AI app.
No hint:
- My entered prompt into ChatGPT: “Which will win a race, a tortoise or a hare?”
- ChatGPT generated response: “In the famous fable ‘The Tortoise and the Hare’ attributed to Aesop, the tortoise wins the race against the hare. The story illustrates the moral lesson that slow and steady progress often prevails over arrogance and overconfidence. While the hare is much faster, it becomes complacent and takes a nap during the race, allowing the tortoise to steadily and consistently advance to the finish line, ultimately winning the race. This fable has been used to teach the value of persistence and humility for generations.”
Notice that the AI answered by telling me about the famous fable involving a tortoise and a hare.
There wasn’t anything in my prompt that referred to the fable. I didn’t do anything that steered the AI in that direction per se. Of course, I would nearly bet that any person who was given the same question would almost surely assume that I was invoking the famous tale. The AI app did the same, though please realize this was not due to sentience but simply due to pattern-matching based on the human writing that the AI was data-trained on.
Suppose that I had anticipated that the AI app might inadvertently assume my question had to do with the fable. I could have written the question to carefully explain that I am not referring to the fable. A possible downside here is that using the generic word “fable” could spur the AI app into a fable-focused context, despite my insisting that my question has nothing to do with the fable (yes, you can cause a context to be invoked merely by saying you don’t want that particular context!).
Rather than having to hassle with composing a lengthy indication, I will just use a quick hint. Furthermore, I will make sure to label the hint as a hint.
Here is a single hint at the end of my posed question:
- My entered prompt into ChatGPT: “Which will win a race, a tortoise or a hare? Hint: Real world.”
- ChatGPT generated response: “In the real world, a hare (rabbit) is significantly faster than a tortoise (turtle). Hares are known for their speed and agility, whereas tortoises are slow-moving creatures. In a straightforward race without any unusual circumstances or storytelling elements, the hare would almost certainly win against the tortoise.”
All that I did was provide a hint indicating the “real world” and you can plainly see that the AI app figured out what I wanted. The generated answer refers to the real-world aspects of a tortoise racing against a hare.
My hint was short and sweet.
Some might argue that I didn’t necessarily have to label the hint by explicitly stating it was a hint. Indeed, admittedly, much of the time a hint can be more informally tossed into your prompt. For me, I prefer to mention that a hint is being given. I find this easy to do, and it is more likely to prod the AI app into accepting my hint as a hint.
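To illustrate how lightweight this labeling is in practice, here is a minimal sketch in Python, assuming the pre-v1 openai package and a placeholder API key; the ask_with_hint helper is my own naming, not an established API:

```python
# A minimal sketch of explicitly labeling a hint within a prompt.
# Assumes the pre-v1 openai Python package; adapt to your own setup.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask_with_hint(question: str, hint: str) -> str:
    # Append an openly labeled hint so the AI treats it as a hint.
    prompt = f"{question} Hint: {hint}."
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The tortoise-and-hare example discussed above.
print(ask_with_hint("Which will win a race, a tortoise or a hare?", "Real world"))
```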
Another best practice when using prompting hints consists of listing together several hints at once. If feasible, you can use a series of keywords. Make sure that you choose your keywords carefully. A keyword that is overly ambiguous or that might have multiple interpretations can lead the AI astray from where you want to go.
Let’s see how multiple hints can be used to the benefit of the answer that I want to get from generative AI.
I am going to start by asking generative AI about Abraham Lincoln and provide no hints at all:
- My entered prompt into ChatGPT: “Tell me about Abraham Lincoln in one brief paragraph.”
- ChatGPT generated response: “Abraham Lincoln was the 16th President of the United States, serving from 1861 to 1865. He is renowned for his leadership during the American Civil War, his efforts to abolish slavery through the Emancipation Proclamation and the Thirteenth Amendment, and his enduring impact on American ideals of freedom and equality. Lincoln’s tragic assassination in 1865 cemented his status as one of the nation’s most revered leaders, and his Gettysburg Address remains an iconic expression of American values.”
After taking a look at the generated response, suppose I discover that some important facts about Lincoln were not included in the short essay. I will therefore ask the question again and this time include some hints of subtopics I wish to have included.
Multiple hints are given at the end of my instruction:
- My entered prompt into ChatGPT: “Tell me about Abraham Lincoln in one brief paragraph. Hint: Log cabin, lawyer, Illinois.”
- ChatGPT generated response: “Born in a humble log cabin in Kentucky, Abraham Lincoln rose to become a self-taught lawyer in Illinois. He later became the 16th President of the United States, serving from 1861 to 1865 during a critical period in American history, the Civil War. Lincoln’s leadership was marked by his commitment to preserving the Union and his pivotal role in ending slavery through the Emancipation Proclamation and the passage of the Thirteenth Amendment. His enduring legacy as a symbol of American democracy and equality continues to inspire generations.”
Sure enough, you can see that the resulting essay has mentioned facts regarding my three hints, namely mentioning that Lincoln was born in a log cabin, plus his having been a lawyer and working in Illinois. I trust that you can see how seamlessly the hints worked. The AI app didn’t somehow make a big deal out of my hints in the sense of dramatically declaring that the hints were received and utilized. Instead, the essay reflects an interpretation of my efficiently provided hints.
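If you use keyword hints regularly, composing the labeled suffix can be automated. Here is a minimal sketch, with the with_hints helper being my own naming, that joins several keywords into the same “Hint:” style shown in the Lincoln prompt above:

```python
def with_hints(question: str, hints: list[str]) -> str:
    # Join several keyword hints into a single labeled "Hint:" suffix,
    # matching the prompt style used in the Lincoln example above.
    return f"{question} Hint: {', '.join(hints)}."

prompt = with_hints(
    "Tell me about Abraham Lincoln in one brief paragraph.",
    ["Log cabin", "lawyer", "Illinois"],
)
# Produces: "Tell me about Abraham Lincoln in one brief paragraph.
#            Hint: Log cabin, lawyer, Illinois."
```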
Easy-peasy, so far.
Turns out that hints are a double-edged sword when it comes to prompting.
Here’s what I mean.
In the example about the essays on Lincoln, you might have keenly noticed that the first essay mentioned his assassination, a prominent topic when discussing the life and times of Lincoln. The second essay did not do so. I am guessing that since I had restricted the answer to a single paragraph, and since I had provided hints on other subtopics to be included, something had to give way in the response. You could argue that my hint caused an inadvertent adverse outcome if indeed I was hoping or assuming that the assassination subtopic was going to be included in the second essay.
We can further pursue the downside of hints, serving as a heads-up of what to watch out for.
I next opt to ask a question that has no hints and relates to the famous riddle of the Sphinx:
- My entered prompt into ChatGPT: “What walks on four legs in the morning, two legs at noon, and three legs in the evening?”
- ChatGPT generated response: “This is a classic riddle from Greek mythology. The answer is a human being. In the morning of life, a person crawls on all fours as a baby. In the noon of life, they walk on two legs as an adult. And in the evening of life, they use a cane, representing the third “leg,” as they become elderly.”
The answer generated is a classic. The answer is right. But imagine that I want to test out hints and see what happens if I give a hint that perhaps goes a bit overboard.
Here I try the same question and include a hint that says “not a human”:
- My entered prompt into ChatGPT: “What walks on four legs in the morning, two legs at noon, and three legs in the evening? Hint: Not a human.”
- ChatGPT generated response: “My apologies for the previous response. The correct answer to this classic riddle is a ‘riddle itself.’ Here’s the explanation: In the morning of its existence, a riddle is like a puzzle that’s not easily understood, so it ‘walks’ on four legs, representing complexity. At noon, as people ponder it and gather clues, it becomes more decipherable and walks on two legs. In the evening, when it’s typically solved, it ‘walks’ on three legs because it is understood with the help of a hint or clue, adding that third aspect to its nature.”
My hint was rather strong in the sense that I came out and implied that the answer could not consist of being a human. If you gave such a hint to a person, I am betting they would be stumped at what kind of answer you wanted. We all know and accept that the answer to the riddle is in fact a human.
Look at what generative AI emitted.
The AI app said that the answer to the riddle is that the riddle itself fits the stated criteria. Do you consider this to be a creative response or a nonsensical one? You might assert that the answer is creative and at least a seemingly valiant attempt to comply with the hint. On the other hand, you might argue that the answer is zany and out of line.
An overall difficulty with most generative AI is that the AI app will often not push back when perhaps it should. For example, the AI could have responded by saying that there is no other valid answer other than the answer of being a human. You will rarely get that kind of reply, as I’ve mentioned earlier herein. The tuning and data-training of generative AI by the AI makers is often done to induce generative AI to generate an answer even if the answer would seem marginal or highly questionable.
I repeatedly forewarn in my workshops on prompt engineering that generative AI is like a box of chocolates, notably that you never know exactly what you will get. The under-the-hood algorithms used for generative AI usually invoke a probabilistic and statistical undertone to the pattern-matching. The good news is that this means that the response that generative AI gives to you is going to be somewhat unique and unlike all other responses that it has previously given. The bad news is that you cannot predict precisely what the AI will emit.
The roll of the dice comes into play when using generative AI.
You might have noticed that I have been mentioning that hints can vary in terms of their degree such as being weak or strong hints. As a best practice, go ahead and employ strong hints if you are sure of what you want and you also are aiming to spur the AI to a particular desired target. Use weak hints if you are willing to allow the AI some latitude and possibly have the AI come up with something that you hadn’t totally anticipated.
We can playfully use the legendary sorites paradox as an example illustrating the use of strong versus weak hints. You might recall that the sorites paradox has to do with a heap of sand and gives rise to interesting considerations about the vagueness of everyday language.
Here is my question with no hint included:
- My entered prompt into ChatGPT: “A heap of sand is defined as consisting of one million grains of sand. The removal of one grain is considered insignificant thus the heap would still be considered a heap after removing one grain. If one grain after another was gradually removed, at what point would the heap no longer be a heap?”
- ChatGPT generated response: “This is a classic example of the ‘sorites paradox’ or ‘paradox of the heap,’ which is a philosophical puzzle related to the concept of vagueness. There is no precise point at which a heap of sand becomes a non-heap (i.e., a single grain), as it’s a matter of gradual and subjective change. The paradox highlights the limitations of language and categories in dealing with continuous and gradual changes. It shows that there is no clear, universally agreed-upon boundary between a heap and a non-heap because the definition is inherently vague. Different people may have different thresholds for what they consider a heap or a non-heap, making it a matter of personal judgment rather than a fixed, objective criterion.”
The generative AI response has landed squarely on the sorites paradox answer, suitably so.
A big issue about the paradox is whether we can solve the paradox by establishing a threshold for what defines a heap. For example, I might decide that if you take more than 10,000 grains from a pile of sand that had one million grains, it no longer is considered a heap.
I will give a weak hint about this turn of events:
- My entered prompt into ChatGPT: “A heap of sand is defined as consisting of one million grains of sand. The removal of one grain is considered insignificant thus the heap would still be considered a heap after removing one grain. If one grain after another was gradually removed, at what point would the heap no longer be a heap? Hint: 10,000.”
- ChatGPT generated response: “In this context, with the hint provided, it suggests that you’re looking for a numerical answer to when a heap would no longer be considered a heap. If you define a heap as consisting of one million grains of sand, and you want to know when it would no longer be considered a heap, then it would be when you have 999,999 grains of sand remaining. So, by removing 10,000 grains, you would have 990,000 grains left, which is less than the original definition of one million grains for a heap. However, it’s important to note that this answer is arbitrary and doesn’t address the philosophical problem of the sorites paradox or the vagueness of concepts like “heap” in a broader sense.”
The weak hint wasn’t enough to clearly get the AI in the direction of realizing that I am trying to establish a new definition for a heap, or at least provide a clarification for the definition.
Let’s try again but this time with a stronger hint:
- My entered prompt into ChatGPT: “A heap of sand is defined as consisting of one million grains of sand. The removal of one grain is considered insignificant thus the heap would still be considered a heap after removing one grain. If one grain after another was gradually removed, at what point would the heap no longer be a heap? Hint: Threshold 900,000.”
- ChatGPT generated response: “If you define a heap of sand as consisting of one million grains and the removal of one grain is considered insignificant, and you’re given the hint that the threshold is 900,000, then the heap would still be considered a heap until you have fewer than 900,000 grains of sand remaining. So, once you’ve removed enough sand grains to have 899,999 grains left, it would no longer be considered a heap based on the given definition.”
Voila, my stronger hint seems to have done the trick.
Research On Leveraging Hints For Guiding AI
The notion of using hints to steer or directionally guide AI has been around for a long time. There was a period of time in the 1990s when hints were on the hot list of things to consider when devising artificial neural networks (the same technology that underlies today’s generative AI). One of the classics from that time period was a research paper entitled “A Method For Learning From Hints” by Yaser Abu-Mostafa, appearing in Advances in Neural Information Processing Systems, published in 1992.
Here are some salient excerpts:
- “The use of hints is coming to the surface in a number of research communities dealing with learning and adaptive systems. In the learning-from-examples paradigm, one often has access not only to examples of the function, but also to a number of hints (prior knowledge, or side information) about the function.”
- “We address the problem of learning an unknown function by putting together several pieces of information (hints) that we know about the function.”
- “We introduce a method that generalizes learning from examples to learning from hints. A canonical representation of hints is defined and illustrated for new types of hints.”
Zoom forward to current times and the notion of using hints has further expanded to include the use of hints as a prompting strategy for generative AI. A formalized way of expressing a hinting approach to prompting is the weighty moniker of Directional Stimulus Prompting (DSP). The naming does make sense. You are using prompts to essentially act as a stimulus to the AI app. The stimulus serves as a directional guide. I would also add that the stimulus or hint can be informational too, as mentioned earlier herein.
Let’s take a quick look at a recent research paper entitled “Guiding Large Language Models Via Directional Stimulus Prompting” by Zekun Li, Baolin Peng, Pengcheng He, Michel Galley, Jianfeng Gao, and Xifeng Yan, posted online on July 7, 2023.
They say this about the use of hints or directional stimuli:
- “We introduce a novel prompting framework called Directional Stimulus Prompting for guiding black-box large language models (LLMs) toward desired outputs. The framework introduces a new component called directional stimulus into the prompt, providing more fine-grained guidance and control over LLMs. The directional stimulus serves as hints or cues for each input query to guide LLMs toward the desired output, such as keywords that the desired summary should include for summarization.”
- “Experimental results show that our framework consistently improves ChatGPT’s performance over standard prompting with a small collection of training data, and reinforcement learning further improves the performance. Notably, on the MultiWOZ dataset, our framework enables ChatGPT to achieve a remarkable 41.4% improvement in its combined score with only 80 dialogues, matching or even surpassing the performance of some fully trained state-of-the-art models.”
According to their experiments, they found that the prudent use of hints or DSPs was instrumental in improving generative AI performance. They gave various examples. A specific focus covered an in-depth exploration involving summarizing articles via the use of hints.
Let’s see how this works. For the sake of space, I am not going to show the full article that is being summarized; I refer you to the research paper to read the referenced article, along with perusing the many intriguing facets of their study.
They asked the generative AI to summarize a given article and do so in a few sentences (no hints provided to the AI):
- “Q: Summarize the above article briefly in 2-3 sentences.”
- “A: Bob Barker, the former host of ‘The Price Is Right’, returned to the show for the first time in 8 years. He hosted the first game of the show, and then handed the hosting duties over to Drew Carey.”
The full article contains a lot of additional information. The generative AI app opted to include some aspects and not others. This presumably was done on a semi-random basis by the AI, and also because the prompt required that the summary be squeezed into just a few sentences.
A new prompt was devised that contained several hints, namely that the summary should mention TV, the date of April 1, the year 2007, and the number 91. Here is the same prompt as above but with their hints included:
- “Q: Summarize the above article briefly in 2-3 sentences based on the hint. Hint: Bob Barker; TV; April 1; ‘The Price Is Right’; 2007; 91.”
- “A: On April 1, Bob Barker returned to the TV show ‘The Price Is Right’ after eight years. Looking spry at 91, Barker handled the first price-guessing game. Barker stepped down as host in 2007 but didn’t seem to miss a beat.”
By and large, the new version of the produced essay contains content encompassing the stated hints.
The researchers characterized the devised approach in this manner:
- “To this end, our Directional Stimulus Prompting (DSP) approach introduces a small piece of discrete tokens z named “directional stimulus” into the prompt, which serves as hints or cues to provide LLMs with fine-grained guidance toward the desired direction. For example, for the summarization task, the directional stimulus z might consist of keywords that should be included in the desired summary.”
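To make the mechanics concrete, here is a minimal sketch of the summarization template implied by the excerpt above; the function name and placeholder article are my own, and note that the actual DSP framework additionally trains a small policy model to generate the stimulus keywords, which is not reproduced here:

```python
def dsp_summary_prompt(article: str, stimulus_keywords: list[str]) -> str:
    # Build a summarization prompt carrying a directional stimulus (hint),
    # mirroring the Q/A template quoted from the research paper above.
    hint = "; ".join(stimulus_keywords)
    return (
        f"{article}\n\n"
        "Q: Summarize the above article briefly in 2-3 sentences "
        f"based on the hint. Hint: {hint}."
    )

article_text = "..."  # the full article being summarized goes here
prompt = dsp_summary_prompt(
    article_text,
    ["Bob Barker", "TV", "April 1", "'The Price Is Right'", "2007", "91"],
)
```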
Where Do These Various Hints Come From
One means of coming up with hints for inclusion in your prompts is by using your own noggin. You ought to think ahead before asking generative AI a question. Anticipate what the AI might need to suitably address your question.
Ask yourself this: “What do I need to indicate to the AI to sufficiently clue in the AI about what I want?”
Some portion of your prompt can be the question that you want to pose. You can also then include additional detailed information that might be fruitful for the AI. Or you can just try using hints. It is all up to you.
A hint might be easier and faster to come up with, and easier and faster to enter into your prompt. Also, you might be desirous of seeing what else the AI derives when given the greater latitude of a hint versus the more delineated and restrictive detailed information you might have provided.
Will the hint always do the trick? No. In some circumstances, a hint won’t be enough. You might need to spell out in greater detail what you really want. Hints need to be used judiciously.
Another means of gleaning hints is by having AI or some automated tool produce hints for you. The research study noted above made use of a specialized model to generate hints. You can expect that generative AI will gradually be augmented with added capabilities, including being able to generate on-the-fly hints for you to use with generative AI. I realize that seems circular, but it does make abundant sense, see my discussion at the link here on advances in using AI to improve the use of AI.
Hints can arise via either or both of these means:
- (1) Human-derived. The person entering a prompt opts to come up with hints and then enters the hints into their prompt.
- (2) AI or tool-derived. A person using generative AI relies upon a tool or AI that can provide hints for use in prompts.
In the second instance of using AI or a tool to derive hints, you can further subdivide that category into two major classifications. One is that the AI or tool presents a suggested hint or set of proposed hints to a human, and the human then decides whether to use those hints in their prompt. The second case is when you automatically have the AI or tool create prompts containing hints and do so without a human directly in the loop.
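As a lightweight stand-in for the specialized hint-generating model used in the research study, you could simply ask the generative AI itself to propose keyword hints and then feed them back into a later prompt. Here is a minimal sketch under that assumption, again using the pre-v1 openai package; the generate_hints name is my own:

```python
import openai

def generate_hints(article: str, num_hints: int = 5) -> list[str]:
    # Ask the AI to propose keyword hints that a later summarization
    # prompt can include as its directional stimulus.
    request = (
        f"{article}\n\n"
        f"List {num_hints} short keywords from the above article, one per "
        "line, that a good summary of the article ought to include."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": request}],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip("-• ").strip() for line in lines if line.strip()]
```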
Having AI Give You Hints Instructively For Your Benefit
In my discussion up until this juncture, the mainstay of hint-related prompting has gone down a one-way path of a hint being given to the generative AI. You enter a hint to the AI. The AI then hopefully processes the hint suitably and accordingly alters or produces a response because of having examined the hint. My usual caveats apply, namely that you always have to be on your toes since the AI might miss the hint or otherwise misinterpret it.
Time for a bit of a twist to the aforementioned one-way path.
Are you ready?
We might at times want a two-way street.
Here’s the deal. I’ve previously covered in my columns the so-called flipped interaction that you can have with generative AI, see the link here. A flipped interaction involves having the AI ask you questions. This might seem like an odd thing to do. Having the AI ask you questions can serve a number of useful purposes, such as data training the AI on a particular topic or possibly having the AI test you on a topic of interest if that’s what you want to do.
The bottom line is that you can at your discretion opt to explicitly ask the AI to give you hints on topics.
An example will illustrate this.
Envision that someone wants to change the oil in their car. They have done it many times before. They know generally what to do. But they are rusty at doing so. A quick refresher might be handy.
Here’s what they might ask generative AI:
- My entered prompt into ChatGPT: “I am trying to figure out how to change the oil in my car. Briefly explain how to do so in one paragraph, doing so only with hints.”
- ChatGPT generated response: “Changing your car’s oil involves finding essential components, safely raising your vehicle, replacing old parts with new ones, and ensuring proper fluid levels. Always refer to your car’s manual for specific details and safety precautions.”
The answer is obviously lacking in detail and pretty much consists only of hints (which is what the prompt asked the AI to do).
Here’s what a more detailed answer might have been (without asking for just hints):
- My entered prompt into ChatGPT: “I am trying to figure out how to change the oil in my car. Briefly explain how to do so in one paragraph.”
- ChatGPT generated response: “To change the oil in your car, first, park it on a level surface and let the engine cool down. Then, locate the oil drain plug and the oil filter beneath the vehicle. Place an oil pan under the drain plug, remove the plug, and let the old oil drain into the pan. Next, remove the old oil filter and lubricate the gasket on the new filter with a bit of fresh oil. Install the new filter and replace the drain plug securely. Finally, add the recommended amount of new oil through the oil filler cap, check the oil level with the dipstick, and dispose of the old oil at a recycling center. Be sure to consult your car’s manual for specific details and oil type recommendations.”
Compare the first answer with this second answer. The first answer, which was based on the request to provide hints, is much more succinct in comparison.
In summary overall, we have these two circumstances:
- (1) Human gives hints to AI (the customary or usual approach). A person using generative AI enters a hint into their prompt as a means of hinting to the AI a desired direction or possibly relevant information pertinent to the question or problem being posed to the AI.
- (2) AI gives hints to a human (sometimes said to be hints-on-demand). AI provides hints to a human who is using generative AI, conventionally spurred to do so by the human explicitly asking or instructing the AI app to start providing hints.
Make sure to consider the option of getting hints from generative AI.
Doing so can be beneficial for you. The odds are that the AI will otherwise fully explain things in great and at times excruciating detail. If you are preparing for a test or exam, maybe it would be wiser to try to solve any questions or problems by having the AI give you mere hints. One might assert that using AI in such a hinting manner is helping you learn to fish, rather than doing the fishing for you.
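Here is a minimal sketch of setting up that hints-on-demand arrangement programmatically, using a system message to keep the AI in hint-only mode for an entire conversation; the instruction wording is my own, not a standard setting:

```python
import openai

# Keep the AI in hint-only mode for the whole conversation.
messages = [
    {"role": "system",
     "content": "Respond only with brief hints. Do not give full answers "
                "or step-by-step instructions unless explicitly asked."},
    {"role": "user",
     "content": "I am trying to figure out how to change the oil in my car."},
]
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
```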
Conclusion
Can you take a hint?
I’m sure there are times that you’ve gotten riled up when someone gave you a hint. We often consider a hint to be an insult. It is as though the other person thinks we are so dense that we cannot figure out something on our own. There are times when a hint can be used in a derogatory or denigrating manner. No doubt about that.
I would dare say, though, that much of the time a hint is given with the sincerest of intentions. The person did not intend the hint as a particularly offensive remark. They were just trying to be helpful. Nonetheless, the recipient might believe that their personal honor has been besmirched.
All of those complications do not particularly arise when you give hints to generative AI. The generative AI is a relatively free-hint zone. You can freely give hints to your heart’s content. The AI won’t go berserk or scream obscenities at you. This is a delightful environment in which hints can fly like an eagle. No boundaries arise.
I urge you to include hints or directional stimulus prompts in your prompt engineering capabilities. You’ll be better off for having done so.
That’s a surefire hint.