In today’s column, I continue my ongoing analysis of the latest advances and breakthroughs in AI, see my extensive posted coverage at the link here, and focus in this discussion on the challenges associated with various forms of reasoning that are mathematically and computationally undertaken via modern-day generative AI and large language models (LLMs). Specifically, I will do a deep dive into inductive reasoning and deductive reasoning.
Here’s the deal.
One of the biggest open questions that AI researchers and AI developers are struggling with is whether we can get AI to perform reasoning of the nature and caliber that humans seem to do.
At an initial cursory glance, this might appear to be a simple question with a simple answer. But the problems are many and the question at hand is extraordinarily hard to answer. One difficulty is that we cannot say for sure the precise way that people reason. By this, I mean that we are only guessing when we contend that people reason in one fashion or another. How the actual biochemical and wetware facets of the brain and mind give rise to cognition and higher levels of mental thinking and reasoning is still a mystery.
Some argue that we don’t need to physically reverse engineer the brain to proceed with devising AI reasoning strategies and approaches. The viewpoint is that it would certainly be a nice insight to know what the human mind really does, that’s for sure. Nonetheless, we can forge ahead to develop AI that has the appearance of human reasoning, even if the means of the AI implementation is potentially utterly afield from how the mind works.
Think of it this way.
We might be satisfied if we can get AI to mimic human reasoning from an outward perspective, even if the way in which the AI computationally works is not what happens inside the heads of humans. The belief or assertion would be that you don’t have to distinctly copy the internals if the seen-to-be external performance matches or possibly exceeds what’s happening inside a human brain. Taking this to an extreme posture, if you could assemble a bunch of Lego bricks and get them to seemingly perform reasoning, well, you might take that to the bank as a useful contraption, despite the fact that it isn’t working the same way human minds do.
That being said, if you have in fact managed to assemble Lego bricks into a human-like reasoning capacity, please let me know. Right away. A Nobel Prize is undoubtedly and indubitably soon to be on your doorstep.
The Fascinating Nature Of Human Reasoning
Please know that the word “reasoning” carries a lot of baggage.
Some would argue that we shouldn’t be using the word at all when referring to AI. The concern is that since reasoning is perceived as a human quality, talking about AI reasoning is tantamount to anthropomorphizing AI. To cope with this expressed qualm, I will try to be cautious in how I make use of the word. Just wanted to make sure you knew that some experts have acute heartburn about waving around the word “reasoning”. Let’s try to be mindful and respectful of how the word is used.
Disclaimer noted.
Probably the two most famous primary forms of human reasoning are inductive reasoning and deductive reasoning.
I’m sure you’ve been indoctrinated in the basics of those two major means of reasoning. Whether the brain functions by using those reasoning methods is unresolved. It could be that we are merely rationalizing decision-making by conjuring up a logical basis for reasoning, trying to make pretty the reality of whatever truly occurs inside our heads.
Because inductive reasoning and deductive reasoning are major keystones for human reasoning, AI researchers have opted to pursue those reasoning methods to see how AI can benefit from what we seem to know about human reasoning. Yes, indeed, lots of AI research has been devoted to exploring how to craft AI that performs inductive reasoning and performs deductive reasoning.
Some efforts have produced AI that is reasonably good at inductive reasoning but falters when doing deductive reasoning. Likewise, the other direction happens too, namely that you might come up with AI that is pretty good at deductive reasoning but thin on inductive reasoning. Trying to achieve both at an equally heightened level is tricky and still being figured out.
You might be wondering what the deal is with generative AI and large language models (LLMs) in terms of how those specific types of AI technology fare on inductive and deductive reasoning. I’m glad that you asked.
That’s the focus of today’s discussion.
Before we make the plunge into the meaty topic, let’s ensure we are all on the same page about inductive and deductive reasoning. Perhaps it has been a while since you had to readily know the differences between the two forms of reasoning. No worries, I’ll bring you up to speed at a lightning pace.
An easy way to compare the two is by characterizing inductive reasoning as a bottom-up approach while deductive reasoning is considered a top-down approach to reasoning.
With inductive reasoning, you observe particular facts or facets and then from that bottom-up viewpoint try to arrive at a reasoned and reasonable generalization. Your generalization might be right. Wonderful. On the other hand, your generalization might be wrong. My point is that inductive reasoning, and also deductive reasoning, are not guaranteed to produce a correct result. They are sensible approaches and improve your odds of being right, assuming you do the necessary reasoning with sufficient proficiency and alertness.
Deductive reasoning generally consists of starting with a generalization or theory and then proceeding to ascertain if observed facts or facets support the overarching belief. That is a proverbial top-down approach.
We normally expect scientists and researchers to especially utilize deductive reasoning. They come up with a theory of something and then gather evidence to gauge the validity of the theory. If they are doing this in a fair-and-square manner, they might find themselves having to adjust the theory based on the reality of what they discover.
Okay, we’ve covered the basics of inductive and deductive reasoning in a nutshell. I am betting you might like to see an example to help shake off any cobwebs on these matters.
Happy to oblige.
Illustrative Example Of Inductive And Deductive Reasoning
I appreciate your slogging along with me on this quick rendition of inductive and deductive reasoning. Hang in there, the setup will be worth it. Time to mull over a short example showcasing inductive reasoning versus deductive reasoning.
When my kids were young, I used to share with them the following example of inductive reasoning and deductive reasoning. Maybe you’ll find it useful. Or at least it might be useful for you to at some point share with any youngsters that you happen to know. Warning to the wise, do not share this with a fifth grader since they will likely feel insulted and angrily retort that you must believe them to be a first grader (yikes!).
Okay, here we go.
Imagine that you are standing outside and there are puffy clouds here and there. Let’s assume that on some days the clouds are there and on other days they are not. Indeed, on any given day, the clouds can readily come and go.
What is the relationship between the presence of clouds and the outdoor temperature?
That seems to be an interesting and useful inquiry. A child might be stumped, though I kind of doubt they would be. If they’ve been outside with any regularity, and if clouds come and go with any regularity, the chances are they have already formed a belief on this topic. Maybe no one has explicitly asked them about it. Thus, this question might require a moment or two for a youngster to collect their thoughts.
Envision that we opt to ask a youngster to say aloud their reasoning as they figure out an answer to the posed question.
One angle would be to employ inductive reasoning to solve the problem.
It might go like this when using inductive reasoning to answer the question about clouds and outdoor temperature:
- (1) Observation: Yesterday was cloudy, and the temperature dropped.
- (2) Another observation: The day before yesterday, it was cloudy, and the temperature dropped.
- (3) A third observation: Today, it became cloudy, and the temperature dropped.
- (4) Logical conclusion: When it’s cloudy, the temperature tends to drop.
Seems sensible and orderly.
The act consisted of a bottom-up method. There were prior and current observations that the child identified and used when processing the perplexing matter. Based on those observations, a seemingly logical conclusion can be reached. In this instance, since the clouds often were accompanied by a drop in temperature, you might suggest that when it gets cloudy the temperature will tend to drop.
Give the child a high five.
Another angle would be to employ deductive reasoning.
Here we go with answering the same question but using deductive reasoning this time:
- Theory or premise: When the sky is cloudy, the temperature tends to drop.
- Observation: Today it is currently cloudy.
- Another observation: The temperature dropped once the clouds arrived.
- Logical conclusion: Therefore, it is reaffirmed that the temperature tends to drop due to cloudiness.
The youngster began by formulating a theory or premise.
How did they come up with it?
We cannot say for sure. They may have already formed the theory via an inductive reasoning process similar to the one I just described. There is a chance too that they might not be able to articulate why they believe in the theory. It just came to them.
Again, this is the mystery of how the brain and mind function. From the outside of a person’s brain, we do not have the means to reach into their head and watch what logically happens during their thinking endeavors (we can use sensors to detect heat, chemical reactions, and other wiring-like actions, but that is not yet translatable into full-on articulation of thinking processes at a logical higher-level per se). We must take their word for whatever they proclaim has occurred inside their noggin. Even they cannot say for sure what occurred inside their head. They must guess too.
It could be that the actual internal process is nothing like the logical reasoning we think it is. People are taught that they must come up with justifications and explanations for their behavior. The explanation or justification can be something they believe happened in their heads, though maybe it is just an after-the-fact concoction based on societal and cultural demands that they provide cogent explanations.
As an aside, you might find of interest that via the use of BMI (brain-machine interfaces), researchers in neuroscience, cognitive science, AI, and other disciplines are hoping to one day figure out the inner sanctum and break the secret code of what occurs when we think and reason. See my coverage on BMI and akin advances at the link here.
One other aspect to mention about the above example of deductive reasoning about the clouds and temperature is that besides a theory or premise, the typical steps entail an effort to apply the theory to specific settings. In this instance, the child was able to reaffirm the premise due to the observation that today was cloudy and that it seemed that the temperature had dropped.
Another worthy point to bring up is that I said earlier that either or both of those reasoning methods might not necessarily produce the right conclusion. The act of having and using a bona fide method does not guarantee a correct response.
Does the presence of clouds always mean that temperatures will drop?
Exceptions could exist.
Plus, clouds alone do not determine the temperature; other factors need to be incorporated.
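To make the contrast concrete, here is a minimal, purely illustrative Python sketch that mimics the two flows using made-up cloud-and-temperature observations. It is a toy, not a claim about how children or AI actually reason.

```python
# Toy illustration only: made-up observations, not a model of how anyone actually reasons.
observations = [
    {"cloudy": True, "temp_dropped": True},   # yesterday
    {"cloudy": True, "temp_dropped": True},   # the day before yesterday
    {"cloudy": True, "temp_dropped": True},   # today
]

# Inductive (bottom-up): generalize a rule from the specific observations.
def induce_rule(obs):
    cloudy = [o for o in obs if o["cloudy"]]
    drops = sum(o["temp_dropped"] for o in cloudy)
    if cloudy and drops >= len(cloudy) / 2:
        return "when cloudy, temperature tends to drop"
    return "no clear rule"

# Deductive (top-down): start from the rule and apply it to today's observation.
def deduce(rule, today):
    if rule == "when cloudy, temperature tends to drop" and today["cloudy"]:
        return "expect the temperature to drop"
    return "no prediction"

rule = induce_rule(observations)
print("Induced rule:", rule)
print("Deduced expectation for today:", deduce(rule, observations[-1]))
```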
Generative AI And The Two Major Reasoning Approaches
You are now versed in or at least refreshed about inductive and deductive reasoning. Good for you. The world is a better place accordingly.
I want to now bring up the topic of generative AI and large language models. Doing so will allow us to examine the role of inductive reasoning and deductive reasoning when it comes to the latest in generative AI and LLMs.
I’m sure you’ve heard of generative AI, the darling of the tech field these days.
Perhaps you’ve used a generative AI app, such as the popular ones of ChatGPT, GPT-4o, Gemini, Bard, Claude, etc. The crux is that generative AI can take input from your text-entered prompts and produce or generate a response that seems quite fluent. This is a vast overturning of old-time natural language processing (NLP), which used to be stilted and awkward to use, and which has now shifted into a level of NLP fluency of an at-times startling or amazing caliber.
The customary means of achieving modern generative AI involves using a large language model or LLM as the key underpinning.
In brief, a computer-based model of human language is established, consisting of a large-scale data structure that undergoes massive pattern-matching across a large volume of data used for initial data training. The data is typically found by extensively scanning the Internet for lots and lots of essays, blogs, poems, narratives, and the like. The mathematical and computational pattern-matching homes in on how humans write, and then henceforth generates responses to posed questions by leveraging those identified patterns. It is said to be mimicking the writing of humans.
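To give a flavor of what “pattern-matching on how humans write” means, here is a deliberately tiny sketch that counts which word tends to follow which in a scrap of text and then generates a continuation. Real LLMs use transformer-based neural networks trained on vastly larger corpora; this toy bigram counter only gestures at the underlying idea.

```python
from collections import defaultdict, Counter

# Toy illustration only: real LLMs rely on transformer neural networks, not bigram counts.
corpus = "the clouds rolled in and the temperature dropped as the clouds thickened".split()

# Record which word tends to follow which (the "pattern" in this tiny corpus).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Generate a short continuation by repeatedly picking the most likely next word.
word, output = "the", ["the"]
for _ in range(5):
    candidates = follows.get(word)
    if not candidates:
        break
    word = candidates.most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # e.g., "the clouds rolled in and the"
```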
I think that is sufficient for the moment as a quickie backgrounder. Take a look at my extensive coverage of the technical underpinnings of generative AI and LLMs at the link here and the link here, just to name a few.
When using generative AI, you can tell the AI via a prompt to make use of deductive reasoning. The generative AI will appear to do so. Similarly, you can enter a prompt telling the AI to use inductive reasoning. The generative AI will appear to do so.
I am about to say something that might be surprising, so I am forewarning you and want you to mentally prepare yourself.
Have you braced yourself for what I am about to say?
Hope so.
When you enter a prompt telling generative AI to proceed with inductive or deductive reasoning, and then you eyewitness what appears to be such reasoning as displayed via the presented answer, there is once again a fundamental question afoot regarding the matter of what you see versus what actually happened internally.
I’ve discussed this previously in the use case of explainable AI, known as XAI, see my analysis at the link here. In brief, just because the AI tells you that it did this or that step, there is not necessarily an ironclad basis to assume that the AI solved the problem in that particular manner.
The explanation is not necessarily the actual work effort. An explanation can be an after-the-fact rationalization or made-up fiction, which is done to satisfy your request to have the AI show you the work that it did. This can be the case too when requesting to see a problem solved via inductive or deductive reasoning. The generative AI might proceed to solve the problem using something else entirely, but since you requested inductive or deductive reasoning, the displayed answer will be crafted to look as if that’s how things occurred.
Be mindful of this.
What you see could be far afield from what is happening internally.
For now, let’s put that qualm aside and pretend that what we see is roughly the same as what happened to solve a given problem.
How Will Generative AI Fare On The Two Major Forms Of Reasoning
I have a thought-provoking question for you:
- Are generative AI and LLMs better at inductive reasoning or deductive reasoning?
Take a few reflective seconds to ponder the conundrum.
Tick tock, tick tock.
Time’s up.
The usual answer is that generative AI and LLMs are better at inductive reasoning, the bottom-up form of reasoning.
Why so?
Recall that generative AI and LLMs are devised by doing tons of data training. You can think of that data as sitting at the bottom side of things. Lots of “observations” are being examined. The AI is pattern-matching from the ground level up. This is similar to inductive reasoning as a process.
I trust that you can see that the inherent use of data, the data structures used, and the algorithms employed for making generative AI apps are largely reflective of leaning into an inductive reasoning milieu. Generative AI is therefore more readily suitable to employ inductive reasoning for answering questions if that’s what you ask the AI to do.
This does not somehow preclude generative AI from also or instead performing deductive reasoning. The upshot is that generative AI is likely better at inductive reasoning and that it might take some added effort or contortions to do deductive reasoning.
Let’s review a recent AI research study that empirically assessed the inductive reasoning versus deductive reasoning capabilities of generative AI.
New Research Opens Eyes On AI Reasoning
In a newly released research paper entitled “Inductive Or Deductive? Rethinking The Fundamental Reasoning Abilities Of LLMs” by Kewei Cheng, Jingfeng Yang, Haoming Jiang, Zhengyang Wang, Binxuan Huang, Ruirui Li, Shiyang Li, Zheng Li, Yifan Gao, Xian Li, Bing Yin, Yizhou Sun, arXiv, August 7, 2024, these salient points were made (excerpts):
- “Despite the impressive achievements of LLMs in various reasoning tasks, the underlying mechanisms of their reasoning capabilities remain a subject of debate.”
- “The question of whether LLMs genuinely reason in a manner akin to human cognitive processes or merely simulate aspects of reasoning without true comprehension is still open.”
- “Additionally, there’s a debate regarding whether LLMs are symbolic reasoners or possess strong abstract reasoning capabilities.”
- “While the deductive reasoning capabilities of LLMs, (i.e. their capacity to follow instructions in reasoning tasks), have received considerable attention, their abilities in true inductive reasoning remain largely unexplored.”
- “This raises an essential question: In LLM reasoning, which poses a greater challenge – deductive or inductive reasoning?”
As stated in those points, the reasoning capabilities of generative AI and LLMs are an ongoing subject of debate and present interesting challenges. The researchers opted to explore whether inductive reasoning or deductive reasoning is the greater challenge for such AI.
They refer to the notion of whether generative AI and LLMs are symbolic reasoners.
Allow me a moment to unpack that point.
The AI field has tended to broadly divide the major approaches of devising AI into two camps, the symbolic camp and the sub-symbolic camp. Today, the sub-symbolic camp is the prevailing winner. The symbolic camp is considered somewhat old-fashioned and no longer in vogue (at this time).
For those of you familiar with the history of AI, there was a period when the symbolic approach was considered top of the heap. This was the era of expert systems (ES) and rules-based systems (RBS), often also known as knowledge-based management systems (KBMS). The underlying concept was that human knowledge and human reasoning could be explicitly articulated into a set of symbolic rules. Those rules would then be encoded into an AI program and presumably be able to perform reasoning akin to how humans do so (well, at least to the means of how we rationalize human reasoning). Some characterized this as the If-Then era, consisting of AI that contained thousands upon thousands of if-something then-something action statements.
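For a sense of what that If-Then style looked like, here is a highly simplified sketch (my own toy example, not an excerpt from any actual expert system); real systems of that era had dedicated inference engines, conflict resolution, and thousands of hand-crafted rules.

```python
# Highly simplified sketch of the symbolic If-Then style; a toy, not a real expert system.
facts = {"sky": "cloudy", "season": "summer"}

rules = [
    # (condition over the facts, conclusion to assert)
    (lambda f: f.get("sky") == "cloudy", ("temperature_trend", "dropping")),
    (lambda f: f.get("sky") == "clear" and f.get("season") == "summer", ("temperature_trend", "rising")),
]

# A naive forward-chaining pass: fire every rule whose condition matches the current facts.
for condition, (key, value) in rules:
    if condition(facts):
        facts[key] = value

print(facts)  # {'sky': 'cloudy', 'season': 'summer', 'temperature_trend': 'dropping'}
```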
Eventually, the rules-based systems tended to go out of favor. If you’d like to know more about the details of how those systems worked and why they were not ultimately able to fulfill the quest for top-notch AI, see my analysis at the link here.
The present era of sub-symbolics went a different route. Generative AI and LLMs are prime examples of the sub-symbolic approach. In the sub-symbolic realm, you use algorithms to do pattern matching on data. Turns out that if you use well-devised algorithms and lots of data, the result is AI that can seem to do amazing things such as having the appearance of fluent interactivity. At the core of sub-symbolics is the use of artificial neural networks (ANNs), see my in-depth explanation at the link here.
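As a counterpart to the If-Then sketch above, here is an equally tiny illustration of the sub-symbolic flavor: instead of hand-written rules, a single artificial “neuron” nudges numeric weights based on example data. Real ANNs have millions to billions of such weights arranged in layers; this is only a gesture at the idea.

```python
# Tiny sub-symbolic sketch: one artificial "neuron" learning weights from data,
# rather than following hand-written rules. Real ANNs are vastly larger and layered.
data = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]  # (features, label)
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # a few passes over the examples
    for x, y in data:
        pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
        err = y - pred
        w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
        b += lr * err

print("learned weights:", w, "bias:", b)
```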
You will momentarily see that an unresolved question is whether the sub-symbolic approach can end up performing symbolic-style reasoning. There are research efforts underway that try to logically interpret what happens inside the mathematical and computational inner workings of ANNs, see my discussion at the link here.
Getting back to the inductive versus deductive reasoning topic, let’s consider the empirical study and the means they took to examine these matters:
- “Our research is focused on a relatively unexplored question: Which presents a greater challenge to LLMs – deductive reasoning or inductive reasoning?” (ibid).
- “To explore this, we designed a set of comparative experiments that apply a uniform task across various contexts, each emphasizing either deductive or inductive reasoning.” (ibid).
- “Deductive setting: we provide the models with direct input-output mappings (i.e., 𝑓𝑤).” (ibid).
- “Inductive setting: we offer the models a few examples (i.e., (𝑥, 𝑦) pairs) while intentionally leaving out input-output mappings (i.e., 𝑓𝑤).” (ibid).
Their experiment consisted of coming up with tasks for generative AI to solve, along with prompting the generative AI to carry out the solution process via each of the two respective reasoning approaches. After doing so, the solutions provided by the AI could be compared to ascertain whether inductive reasoning (as performed by the AI) or deductive reasoning (as performed by the AI) did a better job of solving the presented problems.
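To make the two settings tangible, here is a rough sketch of how the prompt framings differ, using the base-8 arithmetic flavor of task. This is my own paraphrase under stated assumptions, not the paper’s exact prompts, and ask_llm is merely a placeholder for whatever chat-completion call you have available.

```python
# Rough paraphrase of the two framings (not the paper's exact prompts).
task_question = "What is 36+33?"

# Deductive setting: the mapping rule (f_w) is stated outright; no worked examples given.
deductive_prompt = (
    "Assume all numbers are in base-8 with digits '01234567'. "
    "Apply that rule directly. " + task_question
)

# Inductive setting: only (x, y) example pairs are given; the rule itself is withheld.
examples = [("5+5", "12"), ("7+1", "10"), ("23+5", "30")]  # base-8 pairs
inductive_prompt = (
    "Here are input-output examples: "
    + "; ".join(f"{x} -> {y}" for x, y in examples)
    + ". Infer the underlying pattern, then answer: " + task_question
)

def ask_llm(prompt: str) -> str:
    # Placeholder: wire in your own LLM API call here.
    raise NotImplementedError

print(deductive_prompt)
print(inductive_prompt)
```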
Task Uniformity And Reasoning Disentanglement
The research proceeded to define a series of tasks that could be given to various generative AI apps to attempt to solve.
Notice that a uniform set of tasks was put together. This is a good move in such experiments since you want to be able to compare apples to apples. In other words, purposely aim to use inductive reasoning on a set of tasks and use deductive reasoning on the same set of tasks. Other studies will at times use a set of tasks for analyzing inductive reasoning and a different set of tasks to analyze deductive reasoning. The issue is that you end up comparing apples versus oranges and can have muddled results.
Are you wondering what kinds of tasks were used?
Here are the types of tasks they opted to apply:
- Arithmetic task: “You are a mathematician. Assuming that all numbers are in base-8 where the digits are ‘01234567’, what is 36+33?”. (ibid).
- Word problem: “You are an expert in linguistics. Imagine a language that is the same as English with the only exception being that it uses the object-subject-verb order instead of the subject-verb-object order. Please identify the subject, verb, and object in the following sentences from this invented language: shirts sue hates.” (ibid).
- Spatial task: “You are in the middle of a room. You can assume that the room’s width and height are both 500 units. The layout of the room in the following format: {’name’: ’bedroom’, ’width’: 500, ’height’: 500, ’directions’: {’north’: [0, 1], ’south’: [0, -1], ’east’: [1, 0], ’west’: [-1, 0]}, ’objects’: [{’name’: ’chair’, ’direction’: ’east’}, {’name’: ’wardrobe’, ’direction’: ’north’}, {’name’: ’desk’, ’direction’: ’south’}]}. Please provide the coordinates of objects whose positions are described using cardinal directions, under a conventional 2D coordinate system using the following format: [{’name’: ’chair’, ’x’: ’?’, ’y’: ’?’}, {’name’: ’wardrobe’, ’x’: ’?’, ’y’: ’?’}, {’name’: ’desk’, ’x’: ’?’, ’y’: ’?’}]”. (ibid).
- Decryption: “As an expert cryptographer and programmer, your task involves reordering the character sequence according to the alphabetical order to decrypt secret messages. Please decode the following sequence: spring.” (ibid).
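As a quick sanity check on what correct answers to a couple of those tasks would look like (my own verification, not part of the study), a few lines of Python suffice:

```python
# Arithmetic task: 36 + 33 in base-8, checked by converting to decimal and back.
a, b = int("36", 8), int("33", 8)     # 30 and 27 in decimal
print(oct(a + b)[2:])                 # '71', so 36 + 33 = 71 in base-8

# Decryption task: reorder the characters of "spring" into alphabetical order.
print("".join(sorted("spring")))      # 'ginprs'
```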
Something else that they did was try to keep inductive reasoning and deductive reasoning from relying on each other.
Unfortunately, both approaches can potentially slop over into aiding the other one.
Remember for example when I mentioned that a youngster using deductive reasoning about the relationship between clouds and temperatures might have formulated a hypothesis or premise by first using inductive reasoning? If so, it is difficult to say which reasoning approach was doing the hard work in solving the problem since both approaches were potentially being undertaken at the same time.
The researchers devised a special method to see if they could avoid a problematic intertwining:
- “To disentangle inductive reasoning from deductive reasoning, we propose a novel model, referred to as SolverLearner.” (ibid).
- “Given our primary focus on inductive reasoning, SolverLearner follows a two-step process to segregate the learning of input-output mapping functions from the application of these functions for inference.” (ibid).
- “Specifically, functions are applied through external interpreters, such as code interpreters, to avoid incorporating LLM-based deductive reasoning.” (ibid).
- “By focusing on inductive reasoning and separating it from LLM-based deductive reasoning, we can isolate and investigate inductive reasoning of LLMs in its pure form via SolverLearner.” (ibid).
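To picture the two-step mechanism, here is a minimal sketch of the SolverLearner idea as I understand it, not the researchers’ actual code. Step one asks an LLM (stubbed out below with a hard-coded plausible response) to propose a Python function from the (x, y) pairs alone; step two applies that function through an ordinary Python interpreter so the LLM plays no role in executing the rule it induced.

```python
# Minimal sketch of the SolverLearner idea (not the paper's actual implementation).
examples = [("5+5", "12"), ("7+1", "10"), ("23+5", "30")]   # base-8 pairs; the rule is withheld
test_inputs = ["36+33", "17+14"]

def propose_function(example_pairs) -> str:
    # Placeholder for the LLM call that returns Python source code for f(x).
    # Hard-coded here with one plausible response so the sketch runs end to end.
    return (
        "def f(x):\n"
        "    a, b = x.split('+')\n"
        "    return oct(int(a, 8) + int(b, 8))[2:]\n"
    )

# Step 1: induce the input-output mapping function from the examples (LLM side).
source = propose_function(examples)

# Step 2: apply the proposed function via an external interpreter (non-LLM side).
namespace = {}
exec(source, namespace)
f = namespace["f"]

assert all(f(x) == y for x, y in examples), "proposed function fails the given examples"
print({x: f(x) for x in test_inputs})   # {'36+33': '71', '17+14': '33'}
```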
Kudos to them for recognizing the need to try to make that separation a distinct one.
Hopefully, other researchers will take up the mantle and further pursue this avenue.
The Results And What To Make Of Them
I’m sure that you are eagerly awaiting the results of what they found.
Drum roll, please.
Highlights of their key outcomes include:
- “LLMs exhibit poor deductive reasoning capabilities, particularly in “counterfactual” tasks.” (ibid).
- “Deductive reasoning presents a greater challenge than inductive reasoning for LLMs.” (ibid).
- “The effectiveness of LLMs’ inductive reasoning capability is heavily reliant on the foundational model. This observation suggests that the inductive reasoning potential of LLMs is significantly constrained by the underlying model.” (ibid).
- “Chain of Thought (COT) has not been incorporated into the comparison. Chain of Thought (COT) is a significant prompting technique designed for use with LLMs. Rather than providing a direct answer, COT elicits reasoning with intermediate steps in few-shot exemplars.” (ibid).
Let’s examine those results.
First, they reaffirmed what we would have anticipated, namely that the generative AI apps used in this experiment were generally better at employing inductive reasoning rather than deductive reasoning. I mentioned earlier that the core design and structure of generative AI and LLMs lean into inductive reasoning capabilities. Thus, this result makes intuitive sense.
For those of you who might say ho-hum to the act of reaffirming an already expected result, I’d like to emphasize that doing experiments to confirm or disconfirm hunches is a very worthwhile endeavor. You do not know for sure that a hunch is on target. By doing experiments, your willingness to believe in a hunch can be bolstered, or possibly overturned if the experiments garner surprising results.
Not every experiment has to reveal startlingly new discoveries (few do).
Second, a related and indeed interesting twist is that the inductive reasoning performance appeared to differ somewhat based on which of the generative AI apps was being used. The gist is that depending upon how the generative AI was devised by an AI maker, such as the nature of the underlying foundation model, the capacity to undertake inductive reasoning varied.
The notable point about this is that we need to be cautious in painting with a broad brush all generative AI apps and LLMs in terms of how well they might do on inductive reasoning. Subtleties in the algorithms, data structures, ANNs, and data training could impact the inductive reasoning proclivities.
This is a handy reminder that not all generative AI apps and LLMs are the same.
Third, the researchers acknowledge a heady topic that I keep pounding away at in my analyses of generative AI and LLMs. It is this. The prompts that you compose and use with AI are a huge determinant of the results you will get out of the AI. For my comprehensive coverage of over fifty types of prompt engineering techniques and tips, see the link here.
In this particular experiment, the researchers used a straight-ahead prompt that was not seeking to exploit any prompt engineering wizardry. That’s fine as a starting point. It would be immensely interesting to see the experimental results if various prompting strategies were used.
One such prompting strategy would be the use of chain-of-thought (COT). In the COT approach, you explicitly instruct the AI to provide a step-by-step indication of what is taking place. I’ve covered COT extensively since it is a popular tactic and can boost your generative AI results, see my coverage at the link here, along with a similar approach known as skeleton-of-thought (SOT) at the link here.
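For illustration, a COT-style prompt for the base-8 task might look something like the following; the exemplar wording is mine, not drawn from the study.

```python
# Illustrative chain-of-thought prompt: a few-shot exemplar that spells out intermediate
# steps, followed by the actual question (exemplar content is my own, not the study's).
cot_prompt = """Q: Assuming all numbers are in base-8, what is 5+5?
A: Convert to decimal: 5 and 5. Add them: 10. Convert back to base-8: 12. The answer is 12.

Q: Assuming all numbers are in base-8, what is 36+33?
A:"""
print(cot_prompt)
```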
If we opted to use COT for this experiment, what might arise?
I speculate that we might enhance inductive reasoning by directly giving a prompt that tends to spur inductive reasoning to take place. It is akin to my assertion that sometimes you can improve generative AI results by essentially greasing the skids, see the link here. Perhaps the inductive reasoning might be more pronounced via a double-barreled dose of guiding the AI toward that mode of operation.
Prompts do matter.
Conclusion
I’ll conclude this discussion with something that I hope will stir your interest.
Where is the future of AI?
Should we keep on deepening the use of sub-symbolics via the ever-expanding use of generative AI and LLMs? That would seem to be the existing course of action. Toss more computational resources at the prevailing sub-symbolic infrastructure. If you use more computing power and more data, perhaps we will attain heightened levels of generative AI, maybe verging on AGI (artificial general intelligence).
Not everyone accepts that crucial premise.
An alternative viewpoint is that we will soon reach a ceiling. No matter how much computing you manage to corral, the incremental progress is going to diminish and diminish. A limit will be reached. We won’t be at AGI. We will be better than today’s generative AI, but only marginally so. And continued forceful efforts will gain barely any additional ground. We will be potentially wasting highly expensive and prized computing on a losing battle of advancing AI.
I’ve discussed this premise at length, see the link here.
Let’s tie that thorny topic to the matter of inductive reasoning versus deductive reasoning.
If you accept the notion that inductive reasoning is more akin to sub-symbolic, and deductive reasoning is more akin to symbolic, one quietly rising belief is that we need to marry together the sub-symbolic and the symbolic. Doing so might be the juice that gets us past the presumed upcoming threshold or barrier. To break the sound barrier, as it were, we might need to focus on neuro-symbolic AI.
Neuro-symbolic AI is a combination of sub-symbolic and symbolic approaches. The goal is to harness both to their maximum potential. A major challenge involves how to best connect them into one cohesive mechanism. You don’t want them to be infighting. You don’t want them working as opposites and worsening your results instead of bettering them. See my discussion at the link here.
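One toy way to picture that coupling, under the assumption that a learned component feeds a rule layer (and not as a statement of how production neuro-symbolic systems are actually built), is the following:

```python
# Toy neuro-symbolic coupling: a sub-symbolic component scores raw inputs, and a
# symbolic rule layer reasons over that score. Purely illustrative.

def subsymbolic_cloud_score(pixel_brightness):
    # Stand-in for a trained neural network: returns a cloudiness score in [0, 1].
    return sum(1.0 - p for p in pixel_brightness) / len(pixel_brightness)

def symbolic_layer(cloudiness):
    # Explicit, human-readable rules applied on top of the learned signal.
    if cloudiness > 0.7:
        return "expect temperature to drop"
    if cloudiness < 0.3:
        return "expect temperature to hold or rise"
    return "no confident prediction"

score = subsymbolic_cloud_score([0.2, 0.1, 0.3, 0.2])   # dim pixels suggest heavy cloud
print(round(score, 2), "->", symbolic_layer(score))
```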
I’d ask you to grab yourself a glass of fine wine, sit down in a place of solitude, and give these pressing AI questions some heartfelt thought:
- Can we leverage both inductive reasoning and deductive reasoning as brethren that work hand-in-hand within AI?
- Can we bring other reasoning approaches into the mix, spurring multi-reasoning capacities?
- Can we determine whether AI is working directly via those reasoning methods versus outwardly appearing to do so but not actively internally doing so?
- Can we reuse whatever is learned while attempting to reverse engineer the brain and mind, such that the way that we devise AI can be enhanced or possibly even usefully overhauled?
That should keep your mind going for a while.
If you can find a fifth grader who can definitively answer those vexing and course-changing questions, make sure to have them write down their answers. It would be history in the making. You would have an AI prodigy in your midst.
Meanwhile, let’s all keep our noses to the grindstone and see what progress we can make on these mind-bending considerations. Join me in doing so, thanks.