Reasoning on Gen AI's Impossible Challenge of Reasoning with the Block of Needs
|
|
The debate about the reasoning capabilities of the major AI products on the market has been a hot topic over the past few months. With the opposing parties' underlying interests often left unmentioned, studies pointing one way or the other come out at a seemingly increasing rate. For those interested in a more deductive approach to this discussion — relying less on studies, whose objectivity and robustness can be difficult to gauge — the paper model A Simple Block of Needs™ can be a powerfully efficient tool.
|
|
Making the invisible visible with the Block of Needs
|
|
The 3D model A Simple Block of Needs™, which we introduced in the previous newsletter, enables you to concisely analyze, and generate insights about, the reasoning capabilities of the current major large language models (LLMs). Are these tools capable of human-like intelligence? In two recent LinkedIn posts, which are copied below, I point out two key interrelated challenges:
1. Inherent trouble with grasping higher-level goals/needs; and
2. Inherent trouble with acquiring and deploying consistent worldviews.
|
|
1. On Gen AI's inherent trouble with grasping higher-level goals/needs
|
|
Let’s use the free cut-and-fold model A Simple Block of Needs™ to build an explanation that even grandma can understand.
First, some basics. In simple terms, Generative AI is a system that outputs *useful* content by predicting the next token (roughly, a word or word fragment) in a sequence. LLMs are a subset of Gen AI.
Note that we assume that these AI systems, like most products, aim to be user-centric. To paraphrase the late cognitive scientist Daniel Dennett, they are tools, not colleagues.
Now, if the cylinders in the image were basic chunks of information, how would a Gen AI generate the cylinder that follows blue and red? Using its core functionality, Gen AI would identify patterns that include the sequence blue->red, and then infer the most likely token that should follow.
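To make that mechanic concrete, here is a deliberately tiny Python sketch. The color sequences are made up for illustration, and a real LLM learns statistical patterns over billions of tokens with a neural network rather than by counting matches, but the core move is the same: find the patterns that contain blue->red, then pick the most likely continuation.

```python
from collections import Counter

# Toy "training data": sequences of colored cylinders standing in for tokens.
# (Illustrative only; a real LLM learns patterns with a neural network.)
sequences = [
    ["blue", "red", "green", "blue", "red", "green"],
    ["yellow", "blue", "red", "green", "yellow"],
    ["blue", "red", "yellow", "blue", "red", "green"],
]

def next_token(context, sequences):
    """Return the token most frequently observed right after `context`."""
    followers = Counter()
    for seq in sequences:
        for i in range(len(seq) - len(context)):
            if seq[i:i + len(context)] == list(context):
                followers[seq[i + len(context)]] += 1
    return followers.most_common(1)[0][0] if followers else None

print(next_token(("blue", "red"), sequences))  # -> 'green', the most common follower
```

Note that nothing in this counting procedure asks why the sequence is being generated in the first place.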
However, the user (that is, the user's needs, problems, goals, and objectives) is given little consideration. Gen AI looks primarily at patterns of tokens, not patterns of needs.
Investor Paul Graham recently wrote, “If AI makes writing code more of a commodity, understanding users' problems will become the most important component of starting a startup.”
While new refinements — like the “chain of thought” approach, which engages the user in gradually assembling a more acceptable result — allow for some level of real-time modeling of the user’s needs, this all remains largely at the lower levels of the needs hierarchy.
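As a rough illustration of that kind of refinement, compare a direct prompt with a chain-of-thought-style prompt that surfaces intermediate steps the user can inspect and steer. The wording below is my own hypothetical example, not a prescribed template.

```python
# Hypothetical prompts, for illustration only.
direct_prompt = "Recommend a CRM tool for my 5-person sales team."

chain_of_thought_prompt = (
    "I run a 5-person sales team.\n"
    "Step 1: List the problems a CRM should solve for a team this size.\n"
    "Step 2: For each problem, note which feature would address it.\n"
    "Step 3: Only then recommend a CRM tool and explain the trade-offs.\n"
)
```

The second prompt surfaces some of the user's problems along the way, yet it still operates on concrete, lower-level needs.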
A true customer-centric approach would more deeply capture the user’s needs and motivations as drivers of action, which would allow for the creation of higher value with higher levels of efficiency. For example, identifying need ABCD would readily suggest need C and need D in the lower-level structure of needs — and, through them, the associated tokens.
A Simple Block of Needs™ is a minimalist embodiment of the one-need theory of behavior, illustrating a very simple instance of a hierarchy of needs.
As the individual interacts with the environment, the mind creates this structure through a process of aggregation-disaggregation, while also moving through time like a cross-section scan that cuts slices along the time dimension. So, when addressing need A, the higher-level needs AB and ABCD are also partially addressed.
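To make the aggregation-disaggregation idea tangible, here is a toy Python rendering of such a hierarchy. The exact shape of the Block's tree is the model's own; I am assuming a symmetric layout (A and B under AB, C and D under CD, both under ABCD) purely for illustration. It demonstrates the point just made: acting on a low-level need also partially addresses every higher-level need above it.

```python
# Illustrative hierarchy only; the layout (AB, CD, ABCD) is an assumption
# made for this sketch, not a specification of the Block itself.
needs_tree = {
    "ABCD": ["AB", "CD"],   # overarching need, e.g. "successful existence"
    "AB":   ["A", "B"],
    "CD":   ["C", "D"],
}

def higher_level_needs(need, tree):
    """Walk upward: every higher-level need that `need` partially addresses."""
    chain = []
    for parent, children in tree.items():
        if need in children:
            chain = [parent] + higher_level_needs(parent, tree)
    return chain

print(higher_level_needs("A", needs_tree))  # -> ['AB', 'ABCD']
```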
And this is key here. These higher-level needs are naturally broader, more abstract, and *not directly addressable*, so they can “survive” and provide consistent overarching guidance that also extends into the individual’s uncertain future.
While the higher-level needs are where the higher customer value lies, they are elusive representations that might be impossible to capture and manipulate, at least not with text, which is at the heart of Gen AI.
Meta’s top AI scientist Yann LeCun recently said, “We’re never gonna get to human-level intelligence just by churning text.”
Gen AI will be fine. But when it comes to higher-level needs (also goals, objectives, problems), the comedian Mitch Hedberg said it best, “I think Bigfoot is blurry. That’s the problem. It’s not the photographer’s fault.”
|
|
2. On Gen AI's inherent trouble with acquiring and deploying consistent worldviews
|
|
A simple visual for a deeper understanding of the "thinking, fast and slow" concept that has recently become central to the AI narrative. Also, a useful insight for the AI user or the human-machine unit.
Where the “system 1 and system 2 of thinking” narrative in the AI world currently stands:
— Daniela Amodei, President at Anthropic, announcing the newest model, Claude 3.7 Sonnet, yesterday Feb 24 2025: “Just as humans use a single brain for both quick responses and deep reflection, we believe reasoning should be an integrated capability of frontier models rather than a separate model entirely.”
— Yann LeCun, VP & Chief Scientist at Meta, giving the talk "The Shape of AI to Come" at the AI Action Summit 2025 two weeks ago on Feb 11 2025: “This type of inference [inference through optimization: objective-driven AI] would be more akin to what psychologists call system 2 in sort of human mind, if you want… System 2 is when you think about what action or sequence of actions you’re going to take before you take them, you think about something before doing it. And then system 1 is when you can do the thing without thinking about it, you know, it becomes sort of subconscious. So, LLMs are system 1, what I am proposing is system 2.”
— Noam Brown, Research Scientist at OpenAI, giving the TED talk "AI won't plateau — if we give it time to think" at TEDAI San Francisco in Oct 2024: “the history of AI progress, over the past 5 years, can be summarized in one word: scale. So far, that has meant scaling up the system 1 training of these models. Now, we have a new paradigm. One where we can scale up system 2 thinking as well. And we are just at the beginning of scaling up in this direction.”
Daniel Kahneman’s “Thinking, Fast and Slow” is one of my favorite books. (In the past, I gifted a copy to each member of the Marketing team for the holidays.) While he did not set out to dig into what system 2 might mean, Kahneman’s metaphor of “system 1 vs system 2” of thinking remains useful.
In 2018, I used it to put forth and visually articulate the notion of the Theory Effect. That visual, which I refined slightly in 2024 and have included here, places reasoning, as a mix of system 1 and system 2, on a continuum. With a simple animation, it also shows that, as we learn more about a problem, we naturally tend to use more of system 1 to address it.
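If it helps, the continuum idea can be expressed as a toy formula (my own illustrative simplification, not the Theory Effect visual itself): as familiarity with a problem grows, the share of system 1 in the reasoning mix grows with it.

```python
# Illustrative only: a made-up linear mix, not the Theory Effect visual.
def reasoning_mix(familiarity: float) -> dict:
    """Share of fast system 1 vs. deliberate system 2 as familiarity grows (0 to 1)."""
    familiarity = max(0.0, min(1.0, familiarity))
    return {"system_1": familiarity, "system_2": 1.0 - familiarity}

for f in (0.1, 0.5, 0.9):
    print(f, reasoning_mix(f))  # mostly system 2 at first, mostly system 1 with experience
```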
Back to the AI narrative. While the ‘system 1 vs system 2’ dichotomy is useful, the biggest challenge for AI (and particularly gen AI) remains user-centricity, as I described in my previous post. Being able to automatically engage a different reasoning system mix requires not only the capability of categorizing the problem, but also that of capturing the user’s subjective perception of the problem's difficulty level.
For now, you make that call. “Users can also adjust how long model thinks for,” as CNBC’s Kate Rooney noted when discussing Anthropic’s latest release.
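For readers who want to see what that dial looks like in practice, here is a rough sketch of the extended-thinking request shape Anthropic described around the Claude 3.7 Sonnet release. Treat it as illustrative: exact parameter names, model identifiers, and token limits may differ from the current documentation.

```python
# Sketch of Anthropic's extended-thinking request around the Claude 3.7 Sonnet
# release; parameter names, model ID, and limits may have changed since.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},  # the user's "how long to think" dial
    messages=[{"role": "user", "content": "Which CRM fits a 5-person sales team, and why?"}],
)
print(response.content[-1].text)  # final answer follows the thinking block(s)
```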
|
|
Assemble the cut-and-fold 3D model A Simple Block of Needs™ to instantly acquire a new, powerful understanding of how needs are naturally structured, underlying the behavior of people, products, companies, and economies. A minimalist embodiment of Cristian Mitreanu’s one-need theory of behavior — which explains that humans naturally aggregate and disaggregate the overarching need generically labeled “successful existence” to create a tree of needs (or goals) that drive their actions — the Block offers a robust foundation for efficient and consistent decision-making in both your personal and professional lives, in the context of people and technology.