Blog

  • AI Revolution: The Rise of Machines Replacing Human Roles

    AI Revolution: The Rise of Machines Replacing Human Roles

    The replacement of humans who do white-collar work is a constant topic of conversation at present. The world is awash with talk of the intelligence of LLMs and the oncoming threat of AGI. But what are the limitations of LLMs, and what advantages, if any, do humans have over them?

    I’ll list what I think below.

    The Personal Touch

    This article is written by a human*, but the image above is generated by an AI (ChatGPT 4o). The power and productivity leverage provided by AI is clear as day: producing such a clean result would have taken me many hours of work, and the AI has provided it in seconds. However, it is not my image; it is not what I pictured in my mind when I first thought of it, but it does the same job. It is something that fits the prompt, however vague that is.

    Now, for someone who is an expert artist (not me), it may be easier to create their mind’s-eye image directly than to describe exactly what they want; and in fact, for any written work (code, writing, etc.) it is in the very act of writing that you are describing what you want. If what you describe in a prompt is enough to produce the correct result, then by definition the prompt is enough. The rest is just the act of translation: presenting the information in an easily digestible form. The actual information content will be the same.

    Obviously, this changes if you do not know what to write, or you do not know enough to create the content in the first place. Then the AI acts as a search engine, gathering and collating information, as you would normally do yourself day-to-day.

    So, what does this mean? The human value is, therefore, the personal touch: the uniqueness that you bring to the table as a result of your own experience. This manifests as your ability to describe your own inner thoughts, be it through writing, images, or any other creative outlet. It is this unique perspective that has always been difficult to describe to others, and it is equally difficult to describe to an AI.

    Symbolic Reasoning/Abstraction

    This is a complex one, and it requires an understanding of what symbolic reasoning is and how it leads to abstraction. This is the major downfall of current LLMs, including the “reasoning” ones, and it is something that (at the time of writing) is still being worked on.

    According to ChatGPT 4o, symbolic reasoning is:

    The process of manipulating abstract symbols according to logical rules to draw conclusions or solve problems.

    This definition applies not only to mathematics, but also to the symbolism of literature. The ability to use metaphor to substitute one symbol for another in a film or book is a way to make abstract topics relatable. For example, in horror films, primal fears become manifest in monstrous form, and the situation and context built around them is analogous to reality, or to how that fear comes about in reality. The ability to swap the symbol of a monster in place of the fear is a kind of symbolic manipulation, and the understanding of the context is the reasoning.

    So, symbolic reasoning is key to abstract, higher-level thinking. It is in some ways the embodiment of intelligence, but it is missing from LLMs. To understand why, you need to understand the transformer architecture. I won’t go into a large amount of detail here (please refer to the linked article), but LLMs excel at pattern matching.

    Pattern matching in LLMs can be demonstrated by a counter-example: something that is difficult for them to solve (without external assistance). Let’s take a string of words and ask an LLM to sort them:

    banana, apple, cucumber, apricot

    We then provide the LLM with a prompt:

    Please sort the following: banana, apple, cucumber, apricot

    Now the transformer uses something called self-attention: it looks at each word in the prompt and asks which words are most relevant to the other words. Crucially, the weights behind this attention mechanism are learned during training and then fixed, although the attention scores themselves do change based on the words in the prompt. You can view the first four words as a kind of instruction that primes this matrix; but the matrix has been trained on general text, not on the four words that follow (or, more specifically, on their ordering). To produce the next word of the response, the attention would have to shift to the word apple; but the statistical nature of the training means this won’t be the most common case. In fact, in general, the attention should be spread fairly evenly amongst all the words. This leads on to symbolic reasoning.
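    To make this concrete, here is a minimal sketch of self-attention in NumPy. It is my own illustration with made-up numbers, not the model’s actual internals: it omits the value projection, multiple heads, and everything else, but it shows that the projection matrices are fixed after training while the attention weights are recomputed from whatever tokens appear in the prompt.

    ```python
    import numpy as np

    # Toy prompt: pretend each token already has a 4-dimensional embedding.
    # The numbers are random and purely illustrative.
    tokens = ["Please", "sort", "the", "following",
              "banana", "apple", "cucumber", "apricot"]
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(len(tokens), 4))

    # Learned projection matrices: these are fixed once training is finished.
    W_query = rng.normal(size=(4, 4))
    W_key = rng.normal(size=(4, 4))

    Q = embeddings @ W_query   # queries: "what am I looking for?"
    K = embeddings @ W_key     # keys: "what do I contain?"

    # Attention scores: every token scores every other token, then softmax.
    scores = Q @ K.T / np.sqrt(4)
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

    # The weights change with the prompt, but W_query and W_key do not.
    # Nothing here encodes "alphabetical order"; the model only sees
    # statistical relationships between token embeddings.
    for token, row in zip(tokens, weights):
        print(token, np.round(row, 2))
    ```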

    From the brief explanation above, it is clear that the LLM does not see symbols. In a human mind, each position in the list is a distinct symbol, and so we can manipulate it as such, not merely treat it as something to shift attention to. An LLM does not see it that way, and never will. This is clearly demonstrated by the ARC-AGI challenge, which humans can do almost instantly. How human brains do it is a mystery, but I try to explore it here.

    If anyone ever reads this, I know they will try the above prompt and see that it does in fact work. This is because, behind the scenes, the AI models have access to tools. They can convert the request to code and run the code locally. It may seem like this solves the problem, but the point is that symbolic reasoning should be embedded in the intelligence itself to perform true abstract reasoning.
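    For completeness, the behind-the-scenes tool route amounts to something like the sketch below: the model emits a small piece of code and a sandboxed interpreter runs it, feeding the output back into the reply. The function name and structure here are hypothetical; the point is only that the deterministic sorting is done by ordinary code, not by the model’s own reasoning.

    ```python
    # Hypothetical sketch of the tool path: the model writes code like this,
    # and an execution tool runs it and returns the output to the model.
    def sort_items(items: list[str]) -> list[str]:
        # The deterministic sort lives here, in the tool, not in the LLM.
        return sorted(items)

    print(sort_items(["banana", "apple", "cucumber", "apricot"]))
    # ['apple', 'apricot', 'banana', 'cucumber']
    ```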

    * All but the headline, and where otherwise stated, that is.

  • The Compressing Effect of Automation

    The Compressing Effect of Automation

    Automation has been a thread running through all of human history. Previously, it largely replaced heavy and some skilled manual labour. Now it is replacing repetitive intellectual work. This means the realm of human work is being compressed into a smaller and smaller subset of economically viable work. Without repetition, the work becomes exploratory: work that is traditionally risky and difficult to value.

    Pattern-based intelligence has appeared as if from nowhere in the last few years. Its ability to generate code, writing, or art automatically has sent a shock through industry, devaluing the perceived value of human work. Partial automation has brought apparent devaluation before (photo editing, code autocomplete, spreadsheets) but ultimately led to increased productivity. However, this is different: it is seemingly more dynamic, adaptable to any well-specified task.

    The question remains: what intellectual work is left for humans to perform? The answer is frontier work; work not defined by well-understood, repeated patterns. However, this work is inherently exploratory, with long feedback loops and high failure rates. It naturally has the economics of research and innovation.

    This means there is a mismatch between standard labour markets and the new economic model. The vast majority of jobs are repeatable, with predictable output. Traditionally, research-like activity is a small part of most companies, which shy away from gambling with their bottom line. Universities, publicly funded laboratories, and a handful of large firms get around this with subsidies, monopolies, or regulation; but most cannot, or do not.

    The new form of intelligent automation is intensifying this. The replacement of repetitive, pattern-based work at the heart of the labour market will lead to employment being compressed into the outer edge: the frontier. The result is not mass unemployment, but growing instability: fewer roles, higher variance, delayed validation, and increased pressure on individuals to absorb uncertainty personally.

    All that remains is for employees to absorb uncertainty on behalf of others. The difficulty is that most labour markets are not designed to pay for uncertainty, only for output. Until that changes, disruption is not an anomaly; it is the natural state of the system.

  • Gödel’s Incomplete Islands

    Gödel’s Incomplete Islands

    A thought crossed my mind today about Gödel’s incompleteness theorem that I wanted to make a note of. I recently came across the theorem again when I saw a video espousing Roger Penrose’s stance on quantum consciousness, and I was trying to understand what the theorem meant and what Penrose’s idea was. Today, I came up with an analogy and wanted to note it down as an aid to memory. It may be an incomplete (pun intended) analogy in itself, so I may expand it later, the more I learn.

    Gödel’s theorem states that, given a set of rules (axioms) that underlie a mathematical structure, there are some truths in that structure that cannot be proved from those rules. This is strange because it means there are some things that may be true, yet are unprovable.

    Gödel’s theorem achieves its goal by using a self-referential formula:

    $$
    G_T \;\equiv\; \neg \mathrm{Prov}_T(\left\langle G_T \right\rangle)
    $$

    Here we state that the sentence $G_T$ is equivalent to the statement that $G_T$ is not provable.[^1] A contradiction looms: if $G_T$ were provable, the system would be inconsistent. This means that there are some sentences that can be constructed from our rules that are unprovable, but true! The actual Gödel formula seems fairly contrived; something purely theoretical. What use is a self-referential statement in mathematics? But in fact, more examples have been found in the wild, and they are not so abstract. So, we must try to comprehend this curio, and this can be achieved through analogy, though it is worth first spelling out why the formula forces the conclusion.
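    Assuming the system $T$ is consistent, and that proving a sentence lets $T$ also prove that the sentence is provable (the standard derivability condition), the argument runs

    $$
    T \vdash G_T \;\Rightarrow\; T \vdash \mathrm{Prov}_T(\left\langle G_T \right\rangle) \;\Rightarrow\; T \vdash \neg G_T,
    $$

    which would make $T$ inconsistent. So a consistent $T$ cannot prove $G_T$; and since “$G_T$ is not provable” is exactly what $G_T$ asserts, $G_T$ is true.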

    The analogy is to think of a series of islands in the sea. Most of the islands are connected by bridges: rules that connect the sentences. However, some islands form a separate archipelago; between them are the rules that reinforce their own group structure. These islands exist, their sentences are true, but there is no path from our islands to them.

    Penrose’s argument is that, for something to be provable by a computer, it can only get there through bridges. Our robot (computer) is roaming around exploring everything with a certain myopia, only able to see the bridges in front of it.

    Humans, on the other hand, can see the far-off island: the one not connected to ours. Gödel showed this by the very act of understanding his incompleteness theorem; it is the very epitome of the far-off island.

    Penrose theorises that our ability to see the island means we have some thought beyond computation. As we are in the physical world, this must be based on something in reality that is not computable. Penrose surmises that this is the collapse of the wave function: something that can be treated as a mathematical trick, but that is a break in the Schrödinger equation, the equation used to evolve a quantum state. He doesn’t claim to know where this happens in the brain, but he has proffered a hypothesis.

    However, a counterargument could be: is there a meta-system of logic? Something that exists outside our system of bridges; a series of underwater tunnels, perhaps. Is it possible that there exists a higher-order set of rules, or a transformation of our mathematical structure, that allows the islands to be connected?

    [^1]: This is reminiscent of the halting problem, and Turing may have been inspired by it. Briefly, suppose a program H can decide whether any program P stops. A new program J is then constructed that uses H on a program run on itself and does the opposite. Calling J on itself then creates a paradox; H cannot decide it.
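    A quick sketch of that construction in code (hypothetical, since a real halts() cannot exist) may help:

    ```python
    # Sketch of the halting-problem diagonal argument.
    # Suppose, for contradiction, that this decider existed and always returned
    # True if program(arg) eventually stops and False otherwise.
    def halts(program, arg) -> bool:
        raise NotImplementedError("No such decider can exist; assumed for contradiction.")

    def J(program):
        # Do the opposite of what the decider predicts for program run on itself.
        if halts(program, program):
            while True:   # loop forever if the decider says it halts
                pass
        return            # halt if the decider says it loops

    # Now ask: does J(J) halt?
    # If halts(J, J) is True, J(J) loops forever; if False, J(J) halts.
    # Either way the decider is wrong, so no such halts() can exist.
    ```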