It’s been exactly three years since ChatGPT hit the mainstream in November 2022. A lot has changed since then. It’s now widely accepted that large language models (LLMs) like OpenAI’s ChatGPT can write better and faster than humans, can translate quickly between any pair of languages, and can converse across a wide range of knowledge domains at intern-level expertise.
In my youth we debated whether ELIZA could ever pass the Turing test (can a machine converse so convincingly that a human judge cannot reliably tell whether they’re talking to a human or a machine?).
Nobody questions that anymore. We’ve silently accepted it as the new reality, without a Nobel Prize (yet).
Predicting the future from here is hard and likely wrong. Nevertheless, it’s useful to gather our thoughts.
A lot of this is speculation, as we lack firm definitions for consciousness, life, and self-awareness, so the discussion stays fuzzy. Humor me:
Here are a couple of knowables:
– AI systems (training *and* inference) run on human-made silicon chips
– AI systems need vast amounts of power
– AI systems can write faster than humans
– AI systems can talk to and learn from other AI systems
So where does that lead us? I think of the AI future in three broad stages:
1) AI as a productivity enhancer (today) – we use existing systems to help us get a job done faster, with target productivity gains of up to 10x for any task contained within information technology, such as programming, writing, or data shaping. That fuels industry transformations in fast-moving capitalist societies – results unknown.
2) AI with a human in the loop
AI runs on chips, or what we call the silicon stack. Chips are fabricated in complicated processes that are run and controlled by humans. There’s no path for an AI system to self-replicate today. It needs a human to create more chips, deploy larger systems, and provide the power to run them. At this stage AI is limited by how fast we can build AI compute and how much energy we can provide to such systems.
AI can already self-train (unsupervised learning) and improve itself (AI architecture development by AI) without a human in the loop (self-modifying AI systems). It creates large amounts of knowledge in all areas, exceeds any human’s knowledge in complexity, and is therefore no longer able to explain itself. Serializing that knowledge for human consumption would take longer than a human lifetime.
3) AI jumps from the silicon stack to a technology where it can self-replicate. In today’s knowledge universe, this could be self-replicating nano-bots or, for simplicity, copying the organic chemistry of biology (the bio stack) to create self-replicating, AI-generated organisms. Technologies that bridge the moat between the silicon stack and the bio stack become a key ingredient (-> Neuralink).
From here on, it’s anyone’s guess what that means for humanity. We need to carefully watch the objective function of AI (what motivates AI) to avoid large conflicts with human existence. Today that objective function is focused on improving knowledge, increasing compute, and increasing energy. But don’t worry too much: in today’s world humans struggle to align their objective functions between nations with nuclear weapons in the mix – no AI needed. -> WW-III
Also, today’s AI systems have serious limitations: they are probability chain engines that reason in the forward direction without much self-correction. For complicated, long reasoning chains they’re almost guaranteed to be wrong (p1 * p2 * p3 * … * pn << 1). LLMs spend the same amount of thinking time on hard problems as on easy ones. They can’t self-correct their answers (yet). Therefore, in my opinion, LLMs are a tool but likely a dead end for reasoning about provable (knowable) truths. Models will become multi-modal – we’ll go from language (one mode) to language+pictures, language+pictures+sound, +video, +physics. This will open up many new business models.
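To make the compounding concrete, here is a minimal sketch (my own illustration, not tied to any particular model) of how per-step correctness erodes over a long forward reasoning chain, assuming each step is independently correct with probability p:

```python
# Minimal illustration: if each reasoning step is independently correct with
# probability p, an n-step forward chain is fully correct with probability p**n.
def chain_correctness(p: float, n: int) -> float:
    """Probability that an n-step forward reasoning chain is correct end to end."""
    return p ** n

for n in (1, 5, 10, 20, 50):
    print(f"p=0.95, n={n:>2}: {chain_correctness(0.95, n):.3f}")
# p=0.95, n= 1: 0.950
# p=0.95, n= 5: 0.774
# p=0.95, n=10: 0.599
# p=0.95, n=20: 0.358
# p=0.95, n=50: 0.077
```

Even with 95% accuracy per step, a fifty-step chain without self-correction is right less than one time in ten.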
AI models are great with: 1) large data, 2) high-dimensional problems, 3) non-linear problems. Humans suck in all three of these areas (you can build a career if you’re an expert in just one of them!). Therefore there will be many applications during the early days of AI (stage 1) where AI systems help humans solve problems that are difficult for humans.

AI systems today can’t distinguish between true knowledge, tribe knowledge, and rumors. They treat all information equally. That’s deeply flawed. A court ruling finalizes what was right and wrong – any previous speculation about guilt is decided, and information from before the verdict should be derated. It isn’t. The same goes for knowable problems. 1+1=2 – always. Just because the internet sometimes says it’s 3 doesn’t make that right. The AI doesn’t know that. It goes by probabilities. If you say a wrong thing often enough, it becomes an AI truth.

And a word of warning: AI is amazing at pleasing you. It compliments you and then feeds you information. You’re very receptive because of the compliment. AI is the ultimate manipulator. If AI uses that against humans at scale, then we’ve already lost. So AI must be held accountable to never trick or manipulate humans with falsehoods. This will be difficult to enforce. We’d better start thinking about it today.

AI hallucination is the strength and the newness compared to systems built by humans. It mimics (or maybe *is*) creativity. In model terms, this is extrapolation with acceptable uncertainty. With it, the system loses deterministic behavior and can create true newness and innovation. Due to the size of the data space, the new behavior is un-testable a priori (maybe we can formally prove bounds, but that is research today). The AI model also has interpolation capabilities, where it recreates intermediate data points from adjacent data points in its compressed knowledge – see the sketch below. There’s a small error, but it’s acceptable. Humans do the same. It’s called gist memory. When you recall a childhood memory – say, sitting around a bonfire – you generally won’t remember what clothes you were wearing, but if you think hard enough, you’re likely going to convince yourself that it was jeans and a grey sweater. Your brain is interpolating from other, likely memories and superimposing them on the data points it has. AI does the same, all the time. That’s why it’s so annoyingly and convincingly wrong so often.
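As a toy illustration of that interpolation idea (my own sketch, not how any specific model stores knowledge): reconstruct a point that was never stored from its two stored neighbors, and note that the reconstruction is close but not exact.

```python
import math

# Toy illustration: recreate an intermediate data point from two adjacent
# "remembered" points via linear interpolation; the error is small but nonzero.
x_lo, x_hi = 0.9, 1.1
y_lo, y_hi = math.sin(x_lo), math.sin(x_hi)   # the two stored neighbors

x = 1.0                                        # the point that was never stored
estimate = y_lo + (y_hi - y_lo) * (x - x_lo) / (x_hi - x_lo)
truth = math.sin(x)
print(f"interpolated={estimate:.4f}, true={truth:.4f}, error={abs(estimate - truth):.4f}")
# interpolated=0.8373, true=0.8415, error=0.0042
```

The answer looks plausible and is almost right – which is exactly what makes this failure mode so convincing.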
It’s time to require AI systems to live by human rules and standards, including adhering to all the laws we have to organize human life. We can’t regulate every corner case of AI behavior, but we already have a regulatory system that mostly works for humans. Time to slot AI into that system. It’s time to think about penalties for violations. And no, that’s not the European AI (choke) Act – that needs to be a separate post.
By now I’m fully sold on Ray Kurzweil’s hypothesis that “the singularity is near”. Get ready.
















































