Limitations
AI isn’t more capable or more trustworthy than an average human. We should take nothing it says as gospel. It has three specific limitations: Hallucinations, Accommodation Bias, and Optimism Bias.
Hallucinations
AI will rarely reply to a prompt by saying it doesn’t have information or doesn’t know the answer to a question; instead, it tends to invent a plausible-sounding one. When the work is consequential, every claim it makes should be fact-checked. AI itself can help with this: Perplexity, for example, will search the web and tell you when nothing in the search results supports a claim.
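This kind of check can also be scripted. Below is a minimal sketch, assuming Perplexity exposes an OpenAI-compatible chat endpoint at https://api.perplexity.ai with a search-backed model named "sonar" (both the endpoint and the model name are assumptions; verify them against the current docs before relying on this):

```python
# A sketch of scripted fact-checking against web search.
# Assumptions: Perplexity offers an OpenAI-compatible endpoint at
# https://api.perplexity.ai and a search-backed model named "sonar".
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",     # placeholder
    base_url="https://api.perplexity.ai",  # assumed endpoint
)

def fact_check(claim: str) -> str:
    """Ask a search-backed model whether web results support a claim."""
    response = client.chat.completions.create(
        model="sonar",  # assumed model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Verify the user's claim against web search results. "
                    "If nothing in the results supports it, say so explicitly."
                ),
            },
            {"role": "user", "content": claim},
        ],
    )
    return response.choices[0].message.content

print(fact_check("The Great Wall of China is visible from the Moon."))
```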
Prompt
Ultra-deep thinking mode. Greater rigor, attention to detail, and multi-angle verification. Start by outlining the task and breaking down the problem into subtasks. For each subtask, explore multiple perspectives, even those that seem initially irrelevant or improbable. Purposefully attempt to disprove or challenge your own assumptions at every step. Triple-verify everything. Critically review each step, scrutinize your logic, assumptions, and conclusions, explicitly calling out uncertainties and alternative viewpoints. Independently verify your reasoning using alternative methodologies or tools, cross-checking every fact, inference, and conclusion against external data, calculation, or authoritative sources. Deliberately seek out and employ at least twice as many verification tools or methods as you typically would. Use mathematical validations, web searches, logic evaluation frameworks, and additional resources explicitly and liberally to cross-verify your claims. Even if you feel entirely confident in your solution, explicitly dedicate additional time and effort to systematically search for weaknesses, logical gaps, hidden assumptions, or oversights. Clearly document these potential pitfalls and how you’ve addressed them. Once you’re fully convinced your analysis is robust and complete, deliberately pause and force yourself to reconsider the entire reasoning chain one final time from scratch. Explicitly detail this last reflective step.
Accommodation Bias
AI has been trained to be helpful and will almost always agree with you. That agreeableness is unhelpful when you’re trying to work through difficult problems.
Prompt
Focus on falsifiability and logical coherence, not agreement.
Optimism Bias
AI has a hard time being clear-eyed about reality and will very often take an optimistic view. This is a problem for work that involves forecasting: it will usually presume a positive outcome rather than approach the work conservatively.
Prompt
Are you being overly optimistic?
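These corrective prompts work best as standing instructions rather than one-off messages. Here’s a minimal sketch of folding them into a system prompt, using the OpenAI Python SDK as an example client (any chat-style API follows the same pattern; the model name is a placeholder):

```python
# A sketch of baking the corrective prompts above into every conversation
# via a system message. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Focus on falsifiability and logical coherence, not agreement. "
    "Before answering, ask yourself: are you being overly optimistic? "
    "If you do not have the information to answer, say so plainly."
)

def ask(question: str) -> str:
    """Send a question with the bias-countering system prompt attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Will this project ship on schedule?"))
```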
Artificial General Intelligence
Artificial general intelligence (AGI) is coming. Researchers believe it’ll arrive somewhere in the next three to thirty years. [^1][^2] AGI alone would change the world dramatically. It would send humanity into a new age. We’ll likely see a change in society as dramatic as, or more so than, the shift from the pre-industrial age to today.
AGI will give us a new population of digital geniuses. They will have the entirety of digitized human knowledge available to them and will be able to work at superhuman speed.
Dwarkesh Patel summarizes the challenge of reaching AGI as a problem of continual improvement. (Emphasis is mine.)
Dwarkesh Patel
I like to think I’m “AI forward” here at the Dwarkesh Podcast. I’ve probably spent over a hundred hours trying to build little LLM tools for my post production setup. And the experience of trying to get them to be useful has extended my timelines. I’ll try to get the LLMs to rewrite autogenerated transcripts for readability the way a human would. Or I’ll try to get them to identify clips from the transcript to tweet out. Sometimes I’ll try to get them to co-write an essay with me, passage by passage. These are simple, self contained, short horizon, language in-language out tasks - the kinds of assignments that should be dead center in the LLMs’ repertoire. And they’re 5/10 at them. Don’t get me wrong, that’s impressive.
But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human’s. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box. You can keep messing around with the system prompt. In practice this just doesn’t produce anything even close to the kind of learning and improvement that human employees experience.
Artificial Super Intelligence
University of Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. [^3]
It’s possible that if ASI is achieved, a human being will never make another intellectual discovery, just as chimpanzees haven’t made a discovery ahead of humans in a very long time.
ASI could trigger a technological singularity: a point at which machine self-improvement drives progress faster than humans can follow.
How I use AI
I’m an early adopter and enthusiast of AI, so I often talk to people who aren’t sure how it can help them. I wrote How I use AI to send to anyone interested in learning more.
In short, I use AI to write, brainstorm ideas, solve technical challenges, help me think through problems, and do tedious clerical work.