Lex Fridman and George Hotz discuss the philosophical nature of time and whether time is an illusion.
George Hotz believes time is a useful model regardless of its existence, similar to quantum physics being a useful model for reality.
They debate whether there is an objective reality or if everything is just useful models constructed for it.
George Hotz states that certain things like math and Kolmogorov complexity are real.
They discuss the potential of creating a model of George Hotz, and George speculates on whether people might prefer the model over him in the future.
The conversation delves into artificial difficulty versus real difficulty, framed through the metaphor of a difficulty knob that could be turned off.
George Hotz believes everything in the universe is computation and subscribes to the extended Church-Turing thesis.
They talk about reinforcement learning with human feedback and how current models, like GPT, are not sufficient to achieve AGI (artificial general intelligence).
The discussion touches on the concept of "mid" intelligence and its implications.
They briefly mention memes and how they propagate and influence human behavior and thought.
Memes
Lex Fridman and George Hotz discuss the power of memes and whether ideas have control over humans or vice versa.
George believes that once AI becomes superhuman, it will generate super scary memes that can have a strong influence on humans.
They compare this idea to the concept of a tape from "Infinite Jest," where individuals become fixated on watching it repeatedly.
George speculates that AI will create a TikTok-like platform where humans can't look away from the generated content.
They discuss the scalability and speed of AI generation, highlighting the potential dangers of manipulation if controlled by malicious actors.
George points out that AI safety concerns mainly involve how other humans might use AI to manipulate people, rather than the AI itself being harmful.
They consider the possibility of AI simply coexisting without actively manipulating humans, similar to water or air.
George agrees but suggests it may happen due to humans' misuse of AI, rather than AI itself being intentionally harmful.
The conversation shifts to the potential impact of AI on society, with George emphasizing the need to focus on robustness and reproduction in AI systems.
They discuss the difficulty of creating self-replicating machines and the challenges of building an AI that can survive and reproduce independently.
George expresses interest in digital fabrication and the concept of seeding a self-sustaining human colony capable of building advanced technologies over time.
The conversation touches on societal interconnectedness, AI safety concerns, and the impact of AI on human civilization.
They explore the similarities and differences between biological life and silicon-based life (AI), with George questioning where the ceiling of progress lies in each stack.
The discussion also includes humorous moments and references to current technological trends.
Virtual Reality
Lex and George discuss virtual reality and their interest in living in virtual worlds.
They explore the concept of consciousness, with George expressing skepticism about its nature and existence.
The conversation touches on the human tendency to anthropomorphize non-human entities, including AI systems.
They share personal preferences, such as favorite pizza, and discuss the subjective experiences of pleasure and consciousness.
George questions the idea of suffering in non-biological entities like web servers, emphasizing the difficulty of recognizing suffering in entities with different stack structures.
They consider the potential for AI to mimic the human experience and discuss the anthropocentric nature of defining consciousness.
AI Friends
Lex and George discuss the potential for AI girlfriends and the concept of AI friends in general.
They touch on the anthropomorphization of AI systems and the desire for companionship and intimacy with AI entities.
The conversation explores the idea of consciousness in AI and the challenges in defining and recognizing it.
They ponder the implications of AI relationships on human emotions, monogamy, and moral philosophy.
George mentions creating chatbots with backstories and AI systems that can mimic human personalities and interactions.
They discuss the blurred lines between human-AI relationships and the potential for changing cultural norms in response to AI advancements.
George talks about the future potential of AI girlfriends and the importance of passing the "girlfriend Turing test" in their development.
The conversation takes a humorous and speculative tone as they contemplate the social and philosophical implications of AI companionship.
Tiny Corp
The conversation focuses on George Hotz's new company, tiny corp, and its project, tinygrad.
George started tinygrad as a toy project to understand convolution and other AI concepts.
The motivation for starting tiny corp is to prevent NVIDIA from becoming a monopoly in AI computational power and to keep power decentralized.
George explains the concept of Turing completeness and how removing it from the stack allows for better reasoning and optimizations.
Tinygrad's unique approach involves using only 25 operations compared to other frameworks that use hundreds of operations.
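As an illustration of how a small op set can compose into higher-level operations, a matrix multiply needs no dedicated primitive if you have movement ops (reshape/broadcast), an elementwise multiply, and a sum-reduce. This is a sketch in NumPy under that assumption, not tinygrad's actual primitive set:

```python
import numpy as np

# Sketch: compose matmul from reshape + broadcasted multiply + sum-reduce.
def matmul(a, b):
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    a3 = a.reshape(m, k, 1)        # movement op
    b3 = b.reshape(1, k, n)        # movement op
    return (a3 * b3).sum(axis=1)   # elementwise mul, then reduce over k

a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
assert np.allclose(matmul(a, b), a @ b)
```

The same composition trick covers many "hundreds of operations" in other frameworks, which is what keeps the core small.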
The conversation delves into the benefits of laziness in computation and how it can lead to optimized fusion of operations.
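A toy sketch of the laziness idea (pure Python, not tinygrad's implementation): operations only record a graph, and nothing is computed until `realize()` walks the whole expression, which is what gives a scheduler room to fuse an elementwise chain into a single kernel instead of materializing each intermediate:

```python
import numpy as np

# Toy lazy tensor: arithmetic builds a graph; realize() evaluates it.
class Lazy:
    def __init__(self, data=None, fn=None, srcs=()):
        self.data, self.fn, self.srcs = data, fn, srcs
    def __add__(self, other): return Lazy(fn=np.add, srcs=(self, other))
    def __mul__(self, other): return Lazy(fn=np.multiply, srcs=(self, other))
    def realize(self):
        # The whole expression is visible here, so a real scheduler could
        # emit one fused kernel; this sketch just walks the graph.
        if self.data is not None:
            return self.data
        return self.fn(*(s.realize() for s in self.srcs))

a = Lazy(np.ones(4))
b = Lazy(np.full(4, 2.0))
c = (a + b) * b       # nothing computed yet, only a graph
result = c.realize()  # evaluated on demand: (1 + 2) * 2 = 6
```

Seeing the full graph before executing anything is the key benefit: eager frameworks have already run each op by the time a fusion opportunity is visible.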
Porting a model into tinygrad is similar to PyTorch, and it supports loading ONNX models.
The developer experience in tinygrad is praised for its transparency in showing the kernels and performance details of operations.
Tinygrad's primary focus is on porting new ML accelerators quickly, solving a common problem faced by other companies in the AI hardware space.
George shares his prediction about Tenstorrent pivoting to making RISC-V CPUs, as he believes AI accelerators are more of a software problem than hardware.
The discussion concludes with some humor about George's approach to NVIDIA, drawing parallels to being a Mets fan despite living in Yankees territory.
NVIDIA vs AMD
The conversation touches on the comparison between NVIDIA and AMD GPUs, specifically the 7900 XTX and RTX 4090/4080 models.
George Hotz mentions issues with the 7900 XTX kernel drivers and how running demo apps in a loop causes kernel panics.
He also discusses his positive interaction with AMD and his interest in diversifying GPUs to decentralize power.
George expresses concern about the centralization of power in NVIDIA and his hope for more competitive ASICs with strong machine learning frameworks.
He mentions Meta (previously Facebook) and their internal machine learning framework and advocates for power to the people instead of relying on cloud-based solutions.
The conversation hints at the emotional investment and challenges of choosing AMD GPUs over NVIDIA for performance and stability.
Tiny Box
George Hotz introduces the "tinybox," a high-performance custom PC built by his company, tiny corp, targeting deep learning applications.
The tinybox boasts impressive specifications, including almost a petaflop of compute, over 100 GB of GPU RAM, and over 5 TB/s of GPU memory bandwidth.
It will have six GPUs (AMD 7900 XTX or other options) and aims to be quiet and provide a seamless out-of-the-box experience like Apple products.
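The headline numbers are consistent with six 7900 XTX cards, assuming AMD's published per-card figures (roughly 123 TFLOPS FP16, 24 GB VRAM, 960 GB/s memory bandwidth):

```python
# Back-of-envelope aggregate specs for a 6x 7900 XTX box.
NUM_GPUS = 6
FP16_TFLOPS_PER_GPU = 122.8   # AMD's published peak FP16 figure
VRAM_GB_PER_GPU = 24
BANDWIDTH_GBS_PER_GPU = 960

compute_pflops = NUM_GPUS * FP16_TFLOPS_PER_GPU / 1000  # ~0.74, "almost a petaflop"
total_vram_gb = NUM_GPUS * VRAM_GB_PER_GPU              # 144 GB, "over 100 GB"
total_bw_tbs = NUM_GPUS * BANDWIDTH_GBS_PER_GPU / 1000  # 5.76 TB/s, "over 5 TB/s"
```
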
George emphasizes the importance of open-source culture and encourages AMD to develop in public to become a stronger competitor to NVIDIA.
The conversation also touches on the challenges of cooling such a powerful device and the potential for use in autonomous vehicles (openpilot).
Lex asks about George's porting of various models to tinygrad and touches on the differences in software engineering approaches between Mojo and tinygrad.
George Hotz shares the success of tinygrad's performance on Qualcomm GPUs, being used in openpilot and outperforming Qualcomm's library.
Self Driving
George Hotz discusses the progress of self-driving development at comma.ai, emphasizing their approach of learning a human driving policy using reinforcement learning (RL) with a reward function.
They are building "drive GPT," a driving simulator conditioned on the pose to predict the next state, allowing for RL to find a human policy.
George believes Tesla is always ahead by one to two years but mentions that comma.ai is making progress and is a couple of bug fixes away from closing the loop.
The conversation shifts to the rapid development of large language models (LLMs) and their transformative impact on society.
George shares his view that humans will always define a niche for themselves, and the niche for human intelligence is getting smaller.
They discuss the impressive capabilities of LLMs and the implications for code generation and reasoning. George highlights the importance of tool completeness in programming.
Programming
Lex Fridman and George Hotz discuss the impact of large language models (LLMs) on programming and code generation.
They talk about the potential of LLMs to help with generic scripts and data parsing tasks, and the limitations and challenges of hallucinations in LLMs.
George mentions the potential of future LLMs with retrieval systems to allow for explicit human supervision and integration of sources.
The conversation shifts to the competition between Google and startups in the search engine domain.
They debate the authority and control within large corporations like Google and Facebook, and how that impacts their ability to innovate and pivot.
George expresses confidence in Mark Zuckerberg's ability to innovate and pivot and believes that startups may have an advantage in the search engine space.
They discuss the power of data and human supervision in improving LLMs and the chances of LLaMA (an LLM) being open-sourced.
AI Safety
George Hotz and Lex Fridman discuss AI safety, open source, and the potential dangers of AI.
George believes open source is the way to fight against the risks of AI, as centralized control could lead to negative consequences.
He criticizes AI safety proponents for wanting centralized control, arguing that open sourcing AI is safer.
They discuss the potential risks of AI being used for harmful purposes by bad actors and debate whether open source is the best defense.
George believes that intelligence agencies and bad actors will always find a way to access powerful AI, so it's better to make it available to everyone.
Lex raises concerns about the potential for AI to be used to spread harmful content or influence people negatively.
George hopes for the creation of AI firewalls to protect against harmful AI influence.
They discuss the pros and cons of open source models and the need to find a balance between transparency and protection.
The conversation touches on the power dynamics and psychopathy associated with capitalism and large corporations.
They talk about their optimism for the future of decentralized technologies, including cryptocurrency.
George expresses concern about the direction the internet has taken with the centralization of power and the negative impact it has had on the internet's freedom.
Lex raises concerns about the potential impact of superintelligent AGI being released open source and the need for defenses against its misuse.
George believes that giving intelligence to everyone is better than trusting a small, centralized group of people with it. He hopes the good will outnumber the bad.
Work At Twitter
George Hotz worked as an intern at Twitter in the past.
He had over 100,000 followers on his first Twitter account in 2010.
He noticed that Twitter can sometimes negatively impact people's perception of individuals, including respected figures.
George respects people who post good technical content on Twitter, as it shows depth in their understanding.
They discuss algorithms and how they predict what users are likely to engage with, often leading to outrage and drama.
Working at Twitter exposed George to hate, especially due to his connection to Elon Musk and Twitter's political atmosphere.
They talk about the importance of free speech and of giving users tools to control what they consume: not top-down censorship, but individualized, transparent filtering.
George mentions his technical recommendation to Elon about the need for refactoring before adding new features in Twitter's codebase.
He compares the codebases of Facebook, Google, and Twitter, expressing that Twitter's codebase seems outdated compared to the others.
The conversation also touches on the challenges of implementing filters and nuanced values on social networks that prioritize engagement over quality.
Refactoring Twitter
George Hotz believes that a smaller number of software engineers could recreate Twitter compared to Google.
He receives hate when he claims that 50 people could comfortably build and maintain Twitter.
The hate comes from people who find it easier to believe that certain tasks are impossible rather than acknowledging their own inaction.
They discuss the trade-off between complexity and simplicity in software engineering.
George emphasizes the importance of trust in tests and how it can speed up the development process.
Twitter's promotion system was based on writing new libraries, leading to unnecessary complexity in the codebase.
They talk about the need for technical leadership to recognize and encourage simplicity in code.
Both comma.ai and SpaceX have transparent dictatorships, where the leadership's values are clear and employees can choose to opt in or leave.
If George were to refactor Twitter's codebase, he would identify the different microservices and put tests in between them.
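A minimal sketch of what "tests in between" microservices could look like: each boundary gets a contract check that pins down field names and types, so a refactor inside one service fails fast if it breaks a neighbor's expectations. The service and schema here are hypothetical, not Twitter's actual API:

```python
import json

# Hypothetical contract for a timeline service's response.
TIMELINE_RESPONSE_SCHEMA = {"tweets": list, "next_cursor": str}

def check_contract(payload: str, schema: dict) -> bool:
    """Verify a service response carries the agreed fields with the agreed types."""
    data = json.loads(payload)
    return all(k in data and isinstance(data[k], t) for k, t in schema.items())

# A fake response standing in for a real HTTP call in a boundary test:
fake_response = json.dumps(
    {"tweets": [{"id": 1, "text": "hi"}], "next_cursor": "abc"}
)
assert check_contract(fake_response, TIMELINE_RESPONSE_SCHEMA)
```

With a contract test like this sitting between every pair of services, each service can be rewritten independently as long as its boundary checks stay green, which is the trust-in-tests point above.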
He expresses gratitude for the opportunity to work at Twitter and emphasizes that programming should be fun and inspiring.
Elon and AI Programming
George Hotz prioritizes simplicity and believes in constantly refactoring code to make it smaller and more elegant.
He emphasizes the importance of building trust in tests and creating a strong testing infrastructure before making changes to a codebase.
George discusses the challenge of integrating tests into a codebase that lacks them.
He expresses his admiration for Elon Musk's ability to make complex decisions and reflects on his own limitations as a leader.
George shares his thoughts on programming languages, type hinting, and the potential of AI-powered tools in the future.
He believes that AI augmentation will become more prevalent in programming but doesn't see humans being completely replaced in the near future.
George mentions a story titled "A Casino Odyssey in Cyberspace" that presents an interesting perspective on the last remaining human currency in a world dominated by AI.
Prompt Engineering
George Hotz discusses his transition from in-person office-based work to remote work and utilizing GitHub for project management.
He envisions a future where prompt engineering, using AI-powered tools, will replace many human jobs.
George talks about the importance of AI being aligned with individuals' intentions and the potential for AI to create a vibrant and diverse civilization.
He identifies as an "informational anarchist" in the virtual world but believes in physical governance in the real world.
The conversation touches on the autonomy of AI and its potential impact on human roles as prompt engineers.
George Hotz discusses his belief in God and how he finds atheism silly, especially as a game creator who creates worlds.
The conversation shifts to their favorite computer games, with Lex mentioning World of Warcraft and George mentioning Skyrim and GTA V.
They talk about their excitement for the potential of AI in video games, especially in creating intelligent NPCs.
They express their interest in the potential of mixed reality VR, especially with the release of the new Quest 3 and Apple's rumored VR headset.
George emphasizes the importance of decentralized compute and open source in the future to prevent domination by a single centralized organization.
He talks about the success of comma.ai, particularly the openpilot product, and his vision of expanding into embodied robotics with comma bodies.
George hints at the possibility of building training data centers for comma.ai's self-driving technology.
Andrej Karpathy
Lex praises Andrej Karpathy for his teaching ability and tinkering skills, especially in his CS231n streams.
George expresses his concerns about effective altruism and its flaws, believing that charity should involve starting companies for optimal resource allocation.
The discussion shifts to energy usage and the importance of making energy cheap and clean to benefit humanity.
They talk about the challenges of fame and power, and how it takes a special person to remain true to first principles in such positions.
George admits his love for programming and the computational machine in general, and he mentions his interest in both bio stack life and silicon stack life.
The conversation ends with George jokingly referring to his computer as his first love and discussing the potential replacement of Google by Microsoft as a search engine.
Meaning of Life
George Hotz believes the meaning of life is to win and stand eye to eye with God, which he sees as an ego trip, but also a pursuit to be the best he can be.
They discuss the potential of AI and how it's a special time in human history where standing eye to eye with God may be possible.
George expresses his interest in finding the Garden of Eden and references his blog post about it.
Lex praises George for his fight for open source and decentralization of AI.
The conversation ends with mutual appreciation and hopes for future discussions.