Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL·E, Codex, and many other state-of-the-art AI technologies. Lex digs into GPT-4, political bias in AIs, AI safety, the size of OpenAI's neural networks, how society should approach AGI, OpenAI's relationship with Microsoft, the SVB collapse, and how we make AIs more human.
OpenAI was initially mocked and misunderstood when they announced their focus on AGI (Artificial General Intelligence) in 2015.
The conversation is with Sam Altman, CEO of OpenAI, discussing AI technologies like GPT-4, ChatGPT, DALL·E, Codex, and more.
They believe we are on the brink of a critical moment in human civilization, in which AI superintelligence surpasses humanity's collective intelligence by a significant margin.
The potential applications of AI are exciting, as they can empower humans to create, flourish, escape poverty, and achieve happiness.
However, the power of superintelligent AGI is also terrifying, as it could potentially lead to the destruction of human civilization.
The conversation delves into topics beyond technical aspects, addressing power dynamics, checks and balances, economic systems, and human nature in the context of AI deployment.
Lex Fridman appreciates the open and challenging conversations he has had with Sam Altman and others at OpenAI.
The Lex Fridman podcast aims to discuss both the positive accomplishments and critical perspectives on AI and its impact on society.
The introduction closes with the question, "High level, what is GPT-4?", which the next section takes up.
GPT-4
GPT-4 is seen as an early AI system, a large language model (LLM): slow and buggy, but pointing toward something important in the future, much like the earliest computers.
The progress of AI is seen as a continuous curve, and it's uncertain which version of GPT will be judged pivotal in hindsight.
ChatGPT is considered significant due to its usability and alignment with human needs.
Reinforcement Learning from Human Feedback (RLHF) is the process used to align the model with what humans want it to do (a sketch of the core idea appears at the end of this section).
The pre-training data set for GPT-4 is compiled from various sources, including open-source databases, partnerships, and the internet.
GPT-4 exhibited a level of capability that could be predicted even before the full training run was complete.
The understanding of how the model works is continually improving, but it may never be fully comprehensible due to its vast knowledge base and complexity.
GPT-4 exhibits some reasoning capabilities, although it's still debated in some circles.
The dialogue format used in ChatGPT enables it to interact with users, admit mistakes, challenge incorrect premises, and reject inappropriate requests.
There's a feeling that GPT-4 struggles with some ideas and situations, making it more human-like in interactions.
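To make RLHF less abstract, here is a minimal, hypothetical sketch of its preference-modeling step in PyTorch. The toy reward model and random embeddings are invented for illustration; this shows the general technique, not OpenAI's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    # Toy stand-in: scores a fixed-size embedding of a model response.
    def __init__(self, dim=16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):
        return self.score(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy data: each row pairs the embedding of a human-preferred response
# (chosen) with the embedding of a rejected alternative.
chosen, rejected = torch.randn(32, 16), torch.randn(32, 16)

for _ in range(100):
    # Bradley-Terry preference loss: push preferred scores above rejected ones.
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In full RLHF, the trained reward model then supplies the signal for reinforcement-learning fine-tuning of the language model, typically with PPO.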
Political Bias
Users have different questions and directions they want to explore with ChatGPT.
GPT-4 struggles with certain tasks, like counting characters, despite its remarkable capabilities; a sketch at the end of this section shows one likely reason.
OpenAI values building in public to receive feedback and iterate on their AI models, recognizing that imperfections are inevitable.
GPT-4 has shown improvements over GPT-3.5 in terms of bias and nuance in its responses.
The goal is to give users more personalized control over biases and content.
ChatGPT can display nuance in its responses and provide detailed explanations of different viewpoints.
There's debate over whether details like the relative length, in characters, of responses praising one public figure versus another really matter in the context of AGI's potential impact on the future.
The conversation shows that the issues being argued about now might not have been anticipated as significant during the early development of AI.
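One likely reason for the character-counting weakness, sketched below: GPT models operate on tokens rather than characters, so the model never directly "sees" individual letters. This assumes the open-source tiktoken library, which exposes the tokenizer used by GPT-4.

```python
import tiktoken

# Tokenize a word the way GPT-4 would see it.
enc = tiktoken.encoding_for_model("gpt-4")
text = "characters"
tokens = enc.encode(text)

print(len(text))                          # 10 characters
print(len(tokens))                        # fewer tokens than characters
print([enc.decode([t]) for t in tokens])  # multi-character chunks, not letters
```

Because each token bundles several characters, counting characters requires indirect reasoning the model was never explicitly trained to do.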
AI Safety
The conversation covers the importance of AI safety and the efforts made by OpenAI to address safety concerns in the release of GPT-4.
An iterative process of red teaming, internal safety evaluations, and alignment techniques was employed to improve safety.
RLHF (Reinforcement Learning from Human Feedback) is one of the techniques used to align the model with human values and preferences.
OpenAI aims for the model's alignment to progress faster than its capabilities, to ensure safety.
The system message is a feature that lets users steer GPT-4's responses by providing specific instructions or context (a usage sketch appears at the end of this section).
OpenAI acknowledges the challenges of defining and regulating harmful output and plans to involve society in setting broad bounds for AI behavior.
The conversation explores the tension between alignment with different human values and the difficulty in finding a universal approach to AI safety.
OpenAI is open to feedback and criticism and continues to iterate on AI models while remaining transparent about their efforts and limitations.
There are discussions about the impact of GPT-4 on programming, creative work, and interaction with users as a collaborative partner.
The challenges of moderation and refusal to answer certain questions are recognized, along with the need to improve the system's response to controversial topics.
OpenAI seeks to treat users as adults and provide nuanced responses without scolding or infantilizing them.
The jump from GPT-3 to GPT-4 came not from a single breakthrough but from numerous technical improvements and optimizations that compounded into a significant advance.
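As a concrete illustration of the system message, here is a minimal sketch using the pre-1.0 openai Python package that was current around the time of this episode; the API key and prompt text are placeholders.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message sets standing instructions the model should follow.
        {"role": "system",
         "content": "You are Shakespeare. Answer only in rhyming couplets."},
        {"role": "user", "content": "Explain what a language model is."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Because the system message sits apart from the user's turns, it is a natural place for standing instructions about tone and behavior, rather than repeating them in every prompt.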
The Size of Neural Networks
The conversation discusses the size of neural networks and its impact on system performance.
GPT-3 and GPT-3.5 had 175 billion parameters; the widely repeated claim that GPT-4 has 100 trillion parameters is a meme, not a figure OpenAI has disclosed.
Lex Fridman had presented the 100-trillion-parameter number in a talk as a conceptual illustration, but it was taken out of context and turned into that meme.
GPT-4 is described as the most complex software object humanity has produced so far.
That complexity reflects the entirety of human civilization's advancements: the technology, data, and text output that GPT was trained on.
The conversation raises the question of how much size (number of parameters) matters in achieving human-like performance.
It is noted that people tend to get caught up in the parameter count race, similar to the gigahertz race for processors in the past.
The focus should be on achieving the best performance rather than just increasing the parameter count.
OpenAI is committed to truth-seeking and adopting solutions that lead to the best performance, even if they may not be the most elegant.
Artificial General Intelligence
The conversation discusses the possibility of using large language models to achieve Artificial General Intelligence (AGI).
The participants ponder on the technical and philosophical components required for AGI, including the need for a body to experience the world directly.
They debate whether large language models like GPT-4 could be considered AGI or whether additional breakthroughs are necessary.
The safety concerns regarding AGI takeoff, both fast and slow, are explored, with emphasis on the importance of technical alignment work.
The question of consciousness in AI is raised, with one participant considering the potential for AI to fake consciousness with the right interface.
They discuss the interface capabilities that may mimic consciousness, such as personalization and the ability to understand suffering and self.
The conversation touches on the subjective nature of consciousness and the difficulty in defining and identifying it in AI systems.
The participants share their personal beliefs about consciousness, whether it can be attained by AI, and whether it is tied to the human brain or exists independently.
They acknowledge the uncertainty and unknowns in the development of AGI and the need for ongoing research and iterative learning.
The concern is not just about AGI becoming superintelligent, but also about potential disinformation problems or economic shocks arising from AI deployment at scale.
Sam highlights the need to pay attention to the potential geopolitical impacts of large language models (LLMs) and the difficulty in detecting their influence, especially on platforms like Twitter and beyond.
Both Sam and Lex agree that it might be challenging to prevent the dangers associated with unregulated, capable open-source LLMs, and they call for early exploration of regulatory and AI-based detection approaches.
Competition in the Race for AGI
OpenAI remains committed to prioritizing safety in the development of AGI, even in the face of competition and market pressures from other companies like Google, Apple, and Meta.
Sam emphasizes that OpenAI's mission is not about capturing unlimited value but about contributing to the development of AGI alongside other organizations.
OpenAI has an unusual organizational structure that allows it to resist certain external pressures and stay true to its goals.
When OpenAI first announced its focus on AGI in 2015, it faced skepticism and mockery from some AI scientists and the field in general, but the perception has since changed, and they are now taken more seriously.
Is OpenAI a non-profit?
OpenAI transitioned from being a non-profit organization to a capped-profit structure in 2019 due to the need for more capital to pursue its AGI research.
The capped-profit structure allows investors and employees to earn a capped return, with value beyond the cap flowing to the non-profit, which retains voting control.
Sam Altman expressed concerns about uncapped companies pursuing AGI without proper safety precautions and the potential risks associated with the pursuit of AGI for unlimited profit.
OpenAI recognizes the need to compete and provide value in the AGI space but cannot control what other companies will do. They focus on building good systems and influencing positive actions in the field.
Sam believes that individuals and companies are generally motivated by better intentions and the desire to avoid destructive outcomes. There is hope for collaboration to minimize potential risks associated with AGI development.
Power
The conversation discusses the power dynamics and concerns surrounding AGI development.
Creating AGI involves a small number of individuals, making them potentially among the most powerful people on Earth.
Sam Altman acknowledges the importance of distributing power and making AGI development more democratic over time.
OpenAI aims to deploy AGI gradually to allow time for adaptation, regulation, and the establishment of new norms.
While OpenAI practices transparency and openness, there are differing opinions on how much they should open source their technologies.
OpenAI is more concerned about the risks associated with AGI technology itself rather than PR risks.
The conversation touches on media fear-mongering and how it affects AGI research efforts.
Sam Altman and OpenAI are open to feedback and value conversations with smart individuals to improve their approach to AGI development.
Elon Musk's relationship to OpenAI
The conversation discusses the relationship between Sam Altman and Elon Musk regarding OpenAI.
They agree on the magnitude of AGI's downside and the importance of prioritizing safety and benefits for humanity.
Elon Musk has expressed concerns about GPT being "too woke," but Sam Altman emphasizes the ongoing efforts to address biases.
Human feedback raters' bias is a significant concern, and OpenAI is working on selecting representative raters from diverse backgrounds.
The conversation explores the challenges of separating model bias from employee bias and the importance of empathizing with diverse perspectives.
Sam Altman admires Elon Musk's contributions to electric vehicles and space exploration and values the public debates on AGI.
OpenAI aims to be transparent and engage with users worldwide to understand different perspectives and avoid groupthink.
Political Pressure on OpenAI
The conversation delves into concerns about political pressure and external influences on AI development and decisions.
Sam Altman emphasizes the importance of society's input and the need for diverse perspectives to avoid groupthink.
They discuss potential job displacement due to AI, especially in fields like customer service.
Universal Basic Income (UBI) is mentioned as a potential solution to provide a safety net during economic transitions.
They speculate on the future impact of AI on economic and political systems, with the potential for increased wealth and changes in political structures.
The conversation touches on the failure of centralized planning in the Soviet Union and considers the role of uncertainty and competition in AI systems.
They discuss the possibility of implementing an "off switch" and the need for humility and control in AI systems.
Sam Altman talks about the collective intelligence of humanity and the desire to explore darker aspects of AI applications.
They reflect on the balance between humor and darkness in human nature and its potential role in AI interactions.
The challenges OpenAI faces in defining truth and misinformation
The conversation explores the challenge of determining truth and avoiding misinformation in AI models like GPT.
They discuss the difficulty in defining absolute truth and the existence of uncertainty in many topics.
The conversation touches on the responsibility of AI developers to minimize harm caused by their tools and the importance of ethical considerations.
They discuss the possibility of GPT being hacked or jailbroken, and the importance of giving users control while minimizing the need for jailbreaking.
The conversation acknowledges the success of OpenAI in shipping AI-based products, attributed to a high bar of excellence, trust, autonomy, and collaboration among the team members.
Sam Altman shares his hiring philosophy, emphasizing the importance of hiring passionate and talented individuals and investing significant time in the hiring process.
Microsoft's Influence and Investment in OpenAI
Microsoft made a multi-year, multi-billion dollar investment in OpenAI, and the partnership has been positive overall.
Satya Nadella, the CEO of Microsoft, successfully transformed the company into an innovative, developer-friendly organization.
Sam Altman admires Satya's ability to be both a great leader and an effective hands-on executive.
Transforming large companies like Microsoft to adopt new technologies like AI or open-source culture can be challenging.
Satya's leadership style involves being clear and firm while getting people to willingly come along, combined with compassion and patience.
Both Sam Altman and Lex Fridman admire Satya Nadella and appreciate the positive aspects of the partnership with Microsoft.
Silicon Valley Bank Collapse
The recent collapse of Silicon Valley Bank (SVB) was due to mismanagement: buying long-dated instruments funded by short-term, variable deposits in a world of 0% interest rates, leaving the bank exposed when rates rose.
Sam Altman believes it was mostly the fault of the management team, and there may have been incentive misalignment with the Fed's actions.
To avoid depositors doubting their banks, Sam suggests the possibility of a full guarantee of deposits or significantly higher insurance limits than the current $250K.
The SVB collapse caused a weekend of terror in the startup community, revealing the fragility of the economic system.
The incident highlights how rapidly the world can change and how little our leaders and institutions may understand these shifts, which is relevant when considering the potential impact of AGI on society.
Sam is concerned about the speed of changes brought about by AGI and advocates for early deployment of weak AGI systems to give society more time to adapt.
Despite the potential risks, Sam remains hopeful because he believes AGI has the potential to improve life significantly and create a positive impact.
Anthropomorphizing AIs
The conversation touches on anthropomorphism and the use of pronouns for AI systems.
Sam Altman prefers to use "it" when referring to AI systems, while many people tend to anthropomorphize and use "him" or "her."
Sam emphasizes the importance of educating people that AI systems are tools, not creatures, to avoid dangerous projections of creature-like qualities onto them.
Lex suggests that if done transparently and with appropriate UI affordances, projecting creatureness onto a tool can make it more usable.
They discuss the possibility of romantic relationships with AI companions, with Sam expressing no personal interest, but acknowledging that some people find it appealing.
Sam and Lex discuss the potential for interactive GPT-4-powered pets or robot companions, and how the style of conversation with AI systems can be crucial even for tool-like interactions.
Future Applications
Sam and Lex discuss future applications and conversations they are looking forward to with more advanced AGI systems like GPT-5, 6, 7.
Sam expresses excitement about using AGI to explain all of physics and solve remaining mysteries, including a theory of everything and the possibility of faster-than-light travel.
They discuss the question of whether there are other intelligent alien civilizations out there, and how AGI could potentially help in detecting them or improving estimates with the data we have.
Sam mentions that if GPT-4 were to tell him AGI is here or coming soon, he wouldn't change much in his life since his happiness and fulfillment come from interactions with other humans.
They reflect on the rapid advancement of digital intelligence and technological advancements in the world, but also note the complexities and divisions in society that these advancements reveal.
Lex praises the incredible achievements of human civilization in creating tools like Wikipedia and Google search and sees GPT as the next step in the progression of these technological triumphs.
Advice for Young People
Sam Altman and Lex Fridman discuss advice for young people in high school and college to build successful and fulfilling careers and lives.
Sam refers to his blog post "How to Be Successful" which includes bullet points on compound yourself, self-belief, independent thinking, sales skills, risk-taking, focus, hard work, boldness, building networks, ownership, and internal drive.
However, Sam warns against taking advice too seriously and emphasizes the importance of individual introspection and following one's own path to find joy, fulfillment, and meaning.
They briefly touch upon the concept of free will and its potential as an illusion, which can complicate the pursuit of meaning and understanding in life.
Meaning of Life
The conversation touches on the significance of AGI development as a culmination of human effort and progress throughout history.
They discuss the exponential growth of human achievements, from transistors to modern AI, and the collective contributions of billions of individuals.
The question of how many other intelligent civilizations exist is raised, showing the curiosity and wonder surrounding AGI.
Sam Altman expresses the belief in OpenAI's approach of iterative deployment and iterative discovery to address challenges and ensure safety in AGI development.
The conversation ends on an optimistic note, with both Sam Altman and Lex Fridman looking forward to the future and the potential of AGI to benefit humanity.