OpenAI's Product Leader Discusses Their Strategy & the Future of AI
Ep. 9, Unsupervised Learning
On the 9th episode of Unsupervised Learning, we had an amazing discussion with Peter Welinder, VP of Product and Partnerships at OpenAI. See the full episode below!
We sat down with Peter to discuss where the value lies in the foundation model stack, his thoughts on open vs. closed LLMs, the biggest risks in AI, and more! You can listen to the full conversation on Spotify and Apple, watch the episode on YouTube, or read the highlights below. Stay tuned for our upcoming episode with Daphne Koller, founder of the AI drug discovery company Insitro and co-founder of Coursera!
⚡ Highlight 1: Value accrual in the AI space
“I suspect that most of it will accrue at the application layer across a lot of different applications….And I think central to that [for us] is really enabling as many builders as possible to build products on top of this technology.”
Peter discusses the need to drive down costs and increase speed at the foundation model layer to enable value creation at the application layer. We also discuss the need for LLM tooling to build out the infrastructure layer for future applications.
⚡ Highlight 2: Open source vs. closed source LLMs
Following the release of GPT-3 and ChatGPT, there has been an explosion of open-source models, from BLOOM to Stanford's Alpaca, which was fine-tuned from Facebook's LLaMA. However, whether open or closed source models will win the LLM race remains an open debate.
“I think there's always going to be a category of models or AI systems that will be way, way better than the open source versions of those same models. We have seen that play out in a lot of other areas like desktop OS. Linux has been a thing for 30 years and it's never really caught up to Mac OS X and iOS and Windows and so on.”
We discuss with Peter the scenarios where it makes sense to use open source vs. closed source LLMs, and why he thinks closed source models will continue to dominate in the immediate future.
⚡ Highlight 3: Biggest risks in AI
With the advent of ChatGPT, researchers, regulators, and much of the general public became concerned about the potential safety threats of LLMs.
“I think there's a few different areas, like I think there's a lot of work that can be done on understanding what's going on in these models. They're still fairly black box, so interpretability…I think there's also others around what is alignment? How do you specify goals? How do you specify the guardrails and so on.”
We discussed the lack of research on the safety of superintelligent systems, and how OpenAI is collaborating with researchers who want to pursue this work using its models.
Companies like Anthropic, founded by former OpenAI researchers, have made significant strides in AI safety research areas such as alignment, constitutional AI, and moral self-correction.
Watch an excerpt from the full episode below where Peter explains why OpenAI focused all their efforts on building LLMs!