Centuries in Decades: Burak Gokturk on Exponential AI Growth
Reflections and predictions from thirty years in the AI space.
We were delighted to have the opportunity to interview Burak Gokturk, VP, Google Cloud, ML, Systems and AI Research, whom we connected with at the MIT AI Summit in October. Burak has nearly 30 years of pioneering experience in artificial intelligence, both as an entrepreneur and as an enterprise technology leader. After completing his Ph.D. in computer vision at Stanford, Burak launched several influential startups. His ventures ranged from successful acquisitions (such as a 3D camera startup acquired by Microsoft) to ambitious projects that proved ahead of their time, like an AI-powered photo organization startup which, despite significant initial interest from tech giants, struggled due to early market conditions.
Most notably, Burak founded an innovative AI startup focused on shopping and advertising automation, which became one of Google's fastest-growing AdWords clients before being acquired. At Google, he led the development of Shopping Campaigns, transforming it into a cornerstone 11-figure business. He currently leads AI research and innovation at Google Cloud, strategically guiding efforts not just toward immediate advances but also toward future needs, such as improving infrastructure to handle escalating AI workloads and anticipating customer and market demands.
He sat down with ASV founder Somak Chattopadhyay to discuss the past, present, and future of co-intelligence.
Somak Chattopadhyay: Given your long journey in AI, I'm curious: At what point did you really see AI transition from just machine learning—feeding historical data—to something closer to the transformative AI we talk about today?
Burak Gokturk: I’d say in the last 10-12 years, we've seen an exponential leap in capability. Back when I was doing my Ph.D., I tried predicting where AI would be in 10, 50, even 100 years. I thought we'd reach certain milestones maybe in 100 or 200 years. But we've achieved many of those goals in just 20 years. That’s primarily due to breakthroughs in deep learning, neural networks, transformers, and better parallelization with hardware. The improvements in computing power were huge. Equally important was developing methods for handling massive unlabeled datasets, which allowed us to leverage huge corpora more efficiently. These factors have driven rapid progress, surpassing even my earlier optimistic predictions.
SC: Given your experience, what are some significant internal AI applications you've observed at large organizations like Google?
BG: Internally at Google, AI substantially improves employee and operational efficiency. For instance, customer service can offer quicker and higher-quality responses around the clock. Developer productivity has also dramatically increased through AI-assisted coding, allowing engineers to rapidly produce, debug, and refine code. Meanwhile, AI-powered internal knowledge management systems simplify the process of finding critical information quickly.
Another impactful use is Google's AI-driven research assistant Co-Scientist, designed to accelerate scientific discovery, especially in healthcare. By analyzing vast amounts of research data, it can generate hypotheses in days that traditionally required years, greatly accelerating medical breakthroughs. For example, recent projects demonstrated groundbreaking progress in genetic research and potential mRNA treatments for complex diseases.
SC: AI-assisted research sounds incredibly promising. Could you share more specifics about the Co-Scientist project?
BG: Co-Scientist employs a multi-agent AI system capable of rapidly synthesizing information across diverse datasets. This system multiplies the effectiveness of researchers by essentially enabling them to “parallelize” their efforts across numerous areas simultaneously. In a recent demonstration, researchers at Imperial College were astonished when Co-Scientist produced findings in two days that had taken them over a decade to uncover. Its results were so accurate that they initially suspected human intervention. Experts involved confidently stated that such technology could fundamentally change scientific research and enable cures for previously intractable diseases within the next ten years.
SC: Given the rapid advances in AI, how do you view Agentic AI, especially regarding its impact on traditional SaaS models? Some pundits have suggested that agents will eat or “unbundle” enterprise software entirely.
BG: Agentic AI expands upon the capabilities of traditional LLMs by incorporating reasoning, planning, and external tool usage. However, despite these capabilities, current LLMs struggle with accurately selecting from numerous external tools — something critical for fully autonomous agents.
Therefore, I advise startups to initially limit the complexity of agentic applications. For example, instead of an all-encompassing travel app, focus first solely on hotel bookings to keep the toolset manageable. In the broader context, rather than fully replacing traditional SaaS solutions, I anticipate agentic AI will enhance existing software systems. It will integrate with and improve current SaaS offerings, leveraging their expertise and infrastructure, rather than fully displacing them.
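To make the advice about narrowing an agent's scope concrete, here is a minimal, hypothetical sketch; it is not code from Google or from the interview, and the Tool class, search_hotels, book_hotel, and dispatch names are all assumptions for illustration. The idea is simply that a small, focused tool registry gives a model far fewer opportunities to select the wrong tool than an all-purpose travel agent would.

```python
# Hypothetical sketch: a deliberately small tool registry for a
# hotel-booking-only agent. All names here are illustrative.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    description: str
    run: Callable[..., str]


def search_hotels(city: str, check_in: str, nights: int) -> str:
    # Placeholder for a real inventory lookup.
    return f"3 hotels found in {city} from {check_in} for {nights} night(s)"


def book_hotel(hotel_id: str, guest: str) -> str:
    # Placeholder for a real booking API call.
    return f"Booked {hotel_id} for {guest}"


# Keeping the registry to a handful of tools reduces the chance that
# the model picks the wrong one when planning an action.
HOTEL_TOOLS = [
    Tool("search_hotels", "Find hotels by city and dates", search_hotels),
    Tool("book_hotel", "Book a specific hotel by id", book_hotel),
]


def dispatch(tool_name: str, **kwargs) -> str:
    """Route a model-selected tool call to its implementation."""
    for tool in HOTEL_TOOLS:
        if tool.name == tool_name:
            return tool.run(**kwargs)
    raise ValueError(f"Unknown tool: {tool_name}")


if __name__ == "__main__":
    print(dispatch("search_hotels", city="Boston", check_in="2025-06-01", nights=2))
```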
SC: We’ve talked about areas where AI has rapidly penetrated. Are there sectors or industries where AI adoption has been surprisingly slower?
BG: Yes, manufacturing stands out. While factories have traditionally used a lot of AI, it has typically been more classical AI (machine vision and precise computations) rather than LLMs. Current LLMs aren't yet great at tasks requiring mathematical precision, such as matrix operations and regression, which limits their adoption in manufacturing. However, as these models improve, we'll see significant advancements there.
Another slower sector is anything heavily regulated, such as finance and healthcare. Retail companies can implement new AI applications within days, but in finance and healthcare, compliance and regulations make the implementation process significantly slower, sometimes years rather than months.
SC: When assessing AI startups, how do you think about defensibility? What factors make an AI product defensible?
BG: Defensibility in AI startups primarily hinges on proprietary data and talent. While algorithms alone rarely provide sustainable competitive advantages due to rapid technological advances, unique and hard-to-acquire datasets remain highly valuable. Equally important is talent: not just skilled engineers, but also domain experts who continuously advance the product, maintaining a competitive edge.
At Google, while talent is abundant, we also place heavy emphasis on a mindset of continual evolution and forward thinking. Ensuring the entire team is proactive and adaptive keeps products innovative and competitive.
SC: With generative AI now capable of producing vast amounts of code, do you see a shift in the type of talent that will be most valuable?
BG: Absolutely. While solid computer science fundamentals remain important, creativity and innovation will increasingly differentiate talent. AI can generate substantial code, but humans still need to oversee, understand, debug, and creatively direct it. The real value moving forward will be in combining technical knowledge with creative and interdisciplinary skills. For the next generation, I'd encourage blending STEM with creative fields or social sciences to be truly future-ready.
SC: What are you personally most excited about regarding the near-future applications of AI?
BG: Two areas excite me most: personalized healthcare and personalized education. I envision a future where AI enables a scenario like “one doctor per patient” and “one teacher per student.” In healthcare, we’re already seeing AI assist in groundbreaking research, accelerating discoveries in areas like genetics and treatment of complex diseases. Similarly, AI in education can democratize access to high-quality, personalized learning. Those are two areas where humanity needs a lot of help.