
The MIT AI Conference was held in New York City on Saturday, October 26 at Convene Brookfield Place. The conference assembled experts from a range of organizations operating on the frontier of generative AI. Armory Square’s managing partner Somak Chattopadhyay was invited to moderate a panel on Enterprise AI Infrastructure, featuring the following panelists:
Daniela Braga, founder/CEO of Defined.AI
Jonathan Cohen, VP of Applied Research at Nvidia
Jay Dawani, co-founder/CEO of Lemurian Labs
Dr. Sherry Marcus, Generative AI Leadership at Amazon Web Services
The prevailing attitude at the conference was one of exuberant uncertainty. As Ethan Mollick, the Wharton professor and author of Co-Intelligence, summed it up: “No one knows anything” (quite yet). Still, Mollick’s agnosticism didn’t stop the summit’s lineup of domain experts from doling out hot takes and predictions. The topics ranged from global policy conundrums to Spotify’s AI DJ to the importance of a favorable “Wow to WTF ratio,” captivating an audience of investors, operators, and executives eager to read the tea leaves on all things automation.
While the spread remains wide on questions like the advent of Artificial General Intelligence, or AGI (a messianic entity regarded as either inevitable or impossible, depending on whom you ask), consensus has begun to emerge on several more practical, immediate concerns. Below are five of our key takeaways.
1. Bottom-Up Beats Top-Down
McKinsey polled executives from a pool of companies (including several in the Fortune 500) about their conversion rates for AI pilots, and found that fewer than 5 percent of pilots had proceeded to signed contracts. On the bright side, this number has nowhere to go but up (and some panelists estimated it was significantly higher). Less optimistically, it demonstrates the extent to which many of today’s AI applications continue to overpromise and underdeliver — at least from the vantage point of hype-wary business leaders. CXOs face an uphill battle in attempting to break down data silos and wrap their heads around all the complications and knock-on effects of AI implementations. Proving out ROI isn’t easy, and countless hoops must be cleared before anyone signs on the dotted line. AI tools therefore often have a better shot at sticking when they enter from the ground floor.
For most users, AI tools should work like magic. They should be as inexplicable as they are impactful. The end user does not need to know why AI works; she simply needs to see that it does work before gleefully incorporating it into her workflow. AI tools will be most successful when they flow upstream from resourceful contributors rather than downstream from torturous RFPs or abstract executive mandates.
2. Voice is the UI of the Future
It’s hard to imagine MacBook keyboards as antiques, but our descendants may soon regard them with the same bemusement we direct towards floppy disks and rotary phones. The shift to voice-first interactions, underwritten by LLMs, will result in more intuitive, frictionless, and responsive interfaces befitting the era of “intelligence on demand.” Crucially, voice AI will increasingly be able to operate without first converting voice to text, further driving down latency and making “querying” feel more like good old-fashioned conversation.
If you’ve ever felt self-conscious about your low WPM scores on typing tests, this is your time to rejoice. QWERTY’s reign may be nearing its end.
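For the technically curious, here is a minimal, purely illustrative sketch of why skipping the text intermediate matters. Every function name and latency below is a hypothetical placeholder rather than a real API; the point is simply that a cascaded pipeline stacks three round trips where a speech-to-speech model takes one.

```python
import time

# Hypothetical stand-ins for real models; all names and latencies are illustrative only.
def transcribe(audio: bytes) -> str:        # speech-to-text hop
    time.sleep(0.3)
    return "turn off the hallway lights"

def generate_reply(text: str) -> str:       # LLM hop
    time.sleep(0.5)
    return "Done. The hallway lights are off."

def synthesize(text: str) -> bytes:         # text-to-speech hop
    time.sleep(0.3)
    return b"<reply audio>"

def speech_to_speech(audio: bytes) -> bytes:
    # A single model that maps audio straight to audio, with no text in between.
    time.sleep(0.6)
    return b"<reply audio>"

def cascaded(audio: bytes) -> bytes:
    # Voice -> text -> text -> voice: three hops, three sources of latency.
    return synthesize(generate_reply(transcribe(audio)))

if __name__ == "__main__":
    utterance = b"<user audio>"
    for name, pipeline in [("cascaded", cascaded), ("speech-to-speech", speech_to_speech)]:
        start = time.perf_counter()
        pipeline(utterance)
        print(f"{name}: {time.perf_counter() - start:.2f}s")
```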

3. Infrastructure Poses Significant Bottlenecks
The three core pillars of enterprise AI infrastructure are compute, training data, and human talent. Among these, human talent seems poised to create the most significant long-term challenges for enterprises. Not only is there a scarcity of experienced AI professionals, but as companies race to build and deploy AI-driven solutions, the competition for top talent has reached a boiling point. Attracting and retaining AI experts is a strategic imperative for many enterprises, driving salaries into the seven figures and requiring substantial investments in employee development and retention programs.
Compute costs, while widely expected to decrease, may not fall fast enough to keep pace with increasing demand. This phenomenon, often described as the Jevons Paradox, means that as compute gets cheaper, enterprises tend to consume more of it overall, resulting in little to no reduction in total spend.
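A quick back-of-the-envelope illustration of that dynamic (all figures below are hypothetical, chosen only to show the shape of the math):

```python
# Illustrative arithmetic only: prices and usage volumes are made-up numbers.
price_per_million_tokens = {"year_1": 10.00, "year_2": 2.50}   # a 75% price drop
tokens_consumed_millions = {"year_1": 100, "year_2": 550}      # demand grows 5.5x

for year in ("year_1", "year_2"):
    spend = price_per_million_tokens[year] * tokens_consumed_millions[year]
    print(f"{year}: ${spend:,.0f} total compute spend")

# year_1: $1,000 total compute spend
# year_2: $1,375 total compute spend  <- unit costs fell, yet the bill went up
```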
As the cost-benefit calculus shifts, high-quality training data emerges as a critical near-term differentiator, while talent scarcity remains the toughest, and likely most durable, bottleneck for scaling enterprise AI.
4. AI Co-founders Are the Real Deal
There has never been a better time to be a non-technical entrepreneur. LLMs are increasingly able to plug key skill and knowledge gaps, and while they’ve yet to render first-time CTOs obsolete, they can significantly expedite productization. Yet, somewhat paradoxically, it’s founders who do have technical chops who stand to gain the most from LLMs. As we noted in a previous blog post, AI allows builders already inclined towards structured thinking to iterate on their ideas at a blistering pace. We’re increasingly seeing brilliant new products spun up in a matter of days, not weeks.
To offer just one example: in the span of a single weekend, a pair of MIT students managed to build an interior design app that auditions furniture in empty or unfinished rooms, responds to specific stylistic requests, and lets users order the featured items from Amazon with the click of a button. (And yes, we know what you’re thinking: does it integrate with West Elm?)
5. “Dogfooding” and “Shadow AI” Drive Innovation
“Dogfooding”—or using one’s own AI products internally before releasing them to the public—has become essential for companies looking to refine and validate their solutions. Mammoth tech companies like Google and Microsoft rely on internal testing to catch potential issues early and crowdsource feedback, but increasingly, smaller startups are following suit. This hands-on approach allows teams to discover limitations and edge cases they might otherwise miss, making their AI products more robust and user-friendly at launch.
In tandem, “Shadow AI” is emerging as a parallel trend. As you might have guessed, shadow AI refers to AI tools that employees or teams adopt independently, without IT oversight, circumventing official procurement or vetting processes. While shadow AI can boost productivity by enabling faster experimentation and personalization, it can also introduce risks, especially around data security and compliance. Enterprises must now strike a balance between fostering innovation and maintaining governance over AI usage, as shadow AI becomes an integral (if sometimes under-the-radar) part of the AI adoption landscape.
Questions? Comments? Counterarguments? Chime in below.