How AI Is Transforming Higher Ed at Cornell
An interview with Ayham Boucher, Head of Cornell's AI Innovation Lab.
Generative AI is reshaping higher education faster than anyone expected, and Ayham Boucher is right in the thick of it. As Head of AI Innovation at Cornell, he’s leading an experimental approach to integrating AI into teaching, research, and administration without losing sight of what makes education human. In this wide-ranging conversation, we get into everything from the future of the essay to whether voice interfaces will ever replace the keyboard, plus the hidden risks of getting too good at working with AI.
Preston DeGarmo: To start, I'd love to hear more about your background and how you got into your role as Head of AI Innovation at Cornell.
Ayham Boucher: Sure. I’ve always been drawn to AI — building with deep neural networks, integrating them into bigger systems to make them smarter. But honestly, it was just one part of what I did for a long time.
When ChatGPT came out, though, it felt like the floodgates opened. Suddenly there was just so much more you could do. Around that time, Cornell’s IT group reached out and asked if I’d be interested in leading some of our GenAI efforts institution-wide. I thought it sounded like a great opportunity to have an impact across all these different departments and colleges.
Along the way, I also started teaching at the Bowers College of Computing and Information Science, and launched the AI Innovation Lab. So there’s a lot going on. And what’s amazing is that at Cornell, AI isn’t just coming from engineering. It's everywhere. You’ve got humanities, law, life sciences, all kinds of people using AI in different ways. A big part of my job is supporting that energy and channeling it into real projects.
PD: It sounds like the Innovation Lab was more of an organic development rather than a top-down mandate.
AB: Exactly. Cornell did set up three big task forces — one each for education, research, and administration — to explore how AI could be used across the university. But there wasn’t a direct mandate to build something like the Lab. That kind of emerged naturally.
I saw that there was just so much interest, so many people experimenting in their own silos. The idea was: why not create a place where students, researchers, and staff could come together, learn fast, and actually build solutions that help the university? We train students with a two-week AI bootcamp, pair them with teams across campus, and then we run projects in these quick sprints where it’s okay to fail fast and pivot. That mindset has really paid off.
PD: Could you share some examples of projects that have come out of the lab?
AB: We’re working on about 24 projects right now, so there’s a lot to choose from. Many of these, like using AI to streamline ticketing systems or knowledge bases, are pretty standard. But there are more interesting ones, too. For example, we worked with the vet school and our medical campus in Qatar to improve the feedback clinical students get during their rotations. We used AI to analyze past feedback and spot where communication could be stronger. It sounds small, but good feedback is critical in clinical training.
Another project supports academic advisors, making it easier for them to get a full view of a student’s background, interests, and challenges before a meeting. Instead of pulling data from five different systems, they get a holistic picture immediately. It makes those conversations much more productive.
We also work on the admin side, helping with things like processing invoices and purchase orders, or with compliance in research and facilities. That’s the unglamorous but hugely important stuff that keeps a big university running.
And then there’s this really cool side where we partner with researchers in fields like humanities or law. These are non-coders who need to work with large language models. We bring the technical expertise, they bring the domain expertise, and together we build something they wouldn’t have been able to do on their own.
PD: That's a great segue to discussing AI's role in the classroom. In light of serious concerns about cheating and reduced literacy skills, how do you see AI integrating responsibly into education at Cornell?
AB: Yeah, it's definitely complicated. We set up an AI advisory council with different committees, and one of them specifically looks at education and methodology.
Academic integrity is one of the biggest concerns. Some faculty love AI tools for things like auto-grading, since it saves a ton of time. Others hate it, because for them, assignments are how they connect with students, figure out where they’re struggling, and coach them. So it’s not one-size-fits-all.
We leave it up to faculty to decide what role AI should play in their classes. We give them templates and resources, but the choice is theirs. The downside is that students end up juggling different rules for different classes, and that can get confusing fast.
Beyond that, there’s the bigger issue that a lot of traditional learning methods (homework, essays, even the timing of deadlines) were optimized over decades. They’re effective. And GenAI just blows those up. Students can get help instantly now, whether it’s encouraged or not. So we’re not just rethinking how we assess students. We’re rethinking how we teach.
But there’s a lot of opportunity too. Techniques like role-playing, simulations, and Socratic dialogue, which are super powerful but used to be hard to scale, suddenly become way more doable with AI. It’s a huge shift.
PD: With AI use being so pervasive among college students, your students are likely experimenting with AI even more than you are. Have your students taught you anything interesting or new about AI usage?
AB: Oh, 100 percent. Every class, we kick off with a 10-minute share session. Students bring new tools they've tried, tips they’ve picked up, whatever’s working for them.
One week someone will be raving about Windsurf, another week it's Cursor, another time it’s GitHub Copilot. Sometimes they bring up a new model or agent that's just dropped, but it’s not just name-dropping tools. We dig into what’s good, what’s buggy, what helps with certain types of tasks.
Honestly, I learn as much from them as they do from me. It keeps the whole class sharp and plugged into how fast things are moving.
PD: On another note, there's been talk of voice interfaces replacing keyboards, especially in AI interactions. What are your thoughts on that?
AB: It's an interesting idea, and it definitely works in some cases. When you’re driving, for instance, voice is amazing. But on campus, or in public spaces? Not so much. Students value privacy. You don’t want to be sitting on a bus or in the library talking out loud to your AI assistant. It’s just easier and more private to type.
Even in my classrooms, students prefer typing when they’re working with AI tools. I think keyboards are going to stick around a lot longer than some people predict.
PD: What are your thoughts on the future of the essay? Traditionally, it’s been a pillar of the college experience — it certainly was for me. Is the essay at risk?
AB: Essays are absolutely at risk, but not in the way people might think. The essay isn’t just a writing exercise. It's a thinking tool. It forces you to structure ideas, build arguments, see gaps in your logic. It's how you learn, not just how you show what you know.
With GenAI, a lot of that struggle can get short-circuited. You can generate a decent essay in seconds. But you lose the experience of wrestling with ideas. So yeah, we need to rethink the role of essays. Maybe it’s about focusing more on process — showing drafts, showing how your thinking evolves — instead of just submitting a polished final product. We have to innovate, but it’s going to take some trial and error.
PD: What about your personal experience with AI? Do you have favorite tools or practices?
AB: At this point, pretty much all my coding work is offloaded to GenAI. I also use it a lot for brainstorming, when I’m trying to come up with new ideas or think through a project from a different angle. I’ll phrase prompts in ways that kind of force the AI to challenge me or come up with ideas I wouldn’t naturally think of. Otherwise, it’s too easy to just get an echo chamber that feeds you back what you already believe.
One thing I’ve been enjoying is doing research with the new OpenAI o3 model. It gives you deeply researched, thoughtful responses without taking forever. Earlier, if you wanted really detailed answers, you had to wait five or ten minutes. Now, it’s more like 40–50 seconds, fast enough that you can stay in the flow and keep a conversation going.
I also use AI to fill in little knowledge gaps. Like the other day, I couldn’t remember exactly how large language models handle tokens — do they use an embedding at the very start or just a lookup table? Without AI, I would’ve had to dig through tons of papers to find that little detail. Instead, it took me 30 seconds to get a straight answer and move on.
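As it happens, the question he mentions has a concrete answer worth showing: in a standard transformer, the input embedding is itself a lookup table, a learned matrix whose rows are indexed by token ID. The sketch below uses made-up toy sizes purely for illustration.

```python
# Toy illustration: a transformer's input embedding is a learned lookup table.
import numpy as np

vocab_size, d_model = 8, 4                               # made-up toy sizes
embedding_table = np.random.randn(vocab_size, d_model)   # learned during training

token_ids = [3, 1, 7]                                    # what the tokenizer emits
token_vectors = embedding_table[token_ids]               # a plain row lookup, nothing more

print(token_vectors.shape)                               # (3, 4): one d_model vector per token
```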
The flip side is, I’m definitely getting rusty. I still know how to code, but if I’m honest, the muscle memory is fading. Now I have to set aside time to practice coding, not for work, but just to keep the skills sharp. Otherwise, you get too dependent.
PD: Lastly, can you talk more about the ethical considerations at Cornell regarding AI implementation?
AB: Ethics is front and center for us. We actually rate every project idea not just on technical feasibility but also on ethical considerations. And it’s usually not a gray area: it’s either a yes or a no.

For example, if someone came to me and said, "Hey, let’s use AI to help with admissions decisions," that’s a hard no. Even if it’s technically possible, it's just not the right place to deploy AI. But if it’s something like processing invoices or helping with research compliance, those are areas where AI makes perfect sense. It's about freeing up human time to do higher-value work, not outsourcing core ethical responsibilities.
RAG systems (retrieval augmented generation) are a good example. Even though new models can ingest huge amounts of information, you don't want to just dump everything into them. The more selective you are with what you feed into the model, the better and more accurate the results are. Good RAG design isn't just technical. It’s ethical too, because it affects how trustworthy your AI outputs are.
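To make that selectivity concrete, here is a minimal Python sketch of the retrieve-then-prompt pattern he is describing. The keyword-overlap scoring and the prompt template are toy stand-ins chosen for illustration, not anything the Lab has said it uses.

```python
# Minimal retrieve-then-prompt sketch: hand the model only the most relevant
# passages instead of dumping the whole knowledge base into the context.
from collections import Counter

def score(query: str, passage: str) -> float:
    """Crude relevance: fraction of query terms that appear in the passage."""
    q_terms = set(query.lower().split())
    p_terms = Counter(passage.lower().split())
    return sum(1 for t in q_terms if t in p_terms) / max(len(q_terms), 1)

def retrieve(query: str, passages: list[str], k: int = 3) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a prompt that restricts the model to the selected context."""
    context = "\n\n".join(passages)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
```

The design point is the retrieve step: only the top-scoring passages ever reach the model, which is what keeps the output grounded and trustworthy.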
And across all of this, we’ve been lucky that our provost really gets it. She comes from an AI research background herself, so she understands that it’s not just about building cool tech. It’s about meeting the technology with the human side: coaching, training, change management. That’s a huge part of making AI work at a place like Cornell.