INTERVIEW: Tina Austin – How one UCLA professor is transforming education across three disciplines

UC Tech Community Member Profile – Tina Austin, Computational Biology Teaching Faculty, UCLA

A conversation with Tina Austin on ethical AI use, critical thinking, and the future of learning.

When Tina Austin began experimenting with AI in her biomedical research classroom, she had no intention of becoming what some might call an “AI influencer.” What started as simple curiosity about how language models work has evolved into a multidisciplinary teaching approach that’s catching attention across the UC system and beyond.

In this UC Tech Community Member Profile interview, Tina Austin, Computational Biology Teaching Faculty at UCLA, talks about her work with curious students and faculty across UCLA and California State University, and shares a key message about the use of AI in the classroom and beyond.

From One Discipline to Three

Austin’s journey illustrates AI’s potential to break down academic silos. Previously focused solely on biomedical research, she now teaches across three distinct disciplines: (1) computational biology; (2) social sciences and humanities; and (3) communication. In her computational biology classes, students use custom GPTs to debug code and brainstorm solutions. Her work in the social sciences explores AI bias and ethics, where students discovered gender disparities in AI-generated podcasts. And in her graduate communication course on “Critical Thinking in the Age of AI,” students with no coding experience are building AI agents and custom bots.

“There’s no way in the previous days before AI, I could teach across three different disciplines with the limited time that I have,” Austin reflects.

Her experience teaching research deconstruction in biomedical science and leading faculty development workshops has translated into nuanced, critical takes on AI papers, especially those that are overhyped or misinterpreted. She shares these on LinkedIn, where she has earned a following of readers who rely on her to demystify complex AI topics.

Austin is building interdisciplinary AI resources for faculty, and her weekly series, Too Much Hype Tuesdays, has become a space for readers looking to make sense of complex AI topics.

Through her workshops, Austin aims to empower faculty to become leaders in their own disciplines.

The Process Over Product Philosophy

Austin’s approach centers on what she calls “process over product.” Rather than seeking perfect AI-generated outputs, she encourages students to embrace the “messy” thinking process. “I want to see a messy attempt to get to an answer, rather than a perfect answer, which is what AI will spit out,” she explains.

This philosophy extends to her assessment methods. Austin designs evaluations that test knowledge “in the age of AI” rather than simply checking if students can arrive at correct answers. The goal is fostering metacognition and reflection over grade optimization.

Key Principles for Educators

Austin offers several guidelines for educators considering AI integration:

  1. Stay curious but critical. Don’t view AI as either a cheating tool or something to avoid entirely. Instead, experiment thoughtfully and ask: “What problem am I trying to solve, and can AI help meaningfully?”
  2. Use AI as a thought partner. Rather than copying prompts or accepting first answers, engage with AI tools to explore different perspectives and approaches to problems.
  3. Remember: Not everything needs an AI solution. Just because AI can do something doesn’t mean it should. Focus on applications that genuinely add value to learning objectives.
  4. Prioritize privacy and ethics. Don’t share personally identifiable information or student data with AI tools. Follow FERPA guidelines and institutional policies.

Preparing Students for an AI-Driven Future

With employers increasingly expecting AI familiarity, Austin believes students need technological fluency – but with important caveats. “It’s important to have some kind of technological savvy, to be aware about these tools, but not be naive,” she notes. This means understanding different tools’ strengths, weaknesses, privacy concerns, and data ownership issues.

Austin warns against over-reliance, citing Stanford research showing a decline in critical thinking since the rise of generative AI. Her solution? Interactive classroom discussions, live debates, and collaborative exploration of AI tools’ capabilities and limitations.

The Bigger Picture

Austin sees academia as uniquely positioned to address AI’s rapid evolution in education. Unlike medical devices that undergo years of clinical trials, AI tools evolve too quickly for traditional vetting processes. This creates both opportunities and responsibilities for universities to serve as thoughtful evaluators and implementers of AI technology.

Her work has gained recognition beyond UCLA, including selection as an “AI innovator” at the ASU GSV Summit and invitations to speak across the UC system. But for Austin, the real reward comes from watching students engage critically with these powerful new tools.

“The more we have open discussions in the classroom with students, and the more they are involved in those conversations, the better I feel that my class is going,” she says. “We’re all unboxing this together at the same time.”

As AI continues reshaping education, Austin’s approach offers a balanced path forward: embrace the technology’s potential while maintaining focus on the fundamental goal of developing thoughtful, critical thinkers prepared for whatever comes next.


[Editor’s note: Tina Austin spoke at the UC AI Council event “Generative AI and the Art of the Possible: Upskilling for the Future” on May 21, alongside colleagues from UC Irvine and UCLA Extension, and will be speaking at UC Tech on June 26 in Riverside. She has also been selected to speak at UC San Diego Process Palooza, August 6-7. Join us!]