Growing program enrollment to meet the demands of a business can be daunting, yet evaluating the process systematically can unlock unexpected growth opportunities. That's exactly what happened when Nicole Lim, Program Representative at UCLA Extension's Engineering Department, set out to launch a new Systems Engineering course. Rather than relying on conventional outreach methods, she led a cross-functional effort grounded in a systematic evaluation process built around artificial intelligence. By prompting and interacting with ChatGPT in a thoughtful, deliberate manner, the team surfaced insights, validated them, and then reshaped how the program connected with prospective students.
This wasn't a single query or a one-off data-driven decision. It was a process of prompting ChatGPT, interpreting its responses, and testing those ideas collaboratively across a cross-functional team, informed by prior experience and data. Over time, that process led to a breakthrough concept and equally striking results: enrollment didn't just meet expectations, it more than doubled, rising from a projected 100 students to 261.
But the result wasn't what mattered most; it was how they got there, and the fact that the project has changed the way the team uses AI to support its work.
Starting with ChatGPT and Prompting the Right Questions
Lim’s first step was exploring how ChatGPT could help them think differently about marketing and outreach. Rather than plugging in a prompt and taking the first answer, she and her team approached the tool like a thought partner. “I asked it really important questions like, ‘How should we strategize our business development?’” she said. But the insights didn’t come from AI alone — they came from what the team chose to do with it.
Each team member generated prompts independently and brought the results to group discussions. “We would all go write prompts in ChatGPT, show the outcome and analysis, and then we discuss those outcomes within our team and select which ones to adopt,” Lim explained. This iterative back-and-forth became the foundation of their planning — a continuous feedback loop between AI suggestions and human judgment.
Personalized Webinars Over Generic Outreach
One of the first major insights to surface through this process was that general information sessions weren't cutting it. The team had been relying on broad webinars, even though many of their prospective students came from specific companies with their own distinct perspectives and needs.
Drawing on the insights generated through this process, Lim and her team adopted a radically new approach: webinars tailored to prospective students from specific employers. "Each webinar had really specific information tailored for that company, such as the enrollment process and the specific method to get the voucher." They also invited past students from those companies to join the webinar and share their experiences, and they assigned each company a single point of contact for the program, extending the company-specific concept to their entire approach to customer service. "We had one contact per company, and they joined the company-specific webinar to answer questions and give a face to the screening program," she added.
The personalized format helped streamline the decision-making process for participants, eliminating confusion and creating a more relevant, trusted connection.
Building a Collaborative and Ethical Culture Around AI
Throughout the project, collaboration remained central. Lim relied on visual workboards to organize priorities and invited the team to choose which tasks matched their strengths. She also credits the BruinTech community with helping her design the project framework and set a clear timeline for execution.
Responsible AI use was a constant priority. “The main priority is AI literacy,” Lim noted. “Knowing that there could be some errors in the outputs, or knowing that there could be some bias in it that could affect the outcome.” Her team avoided using any student data and referred to companies only in broad terms, ensuring privacy was protected.
AI adoption wasn't immediate, either. Lim shared that her supervisor was initially unsure how ChatGPT could help, until it saved her hours on a repetitive task.
She didn’t see how it could help her workload, until she used it for organizing data in an Excel sheet… Instead of spending two hours on it, she just spent 15 minutes fixing it.
Better Questions, Better Outcomes
Lim emphasized that AI didn’t hand them a roadmap — it simply helped them reflect more deeply on their approach.
ChatGPT didn't give us the final answer… It just helped us ask better questions.
Her story demonstrates that success isn’t found in the tool alone, but in how a team uses it: thoughtfully, collaboratively, and with a clear commitment to improving the experience for every learner involved.
Contact

Project Management Professional
UCLA Extension
Author

Digital Design & Content Intern
UCOP