From left, Maxwell Dean David M. Van Slyke with fireside chat guests Jeanette Moy, commissioner of the New York State Office of General Services, and Jeff Rubin, Syracuse University's chief digital officer (Photos by Chuck Wainwright)
Maxwell Fireside Chat Examines AI’s Role in Government and Higher Education
Artificial intelligence (AI) is reshaping how governments operate, how universities teach and how public institutions make decisions.
That was the central message of a recent fireside chat hosted by the Maxwell School. Dean David M. Van Slyke moderated the conversation, which brought together two leaders working at the forefront of AI adoption: Jeanette Moy, commissioner of the New York State Office of General Services (OGS), and Jeff Rubin, Syracuse University’s senior vice president for digital transformation and chief digital officer.
“The question before us is not whether AI will transform public life,” Van Slyke said. “It’s whether our institutions are ready to lead that transformation thoughtfully, equitably and effectively.”

Personalizing Learning and Expanding Access
Rubin opened the March 26 event with a claim about the stakes for higher education: AI, he said, has the potential to transform how universities teach in ways not seen in 200 years. “The idea of a professor standing in front of a room, lecturing—and students taking notes and then being assessed through projects, papers and exams—that model has not shifted,” he said. “What AI allows you to do is personalize learning.”
Personalization at scale has long been a challenge because no instructor can simultaneously tailor a course to every student’s pace and needs, he said. AI changes that equation.
Rubin shared how Syracuse has deployed more than 30,000 AI licenses across campus to ensure equitable access and data security. Some students had already purchased AI tools on their own, while others could not afford them, he pointed out. Faculty and staff also needed a secure environment for uploading sensitive documents without routing data through commercial platforms.
Rubin also highlighted a less-discussed dimension of the University’s AI work: a private wireless network, built in partnership with JMA Wireless, that supports thermal sensors in academic buildings across campus. The sensors detect occupancy without capturing identifying information, allowing the University to optimize janitorial services, plan building capacity and, eventually, adjust heating and cooling based on actual use patterns.
A Measured Approach to Government AI
Moy noted that the state’s deliberate pace of technology adoption is a necessary safeguard rather than a liability. “I would contend that it’s important that government is risk-averse,” she said. “The information that we hold is really important—Medicaid data, health data, testing information. The importance of that stewardship becomes paramount.”
Her office oversees roughly 30 million square feet of state real estate, manages 1,500 procurement contracts valued at $44 billion and administers a design and construction portfolio of approximately $5.7 billion. Moy described the agency’s AI strategy as a measured approach. It involves first identifying low-risk, high-value applications, then building the data infrastructure to support them, and ensuring legal and operational frameworks are in place before scaling.
Moy said one of OGS’s most tangible AI investments is in procurement search. Agencies and municipalities navigating the state’s contract catalog often struggle to find what they need, undermining the efficiency those contracts are designed to provide. Moy said AI-assisted search is a logical starting point: low risk, no job displacement and an immediate opportunity to test what the technology can do.
The agency is also piloting AI-powered tools that summarize bid documents and contract histories, which are reported to save users up to three hours per day.
Moy noted that backlogs present another opportunity, as they are a universal challenge across the public sector. She explained that while AI could help alleviate some of those challenges, agencies must be cautious; they cannot hand out productivity tools to every worker without first creating the right frameworks.
Jobs, Regulation and What Comes Next
Both speakers addressed audience concerns about AI’s impact on jobs, a topic that has gained urgency in New York following a recent initiative by Governor Kathy Hochul tasked with studying AI’s effects on the labor market.
Rubin cited research suggesting that less than 1% of the 1.2 million layoffs recorded in 2025 were directly attributable to AI, arguing that economic factors and structural business decisions are doing more to reshape the workforce than the technology itself. He expressed confidence that AI will ultimately create more jobs than it displaces, though he acknowledged that every job will change.
“If you don’t know how to incorporate AI into your domain and discipline, you will be at a disadvantage,” he said. “Students need to have the tools and the classes.”
Moy recalled the dot-com era and the transformation of publishing that upended models at institutions like the Brooklyn Public Library, where she once served as chief strategy officer. The fear and exuberance that accompanied those transitions, she said, mirrors what society is experiencing today.
“We want to make sure that we’re thinking about it ethically, that we’re balancing it according to public need,” she said. “And we’re having active conversations about those trade-offs.”
Both panelists returned repeatedly to the theme of transparency in AI systems, government data and institutional communications.
Rubin pointed to Anthropic’s practice of publishing system prompts as a model for responsible AI deployment and noted that Syracuse recently launched an AI-powered course search tool that similarly makes its operating parameters visible. He also raised the challenge of AI-generated media and the difficulty of distinguishing real content from fabricated content online.

An Open and Ongoing Dialogue
The conversation drew questions from the audience.
A first-year Maxwell student and member of the University’s United AI club asked what precedent a recent court ruling, which held social media platforms liable for algorithmic harm to minors, sets for the future of AI regulation, and whether platforms like ChatGPT should face similar oversight.
Rubin was direct: “We made the mistake with social media. These companies should have an obligation to have guardrails.”
Moy pointed to Hochul’s recent policy proposals targeting addictive technology, including requirements for more restrictive default settings on children’s accounts. She acknowledged that government is often a step behind rapid technological change, but argued that intervention becomes necessary when innovation results in public harm.
A second student raised concerns about AI’s potential to enable fraud, including falsified documents and biased algorithms.
“These are very real questions,” Moy said, emphasizing that OGS is working to understand AI’s uses and risks. She argued that the answer isn’t avoiding AI but understanding it well enough to spot its misuse. “If we don’t understand it, we will fall behind.”
Rubin agreed, framing the detection challenge as both technological and philosophical: As AI becomes embedded in everything from autocomplete to document editing, defining what counts as “AI-generated” becomes increasingly difficult. “My gut is almost every piece of content out there will have some AI piece to it, assisting us,” he said. “So, it’s a technology challenge and a societal challenge.”
Van Slyke closed by noting that Maxwell’s role in preparing students for public service has always meant equipping them not just with technical knowledge, but with the ability to navigate the policy, governance and ethical dimensions that accompany it.
“The question is not what will AI do to our institutions,” he said. “It’s what will we choose to do with it.”