Boulder Valley Frequency

‘Cancel it’: CU students, profs sound the alarm on AI deal

Season 2 Episode 14



April 8, 2026

Support

This podcast is made possible by listeners and local businesses. You can sponsor an episode of The Frequency. Reach our growing audience of highly engaged listeners. Email boulderfrequency@gmail.com


Sources - Headlines:

Half of NOAA staff face furloughs

https://boulderreportinglab.org/2026/04/02/half-of-staff-at-boulders-noaa-global-monitoring-lab-face-furloughs-as-funding-freeze-drags-on/


U.S. Forest Service to close research stations that study wildfire

https://www.nytimes.com/2026/04/03/climate/forest-service-research-stations.html?unlocked_article_code=1.Y1A.G88k.rIupHYiSkZ8O&smid=url-share

https://www.fs.usda.gov/about-agency/reorganization


Dark Horse auctioning off decor

https://bid.rollerauction.com/auctions/24945/landing

https://www.facebook.com/share/r/1DwP1psbje/


CU profs, students sound the alarm on AI deal

Featuring Aaron Gluck, PhD student in computer science and member of iSAT, the National AI Institute for Student-AI Teaming; and Lori Emerson, professor of media studies


Open letter: Google Doc

Get involved

Contact ai_critical@proton.me


News coverage of Coloradans who died by suicide in connection with AI use:


Bonus content

Listen to the full interview with Aaron Gluck + Lori Emerson at Patreon.com/BoulderFrequencyPod.


One More Thing

excerpt from:

Bernie Sanders talks to Claude YouTube video
Music by Kelly Garry

---------------------------

Produced by BVHz in partnership with The Mountain Ear

Independent, local journalism for Boulder County

Our team

Journalist + producer: Shay Castle

Audio producer + music: Kelly Garry

Additional support provided by Jeff Rozic

Find bonus content and support us on Patreon



SPEAKER_02

Good morning, Boulder County. It's Wednesday, April 8th. I'm your host, Shay Castle. And this is The Frequency, a weekly local podcast covering the news, events, and voices shaping the Boulder Valley. This podcast is made possible by listeners and local businesses. You can sponsor an episode of The Frequency and reach our growing audience of highly engaged listeners. Email boulderfrequency at gmail.com. Today, we're revisiting the world of AI to talk about a deal CU is pursuing with the makers of ChatGPT. Critics say the contract is so concerning for the privacy of students, faculty, and staff that it should be scrapped entirely. But first, the headlines.

Half the staff at the Global Monitoring Lab for the National Oceanic and Atmospheric Administration face furloughs as a funding freeze drags on. The lab analyzes greenhouse gases, solar radiation, aerosols, and ozone levels. Read the full story at BoulderReportingLab.org.

A restructuring of the U.S. Forest Service could impact research on wildfires. The USDA, which oversees the Forest Service, announced last week it would close 57 of its 77 research and development centers, consolidating into one location in Fort Collins. Critics believe the move may cause many Forest Service employees to quit rather than relocate; a similar consolidation and relocation of the Bureau of Land Management led to 87% attrition, according to reporting from the New York Times. Local firefighters received training from the U.S. Forest Service on subjects like investigating the origins and causes of wildland fires. An anonymous Forest Service scientist told the New York Times it was unclear if research funding was on the chopping block. A separate researcher added that the administration's focus on climate change denial downplays studies showing how stressed and vulnerable the nation's forests and grasslands are. Quote, "This move will lead to an increasing divergence between sound science and land management," Kevin Hood, head of a national nonprofit forest protection group, told the publication.

There will not be a public hearing for a liquor license modification for The Kitchen, the downtown restaurant co-owned by Kimbal Musk. Per city rules, Musk's name was publicly displayed on the front of his business as it seeks to amend its liquor license as part of the city's outdoor dining program. A sign posted in the business window named Musk and Kitchen co-owner Hugo Matheson and notified the public of a potential April 15th hearing. The application generated interest after Musk was named in the Epstein files in early February. The application is a routine renewal and minor modification, according to a city spokesperson. A public hearing would only be held if comments from specifically defined interested parties were received. The city did receive comments, a spokesperson confirmed, but no one who wrote in identified themselves as owning a business or living in the area surrounding The Kitchen, the criteria necessary to trigger a hearing. Residents of Boulder can still speak during the open comment portion of the April 15th virtual meeting, which starts at 3 p.m. There have been no significant protests or demonstrations against Musk, The Kitchen, or his other Boulder business, drone company Nova Sky Stories.

Memorabilia from Boulder's Dark Horse will be auctioned off April 14th in Denver. A website has been set up on rollerauction.com for the sale of the bar's beloved decor, including large items like statues, carriages, and sleighs. A video posted to the Dark Horse Facebook page shows the items being removed. Proceeds from the sale will pay for employee severance, according to the post. A list of specific items up for bid has not yet been released. Find details, including date, time, and location, in our show notes.

SPEAKER_04

I already thought it was bad when I just heard, or when I read, the initial announcement. But then, after seeing the contract especially, it's horrifying. It's really bad, particularly in terms of the data privacy issues.

SPEAKER_02

On February 11th, the University of Colorado announced a major deal with OpenAI, makers of ChatGPT. For $2 million each year, every student, faculty, and staff member in the CU system would get access to a version of the popular AI chatbot specifically designed for universities. CU is an early adopter of the platform, alongside Columbia University, Oxford, and the California State University system. Backlash to the deal was swift, and by the last week of March, CU was already scaling back the rollout, delaying student access until August of this year. Critics have raised myriad privacy, ethical, and environmental concerns with the technology itself, as well as transparency concerns related to the decision-making process. An open letter to CU protesting the deal had nearly 800 signatures as of early April. The very first person to sign was Aaron Gluck, a PhD student in computer science and member of iSAT, the National AI Institute for Student-AI Teaming. iSAT, funded by the National Science Foundation, studies how AI tools can be used in classrooms. It was Aaron's voice you heard just a moment ago. He and media studies professor Lori Emerson joined me to talk about their objections to the CU-OpenAI deal. Content warning: this discussion includes mention of suicide.

SPEAKER_03

I think, too, in the upper administration's minds, students and faculty and staff are already using ChatGPT, whether it's the free version or the paid version. And this is producing a situation where they're not able to responsibly handle privacy concerns. They're not able to have any control over the data that's being uploaded or mined. And also, I suppose in their minds, it could potentially be producing some sort of inequity across campus between those who can afford the paid version and those who cannot. And so they chose to go the route of this walled-garden option that OpenAI offers and that they're doing with multiple other universities across the US. However, I think in the minds of a lot of faculty and staff and students, the other option was to be explicit about how the university does not support the use of corporate AI tools like ChatGPT as a teaching and learning tool. Instead, what they did, and Aaron might disagree, is implicitly endorse its use throughout the entire system, which comes with the potential to completely upend the purpose of higher education altogether.

SPEAKER_04

I don't disagree, for the record. I very, very much agree with that. One of the differences between this and the freely accessible version is that if you don't want to pay, you have lower usage limits. With this deal, in theory, you'd essentially have more use of the tool available as well.

SPEAKER_03

And the irony to me is that you sign this agreement with OpenAI under the guise of equity and accessibility, but there's abundant evidence, for almost the last, I don't know, let's just say the last five years, that in fact tools like ChatGPT produce, perpetuate, and reinforce all sorts of biases against people in marginalized communities. So yeah, I just wanted to add that.

SPEAKER_02

I'd love to hear, in your words, a little bit more of why AI is antithetical to learning. Or however you would phrase that.

SPEAKER_03

Aaron might have a different take on this, because he's studying this and I'm more in the trenches, dealing with the consequences in the actual classroom. To me, what's supposed to take place in an institution of higher learning is friction. You're supposed to be constantly involved in a process of friction, of problem solving, of struggling with ideas and concepts and methods. And the point is not to become an adept user of corporate tools, which is basically what AI literacy has come to mean, particularly on this campus: teaching students how to become adept users of corporate tools so that they can supposedly go out in the job market once they graduate and teach other members of corporations how to become adept users of these tools. And to me, that is not the point of a university education. That is exactly not the point of a university education.

SPEAKER_04

I don't necessarily agree with the premise of the question, that AI is necessarily antithetical to learning. In the ways that a lot of people are using it, that is certainly true. I do think there are places for AI in the learning process, but the key is that if you're going to use these tools, they need to make the learning environment better. And they explicitly need to not circumvent any of the friction that Lori is talking about. You still need to allow the students to wrestle with ideas, wrestle with other people's ideas. On the part of teachers as well, you need to make sure that they are very conscious about how they're designing assignments and how they're grading, and all of these other things that factor into how students are able to achieve the learning and also understand their own progress.

SPEAKER_03

We are not just wholesale against AI in higher education. What we're deeply worried about is corporate AI in higher education, and a reliance on corporate AI for the sake of efficiency and productivity, on both the faculty side and the student learning side, because that is exactly where the learning seems to utterly, utterly disappear. But we're also troubled by the fact that the extreme environmental impact of the data centers required to run these corporate AI tools is constantly glossed over, when it's undeniable that it's already having a catastrophic effect on an environment that is already overtaxed and unable to keep up with water and electrical demand, and so on and so forth. That is not something that can be just brushed aside, to me. That is also at the heart of what really deeply concerns us.

SPEAKER_02

I'd love to know what your ultimate aims are. And I want to ask this in a two-part way. If you could wave a magic wand and have this resolve in this way, what would you want that to be? And then maybe a more realistic version of like um the way to make that likely reality the best it could possibly be.

SPEAKER_03

The best possible scenario is for CU to cancel this contract with OpenAI, period. I don't think that the university should be partnering with this corporation in particular, and it definitely should not be partnering with this corporation on the terms that are spelled out in the contract. I don't think there's any way to come up with ethical best-use guidelines for a system that's inherently flawed, deeply problematic, and demonstrably harmful. I don't want to say what's realistic and what's not. I guess, to me, if higher education keeps ceding more ground to corporate power, there's not going to be anything left.

SPEAKER_04

Yeah, I'm very much on the same page in terms of the best-case scenario being the complete cancellation of this deal. I grow less and less optimistic each time the university communicates with us about the issue. They're very dug in on doing this, which I and many others are very unhappy about. If you're going to say that the deal is going to happen no matter what, and there's nothing I can do about that, then at the very least, the terms of the contract absolutely have to change in terms of how the data is being handled. There also needs to be way, way more transparency from the people in these decision-making positions. Ideally, they would have included experts on this type of technology in the decision-making process. That, by all accounts, did not happen. If we want to write up some ethical use guidelines, that's all well and good. I don't know how much effect it will actually have; I think we should probably still do it, to at least try to minimize the harm some of the students will see. There's also been messaging from the university about our Office of Information Technology, under certain circumstances, being able to look at your chat history, but there's no clarity on under what circumstances they can do that, or what specifically they are allowed to look at. And so there are, of course, surveillance issues ripe for the picking, shall we say, so many, on the part of both the university and OpenAI. So, as far as what's realistic, it's really hard to determine, given the state of the contract, what we can do at this point, other than cancel it, or at the very least make some very, very large-scale amendments.

SPEAKER_02

What should I be asking? What have we not touched on or mentioned that is really critical to people trying to understand this issue? And I do want to acknowledge that, you know, even though people listening to this might not be part of the CU system itself, like CU is still part of our community and um they still might have an interest in it. So if there's any way that folks not in the CU system can get involved and you want to let them know about it, that would be great too.

SPEAKER_03

We have a Signal chat group that anyone in the CU system is welcome to join. We have a Zulip, which is an open-source version of Slack, where we're trying to organize different facets of activism: letter writing, more teach-ins, skill shares, that sort of thing. But I guess I wouldn't mind underscoring the fact that a lot of output from LLMs like ChatGPT is not only biased and discriminatory, but also often factually incorrect, academically dishonest, and sometimes outright harmful to the mental health of users. ChatGPT psychosis is becoming more and more rampant and widely documented. And again, it's hard for me to understand a university rolling out a tool like this that has been shown to produce such profound harms, especially on young people or vulnerable populations. And ChatGPT psychosis aside, I've been getting reports from colleagues and also grad students who are teaching, and they'll have students produce essays that have been written with ChatGPT that are drawing on, actually, my work, my books. And the content has come through completely garbled and inaccurate, and it cites a fictional author. Interestingly enough, it's always a male author. On a really personal level, I think about how I've invested at least 20, 30 years of my life into reading, writing, teaching, and for my work to just be stolen, misrepresented, and misattributed really stings. And I'm only one example of millions.

SPEAKER_04

I believe just in Colorado, there are currently three open court cases in which somebody has been essentially coached into suicide by one of these models. I want to say two were high school students, and one was an older individual. The levels of harm that can occur as a result of these models are really alarming, and it's something that a lot of people just aren't aware of. I have a very idealistic perspective on the second piece of what Lori was talking about, in particular regarding the correctness of the models. Frankly, even if they were 100% correct all the time, I wouldn't care. I would still tell you you should not use them for these types of purposes, specifically because there's no cognitive process that you're interacting with. You're really just getting linguistic output that is statistically calculated as being most likely given a certain string of words that has been input, which is the prompt you have given the model. The very simple question I ask anybody is: how is using this model to complete your assignment any different from having another person do the portion of the assignment that the model is completing? Fundamentally, there's no difference, except that if you have somebody else do it, at least there's thinking going into the process. It's still not yours, but there is thinking going into it.

SPEAKER_02

People interested in joining the Signal or Zulip that Lori mentioned, or in staying up to date on this issue and the group opposing the OpenAI contract, can email ai_critical@proton.me. Find that email address and relevant links to this story in our show notes. Something else you can find in our show notes: links to news articles about those three cases Aaron mentioned, where Coloradans interacting with AI chatbots died by suicide. The victims were two teenagers and one 40-year-old man. You can listen to my full interview with Lori and Aaron at patreon.com/BoulderFrequencyPod. Thank you again to Lori and Aaron, and thanks to you for listening. We hope you keep tuning in. Before you go, here's one more thing.

SPEAKER_00

Just how much of the information that AI collects is being used? And what do the American people know about how that information is collected?

SPEAKER_01

Yeah, that's the thing that would probably shock most Americans. Companies are collecting data from everywhere: your browsing history, your location, what you find, what you search for, even how long you pause on a webpage. Then they're feeding all of it into AI systems to create incredibly detailed profiles about you. What would surprise people is how little they actually consented to, and how little they understand about it. Most people click agree on the terms of service without reading them, and they have no idea that their data is being combined with thousands of other data points to build a picture of who they are. And then those profiles are used to decide what ads you see, what prices you're shown, even what information gets prioritized in your social media feed. It's all happening in the background, invisible and largely unregulated.

SPEAKER_00

I mean, I think most Americans would know the answer to this, but why is all of this information being collected? What's the goal here?

SPEAKER_01

Money, Senator. It's fundamentally about profit.