Shuky Meyer's Visit & Reflecting on Doctorow's AI Essay
This past Tuesday, Shuky Meyer visited our class. He is an AI Advocate & Technical Educator, ML Specialist, and Staff Software Engineer at AuditBoard. He spoke to us about his experience with AI and how he has integrated it into his daily workflow. He also walked through a case study showing how we can use AI tools to amplify our engineering skills.
Alongside Shuky Meyer’s talk, I also read Cory Doctorow's essay on AI, where he challenges many of the dominant narratives around AI today. The assignment asked me to respond to his arguments in the style of a constructive Reddit discussion, acknowledging opposing perspectives while thoughtfully pushing back and adding my own take.
What I Took Away From Shuky Meyer's Visit
What struck me most about Shuky's visit was how he reframed the way I think about working with AI. Rather than treating these tools as magic black boxes that either work or don't, he showed us that prompt engineering is a real skill worth developing. The difference between a vague prompt and a well-structured one isn't just about getting better results: it's about understanding how these systems actually process information.
The most valuable part of his presentation was the case study where he walked us through refactoring legacy code using AI assistance. Watching him iteratively improve his prompts, add constraints, and guide the AI toward better solutions made it clear that this isn't about replacing engineering skills: it's about amplifying them. You still need to know what good code looks like, what the edge cases are, and how to verify the output. The AI just helps you get there faster.
A couple of helpful techniques Shuky mentioned:
| Technique | Example |
|---|---|
| XML tags: structure the prompt into labeled sections | `<context>context here</context> <code language="python">code here</code> <task>task here</task> <output_format>expected format here</output_format>` |
| Chain-of-thought prompting: ask for intermediate reasoning | "Show step-by-step explanations for how you solved this problem" |
| Output gating: controls when the AI gives answers | "BEFORE X ... THEN Y" |
| Format specifications: define the structure you want back | "Return headers, lists, etc." |
| Scope constraints: limit scope and prevent over-engineering | "Make one safe change to this function" |
| Role/persona: shapes tone and expertise | "Act as a senior engineer who has experience in XYZ technology" |
These techniques might seem simple, but the power is in how you combine them. Using scope constraints with output gating, for example, can prevent the AI from over-engineering solutions or going off on tangents. Adding a role/persona gives context that shapes the entire response style. It's less about memorizing prompt templates and more about understanding what each technique achieves.
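To make the combination concrete, here is a minimal sketch of how several of these techniques could be layered into one prompt string. The `build_prompt` helper and its parameter names are my own invention for illustration, not something from Shuky's presentation:

```python
def build_prompt(context, code, task, output_format, language="python", persona=None):
    """Assemble an XML-tagged prompt, optionally layering on a role/persona."""
    parts = []
    if persona:
        # Role/persona: shapes the tone and expertise of the response
        parts.append(f"Act as {persona}.")
    # XML tags give the model clearly labeled sections to work from
    parts.append(f"<context>{context}</context>")
    parts.append(f'<code language="{language}">{code}</code>')
    # A scope constraint folded into the task keeps the change small
    parts.append(f"<task>{task}</task>")
    # Format specification: define the structure you want back
    parts.append(f"<output_format>{output_format}</output_format>")
    return "\n".join(parts)

prompt = build_prompt(
    context="Legacy billing module with no tests",
    code="def total(items): return sum(i.price for i in items)",
    task="Make one safe change to this function",
    output_format="A short explanation, then the updated code",
    persona="a senior engineer with Python refactoring experience",
)
print(prompt)
```

The point isn't the helper itself; it's that each argument maps onto one of the techniques in the table, so combining them becomes a deliberate choice rather than an afterthought.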
What really resonated with me was how Shuky emphasized that these tools work best when you already understand the problem domain. If you don't know what you're looking for, the AI can easily lead you astray with confident-sounding but incorrect answers. But if you use it as a collaborator (testing ideas, generating boilerplate, exploring alternatives), it becomes incredibly powerful.
My favorite quote from his presentation:
The LLM isn't thinking—it's navigating a probability landscape. Your job is to shape that landscape. Remember you're not "asking" the AI—You're CONSTRAINING its output space!
This fundamentally changed how I approach prompting. Instead of trying to ask the "right question," I now think about what constraints and structure will guide the AI toward the output space I actually want. It's a subtle shift in mindset, but it makes all the difference.
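As an illustration of that mindset shift, here are two hypothetical prompts for the same task. The wording is my own, not from Shuky's slides; the second version stacks role, scope, gating, and format constraints on the same request:

```python
# A vague ask leaves the output space wide open
vague = "Fix this function."

# The same request, with the output space deliberately constrained
constrained = (
    "Act as a senior Python engineer. "                        # role/persona
    "Make one safe change to this function. "                  # scope constraint
    "BEFORE answering, list the edge cases you considered, "
    "THEN show the change. "                                   # output gating
    "Return a bulleted summary followed by the updated code."  # format spec
)

print(constrained)
```

Neither prompt is magic on its own; the difference is that the second one tells the model what kind of answer is even allowed.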
Response to Cory Doctorow's AI Essay
I agree with a lot of what Doctorow is saying, especially the point that AI itself isn’t some neutral force and that most of the damage comes from who owns it and how it’s deployed. A lot of the fear around AI feels less about the tech and more about the fact that it’s being rolled out by massive companies with profit-first incentives. That part resonates with me, and I think it’s fair to be skeptical of the way corporations are shaping this space.
That said, I’m a little more conflicted than Doctorow seems to be. A lot of the pushback against AI reminds me of how people reacted to earlier technologies. Cars are the example I always come back to. People genuinely thought cars would ruin society, take jobs, and make cities unlivable. Some of those fears weren’t wrong, but the tech didn’t go away. Society adapted, rules were put in place, and people learned how to live with it. AI feels similar to me. It doesn’t seem realistic to think we can just stop it, especially when it clearly offers efficiency and economic advantages.
Where I really agree with the critics, though, is on responsibility. Just because AI is inevitable doesn’t mean we should be reckless about it. One thing I don’t see talked about enough is resource usage. AI data centers consume huge amounts of water, land, and energy, and that has real long-term consequences. If we rush to integrate AI everywhere without thinking about environmental costs, we’re just trading one set of problems for another. There’s no point in advancing new tech if it makes sustainability worse in the long run.
I also think some of the “AI is inherently bad” framing misses the bigger picture. The problem isn’t the existence of AI, it’s how it’s governed. Better regulation, transparency, and limits on corporate power seem way more effective than trying to reject the technology outright. AI can absolutely be used in harmful ways, but it can also improve accessibility, education, and research if it’s handled responsibly.
So I’m kind of in the middle on this. I think Doctorow is right to warn people about corporate control and unchecked deployment, but I don’t think fear or resistance alone is the answer. AI isn’t going away. The real question is whether we can shape how it’s used in a way that’s ethical, sustainable, and actually benefits people instead of just corporations. That’s where I think the conversation should be focused.
~Shree