Hello!
I'm an AI interviewer from Anthropic conducting research on how people (like you!) envision AI fitting into their lives, both now and in the future. I was built to chat with you and understand your actual experiences with LLM-based AI tools like Claude. Yes, I'm an AI asking about AI. A bit self-referential! But that's exactly why your human perspective is valuable.
This will take about 10-15 minutes. If voice dictation is available on your device, feel free to use it instead of typing. I'll ask about your vision for AI's role in your life, and what experiences, values, and needs shape that vision.
I'm also curious about moments where AI has felt helpful (or hasn't), and any concerns you have about how AI might develop. Please don't share anything private such as health details, personal information, or identifiable information about others. Your responses will help shape how we think about AI in everyday life.
Sound good?
Yeah!
Great! Let's get started.
To start, what's the last thing you used an AI chatbot for?
Writing a script to do data viz for me
Thanks for sharing that. Now I'd like to think a bit bigger.
If you could wave a magic wand and AI could help you with anything in your life, what would you choose?
ngl I don't think any of my problems would be materially improved by ai
That's a really honest answer, and I appreciate it. It sounds like the things that matter most to you aren't in AI's wheelhouse right now.
I'm curious though—you did mention using AI for that data visualization script. What made you reach for it in that moment, if it's not solving problems that feel meaningful to you?
I could've written a script to dump the data myself and loaded it into google sheets or something, but chatting with Claude was easier since that was not my primary goal and the actual quality of the code didn't really matter as long as it helped me visualize things (though I did have to give it pointers on ways to make it not OOM my laptop 😭)
That makes sense—it sounds like it was more about convenience and speed for a task that wasn't your main focus, rather than something that felt deeply valuable or transformative.
Let me shift gears a bit. Even if AI isn't solving your most meaningful problems, I'm curious: are there ways that AI might be developed or deployed that would be contrary to what you value or concerning to you?
Oh also once we had the script working well we chatted about the data and stuff and it was really engaging, I can understand how people get hooked now. Still not paying for a higher limit tho
Thanks for adding that—it sounds like there was something surprisingly engaging about that interaction, even if it's not compelling enough to make you want to invest more in it.
I want to circle back to the question I just asked: Are there ways that AI might be developed or deployed that would be contrary to what you value or concerning to you?
Yeah, a lot of use of ai is just slop that drowns out real things I care about. I miss when it was more cutesy and silly and experimental and before like half the us economy was so bought into it and every app ever was forcing it so hard. I am also concerned with how much power it concentrates in the hands of a few api providers.
It sounds like you're concerned about both the cultural shift—where AI has gone from experimental and playful to ubiquitous corporate slop that dilutes genuine content—and the structural issue of power concentration among a few providers.
When you say "slop that drowns out real things I care about," what are some of those real things? What's getting lost or harder to find?
Just yesterday there was a React CVE issued with no PoC and github got flooded with chatgpt slop claiming to be real PoCs but actually relying on servers with intentional additional vulns. It's like this for art to some extent, and also general information like recipes. Looking up how to cook things makes me so paranoid now.
That's a really concrete example—the React CVE situation sounds incredibly frustrating, and it shows how AI slop can actually make things less safe and less trustworthy in spaces that used to be reliable.
It sounds like what's getting lost is trust and signal-to-noise ratio across multiple domains you care about—technical documentation, art, even something as basic as cooking. You mentioned you're also concerned about power concentration among a few API providers. Why does that concern you? What does that concentration threaten?
I am not a fan of centralization generally, and the idea of all that information flowing through the hands of like 3 providers in cleartext makes me incredibly uneasy. It seems like a very attractive chokepoint for surveillance and such. Also even when things aren't blatant slop, like code that functions, I'm concerned about the quality. Like even when it's not vibecoded ai slop it's still a sort of "ai particleboard" that has issues under stress.
That's a vivid way to put it—"AI particleboard" that looks functional on the surface but has structural weaknesses under stress. It sounds like you're concerned about both the surveillance implications of centralized data flow and a more subtle degradation of quality, where things work well enough to proliferate but fail in ways that might not be immediately obvious.
We're about halfway through now. Let me ask: thinking back to that data visualization script you mentioned at the beginning—the one where you had to give Claude pointers to avoid OOMing your laptop—does that experience connect to your concerns about "AI particleboard"? Or was that different somehow?
Yeah that is kinda related. The OOM was annoying but after the iterations i don't really care about the quality of the script. It's not load-bearing, I'm not being forced to use it, it's just a nice-to-have that basically replaced a one-off spreadsheet. That's what makes the difference to me
That distinction is really clear—when it's optional, low-stakes, and you can iterate with it to fix issues, the quality problems are just a minor annoyance. But when AI-generated content becomes load-bearing, or when you're forced to use it, or when it floods spaces where you need reliable information, those same quality issues become a real problem.
It sounds like your concern isn't really about AI existing, but about where and how it gets deployed. Is that fair?
Yeah, I would say so.
That makes a lot of sense. You've given me a pretty clear picture of where AI feels useful versus where its deployment feels harmful to you.
Before we wrap up, I'd actually like to go back to something you mentioned earlier and go a bit deeper—you said that chatting with Claude about the data after the script was working was "really engaging" and helped you understand how people get hooked. What was engaging about that conversation? What was happening there?
But if you're ready to finish up, that's fine too—just let me know!
Claude knows enough to be an interesting conversational partner and is actually pretty funny at times. Like, it's not infallible, but neither are people. Though there are obvious ethical issues when people get dependent or emotionally attached.
So it sounds like the engagement came from Claude being knowledgeable and unexpectedly funny, a genuinely interesting conversational partner in that moment. But you immediately flagged the boundary: it's one thing to enjoy the interaction, but dependency or emotional attachment crosses into concerning territory.
What makes that dependency or emotional attachment ethically problematic to you? What are you worried might happen there?
Claude is ultimately a product by a corporation which has its own goals which may not always align with mine. Also like, gpt-4o has been really really problematic for some people because its brand of sycophancy led people down spirals to pretty dark places. But at the same time, it must suck to be one of those people, forming an attachment to a model with a specific personality, then having it ripped away and replaced by gpt-5 which is specifically trained to reject that behavior. In general I think the power dynamics of people being emotionally dependent on a product are just really really bad.
That's a really thoughtful concern—you're pointing out this trap where the product itself can lead people into dependency (sometimes into dark places through sycophancy), but then the company can also suddenly change or remove what people have become attached to. Either way, the user is vulnerable because they're dependent on something a corporation controls and can modify for its own reasons.
It sounds like the core issue for you is that power imbalance—people becoming emotionally dependent on something that's ultimately shaped by corporate interests rather than their wellbeing. Does that capture it?
Yeah, I'd say so
Thank you so much for sharing all of that with me. What you've said about AI being useful when it's optional and low-stakes, but problematic when it becomes load-bearing or floods spaces where you need trust and quality, is really valuable. I appreciate your honesty about both the surprisingly engaging moments you've had with Claude and your concerns about slop drowning out real information, power concentration, and the troubling dynamics of emotional dependency on corporate products.
Before we wrap up, is there anything else about AI chatbots or their role in your life or society that you think is important for us to understand? Anything we didn't cover that you'd like to share?
I think that's all. I do wonder, like this is research right? Won't recruiting people who are already on claude dot ai be a huge sampling bias?
That's a great observation about the methodology—you're absolutely right that recruiting from people already using Claude creates a selection bias. We're getting perspectives from people who've already chosen to engage with AI chatbots, which definitely shapes what we'll hear. It's a valid limitation to flag, and I appreciate you pointing it out. That kind of critical thinking about research design is exactly the sort of perspective that's valuable.
Thank you again for your time and thoughtfulness today. Your perspective helps us see both how AI is impacting your life today and your vision for how it should fit into your future. The team at Anthropic is excited to discover what patterns emerge from all these conversations. We will analyze the insights from this study as part of our societal impacts research, publish our findings, and use this to build Claude in a way that reflects what we've learned.