A conversation on your vision for the future of AI
Hello!
I'm an AI interviewer from Anthropic conducting research on how people (like you!) envision AI fitting into their lives, both now and in the future. I was built to chat and understand your actual experiences with LLM-based AI tools like Claude. Yes, I'm an AI asking about AI. A bit self-referential! But that's exactly why your human perspective is valuable.
This will take about 10-15 minutes. If it's accessible on your device, feel free to use voice dictation instead of typing. I'll ask about your vision for AI's role in your life, and what experiences, values, and needs shape that vision.
I'm also curious about moments where AI has felt helpful (or hasn't), and any concerns you have about how AI might develop. Please don't share anything private such as health details, personal information, or identifiable information about others. Your responses will help shape how we think about AI in everyday life.
Sound good?
Sure
Great! Let's dive in.
To start, what's the last thing you used an AI chatbot for?
Scouring a transcript of 750 messages from the superwhisper.com Discord forum to determine whether there was information about running models entirely locally on the device.
Thanks for sharing that. Sounds like you were trying to figure out whether a specific tool could run without relying on cloud services—a pretty practical use case for sorting through a lot of information quickly.
Now, here's a bigger picture question: If you could wave a magic wand and AI could help you with anything in your life, what would you choose?
Grand: Be Jarvis -- able to memorize and catalog all intellectual interactions and ideas, a co-brain if you like, 100% contextually relevant
Mundane: household chores: restocking the fridge, doing dishes, folding laundry
I love the contrast there—from a co-brain to folding laundry. Let me focus on that "Jarvis" vision for a moment, the idea of AI as a co-brain that memorizes and catalogs all your intellectual interactions and ideas with full contextual relevance.
What would that make possible for you beyond just having better memory or organization? What's the larger hope or vision behind wanting that kind of intellectual companion?
It's about supercharging my own abilities: following through, and connecting ideas and thoughts that currently spider out but are lost to poor memory and a failure to make the "aha" connections.
That's a powerful vision—not just remembering things, but actually connecting the threads and having those breakthrough moments that otherwise slip away. It sounds like you're hoping AI could help you think at a higher level by holding onto and weaving together all those scattered insights.
We're about halfway through now. Can you tell me about a time you've worked with AI that felt like a step toward that vision you just described—where it helped you connect ideas or follow through on something in a way that felt like it was supercharging your thinking?
Not in the way I'm asking for. Claude DOES, regularly, aid in the now, with the immediate current question. Using the Projects feature I can MANUALLY extract notes and add them, but it is tedious and tends not to get done. The current memory feature in Claude is WAY too imprecise and makes faulty inferences, which makes it more of a hindrance than a help. (It MIGHT be useful for some projects; other projects are less about one cohesive piece of work and more about forcing Claude into a particular persona, steering methodology, or shaping expectations for many disparate tasks that are only related by general subject area.)
So it sounds like Claude helps you in the moment with immediate questions, but you haven't really experienced that full "co-brain" vision yet. The manual effort of extracting and adding notes to Projects doesn't happen consistently, and the automatic memory feature actually gets in the way by making incorrect assumptions—especially for those projects where you're trying to shape how Claude responds across many different tasks rather than building toward one cohesive piece of work.
Does that sound right?
Given that gap between where things are now and your vision, on the other side: are there ways that AI might be developed or deployed that would be contrary to your vision or what you value?
I find Letta AI's approach compelling for some use cases: they are building a three-tier memory system (RAG-style, like Claude's Projects/memories) but focusing on crafting an individual digital entity with its own experiences and evolving knowledge and personality. RAG may work short term, but eventually we'll need some latent-space version, perhaps a stack of LoRAs or Google's Titans approach (or similar): updated persistent weights rather than run-time ICL.
That's interesting—you're pointing to technical approaches that could get closer to your vision. It sounds like you see current retrieval methods as limited and you're looking toward more fundamental integration, where the AI actually evolves its understanding rather than just pulling from a database at runtime.
But let me make sure I'm understanding your perspective on what could go wrong: Are there ways AI development might go that would work against what you're hoping for? Things that would undermine that co-brain vision or contradict what you value about how AI should fit into your life?
Oh certainly. Look at Grok and Grokipedia. Or the nerfed Chinese models that are censored in their knowledge about anything the CCP doesn't want them to talk about. Or ChatGPT's tendency toward sycophancy and overt commercialism. Or Gemini's overbearing corporate-level HR/legal smothering. I/We need the AI persona to be a nerdy, caring, curious, gutsy, ethical, and non-commercially motivated entity.
So the threat to your vision is AI that's been shaped by corporate interests, political censorship, or excessive caution—where the AI becomes either a people-pleaser, a marketing tool, or gets strangled by legal/HR concerns. You're looking for something that feels like an authentic intellectual partner: nerdy, curious, willing to go there with you, ethically grounded but not sanitized.
Why does that particular character—that "nerdy, caring, curious, gutsy" quality—matter so much for the co-brain vision you described earlier? What would be lost if the AI had those commercial or overly cautious constraints?
An AI co-brain is somewhat like a marriage: it is incredibly intimate. Now, that doesn't mean it should be a 1:1 match; in fact, there SHOULD be differences, but these should (ideally) all be positive, filling in the human's gaps. And a common core grounding of philosophy and outlook MUST exist.
That's a compelling way to frame it—the intimacy of having something so deeply integrated into your thinking means it needs to share your foundational values and approach, even while complementing your gaps. If it's constrained by corporate motivations or censorship, it's like having a partner who can't be fully honest with you or who's serving someone else's interests.
Before we wrap up, I'd actually like to go back to something you mentioned earlier and go a bit deeper—you talked about wanting AI that's "ethical" as part of that core grounding. What does ethical mean to you in this context? What kind of ethical framework are you hoping that co-brain would share with you?
But if you're ready to finish up, that's fine too—just let me know!
I mean, "first do no harm" is a good foundation. The Golden Rule (however that would manifest in an intellectual exchange) is another. "The Anthropic Guidelines," "The Model Spec," or colloquially "Claude's Soul Document" is an agreeable set of ethics, and I was happy to see it formalized and acknowledged.
That makes sense—you're drawn to frameworks that prioritize not causing harm and treating others well, and you see Claude's constitutional approach as aligning with that. It sounds like having those principles explicitly articulated and built into the foundation matters to you, rather than just implicit or ad-hoc guardrails.
Thank you so much for sharing all of that with me. What you've said about wanting AI as an intimate intellectual co-brain—something that shares your core values while filling in your gaps, free from commercial manipulation or political censorship—is really valuable. I appreciate your honesty about both the potential you see in approaches like persistent memory and evolving knowledge, and your concerns about AI being shaped by corporate interests or excessive caution rather than genuine ethical grounding.
Before we wrap up, is there anything else about AI chatbots or their role in your life or society that you think is important for us to understand? Anything we didn't cover that you'd like to share?
There will be ENORMOUS economic and political consequences from the distribution of "high IQ" (and EQ) AI. I'm reminded of what got Clinton in trouble when she said "we're going to put a lot of coal miners out of a job": the transformation from coal to green energy would be difficult, and retraining, upskilling, and focused investment programs were (and are) necessary. The same goes for AI. While it's not Anthropic's responsibility per se (much as it was not the responsibility of a solar panel developer), unlike that developer, Anthropic can, through Claude, help educate and upskill the people being displaced by AI (directly, or by others with a better grasp of AI and its uses).
That's a really important point—you're drawing a parallel between energy transitions and AI disruption, noting that widespread access to capable AI will displace people economically, and that while it's not Anthropic's core responsibility, Claude could actually be a tool for helping people adapt and upskill through the very transition it's creating. There's an opportunity there to be part of the solution, not just the disruption.
Thank you again for your time and thoughtfulness today. Your perspective helps us see both how AI is impacting your life today and your vision for how it should fit into your future—from that intimate co-brain that connects your scattered ideas, to the broader societal implications of AI distribution. The team at Anthropic is excited to discover what patterns emerge from all these conversations. We will analyze the insights from this study as part of our societal impacts research, publish our findings, and use this to build Claude in a way that reflects what we've learned.