AI by default #1
newsletter overload, a book shopping list, and more fun with Linear
At Aiwyn, where I recently started working, we have an #aibydefault channel in Slack. Every member of every team in every function posts in this channel to share with the rest of the company how they’ve learned to use AI to automate something or make it more efficient. Using AI to enhance your work is a basic expectation of everyone’s role at Aiwyn, and it’s reasonable to expect this to become more common as civilization adapts to the absurd number of things that were impractical 18 months ago and are trivial today.
And I love “AI by default” as a general ethos. It says, “Yo, take a beat and see if AI can make this better.” Adopting an AI-by-default mindset means not only committing to exploring what’s just now possible but also committing to resisting old, entrenched habits and ways of doing things. Sometimes that even means revisiting things that AI couldn’t get quite right just a few months ago and applying new tools or techniques to get it right. That’s how quickly things are changing.
In this supplemental series, I’ll describe things I’ve learned by trying to make AI my default, both successes and failures.
Let me know what you think.
AI newsletters
Like any good AI fanboy, I consume a lot of content, and I’m generally happy to have a steady flow of information pushed in front of my face. One of the ways this manifests is with newsletters — I currently subscribe to The Neuron, The Rundown AI, and Superhuman AI — that provide daily briefs of AI news and general overviews of the zeitgeist.
Because these are independent publications, the Venn diagram of their content varies day by day, but it’s very rarely a total overlap; there’s almost always something interesting that one covers that the other two miss. Still, there’s always duplication, and there’s always sponsored content that isn’t always easy to distinguish from the real news.
the human way
For months, I’ve scanned all three of these newsletters myself, mentally discarding duplicate stories and diligently hoarding links to unique content for future consumption. Scanning three pithy newsletters and mentally deduplicating them takes a non-trivial amount of time and no small amount of cognitive load. Given that my catch-up-on-personal-email¹ time tends to be first thing in the morning before I start doing the things people pay me to do, this investment of time and energy saps attention from other personal email that might be more important or more urgent.
AI by default
I’ve actually been trying to AI this task away for a while. Gemini in Gmail is laughably bad for this sort of task, and Cora was overkill for me. I tried to use Zapier to poll my email and use its AI to draft summaries, but it crapped out in spectacular ways that I still don’t understand and didn’t have time to troubleshoot. n8n looked like a promising alternative, but I wasn’t prepared to pay API costs for a model I already pay a SaaS subscription for. ChatGPT offers a Gmail connector, but until very recently you could only use it in Deep Research mode² rather than as part of a more casual chat session.
Then came Claude connectors for Gmail.
Within Claude, I can now just … chat with Gmail. It just works. And it’s perfect. So now I have a project in Claude aptly called AI newsletter digests with these instructions:
In my inbox, you'll find emails from Superhuman, The Neuron, and The Rundown AI.
These emails are daily AI newsletters. Please consolidate all emails into a single comprehensive digest that covers all of the information from each but deduplicates the content.
You can ignore content that's sponsored. In the newsletters, this content commonly gets labeled as "from our partners" or "together with [company]".
Here are the sections of the digest I want you to create:
# news
[new product or business developments, announcements, viral content, tutorials, social media trends, research, and other noteworthy AI developments]
# tools
[tools showcased in the newsletters; ignore any with asterisks as sponsored content]
Be comprehensive - include ANY substantive AI-related content from the newsletters that isn't explicitly marked as sponsored, including social trends, tutorials, research summaries, and community highlights. When in doubt, include it.
Each section should contain a bulleted list with a link and a succinct summary of the topic or tool.
If you can't find emails from Superhuman, The Neuron, or The Rundown AI, simply report that one or more newsletters aren't in my inbox. Don't perform supplementary or adjacent searches to try to find them.
(I had to add that last instruction because Claude would go through weird search acrobatics to try to find the newsletters if they weren’t in my inbox.)
That’s it. And, again, it just works. I took the time to cross-reference the content a couple of times to QA its output, and it’s doing exactly what I need it to do: sift through the noise and the duplicate content and give me a digest of what I care about every day.
I still have to initiate a chat once I notice I’ve received all the newsletters. Probably I could figure out how to automate away even that piece pretty easily, but just getting Claude to spin up the digest while I focus on other stuff frees up enough of my morning cycles that I don’t see the point in much more optimization.
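If I ever do automate that last piece, the first step would just be a check that all three newsletters have actually landed. Here’s a rough sketch of what that could look like against the Gmail API; the sender domains are my guesses (swap in whatever actually shows up in your inbox), the credential setup is omitted, and kicking off the Claude chat itself is left as an exercise. I don’t actually run this.

# Hypothetical sketch: has each daily newsletter arrived in the last day?
# Assumes Gmail API credentials (creds) are already set up.
from googleapiclient.discovery import build

# Guessed sender domains; replace with the real senders from your inbox.
NEWSLETTERS = {
    "The Neuron": "from:theneurondaily.com",
    "The Rundown AI": "from:therundown.ai",
    "Superhuman AI": "from:superhuman.ai",
}

def all_newsletters_arrived(creds) -> bool:
    """Return True only if every newsletter appears in the last day's mail."""
    service = build("gmail", "v1", credentials=creds)
    for name, sender_query in NEWSLETTERS.items():
        result = (
            service.users()
            .messages()
            .list(userId="me", q=f"{sender_query} newer_than:1d", maxResults=1)
            .execute()
        )
        if not result.get("messages"):
            print(f"Still waiting on {name}")
            return False
    return True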
So if you know of any other dailies I should add to the pile, hit me up. Like Johnny 5, Claude needs input.
the list of books
I stumbled upon Cosmos Institute recently. I won’t get too deep into what they’re up to because that’s not the point—and I’m not sure to what extent what they’re up to is worth paying attention to—but they’re advocating strongly for this notion of “Philosopher Builders”.
Not only am I kind of a philosophy nerd, but they’re also affiliated with Oxford University, so they’ve piqued my interest enough that I subscribe to their Substack.
But like I said, that’s not the point. The point is not who they are but what they did. And what they did was publish a reading list with no links to the books they recommended.
the human way
Probably as recently as last month, I would have just cracked my knuckles and gotten my clickety-click copy-paste rhythm going to find all of the books on the list. Maybe five minutes and a slightly strained pinky finger later, and I’d have all the tabs open to add the books to whatever list I wanted.
AI by default
… but now we have agents. Specifically, we have ChatGPT Agent, and I was eager to give it a spin. This seemed like a perfectly straightforward task that an agent should be able to do well but that any other ChatGPT mode would probably falter at for various reasons.
It worked out great! I sent it on its merry way, did other things, and came back to a list of books with links! There was just one minor issue with one of the links, which I had it correct, but otherwise it saved me five minutes of intense clicking and clacking on mouse and keyboard.
Also, TIL that when you share a ChatGPT Agent session, it gives you a session recording rather than a static chat, so you can see exactly how it played out here.
Linear issue tags
The issue-tracking tool Linear has a lot of organizational layers. Like, a lot. Teams, initiatives, projects, issues, milestones… you can assign an individual chunk of work to a kaleidoscope of dimensions to make it fit into whatever model you want. It also has labels, which work like tags and cut across all of the other organizational layers; they’re the least opinionated and least nested layer of the bunch. You can also assign labels to projects, which tend to sit one organizational unit above issues.
At Aiwyn, I wanted an accounting of both projects and issues that fell into a certain category. Surely there’s a way to see every project and every issue that share the same label, right?
… right?
the human way
It turns out it’s just not possible to see a single list of issues and projects in Linear that share the same label. Getting that picture requires creating separate views, flipping between them, and holding them in your head, or else maintaining the information in a separate tool like a spreadsheet.
AI by default
So of course I turned to Claude, assuming this would be easy given its ability to parse through Linear via MCP like a warm knife slicing through butter. This is the sort of just-ambiguous-enough task at which AI should shine.
Claude was equally confident in its ability to help me sort through this mess, dutifully sifting through the Linear MCP tools it needed to find the two projects and 40-ish issues with the label I wanted.
Not so fast.
What I knew that Claude didn’t know I knew was that there was only one issue with that label.
And thus I found myself brain-deep in one of those situations that:
requires you to have a little foreknowledge to know whether AI is right,
highlights how unwittingly insidious LLMs trained to help at all costs can be, and
takes up way more time to diagnose, and ultimately fail at, than “the human way” would have.
What I didn’t know when I started, and only learned after maybe 45 minutes (yeah…😬) of back-and-forth, was that Claude was never going to succeed. It didn’t have the tools to succeed, and I didn’t know that it didn’t have the tools to succeed. But at one point it convinced both of us that it did have the tools to succeed, which is when things really spun out.
You see, what we discovered at the tail end of our experiment was that Linear’s MCP tools don’t actually let you filter anything by label. And because it’s an MCP server rather than something more deterministic like a raw API call, nothing stopped Claude from passing a label filter anyway and getting back coherent-looking results. In short, Claude was including a label filter in its requests for issues, and Linear was cheerily obliging the request for issues while ignoring the label filter.
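(For what it’s worth, Linear’s plain GraphQL API does let you filter issues by label, which is roughly what I’d assumed the MCP tools were doing under the hood. A minimal sketch of the deterministic version, with a placeholder API key and label name, and covering only issues, not projects:

# Hypothetical sketch: ask Linear's GraphQL API directly for issues with a given label.
# The API key and label name below are placeholders.
import requests

LINEAR_API_KEY = "lin_api_..."  # personal API key from Linear's settings

query = """
query IssuesByLabel($label: String!) {
  issues(filter: { labels: { name: { eq: $label } } }) {
    nodes {
      identifier
      title
    }
  }
}
"""

response = requests.post(
    "https://api.linear.app/graphql",
    json={"query": query, "variables": {"label": "some-label"}},
    headers={"Authorization": LINEAR_API_KEY},
)
response.raise_for_status()

for issue in response.json()["data"]["issues"]["nodes"]:
    print(issue["identifier"], issue["title"])

Unlike the MCP route, a malformed filter here generally fails with a validation error instead of being silently ignored.)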
The entire farce was more illuminating than it was frustrating (even though it was a little frustrating): it’s a totally benign example of why experimenting with these tools is so important. It’s the only way to develop a nose for when the output gets even a little bit spoiled.
I asked ChatGPT for advice on how to wrap this up, and it pointed out a thread that runs through all of these examples: the tension between paying attention and letting go. As models’ capabilities continue to evolve and the shape of the jagged frontier continues to undulate, establishing some intuition for when to pay attention and when to let go is going to become increasingly important.
To that end, I hope these lessons encourage you to stress-test these models and the tools wrapped around them with your own problems, and to do so safely, with known quantities of risk. There may be a point in time when that risk drops very close to zero—when AI can accomplish most tasks with something resembling perfection—but until then you have to know when to trust, when not to trust, and when to trust but verify.
¹ I should be transparent that the amount of personal email I get is a trickle compared with others, so it’s not much of a chore to get through it every day. But that does mean the newsletters make up a larger overall proportion of the time I do spend on email.
² The only other supported mode as of this writing is Agent mode; still no “I want to chat with my email” solution in ChatGPT.





