Clobsidian, and other winter experiments with Claude Code

04 Jan, 2026 - 4 min read
  • LLMs
  • PKM

I’ve been using Claude Code for personal productivity: personal knowledge management (PKM) and productionizing ideas into presentations, LinkedIn posts, and the like. The main challenge has been making the right context available in plain text, and making the relevant abilities available as CLI scripts. Everything else follows from the strength of the underlying LLM.

I ran four experiments:

  1. Claude + Obsidian = Clobsidian (personal knowledge management),
  2. Presentations from bullet points,
  3. Game rules & printable materials from academic journal papers,
  4. Evals.cz website.

Experiment 1: Clobsidian

The goal here was to integrate the following sources of information into a single, searchable, and queryable knowledge base. Together, these sources cover 90% of my daily personal knowledge work:

  • Obsidian (two vaults, one for my personal PARA setup and one for Claude),
  • Vivaldi workspace-tab summary, along with tab staleness (with an assumption that workspaces roughly correspond to PARA projects/areas of focus),
  • Microsoft To-Do (via Microsoft Graph API),
  • Clay.earth MCP (personal relationships management),
  • Gmail (synced with mbsync and indexed/searched with notmuch),
  • Rosebud for reflective journalling.
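
For reference, the Gmail piece of that pipeline needs little more than a small mbsync configuration plus a notmuch index. This is a minimal sketch, not my actual config: the account name, address, mail path, and `pass` entry are placeholders, and the exact key names vary slightly between isync versions (e.g. `TLSType` vs. the older `SSLType`).

```
# ~/.mbsyncrc — minimal Gmail channel (all names/paths are placeholders)
IMAPAccount gmail
Host imap.gmail.com
User me@example.com
PassCmd "pass show gmail-app-password"
TLSType IMAPS

IMAPStore gmail-remote
Account gmail

MaildirStore gmail-local
Path ~/mail/gmail/
Inbox ~/mail/gmail/Inbox
SubFolders Verbatim

Channel gmail
Far :gmail-remote:
Near :gmail-local:
Patterns *
Create Near
SyncState *
```

After `mbsync gmail && notmuch new`, Claude Code can run deterministic queries like `notmuch search date:yesterday..today` or `notmuch search --format=json tag:unread` from a script.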

I stored the Rosebud and MS To-Do exports in SQLite databases, which Claude Code could then query. The point was to bring all the disparate “exhaust fumes” of my cognition into a single place.
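
As a concrete sketch of what that looks like: the table and column names below are hypothetical (the real schema depends on what the Graph API export contains), but the shape of the deterministic query is the point.

```python
import sqlite3

# Hypothetical schema for a To-Do export; real column names will differ.
conn = sqlite3.connect(":memory:")  # use a file path for a persistent DB
conn.execute("""
    CREATE TABLE tasks (
        id INTEGER PRIMARY KEY,
        title TEXT,
        list_name TEXT,
        status TEXT,
        due_date TEXT
    )
""")
conn.executemany(
    "INSERT INTO tasks (title, list_name, status, due_date) VALUES (?, ?, ?, ?)",
    [
        ("Draft LinkedIn post on evals", "Writing", "open", "2026-01-10"),
        ("Renew domain", "Admin", "done", "2025-12-30"),
    ],
)

# The kind of query a CLI script can expose to Claude Code:
open_tasks = conn.execute(
    "SELECT title FROM tasks WHERE status = 'open' ORDER BY due_date"
).fetchall()
print(open_tasks)  # → [('Draft LinkedIn post on evals',)]
```

Wrapping queries like this in small scripts keeps the retrieval deterministic; the LLM only has to decide *what* to ask, not *how* to fetch it.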

I then created a set of Claude Code skills, augmented with scripts that retrieve the data deterministically. My favorites:

  • Weekly reflection: Reflect on the week’s accomplishments and lessons learned, especially vis-à-vis the yearly goals, and pose some natural follow-up questions to think about.
  • Intention-reality gap: Sometimes I promise myself or others that I’ll do something, and then forget to convert it into an action item; this skill surfaces those dangling promises.
  • Draft doctor: Sometimes a LinkedIn thought gets stuck in the draft stage. This skill helps me brainstorm what I’d need to do to make it publishable.
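
A toy sketch of the intention-reality-gap idea, assuming the data has already been reduced to plain strings. The real skill works over email and journal text and relies on the LLM’s judgment rather than string matching; the naive keyword overlap below just illustrates the shape of the check.

```python
def find_gaps(promises: list[str], todos: list[str]) -> list[str]:
    """Return promises that share no keyword with any to-do item."""
    gaps = []
    for promise in promises:
        # Crude keyword extraction: longer words only, lowercased.
        keywords = {w.lower() for w in promise.split() if len(w) > 4}
        matched = any(
            keywords & {w.lower() for w in todo.split()} for todo in todos
        )
        if not matched:
            gaps.append(promise)
    return gaps

promises = ["I'll send Maria the draft slides", "review the journal club paper"]
todos = ["Review paper for journal club"]
print(find_gaps(promises, todos))  # → ["I'll send Maria the draft slides"]
```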

I plan to iterate on these skills over time and add more as I need them. The most important outcome here is having found a way to provide my mental context to Claude Code that is both reliable and scalable.

Experiment 2: Presentations

I like preparing presentations as a list of bullet points, which I then convert to a presentation. I used to do this manually, with a storyboard and the whole nine yards, which took ages. I was curious if I could short-circuit the process with AI.

  • Native Claude kinda works with its HTML->PPTX conversion, but it’s not great and consumes an absurd number of tokens.
  • Gamma is pretty great, but there’s no direct way to control it using a CLI, except perhaps by using the Chrome extension.
  • The Nano Banana MCP is a fun addition to the toolkit, benefitting from careful prompting of what images I actually want. Most commonly, I’ll still default to Napkin AI for the infographics.

My recent presentations were made with Gamma, sometimes with a happy assist from Napkin AI.

Experiment 3: Game generation

The goal here was to help my partner generate a game for each academic paper that her journal club discusses[1], along with the corresponding printable materials.

  • Claude Code is pretty good at generating game ideas and specific game mechanics in Markdown files, especially when the intent & audience are clearly explained in CLAUDE.md.
  • Canva AI MCP sucks, at least for partial-page generation — it hallucinates a ton of intent even with a very specific prompt. It might be much better for the common intents it presumes but not the ones I actually want.
  • Nano Banana MCP is great, with minor flaws, which Claude can detect and fix. I’m keeping that one in my backpocket.

Here’s a screenshot of an Outcome Bingo game generated by Nano Banana based on the rules of a game produced by Claude Code. “Life Satisfication” is a persistent bug that I find endearing for its Wicked-like verbiage.

A 4x4 Outcome Bingo game generated by Nano Banana based on rules of a game produced by Claude Code.

Experiment 4: Evals.cz website - a v0 vs. Claude Code showdown

This one is a bit of a post-script. I already knew that Claude Code would do great at generating a website, but I was curious how it would fare against a dedicated website builder like v0.dev.

I used Claude and ChatGPT to generate copy for the website, with slight manual tweaks, and then passed that on to v0.dev to populate the website with it. The result was a… fairly standard vibe-coded website.

I then told Claude Code to look at the website and generate something with a “more original” design. It did! I don’t think I’ve seen a brutalist website before.

v0 output on the left, Claude Code output on the right:

See the full evals.cz website here.

Notably, Claude Code ran into problems running the npx create-x scaffolding scripts and had to be stopped from generating the scaffolding by itself. While not a big deal (the scaffold would probably have been fine), there’s no need to take that risk when it can start from a verified clean-slate template.

Tips & tricks

I think I’ve taken away several tricks:

  • Make sure Claude knows when it should launch “itself” with a skill vs. when it should be happy with a deterministic script call. (The difference is e.g. the ability to easily use MCP calls from within the skill.)
  • Using Ruler is kinda annoying: Claude Code in particular is prone to making edits in the (ephemeral) CLAUDE.md rather than the actual source file. (I use it because I sometimes like to jump between Claude Code, Conductor, Codex, and Cursor.)
  • I’ve been a little hands-off, and as a result Claude Code has made a separate uv virtual environment for each skill (and since the notmuch Python bindings are installed in the system Python, this has led to some fun errors). Mostly, though, it’s been able to recover from these errors by itself. I’m not sure more hand-holding would have helped.

Footnotes

  1. The journal club is a group of students who meet to discuss academic papers, typically in a structured format.
