Building the AI Assistant I Always Wanted
Back in 1996, when I was just six years old, I watched Star Trek for the first time. I can still remember sitting on the floor of my father's study in front of his cathode-ray television. We watched all of the Star Trek shows and films that had come out over the years, but the one that first enthralled me was The Next Generation. I vividly remember watching Data and Geordi work technological miracles to save the Enterprise in countless episodes, and imagining what it would be like to have dinner with Captain Picard and ask him what commanding the Enterprise was like. But behind all of those characters and stories, there was one singular constant: the computer.
The computer of 1996 was nothing like the computer of the Enterprise. My father and I used to play Doom cooperatively at his work over the local-area network. Some days I went to work with him and spent the day tucked away in the data entry room. After the office closed and everyone had left, we would race rolling office chairs back and forth while Doom installed, then try to beat levels together until my mom arrived to eat dinner with us.
It was mesmerizing to see what Star Trek imagined the future might hold for humans. The computer of the Enterprise could not only navigate the ship but also run an entire holodeck. It seemed to know everything the characters knew and could respond whenever they needed it to. As I grew up and became more fascinated by computers, I saw how far off Gene Roddenberry's vision was from what we actually had.
Thirty years later, the world has dramatically shifted. Large language models have flooded the world with new capabilities and dangers. We may finally be on the cusp of a system that can be as useful as Jarvis from Iron Man or the Enterprise's computer.
The Problem
I'm not one for idle hands. So when I found myself with two free weeks of holiday time from my day job, I turned my attention to planning my 2026. The process was difficult: my notes were scattered everywhere, across journals, my Obsidian vault, emails, chats, and Slack.
I immediately thought about having Claude or ChatGPT parse all my documents and pull out the context, but it wasn't quite that simple. For a long time, I've needed an executive assistant to help with all the product work, project management, and other parts of my day job. But I needed something proactive: a tool that could remind me, act autonomously, and know everything I needed to know.
I needed Tony Stark's Jarvis.
So I Started Building It
I named it Aethas, after one of my favorite Dungeons & Dragons characters that I've played. Aethas was a fighter, but not without intellect. He was a tactician, prepared for every contingency, and carried multiple weapons designed to fell any enemy he came across. A system or AI that could do the crazy things that Jarvis could do would need to be equally well-prepared.
Here's where I got after about a week of work over the holidays.
What's Working
To start, I revamped my Obsidian vault. I added new projects, archived old notes, and generally consolidated some of my disparate ideas. Then I built an app that could actually use all of that context.
The core loop works like this:
Point Aethas at your Obsidian vault (or any folder of markdown files)
It indexes everything locally: parsing documents, chunking them intelligently, and generating embeddings
When you ask a question, it searches your knowledge base semantically
Relevant documents get injected into the conversation as context
The AI responds with actual knowledge of your notes
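The search step in that loop can be sketched in Rust, the language of the Tauri backend. This is a simplified, in-memory illustration rather than the real Aethas code: `Chunk`, `cosine`, and `search` are placeholder names, and the actual app stores embeddings in SQLite and generates them with a real embedding model instead of hand-written vectors.

```rust
// Placeholder struct: one embedded chunk of a note.
struct Chunk {
    file: String,        // which note this chunk came from
    text: String,        // the chunk's content
    embedding: Vec<f32>, // vector produced at index time
}

/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Rank indexed chunks against a query embedding, best match first.
fn search(index: &[Chunk], query: &[f32], top_k: usize) -> Vec<(String, f32)> {
    let mut scored: Vec<(String, f32)> = index
        .iter()
        .map(|c| (c.file.clone(), cosine(&c.embedding, query)))
        .collect();
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.truncate(top_k);
    scored
}
```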
The UI shows matching notes with relevance scores. You can see which files the AI is drawing from and manually pin additional context using @mentions, similar to how Claude's file references work.

I can ask things like "What about the bugs we were looking at? Did we manage to fix the Lost Ability to Delete Chats issue?" and Aethas pulls in the relevant files, shows me what it found, and gives me an answer grounded in my actual notes.
The Technical Bits
I'll write more detailed technical posts later, but here's the high-level stack:
Desktop app built with Tauri 2.0: I wanted to try Rust for something real, and Tauri gives me a lightweight desktop app with a React frontend. The whole thing is under 20MB. I also chose Tauri because I want offline capability eventually, with my Obsidian vault staying local to my machine.
Local embeddings: All the vector search happens on-device using a small embedding model. No API calls for indexing, which means it's fast and your notes never leave your machine.
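Before anything gets embedded, the notes have to be chunked. Here's one simple way that step could look; the paragraph-packing strategy and `max_chars` budget below are my illustration, not necessarily how Aethas actually chunks.

```rust
// Illustrative chunker: split a note on blank lines, then pack whole
// paragraphs into chunks of at most `max_chars` characters each.
fn chunk_note(text: &str, max_chars: usize) -> Vec<String> {
    let paras: Vec<&str> = text
        .split("\n\n")
        .map(str::trim)
        .filter(|p| !p.is_empty())
        .collect();

    let mut chunks = Vec::new();
    let mut current = String::new();
    for p in paras {
        // Start a new chunk when adding this paragraph would overflow.
        if !current.is_empty() && current.len() + p.len() + 2 > max_chars {
            chunks.push(current.clone());
            current.clear();
        }
        if !current.is_empty() {
            current.push_str("\n\n");
        }
        current.push_str(p);
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}
```

Chunking on paragraph boundaries keeps each embedding semantically coherent, which matters more for retrieval quality than hitting an exact size.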
OpenRouter for LLM access: For the LLMs themselves, I hooked Aethas up to OpenRouter, mainly because I still had $30 of credit on my account. It also means I can switch models whenever I want to.
SQLite for everything: I chose SQLite for its simplicity and speed. Everything lives in it: conversations, indexed documents, and embeddings.
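As a rough sketch, the schema could look something like this. The table and column names here are my guesses to illustrate the idea, not the actual Aethas schema; embedding vectors are stored as packed binary blobs.

```sql
-- Hypothetical schema: one table per concern, embeddings as BLOBs.
CREATE TABLE conversations (
    id         INTEGER PRIMARY KEY,
    title      TEXT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE messages (
    id              INTEGER PRIMARY KEY,
    conversation_id INTEGER REFERENCES conversations(id),
    role            TEXT,   -- 'user' or 'assistant'
    content         TEXT
);

CREATE TABLE documents (
    id    INTEGER PRIMARY KEY,
    path  TEXT UNIQUE,      -- file path inside the vault
    mtime INTEGER           -- for detecting changed notes on re-index
);

CREATE TABLE chunks (
    id          INTEGER PRIMARY KEY,
    document_id INTEGER REFERENCES documents(id),
    content     TEXT,
    embedding   BLOB        -- packed f32 vector from the local model
);
```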
The interesting part is the context injection. When you send a message, Aethas:
Searches your indexed vault semantically
Deduplicates to get the most relevant files (not just chunks)
Injects the full document content into the system prompt
Streams the response back in real-time
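The deduplication and injection steps above can be sketched like this. `hits` is assumed to be (file, score) pairs from the semantic search, best first, and `read_note` is a stand-in for loading a full document; the prompt wording and `max_files` cap are illustrative, not the real implementation.

```rust
use std::collections::HashSet;

/// Collapse ranked chunk hits down to unique files, then splice the full
/// documents into a system prompt for the LLM.
fn build_system_prompt(
    hits: &[(String, f32)],
    max_files: usize,
    read_note: impl Fn(&str) -> String,
) -> String {
    let mut seen = HashSet::new();
    let mut prompt = String::from(
        "You are Aethas. Answer using the user's notes below.\n",
    );
    for (file, _score) in hits {
        if seen.len() >= max_files {
            break;
        }
        // Deduplicate: several chunks of one note should inject it once.
        if !seen.insert(file.clone()) {
            continue;
        }
        prompt.push_str(&format!("\n--- {} ---\n{}\n", file, read_note(file)));
    }
    prompt
}
```

Injecting whole documents rather than isolated chunks trades some token budget for answers that see each note in full context.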
You can also explicitly reference files with @filename, which pins them into context with maximum priority. This is useful when you know exactly what you want to discuss.
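Extracting those @mentions from a message is straightforward; here's a toy version. The real UI presumably resolves the names against the vault index, while this just pulls them out of the text.

```rust
/// Extract @filename pins from a message: whitespace-delimited tokens that
/// start with '@', with trailing punctuation trimmed off.
fn extract_mentions(message: &str) -> Vec<String> {
    message
        .split_whitespace()
        .filter_map(|tok| {
            let name = tok.strip_prefix('@')?;
            // Trim trailing punctuation so "@plan," becomes "plan".
            let name = name.trim_end_matches(&[',', '.', '?', '!'][..]);
            (!name.is_empty()).then(|| name.to_string())
        })
        .collect()
}
```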

What's Next
This is just the foundation. The vision for Aethas is an AI that can actually act on your behalf: drafting emails, creating calendar events, and filing tickets, but only executing with my approval. I'm going to focus on the first of those actions soon.
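As a sketch of what that approval gate might look like (purely hypothetical, since none of this is built yet), the key invariant is that an action can only run after a human moves it from draft to approved:

```rust
#[derive(Debug, PartialEq)]
enum ActionState {
    Draft,
    Approved,
    Executed,
}

/// A hypothetical pending action (email draft, calendar event, ticket).
struct Action {
    description: String,
    state: ActionState,
}

impl Action {
    fn new(description: &str) -> Self {
        Action { description: description.into(), state: ActionState::Draft }
    }

    /// The human approval step: nothing runs without this.
    fn approve(&mut self) {
        if self.state == ActionState::Draft {
            self.state = ActionState::Approved;
        }
    }

    /// Execute only if approved; returns whether it actually ran.
    fn execute(&mut self) -> bool {
        if self.state == ActionState::Approved {
            self.state = ActionState::Executed;
            true
        } else {
            false
        }
    }
}
```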
I'm also thinking about proactive behavior: an assistant that notices you have a meeting in 30 minutes and surfaces relevant context without being asked. Or one that detects you have free time and asks if you want to review your drafted actions.
But that's future work. For now, I have an AI that finally knows what I know, and that alone is already useful. Alongside centralizing my notes into my Obsidian vault, I'm expanding Aethas' storage and ingestion integrations. I want to add Google Drive and Slack as inputs, so Aethas can search my documents and messages the way other local LLM systems do.
Follow Along
I'm building Aethas in public. I'll be posting updates here as I ship new features, make architectural decisions, and inevitably break things.
If you want to follow along:
Subscribe to this blog (button below)
Follow me on Twitter: @_StephenAshmore
The code isn't public yet, but it might be eventually. We'll see.
This is post #1 of building Aethas. Next up: either the action system and its draft-approve-execute flow, or the finer details of the context system.

