We run a multi-tenant Rails application with sensitive data and layered authorization. In this post, I walk through how I added the first AI agent tool using RubyLLM, Pundit policies, and our existing Algolia search, without introducing a parallel system or loosening constraints.
I’m a Director of Engineering at Mon Ami, a US-based start-up building a SaaS solution for Aging and Disability Case Workers. Over the last seven years we’ve built a large Ruby on Rails monolith.
It’s a multi-tenant solution handling highly sensitive data. We have multiple layers of access checks, but to simplify the story, we’ll assume they’re all abstracted away into a Pundit policy.
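To give a rough idea of what that abstraction looks like, here is a minimal, illustrative Pundit policy. Our real checks are far more layered; the ClientPolicy name and the organization_id scoping are assumptions made for this example, not our actual schema.

```ruby
# Illustrative only: the real policies have several more layers than this.
# ClientPolicy and the organization_id column are assumptions for the example.
class ClientPolicy < ApplicationPolicy
  # A user may only see clients that belong to their own organization (tenant).
  def show?
    user.organization_id == record.organization_id
  end

  class Scope < Scope
    # Narrow any relation down to the current user's tenant.
    def resolve
      scope.where(organization_id: user.organization_id)
    end
  end
end
```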
While I would not describe us as a group dealing with Big Data problems, we do have a lot of data. Looking up client records, in particular, is just not performant enough with raw database queries, so we built an Algolia index to handle it.
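For the curious, the search side is the standard algoliasearch-rails integration, roughly like the sketch below. The Client model, the indexed attributes, and the per-tenant filter are illustrative assumptions rather than our real setup.

```ruby
# A minimal sketch of an Algolia-indexed model via the algoliasearch-rails gem.
# The Client model, its attributes, and current_org are assumptions for the example.
class Client < ApplicationRecord
  include AlgoliaSearch

  algoliasearch do
    attribute :first_name, :last_name, :email
    # Store the tenant id on each record so searches can be filtered per organization.
    attribute :organization_id
  end
end

# Searching, restricted to the current tenant:
Client.algolia_search("jane doe", filters: "organization_id:#{current_org.id}")
```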
Given all that: the big monolith, complicated data access rules, and the nature of the business we are in, building an AI agent has not yet been a primary concern for us.
Personal note: I (incorrectly) convinced myself over the last few months that there was no low-hanging fruit that would work for our product and business. This is a story of just how wrong I was.
SF Ruby, and the disconnect
I was at SF Ruby in San Francisco a few weeks ago. Most of the tracks were, of course, heavily focused on AI: lots of stories from people building AI into all sorts of products using Ruby and Rails.
They were good talks. But most of them assumed a kind of software I don’t work on — systems without strong boundaries, without multi-tenant concerns, without deeply embedded authorization rules.
I kept thinking: this is interesting, but it doesn’t map cleanly to my world. At Mon Ami, we can’t just release a pilot unless it passes strict data access checks.