Getting LLMs to write code that fits neatly into a larger project can be tricky. This week’s ‘Works on My Machine’ delves into a pattern we rely on heavily in the Sublayer gem, one that helps significantly: leveraging polymorphism and loose coupling. The core idea? Treat your code’s structure itself as the prompt for the AI.
What It Does
The Sublayer gem leverages small, self-contained Generators and Actions with clear interfaces. This polymorphic design is something LLMs are surprisingly good at replicating. Given just a few examples, an LLM can generate new variations of these components for whatever you need, and the results are frequently usable without any modification.
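For context, here is a rough sketch of what a Generator in this style looks like. The class, input, and prompt below are hypothetical, but the shape follows the Generator interface: subclass the base class, declare an output adapter, and define a prompt.

```ruby
require "sublayer"

# Hypothetical example for illustration: a Generator that turns a topic
# into a haiku. The only parts that change between Generators are the
# output adapter, the inputs, and the prompt.
class HaikuFromTopicGenerator < Sublayer::Generators::Base
  # Declare the shape of the output we expect back from the LLM
  llm_output_adapter type: :single_string,
    name: "haiku",
    description: "A haiku about the given topic"

  def initialize(topic:)
    @topic = topic
  end

  # The base class handles calling the configured LLM and parsing the result
  def generate
    super
  end

  # The real "work" of the component: describing what we want
  def prompt
    <<-PROMPT
      You are a poet.
      Write a haiku about the following topic:
      #{@topic}
    PROMPT
  end
end
```

A Generator like this would then be used as `HaikuFromTopicGenerator.new(topic: "autumn rain").generate`.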
In this demo:
We start by describing the new Generator we want in natural language using the `sublayer generate:generator` CLI command
Behind the scenes, the CLI uses an LLM (Gemini in this case, but you can use any provider you’d like) and provides it with our description plus examples of existing Generators
The LLM generates a new, working Ruby class that correctly implements the required Generator interface, often ready to use immediately.
Finally, we briefly look at the SublayerGeneratorGenerator class itself - the code that generates generators - to show how it packages the description and code examples for the LLM.
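To make that last step concrete, here is a simplified sketch of the idea, not the actual SublayerGeneratorGenerator source: the meta-generator is itself just another Generator whose prompt packages your description together with the source of an existing Generator as a few-shot example. The class name and prompt wording here are illustrative.

```ruby
require "sublayer"

# Illustrative sketch of a "generator that generates Generators":
# it receives your natural-language description and the source code of
# an existing Generator, and asks the LLM for the Ruby source of a new one.
class GeneratorFromDescriptionGenerator < Sublayer::Generators::Base
  llm_output_adapter type: :single_string,
    name: "generated_generator",
    description: "Ruby source code for a new Sublayer Generator class"

  def initialize(description:, example_source:)
    @description = description
    @example_source = example_source
  end

  def generate
    super
  end

  def prompt
    <<-PROMPT
      You write Sublayer Generators.

      Here is an example of an existing Generator:
      #{@example_source}

      Write a new Generator that does the following:
      #{@description}
    PROMPT
  end
end
```

The interesting part is how little is needed: one example plus a description is enough context for the LLM to produce a class that slots into the same interface.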
This technique is also how the automatic Action generation from AI Programs While I Sleep works, though it includes many more examples in the prompt.
Why It Matters
What this shows is that by structuring your code around well-defined, polymorphic interfaces, you can focus the LLM on replicating a single interface and sidestep the need to explain the entire system’s functionality. The LLM only needs to get one interface correct, which makes the problem much easier.
Typically, as a codebase grows it gets more complex and interconnected, making it harder for humans or AI to modify it without breaking something else somewhere. I’ve heard many stories from vibe and semi-vibe coding sessions where, as the application gets larger, the AI does worse and worse. This is an architectural issue.
However, with a heavily polymorphic design, you actually see returns to scale when using an LLM. Because each component follows the same base interface, adding the 10th or 100th component doesn’t make the task any harder for the AI. In fact, it can make the AI better at generating the next one.
Why? Because each new, correctly implemented component serves as another high-quality few-shot example for the LLM. The more varied examples of components the AI sees, the better it becomes at pattern matching and in-context learning for that specific type of component. It has a richer ‘mental model’ derived from your actual code, increasing the likelihood of generating correct, usable code for the next component you ask it to create. And the loosely coupled design means the complexity of other parts of the system interferes little, if at all.
How To Get It
The core piece of interesting code here is the meta SublayerGeneratorGenerator, which takes your description from the CLI, combines it with an example Generator and its description, and generates a new Generator for you.
Another version of this technique is also in use for Sublayer Actions, both in the Sublayer gem CLI and in the sublayerapp/sublayer_actions repo. That generator, which uses all the Actions in the repository as examples for the LLM, can be found here.
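As a rough illustration of how that might look (the directory layout and prompt assembly here are assumptions, not the repo’s actual code), gathering every Action as an example is just a matter of reading the files and dropping them into the prompt:

```ruby
# Illustrative sketch: collect every Action file from a local checkout of
# sublayerapp/sublayer_actions and join them into one examples block for
# the meta-generator's prompt. The glob path is an assumption.
action_sources = Dir.glob("sublayer_actions/**/*.rb").map do |path|
  File.read(path)
end

# Separate each example so the LLM can see where one Action ends
examples_block = action_sources.join("\n\n# ---\n\n")

# Every new Action merged into the repo automatically becomes one more
# few-shot example for generating the next one.
puts "Collected #{action_sources.size} example Actions"
```

This is the returns-to-scale point in practice: the prompt gets richer every time a new component lands, without any other part of the system needing to change.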
If this sparks any thoughts or you’d like to discuss different ways to use this loose coupling/polymorphic design pattern, leave a comment or stop in the Sublayer Discord and say hi!