
Burla is finally public


I finally had the guts to publish Burla.

I have been using LLMs for coding since the beginning, and over the last few months I went properly down the rabbit hole: different hosted models, local models, agentic workflows, migration prompts, planning loops, the whole lot. At some point I realised I did not just want opinions about AI tooling. I wanted a real project that could absorb all that experimentation and prove whether any of it was actually useful.

Burla became that project.

I rewrote it more times than I want to admit. Partly because the library itself kept getting better, partly because I was using it as a pressure test for the way I work with AI. I wanted to build something real, not a toy demo, and see what actually holds up when you care about clarity, testability, documentation, and long-term maintenance.

The main lesson was pretty simple: AI is powerful, but it works much better when the thing you are building is explicit. When the API is consistent, the docs are clear, and the workflow does not rely on hidden context, both humans and LLMs make better decisions.

That is the lens Burla was built through.

Mocking without the nonsense

If I had to compress it into one sentence, Burla is a lightweight .NET mocking library built for explicit tests, strict defaults, and real migrations.

That means I cared less about building the biggest possible feature matrix, and more about a few things that matter in real codebases:

  • tests should read clearly in reviews
  • missing setup should fail loudly
  • async support should feel normal
  • moving from Moq or NSubstitute should not turn into archaeology
  • the API should stay consistent enough that AI tools can generate and migrate code reliably

That last point is not marketing fluff. Burla was built with LLMs in the loop, for a world where LLMs are part of the engineering workflow. I wanted a library that humans can read quickly and AI tools can use without wasting half the context window on mock-framework quirks.

Why I am sharing it now

At the start, Burla was mostly for me.

I needed a serious project to test how far AI-assisted development can go when you keep normal engineering standards high. Over time it stopped feeling like a private experiment and started feeling useful: useful enough that other people might actually want it, and useful enough to give me a solid base for the tooling I want to explore next, around testing, migrations, and helping engineers adapt their workflows to the age of AI.

Where to start

If you want to take a look, the best places are:

If the broader AI angle resonates, this also ties closely to how I think about onboarding AI into your codebase and the kind of explicit, well-documented workflows I want more tools to support.

I also want to give the migration problem and the design bets behind Burla their own follow-up, because that part deserves more room than a launch note.

Burla is public now. That still feels a bit surreal, but there it is.

If you try it, I hope it saves you some pain. And if nothing else, it is the clearest example I have so far of what I mean when I say we need to build tools for the way engineers actually work now, not for the way we worked two years ago.

Happy coding!