Programming and Finance


Mon 14 March 2022

Kristin and I have been thinking hard about retirement, and trying to determine when we will be able to pull the trigger. I keep pretty close tabs on our finances and I have a good handle on how our retirement assets are growing. I know what our net worth is, within about a +/-10% margin. We have met with a financial advisor, who plugged our numbers into their magic tool, and the answer we received is pretty much the same as I have estimated for myself. Bottom line is that we know where we are and where we need to be in order to be able to retire.

One thing I've been looking for in all this is a tool that we can use to play "what-if" scenarios. What if we retire one year ahead of plan? What if I work for one or two more years? What is the impact of a recession? What if we buy a $150K super-camper instead of a $10K used beater? I have not found any tool that provides that sort of capability, so of course, I decided to create my own. As it turns out, this is not as simple as I initially thought it would be.

My first approach to this was to build a big spreadsheet with all my "accounts" in columns, organized into assets, liabilities, equity, income and expenses. It seemed like a good method, but cracks appeared quickly, as I needed to code all sorts of conditionals, indirections and table lookups to implement different scenarios. Not scalable, and there was no practical way to simulate market behavior to compute statistics. So, the spreadsheet went out the door.

I paused to do a bit more research into accounting tools and practices and found Plain Text Accounting, which then led me to Beancount. I liked what I saw, and gave it a try, using Lua to automate the creation of transactions for Beancount to process. This worked to a point, but it wasn't easy to capture and present the output of Beancount without doing a lot of work. So I dumped Beancount and wrote a transaction processor in Lua, basically replicating some of the core functions of Beancount. This was better but still not good enough; creating the what-if analyses was also more work and less flexible than I wanted. I also started struggling with CSV creation - it's not hard in Lua, but it's not dirt simple the way it is in Python.
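To make the idea concrete, here is a minimal sketch of what that kind of transaction generation looks like. It's in Python for consistency with the rest of this post (the actual automation was in Lua), and the account names, dates, and amounts are made up purely for illustration:

```python
from datetime import date

def beancount_txn(when, narration, postings):
    """Render one Beancount-style transaction as plain text.
    `postings` is a list of (account, amount) pairs, with amounts in USD."""
    lines = [f'{when.isoformat()} * "{narration}"']
    for account, amount in postings:
        lines.append(f"  {account}  {amount:9.2f} USD")
    return "\n".join(lines)

# Emit a paycheck transaction for the first of every month in 2022.
for month in range(1, 13):
    print(beancount_txn(
        date(2022, month, 1),
        "Paycheck",
        [("Assets:Checking", 5000.00),
         ("Income:Salary", -5000.00)],
    ))
    print()
```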

So, I switched to Python, essentially translated the Lua code to Python, and made some usability improvements along the way. I retained the transaction-based model that I had evolved from Beancount, made pretty good progress, and was happy with how it was going. Alas, the joy did not last long! As I began to implement random-walk-based market modeling, I found that it did not play well with the transaction model. Capturing market linkage with transactions was clumsy; I had to generate transactions whose amounts were a percentage of an account balance just to capture that account's growth. I had begun by pregenerating all the transactions for the entire duration of the simulation, but it seemed better to switch to iteration-by-iteration transaction generation. Once at that level, though, I questioned the value of a transaction-based model: why not just compute the next iteration directly from the results of the previous iteration? This was looking more like a big set of equations than a bunch of transactions. This approach, too, went out the door.
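Here's a rough sketch of where the friction showed up. The account name, return figures, and monthly cadence are all assumptions for illustration, but it shows why the market-growth "transactions" had to be generated one iteration at a time: each month's amount depends on the balance produced by the previous month, and once you're in that loop, the transactions are little more than bookkeeping wrapped around a direct update.

```python
import random

def growth_transactions(balance, months, mean=0.005, stdev=0.03):
    """Generate one market-growth 'transaction' per month whose amount is a
    random percentage of the current balance, then apply it to the balance."""
    transactions = []
    for month in range(months):
        rate = random.gauss(mean, stdev)    # random-walk monthly return
        amount = balance * rate             # the transaction is a % of the account
        transactions.append((month, "Assets:Brokerage", round(amount, 2)))
        balance += amount                   # must be applied before the next month
    return balance, transactions

final_balance, txns = growth_transactions(500_000.00, 12)
```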

I started (again) from scratch, but with a simple period-by-period update using a set of equations with adjustable parameters. To handle changes in behavior over time, I only need to update the affected parameters when the date of a change arrives. For example, when the house gets sold, the mortgage payment is set to zero (along with other adjustments like cashing out the equity). This has led to a significant reduction in the complexity of the code, and a significant improvement in conciseness and readability. Although I haven't finished implementing all the functionality I want in this tool, I can just "feel" that this is the right approach. (Yet even as I write that, I have a little bit of doubt in the back of my mind!)
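As a rough sketch of the shape this approach takes (the parameter names, dollar amounts, and the single lumped net-worth figure are simplifications for illustration, not the real model):

```python
import random
from dataclasses import dataclass, replace

@dataclass
class Params:
    salary: float = 8_000.00       # monthly take-home income
    expenses: float = 5_000.00     # monthly living expenses
    mortgage: float = 2_000.00     # monthly mortgage payment
    growth_mean: float = 0.005     # mean monthly market return
    growth_stdev: float = 0.03     # monthly return volatility

def simulate(net_worth, months, params, changes):
    """Advance net worth one period at a time. `changes` maps a month index
    to a function that adjusts the state when that date arrives."""
    history = []
    for month in range(months):
        if month in changes:
            net_worth, params = changes[month](net_worth, params)
        net_worth *= 1 + random.gauss(params.growth_mean, params.growth_stdev)
        net_worth += params.salary - params.expenses - params.mortgage
        history.append(net_worth)
    return history

def sell_house(net_worth, params):
    # Cash out the equity and zero the mortgage payment from this point on.
    return net_worth + 200_000.00, replace(params, mortgage=0.0)

history = simulate(750_000.00, 120, Params(), {36: sell_house})
```

The nice property is that a what-if scenario is just a different set of starting parameters and a different schedule of changes; nothing has to be regenerated, and running the simulation many times gives the statistics directly.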

What's the moral of the story here? Abstractions matter! From abstractions, we create algorithms and write code, and the code captures the abstraction; but the code is also constrained by the limitations of the abstraction. Treating a financial forecasting problem as an accounting problem is not likely to work very well, if at all; they are different problem domains. This is something I've experienced many times over in my life as a software developer, but what always surprises me when it happens is that I rarely get much advance notice that I'm going down the wrong path until I get smacked square in the face with a problem too big to ignore. The only recourse is to back up, gather my wits, reconsider the nature of the problem, maybe do a bit of prototyping as a proof of concept, and then set off again on a new course. I suspect this isn't unique to me, but is rather a universal truth in the life of a software developer.