The following is part of a series of posts called "Building a household metrics dashboard".
This series details the steps I took to set up a household dashboard that gives the family better insight into a number of areas, such as finances and health, along with a few other useful metrics and information.
I’ve recently started putting together a household Grafana installation with the idea of tracking a number of different ‘life’ metrics. I feel that having readily available graphs of certain metrics, such as financial positions, consumption of various resources, and health-related tracking, displayed prominently in the house will help my wife and me to be more mindful and make better decisions in the future.
I’m also hopeful that having highly responsive real-time metrics, as well as slower-moving lagging ones, as part of our normal daily family life will help to educate our kids in concrete, real-world terms. Power consumption in the house is a good example. It’s a fairly abstract concept, so it can be difficult to grasp, especially with no actual feedback to learn from. Most people also aren’t aware of the vast differences, often orders of magnitude, between appliances in the home. Yes, heating your oven to the required temperature can take the same amount of energy it takes to light your kitchen constantly for two weeks.
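As a back-of-envelope sanity check: the wattages below are illustrative assumptions, not measurements, but under them the comparison roughly holds for a small LED fixture.

```python
# Back-of-envelope check of the oven-vs-lighting comparison.
# All figures here are assumptions for illustration, not measurements:
# a ~2.4 kW oven element running ~15 minutes to preheat, versus a
# small ~1.8 W LED fixture left on continuously for two weeks.

OVEN_KW = 2.4          # assumed oven element power draw
PREHEAT_HOURS = 0.25   # assumed 15-minute preheat

LED_KW = 0.0018        # assumed 1.8 W LED fixture
TWO_WEEKS_HOURS = 14 * 24

oven_kwh = OVEN_KW * PREHEAT_HOURS          # ~0.6 kWh
led_kwh = LED_KW * TWO_WEEKS_HOURS          # ~0.6 kWh

print(f"oven preheat: {oven_kwh:.2f} kWh, LED for two weeks: {led_kwh:.2f} kWh")
```

With a bigger, older lighting setup the numbers shift by an order of magnitude, which is rather the point: without feedback, nobody has any intuition for this.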
To set up the system, I mounted a TV in a fairly prominent wall space in the kitchen. An old ACEPC left over from other projects serves as the main computer, running Ubuntu 20.04.
For most of my personal projects, I deploy via Netlify for static sites such as this website, or onto one of my hosted Kubernetes clusters. In this case though, as the data is quite personal and sensitive, I’m sticking to the local ACEPC machine for hosting everything, accessible only on the local network.
My first target for data was to consolidate the financial data from every account my wife and I hold between us: all income and expenses, plus daily balances across all of our asset and liability accounts.
It turns out that this is extremely difficult, if possible at all, even in today’s online world. Banks and customer-facing APIs appear to be absolutely mutually exclusive.
My initial POC involved using the fantastic Puppeteer project to log into each site and scrape the data I needed. I managed to complete about 10 integrations over two days while getting some deep experience with Puppeteer, but the approach proved very brittle, and I got the sense I’d need to fix these scraping scripts far more often than I wanted to. Logging into these sites also meant storing all of those credentials together in the clear, something I wasn’t going to be comfortable with.
A much easier solution came up when I finally looked at my wife’s Mint account. Mint lets you add many different accounts with the idea of building an overview of your financial position. It even integrates with Australian institutions, despite Mint not, as far as I know, offering its services within Australia.
Adding all of our accounts to Mint and then scraping just that one site seemed like a much better idea, and luckily it turned out that many people have walked this path before: the popular mintapi project already exists to do just this.
mintapi not only allowed me to pull all the information I was after but more besides, such as current credit ratings, which was a nice bonus. Another big win with Mint is that it uses its own internally assigned ids, allowing for much easier persistence of the data and avoiding the need for logic to distinguish individual transactions, an issue that is difficult to deal with when scraping a typical transaction list.
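A minimal sketch of what this pull looks like. The `Mint` class and method names follow mintapi’s README at the time of writing (the library changes between versions, so check yours); the flattening helper is my own illustration of keying rows on Mint’s internal id.

```python
# Sketch of pulling transactions via mintapi and flattening them for
# storage. mintapi's API follows its README at the time of writing;
# depending on the installed version, get_transactions() may return a
# pandas DataFrame rather than a list of dicts (convert with
# .to_dict("records") if so).

def rows_from_transactions(txns):
    """Flatten Mint transaction dicts into (id, date, description, amount)
    tuples. Keying on Mint's internally assigned id means re-imports are
    naturally de-duplicated, with no fuzzy matching logic required."""
    return [
        (t["id"], t["date"], t["description"], t["amount"])
        for t in txns
    ]

def fetch_rows(email, password):
    import mintapi  # third-party: pip install mintapi
    mint = mintapi.Mint(email, password)
    return rows_from_transactions(mint.get_transactions())
```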
While I am storing timeseries data and using Grafana to display it, I steered clear of Graphite and the other typical timeseries data stores, instead settling on MySQL.
The reasoning here is that I want to build this database out over the coming years. It won’t always be timeseries data that I want to store; some of it may be highly relational. It’s also a case of sticking with something I know well: choosing a mature industry stalwart is always a good bet when a project has time horizons in the vicinity of decades.
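Those internal Mint ids pay off here: making the id the primary key means an import can simply be re-run. A sketch of the idea, using Python’s built-in sqlite3 as a stand-in so the example is self-contained; on MySQL the same pattern maps to `INSERT ... ON DUPLICATE KEY UPDATE`, and the schema is illustrative rather than my exact one.

```python
# Sketch of the persistence layer. sqlite3 is used as a stand-in so the
# example runs anywhere; the real system targets MySQL. Keying rows on
# Mint's internally assigned transaction id makes re-imports idempotent.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        mint_id     TEXT PRIMARY KEY,   -- Mint's internal id
        txn_date    TEXT NOT NULL,
        description TEXT,
        amount      REAL NOT NULL
    )
""")

def upsert(rows):
    # OR REPLACE keys off the primary key, so duplicates collapse.
    conn.executemany(
        "INSERT OR REPLACE INTO transactions VALUES (?, ?, ?, ?)", rows
    )
    conn.commit()

# Importing the same batch twice leaves a single copy of each row.
batch = [("t-1", "2020-06-01", "Groceries", -82.40)]
upsert(batch)
upsert(batch)
count = conn.execute("SELECT COUNT(*) FROM transactions").fetchone()[0]
print(count)  # 1
```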
The data from Mint has allowed me to create a number of highly useful dashboards.
I have also added Finnhub as a data source, which gives us OHLC charts for some of the stocks that we follow closely.
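For the curious, Finnhub exposes candles over a simple REST endpoint. A sketch of building the request for daily candles; the endpoint and parameters are from Finnhub’s public API documentation, and the token is a placeholder.

```python
# Sketch of requesting daily OHLC candles from Finnhub's REST API.
# The /stock/candle endpoint and its parameters (symbol, resolution,
# from, to, token) are per Finnhub's API docs; the token is a placeholder.
from urllib.parse import urlencode

FINNHUB_BASE = "https://finnhub.io/api/v1"

def candle_url(symbol, start_ts, end_ts, token, resolution="D"):
    """Build the candle request URL; timestamps are Unix seconds."""
    params = urlencode({
        "symbol": symbol,
        "resolution": resolution,  # "D" = daily candles
        "from": start_ts,
        "to": end_ts,
        "token": token,
    })
    return f"{FINNHUB_BASE}/stock/candle?{params}"
```

Fetching is then a plain HTTP GET on the resulting URL; the response is JSON with parallel arrays for open, high, low, close, and volume.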
Overall, having this financial information on a series of dashboards in the kitchen is proving even more useful than I first thought it would.
It’s also worth mentioning that this is quite the ‘nerdy’ project. You can likely imagine my wife’s apprehension as I discussed my plans and then installed the TV in the kitchen area. Once things were up and running though, her level of buy-in was higher than I expected; she can definitely see the value in having this system. We agreed to only have it displaying in the mornings before work hours and then after work hours until bed time. This was easily done with a cron task that sets the screen’s brightness to one or zero, blacking out the screen at times we don’t want it visible, e.g. xrandr -d :0 --output HDMI-2 --brightness 0. I actually feel that having only short windows when the dashboard is visible makes us pay better attention to it than if it were always on.
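For the sake of illustration, the scheduling logic could look something like this as a script cron runs every few minutes. The display windows here are assumptions standing in for our actual hours; the `HDMI-2` output and the xrandr invocation are the ones from above.

```python
# Sketch of the screen-blanking logic as a script for cron to run
# periodically. The visibility windows are illustrative assumptions;
# the xrandr command and HDMI-2 output name are from the setup above.
import subprocess
from datetime import datetime

MORNING = range(6, 9)    # assumed: visible 06:00-08:59
EVENING = range(17, 22)  # assumed: visible 17:00-21:59

def desired_brightness(hour):
    """1 during the morning and evening windows, 0 otherwise."""
    return 1 if hour in MORNING or hour in EVENING else 0

def apply(now=None):
    hour = (now or datetime.now()).hour
    subprocess.run([
        "xrandr", "-d", ":0", "--output", "HDMI-2",
        "--brightness", str(desired_brightness(hour)),
    ], check=True)
```

In practice two plain crontab entries (one setting brightness to 1, one to 0) do the job just as well; a script only earns its keep once the schedule grows conditions.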
Following on from financial data, I’m going to look at implementing the following: