The Exakt Health sports injury app
Lucia Payo

Four Principles For Software Engineering

Four principles to create a healthy and sustainable engineering culture.


At Exakt Health we aim to create a healthy and sustainable engineering culture. In this article, I want to talk about the principles that help us create this culture. These four principles are not only good for the software that we build, but they also put the people who build it at the center. The goal is to deliver the best solutions while keeping the team’s motivation and happiness high. We do this by reducing barriers as much as possible and investing in things that make us feel productive and in control. In our view, this is the only way to have a healthy and sustainable tech culture that also delivers a product with high standards.


Readability

The goal of readability is to make everything we create as easy as possible for another person to read and understand. This goes from the code, to the documentation, to anything that needs to be used and maintained. Readability is at the core of many other good engineering practices: it’s people who work on and maintain the codebases, so the easier they are to understand, the more efficiently we work and the fewer issues we introduce.

Readability should be at the top of our principles and not be compromised unless keeping it makes us fail to deliver the product and quality requirements. For instance, let’s say we can improve the performance of some functionality by sacrificing readability. In this case, we first need to confirm the impact of this performance improvement: is it really necessary? If it is, performance becomes a requirement. The original code doesn’t meet the performance requirement, so we need to write this logic differently, accommodating the level of performance we want in the most readable way possible. On the other hand, if we don’t have objective and, preferably, measurable reasons to think there is a performance issue worth solving, we should not sacrifice readability. We are simply leveraging the machine’s power so the code is easier to work with, which is fine.
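To make the "confirm the impact first" step concrete, here is a minimal sketch of measuring before trading readability away. The functions and data are invented for the example; the point is that the "clever" version only earns its complexity if the measured difference actually matters.

```python
import timeit

def total_even_readable(numbers):
    # Reads like the requirement: sum the even numbers.
    return sum(n for n in numbers if n % 2 == 0)

def total_even_clever(numbers):
    # "Optimized" version with a manual loop and a bit trick.
    total = 0
    for n in numbers:
        if not n & 1:
            total += n
    return total

if __name__ == "__main__":
    data = list(range(10_000))
    # Measure both before deciding: is the clever version actually faster,
    # and is the gap large enough to justify the readability cost?
    for fn in (total_even_readable, total_even_clever):
        elapsed = timeit.timeit(lambda: fn(data), number=200)
        print(f"{fn.__name__}: {elapsed:.4f}s")
```

If the measured gap is negligible for our workload, the readable version wins by default.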

Writing readable code is the second most important goal of a developer, only after writing code that does what it is meant to do.

But what are the characteristics of readable code? There is plenty of material about this topic; here I leave two of my favorite articles:


Simplicity

This principle is about solving the problem at hand in the most concise way possible. Simplicity also applies to everything we create as engineers, not just code, and goes hand in hand with readability. As with readability, we should keep simplicity at the top of our principles and only make our code more complex when the problem we are trying to solve becomes intrinsically more complex.

We are lucky to work in an industry where people share their experiences and work selflessly, making software development rich in architectures, practices, frameworks, libraries, etc. Thanks to this, we have a lot of options to choose from, but even though it is important to know what the possibilities are, it is even more important to know what to use, and when.

Every time we introduce a new framework, architecture or library, the complexity of our code increases. This is a necessary evil, thanks to that we add layers of abstraction and we can solve more advanced problems, but there is a fine line between including something that will truly make us more productive and something that will create more problems than it solves. For instance, including a dependency injection library will help us decouple the creation of instances from the places where they are used, allowing us to abstract the actual implementation of the instance passed as long as it complies with a definition. This, by extension, makes our code easier to test, more flexible and loosely coupled. On the other hand, it can make instance creation — one of the most basic tasks — pretty obscure, putting a black box at the heart of our program. It will introduce a set of drawbacks that we’ll have to deal with forever after: learning curve, hard to debug problems, more difficult hiring/onboarding, etc. We need to carefully consider if using a library like that is worth it in our context or if we can get the same benefits — or the most important ones — with a simpler approach that might not require any external library.
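As an illustration of the "simpler approach" the paragraph above alludes to, here is a sketch of manual constructor injection, with invented class names. It keeps the main benefits a DI library promises (coupling to a definition rather than an implementation, easy substitution in tests) while instance creation stays explicit and easy to follow.

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """The definition our service depends on, not a concrete class."""
    def charge(self, amount_cents: int) -> bool: ...

class StripeGateway:
    def charge(self, amount_cents: int) -> bool:
        # A real implementation would call the payment provider here.
        return True

class FakeGateway:
    """Test double: records charges so tests can assert on them."""
    def __init__(self) -> None:
        self.charges = []
    def charge(self, amount_cents: int) -> bool:
        self.charges.append(amount_cents)
        return True

class CheckoutService:
    # The dependency is passed in, not created here, so the service is
    # decoupled from how (and which) gateway gets instantiated.
    def __init__(self, gateway: PaymentGateway) -> None:
        self._gateway = gateway
    def checkout(self, amount_cents: int) -> bool:
        return self._gateway.charge(amount_cents)

# Composition root: the one place where concrete instances are wired up.
service = CheckoutService(StripeGateway())
```

For many codebases, a small hand-written composition root like this covers the most important benefits without the black box.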

There are many practices, architectures and libraries available but all of them boil down to solving a much smaller set of common problems. Focusing on these bottom-line problems will help us choose the best tool for the job.

The 80/20 rule — or Pareto principle — is a great way to weigh different approaches. It states that, for many events, roughly 80% of the effects come from 20% of the causes. This principle applies very well to software engineering and is very useful when making decisions. It helps you stay pragmatic and not fall into the rabbit hole of perfectionism.

For instance, it is common to have several ways to solve one problem, usually ranging from fast and hacky to perfect in every detail. I believe picking the one closest to the 80/20 rule is a good approach: 80% of the value is produced by 20% of the effort and complexity. Then we can take the remaining 20% and apply the 80/20 rule again, or decide the existing solution is good enough for the time being. This way we can also deliver value consistently, instead of waiting until 100% is done.
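A quick back-of-the-envelope calculation shows how applying the rule twice compounds. This is a toy model that assumes each iteration follows the 80/20 split exactly:

```python
def pareto_iterations(rounds: int):
    """Cumulative value and effort after applying the 80/20 rule `rounds` times.

    Each round captures 80% of the remaining value for 20% of the
    remaining effort (a simplifying assumption, not a law of nature).
    """
    value = effort = 0.0
    remaining_value = remaining_effort = 1.0
    for _ in range(rounds):
        value += 0.8 * remaining_value
        effort += 0.2 * remaining_effort
        remaining_value *= 0.2
        remaining_effort *= 0.8
    return value, effort

# One round: 80% of the value for 20% of the effort.
# Two rounds: 96% of the value for 36% of the effort.
```

Under this model, chasing the last few percent of value is where most of the effort goes, which is exactly the rabbit hole the rule helps us avoid.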


Automation

Automation removes the mental weight of recurring tasks so we can focus on the important work at hand. The fewer things we have to focus on, the better we work. It’s very satisfying to create value fast and safely, and automation is a big ally here. Without it we are forced to:

  • remember the details of how to do satellite tasks that are necessary but don’t add to the value creation per se. Context-switching back and forth to perform these tasks not only takes time but also drains energy and can be a big motivation killer.
  • spend time on something an automated process can do much better, killing our productivity. The risk of making mistakes is also high, especially when our motivation goes down.

Here are three things I consider worth automating from the very beginning of a project:

  • Testing — A very big part of development is trying out the software we create to make sure everything works as expected. Writing tests is automating the process of “making sure everything works as expected”. Not only will we save a lot of time in the future, but we will also offload a lot of information about what needs to be tested and how, making space in our heads for more relevant things.
  • Static analysis — Automatic checks for common smells, wrong formatting, etc. are low-hanging fruit that help us write better code and perform better code reviews, since they allow us to focus on the logic.
  • Releases, deployments, etc. — For instance, mobile releases require a lot of steps: freeze the code, build the production variant, sign with production keys/certificates, upload the artifact to the stores, upload mapping/dSYM files, etc. It’s also a sensitive operation where we don’t want to make mistakes, and it’s recurring, so it’s a perfect candidate for automation.
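The testing bullet can be sketched in a few lines: instead of manually re-checking behavior after every change, a small test encodes the check once. The helper function and its values are invented for illustration:

```python
import unittest

def weekly_sessions(phase: str) -> int:
    """Hypothetical helper: how many sessions a recovery phase schedules."""
    sessions = {"acute": 2, "rebuild": 3, "return": 4}
    if phase not in sessions:
        raise ValueError(f"unknown phase: {phase}")
    return sessions[phase]

class WeeklySessionsTest(unittest.TestCase):
    # "Making sure everything works as expected", automated and repeatable.
    def test_known_phases(self):
        self.assertEqual(weekly_sessions("acute"), 2)
        self.assertEqual(weekly_sessions("return"), 4)

    def test_unknown_phase_is_rejected(self):
        with self.assertRaises(ValueError):
            weekly_sessions("surgery")

# Run with: python -m unittest this_file.py
```

Once this exists, "did I break the scheduling logic?" is a question the machine answers in milliseconds, every time.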

But automation is not complete unless we also automate when these checks are triggered, so we don’t forget to run them when they matter most. For that, we can use pipelines: a PR pipeline that makes sure unit tests and static analysis pass before asking for a review, or a release pipeline that triggers integration or UI tests and then goes through the release steps. Git hooks are also a good tool; for instance, at Exakt Health we use a Git hook to make sure the branch is named after a ticket number and to automatically append the ticket number to every commit message. This way we can get more context about any change in the code if we need it in the future.
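A hook along those lines could look like the sketch below, wired as Git’s commit-msg hook (the hook that receives the commit message file). The branch naming pattern `EH-123-description` is an assumption for this example:

```python
#!/usr/bin/env python3
"""Sketch of a commit-msg Git hook: prepends the ticket number taken
from the branch name (e.g. EH-123-fix-login) to the commit message."""
import re
import subprocess
import sys
from typing import Optional

TICKET_RE = re.compile(r"^([A-Z]+-\d+)")  # assumed branch naming convention

def extract_ticket(branch: str) -> Optional[str]:
    match = TICKET_RE.match(branch)
    return match.group(1) if match else None

def prepend_ticket(message: str, ticket: str) -> str:
    # Don't duplicate the ticket if it is already in the message.
    if message.startswith(ticket):
        return message
    return f"{ticket}: {message}"

if __name__ == "__main__" and len(sys.argv) > 1:
    branch = subprocess.check_output(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"], text=True
    ).strip()
    ticket = extract_ticket(branch)
    if ticket is None:
        sys.exit(f"Branch '{branch}' is not named after a ticket number")
    msg_file = sys.argv[1]  # Git passes the commit message file path
    with open(msg_file) as f:
        message = f.read()
    with open(msg_file, "w") as f:
        f.write(prepend_ticket(message, ticket))
```

Saved as `.git/hooks/commit-msg` and made executable, it runs on every commit with no one having to remember it.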

By automating repetitive tasks we can excel at the things we can’t automate.

One thing to keep in mind is that it is easy to overdo automation; that’s why it’s important to keep the simplicity principle in mind. We should automate only the things that cause a real problem for us, and remember the 80/20 rule. Done right, automation grows organically with the complexity of the project and organization.


Observability

Observability is a very important element in making good decisions. Thanks to observability we can validate assumptions and know whether something is working as expected. It helps us decide what to do and when to do it.

One of the hardest decisions in software development is figuring out the sweet spot between requirements and complexity for a solution or feature. Is what we built good enough? Is it working as expected? Is it useful? Thanks to observability we can start with a simple solution and iterate from there. It’s like running experiments to gather data points that help us evolve the solution from simplistic to excellent.

Observability gives us the intel to pick the most important battles.

Humans are very good at making assumptions; this helps us move fast and not be paralyzed at every step we take. The downside of this skill is that we are often wrong and, if we are not careful, we develop big blind spots. To counteract this downside, we first need to acknowledge that it is highly probable we have made wrong assumptions, and then we need to identify where those assumptions are. Reflecting on these two questions can help with this process:

1. How do we know we are building the right thing?

From product features to developer productivity solutions, it is easy to overdo it or miss the point. Most of the time — if not all of the time — it is impossible to know what the perfect solution is. Acknowledging that we don’t always know what is best and we don’t have all the answers is key. This way we can think about how we are going to get the information to help us decide what to do before investing lots of effort and resources. There are two key moments when we should spend some time reflecting:

  • Before engaging in a project or task — Make sure the problem exists, is worth solving, and is well framed. Spending some time reflecting and gathering data can save a lot of time later: we may find out the problem doesn’t have as big an impact as we thought, and even when it is worth solving, it’s very important to clearly define the goal we want to achieve so we don’t over-engineer. If we can attach a measurable metric to this goal, even better.
  • Before delivering a solution — Think about which data points could be extracted to validate that the solution is solving the problem. One common example is introducing analytics events at key points of the program before releasing it to the public.

2. How do we know if something is not working as expected?

If we did the previous point well, we will have data points that give us insights into how well what we built is doing. This information will help us steer our decisions and priorities in the right direction. It is important to build a strategy to process this data and continuously observe it.

Also, from the tech side, we need to make sure the software we create doesn’t have bugs. For instance, while coding, it is easy to fall into the trap of assuming the program cannot be in a certain state, even though that state is technically possible. We might be tempted to either completely ignore this case or write some code that does “nothing”, since in our minds the case is impossible. If we do this repeatedly, it’s just a matter of time and volume before our program runs into these “impossible” dead-ends. The worst part is that we won’t even know when they happen.
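One way to avoid silent “impossible” dead-ends is to fail loudly (or report to an error tracker) when an unexpected state shows up, instead of writing a branch that does nothing. A sketch, with invented state names:

```python
from enum import Enum

class SyncState(Enum):
    IDLE = "idle"
    RUNNING = "running"
    DONE = "done"
    FAILED = "failed"

def describe(state: SyncState) -> str:
    if state is SyncState.IDLE:
        return "Waiting to start"
    if state is SyncState.RUNNING:
        return "Sync in progress"
    if state is SyncState.DONE:
        return "Up to date"
    if state is SyncState.FAILED:
        return "Last sync failed"
    # The "impossible" branch: don't silently do nothing. Raising (or
    # reporting) here is what tells us when the impossible happens.
    raise AssertionError(f"unhandled sync state: {state}")
```

The raise at the end is the observability hook: the day a new state is added and this function is forgotten, we find out immediately instead of never.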

In any system, if something can happen, it will happen. It’s just a matter of time.

Nowadays there are plenty of tools, like Crashlytics or performance monitoring, that track and raise alerts when something is not working as expected. Leveraging them as much as we can is one of the smartest things we can do to maintain our software.

One important side note about observability

The data we gather is a supporting element in our decision making, but we are the ones putting all the pieces together to make the final call. Many times the data is trivial and straightforward and it’s easy to make a good decision, but there will be times when the data is too narrow or even biased, only measuring a specific part under specific conditions. After all, it is us, the masters of assumptions and biases, who design and introduce the observability. For that reason, it is very important to maintain our critical thinking, since 100% objective observability doesn’t exist.

If you’ve read this far, thanks for reading! Here are the four key messages of the article in a nutshell:

  • Write code so that someone without a coding background could guess what it does.
  • Recognize when a solution is good enough for the time and situation so you can move on.
  • Let machines do the boring and repetitive tasks so you can focus on the fun and creative parts.
  • Take your time to observe and reflect before making decisions.
