
Engineering velocity at Improbable.

15 May 2019

Peter Mounce is a Software Engineer at Improbable. He likes making computers do the boring repetitive stuff so people don’t have to.

At Improbable, we aim to continuously improve. Though this applies to all areas of the business (finance, people-operations, talent, marketing; really, honest!), I want to focus on how we apply it to our engineering practice and its surroundings.

Broadly speaking, our engineering velocity (EV) team has two closely linked objectives:

  1. Optimise, company-wide, how we confidently deliver software to our customers by changing things to promote more productivity (see (1) “Things we do” below).
  2. Produce better engineers.

That's great bizbabble, Pete, but what do you actually do?

At parties, when I'm asked what I do, I answer: “I'm a cross between a software plumber, an engineering lubricant and a guidance counsellor. I ask questions a lot.”

To be clear, in EV we're trying to improve the relationship that our engineers have with their developer experience here at Improbable. We want them to be happy here for a long time. Like all relationships, this one tends not to fall apart because of some big thing - you wouldn't have gotten together if there was a big, obvious problem. Relationships fail because of an accumulation of frustrations and disappointments that go unaddressed. Our aim is to head that stuff off before it becomes serious.

To deal with this, we have a set of written principles within EV; here's the core of them:

  1. Be an example of what developers should expect from Improbable.
  2. Multiply impact by creating an ecosystem of integrated tools.
  3. Enable all Improbable engineering teams to do their best work.
  4. Carrots, not sticks.
  5. Make “The Right Thing To Do” also be the “Easiest Thing To Do”.
  6. Learn how to enable better engineering practices in games development.
    And, perhaps most importantly:

  7. Always be curious.

On that last one: while as engineers we can be our own customers to some extent, we definitely don't know everything about everything. So, we need to be learning all the time.

How do we optimise software delivery?

Well, we think of it as a pipeline. You know, like e-commerce funnels, HTTP request processing pipelines, or continuous delivery pipelines. Here's how I imagine it:

That’s the map of where we might apply ourselves. We want to apply effort as close to the pipeline's entry point (diagram above) as possible, since success there compounds over time. For example, let’s say we improve how an engineer onboards - through a consistent workflow for making a change, a common place to find documentation, or automated workstation setup. That engineer has now “levelled up” earlier in their career, and we earn compound interest on a higher base for the duration they’re at Improbable.

Software is delivered by a set of people, with processes, sets of tools and sets of infrastructure. We're instrumenting these things - that is, adding telemetry to them - so we can do continuous, data-driven improvement. We try to make the pipelines publish metrics, brainstorm (with our customers) what might reduce friction in the pipeline, implement the most promising change, check whether and by how much the metrics improve, and then iterate. Then we train our customers to self-serve this optimisation. No magic there, really.
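As a concrete (if simplified) illustration of that instrumentation, here's a minimal Python sketch of timing a pipeline stage and emitting a structured metric. The metric and stage names are made up for the example, and a real setup would ship to a metrics backend rather than print:

```python
import json
import time
from contextlib import contextmanager

@contextmanager
def timed_stage(pipeline, stage):
    """Time a pipeline stage and emit one structured metric record."""
    start = time.monotonic()
    outcome = "success"
    try:
        yield
    except Exception:
        outcome = "failure"
        raise
    finally:
        duration = time.monotonic() - start
        # Stand-in for a real metrics client: one JSON record per stage run.
        print(json.dumps({
            "metric": "pipeline_stage_duration_seconds",
            "pipeline": pipeline,
            "stage": stage,
            "outcome": outcome,
            "value": round(duration, 3),
        }))
```

Wrapping each stage like `with timed_stage("ci", "lint"): ...` gives you per-stage durations and outcomes for free, which is exactly the data you need to see whether a change to the pipeline actually moved the needle.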

How do we produce better engineers?

Improbable has concentrated several highly experienced engineers into the EV team; as such, we are able to apply that experience consistently and coherently. It helps that we force ourselves to be our own customer - no cheating.

We regularly have engineers from other teams rotate through the EV team so we're able to act as an ad-hoc academy. It's synergistic - we're learning and interacting with our internal customers hands-on, and they're learning about our tooling, approach, and opportunities to apply those things “back home” in their original team.

One thing we've started doing is easing how engineers onboard into being on-call by exposing them to operations tooling earlier. Our continuous integration ships structured build-logs, metrics and traces to the same stack as our production workloads, so the barrier to onboarding into on-call is lower: engineers are already familiar with that set of tools and techniques from day-to-day work. More about that in a later post.
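To give a feel for what “structured build-logs” can mean in practice, here's a small Python sketch of a JSON log formatter - one parseable object per line, so CI output can be shipped to the same log stack as production. The logger name is illustrative, not our actual configuration:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line, so build logs
    can be ingested by the same log pipeline as production workloads."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("ci.build")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("compile finished")
```

The payoff is that the same queries, dashboards and alerting you use in production work unchanged on build output.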

(1) “Things we do”

What are “things”? Nothing is really off the table. Here are some examples of “things”, big and small:

    • Documentation. We write docs for our products and processes, and treat it as a bug if whoever is on-call can't answer a question with a helpful link into our docs. Documentation is a great way of reducing how many times you're interrupted, and being interrupted less is better for your productivity.
    • Linting. We have lots of linting running in CI. We want more linters and static analysis. It's more productive to fix things automatically and have engineers talk about distributed world simulation than where to put braces in code.
    • Gathering requirements. We help teams find, engage with, and gather requirements from their customers. Solving problems via software comes after knowing who your customers are and what they want, then checking back regularly.
    • Look back. We participate in retros, post-mortems and corrections-of-error, and help define acceptance criteria so we know when “done” happens - and so we prevent problems from happening again rather than applying band-aids.
    • Lead by example. Our docs are comprehensive. Our tickets have objective acceptance criteria. We ask “why” a lot so we can get context to decide on “how” to proceed. We pay down the debt we accumulate. We design with customer UX foremost and long-term maintainability and extension in mind (because we know we need this via observation, not because we gold-plate). We avoid becoming a bottleneck by training others.
    • Process improvement. We consult with teams around how to define a repeatable release process for their software to their customers.
    • Debt improvement. We consult with teams on where best to apply their limited resources, based on our collective past experience.
    • Sharing improvements. Spotting great local optimisations and promoting them to other teams. We love seeing someone else’s toil-saving tool get adopted more widely, and we get behind and push whenever possible.
    • Reliable builds. Engineers shouldn’t need to interact with continuous integration beyond a set of green ticks on their PRs - jobs should not fail for reasons beyond a coding error. “Retry it on a different agent” is something we’ve mostly eliminated from our vocabulary!
    • Help the helpers. We're allies to those who want to do better. Sometimes, there’s an engineer or set of engineers who want to make things better but need help convincing others that the work is worth the effort. We’re in a position to help since we interact company-wide and have a great overview of who’s doing what, and why. Sometimes, they really just need to know that someone is willing to back them up.
    • Gather, publish and measure things. Build passes, failures and durations. Which jobs are on the critical path? Cycle times on tickets through workflows - how long does it take to get from ticket created to a happy customer, and what are the stages on that journey? PR time-to-first-response and time-to-first-decision. We recognise that we need to measure things before we can decide where to spend effort optimising.
    • Build & release engineering. While we definitely won't write your build & release automation for you, we'll help, train you, show you what others have done and generally be a sounding board - we've collectively bled all over systems automation in our time.
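To make one of the metrics above concrete, here's a small Python sketch of computing PR time-to-first-response from a list of timestamped events. The event kinds (`opened`, `review`, `comment`) are hypothetical names for this example; a real version would pull events from your code-review system's API:

```python
from datetime import datetime

def time_to_first_response(events):
    """Given (iso_timestamp, kind) events for one PR, return the hours
    from 'opened' to the first 'review' or 'comment', or None if the PR
    has not yet received a response."""
    opened = None
    # ISO-8601 timestamps sort correctly as strings.
    for ts, kind in sorted(events):
        when = datetime.fromisoformat(ts)
        if kind == "opened":
            opened = when
        elif opened is not None and kind in ("review", "comment"):
            return (when - opened).total_seconds() / 3600
    return None
```

For example, a PR opened at 09:00 and first reviewed at 13:30 the same day yields 4.5 hours - aggregate that across a team and you have a baseline to improve against.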

Get in touch 

We hope you've gained something from learning how we keep our engineering teams happy and confidently productive.

That said, if you have any silver bullets for how to reliably have people learn from your mistakes ahead of learning from their own, we'd love to hear from you. We're learning all the time too.