# Treat engineers as users

December 30, 2021
Engineers are happiest when they're starting a new project. Some of that excitement comes from pure novelty, but some of it runs deeper. You have a blank slate to build on. You can take a step back and consider new languages or libraries. Compilation is fast and the code feels expressive. You change one line in your IDE and watch your browser refresh in real time.
Once that project matures, something changes from those early days. Your build times grow into the awkward twenty second range: just long enough to answer a few messages on Slack, but probably not long enough to refactor with a faster runtime. You fiddle with libraries and microservices in different repositories, putting up a handful of PRs when you're trying to ship a single new feature. Your unit tests grow into the thousands, with little organization or narrative structure to guide you through what has already been written.
All these things add up. And they add up to your stack being a pain to use in practice. Building is no longer fun. It feels like a chore - worse, it feels like work. This is the core of the irony that drives executive teams crazy: going from 0 to 1 in a project is easier than going from 1 to 1.5.
It's an underemphasized asset of successful engineering startups - they make development enjoyable. More companies need to follow their lead and treat their internal teams like users. Give them a UX that they can enjoy. If an experience isn't good enough for an open source patron or an end client, don't subject your teams to it either.
## Organization Design
Large organizations will likely have a designated architecture team, or at minimum an "architect"-level staff role integrated into the engineering teams. I've worked with architects with varying degrees of appetite for digging into the weeds. Almost always, the ones who dive into the details (code reviews and all) are the most impactful for the company.
If you have an architecture team, make them build applications that use their designs. The annoyance of most technologies isn't in the core behavior - if it were, you would quickly notice and correct the problem. The problems are the edge cases: the undocumented environment variables, the crashes, or the workflow molasses when you're trying to move quickly. No matter the extensiveness of the tech design process, there's no substitute for actually using the frameworks. The real learning comes from the sense of intuitiveness - whether the tooling helps or hurts when you're trying to get something done. This only happens when the rubber meets the road.
If you don't have a centralized architecture job description, carve out explicit time for your team to build tooling to make their lives easier. This should be done intentionally to avoid a proliferation of pet projects that barely move the needle. I recommend a series of human profiling sessions to articulate the actual needs of your team.
## Human Profiling
It's hard to self-analyze where your own development pain points are located. Instead, separate concerns: make one engineer the "client" and one engineer the "solution architect." Perform a human profiling session much like you would for programmatic performance optimization. Where is the client spending the most time? Check back with this engineer when they're starting a new feature, when they're in the middle of a feature, and when they're getting ready to ship.
### Start of a feature
- What repositories do they have to clone or install? Is everything installable via a single command or does it require a series of bash commands, brew installs, and local environment setup?
- Is the documentation clear or are they facing unexpected bugs with version conflicts?
- What code are they duplicating to get things started? Is there one conventional template for a CRUD API? Where does it live?
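To make the single-command ideal concrete, here's a hypothetical sketch of a scaffold command that answers all three questions at once. The `new-service` name, the template location, and the setup step are assumptions for illustration, not a real tool:

```python
# Hypothetical scaffold command: one install, one template, one place.
import shutil
import subprocess
from pathlib import Path

import click

# Assumed convention: the shared CRUD template ships inside the tooling package.
TEMPLATE_DIR = Path(__file__).parent / "templates" / "crud-api"

@click.command(name="new-service")
@click.argument("name")
def new_service(name: str) -> None:
    """Clone the conventional CRUD API template into ./NAME and install deps."""
    destination = Path.cwd() / name
    shutil.copytree(TEMPLATE_DIR, destination)
    # One command also sets up the environment - no trailing brew installs.
    subprocess.run(["pip", "install", "-e", "."], cwd=destination, check=True)
    click.echo(f"Scaffolded {name} from the shared CRUD template.")

if __name__ == "__main__":
    new_service()
```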
### Middle of a feature
- How much time are they spending in the code of one repository? Are they having to context switch frequently into different repositories or languages? Each context switch has a momentum cost, since they have to re-warm their mental cache on a different set of interface conventions.
- How many times do they have to dive into the underlying library code? This might point to lacking documentation or a leaky abstraction in the underlying layer.
- How long are they waiting to compile? How frequently do they compile and view the finished product? What do they do in the meantime while waiting?
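One way to put numbers on the compile question: wrap whatever build command the team runs in a small timing harness and review the log during profiling sessions. A minimal sketch, where the log path and invocation style are assumptions:

```python
# Minimal timing harness: run the build, append wall-clock time to a log
# the "solution architect" can review later.
import json
import subprocess
import sys
import time
from pathlib import Path

LOG_PATH = Path.home() / ".profiling" / "build_times.jsonl"

def timed_build(command: list[str]) -> int:
    """Run the build command, record its duration, and pass through the exit code."""
    start = time.monotonic()
    result = subprocess.run(command)
    elapsed = time.monotonic() - start
    LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
    with LOG_PATH.open("a") as log:
        log.write(json.dumps({"command": command, "seconds": round(elapsed, 2)}) + "\n")
    return result.returncode

if __name__ == "__main__":
    # Usage: python timed_build.py make build
    sys.exit(timed_build(sys.argv[1:]))
```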
### End of a feature
- When are users writing tests? Periodically through the development process or right at the end?
- How many test types are being used: unit, integration, UI? Are they mocking web requests? How many lines does it take to test a single function? (See the sketch after this list.)
- How do they deploy it into production? How long do they wait for code reviews?
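On the mocking and line-count questions, a rough budget helps: if exercising one function takes much more than a handful of lines, the abstraction probably needs work. A minimal pytest-style sketch, where `myservice.users.fetch_user` is a hypothetical helper that wraps `requests` and returns the parsed JSON body:

```python
from unittest.mock import patch

from myservice.users import fetch_user  # hypothetical module under test

def test_fetch_user_returns_name():
    # Patch requests where the module under test looks it up, so no
    # real network call is made.
    with patch("myservice.users.requests.get") as mock_get:
        mock_get.return_value.json.return_value = {"id": 42, "name": "Ada"}
        assert fetch_user(42)["name"] == "Ada"
```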
Since this need-finding journey will last somewhere from a sprint to a quarter, depending on the size of the feature, I recommend maintaining a wiki page with all the insights and ideas that stem from these sessions. Build up a user persona for each engineer you interview: what tools could help them execute faster? When ideas are continually added to this page and reference observed developer inefficiencies, it becomes easier to pop a goal off the stack when time permits.
## Delivery
I've run far too many `sh` scripts and Jupyter notebooks that are pitched as "one off" requests. As someone who has written my fair share of these scripts, let's level with ourselves: there is no such thing as a one-off request. Tools are built for a reason, and that reason likely won't go away once you've run the script. It will either be run periodically by others or its logic will prove useful enough to be retrofitted in the future. Build it right (or right enough) the first time to avoid dealing with bash file hell.
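"Right enough" often just means wrapping the same logic the throwaway script would have contained in argument parsing and help text, so the next person can rerun it. A sketch, with a hypothetical `backfill_events` job standing in for the one-off:

```python
import argparse
from datetime import date

def backfill_events(start: date, end: date, dry_run: bool) -> None:
    """Replay events between two dates; the body is whatever the one-off did."""
    print(f"Backfilling {start} -> {end} (dry_run={dry_run})")

def main() -> None:
    # Flags and help text replace the tribal knowledge of "how do I run this?"
    parser = argparse.ArgumentParser(description="Backfill analytics events.")
    parser.add_argument("--start", type=date.fromisoformat, required=True)
    parser.add_argument("--end", type=date.fromisoformat, required=True)
    parser.add_argument("--dry-run", action="store_true")
    args = parser.parse_args()
    backfill_events(args.start, args.end, args.dry_run)

if __name__ == "__main__":
    main()
```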
Tactically, I recommend giving your team a host of CLI tools in one place, with usage documentation on par with an open source library. These tools should be discoverable. A good litmus test is whether a new engineer can figure out the commands to run just via the `--help` flag. Our own package is called `ai-team` and is deployable through JFrog (a private PyPI repository). By convention, if the CLI tools need other dependencies installed on the host operating system, they will install them at usage time. So every command we need on a daily basis is only two terminal commands away:

```bash
pip install ai-team
ai-team run --help
```
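To make that convention concrete, here's a minimal sketch of how such an entrypoint could be structured. The subcommands and the `ensure_installed` helper are illustrative assumptions, not the actual internals of ai-team:

```python
import shutil
import subprocess

import click

def ensure_installed(binary: str, brew_formula: str) -> None:
    """Install a host dependency on first use instead of in a setup doc."""
    if shutil.which(binary) is None:
        subprocess.run(["brew", "install", brew_formula], check=True)

@click.group()
def cli() -> None:
    """Daily tooling for the AI team. Every command documents itself here."""

@cli.command()
@click.argument("service")
def run(service: str) -> None:
    """Run SERVICE locally, installing host dependencies on first use."""
    ensure_installed("docker", "docker")  # hypothetical dependency
    click.echo(f"Starting {service}...")

if __name__ == "__main__":
    cli()
```

Because click lifts the docstrings into help output, `ai-team --help` lists every subcommand with its description - which is exactly the discoverability litmus test above.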
Not all problems are going to require a new tool. In fact, the best improvements to developer workflows come from improving the tools they're already using. Refactor an API to be clearer or to require less user code for the same result. Make conventions for the most common cases. Better document expected behavior with a clearer CLI contract. All of these are worth prioritizing at the same level as creating something brand new. Iteration is key.
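As an illustration of "less user code for the same thing" - with every name invented for the example - collapse the ceremony that each call site repeats into one conventional entrypoint, while keeping the escape hatches:

```python
import json
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    payload: dict

class MetricsClient:
    """Old interface: every call site connected and serialized by hand."""

    def connect(self, retries: int = 3) -> "MetricsClient":
        return self  # connection setup elided for the sketch

    def send(self, body: str, topic: str) -> None:
        print(f"[{topic}] {body}")

# New interface: bake the conventional case into one call.
_default_client = MetricsClient().connect()

def publish(event: Event, topic: str = "product-events") -> None:
    """The common case is one line; the topic stays overridable."""
    _default_client.send(json.dumps({"name": event.name, **event.payload}), topic)

publish(Event("signup", {"plan": "pro"}))
```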
## Going The Extra Mile
It may feel like getting your tooling from 0 to 1 is taking time away from the work you're actually trying to do. And certainly, I've seen situations where tool building is done ad nauseam. I've found that timeboxing these efforts is the best way to balance the benefit to developers against the real costs of execution. These efforts will expand or shrink to the time you allot for them, so calendar them for maximum impact - one day a sprint is typically a good tradeoff.
This limited time forces a prioritization exercise: which tooling is going to give maximum yield for fixed effort? Pop that idea off your wiki page and get to implementing. As sprints progress and you work through the more niche pain points, you can slowly inch back to the initial surge of creation.
Nothing brings a smile to someone's face more than tooling that lets them focus on their unique product. We work to ship. Make going from 1 to 1.5 a bit faster and your engineering teams will thank you for going the extra mile.
Special thanks to Jeff Zoch and Cole Simmons for their thoughts on an early draft of this post.