In my last blog post, “The Framework Trap,” I talked about the problems we ran into when engineers, designers, and product team members worked together. I’ve come to see that Scrum, with its strong emphasis on following the right process, has been more of a hindrance than a help to our team. Around the same time, we ran our quarterly feedback survey to see what was getting in the way and where we were doing well as a team; we use an internal tool that works like 15five. One of the things we found was that we had lost agility and the ability to turn decisions into action, two things Scrum is supposed to strengthen. To figure out what went wrong, we ran several root cause analyses and retrospectives to identify the causes and come up with solutions. We’re currently implementing these changes, but since team satisfaction has already gone up a lot, I’d like to share them in general terms along with the underlying problems.

Context Matters

One thing we learned from the root cause analysis was that context wasn’t always shared. To understand this, it’s important to know that we’re a hybrid team. The Customer Success team and the product managers spend most of their time in the office, so their informal communication is pretty good. Most engineers and designers, however, work remotely much of the time and are spread across three continents.

Over time, a structure had formed in which the product manager had a good network within Customer Success and also worked closely with the designer to develop designs. This usually happened in video calls, and sometimes Customer Success reps joined to give feedback on first drafts or explain the context of customer issues. Engineers were hardly ever on these calls. Once the first detailed designs were finished, they were tested in interviews with customers. Engineers were invited to these interviews but could decline, and they often took advantage of that freedom. After the designs had been tested and the customers’ feedback incorporated, they were presented to the engineers for a feasibility check. If the product manager or designer had questions, engineers were consulted beforehand, but otherwise the engineers sometimes only encountered the domain-specific background during implementation, after sprint planning. That was obviously far too late, and it often led to the complexity of individual features being underestimated. On top of that, the engineers were at a real disadvantage because they lacked the right context. Since they weren’t regularly pulled into the discussions between product and design, the value and usability risks were addressed early on, but the feasibility risk wasn’t.

We took a few steps to address this. To get everyone talking early and together, we decided to hold at least one video call per week with the product manager, the designer, and an engineer; everyone else is also invited. During the call, the product manager walks through the current customer issues that need solving. Importantly, this isn’t a list of every problem customers have reported; it’s a summary of the issues we as a company want to solve. This happens at a concrete level: for example, an Excel spreadsheet might be used to show how customers currently work around a problem and what extra complexity is involved, such as linked data in documents. Sometimes the product manager also presents initial solution ideas. We’re currently trying out Figma Make and Lovable to quickly mock up an interface that shows what a solution might look like. The call usually includes a presentation with background information: the regulations that shape the solution, how exactly the feature will play out in the real world, and what certain terms mean. This ensures that everyone working on a problem has the information they need to solve it. The call is recorded and made available to everyone, so people can review explanations and work around dropped connections and time zone differences.

Early Feedback

Another thing the root cause analysis surfaced was that early feedback on the concrete implementation of new features was often missing. Usually, an engineer develops a feature locally and then pushes it to the dev instance, where it’s tested for errors and checked to make sure it works as intended. If it does, and the unit tests and end-to-end tests pass, the feature stays and moves to the staging instance with the next deployment. The staging instance is essentially a simulation of the production environment. Since deployments to staging are currently still bundled, we have to test the whole set of new features on staging as well. If everything runs smoothly there, we deploy to the production instance. Because the engineers have very different personalities, feedback was requested at very different points. Those keen to quickly perform heroic deeds usually asked for feedback only after deploying to the dev instance, which meant a lot of time was spent on a problem that wasn’t fully understood, so the solution wasn’t implemented in the best way. And when feedback requests got lost or weren’t answered properly by product and design, features were sent on to staging, where changes took far longer, especially if the feature couldn’t be rolled out to customers in that form.
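To make the promotion flow above a bit more concrete, here’s a minimal sketch in Python of the gating logic. The `deploy`, `run-unit-tests`, and `run-e2e-tests` commands are hypothetical placeholders, not our actual tooling; the point is only to illustrate the dev → staging → production gates.

```python
import subprocess
import sys

# Hypothetical stage gates mirroring the flow described above:
# dev -> staging -> production. Every command name here is a
# placeholder for whatever test and deploy tooling you actually use.
STAGES = [
    ("dev", ["run-unit-tests", "run-e2e-tests"]),
    ("staging", ["run-e2e-tests"]),  # staging simulates production
    ("production", []),              # reached only if all gates pass
]

def gate_passes(checks):
    """Run every check for a stage; any non-zero exit code blocks promotion."""
    return all(subprocess.run([check]).returncode == 0 for check in checks)

def promote(feature):
    for stage, checks in STAGES:
        print(f"Deploying {feature} to {stage}...")
        subprocess.run(["deploy", feature, "--target", stage], check=True)
        if not gate_passes(checks):
            print(f"{feature} is blocked at {stage}; fix and redeploy.")
            sys.exit(1)
    print(f"{feature} is live in production.")

if __name__ == "__main__":
    promote(sys.argv[1])
```

The design choice worth noting is that each stage has its own gate: a feature that fails a check stops where it is instead of riding a bundled deployment further toward production.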

We decided to gather feedback during development, on localhost. This helps make sure nothing accidentally ends up in production. To prevent bottlenecks, some engineers took the initiative to record short videos of a feature, explaining what they had implemented and how, and sent them to product and design for feedback. They could then keep working on other tasks while the design and product teams gathered feedback.

