Decouple Line Organization from Requirement Area
After almost 10 years, I got the chance to work on a LeSS Huge adoption again. Facing different challenges and reflecting on my experience from 10 years ago, I am proposing an experiment here: decouple the line organization from the requirement area.
In 2007, I experienced an organizational transformation at NSN (Nokia Siemens Networks) to adopt LeSS, though the name "LeSS" did not yet exist at the time. We transformed the organization into a LeSS Huge setting with a few requirement areas; in each area there was an APO (Area Product Owner) and an Area manager, who was the line manager for the area. We used the same name for the requirement area and the line organization. For example, the area I worked in was Traffic & Transport, both as a requirement area and as a line organization. So, requirement area and line organization were coupled.
Even though the workload in a requirement area is more stable than that of one feature, if we follow priority based on customer value, it is inevitable that the workload varies over time. So, today you need 5 teams working in a requirement area; tomorrow you need 6. I am exaggerating: this would not be "today/this sprint" vs. "tomorrow/next sprint", but more like "this quarter/this year" vs. "next quarter/next year". Anyway, it happens. When it does, LeSS recommends moving a team, rather than individuals, to the other requirement area. When requirement area and line organization are coupled, the team would also change its line organization. As you can imagine, a line change is never easy. Even if everybody agrees that it makes sense and supports it, the necessary justification and convincing of others carries big overhead. Even today, reflecting back, I can still feel the pain. Yes, the silos among requirement areas were clearly there, and the coupling with the line organization made them worse. Interestingly, the developing silos were also one of the reasons why we chose to couple line organization with requirement area: that way, the line organization would have more product ownership - not for the whole product, but for the requirement area.
Although it was a painful experience to move teams to a different requirement area, it did not happen often, as the workload seemed stable per requirement area. In retrospect, I suspect that the prioritization decisions may have, consciously or unconsciously, taken the capacity of the requirement areas into account.
Recently, I encountered a different challenge. In the context of my LeSS coaching client, the workload between two requirement areas varies release by release. Say there are 5 teams in each requirement area. In release 1, based on priority, 60% of the work is from requirement area A and 40% from requirement area B. That translates into 6 teams for requirement area A and 4 teams for requirement area B. However, in release 2, only 40% of the work is from requirement area A, while 60% is from requirement area B. If requirement area and line organization are coupled, we basically have two options. First, we do not follow the priority strictly and take on work according to the capacity in each requirement area. Second, we move teams to different requirement areas release by release; as the line organization is coupled, we change their line organization as well. As their release cycle is 3-4 months, such frequent line changes would be hectic.
In fact, we have a third option, which is to decouple the line organization from the requirement area. Once decoupled, we may move teams across requirement areas without changing their line organization. Let's illustrate this with the diagram below (RA = Requirement Area).
2 line organizations (A and B), each having 5 teams
2 requirement areas (RA1 and RA2), with varying number of teams
Release 1, 4 teams for RA1 and 6 teams for RA2
Release 2, 6 teams for RA1 and 4 teams for RA2
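The structure above can be sketched in a few lines of code (a minimal illustration only; the team and organization names are the ones from the diagram):

```python
# Minimal sketch of the decoupling: a team's line organization is fixed,
# while its requirement-area assignment may change release by release.
line_org = {
    "A": ["A1", "A2", "A3", "A4", "A5"],
    "B": ["B1", "B2", "B3", "B4", "B5"],
}

# Team-to-RA assignment per release (only A5 and B5 ever move).
assignment = {
    "release1": {"RA1": ["A1", "A2", "A3", "A4"],
                 "RA2": ["B1", "B2", "B3", "B4", "A5", "B5"]},
    "release2": {"RA1": ["A1", "A2", "A3", "A4", "A5", "B5"],
                 "RA2": ["B1", "B2", "B3", "B4"]},
}

def line_of(team):
    # The line organization is a stable property of the team; it never
    # changes when the team moves to another requirement area.
    return next(org for org, teams in line_org.items() if team in teams)

# A5 moves from RA2 to RA1 between releases, yet stays in line org A.
assert "A5" in assignment["release1"]["RA2"]
assert "A5" in assignment["release2"]["RA1"]
assert line_of("A5") == "A"
```

The point of the sketch is that `line_of` never consults the release: line membership and RA assignment are independent dimensions.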
The names of requirement areas are often associated with product domains (customer domains, rather than architecture domains). Hotel, Flight, etc. would be suitable names for requirement areas in a Ctrip-type product, assuming each is big enough to justify its own area. However, we name line organizations without referring to product domains; they could simply be product line groups A, B, etc.
In LeSS Huge, one rule says that each team specializes in one RA. In this case, we can't let A5 and B5 specialize in RA1 and RA2, respectively. Instead, we would like them to be able to work in both RA1 and RA2. Would that cause problems? Let's first understand the rationale behind the rule. It is usually difficult for a team to avoid specializing in some area, as the whole product in LeSS Huge is too complex for any single team. This holds true by and large. However, there are a couple of subtle differences here.
- We are talking about a minority of teams. Most teams (A1-4 and B1-4) still specialize in one requirement area. It is feasible to enable a small number of teams to work in more than one requirement area.
- Teams A5 and B5 may not specialize in RA1 and RA2 completely, but to some extent, e.g. in some sub-areas of both RA1 and RA2. The key is to have the flexibility to address the workload variation across requirement areas in different releases.
Another potential downside of decoupling is that the line organization would not develop strong product ownership. While this is true for the requirement area, too strong an ownership of one requirement area may lead to silos within one product. Thus, decoupling also has the potential to reduce silos, if we can make every line organization care more about the whole product.
Regardless of which choice you make - coupling or decoupling - I suggest you deeply understand the forces in this dynamic, and thus make an informed choice.
Create Discomfort for Continuous Improvement
How often do you feel discomfort? If it is every day, you burn out; if it is seldom, you probably do not grow much.
Sitting on the plane to visit a client in the US, I felt discomfort because of the different culture, different language, and more. I expected this when I accepted the job. I intended to create this discomfort so that I would grow.
How could we create discomfort to grow the team?
1. Start from where you are
You may start from where you are, which is one of the Kanban principles. It helps reduce resistance to change. On the other hand, does it create enough discomfort? Recently, one client asked me for a suggestion on whether to adopt Scrum or Kanban. One thing I paid special attention to was whether it would create the right amount of discomfort. For Scrum, 1/3 of the people felt positive and were eager to try, another 1/3 felt challenged and hesitated, and the remaining 1/3 were unclear. For Kanban, most of them felt comfortable. So I suggested going with Scrum, as I saw that the discomfort level triggered by the change was not so high as to burn them out, but enough to make them grow and improve.
2. Done within timebox
Getting stories done by the end of every Sprint creates the drive for improvement. You need both a clear Done and the Sprint as a timebox. If you do not have a clear definition of Done, it is easy to get "Done" by changing it to fit whatever gets completed at the end of the Sprint. I have also seen teams plan one day between Sprints to accommodate not being able to get done. Scrum works as a mirror, providing transparency and exposing problems for your improvement. Done within the timebox is supposed to create discomfort for the team's growth.
3. Expand Done
If you can constantly get done by the end of the Sprint, you become comfortable again. Don't stay in that state too long; create discomfort again. If there is still a gap between your current Done and a PSPI (Potentially Shippable Product Increment), expand your Done. Expect problems in getting to Done after it is expanded, which is good, because you have created discomfort again. The team grows when they solve the problems that prevent them from reaching this expanded Done.
4. Shorten Sprint
I find that many teams still like a 4-week Sprint, because they can do mini-waterfall while still getting done at the end of the Sprint. Again, if you observe that the team feels very comfortable, the Sprint may not be effective in exposing their problems and creating the drive for improvement. In that case, try shortening the Sprint and assess the discomfort level: would it push the team to stop doing mini-waterfall and seek more effective, collaborative ways of developing software?
5. Go faster, create more value, etc.
The next challenge could be going faster, creating more value, etc. Or create your own High Performance Tree as your team vision to help identify the gap. With a gap, you feel discomfort again and continue to improve.
It is a journey, enjoy it!
Two Approaches for Solving Dependency
In recent email exchanges, one topic came up: does introducing more coordination meetings and roles actually solve the dependency problem, or does it just "shift the burden"?
Shifting the burden is a common system archetype, consisting of two balancing loops and one reinforcing loop. For the topic of dependency, I drew the CLD (Causal Loop Diagram) below.
Two balancing (B) loops
The two balancing loops illustrate two approaches to solving the dependency problem. In a scaling environment with multiple teams, dependencies across teams often slow down development and pose a big challenge for Agile at scale. There are a few scaling frameworks, and they address various scaling challenges differently. For this specific challenge, the two approaches (the two balancing loops in the CLD) represent the SAFe way and the LeSS way of solving the dependency problem.
SAFe tries to manage dependencies. One essential activity in SAFe is PI planning, where dependencies are identified and coordination is fostered. SAFe's focus is on the PI (Program Increment), which consists of a few Sprints. When you have component teams, it is still possible to synchronize all relevant parts into the same PI, but it would be impossible to always synchronize them into the same Sprint. So, it helps to a certain extent.
LeSS tries to remove dependencies. One key element in LeSS is the feature team, which means designing the team structure so that cross-team planning dependencies are avoided. LeSS's focus is on the Sprint, the same as Scrum, which makes it critical to adopt feature teams. Feature team adoption addresses the root cause of the dependency problem, but as it requires organizational redesign, it is more disruptive and needs stronger motivation.
One reinforcing (R) loop
This is the addictive loop. Would the adoption of SAFe reduce the chance for the deeper organizational change promoted in LeSS? Once SAFe solves the dependency problem to some extent, the feeling of control goes up, thus the motivation for deeper change goes down. That creates a reinforcing loop favoring the approach of managing rather than removing dependencies. Does this addictive loop exist? It depends on whether delivering per PI is seen as good enough, and whether PI planning is seen as effective in solving dependencies.
In my recent experience, the organization I worked with adopted SAFe before moving towards feature teams. They did not find PI planning sufficient for solving dependencies, "especially when there is change, which is inevitable in their environment". That makes me wonder whether the pace of change is another variable affecting the dynamic.
The focus on the PI, rather than the Sprint, reduces the ability to respond to change. The assumption that makes PI planning somewhat effective is that the PI content is relatively stable during the PI. Change is very disruptive to a PI plan.
Therefore, I could imagine that the "shifting the burden" dynamic exists when focusing on the PI is good enough and there is little change during the PI, because then dependencies are seen as well managed, and the motivation for organizational change towards feature teams stays low.
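As a thought experiment, the addictive loop can be sketched as a toy simulation. All numbers here are invented purely for illustration; this is not calibrated to any real data:

```python
# Toy model of the "shifting the burden" dynamic.
# root_cause  = structural cross-team dependencies
# symptom     = felt dependency pain, reduced by the symptomatic fix
# The symptomatic fix (managing dependencies, e.g. PI planning) relieves
# the pain; motivation for the fundamental fix (removing dependencies via
# feature teams) is driven by the remaining pain -- the reinforcing loop.

def simulate(steps, symptomatic_strength):
    root_cause = 1.0
    for _ in range(steps):
        symptom = root_cause * (1 - symptomatic_strength)
        motivation = symptom                 # pain drives deeper change
        root_cause -= 0.1 * motivation       # fundamental fix per step
        root_cause = max(root_cause, 0.0)
    return root_cause

# The better the symptomatic fix works, the more root cause remains.
weak_fix = simulate(20, symptomatic_strength=0.2)
strong_fix = simulate(20, symptomatic_strength=0.8)
assert strong_fix > weak_fix
```

The model captures only the qualitative claim: the more effective the symptomatic relief, the slower the structural dependencies are actually removed.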
Revisit feature team
I visited a client recently for LeSS consulting. They assured me that they had feature teams in place: "they are stable as line organization units, and the size varies from 10 to 15 people." The large size smelled. I dug further... It turned out to be like this.
Line team A is one of the few feature teams in their organization. It has 12 members, named A1-A12. At one point in time, this is how work is done inside line team A: they are developing 5 features, and different members are working on different features. After a while, it changes. They form different teams for new features, as shown below.
So, what is the real team here, the line team A or various feature x teams?
What defines a real team? Roughly based on the book "The Wisdom of Teams: Creating the High-Performance Organization".
- common goal
- shared responsibility
- interdependency among members
In short, the members collaborate to achieve common goal with shared responsibility.
As only the people working on feature x are responsible for its delivery, rather than the whole team being collectively responsible for a set of features, I would say that the line team is not a real team; it is more like a working group. The real collaboration units are the feature x teams. So, we get a stable line A working group and dynamic feature x teams.
In a traditional matrix organization, a feature project is formed with members from various functional line groups to deliver a feature. However, the dynamic feature x team here has some major differences from a feature project. The comparison below, from the feature team primer, summarizes the differences between a feature team and a feature project.
Feature team:
- stable team that stays together for years and works on many features
- shared team responsibility for all the work
- results in a simple single-line organization (no matrix!)
- team members are dedicated - 100% allocated - to the team

Feature project:
- temporary group of people created for one feature or project
- individual responsibility for 'their' part based on specialization
- controlled by a project manager
- results in a matrix organization with resource pools
- members are part-time on many projects because of specialization
Except for the first point (stable vs. temporary), the feature x teams in this organization are pretty much in line with the definition of a feature team. The main difference is whether it is a stable or a temporary team.
Why does stability matter? The main supporting evidence comes from the book "Leading Teams: Setting the Stage for Great Performances", which states that team performance peaks around a team's third or fourth year.
How do we create stable feature teams from the current state? As line A is already stable, we may try to make line team A a real team. That means the whole line A team takes shared responsibility for a set of features and self-organizes to deliver with collective ownership.
Software development is inherently uncertain, and teams will inevitably encounter unexpected things. The whole team can respond to those and make a collective effort to adapt. For example, when 3 members (A1/A2/A3) working on feature 1 find that it is far more work than expected, and feature 1 has the highest priority for the whole team, they bring it up, discuss with the other members, and together decide how to adapt. As it becomes a real team, it makes much more sense to hold collaborative activities such as planning, daily standup, review, and retrospective with the whole team. The team leader or team coach focuses on increasing whole-team awareness of the overall situation and fostering shared responsibility in making adaptations.
The challenge with this approach is the large size. Based on both research and practical experience, small teams are preferred. In Scrum, 7+/-2 is the recommended size, and I would personally recommend 5-7. A small team is usually more effective, and it is more feasible to create collective ownership there. Therefore, instead of building one large line team, we may split it into two smaller teams. Each team takes shared responsibility for a set of features, and both teams are stable over time.
Once you make it work, you may then blur the boundary between them and develop a broader sense of ownership. A boundary creates identity, which is good for team development; a boundary creates silos too, which is bad for the whole product. We try to strike the balance. Will you gradually make the large line team work again? I do not know, and it would be interesting to experiment...
Shared backlog with one priority
In the context of multiple teams working on one product, I try to figure out where priority is set.
I first ask, "do we have one priority at product level and shared by all teams, or do we have priority at team level?"
The answer is often, "we set priority at product level".
Then I ask further, "what names do your teams have?"
I often get names associated with domains, such as "production, shipping, and finance".
"So, what if we have most of high-priority items in production? what would shipping and finance teams do?"
"Well, they still do shipping and finance items."
"Even when those items are not in high priority?"
Do you see the inconsistency between one priority and team names associated with domains? Sometimes, I make a sarcastic comment to drive the point home: "so, you ask your customers to change their requirements because your production team is pretty occupied but your other teams are available, don't you?":-)
Agility is about responding to customer change, rather than changing priority because of our own constraints.
The below diagram shows the shift, at least on the paper.
Names such as Wei/Shu/Wu from the Three Kingdoms are detached from domains, while still associated with the product, i.e. product x/Wei, product x/Shu, product x/Wu.
In this case, as long as an item has high priority, any available team will work on it. Chances are, in one sprint, two teams are working on production items and none of them on finance items, if that is the priority. The key is that the whole organization - 3 teams - can work on any item from any domain, according to priority. That's where agility lies.
How do we achieve this? The easy part is to remove the labels associated with product domains. The hard part is to develop the skills necessary to work across domains.
If you happen to reorganize your teams (e.g. create long-lived teams during your LeSS adoption), you can incorporate this while designing the new teams. During a recent self-designing workshop, we simply added one constraint - every team needs to cross domains. This constraint can be met by forming new teams with members who used to work in different domains.
While the most flexibility comes from any team being able to work on any domain, we also get the most (short-term) challenge in getting items done as skills are scattered the most. To balance this, we may choose a more gradual approach by setting the constraint of any team at least crossing two domains, rather than all 3 domains. In many cases, this has already helped achieve sufficient flexibility.
A final note: when we do not label teams with domains, they may still develop domain specialties over time. However, when we decouple specialization from responsibility, we avoid turning specialization into a hard constraint.
Paths to PSPI
In a scaling environment, it is common that a PSPI (Potentially Shippable Product Increment) is not yet achievable in every sprint. Even though the ultimate goal would be the teams producing a PSPI at the end of every short (say, 2-week) sprint, we begin with an imperfect Done, i.e. there is undone work between the current Done and a PSPI.
Let's explore different strategies for dealing with undone work. We examine two factors: who does the undone work, and how often it is done.
Hardening at the end of release
This is a common approach - we do the undone work at the end of the release, in a period called by different names such as release sprint, stabilization sprint, hardening sprint, etc. In essence, this is the period in which we clean up undone work to achieve a PSPI.
The teams may do all this work themselves, dedicating themselves to achieving a PSPI before working on the next release.
A separate Undone unit may do this work. The regular Scrum teams hand over the undone work to the Undone unit. The teams may immediately move to the next release, most likely reserving some capacity. When there is a big gap to a PSPI, they may even reserve all their effort until the PSPI is reached, and possibly only after some period move to the next release with some reserved capacity.
See more details from "Practices for Scaling Lean & Agile Development", Chapter 5, Planning.
It is common to have some degree of staggering. The reason to stagger is efficiency, but it creates problems such as multitasking, less collaboration, less visibility, less flexibility, etc. Kane Mar blogged about this anti-pattern 10 years ago.
Hardening in every PI
PI (Program Increment) is a concept from SAFe. A PI consists of a few normal sprints plus one HIP (Hardening, Innovation and Planning) sprint. Since v3.0, SAFe changed HIP to IP by removing Hardening. This is good for encouraging teams to complete undone work earlier, but in practice, Hardening still largely happens in the IP sprint. Achieving a PSPI every few sprints is a good strategy when we cannot yet achieve it every sprint, while doing it only once at the end of the release is too risky.
In SAFe, teams and an Undone unit may co-exist to raise Done to a PSPI. When the teams do the hardening themselves throughout the PI, they do not move forward until the PSPI is reached. Staggering exists between hardening this PI and planning the next PI, but it is somewhat limited.
Another possible scenario is that teams hand over undone work to the Undone unit and move forward to the next PI immediately, with some reserved capacity. This may create staggering between PIs. As the IP sprint usually doesn't last long, there is less temptation to do so between PIs.
PSPI in every sprint
If the team cleans up undone work every sprint, it is natural to just expand Done.
Recently, I encountered a couple of cases where a separate Undone unit does the undone work every sprint, which creates the staggering shown below.
Why do they want to stagger by leaving undone work to Undone unit in the following sprint, rather than expand Done to reach PSPI in the same sprint?
There are two main reasons.
1. The team lacks the skills for the undone work. This can be solved simply by moving people from the Undone unit into the team.
2. It is more efficient to stagger. Let me elaborate.
Suppose that we have a 1-week manual regression test as undone work, and a 2-week sprint.
- If the team excludes the manual regression test from its Done, and the Undone unit does it in the following sprint, we get 2 weeks of development per 1 week of undone work.
- If the team includes the manual regression test in its Done, we can only do 1 week of development, as the other week goes to the manual regression test.
Of course, this line of thought assumes that staggering has no extra cost and that the undone work takes a fixed time (1 week). Neither is really true. On the other hand, we can acknowledge that efficiency is indeed low with the current capability of doing undone work. So, it may make sense to first improve the efficiency of doing the undone work, e.g. through test automation.
Actually, there is another alternative. Instead of staggering, we run a 3-week sprint, moving people from the Undone unit into the team and expanding Done to include the manual regression test. How does that compare to 2-week staggering?
At first glance, staggering still seems more efficient. In a 9-week period ending with a PSPI, staggering yields 8 weeks of development work, while the whole-team approach yields 6 weeks.
On closer look, staggering affects the next sprint's development, so instead of 2 weeks it may only be 1.8 weeks (assuming a 20% loss from the staggered week, which may still be an underestimate); then we only get 7.2 weeks rather than 8.
Moreover, we assumed that the 1-week undone work is a fixed cost, which is not true. That assumption ignores the positive effect of having the whole team work, and improve, on the manual regression test. Chances are, the team speeds up automation so that it soon no longer takes a full week. If the undone work takes 0.5 weeks, we get 7.5 weeks instead of 6 weeks of development. Gradually, slow may become fast.
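The back-of-envelope arithmetic above can be checked with a few lines, using the article's stated assumptions (a 9-week horizon, 2 dev-weeks per sprint, 1 week of regression):

```python
# Reproduce the article's sprint arithmetic; every figure below is one of
# the stated assumptions, not a measurement.

# Staggering: four 2-week development sprints; a separate Undone unit does
# 1 week of regression in the following sprint; PSPI at the end of week 9.
dev_staggered = 4 * 2                     # 8 dev-weeks, ignoring staggering cost
dev_staggered_with_loss = 4 * (2 - 0.2)   # 20% loss of the staggered week: 7.2

# Whole team: three 3-week sprints, each with 2 dev-weeks + 1 regression week.
dev_whole_team = 3 * 2                    # 6 dev-weeks

# If the team automates regression down to 0.5 week, each 3-week sprint
# yields 2.5 dev-weeks instead of 2.
dev_whole_team_improved = 3 * 2.5         # 7.5 dev-weeks

assert dev_staggered == 8
assert abs(dev_staggered_with_loss - 7.2) < 1e-9
assert dev_whole_team == 6
assert dev_whole_team_improved == 7.5
```

The crossover is visible in the numbers: once staggering cost and improvement are accounted for, the whole-team approach (7.5) overtakes staggering (7.2).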
To be fair, with staggering it is also possible to improve the efficiency of doing the undone work. Eventually the undone work would be taken into the sprint, i.e. Done would expand, within the 2-week sprint. The same goes for whole-team PSPI in a 3-week sprint: as the cost of reaching a PSPI decreases, the 3-week sprint would shorten to a 2-week sprint.
Therefore, there are two paths.
1. from "2-week sprint, staggering by Undone unit" to "2-week sprint, PSPI by whole team"
- Perceived efficiency: less waste as development continues. In the short term, this may contain some truth.
- It delivers a PSPI every 2 weeks, while the other path initially delivers a PSPI only every 3 weeks. This may matter depending on the context - how frequently you have to deliver.
2. from "3-week sprint, PSPI by whole team" to "2-week sprint, PSPI by whole team"
- Avoid all those problems with staggering
- Faster improvement to decrease undone cost
- More learning, leading to long-term efficiency
If we think of both as intermediate steps, which would drive the improvement faster - imperfect done or long sprint?
We may have to live with undone work while we improve towards a PSPI every sprint. For each piece of undone work, gradual improvement is possible. The following summarizes possible evolution paths.
Who takes the PO role in client projects?
The essence of the PO role is to manage ROI - Return on Investment. When you are doing projects for clients, there is a contract in between, and ROI involves both sides, which makes the question of who takes the PO role more complicated than it is in a product organization.
We are going to look at this question for two common types of contracts - time & material and fixed price (derived from fixed scope).
Time & Material
This is Agile friendly. It's straightforward that the PO comes from the client. For the client, Investment is fixed per team-sprint, so the PO focuses on maximizing Return by prioritizing high-value work. For the vendor, Return is fixed, and so is Investment; thus, a business manager from the vendor handling a simple contract will usually suffice.
When it is done sprint by sprint, the client has the right to terminate the project earlier than the initial estimate, given a sufficient notice period. The vendor is compensated for this flexibility by a higher margin or shared savings.
Even though it is Agile friendly, I see a further improvement opportunity in linking the contract to teams. Instead of counting individuals, we count teams. The price of a team varies as its performance varies.
Fixed price (derived from fixed scope)
Fixed scope means little or no uncertainty in requirements - at least, it is perceived so by the client. We distinguish two cases under this contract mode: without client involvement (the client completely denies change) and with client involvement (the client is open to change to some extent).
- without client involvement, risk driven
You may be tempted to do waterfall in this case, but unless there is also little or no uncertainty in development, which is very rare, you still benefit from doing sprints.
Managing its Investment becomes the key driver for the vendor; thus, the PO comes from the vendor. For the vendor, Return is fixed by the fixed price, so the PO actively mitigates risks to control cost (i.e. Investment). The client sees ROI as fixed at contract negotiation, and thus has no desire to maximize Return during the course, as shown by asking for a "fixed scope".
In this case, the PO could also be the business manager who negotiated the contract with the client in the first place.
- with client involvement, value driven
The client is willing to accept change during the course, and client and vendor manage ROI collaboratively.
There are business managers from both the client and the vendor. One is the PO; the other is a critical stakeholder participating in inspection and adaptation via sprints. It is better to have the PO from the client side, as the same person can then own the product vision.
In any case, they work together to decide which changes are accepted, as this affects both sides.
1. Can those changes be accommodated in the existing contract?
If there is little impact on cost: OK for the vendor; for the client, better ROI.
If there is a slight increase in cost: it may still be OK for the vendor; for the client, better ROI.
If there is a slight decrease in cost: better ROI for the vendor; for the client, still better ROI, as the change is made for more value.
2. Should we initiate change (change price) in contract?
In the case of increasing the price: the vendor accepts the new ROI; for the client, better ROI (more Investment, but even more Return).
In the case of decreasing the price: the saved Investment could be split between client and vendor.
Win-win is clearly possible, when client and vendor collaborate.
Time & Material is simple and Agile friendly. With fixed price, when there is no client involvement, the benefit is limited; when the client is willing to get involved, or even take the PO role, better ROI can be achieved for both sides through collaboration. In fact, the fixed price is not fixed any more.
Two Guidelines for Metrics
During the recent Advanced ScrumMaster course, two guidelines for performance metrics emerged.
Continuous Improvement over Performance Evaluation
"Measuring and Managing Performance in Organizations" by Robert D. Austin is a great source for the topic of metrics. He made the distinction in terms of the purpose of using the metrics, either as motivational or as informational.
Motivational purpose ties to Performance Evaluation, while informational purpose ties to Continuous Improvement. The metrics could be the same, but used for different purposes. Take the example of unit test coverage. Management uses this metric as a KPI for teams: the higher, the better. This is motivational. A team may itself find that coverage provides meaningful insight to guide its improvement in unit testing, and thus decide to measure it. This is informational.
Austin claims that all metrics used for motivational purposes inevitably lead to measurement dysfunction to some extent. We see this in reality: some teams don't add any checks/asserts in their unit tests, purely for the sake of hitting the measured coverage target.
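To make the dysfunction concrete, here is a hypothetical example (the function and test names are invented): a "test" that executes the code, so a line-coverage tool counts every line as covered, yet asserts nothing, so it can never fail.

```python
def discount(price, rate):
    # Production code under "test".
    return price - price * rate

def test_discount_gamed():
    # Dysfunctional: runs every line (100% line coverage for `discount`),
    # but with no assert, even a wrong implementation would pass.
    discount(100, 0.1)
    discount(0, 0.5)

def test_discount_real():
    # Informational use: an actual check of behavior.
    assert discount(100, 0.1) == 90.0
```

Both tests contribute the same coverage number; only the second contributes any confidence. That gap is exactly the dysfunction a coverage KPI invites.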
This leads to the first guideline: Continuous Improvement over Performance Evaluation.
Collaboration over Accountability
When you cannot yet avoid motivational use of metrics, the technique of measuring up helps. Traditionally, we measure based on the span of control. By measuring up, we measure based on the span of influence.
Take a couple of examples. Traditionally, we measure developers on development output, e.g. lines of code, and testers on testing output, e.g. number of bugs found. By measuring up, we measure the collective output of the cross-functional team. This applies at a broader scale too. Suppose that multiple teams work on different features, and those features together form a complete customer scenario. Traditionally, we measure a single team's output, e.g. a delivered feature. By measuring up, we measure the collective output of multiple teams, e.g. a delivered customer scenario.
Through measuring up, we promote the bigger common goal, thus more collaboration. Would this sacrifice clear accountability? Probably yes, but does it matter?
Yves Morieux's TED talk "How too many rules at work keep you from getting things done" makes a great point on this question.
In a relay race, who's to blame (sorry, who's accountable:) when the baton drops? It's not clear. So, in order to have clearer accountability, you may introduce a third person whose sole responsibility is to take the baton from one runner and pass it to the other; then he would be accountable when it fails. This is a system designed for failure - failure with clear accountability. However, would that help win the race?
Moreover, you may try measuring just one level up, thus still taking accountability into consideration to some extent.
This leads to the second guideline: Collaboration over Accountability.
So often, people ask what metrics we should use in an Agile context. In my humble opinion, these two guidelines are more critical to your success in using metrics effectively.
Note: special thanks to my co-trainer Sun Ni for great insights.
When work and skill mismatch
You have more work in one area than the people in that area can do. You also have available people in the organization, but their skills do not match the work.
Will this happen? Believe me, it will always happen at some point, either at the team level or at the individual level, or both. This is simply due to dynamic work vs. "static" skills (really? perhaps only as a snapshot, and hopefully not over a whole career). When it happens, there are a few strategies to deal with it.
If you work in a matrix organization and are responsible for one project, the obvious strategy is to grab more people (or resources; are people resources?) with the right skills and allocate them to the area with more work. What if other projects suffer even more? That's none of your business.
If you fail to get more people, another strategy is to push the work through the developers on your project anyway. This inevitably leads to the use of the developer's secret toolbox, which sacrifices quality, gradually creates more and more legacy code, and eventually slows down development, a lot.
If you take a bigger, longer-term view, you see that the "push it" strategy creates more problems than it solves. Another strategy is to simply accept the fact and pull the work based on capacity, acknowledging that the output is constrained by the bottleneck. You either extend the schedule or reduce the scope, or a bit of both.
This is wise in the sense that it respects the system, so the system does not backfire. However, the speed or the value of delivery suffers, and you may not survive.
The next strategy is to grow capacity in the bottleneck area. This is quite logical: as the bottleneck area constrains the output, you grow the capacity there. It does solve the problem of delivery. However, as the bottleneck area moves, it leads to staffing for the worst-case scenario.
This is not lean. As it is practically hard to shrink in size, some teams and individuals will inevitably work on low-value stuff and create waste. The more important waste, though, is the underutilization of people's potential. People differ greatly from resources because people can learn. Limiting people to only their familiar areas does not show respect for people, which is one of the two pillars of lean thinking.
Then comes the most sensible strategy: treat the mismatch as an opportunity to learn and expand skills. This does not mean that specialization is bad. You still utilize specialization for efficiency, while taking advantage of the mismatch for people to learn new areas. This is lean. Long-lived feature teams adopt this strategy, and LeSS explicitly promotes the feature team structure.
- feature team primer
- the feature team chapter in the book "Scaling Lean & Agile Development: Thinking and Organizational Tools for Large-Scale Scrum"
What's in Product backlog?
What's appropriate to put into the Product backlog? To answer this question, we first look at different views of Scrum.
Two views of Scrum
One views Scrum as a planning framework; then, a PBI (Product Backlog Item) is the unit of work planning. The other views Scrum as a product development framework; then, a PBI is the unit of product inspection and adaptation.
These different views lead to different thoughts about which items are appropriate in the Product backlog.
What's in Product backlog?
With the view of Scrum as a planning framework, everything that needs to be done can be put into the Product backlog and planned in Sprints. This still makes sense until things not directly related to the product come in, some of which ought to be tasks in the Sprint backlog. Some Product Owners misuse the Product backlog as a tool for execution. When an ex-project manager takes the Product Owner role, this actually becomes a common pitfall, as they tend to be execution oriented. Over time, there is a tendency to incorporate "how" items into the Product backlog.
With the view of Scrum as a product development framework, only things that help product inspection and adaptation are put into the Product backlog. The question becomes: how valuable is this item to inspect, so as to adapt towards the product vision and goals? Learning about real customer needs is a valid PBI, as it moves us closer to a successful product, assuming the learning is validated. Prototypes and spikes are valid PBIs, as we build the product iteratively. Tasks are not, as they do not necessarily produce working software and/or validated learning. Product Owners and great product managers use the Product backlog as a tool for empirically developing a successful product. My colleague used Sprint goals to drive a new product development; the series of Sprint goals were effectively items in the Product backlog. Over time, there is a tendency to incorporate "why" items into the Product backlog.
Product backlog at scale
When we look at the two most popular scaling frameworks, SAFe and LeSS, we see that they take different views of Scrum; thus, their guidance on the Product backlog also differs.
In SAFe, there is no Product backlog, but rather team backlogs, program backlogs, and portfolio backlogs. At the team level, it is defined as pretty much standard Scrum, while the use of a team backlog indicates that SAFe views Scrum as a planning framework. As a team may or may not be a feature team, the items in a team backlog may or may not be product features. At the team level, the focus is mainly on the execution of planned work.
LeSS keeps the product focus and scales only the teams. LeSS views Scrum as a product development framework; thus, the Product backlog is still a product backlog. Any PBI from any team can be inspected and adapted at the product level. As LeSS requires that the majority of teams be feature teams, the focus is mainly on product inspection and adaptation on a Sprint basis.
To me, the Product backlog is the backlog for the product. It's critical to understand what our product is; then we can use the Product backlog as a tool to build it iteratively and incrementally.