The Odd-e Blog


Paths to PSPI

In a scaling environment, it is common that a PSPI (Potentially Shippable Product Increment) is not yet achievable in every sprint. Even though the ultimate goal is teams producing a PSPI at the end of every short (say, 2-week) sprint, we often begin with an imperfect Done, i.e. there is undone work between the current Done and the PSPI.

Let's explore different strategies to deal with undone work. We examine two factors: who does the undone work, and how often it is done.

[Figure: Paths to PSPI - 1.png]

Release Sprint

This is a common approach - we do the undone work at the end of the release, in a period that goes by various names: release sprint, stabilization sprint, hardening sprint, etc. In essence, this is the period in which we clean up undone work to achieve a PSPI.

The teams themselves may do all this work. They are dedicated to achieving the PSPI before working on the next release.

Alternatively, a separate Undone unit may do this work. The regular Scrum teams hand over the undone work to the Undone unit. They may immediately move on to the next release, most likely reserving some capacity for the undone work. When there is a big gap towards the PSPI, they may even reserve all their effort until the PSPI is reached, and only after that period move on to the next release with some capacity reserved.

See more details in "Practices for Scaling Lean & Agile Development", Chapter 5: Planning.

It is common to have some degree of staggering. The reason to stagger is efficiency, but it creates problems such as multi-tasking, less collaboration, less visibility, less flexibility, etc. Kane Mar blogged about this anti-pattern ten years ago.

Hardening in every PI

PI (Program Increment) is a concept from SAFe. A PI consists of a few normal sprints plus an HIP (Hardening, Innovation and Planning) sprint. Since v3.0, SAFe has changed HIP to IP by removing Hardening. This is good in that it encourages teams to complete undone work earlier, but in practice, hardening largely still happens in the IP sprint. It is a reasonable strategy to achieve a PSPI every few sprints when we cannot yet achieve it every sprint and it is too risky to do so only once at the end of the release.

In SAFe, teams and an Undone unit may co-exist to raise Done to the PSPI. Since the teams work throughout the PI, this implies that they do not move forward until the PSPI is reached. Staggering exists between hardening this PI and planning the next PI, but it is somewhat limited.

Another possible scenario is that teams hand over undone work to the Undone unit and move forward to the next PI immediately with some capacity reserved. This creates staggering between PIs. As the IP sprint usually doesn't last long, there is less temptation to do so between PIs.

PSPI in every sprint

If the team cleans up undone work every sprint, it is natural to just expand Done.

Recently, I have encountered a couple of cases in which a separate Undone unit does the undone work every sprint, which creates the staggering shown below.

[Figure: Paths to PSPI - 2.png]

Why do they want to stagger by leaving the undone work to the Undone unit in the following sprint, rather than expand Done to reach the PSPI in the same sprint?

There are two main reasons.

1. The team lacks the skills for the undone work. This can be solved simply by moving people from the Undone unit into the team.

2. It is more efficient to stagger. Let me elaborate.

Suppose that we have a 1-week manual regression test as undone work, and we have a 2-week sprint.

  • If the team excludes the manual regression test from Done and the Undone unit does it in the following sprint, we get 2 weeks of development at the cost of 1 week of undone work.
  • If the team includes the manual regression test in Done, we can only do 1 week of development, as the other week goes to the manual regression test.

Of course, this line of thought assumes that staggering has no extra cost and that the undone work takes a fixed time (1 week). Neither is really true. On the other hand, we can acknowledge that efficiency is indeed low given the current capability for doing the undone work. So, it may make sense to first improve the efficiency of doing undone work, e.g. through test automation.

Actually, there is another alternative. Instead of staggering, we run a 3-week sprint, moving people from the Undone unit into the team and expanding Done to include the manual regression test. How does that compare to 2-week staggering?

[Figure: Paths to PSPI - 3.png]

At first look, staggering still seems more efficient. In a 9-week period leading to a PSPI at the end, staggering gets 8 weeks of development work done, while the whole-team approach gets 6 weeks done.

On closer look, staggering affects development in the next sprint, so instead of 2 weeks per sprint, it may only be 1.8 weeks (assuming a 20% loss from the staggered week, which may still be an underestimate). Then it only gets 7.2 weeks rather than 8 weeks.

Moreover, we assumed that the 1-week undone work is a fixed cost, which is not true. That assumption ignores the positive effect of having the whole team work on, and improve, the manual regression test. Chances are, the team speeds up automation and the regression soon no longer takes a full week. If the undone work takes 0.5 week, the whole-team approach gets 7.5 weeks of development instead of 6. Gradually, slow may become fast.
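
To make the comparison concrete, here is a minimal Python sketch of the arithmetic above. All the numbers - the 9-week period, the 1-week regression, the 20% staggering loss, the 0.5-week improved regression - are this post's own illustrative assumptions, not measurements.

    # Sketch of the 9-week comparison; all numbers are this post's assumptions.
    period = 9                      # weeks until a PSPI is needed
    regression = 1.0                # weeks of manual regression test (the undone work)

    # Option A: 2-week sprints; the Undone unit runs the regression in the following
    # sprint, so only the final regression week is "lost" from the 9-week period.
    sprints_a = (period - 1) // 2   # 4 development sprints
    print("staggering, ideal:", sprints_a * 2.0)                  # 8.0 weeks of development

    # Assume each staggered regression week costs ~20% of a week in the next sprint
    # (interruptions, bug fixes), so each sprint yields 1.8 weeks instead of 2.
    print("staggering, with 20% loss:", sprints_a * (2.0 - 0.2))  # 7.2 weeks

    # Option B: 3-week sprints; the whole team expands Done to include the regression.
    sprints_b = period // 3         # 3 sprints
    print("whole team, 1-week regression:", sprints_b * (3.0 - regression))  # 6.0 weeks

    # If the whole team automates and the regression shrinks to half a week:
    print("whole team, 0.5-week regression:", sprints_b * (3.0 - 0.5))       # 7.5 weeks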

To be fair, with staggering it is also possible to improve the efficiency of doing undone work. Eventually the team would take the undone work into the sprint, i.e. expand Done, within the 2-week sprint. The same goes for whole-team PSPI in a 3-week sprint: as the cost of reaching the PSPI decreases, the 3-week sprint would shorten to a 2-week sprint.

Therefore, there are two paths.

1. from "2-week sprint, staggering by Undone unit" to "2-week sprint, PSPI by whole team"

  • Perceived efficiency: less waste as development continues. In the short term, this may hold some truth.
  • It delivers a PSPI every 2 weeks, while the other path only delivers a PSPI every 3 weeks. This may be important depending on the context - how frequently you have to deliver.

2. from "3-week sprint, PSPI by whole team" to "2-week sprint, PSPI by whole team"

  • Avoiding all the problems of staggering
  • Faster improvement to decrease the cost of the undone work
  • More learning, leading to long-term efficiency

If we think of both as intermediate steps, which would drive improvement faster - imperfect Done or a longer sprint?


We may have to live with undone work while we improve towards a PSPI every sprint. For each piece of undone work, gradual improvement is possible. The figure below summarizes the possible evolution paths.

[Figure: Paths to PSPI - 4.png]


Who takes the PO role in a client project?

The essence of the PO role is to manage ROI - Return on Investment. When you are doing projects for clients, there is a contract in between and ROI involves both sides, which makes the question of who takes the PO role more complicated than it is in a product organization.

We are going to look at this question for two common types of contracts - time & material and fixed price (derived from fixed scope).

Time & Material

This is Agile friendly. It is straightforward that the PO comes from the client. For the client, the Investment for each team-sprint is fixed, so the PO focuses on maximizing Return by prioritizing the highest-value work. For the vendor, the Return is fixed, and so is the Investment; thus, a business manager from the vendor dealing with a simple contract will usually suffice.

When it is done sprint by sprint, the client has the right to terminate the project earlier than the initial estimate, as long as a sufficient notice period is given. The vendor is compensated for this flexibility by a higher margin or by sharing the savings.

Even though it is Agile friendly, I see a further improvement opportunity in linking the contract to teams. Instead of counting individuals, we count teams. The price of different teams varies as their performance varies.

Fixed price (derived from fixed scope)

Fixed scope implies little or no uncertainty in requirements - at least, it is perceived so by the client. We separate two cases under this contract mode: without client involvement (the client completely refuses change) and with client involvement (the client is open to change to some extent).

  • without client involvement, risk driven

You may be tempted to do waterfall in this case, but unless there is also little or no uncertainty in development, which is very rare, you still benefit from doing sprints.

The vendor managing its Investment becomes the key driver, so the PO comes from the vendor. For the vendor, Return is fixed by the fixed price, so the PO actively tries to mitigate risks and thus control cost (i.e. Investment). The client sees the ROI as fixed at contract negotiation and thus has no desire to maximize Return during the course of the project, as shown by the request for a "fixed scope".

In this case, the PO could also be the business manager who negotiated the contract with the client in the first place.

  • with client involvement, value driven

The client is willing to accept change during the course of the project, and the client and vendor manage ROI collaboratively.

There are business managers from both the client and the vendor. One is the PO, and the other is a critical stakeholder who participates in inspection and adaptation every sprint. It is better to have the PO come from the client, as the same person can then own the product vision.

In any case, they work together to decide which changes are accepted, as this affects both sides.

1. Can those changes be accommodated in the existing contract?

If there is little impact on cost, it is fine for the vendor; for the client, the ROI is better.

If there is a slight increase in cost, it may still be fine for the vendor; for the client, the ROI is better.

If there is a slight decrease in cost, the ROI is better for the vendor; for the client, the ROI is still better, as the change is made for more value.

2. Should we initiate a change to the contract (changing the price)?

In the case of increasing the price, the vendor accepts the resulting ROI, and the client gets a better ROI (more Investment, but even more Return).

In the case of decreasing the price, the saved Investment could be split between the client and the vendor.
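
As a rough illustration of "more Investment, but even more Return", here is a small sketch with hypothetical numbers; none of these figures come from the post.

    # Hypothetical numbers only - they illustrate the price-increase case above.
    def roi(ret, invest):
        return ret / invest

    # Before the change: the client pays 300 for value worth 500; the vendor's cost is 200.
    print(roi(500, 300))              # client ROI ~1.67
    print(roi(300, 200))              # vendor ROI 1.50

    # The change costs the vendor an extra 100; the price rises by 120;
    # the value delivered to the client rises by 250.
    print(roi(500 + 250, 300 + 120))  # client ROI ~1.79 - better, despite paying more
    print(roi(300 + 120, 200 + 100))  # vendor ROI 1.40 - lower, but still acceptable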

Win-win is clearly possible when the client and the vendor collaborate.


Time & Material is simple and Agile friendly. With Fixed price, when there is no client involvement, the benefit is limited; when client is willing to involve, even take PO role, better ROI can be achieved for both sides, through collaboration. In fact, fixed price is not fixed any more.


Two Guidelines for Metrics

During a recent Advanced ScrumMaster course, two guidelines for performance metrics emerged.

Continuous Improvement over Performance Evaluation

"Measuring and Managing Performance in Organizations" by Robert D. Austin is a great source for the topic of metrics. He made the distinction in terms of the purpose of using the metrics, either as motivational or as informational.

The motivational purpose ties to Performance Evaluation, while the informational purpose ties to Continuous Improvement. The metric may be the same, but used with different purposes. Take unit test coverage as an example. Management uses this metric as a KPI for the team - the higher, the better. This is motivational. The team itself may find that coverage provides meaningful insight to guide their improvement in unit testing, so they decide to measure it. This is informational.

Austin claims that any metric used for a motivational purpose inevitably leads to measurement dysfunction to some extent. We see this in reality: some teams don't add any checks/asserts in their unit tests, purely for the sake of hitting the measured coverage target.
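
To make that dysfunction concrete, here is a small, hypothetical Python example (pytest style; the function and tests are made up for illustration). The first test executes the code, so coverage goes up, yet it asserts nothing and can never fail; the second shows the informational use, where the team checks the behaviour rather than chasing the number.

    # Hypothetical example of coverage-driven dysfunction (pytest style).
    def apply_discount(price, rate):
        return price * (1 - rate)

    def test_apply_discount_for_coverage():
        # Executes the code, so the lines count as "covered",
        # but nothing is checked - this test can never fail.
        apply_discount(100, 0.5)

    def test_apply_discount_checks_behaviour():
        # The informational use: the team actually verifies the behaviour.
        assert apply_discount(100, 0.5) == 50.0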

This leads to the first guideline: Continuous Improvement over Performance Evaluation.

Collaboration over Accountability

When you cannot yet avoid the motivational use of metrics, the technique of measuring up helps. Traditionally, we measure based on the span of control. With measuring up, we measure based on the span of influence.

Take a couple of examples. Traditionally, we measure developers based on development output, e.g. lines of code, and testers based on testing output, e.g. number of bugs found. By measuring up, we measure the collective output of the cross-functional team. This applies at a broader scale too. Suppose that multiple teams work on different features and those features together form a complete customer scenario. Traditionally, we measure a single team's output, e.g. delivered features. By measuring up, we measure the collective output of multiple teams, e.g. delivered customer scenarios.

Through measuring up, we promote the bigger common goal, and thus more collaboration. Would this sacrifice clear accountability? Probably yes, but does it matter?

Yves Morieux's TED talk "How too many rules at work keep you from getting things done" makes a great point on this question.

In a relay race, who's to blame (sorry, who's accountable :)) when the baton drops? It's not clear. So, in order to have clearer accountability, you may introduce a third person whose sole responsibility is to take the baton from one runner and pass it to the other; then he would be accountable when it fails. This is a system designed for failure - failure with clear accountability. However, would that help win the race?

Moreover, you may try measuring just one level up, thus still taking accountability into consideration to some extent.

This leads to the second guideline: Collaboration over Accountability.


People often ask what metrics we should use in an Agile context. In my humble opinion, these two guidelines are more critical to your success in using metrics effectively.

Note: special thanks to my co-trainer Sun Ni for great insights.


When work and skill mismatch

You have more work in one area than the people in that area can do. You also have people available in the organization, but their skills do not match the work.

Will this happen? Believe me, it will always happen at some point, either at the team level or at the individual level, or both. This is simply due to dynamic work vs. "static" skills (really? perhaps only as a snapshot, and hopefully not for a whole career). When it happens, there are a few strategies to deal with it.

  • Push it

If you are working in a matrix organization and are responsible for one project, the obvious strategy is to grab more people (or resources - are people resources?) with the right skills and allocate them to the area with more work. What if other projects suffer even more? That's none of your business.

If you fail to get more people, another strategy is to push the work through the developers on your project anyway. This inevitably leads to the use of the developer's secret toolbox, which sacrifices quality, gradually creates more and more legacy code, and eventually slows down development - a lot.

  • Accept it

If you take a bigger, longer-term view, you see that the "push it" strategy creates more problems than it solves. Another strategy is to simply accept the fact and pull work based on capacity, acknowledging that the output is constrained by the bottleneck. You either extend the schedule or reduce the scope, or a bit of both.

This is wise in the sense that it respects the system, so the system will not backfire. However, the speed or the value of delivery suffers, and you may not survive.

  • Grow it

This strategy is quite logical: as the bottleneck area constrains the output, you grow capacity in that area. It does solve the delivery problem. However, as the bottleneck area moves, it leads to staffing for the worst-case scenario.

This is not lean. As it is practically hard to shrink in size, some teams and individuals will inevitably work on low-value stuff and create waste. The more important waste is the underutilization of people's potential. People differ greatly from resources because people can learn. Limiting people to only their familiar areas does not show respect for people, which is one of the two pillars of lean thinking.

  • Learn it

This brings us to the most sensible strategy: treat the mismatch as an opportunity to learn and expand skills. This does not mean that specialization is bad. You still utilize specialization for efficiency, while taking advantage of the mismatch for people to learn new areas. This is lean. Long-lived feature teams adopt this strategy, and LeSS explicitly promotes the feature team structure.


  1. Feature team primer
  2. More in the feature team chapter of "Scaling Lean & Agile Development: Thinking and Organizational Tools for Large-Scale Scrum"


What's in Product backlog?

What is appropriate to put into the Product backlog? To answer this question, we first look at different views of Scrum.

Two views of Scrum

One view sees Scrum as a planning framework; then a PBI (Product Backlog Item) is the unit of work planning. The other sees Scrum as a product development framework; then a PBI is the unit of product inspection and adaptation.

These different views lead to different thoughts about which items are appropriate in the Product backlog.

What's in Product backlog?

With the view of Scrum as a planning framework, everything that needs to be done can be put into the Product backlog and planned in Sprints. This makes sense until things not directly related to the product come in, some of which ought to be tasks in the Sprint backlog. Some Product Owners misuse the Product backlog as a tool for execution. When an ex-project manager takes the Product Owner role, this becomes a common pitfall, as they are used to being execution oriented. Over time, there is a tendency to incorporate "how" items into the Product backlog.

With the view of Scrum as a product development framework, only things that help product inspection and adaptation are put into the Product backlog. How valuable is an item to inspect, so as to adapt towards the product vision and goals? Learning about real customer needs is a valid PBI, as it helps move closer to a successful product, assuming that it is validated learning. Prototypes and spikes are valid PBIs, as we build the product iteratively. Tasks are not, as they do not necessarily build working software and/or validated learning. Product Owners and great product managers use the Product backlog as a tool for empirically developing a successful product. My colleague used Sprint goals to drive a new product development.[1] A series of Sprint goals is effectively a set of items in the Product backlog. Over time, there is a tendency to incorporate "why" items into the Product backlog.

Product backlog at scale

Let's look at the two most popular scaling frameworks, SAFe and LeSS. They take different views of Scrum, so their guidance on the Product backlog also differs.

In SAFe, there is no Product backlog, but a team backlog, a program backlog and a portfolio backlog. At the team level, it is defined as pretty much standard Scrum, but the use of a team backlog indicates that SAFe views Scrum as a planning framework. As the team may or may not be a feature team, items in the team backlog may or may not be product features. At the team level, the focus is mainly on execution of planned work.

LeSS keeps the product focus and scales only the teams. LeSS views Scrum as a product development framework, so the Product backlog is still a product backlog. Any PBI delivered by any team can be inspected and then adapted at the product level. As LeSS requires that the majority of teams be feature teams, the focus is mainly on product inspection and adaptation on a Sprint basis.


To me, the Product backlog is the backlog for the product. It is critical to understand what our product is; then we can use the Product backlog as a tool to build it iteratively and incrementally.



Split work and people

How to split work (from big to small) and how to split people (into multiple teams) are two essential and closely related topics in the context of scaling Agile.

In both dimensions (work and people), there are two common splitting strategies: by component or by feature. This leads to the diagram below.

[Figure: Split work and people.jpg]

Let's dive deep into each quadrant.

  • Component work by Component team

This is the traditional way of working, strongly related to waterfall development. A feature is split into components via design by an architecture group, then developed and (component) tested by component teams, and eventually integrated and (system) tested by a system testing group. The problems with this strategy are handoff waste, prolonged cycle time, delayed feedback, lack of flexibility, etc.

  • Feature work by Component team(s)

Splitting big features into small features first, then splitting those into component tasks, is the key idea behind Agile and helps tremendously with speed and flexibility in value delivery. When the related component teams are able to collaboratively deliver the same small feature at the same time, it is quite OK. However, in practice, the component work around the same feature is often out of sync, which delays feedback and value delivery. Moreover, this is a systemic problem and cannot be solved by stronger coordination. The deep assumption behind the component team structure is that people are most efficient working in familiar areas, and that learning other areas is costly and slow.

I put SAFe into this quadrant, not because SAFe demands a component team structure, but because a SAFe implementation tends to retain the current organizational structure, and component team structures are prevalent in organizations.

  • Feature work by Feature team

A feature team takes a feature and splits it into smaller features to deliver. This is great for achieving agility and systematically addresses the problems of component teams. Meanwhile, it has its own challenges, such as coordinating in code, maintaining component integrity, efficient learning, etc. Furthermore, it demands organizational (re)design, which usually means more radical change and is thus perceived as riskier.

I put LeSS into this quadrant because LeSS explicitly requires that the majority of teams be feature teams. Organizational design is vital in LeSS adoption.

  • Component work by Feature team

This is rarely seen in practice. If you already have feature teams, it makes no sense to feed them component work. Rather, a feature team will take a feature as input and split it into component tasks as part of their development.


Splitting work along the feature dimension is a critical element in Agile adoption, while splitting teams along the feature dimension is a critical element in LeSS adoption. This is the quadrant we should strive for.


Contract game

The contract game is the traditional way that Business and R&D work together. At the beginning of the project, Business asks R&D to commit to a big batch of work and holds R&D accountable for execution. The mindset behind it is "plan, then execute".

Clearly this is not the Agile way. One of the Agile values is "Customer collaboration over contract negotiation", which addresses exactly this dysfunction of the contract game. Agile is about adaptive planning, and Scrum is about empirical process control. The mindset behind them is "guided by the goal, inspect & adapt".

  • Is Release planning predictive or adaptive?

Release planning can be a beneficial practice, as it creates the initial understanding of the work. However, in practice, it is often mistaken for upfront, predictive planning. It is even used to provide the input for the contract game. Once you see work split across every sprint in the release in great detail, the focus is primarily on contract negotiation. With the contract-game mindset, you may see that business people only join Release planning, but not the inspection and adaptation on a sprint basis.

So, in Scrum, Release planning creates the initial Product backlog, and you may regard it as the first backlog refinement - from nothing to the initial backlog. Then, every backlog refinement updates the release plan as new requirements emerge; thus, Release planning is continuous.

Backlog Refinement = Continuous Release Planning

The key to breaking the contract game at this level is that business people and the PO work with R&D on a sprint basis, making decisions regarding ROI every sprint.

  • How about Sprint planning?

While more people have realized that the release should be managed empirically, the sprint is still largely managed predictively - we plan in Sprint planning, then we execute the sprint. This is a misconception, because if it were a predictive process, what would be the point of having a Daily Scrum, whose purpose is to manage the sprint empirically with daily inspection and adaptation? With the contract-game mindset, you may see the sprint commitment regarded as certainty, and the PO disappearing after Sprint planning, only to see the team again at the Sprint review.

So, in Scrum, Sprint planning creates the initial sprint plan (a.k.a. the Sprint backlog), and we don't sign up for all tasks in Sprint planning, only those we start the next day. Then, every Daily Scrum updates the sprint plan as new tasks emerge; thus, Sprint planning is continuous too.

Daily Scrum = Continuous Sprint Planning

The key to breaking the contract game at this level is that the PO collaborates with the team during the sprint through just-in-time reviews, defining the sprint goal, re-negotiating the scope when necessary, occasionally joining the Daily Scrum, and simply talking with each other.


While the PO largely manages the release and the team largely manages the sprint, the contract game is harmful at both levels. Are you playing the contract game?


Specialization vs. Responsibility

Specialization is about being good at something, while responsibility is about having a duty to deal with something. Though they are related, it may not be a good idea to couple them together.

Component specialization and Feature responsibility

Conway's law states, "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations"

Once you set up a team with responsibility for a certain component, they naturally specialize in that component. This then becomes a source of inflexibility when the specialization is used to justify tying those teams to those components. Evolving the architecture becomes harder when it is coupled to the organizational structure.

The solution is feature teams with collective code ownership. The responsibility is for features, while component responsibility is shared. Do we still have component specialization? Most likely yes, at least at the individual level. Some people are more knowledgeable and thus better at certain component work. A generalizing specialist starts with one specialty, then expands to others. For example, you are the main developer on component A, and over time you expand to work on component B and develop a specialty there. This may apply at the team level too, since some features touch certain parts of the system and the related components more often than other features do. Specialization just happens. But since the responsibility is for features, the component specialization does not prevent us from picking the most valuable feature items.

In short, have feature responsibility, and let feature guide component specialization.

Feature specialization and Product responsibility

The same dynamic happens with feature specialization. When a team has responsibility for a feature area, e.g. it is the "Payment" team, the team specializes in the relevant domain. This then becomes a source of inflexibility when the specialization is used to justify tying those teams to those feature areas. Evolving the product becomes harder when it is coupled to the organizational structure.

The solution is feature teams with a common Product backlog. Those teams have product responsibility, and they don't have responsibility for a special feature area such as "Payment". Do they still have feature specialization? Most likely yes. Given that a team has developed features in an area, when another high-value feature comes up in the same area, the team will likely pick it. Over time, they specialize in this feature area. This is even a good thing, since it helps develop deep knowledge and skills, but we don't want to label them as the "Payment" team and define their responsibility around this feature area. When that happens, we start to select features because we have a suitable team available, rather than being driven by customer value. Therefore, every team is a product team, without a name tied to a feature area. Instead of a "Payment" team, we have a "Gryffindor" team. As for specialization, we may even want to track it and take advantage of it when possible.

In short, have product responsibility, and let product guide feature specialization.

Requirement area

If you work on large-scale development and adopt LeSS, you will have heard of Requirement areas. Do we want to introduce areas with clear responsibility for product domains, or areas simply as groups of teams, letting specialization in product domains happen and evolve? There is no simple answer. The bottom line is that Requirement areas are dynamic. When you give a Requirement area clear responsibility and associate it with a meaningful area name (e.g. Security), it may help build identity and thus accelerate specialization. On the other hand, it may also lead to Requirement areas standing still forever.


Specialization and responsibility are two different things, and we should not confuse them. Specialization happens, and you may want to track it and take advantage of it, while narrow responsibility - defined around specialization and often labelled in a name - decreases flexibility and leads to local optimization.


Back to fixed scope

I'd like to revisit the rationale behind moving from fixed scope to fixed time in Agile development. By understanding what is essential, we may get back to the thinking of fixed scope.

Fixed scope in traditional development

[Figure: fixed scope.jpg]

In traditional development, we often start by fixing the scope (of the release), then work out how much time and how many people we need. The number of people is the main cost driver in software product development.

Fixed time in Agile development

[Figure: fixed time.jpg]

In Agile development, we often start with fixed time and fixed cost, then work out how much scope we can deliver within those constraints. Fixed time is implemented as an iteration, also called a timebox. When the team is stable, the cost is pretty much fixed. A release consists of multiple iterations, and the number of iterations may or may not be fixed.

The rationale behind moving from fixed scope to fixed time:

  • Scope often has the most flexibility, particularly when you look into the details. For complex product development, we learn the right scope over time, while fixed scope reduces flexibility and makes it difficult to respond to change.
  • Increasing the number of people, although it increases cost, may not increase speed. This is best illustrated by Brooks's law: adding manpower to a late software project makes it later.
  • Time has less flexibility, due to the growing need for short time to market and even occasions when delay is impossible (e.g. Christmas). A timebox helps us prioritize and focus, as well as build development discipline.

Back to fixed scope

If you look at the rationale, it assumes that the fixed scope is big. When the scope is small and minimal, the problems with fixed scope disappear. Therefore, the key problem is a big fixed scope. A timebox is one approach to reducing the fixed scope. Another approach is to limit WIP directly, as done in Kanban. Limiting WIP helps us prioritize and focus too. The remaining advantage of the timebox may be its support for building development discipline.

With further scope optimization, our focus moves towards identifying the meaningful minimum: the MMF (Minimum Marketable Feature). In terms of story mapping, it is the minimum slice rather than a single story. The time to deliver an MMF is not fixed, but is usually short due to the minimal scope. Once we identify an MMF, we develop it and release it, with the discipline of continuous delivery. We are back to the thinking of fixed scope, but a small fixed scope.


The thinking of fixed scope is not the problem. The problem is that the fixed scope is too big, and we solve it by reducing it. We may apply a timebox, which is the thinking behind moving from fixed scope to fixed time. We may also limit WIP directly, which is the Kanban way. Eventually, if we identify one MMF at a time and practice continuous delivery, we are back to fixed scope - but a very small one.


Tighten or loosen roles?

In my recent CSPO course, I got a question about when we should focus on the whole team (team, PO and SM) and when we should highlight the different roles. Some people are confused because, on one hand, we define different roles and responsibilities, and on the other hand, we talk about the whole team. How strictly should those roles be defined and their responsibilities respected? Should we tighten or loosen roles?

MOI and Agile value

That question made me think, and reminded me of the MOI (Motivation, Organization, Information) model from Jerry Weinberg. In MOI, neither too little nor too much leads to effectiveness; the greatest effectiveness is achieved when you strike a balance. This applies to Motivation, Organization and Information, and Organization is the dimension related to the question.


This implies that it depends on your context. You observe what makes your situation less effective: is it due to too loose or too tight an organization? Roles belong to organization. When low effectiveness is caused by a lack of organization, you increase organization, for example by highlighting roles. When it is caused by too much organization, you decrease it, for example by focusing on the whole team.

The first Agile value is "Individuals and interactions over processes and tools", and roles are part of processes. If what you do depends on the situation, does that conflict with valuing individuals and interactions more, which implies loosening roles? No, it does not. In general, we lean towards loosening roles and favoring individuals and interactions; yet in your specific situation, it is still possible that you have leaned too far away from organization, and you would actually benefit from a bit more process, for example by tightening roles a bit.


How do we tighten or loosen roles? CDE (Container, Difference, Exchange) from Glenda Eoyang provides useful insights; in particular, we influence the organization by changing the container. Expanding the container leads to looser roles and more room for self-organization. This includes practices such as collective code ownership, the PO and team collaborating on requirement clarification, creating a whole-product team, etc.