Trust between PO and team
In my recent CSPO (Certified Scrum Product Owner) course, we had a discussion exercise about how a PO breaks and/or gains trust with the team. I'd like to share some points so that any PO can keep them in mind while working with the team.
How does a PO break the team's trust?
- push team to over-commit
The team pulls the right amount of work into the next sprint. When the PO pushes the team to commit to more, he breaks the team's trust.
- change the sprint scope
The sprint is closed to change, unless the PO abnormally terminates it, which should be very rare. When the PO regularly initiates change inside the sprint, he breaks the team's trust.
- can't clarify the requirement
The PO works with the team to clarify the requirements. When the PO takes requirements second-hand from somewhere else, and cannot clarify questions or give examples, he breaks the team's trust.
Even when the PO can clarify the what, if he cannot state the why, he breaks the team's trust.
- not available during the sprint
The PO works with the team not only during sprint planning and review, but also throughout the sprint. If the PO treats himself as a "customer" and disappears during the sprint, he breaks the team's trust.
- no feedback for delivered features
After product increments are delivered, the PO is supposed to collect feedback from customers and users and share it with the team. If the team never hears back from the PO about the delivered features, the PO breaks the team's trust.
The same happens when the PO selectively shares only the good news, which demonstrates his good work in defining the product, while hiding the bad news and the earlier decisions he feels embarrassed about. When the PO does this and the team finds it out, the PO breaks the team's trust.
- estimate for the team
The team estimates size in planning. When the PO does it for the team, even while saying it is just for their reference, he breaks the team's trust.
- monitor the progress within the sprint
The team self-organizes to deliver the sprint goal, which includes monitoring the progress by themselves. When the PO takes that over and micro-manages the progress within the sprint, he breaks the team's trust.
- interfere with how
The team decides how. If the PO enters the implementation domain and interferes with the team's self-organization on how, he breaks the team's trust.
Likewise, you may do the same exercise from the team's perspective. Here are some of my initial thoughts.
How does the team break the PO's trust?
- deliver partially done work
A waterfall team often delivers partially done work by the end of the sprint. When that happens, the PO has difficulty knowing where we are and loses the flexibility to adapt in the next sprint, so the team breaks the PO's trust.
- fake being done
The team states that the work is done while it is not. Later, the PO finds it out. The team breaks the PO's trust.
- accumulate defects and technical debt
When the team has many production defects after delivery or accumulates technical debt, its velocity on new features gets lower over time. The team breaks the PO's trust.
- unstable velocity
The PO uses velocity for long-term planning. When velocity varies greatly, the PO loses predictability. When the team's velocity does not stabilize after a while, the team breaks the PO's trust.
- not deliver the committed work
When the team consistently delivers the committed work, it gains the PO's trust. While software development has inherent uncertainty, if the team regularly fails to deliver the committed work, it breaks the PO's trust.
On the contrary, if the team overemphasizes safety in delivering its commitment and does not set challenging goals for itself, it also breaks the PO's trust.
- blame PO for requirement defects
Requirement clarification is a collaborative activity. When requirement defects appear, if the team blames the PO and walks away instead of collaboratively seeking improvement, it breaks the PO's trust.
- do not support PO for backlog refinement
The PO gets support from the team in backlog refinement and product discovery. When the team focuses only on the current sprint and leaves the PO alone with the future preparation, it breaks the PO's trust.
I believe you will come up with more ideas that help the PO and the team gain and maintain each other's trust. I hope these lists provide a useful start.
Experiments on Daily Scrum
The purpose of Daily Scrum is for the team to inspect and adapt towards the Sprint goal. The mechanics seem simple, yet in practice the purpose is often not well achieved. Here are some experiments that may help you find better mechanics to achieve it.
- Separate "inspection" and "adaptation" questions
If you look at the 3 defined questions, "what did you do since the last Daily Scrum?" and "what got in your way?" are questions that help the team inspect, while "what will you do until the next Daily Scrum?" is a question that helps the team adapt. In a typical Daily Scrum, every team member answers all 3 questions in one round. I have found that it is more effective, as well as more natural, to do it in two rounds. In the first round, everybody answers the 2 "inspection" questions; then we update the sprint burndown and other information that helps us understand where we are towards the Sprint goal. In the second round, we focus on the "adaptation" question, which is essentially a daily planning or re-planning.
- Story focus rather than people focus
In the traditional format of Daily Scrum, people take turns to report. When the team is big (e.g. 7-8 people), and/or there are many ongoing stories (e.g. more than 3), it is hard to see the big picture by listening to everybody reporting in round robin. The adaptation is rather accidental and individual-oriented. It improves when you create story focus by reporting on stories in priority order: the people working on a story report its status and impediments, and the team as a whole adapts.
- Better focus by limiting WIP
When WIP (the number of ongoing stories) is high, it is hard to focus while inspecting and adapting. Inspection and adaptation become easier when you only look at a small number of stories. You may physically create fewer lanes on your task board, and pull a story in only when there is an empty lane.
- Sprint Burndown variants for better transparency
Sprint Burndown provides transparency for inspection. Traditionally, the burndown is on tasks: every day we re-estimate to get the remaining hours on tasks. Does this provide good transparency to help us understand how far we are towards our sprint goal? Does it incur much cost? You want to achieve the best transparency at minimal cost, and thus teams experiment with various burndown techniques.
- Some teams simplify this by not re-estimating tasks, and instead burning hours down only when a task is done. This reduces the cost of re-estimating, and they find that transparency doesn't suffer; instead, it is actually closer to reality, because the last 20% of a task often takes 80% of the effort.
- Some teams burn down stories. With the support of good engineering practices (continuous integration, ATDD, etc.), they are able to see progress at the story level within the sprint. This provides better transparency in the spirit of "working software is the primary measure of progress".
- Some teams think that story granularity is still too coarse for progress tracking within the sprint. They burn down acceptance tests. While doing ATDD, they track passing acceptance tests as the progress indicator, which is still based on working software.
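To make the first variant concrete, here is a minimal sketch of a done-tasks-only burndown; the `Task` structure and the day numbering are hypothetical, not taken from any specific tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    estimate_hours: int
    done_on_day: Optional[int]  # sprint day the task was finished, None if unfinished

def burndown(tasks: list[Task], sprint_days: int) -> list[int]:
    """Remaining hours at the end of each sprint day.

    Hours are burned only when a task is fully done -- no daily
    re-estimation of partially finished tasks.
    """
    total = sum(t.estimate_hours for t in tasks)
    series = []
    for day in range(1, sprint_days + 1):
        done = sum(t.estimate_hours for t in tasks
                   if t.done_on_day is not None and t.done_on_day <= day)
        series.append(total - done)
    return series

tasks = [Task(8, 1), Task(5, 3), Task(13, None)]
print(burndown(tasks, 5))  # [18, 18, 13, 13, 13]
```

The same shape works for the other two variants: replace hours with story points and `done_on_day` with the day a story met the Definition of Done, or count passing acceptance tests per day.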
The ScrumMaster coaches the team on inspection and adaptation. GROW (Goal-Reality-Options-Will) is a simple coaching model for awareness and responsibility. "Goal" and "Reality" questions are great for helping the team raise awareness while inspecting, while "Options" and "Will" questions are great for helping the team take responsibility while adapting.
Keep the purpose of Daily Scrum in mind, and experiment with different mechanics to better achieve it.
What change do you introduce next?
During Agile adoption and transformation, there's an important question of "what change do you introduce next".
I see a few approaches to introduce the next change.
- Adopt Scrum, which means to introduce the Scrum roles - cross-functional team, PO and SM; the Scrum events - planning, Daily Scrum, review and retrospective; the Scrum artifacts - product backlog and sprint backlog; and, most importantly, delivering a Potentially Shippable Product Increment (PSPI) every sprint. This is a big and radical change. And it is just a starting place. By asking you to deliver a PSPI every sprint, Scrum acts as a mirror, exposing your problems and weaknesses, and you introduce your next change to address them. In short, it is a radical change (adopting Scrum) followed by incremental changes in the spirit of continuous improvement.
- Adopt Kanban, which means to introduce 3-6 practices. The 3 core practices are visualizing the work, limiting WIP and managing flow; the 6 practices add explicit policies, feedback loops and improvement on top of those. This is a less radical change than Scrum, since it does not involve immediate structural change. Kanban is more a change management technique than a development process. By limiting WIP and measuring cycle time, it exposes your problems and weaknesses so that you can improve. Kanban evolved from lean thinking, and continuous improvement is its root. In short, it is a less radical change (adopting Kanban) followed by incremental changes.
- Adopt specific practices based on experience. Scrum and Kanban are explicit methods that group a set of practices for synergy, while the experience-based approach contains more tacit knowledge. In David Hussman's approach to creating a coaching plan, he assesses 4 areas - Team and community, Iterative delivery, Product and planning, Improving and tuning - then, based on experience, suggests the next change or set of changes. The change is practice-oriented rather than method-oriented. For instance, if product and planning are weak, he may suggest adopting practices such as Agile chartering, Personas and Story mapping; if iterative delivery is what restricts them most, continuous integration may be the next change to introduce. This is followed by another assessment after a while, and another round of change, which is a continuous improvement cycle too. In short, it is incremental change after incremental change.
- Changes, big or small, radical or incremental, really form a continuum rather than a dichotomy.
- One difference lies in the first step. Scrum takes a bigger step than Kanban; the experience-based approach is flexible but usually takes smaller steps.
- Another difference lies in looking for the next change. Kanban and Scrum have built-in mechanisms to guide your next change, with Kanban having a stronger focus and direction than Scrum on what to improve next. The experience-based approach involves more tacit knowledge.
Path through Agile fluency
In 2012, James Shore and Diana Larsen created a team's path through Agile fluency.
- 1 star: focus on value by shifting team culture - adopting Scrum/Kanban practices
- 2 star: deliver value by shifting team skills - adopting XP practices
- 3 star: optimize value by shifting organizational structure - adopting product discovery practices such as lean startup
- 4 star: optimize for systems by shifting organizational culture - looking at culture of organizations such as Semco, Valve, etc.
In essence, the Agile fluency model is an experience-based approach. It creates explicit knowledge as a model based on the authors' experience. Comparing it to David's approach embedded in his coaching plan, I see two main differences.
- The Agile fluency model contains more explicit knowledge than David's coaching plan. David only describes the 4 areas he would look into, which is explicit knowledge, but he doesn't explain how to come up with effective suggestions, which involves much tacit knowledge.
- The areas specified by David are in fact quite similar to the areas in the Agile fluency model: Team and community vs. the 1-star area; Iterative delivery vs. the 2-star area; Product and planning vs. the 3-star area. While the Agile fluency model suggests a progression from area to area, David seems to take an approach of progressing in depth across all areas.
In my experience of introducing changes, I have mainly used the approach of starting with Scrum adoption. I realize more and more that many organizations are not ready for this approach, and thus do not move forward effectively. I am gradually taking a more experience-based approach in some contexts.
Considering "Limits to growth", I favor David's approach over the Agile fluency path. However, we would benefit from creating more explicit knowledge around the areas:
- What's ideal in each area?
- What could be progression in those areas?
- Develop some checklists to guide the assessment and suggestions
I'll start on my own, talk to my colleagues at Odd-e, and talk to David about the interest in working together.
Improvement goals for inspection and adaptation
Recently, I attended a training on Kanban. To drive continuous improvement, Kanban has its main, or even sole, focus on shortening lead time. I was impressed by the results from a couple of case studies shared in the training. This triggered me to think further about how improvement is achieved in Scrum.
Before getting to the improvement topic, let's first understand the Scrum foundation a bit more. The Scrum framework is based on empirical process control, which consists of transparency, inspection and adaptation. Transparency is necessary for inspection and adaptation; moreover, goals make inspection and adaptation effective.
There are three explicit inspect-and-adapt points built into the Scrum flow: Daily Scrum, Sprint review and Sprint retrospective. In the CSM course, we relate empirical process control to the Scrum flow. We start with the purpose of those meetings - they are all for inspection and adaptation. Then we dive deep into these questions: what goals guide inspection and adaptation? How frequently do inspection and adaptation happen? Who leads them? How do we create transparency? What information is inspected?
Let's elaborate on what goals guide inspection and adaptation.
- For Daily Scrum, it is the sprint goal that guides inspection and adaptation. We inspect what happened in the last day and where we are towards the sprint goal, then we adapt for the coming day.
- For Sprint review, it is the product goal that guides inspection and adaptation. We inspect the product increment from the last sprint and where we are towards the product goal, then we adapt what to deliver in the next sprint. The product goal depends on the context. If you release across multiple sprints, it would be your release goals. If you release every sprint, or even continuously deliver your product increments, it would be your product vision and roadmap.
For Sprint retrospective, it should be a process goal that guides inspection and adaptation. We inspect what worked well and what to do differently in the last sprint, then we adapt by experimenting with different ways of working in the next sprint. However, the inspection against a goal is usually implicit and weak. I call the process goal an improvement goal, since the retrospective enables continuous improvement. What is the improvement goal? What does perfection look like? Unfortunately, it is defined neither by Scrum nor by most Scrum teams. This lack of improvement goals makes the inspection and adaptation in the sprint retrospective often less effective than it could be.
In practice, the lack of improvement goals leads to lots of shotgun retrospectives: teams randomly pick problems to solve and areas to improve. We would benefit from setting clear improvement goals. I'll share some ideas about what improvement goals could be.
The first idea is to learn from Kanban. Lead time is probably the highest-leverage point. We set the only improvement goal as shortening lead time. Less is more. We start by measuring the current lead time and monitoring its trend, then analyze and act on it for improvement. Kanban also provides specific tools for this, such as limiting WIP, CFDs and control charts. It may seem absolute and single-minded to focus only on lead time, but it has a profound impact and will eventually drive improvement in many areas, such as collaboration, infrastructure and engineering practices.
One challenge in using lead time as an improvement goal in Scrum is that Scrum uses a timebox. In theory, the team may work on all items in parallel and get them all done by the end of the Sprint; the lead time for all items would then equal the sprint length. In reality, this approach involves high risk and is discouraged. As long as the team tends to work on small items in sequence within the sprint, it can still be useful to look at lead time even within a timebox.
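Measuring lead time needs little tooling: record when each item is started and when it is done, then watch the trend sprint by sprint. A minimal sketch, with hypothetical dates:

```python
from datetime import date
from statistics import mean

# Hypothetical start/done dates for backlog items finished in one sprint.
items = [
    (date(2013, 3, 4), date(2013, 3, 6)),
    (date(2013, 3, 5), date(2013, 3, 12)),
    (date(2013, 3, 11), date(2013, 3, 13)),
]

# Lead time per item, in calendar days from start to done.
lead_times = [(done - started).days for started, done in items]
print(lead_times)                   # [2, 7, 2]
print(round(mean(lead_times), 1))   # 3.7
```

Plotting these per-sprint averages (or a control chart of the individual values) shows whether the trend is actually shortening.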
Besides lead time, there are a few alternatives as improvement goals.
- When your definition of Done is still a subset of what is required to be potentially shippable, expanding Done is an effective improvement goal. However, some companies do continuous delivery, which is already beyond Done at the end of the sprint. For them, this is reality rather than an improvement goal.
- Team visioning, to come up with improvement goals for a longer period of time, which then guide the sprint retrospective. A team radar exercise is one way to monitor the improvement progress.
- Velocity, which should be used with much caution. In a way, it is the timebox equivalent of lead time: by improving the flow, lead time shortens and throughput increases. However, since there is an "easier" way to improve velocity - inflating the work size in story points - it is less effective than lead time itself in terms of creating a real improvement focus.
In summary, establishing improvement goals is essential for the effective inspection and adaptation in sprint retrospective. It is worth looking at how Kanban drives improvement by relentlessly focusing on lead time.
When does story mapping help?
Recently, I attended a workshop by David Hussman, and story mapping was one of the topics. Since I had rarely used story mapping, and had never felt it indispensable, it made me think about why.
The story mapping technique was introduced in response to the lack of context in user stories. When you have many small stories, you lose the context and don't see the big picture. Those stories have to be combined to create a complete user experience, also called a scenario or customer journey.
However, in my experience of developing telecom systems, I never encountered this problem. To be honest, we seldom used user stories, because telecom systems are not really user-oriented or user-intensive. We used to think in use cases. When I compare a story map with a use case, the customer journeys in the story map are similar to the main success scenario and extensions in the use case. We simply took those customer journeys as PBIs (Product Backlog Items) without splitting them into step-based user stories.
- The process of writing user stories
When we work on user stories, it is common to brainstorm around user roles. Typically, we derive user stories from user tasks. Then we find that some user tasks have to be combined in order to complete a journey. Therefore, story mapping becomes very useful, even necessary, to understand the context and see the big picture.
- The process of writing use cases
However, if you write use cases, you start with actors. More precisely, you start with primary actors and their goals. There is a use case for each goal, and each variation in achieving the goal becomes the main success scenario or an extension. You don't start with supporting actors; the tasks done by supporting actors are necessary to achieve the goal, so they are pulled into the picture.
- Change the process of writing user stories
If we apply use case thinking, we can modify the process of writing user stories a bit.
Think about primary users/personas, not every user who interacts with the system.
Think about the goals of primary users/personas, not their tasks.
By doing this, you get stories that achieve goals, rather than stories that do tasks. Those stories are scenarios and customer journeys.
- Examples for clarification
For scenarios and customer journeys in products and systems with rich user interaction, it is still beneficial to clarify by example, listing all the tasks/steps the scenario goes through. Those tasks could be initiated by the primary user, but also by supporting users and systems. However, those tasks may not be, or are preferably not, regarded as stories.
Another way to look at this is from story splitting. Take scenario-based stories: if they are too big for delivery, we figure out how to split them. There are many ways to split, with splitting by steps as one way, though not the recommended one, since those steps alone don't achieve the user goal and thus are not really valuable.
So far, we have compared the story map with the use case in terms of thinking process. Traditionally, use cases mainly act as a way to document the system, while user stories and story maps mainly support communication. A story map is more visual and often exists in physical space, while use cases are often written in documents with UML notation. With Agile modeling, it doesn't have to be so, though.
Thinking this through helped me understand story mapping and its implications. Does it help you?
Appropriate Agile Metrics
Much has been said and written about how you can measure being "Agile" in software development. I, for one, had my own share, writing an article about it in an online Agile magazine back in 2011. Which, in retrospect, makes me laugh at how bad it is. Bad in terms of ridiculous complexity, and bad in terms of missing a clear context of what the whole practice is for. All the circus was supposed to be a show by the development team, for the development team, no one else. I presented tools for them to try, to gauge whether they are improving and becoming as agile as they can be. I can see how it could easily be picked up by people outside the development team (yes, I'm looking at management) and misused. I like how someone on an Agile forum pointed out that it's good, but ultimately just a placebo.
Almost a year ago, eBay posted an article regarding how they use a plethora of metrics to appropriately create their performance feedback system. All is well and good until I hit these things with which I do not necessarily agree:
"The peer feedback results not only help the management team get much more insight into each individual's performance, but also help identify and fix team-level issues that have more profound and meaningful impact on our ability to improve our work."
I believe monthly surveys targeting individual members won't give you insights into how to fix team-level issues. Why? It's fixing the whole by fixing the parts (or at least understanding the moving bits). This is a typical reductionist point of view, prevalent in most thorough top-down mandated metrics. People are way more complex than this. The levers and switches to fine-tune team-level performance do not lie solely (if at all) in any individual performance metric.
"People self-organize and share the team's performance. But how about the individual's performance within the team? I'm not supposed to micro-manage each person, but it seems the Scrum team becomes a 'black hole' to me, and I lose sight of each team member's performance behind the 'event horizon'"
So they are not supposed to micro-manage, and yet need to keep track of each team member's performance. A bit of a conflicting statement. If they are not to micro-manage, then what is the individual monitoring for? Perhaps it is an irrevocable company policy, and the person making the statement above is forced to do something that conflicts with his philosophy. And now you have competing philosophies between the company and the personnel. Metrics can easily mask an underlying conflict within the system.
In summary, I would like to think the right metric is one which is contextually correct. That is, people in the right context monitor and fix their situation, regardless of how simplistic or complex their way is. It serves only one, or a very limited set of, goals (e.g., for the team to understand where they are headed, not for performance appraisals). Otherwise, gaming the process is inevitable, with possible unexpected effects. Much care must be taken in deciding at what level these decisions are made. For all we know, we may be blinding ourselves by masking faulty assumptions with the intricacies of the metrics we set.
Singaporeans, wake up! Why software is eating your island
Over the last months, I've been working on this article. It shares stories about the software industry in Singapore and my perspective on why it will have to improve a lot in the future. I've written this article because I frequently have discussions on the subject, but do not actually have any place where I could publish it. Therefore, if you like it, I'd appreciate suggestions on where a good place to publish it would be (other than this blog). You can find the PDF version of the article here.
Title: Singaporeans, wake up! Why software is eating your island
The old-world economies, US and Europe, are losing their advantage in 'old' industries such as consumer electronics and manufacturing, yet many don't seem to be bothered. Why? Even after years of outsourcing, US companies are still the frontrunners in the 'new' industries -- the software-centric industries. But Singapore is missing this boat.
A year ago, Marc Andreessen -- the founder of Netscape, a member of the board at HP and an important venture capitalist -- posted an important and wonderful article in the New York Times "Why Software is Eating the World".1 He describes how software is overtaking traditional businesses. The most frequently-stated example of this is the bankruptcy of Borders Bookstore in 2011 while the online bookstore Amazon.com -- a software company -- is still thriving. Closer to home, the biggest computer bookstore in Funan Digital Mall closed its doors the same year. These days, who buys technical books in a physical store?
In traditional companies, the role of software is changing from supporting the business to becoming the core business. These companies need to re-learn their 'new' core business... or else a new start-up software company will learn their 'old' core business and take over their market and they will end up like Borders or the Bookstore in Funan Mall.
In Singapore, this re-learning will be more intense. Why? In general, Singaporeans do not have an interest in learning or understanding software development (yes, there are exceptions). They'd rather be an analyst, salesman, marketeer, or even better... a manager. These jobs have traditionally been important, well-respected and well-paying but... Singaporeans, wake up! The world is changing. A career in management or sales might not be such a great idea in 2012. Careers in software are of increasing importance and you are missing all of it! If this attitude doesn't change then it will lead to mass unemployment and will seriously hit the Singaporean economy. Am I exaggerating? I don't believe so, please let me clarify.
Poor state of software
Software in Singapore is horrible. I find it unbelievable that companies can get away with badly-designed software of poor quality that isn't functional. Examples?
Singapore Air -- I enjoy ranting about Singapore Air as they provide so much to complain about. In 2011 they 'upgraded' their website. Their new website was so bad that I wasn't able to book a ticket online and I eventually changed airline. At that time, Nicholas Ionides, the spokesman for Singapore Air, reported: "As with any major IT project, we do expect teething problems but we expect to be able to iron out these issues in due course."2 No, Nicholas, projects like your website don't have teething problems; it is just shamefully poorly developed. In May 2012, Singapore Air reported an unexpected loss because of "weak travel demand and soaring jet fuel prices."3 A month later their site was down because it was "currently experiencing technical difficulties."4 Wonder where the weak travel demand comes from?
But perhaps Singapore Air is an exception? Not really...
Singapore Bank #1 5 -- Our company used to have an account at Singapore Bank #1. We do all our banking via eBanking and, after a year, we changed banks as their eBanking doesn't support recurring payments -- a feature I had always assumed all eBanking systems had. The bank did always assign friendly relationship managers to us... but they seemed to miss the point: we don't need relationship managers, we need a proper eBanking system.
Singapore Bank #2 -- A new bank, which does support recurring payments, yeah! But we're switching banks again. Why? Their corporate eBanking cannot report detailed real-time credit card information. And that's not all: their credit card summary statements showed the wrong credit information. The bank even charged us late payment fees, as we had paid the credit card based on the online information -- the wrong information. Their friendly relationship manager told us to use the paper statements instead, which we had always ignored. We seem to have been the first customer to notice this huge and obvious bug, but after six months they still had not fixed it. To make matters worse, after six months they had conveniently forgotten about it. We're not sure which bank to choose next.
Singapore Stock Exchange -- A couple of months ago, I was in Hong Kong coaching at an international investment bank. I mentioned the sorry state of software development in Singapore and they all nodded in agreement and sighed. "You won't believe the Singapore stock exchange," they told me, "it is an absolute disaster."
There is hope as the government is promoting software development! Then again, how do most of the government web-sites look?
The case of the missing Singaporean developers
I mean no disrespect to construction workers, but it seems to me that Singaporeans view software development as construction work -- the dirty work done by cheap labor, the dirty work they don't want to do themselves. Every now and then I chat with computer science students, and they often describe actual programming as a painful phase in their career which they have to go through in order to get promoted to something better. Promoted to project manager or business analyst -- or other jobs which I personally consider to have no future... but more about that later.
There are companies in Singapore who care about the state of their software. These are usually not Singaporean companies. An investment bank in Singapore which we work with trained all their developers in modern agile engineering practices and hardly any of them is Singaporean. Or, a start-up in Singapore of about 40 people with zero Singaporean developers. Recently, I met with a manager of an international embedded systems company in Singapore. He is Singaporean and I mentioned IDA CITREP subsidy for Singaporeans. He laughed and said that he was the only Singaporean in their team. A friend of mine is a CTO of a finance firm and his policy: "Never hire Singaporean developers as they do not know how to develop."
"Software development is unpopular because of the low salary," I'm often told. Therefore companies hire developers from India, Indonesia, Malaysia, Russia, the United Kingdom, the United States, France, Holland... But the UK, US, and France are not 'low-wage' countries, so what is going on here? I asked exactly this question of the CTO of a finance firm, and he replied that good software developers are paid a much higher salary than sales, marketing or project management people. Why? It is easy to find sales people or wannabe managers, but finding good developers is really difficult. Perhaps the low developer salary is a myth?
Talking about myths. Now and then I'm told that all software development in Europe and the US is outsourced to low-wage countries. This amuses me, as I never forgot the discussion I had with the CTO of a Finnish games company, who explained that the only reason for off-shoring work was that he couldn't find developers locally. The salary and work environment for software development are good -- so good, in fact, that according to the Wall Street Journal, software engineer was the best job in the US in 2012.6
Yummy, an island
In January, I was in Shanghai and needed additional heating. A friend and I drove to the nearest electronics mall. I found the perfect heater and told the store assistant that I wanted to buy it. She said it was not for sale. Puzzled, I looked for a less perfect one, but that one was out of stock. I asked my friend what kind of mall this was: why don't they sell things? He answered that most people buy things online, not in malls!
This fundamental shift will also happen in Singapore. Software is becoming part of the core business of organizations and, with that transition, software development is a core skill to have. But in this fast-paced, software-intensive world, software developers must deeply understand the business they are working in and work directly with users and customers to create the best solutions. The days of software developers who only understand technology and who wait for analysts to specify the work are over. Software developers need to broaden their skills, understand their domain, and remove the wasteful handoff of information from analysts.
Related to this shift is the change in management style often called 'Agile'7 where cross-functional teams work directly with users and customers using short iterative feedback cycles. Some management responsibilities, especially project management, are delegated to these self-managing teams so they can respond quickly to the needs of customers. The team members balance between deep specialization and being enough of a generalist to always move the team forward. This delegation to self-managing teams who balance specialization and generalization makes specialized management jobs such as project manager gradually obsolete.
In Singapore, this shift will be tough as it requires cultural change on three different levels. On the organizational level, organizations need to understand software rather than looking at it as a cost centre which is best out-sourced and off-shored. On the management level, managers need to empower people and create inspiring places to work rather than relying on the hierarchy and micro-management control that are unfortunately common in Singaporean companies. And on the national level, we need to create a national culture wherein people choose a software development career rather than considering it beneath them.
Currently Singapore is going in the opposite direction. Universities do not promote engineering careers, and the recent, stiffer criteria on employment passes8 will make it even harder to find great software developers. I definitely hope this will not lead to companies pulling their development out of Singapore. If it does, then that will definitely take a big bite out of Singapore's old-fashioned economy.
2. Reported in Straits Times
3. Reported in Reuters news (9 May)
4. Reported on Singapore Air website
5. Original names are removed for now as that wasn't the point.
Book Review: There's always a duck
"There's Always a Duck" is a LeanPub book written by Elisabeth Hendrickson (a friend of mine). You can find the book here. Since it is a LeanPub book, I can't review it on Amazon.com, so I'll just post my review on my blog. Is this the gradual end of Amazon.com, taken over by internet-age publishing companies? Who knows.
Anyway, There's Always a Duck is a collection of articles and blog posts that Elisabeth has written over the last, well, 15 years or so. The book is named after the first article. It consists of eight named parts, each with about five articles under it. It is around 170 pages with a huge font.
It is hard to summarize the book as it really is a set of independent articles which are sometimes 'accidentally' linked together. The articles are Elisabeth's observations of her experiences and the lessons she has drawn from them. This could be her daughter telling her that she always sees ducks, from which Elisabeth concludes that if you look carefully, you'll see familiar things around you. Or the description of "normal coffee" in India, from which she concludes that even 'simple' terms such as 'normal' depend a lot on who is saying them and in what context.
Most of her later articles tend to be related to Agile development, whereas her earlier articles tend to be more testing focused. Yet all of the articles have some useful lesson in them. The articles are short and easy to read, which makes the book perfect for short moments in which you have nothing better to do, like sitting in the train or waiting in a queue :)
Though I'm probably biased, I did enjoy the book quite a lot. It isn't a wow book that I would recommend to everyone, but it is an enjoyable book full of anecdotes that make you laugh and teach you something. From that perspective, I would recommend it as a book that you can pick up every now and then to read an article. I'd rate the book probably 4 out of 5 stars: better than "ok", yet not a book that I'll be recommending to everyone.
I do recommend getting this book, simply because it is self-published, and authors like Elisabeth ought to get more support via their self-published work :)
Prefer to do DO over DI
Today, accidentally, we invented a new term: Dependency Objection :)
We started using dependency objection in design: object to a dependency before it appears, so that we don't need to inject it later. Thus, prefer to do DO over DI.
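A minimal sketch of the idea (the `Invoice`/`TaxCalculator` names are hypothetical, invented just for this illustration): with DI, we accept the dependency and inject it; with DO, we object to the dependency itself by passing plain data, and there is nothing left to inject.

```python
# With DI: Invoice depends on a TaxCalculator collaborator, so we inject it.
class TaxCalculator:
    def tax(self, amount):
        return amount * 0.07

class Invoice:
    def __init__(self, amount, calculator):  # dependency injected here
        self.amount = amount
        self.calculator = calculator

    def total(self):
        return self.amount + self.calculator.tax(self.amount)

# With DO: object to the dependency during design. Pass the plain value
# (the tax rate) instead of a collaborator object; no injection needed.
def invoice_total(amount, tax_rate=0.07):
    return amount + amount * tax_rate
```

The DO version is also trivially testable: there is no collaborator to mock, because the dependency was never allowed to exist.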
When PSPI does not apply
PSPI (Potentially Shippable Product Increment) is an important concept in the Scrum framework. At the end of each sprint, the team is supposed to deliver a potentially shippable product increment. The focus of PSPI is on the readiness of the delivery.
MVP (Minimum Viable Product) is a term popularized by Lean Startup. It's defined as "that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort". The focus of MVP is on maximized learning with the least effort.
Their focuses are different, and an MVP is often not of production quality. So, the question is: are they compatible?
- PSPI is about transparency
First, let us understand more about PSPI, and why Scrum demands one at the end of every sprint. Scrum is based on empirical process control, where transparency, inspection and adaptation are the three legs of the Scrum foundation. Transparency is absolutely necessary for effective inspection and adaptation, and having a PSPI at the end of every sprint provides a great deal of it.
There is a companion concept called the Definition of Done. Done defines how close we are to being potentially shippable. Unfortunately, many teams are not able to create a PSPI in every sprint, because they are not technically capable of doing so. Thus, based on their current technical capability, they define their Definition of Done as a subset of what's required for being potentially shippable. What's the impact on transparency when we have a weak Done? The weaker the Done, the less transparency we have. We may not be able to elicit full feedback with a partially done product increment, and hidden risks may surface later because of that.
A release burndown is one way to create that transparency. It works by measuring velocity based on Done items and deriving the remaining duration from the size and the velocity. Notice that those are all defined, estimated and measured based on Done. If we have a weak Done, the burndown's projected end point is far from the real shipping date. Hardening sprints are then necessary to make up for the undone work and get the product eventually shippable. Hardening sprints obscure transparency.
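The arithmetic behind this can be sketched in a few lines (the numbers below are illustrative assumptions, not from any real project): duration is derived as remaining size divided by velocity, and a weak Done hides extra work that the burndown never shows.

```python
def sprints_remaining(backlog_size, velocity):
    """Sprints left in the release, assuming velocity is measured
    on truly Done (potentially shippable) items."""
    return -(-backlog_size // velocity)  # ceiling division

remaining = 120  # story points left in the release (assumed)
velocity = 20    # points reaching "Done" per sprint (assumed)

# With a strong Done, the projection is honest:
print(sprints_remaining(remaining, velocity))  # -> 6 sprints

# With a weak Done, suppose each "Done" point carries 25% hidden
# undone work that must later be paid back in hardening sprints:
hidden = int(remaining * 0.25)
print(sprints_remaining(remaining + hidden, velocity))  # -> 8 sprints
```

The gap between the two projections is exactly the transparency lost to a weak Definition of Done: the burndown chart shows six sprints while reality costs eight.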
In short, a PSPI plays a critical role in creating transparency and thus enabling effective inspection and adaptation.
- Doesn't learning create transparency too?
Nevertheless, the focus of MVP seems very different from that of PSPI. It focuses on learning, gaining knowledge and reducing risks. Normally, we don't try to reach production quality in an MVP, because doing so often doesn't yield more learning.
Jeff Patton has popularized the distinction between iterative and incremental. In his terms, incremental implies "add functionality", while iterative implies "build something, then evaluate whether it'll work for us, then make changes to it."
For incremental development, PSPI makes much sense, since it gives us the best transparency. For iterative development, MVP makes more sense. PSPI doesn't help much in creating transparency during product discovery, and the extra work needed to reach a PSPI may be wasteful and slow down the iterating. In fact, during discovery the increase in transparency comes from the learning. The learning enables effective inspection and adaptation, which may mean more learning, or converting the MVP into a PSPI. As we'd like to capture all work in the product backlog in Scrum, a separate backlog item for this elevation is a natural choice. For most product development, the transition from product discovery by iterating to product delivery in increments is common.
Getting back to the Scrum foundation of transparency, inspection and adaptation: transparency is the first key, and PSPI is one way to achieve it. Depending on your context, it may or may not be the most effective way. By understanding why Scrum demands a PSPI (for transparency), and what the alternatives for achieving transparency are, we can incorporate MVP and other techniques while preserving the essence of Scrum.