Product Owner conversion from other roles

After the Scrum Master role, we look at the Product Owner.

First of all, let’s clarify exactly what the role of a Product Owner (PO) is and how it differs from, or resembles, that of a Product Manager (PdM).
Generally in Agile the PO acts as the customer proxy for the team: they express the work to be done to achieve a goal, order and organise it into a Product Backlog, and ensure that it is visible and understood by everyone in the team.

This is the definition reported by the Scrum Guide:

The Product Owner is the sole person responsible for managing the Product Backlog.

The Product Owner is responsible for maximizing the value of the product resulting from work of the Development Team. How this is done may vary widely across organizations, Scrum Teams, and individuals.

Note the emphasis in the last sentence about how this can greatly vary.

Not only are every team, individual, product and organisation different, but this definition is based on a single team, where the PO is responsible for the scope (backlog) of that team’s work. When you scale up and add more teams, the PO will soon become overloaded and you will need to add more POs, perhaps even a “hierarchy” of roles (well, more a collaborative network).
See for example the LeSS framework, where you have one Product Owner who ensures a cohesive vision and several Area Product Owners who each manage an individual area of the product.

This complexity also exists in organisations using the classic PM roles: you can have one or more Product Managers (product), one or more Marketing PdMs (market/business) and one or more Business Analysts (BA, functional details), and they are fluid: the three can be one, two or three roles, depending on the size or complexity of the challenge.


Keeping that in mind, let’s see what the skills and responsibilities of a Product Owner are (in broad terms) compared to those of a Product Manager.


Scrum Master skills conversion from other roles

The roles of Product Owner (PO) and especially Scrum Master (SM) can be quite difficult to define when you are first approaching Agile, whether you are assigning the position internally or searching for someone to fill the role.

Notably, many companies simply copy and paste the job descriptions of classic PMs, or even reassign people from those roles straight into the new Scrum titles.

Nothing is written in stone, and of course every company can tailor its processes, but if you are going to use Scrum as a framework to be agile, you need to be aware of the differences and nuances.

Converting Project Managers (PjM) to Scrum Masters, in particular, is fraught with problems. They are entirely different roles, not even superficially similar, and they require different skills and, notably, different mindsets; as such, any “automatic conversion” will not always work.

Moreover, SM is a role, not a job title. A member of a cross-functional team can have multiple roles: sometimes a developer, sometimes an architect or a tester, and also acting as the SM. It is not easy for a classically trained PjM to immerse themselves in such a team.

Agile shifts responsibility from the project manager to the team, so many organisations ponder how best to keep their project managers useful.

Converting a PjM to a PO might be easier than converting a PjM to an SM, as there is more overlap between the former pair than between the latter.
A PjM can become a PO if they have domain knowledge.

But if an organisation is Agile and still needs a PjM with the title “Agile PjM”, that puts a big question mark over the correctness of its Agile adoption and mindset.

The same reasoning applies to a Business Analyst (BA) or Product Manager (PdM), though those are easier transitions.
Let’s first have a look at the Scrum Master transition and, in the next post, at the Product Owner transition.

Cross-validation

We have seen previously how learning the parameters of a prediction function on the same data would have a perfect score but would fail to predict anything useful on yet-unseen data. This situation is called overfitting. To avoid it, it is common practice to hold out part of the available data as a test set.
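As a minimal sketch of the hold-out idea (assuming scikit-learn is available; the dataset and classifier here are illustrative choices, not prescribed by the post):

```python
# Hold out 40% of the data as a test set, fit on the rest,
# and score on data never seen during training.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.4, random_state=0)

clf = SVC(kernel="linear", C=1).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on the held-out observations
```

Scoring on `X_test` rather than `X_train` is what makes this an estimate of the test error instead of the (optimistic) training error.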

The test error is then the average error that results when predicting the response on a new observation, one that was not used in training the learning method.
In contrast, the training error is calculated by applying the learning method to the observations used in its training.
But the training error rate is often quite different from the test error rate and, in particular, can dramatically underestimate it.

The best solution would be to use a large designated test set, but one is often not available.

Here we look at a class of methods that estimate the test error by holding out a subset of the training observations from the fitting process and then applying the learning method to those held-out observations.
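One such method, k-fold cross-validation, can be sketched as follows (again assuming scikit-learn; the estimator is illustrative):

```python
# 5-fold cross-validation: the data is split into 5 folds, each fold is
# held out once while the model is fitted on the other 4, and the
# average held-out score estimates the test error.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
scores = cross_val_score(SVC(kernel="linear", C=1), X, y, cv=5)
print(scores)                       # one score per held-out fold
print(scores.mean(), scores.std()) # the cross-validated estimate
```

Compared with a single hold-out split, every observation is used for both training and validation, so the estimate wastes less data and is less sensitive to one particular split.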

Introduction to NLTK

Text is everywhere: approximately 80% of all data is estimated to be unstructured, text-rich data (web pages, social networks, search queries, documents, …), and text data is growing fast, by an estimated 2.5 exabytes every day!

We have seen how to do some basic text processing in Python; now we introduce an open-source framework for natural language processing that can further help us work with human languages: NLTK (Natural Language ToolKit).

Tokenise a text

Let’s start with a basic NLP task, the usual tokenisation (splitting a text into tokens or words).

You can follow along with a notebook on GitHub.

The overfitting problem and the bias vs. variance dilemma

We have seen what linear regression is, how to build models and algorithms for estimating the parameters of such models, and how to measure the loss.

Now we see how to assess how well a method should perform in predicting new data, and how to select the best-performing among candidate models.

We will first explore the concept of training and test error, how they vary with model complexity and how they might be utilised to form a valid assessment of predictive performance. This leads directly to an important bias-variance tradeoff, which is fundamental to machine learning.

The concepts described in this post are key to all machine learning problems, well beyond the regression setting.
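The training/test error behaviour can be sketched with a toy experiment (pure NumPy; the true function, noise level and polynomial degrees are illustrative assumptions, not from the post):

```python
# Fit polynomials of increasing degree to noisy samples of a sine curve:
# training error can only fall as model complexity grows, while test
# error eventually rises again -- the signature of overfitting.
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: np.sin(2 * np.pi * x)  # the (unknown) true function

x_train = np.linspace(0, 1, 20)
x_test = np.linspace(0.02, 0.98, 20)
y_train = true_f(x_train) + rng.normal(0, 0.3, x_train.size)
y_test = true_f(x_test) + rng.normal(0, 0.3, x_test.size)

def mse(coeffs, x, y):
    """Mean squared error of the fitted polynomial on (x, y)."""
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

for degree in (1, 3, 10):
    coeffs = np.polyfit(x_train, y_train, degree)
    print(degree, mse(coeffs, x_train, y_train), mse(coeffs, x_test, y_test))
```

Because higher-degree polynomials nest the lower-degree ones, the training error is monotonically non-increasing in the degree, yet past some complexity the fit starts chasing the noise and the test error grows: low bias bought at the price of high variance.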