The way we used to deliver data and analytics work was to do what we call the “big guess up front” (BGUF).
Documents for Africa
We would create massive data and analytics strategy documents, which took months to create and get “signed off” at a senior level in the organisation, often by the “Board”. This strategy document then became set in stone, and it could never change, even if every assumption it was based upon was found to be invalid or had changed.
We would define a project plan early with a standard list of tasks and dependencies based on other projects we had estimated, and then guess the effort required to deliver these tasks and the milestone dates we might be able to achieve. These got signed off as the official plan before we ever gathered the details we needed to see if these estimates were reasonable or achievable.
We used to spend months acquiring and documenting requirements, which became a massive wishlist of every piece of data and every tool feature and function the users could think of, as this was their one chance to get this capability delivered by a project.
We would spend months standing at whiteboards, or gathered around screens with drawing tools, defining a single enterprise data warehouse model to rule them all. Or worse, we would leave it to a single data architect to huddle over their screen on their own for weeks, weaving their modelling magic.
The testers would build test plans in infinite detail, with spreadsheets containing the hundreds, if not thousands, of detailed tests they would run to prove that the thing we built matched the things we had written down.
Enter the Change Management Process
And of course, this was all based on the assumption that nothing would change, or that if it did, a “change request” would be raised and the “change process” would ensure all the documents and all the people changed in fluid unison. It never worked.
So at some stage, problems were escalated and a “Change Manager” would be added to the team. The Change Manager was not there to help the users transition through the changes required to successfully adopt this new capability. This unfortunate person was there to “manage” the changes within the project, acting as either a plumber or a police officer directing traffic at a roundabout.
If, somehow, the Change Manager’s role was deemed to involve fixing issues when they were raised, I liken their role to that of a plumber. Anybody who had an issue would handball it to the Change Manager and expect them to resolve it. If there was something messy to deal with that you didn’t want to deal with yourself, you called in the ‘plumber’. Of course, the Change Manager was normally a team of one, with minimal authority and budget to make things happen. Eventually, the Change Manager would become swamped with the raft of issues they now had to deal with, causing a massive blockage.
If the Change Manager’s role was defined as being the channel that managed any change issues but wasn’t responsible for actually resolving them, then I liken the role to that of a police officer directing traffic at a roundabout. If you had an issue you couldn’t (or wouldn’t) resolve, you put it onto the roundabout, and the Change Manager made sure it got off the roundabout at the exit of the person or team who could resolve it. In this scenario, you often also made sure you blocked your exit lane on the roundabout as effectively as possible, to ensure the Change Manager couldn’t offload any new issues to you or your team. You did everything you could to stop these problems being allocated to your team, because your team was flat out trying to deliver the scope in the estimated timeframe. Eventually, the roundabout would become clogged, and there would be a moratorium on any new issues. As issues still eventuate even when a moratorium on managing them exists, there was often a concept of a “parking lot” where they were placed, so they were supposedly visible but with no chance of ever being cleared.
Of course, all this change was bound up with documents, templates and processes that were meant to streamline the management of the change, but in fact often made it too onerous to raise any change issues at all. At some stage, a “Change Administrator” would be added to the team to manage the process and to track and report the current status of each change.
In some organisations, I have seen projects terminated when they ran out of the time and money allocated in the initial BGUF project plan, regardless of whether the core tasks were completed or the outputs had successfully passed full testing.
And the project was called a success.
The project closure left a massive amount of work for the Business as Usual (BAU) team to complete as soon as the environment went live. These BAU teams were not given additional funding or resources to undertake this work; in fact, there was often an expectation that the team headcount would be reduced, based on the efficiencies this new system promised to deliver.
These BAU teams then had to struggle to manage the raft of calls generated by the introduction of the new system, as well as find the time to finish the tasks left over from the now-closed project, all while maintaining their previous day jobs.
It’s not a Team Sport
One of the major outcomes of the big guess up front is a lack of accountability for the team that receives its outputs. They are handed documents and designs that they didn’t create and told to implement them. The context, discussions and insights that went into these artifacts were lost as soon as they were written down and handed over, making the team’s task difficult if not impossible.
Also, the change management process removes their accountability for resolving any issues that they identify. Any issues that are raised are typically not in their sphere of control to fix.
And this is why it fails: it’s a bunch of individuals or teams working in isolation.
I have encountered project plans that were created by Project Managers who had never worked on a data project before.
They would use a previous plan as the basis for the new development with no multiplier for the level of complexity difference between the two projects. For example, in one organisation the Project Manager scoped out the estimate to add a new source system to the current data warehouse.
Unfortunately, the new source system had some large text columns that needed to be extensively parsed to meet the business requirements, the previous source system on which the estimates were based did not. The development team was not involved in the estimates, and the budget was locked in based on the initial plan.
The team was set up to fail, and fail they did.
It’s not the People, it’s the Way we Work
The problem is not that the Change Manager is incapable of doing their role, or that Project Managers are not trying to scope properly, or that the delivery teams don’t want to be accountable for success. The problem is that they are all doing big guesses up front, and this causes a number of issues:
- They create strategies, documents, designs, processes and roles that cannot easily be changed when required;
- The documents are based on a large number of assumptions (guesses) as there is a raft of things they don’t know yet;
- They spend far too much time creating these when that time and resource could be used to deliver value early;
- They communicate via documents not ongoing conversations;
- They start with a culture that change is bad and should be avoided at all cost;
- They somehow believe that creating these big guesses up front reduces risk during the delivery phase, when in fact it increases it.
Agile is a set of principles, practices and patterns which have value given a certain context. It is all about collaborating on the right thing at the right time, providing value continuously, gathering feedback as early as possible and ensuring we can manage change at anytime.
Big guesses up front make this very, very difficult.
As part of building the AgileData product, we have already done much of the work that used to require technology and platform BGUF documents.
We are also constantly developing what we call “Magicbooks”, which provide patterns for known data factories (for example Shopify), known industries (for example Insurance) and known use cases (for example Data Migration). These Magicbooks provide 80% of what is typically required and remove the need to capture BGUF data requirements. You only need to focus on the requirements for the 20% that is unique to you.
And if we don’t have a Magicbook defined for your scenario yet, we have a way of working that enables small guesses to happen. We have an agile-storming process where we can identify the key questions you need answered, and the core business processes that support those questions, in one hour. We can then collect, combine and consume a first cut of the data to answer those initial questions in less than a day.
We then rinse and repeat using these practices and patterns, iteratively presenting the data you need, without spending months doing data or report stock-takes or writing BGUF requirements documents.