Last month I developed a whitepaper and a webinar on data modeling in an Agile environment. These works focused on what we as data modelers can do to ensure that data models (and therefore data modelers) continue to be valued on IT projects. This blog series touches on the key takeaways from these works.
In Scrum, development work is broken into two- to four-week sprints. Each sprint is usually preceded by a sprint planning phase that may include resource assignments, logistics, and the gathering of requirements from the product owner via stories. Other agile methods use different names for these phases, but they are similar in that they break work into concentrated efforts to produce functioning software.
One of the biggest challenges for data modelers is that our methods span both requirements elicitation (gathering requirements and writing them down) *and* physical design. Yet most fragile sprint planning assumes that all requirements gathering was finished before the sprint began. So we end up with a team of developers who need a functional database the same day we get access to the stories.
This leads to frustrated team members. They are being told GO! by the team leads and Scrum masters, and yet we have not delivered them a database. Heck, we are still reading through the stories and writing down the questions we need to ask the product owners just to start our data models. We are likely days or weeks away from generating DDL for the developers. The fault lies not with the data models, but with a faulty understanding of what a data model is.
How to Fix Fragile Data Modeling Workflows
We want to go from the image above to something that gives data modelers the same sort of head start that the business analysts/story writers have in getting the stories written. Some developers say that means we want to do Big Modeling Up Front (BMUF), a disparaging term used to keep modelers and other architects out of the agile process. These people misunderstand that a requirement like "must collect the correct sales tax" is not nearly fine-grained enough to build a model from. Or even code.
Sure, people do build applications that don't legally comply with tax law. But as I said in the first post in this series, keeping the CIO out of jail is a good thing. Not harming customers is another. What a truly agile process needs is data and requirements people working at the planning stage to ensure that the stories and other requirements are properly understood, at least enough to get started. That reduces rework and produces accurate estimates of the effort required to complete a sprint.
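To make that concrete, here is a minimal sketch of why "must collect the correct sales tax" underdetermines the physical design. Every table and column name below is my own illustrative assumption, not something from the whitepaper or the story: even a toy model forces decisions about jurisdictions, effective dates, and rate precision that the story never mentions, and each one is a question for the product owner.

```python
import sqlite3

# A hypothetical, minimal physical design for "must collect the correct
# sales tax". All names here are illustrative assumptions -- the story
# itself says nothing about jurisdictions, effective dates, or rates.
ddl = """
CREATE TABLE sales_tax_rate (
    jurisdiction_code TEXT    NOT NULL,  -- which taxing authority? state? county? city?
    effective_date    TEXT    NOT NULL,  -- rates change over time; the story never says so
    rate              NUMERIC NOT NULL,  -- precision and rounding rules are unstated
    PRIMARY KEY (jurisdiction_code, effective_date)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
conn.execute(
    "INSERT INTO sales_tax_rate VALUES (?, ?, ?)",
    ("US-CA", "2024-01-01", 0.0725),
)

# Even this toy lookup raises questions the story leaves open:
# which jurisdiction applies to a sale, and as of which date?
row = conn.execute(
    """SELECT rate FROM sales_tax_rate
       WHERE jurisdiction_code = ? AND effective_date <= ?
       ORDER BY effective_date DESC LIMIT 1""",
    ("US-CA", "2024-06-15"),
).fetchone()
print(row[0])  # 0.0725
```

The point is not that this design is right; it is that you cannot even write this much without answers the one-line requirement does not contain.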
Any successful agile process treats data requirements as seriously as application requirements. The whitepaper covers the workflows I recommend in more detail, and I discuss them further in the webinar recording.
Do you have agile/fragile workflow observations? Let me know in the comments.