Case: fixing chronically missed release dates

Problem: The organization has about 50 people, and on the surface everything looks fine: the processes are well tuned, there is CI/CD, there is quality control, and there is a whole set of testing environments. And yet we set ourselves deadlines and keep blowing through them. Every release slips, and each one ships with a lot of minor bugs, even though statistically the total number of defects stays the same.

Analysis: I analyzed the process of rolling the product out to production. At first glance everything looked good: well-defined stages with checkpoints and clear transitions from each one to the next, solid estimates, well-coordinated teams, and actuals within ±10% of the estimates.

Reason: The problem was the length of the timebox. The teams ran sprints of one to three months, with no real argument for that duration beyond "well, why not, there's no rush, the business isn't pushing...". It turned out that by release time each team had accumulated a large pile of functionality that all had to be merged together and tested at once, and a large number of errors arose from the merge itself and from the dependencies between features.
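The snowball here can be made concrete with a toy model: if any pair of features merged in the same release can interact (and potentially conflict), then the integration surface grows roughly quadratically with batch size, so splitting the same amount of work into smaller releases shrinks it dramatically. This is a back-of-the-envelope sketch, not data from the case; all the numbers below are illustrative assumptions.

```python
def pairwise_interactions(features_per_release: int, releases: int) -> int:
    """Total feature pairs that must be reconciled across all releases.

    Assumes (pessimistically) that every pair of features landing in the
    same release can interact and therefore needs checking at merge time.
    """
    pairs_per_release = features_per_release * (features_per_release - 1) // 2
    return pairs_per_release * releases


# Same hypothetical workload over a quarter: 24 features total.
big_bang = pairwise_interactions(24, 1)       # one quarterly release -> 276 pairs
small_batches = pairwise_interactions(4, 6)   # six two-week releases -> 36 pairs

print(big_bang, small_batches)
```

Under this (admittedly crude) assumption, the same quarter's worth of work produces almost an order of magnitude fewer feature interactions to untangle when it ships in two-week batches, which matches what the teams observed at merge time.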

Solution: After analyzing the process, we proposed cutting the timebox down to two weeks, or less where possible, to avoid the snowball at the end of each cycle.

Result: After about two months of adjusting to the new cadence, the effect showed itself: releases began to roll out more smoothly, the number of bugs dropped and with it the time spent on debugging, and the deadlines stopped slipping systematically.