I've seen this at the most basic level of contracting. To me, a lot of these problems come down to cost.
I've contracted with a dozen startups and small businesses. Most of these companies will find a young developer charging between $20/hr and $50/hr and have them start coding on their new app/website idea and feel wildly optimistic when they see the first 80% of the project come together.
Then it's time for the edge cases, and because there aren't any senior devs on the project, handling those edge cases introduces bugs. Weeks and months pass, and the project that looked like it was a surefire success descends into development hell.
But the company never fires the developer. The developer actually quits from being overstressed. Then the company goes looking for a new developer, who tells them they'd be happy to build their software for $20/hr-$50/hr. The new developer comes on, looks at the codebase, decides all of it is crap, and recommends they start over.
The new developer finishes 80% of the project in the first week... rinse and repeat.
The company should have hired a developer or firm with a record of success, with actual, running software out in the wild. How they achieved that may or may not line up with the ideas in the OP, but at least the company would know exactly what they're paying for.
I've described this phenomenon to many a manager as "assuming that all progress is linear." It's not; in fact, linear progress is extremely rare in my experience. Usually, a "big bang" of progress happens on a project, whether at the beginning or at the end.
Recently, I was implementing Pivotal Cloud Foundry for a client. Everyone was assuming PCF would deliver incremental and linear amounts of value during implementation. I had to keep telling folks, "No, it doesn't work that way. PaaS is only valuable at the very end of the implementation, when all the technology is ready and the team has been trained."
I think the XP movement was, fundamentally, an attempt to treat coding as a performance piece, where progress IS close to linear because of the order you tackle problems and the way you attack them. You telegraph and you telegraph and in the end if anyone is surprised then it's because they weren't paying attention or were delusional.
It's part of why I maintain that it's much better to be 5-10% over every estimate along the way but maintain your velocity indefinitely. Better for people to be a little disappointed in you now than to drop a giant bombshell on them later. It's not a popular opinion, but it does work if you have a team with the fortitude to stick with it.
Agreed on the linear progress. I’ve been working on implementing various parts of a PaaS/SaaS solution for the past few years. I’d add that the implementation is only as good or mature as your processes. Sometimes the solution and process must mature before you realize the full value.
Agreed - there's a network effect to the various bits working together, and it's not apparent until they're all there.
It may be linear progress by "lines of code" or some other metric, but there's a point (or usually a few) at which people's perspectives on the project change.
I've hit this a couple of times on recent projects, and it's usually tied to some new UI work hitting people. Usually they've seen all the various pieces, but until they can click-click-click through a process end to end, they don't care, or they get frustrated and mentally tune out of that part of the project.
At my company managers have started referring to engineers as "resources". I think it reflects the attitude towards the workers. An engineer is just a number on a spreadsheet and there is no difference between them. If something goes wrong you just hire more resources as cheaply as you can.
When I was a lad there was a bit of advice when interviewing called the Dilbert Index. When they are walking you through the building for your interview, check out how many Dilbert cartoons people have posted in their cubes. I believe the advice went on to say that if you see more than 2 you should be worried, but maybe it was a higher number.
I'm starting to believe there's a similar rule for how many copies of MMM you see on the bookshelf. I've seen as many as 5 stacked together, and that place was whackadoodle.
I use the Dilbert-o-Meter as a measure of how bad the job is once you're actually in it - how many Dilbert cartoons do you witness play out in real life exactly as in the strip. I had this happen in my first job out of university, watching strip after strip happen in front of me without any irony. I lasted three months in that job before I quit.
I now use the same metric to decide when to quit a job and move on. I'm now at a startup, and keeping a close eye on its inevitable transformation...
I'm curious whether it has coincided with things going downhill at your company? It seems like this is a red flag in general, but I've only worked at one place where they referred to people as "resources".
It was a terrible place to work if you cared about what you did at all, because you never got to finish anything. They shuffled you around like cards: different teams, different buildings, different projects, basically rearranging things on the whims of whoever currently had clout, as a way for them to increase that clout.
It's most prevalent in the IT department, and they are a complete disaster: boatloads of architects and managers doing no work themselves but farming everything out to offshore developers. I think in my department it started with the opening of an office in India.
Do you ever get a strange visceral feeling when you see a video of someone about to have a really bad accident? Like your extremities are trying to crawl into your abdomen?
I get that feeling when anyone cheerfully exclaims that we're 90% done.
The pace of making changes grinds to a crawl as the codebase becomes larger and more complex. Eventually the time spent trying to understand the code, figure out what changes can be made safely, and pay down tech debt outweighs the time spent making measurable improvements.
Key things that help:
- Focusing on extensibility/abstraction in areas of the code that will require ongoing changes. This is where you most need experienced and talented engineers.
- Maintaining testing discipline.
- Conservative dependency management, and picking the right battles between DIY and open source.
- Creating periodic blocks of time for engineers to refactor software or architecture.
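The first point - putting a seam around the code that changes most often - can be sketched with a toy example. This is a minimal illustration, not from the thread; all names here (PricingRule, flat_rate, tiered_rate, invoice_total) are hypothetical:

```python
# Sketch: isolate a change-prone policy behind a small seam so the
# stable orchestration code never hard-codes a specific rule.
# All names here are hypothetical illustrations.
from typing import Callable

# The volatile part is just a function type.
PricingRule = Callable[[int], float]

def flat_rate(units: int) -> float:
    # One possible rule: every unit costs the same.
    return units * 2.0

def tiered_rate(units: int) -> float:
    # Another rule: first 10 units at full price, the rest discounted.
    full = min(units, 10)
    rest = max(units - 10, 0)
    return full * 2.0 + rest * 1.5

def invoice_total(units: int, rule: PricingRule) -> float:
    # Stable code: only the injected rule varies over time.
    return round(rule(units), 2)
```

The point is that when the pricing policy inevitably changes again, the change is confined to one small function with tests pinning the rest in place - which is exactly where you want your experienced engineers spending their design effort.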
Fairly early on I worked on a project that was size-constrained due to pretty severe storage restrictions on the target device. Within a year we hit a point where every feature we added required that we first make space for it by winnowing down the rest of the code. Sometimes that meant making the code more sophisticated, but most of the time it was more of an Antoine de Saint-Exupéry sort of expedition. It was a constant battle, but in retrospect it's some of the most rewarding work I've ever done.
After that project I intuited that projects had a maximum complexity after which the wheels fall off, so to keep a project growing past that point you have to remove accidental complexity before adding new intrinsic complexity. For the limited sample size I have direct or indirect access to, this really seems to hold. And in fact someone told me that RPI teaches something very close to this in a required class for one of their Masters programs in CS.
Abstractions do not fix this problem. They put it off until the next performance emergency happens. It's an avoidance tactic, and it kind of paints you into a corner. Ultimately what you're looking for isn't an abstraction (I mean, it is one, but so many bad things are also abstractions that I hate to use the word for this situation). You're looking to model the problems actually being solved. You're looking for the truth.
I like to draw a distinction between abstraction and automation as architecture ideas.
Abstract stuff is "just" more abstract and adds a new concept on top of the old. Abstraction makes the coder feel all grown up and sustains the daydream: "once it's done, building the features will be so much easier."
Automation is the part where we say, "this is repetitive and hard to keep track of - so make the computer do it." Automation creates leverage. And sometimes you need a truly novel abstraction to automate successfully, but in most instances you only need the same old 70's era programming constructs wired up a bit differently. Which does lead to dropping one abstraction in favor of another, as you allude to.
Even if you get into, "well we need to Optimize it, we can't just do the simple and straightforward thing" you can start automating the trivial, repetitive optimizations and get into the business of generating code. And that can be abstract, but it doesn't have to be - you don't have to incorporate a whole suite of compiler checks or a runtime model, the "compiler" can be a FSM that emits source code - but it'll be automated, and you can add the interface and checks to it that are most relevant to the problem, and make the output look human-readable to some degree.
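To make the "emit source code instead of building an abstract runtime" idea concrete, here's a toy sketch: a generator that turns a small declarative spec into plain, readable Python. Everything here (FIELDS, emit_validator, validate) is a hypothetical illustration, not anything from the discussion above:

```python
# Sketch: automate repetitive validation code by generating it,
# rather than layering on an abstract validation framework.
# The output is ordinary source a human could read or hand-edit.
# All names are hypothetical.

FIELDS = [
    ("name", "str"),
    ("age", "int"),
]

def emit_validator(fields):
    # Emit a plain Python function, one isinstance check per field.
    lines = ["def validate(record):"]
    for field, typ in fields:
        lines.append(f"    if not isinstance(record.get({field!r}), {typ}):")
        lines.append("        return False")
    lines.append("    return True")
    return "\n".join(lines)

source = emit_validator(FIELDS)
namespace = {}
exec(source, namespace)  # compile the generated code into a callable
validate = namespace["validate"]
```

The "compiler" here is just a loop over a table, but the same shape scales up: the checks live in one generator, and the output stays boring, inspectable code rather than a runtime model.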
Edit: another benefit of this approach is that your initial coding environment truly is "implementation detail" since it's the starting point and you just add the leverage you need from that point rather than feeling obligated to use the officially branded abstractions.