New Things

A big part of my career has involved change. Being a developer from the late 90s to now has meant a TON of change.

Technology change: The code I write, and the tools I write it with, have changed drastically. I started coding professionally with C++ and Visual Basic. I got VERY good with C#, and built a lot with it. I made a fair amount of noise in F#, and now I'm working in Java, Python, TypeScript and Kotlin.

Methodology change: I mean “how we do the code we do.” Test-driven development came up in the development community just as I did. The ubiquity of containers. Agile work patterns.

Personal change: What used to drive me was the feeling of being ‘the guy.’  I LOVED being the guy you needed to make and fix the things. Now, I love being ‘the guy’ who helps people grow.

And now, a job change!

I'll have to update my goals, because at this point, I no longer work for the Credit Union! I now work for a well-known online pet store! I get to work with data scientists trying to make your search and product discovery experience better.

That’s the update… I wish it was longer, and more in-depth, but I’m still learning the space, so I’m in absorb-mode.

Test-Driven Development (TDD) for the Non-Developer

When I was hired at the Credit Union as the Principal Software Developer, one of the major elements of the job was to define how software development would be done there.

One of the things I frequently mention is that after I took over the Principal role at the Credit Union, we became a TDD shop.

What if you're not a developer, and don't know what TDD is about? What if you've only just learned what the acronym TDD stands for?

Test-Driven Development (aka TDD) is a model of writing software defined by a particular practice; specifically, writing automated tests before writing the software itself.

This model directly contrasts with the approach of testing the software after it’s written.

It's a very low-level coding process, sometimes called 'Red / Green / Refactor' (as the tools that execute tests typically indicate failed tests in red and passed tests in green).

An example workflow:

  • I write a test, and it initially fails (RED).
  • Then I write just enough code to make that test pass (GREEN).
  • I write another test that initially fails (RED), make it pass (GREEN), and repeat.
  • Once the feature is complete, I refactor the code while keeping every test green (REFACTOR).
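
To make the loop concrete, here's a minimal sketch of one pass through Red / Green / Refactor, written in Python with pytest. The add_sales_tax function and its 8.7% rate are invented purely for illustration; it's the rhythm that matters, not the math.

```python
# A minimal Red / Green / Refactor sketch (pytest).
# add_sales_tax and the 8.7% rate are made up for illustration.
import pytest


# RED: the test is written first; it fails until add_sales_tax exists and behaves.
def test_add_sales_tax_applies_rate():
    assert add_sales_tax(100.00, rate=0.087) == 108.70


# GREEN: just enough production code to make the test above pass.
def add_sales_tax(amount, rate):
    return round(amount * (1 + rate), 2)


# RED again: the next failing test drives the next bit of behavior.
# This one stays red until add_sales_tax validates its input.
def test_add_sales_tax_rejects_negative_amounts():
    with pytest.raises(ValueError):
        add_sales_tax(-5.00, rate=0.087)
```

Running pytest after each step shows the red or green state the names come from.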

This creates two products: the software itself, and the tests that verify its functionality.

Software is a complex piece of machinery that is easily and regularly changed, and those regular changes can cause unexpected flaws. Those tests, when automated as part of the build itself, help prevent those unintended errors. Updates to dependent systems are made easier, too: software typically talks to a database, or services, or libraries, and those updates can be tested quickly, with minimal human intervention.

They also enforce a level of rigor in the system design: developers build systems that are designed to be tested.


So why does this affect you if you don't write code? As written, this seems like mainly a developer concern.

The thing is, the technique is entirely a developer technique, but the TDD mindset is the real shift.

Start with the tests. Ask yourself: how do you know the software/thing/system will work?

Say you're a Business Analyst or a Product Owner on the fence about a feature. How will you know the feature will work? How will you know it's effective?

You know what the feature is. You know WHY the feature should exist, and that’s normally enough to prioritize it (or not) on the backlog.

Take the TDD approach and start with the tests.

  • How will you know that the feature works?
  • What is the test that will prove the feature is valuable, and useful?
  • What would be a test that would NEGATE that assertion?

Or say you’re the analyst working on the requirements for that feature, and you’ve fleshed out all the details of the fancy new automation your team is working on. How will you KNOW that automation is adding the value that you think it should?

If you’re automating a bad process that shouldn’t even exist in the first place, could you tell?

If you’re saving the company money, how much, and how will you tell?

For a more concrete example, say you're a UX designer. Many UX elements are discretely testable, but what about the UX being "intuitive", or aligned with your UX strategy?

Is the information you’re presenting easier to understand, or harder?

How do you know?

Say you're an executive, a director or VP, or even a C-whatever-O. You have a vision for your organization, and where it's going.

How do you know it’s working? What is the test that drives your decisions?

What is the test that lets you know your vision is communicated well, and that everyone is marching to those orders in the same direction?

Would you know when to turn around if you’re making a mistake? Or would that mistake have to linger a bit for you to know it exists?

Like software, companies and organizations are complex machines that regularly change, and those regular changes can cause unexpected flaws.

These examples have been presented as a series of questions, because that questioning is fundamental to the TDD process I described above. Designing a code snippet, function, feature, automation, UX, or company tests-first helps define what you know, and the information you're trying to get.


To sum up: Define your tests first. Define what you want, and how you'll determine that you've gotten it. Define how you'll measure your effectiveness, and how you'll know when you've achieved it.

Write your tests first.

P.S. For further reading about TDD for the Non-Developer, take a looksee at this post from IExtendable.

The Power of a Great Demo

I went to school at Western Washington University and got a degree in English, with a concentration in Creative Writing. In damn near every writing class, you'd hear the same 3 words: Show, don't tell.

Independent of writing style, a scene inspires and communicates more than a summary ever could. A quick example:

Version 1:
Bob was hurt and furious after Marie’s “I miss you.” text. “2 months and this shit?” he thought.
vs
Version 2:
The phone buzzed. A text. Marie: “I miss you.” Bob’s eyes watered a bit as he tapped the 3 dots, and hit ‘Block.’


I bring "Show, don't Tell" up because, although it's been embedded in my brain for the past ~20 years, a recent beer-night conversation with a friend (Hi Branden!) reminded me of those words. He was talking about the power of a good demo. Having tech to show off is simply more compelling than conversation alone.

If you want to try something new, or convince someone of your idea, remember Branden's advice. Show, don't tell.


A developer I've been coaching finally delivered a great demo on Angular for our weekly developer meeting. She spent nearly 3 months learning Angular. Her demo contained a soup-to-nuts implementation of a site, including tests, test coverage, and a CI / CD pipeline deploying it all the way to an Azure site. She did this demo over the course of 45 minutes.

I was thinking about what sort of conversation it would be if she didn’t have the full demo. Maybe 5 minutes? Maybe she’d have been overruled, or even redirected to another technology.

But a full demo? She had the whole group listening.

Coaching Session – Is Everything Broken, or Is It Just Me?

Today's coaching session revolved around a young developer recently moved into an established team. The developer had experience in Agile teams with specific practices, and this team's practices were VERY different, and almost non-existent. As opposed to a product team with a defined backlog process, this new team is catch-as-catch-can, and largely individualized. The developer was greatly concerned.

“Where is the product owner? Why isn’t she engaged here? Where are the stories? Where is task breakdown?” And on and on, indicating worries great and small. He was half-convinced the entire structure was broken, and needed to be rebuilt from the ground up.

Man oh man, could I empathize with that.


There were a lot of complexities to unpack in this developer's complaint, but I called out his newness to the team. He was joining an extremely stable team, with established norms and structures. It is entirely possible that the Agile mindset and culture of the team needs to change, but for an engineer, the first thing to do is get competent on the platform.

This team is a bunch of individuals working in isolation. That is a problem, and fixing the process needs to happen, for sure. But that isn't the purview of a brand-new-to-the-team engineer. Nobody is going to listen to someone who comes into an organization and immediately vents about how everything is broken.

Process absolutely matters, but for this engineer, engineering has to count first. That’s what I coached him to do. I recommended he get good at the tech FIRST, and then talk about process improvements.

After that conversation ended, I set up a meeting with the product owner. I wanted to see what sort of issues she was seeing, and if there was anything this principal software dev could help with.

Four Best Practices of CI Builds

Continuous integration builds (also called CI builds) are an absolute process game-changer, turning a local 'works on my box' coding and deployment strategy into an automatic and reliable process. Once a team gets a taste of an automated and reliable build process, it is easy to fall into the trap of adding all of the things to the CI build: integration tests, security scans, performance tests, etc., until eventually the CI build has lost the qualities that make it valuable in the first place. The best development teams keep continuous integration builds simple.

Best Practice: With no changes to code or steps, the result of CI builds should be the same.

Responsibilities of the CI Build

A CI build’s job is to collect all developer changes, and create a releasable artifact, ensuring changes from all developers on a project are included in the product, and don’t cause conflicts with each other. A CI build must be fast, so that changes that break a build can be immediately corrected; a quick feedback loop is essential to the CI process. A CI build should perform the following steps:

  1. Collect all source code and changes.
  2. Restore dependencies.
  3. Compile / construct the build.
  4. Run unit tests and build verification tests.
  5. Create the artifact.

Assuming the same code input from step one, and no changes to any of the steps, a CI build’s result should be the same, every time.
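
As a rough illustration of how small that pipeline really is, here's a hedged sketch of the five steps as a plain Python driver script. Everything in it is an assumption for illustration: the git / pip / pytest commands, the src and tests/unit paths, and the zip artifact all stand in for whatever your stack and build server actually use.

```python
"""A minimal, illustrative CI driver: five steps, nothing more.
All commands and paths are placeholders, not a real build system."""
import shutil
import subprocess


def run(cmd):
    # Any failing step fails the whole build immediately (fast feedback).
    print(">>>", " ".join(cmd))
    subprocess.run(cmd, check=True)


def ci_build():
    run(["git", "pull", "--ff-only"])                     # 1. Collect source code and changes
    run(["pip", "install", "-r", "requirements.txt"])     # 2. Restore dependencies
    run(["python", "-m", "compileall", "src"])            # 3. Compile / construct the build
    run(["python", "-m", "pytest", "tests/unit"])         # 4. Unit and build verification tests
    shutil.make_archive("build-artifact", "zip", "src")   # 5. Create the artifact


if __name__ == "__main__":
    ci_build()
```

The point is the shape: five deterministic steps, each failing fast, and nothing else bolted on.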

A new CI build should be executed if the state of any of the five items above is modified. If there's a code change anywhere, step one has changed. If any dependencies are updated, in the framework or libraries the product uses, step two has changed. If a build setting has changed, like going from 32-bit to 64-bit, or from 'debug' mode to 'release' mode, that is step three. Unit test changes typically manifest as code changes, but any changed test settings fall under step four. If the artifact construction process changes at all, that's step five.

Best Practice: Execute a new build on any individual change to those steps.

Tempting Steps You Should Avoid in CI Builds

  • Code Security Checks
  • Integration Tests

Code security checks are tempting items to add to a CI build, but they fail the 'no changes to code or steps' rule. Most code security checks are rule-based in nature, getting updates whenever there is a new known vulnerability; the process is similar to how a malware database updates. With no code change, and no change to steps, but an invisible-to-the-developer modification to the security rules, builds can succeed or fail seemingly at random. Also, the scope of a security fix can be larger than that of a common unit / build verification failure. A security change can be very small, but more commonly it is larger and design-related.

Best practice: Include code security checks on a regular schedule, not directly connected to the CI build, and have those check failures create backlog items the developers can research and correct.

Integration tests are tempting to add to a CI build, but they fail the fast-feedback rule. Any system large enough to need integration tests in the first place will have a fairly complex set of tests to run, which may take quite a while to execute fully. They also tend to rely on a lot of scripting and setup work.

Best practice: Include integration tests as a post-CI step in an automated pipeline separate from the CI build.


To recap: Continuous Integration builds should be quick, automated, and provide a fast feedback loop. They should contain the following steps.

  1. Collect all source code and changes from the repo.
  2. Restore any dependencies.
  3. Compile / construct the build.
  4. Execute Unit Tests and Build Verification tests.
  5. Create the artifact.

Remember the best practices of good CI builds:

  1. With no changes to code or steps, the result of CI builds should be the same.
  2. Execute a new CI Build on any changes to the five steps.
  3. Include code security checks on a regular schedule, not directly connected to the CI build, and have those check failures create backlog items the developers can research and correct.
  4. Include integration tests as a post-CI step in an automated pipeline separate from the CI build.

Happy building!

Coaching Session – Panic at the Deadline

I had a conversation with a new dev lead today. This lead was recently installed into a project with an upcoming deadline that he was genuinely worried about. As we started the conversation, the worry sort of came out of him in piles. When he was finally able to take a breath, I collected from him that 1) he believes the code-base is largely unsalvageable (he only recently joined the team) and 2) the deadline is completely rushing at him (about a week and a half before the first delivery). I had spent a few minutes looking through his project and coached him toward doing the following.

1. Get a CI / CD Pipeline Set Up

The first thing to do here is to get a Continuous Integration and Delivery pipeline set up, so you can quickly and reliably deploy in an automated fashion. The fact is, no software comes out as a 100% bug-free product the first time, and this software will be no exception. By building up the CI pipeline, you'll be able to fix things quickly and consistently. Every commit should create an artifact, and assuming success, that artifact is something you can release. If the software isn't good right now, the product can get good over time.

This was the bulk of the conversation, as it felt to him like I was recommending he throw sh*t at a wall and hope it sticks. The big value comes from the ability to iterate.

2. Code Doesn’t Have to be Perfect.

The fact is, when you have a deadline you have already agreed to, you have to get over the fact that the code is crap and will likely need a rewrite. Code is a tool used to create a product, nothing more, so worrying that it's "not good enough" is contraindicated.

The dirty truth about software is that no code is good enough. Ever. There is always something to do better and cleaner. That is not to disparage the craftsmen out there, but a craftsman does not back out of a commitment. When you make a commitment, you keep it.

3. Keep Your Campground Clean

This one was fairly specific to the project, but in development projects, unmerged branches and mostly-empty directories scattered all over the place do not help anyone, and they make the project look bigger than it is. If you have a REST API and a daemon, you don't need a 'Tests\Python' folder that has nothing in it. Things that look small will feel small.


I will check in with him again after a few days to see what progress he has made on those items. If he can get the CI/CD pipeline down, and know HOW to get the code into production boxes quickly, we can move into things like refactoring into patterns that might smooth some sailing here.

Estimates: Time-Based vs Point-Based, and When to Use Them

There are two general methods for software estimates, time-based and point-based. Here are some tips on when to choose either one.

Time-Based Estimates

Time-based estimates can be tricky. They are the simplest style of estimate, and that simplicity leads engineers to treat them flippantly. Just about every engineer has said "That's easy, it'll take 15 minutes" about something trivial, and then spent a full workday working on it. Time-based estimates get a bad rap from engineers, because they feel punished for missing them. Project managers like them because planning work is simpler.

Time-based estimates work best for near-immediate-priority bug fixes and feature requests, and for costing and budgeting breakdowns.

Point-Based Estimates

Point-based estimates are a common model in Agile shops; whether they are requested as t-shirt sizes or Fibonacci-series values is largely a matter of flavor. They are meant to be quick estimates that give a sizing gauge for the feature without the negativity associated with a missed estimate. Point-based estimates get a bad rap from project managers, because the statistical approach of 'backing into' the time something takes is not accurate enough to get an 'on-time' delivery of anything. Engineers like them because they don't feel as beholden to them.
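
For a rough sense of what 'backing into' time looks like, here's a hedged, purely illustrative calculation. The velocity and backlog numbers are made up, and real teams vary sprint to sprint, which is exactly why the result is imprecise.

```python
# Purely illustrative: deriving a calendar estimate from points via velocity.
# The velocity and backlog numbers below are invented for this example.
velocity_points_per_sprint = 30   # observed average, points per two-week sprint
sprint_length_weeks = 2
backlog_points = 120              # total points estimated for the work

sprints_needed = backlog_points / velocity_points_per_sprint  # 4.0 sprints
weeks_needed = sprints_needed * sprint_length_weeks           # ~8 weeks
print(f"~{sprints_needed:.0f} sprints, roughly {weeks_needed:.0f} weeks")
```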

Point-based estimates work best for high-level estimates of long-term work, or as a quick way to gauge consensus between engineers.


When To Use Them

A project team will generally follow a particular methodology (Agile, SAFe, Waterfall), so you may be held to the preferences and norms of that group, but in general, follow these guidelines for successful estimating.

  • When to use a Points-Based Estimate
    1. The amount of time dedicated to estimating is low.
    2. The priority on the work is variable; your product owner will use your estimate to assist in determining its priority.
    3. Engineers have substantively different approaches to the work, or the work is largely exploratory.
    4. Your dev-to-release cycle is fast.
  • When to use a Time-Based Estimate
    1. The work is your next immediate priority item.
    2. You will use the estimate to directly assess cost.
    3. There is an understood and correct way to do the work, and it will not drastically change if done by one engineer or another.
    4. Your release cycle is slow, or singular.

Two Rules to Estimating Software Features

OK, you're a software developer, and someone's asked you to take a quick look at a feature and give them an estimate of how long it's going to take to develop. For this example, I will refer to that feature as 'SuperFeature', and we will have two example estimating developers: Gina, who models good estimates, and Lisa, who models less-good estimates. Priya is our product owner, and since Priya owns the product, good estimates give her the information required to make intelligent decisions. Well-formed estimates enable her to prioritize work well, and to manage the expectations of customers and stakeholders.

Here are two rules for estimating features successfully.

Rule 1: Your estimate should represent the work as if it were done in a vacuum.

It is counterintuitive to estimate work this way, but creating an estimate as if the work were being done in a vacuum is the best way for your product owner to assign the feature a priority.

Example:

Gina, who estimates in a vacuum – “SuperFeature will take about 2 weeks to do.”

Lisa, who estimates based off of her current workload – “I will need 6-8 weeks to do this.”

In the first case, Priya knows how long the feature will take to develop. She knows that Gina and Lisa are both working on very high priority items, so she gets Liam to work on that feature.

In the second case, Priya knows how long Lisa will take to get it done, but has very little awareness of what priority Lisa is putting on the work. At next week's stand-up, imagine Priya's surprise to learn that Lisa hasn't even started work on it yet!

The lesson: Your product owner owns priority of the features. An estimate should give your product owner the information required to set that priority.

Rule 2: The larger an estimate, the more detail it needs.

If your feature is large, your product owner needs to know and understand why. In order to understand the work that needs to be done, the feature should be broken down into tasks.

Example:

Lisa, who doesn’t break the work down – “SuperFeature will take about 4 months to work on.”

Gina, who realizes the work is complicated and that Priya needs to understand the details – "In all, SuperFeature will take about 4 months. We'll need 3 days to start building the catalytic converter, and then a week to refit and install the Whizzbang…" etc, etc.

In the first case, it's hard to even start the work, or to know how to break it up. Is it 4 months altogether? Can you break the work apart? Can you create multiple work streams?

In the second, Gina gives Priya all the details she needs to break up the work accordingly. She also does so succinctly, so that task ordering and dependencies are clear.


Following the two rules above will help make your estimates more valuable and your relationship with your product owner more beneficial.