How to Develop Software

By Stephen P. Lepisto

Started: January 10, 2019

Overview

Creating a software application involves more than a few actions, but at the highest level, software development follows these four simple steps:

  1. Implement
  2. Validate
  3. Deploy
  4. Support

In other words, pound out some code, make sure it works, hand it off to a user, then provide support.

However, there are some…subtleties…in that high-level view that really need to be looked at more closely to properly develop software that is usable. For example:

  • What do you do if the application you pounded out so carefully doesn’t work the way the user expected?
  • Does the user even know how to use your application?
  • After fixing a bug, how do you make sure the bug doesn’t come back?
  • Can you determine how the bug was introduced?
  • What do you do if a user finds a bug that you cannot reproduce on your machine because they are using an old version of the application?
  • How does a user even tell you about a bug in your application?
  • Can you introduce a new feature without breaking an existing feature?
  • Did the user ask for the new feature or did you just add it because you thought it might be needed?
  • Can you remove a feature without breaking other features?
  • How do you make sure a new version of the application completely replaces an old version on the user’s system?

This document discusses all phases of how to develop and deploy software so that the above questions can be answered.

Some details will be ruthlessly summarized.

Implementation

This section is ruthlessly summarized.  There are whole books that address the chore of implementing software in excruciating detail or that wax in rhapsodic wonder at the glorious universes that flow from the mighty genius of a programmer extraordinaire.  Depends on your point of view, doesn’t it?

Requirements

When developing a new software application, determine what problem the software is going to solve for the user.  Express it in a single sentence.

When adding a new feature to an existing application, determine what problem the new feature is going to solve.  Express it in a single sentence(1).

This single sentence should prompt questions that eventually lead to specific requirements to be implemented.

In short, know what you are going to implement before you implement it.  Oh, and be specific because it’s very hard to implement vague ideas.

For example…

Example 1: A requirement that is too vague

Requirement: This application will solve world hunger so that no one goes hungry.

Resulting questions that should immediately arise:

  1. What kind of food are we talking about?
  2. Will it support vegetarian?  What about vegan?
  3. What about those people who are lactose-intolerant?
  4. What about food allergies?
  5. How is the food going to be transported to where it is needed?
  6. Who is going to pay for transporting all that food?
  7. Who is going to pay for growing and harvesting that food in the first place?
  8. Will it run on MacOS?
  9. Will version 2 solve all housing shortages?

The immediate questions for Example 1 cover a wide range of topics because the original requirement was so vague.  Not to mention that a single software application cannot solve world hunger.

Now for a better example:

Example 2: A requirement that is focused

Requirement: This application allows a user to create formatted documents that can range in size from a single paragraph to an entire book.

Resulting questions that should immediately arise:

  1. What kind of formatting is supported?
  2. Is page numbering supported?
  3. Are chapter titles and sections supported?
  4. What about table of contents and indexes?
  5. Will it run on Linux?

The immediate questions for Example 2 focus on the expected feature set that can be readily added.

It is generally not necessary to know every last requirement before you start, but you do have to start somewhere.

Design

Now that the initial requirements are known, it’s time to come up with a design that addresses those requirements.

For existing applications, the basic design is already in place; it’s just a matter of following that design when adding a new feature.

For new applications, there are books’ worth of explanations of how to design software. To ruthlessly summarize and therefore keep looking at the situation at a high level, there are a few constraints that strongly dictate the overall design of the application.

  1. Is the application expected to run on more than one operating system?
  2. How will users interact with the program?
    1. Graphical user interface?
    2. Command line interface?
    3. Web browser?
    4. No user interface at all?
  3. How critical is it that the application work as expected?
    1. If it crashes, will the user just be annoyed or will hours of work be lost?
    2. If it crashes, will thousands or even millions of dollars be at stake?
    3. If it crashes, will the patient die?

The answers to these questions have the most impact on how the software is designed and implemented.

Supporting more than one operating system affects how the code is structured so that all interactions with each operating system are isolated from the rest of the program. In a well-structured application, a very small percentage of the code ever deals directly with the operating system, and that small chunk of code can be replaced for different operating systems without changing the structure or logic of the rest of the application.
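
Operating-system isolation can be as small as a single function. As a minimal sketch (the document doesn’t prescribe a language, so Python is used here, and the function name is made up for illustration), every OS-specific call lives in one place and the rest of the application only ever calls that function:

    import os
    import platform
    import subprocess

    def open_with_default_app(path):
        """Open a file with the host operating system's default application.

        This is the only function in the application that knows which
        operating system is running; everything else just calls it.
        """
        system = platform.system()
        if system == "Windows":
            os.startfile(path)                             # Windows-specific API
        elif system == "Darwin":                           # macOS
            subprocess.run(["open", path], check=True)
        else:                                              # Linux and other Unix-likes
            subprocess.run(["xdg-open", path], check=True)

Porting to a new operating system then means touching only this one small function, not the rest of the program.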

How a user interacts with the program dictates the structure of the program. A graphical interface is structured very differently from a command line interface, so those parts of the program that deal with the user should be kept separate from the rest of the application.

(Note: If a program has no user interface, then it is likely being called by other programs through an Application Programming Interface (API). How this API is exposed to other programs can dictate the programming language and even the operating system to be used.)

The criticality of the application’s robustness strongly affects the design and implementation of the software. Not only does the application need to handle bad input and errors gracefully, the application needs to be structured so that it can be easily tested to prove it is robust and fit for purpose.

Keep It Simple

Whether the application is designed around functions or objects or some combination of the two, the key is keeping each of the pieces of code as simple as possible.  It should be possible to look at one piece of the code and not only understand what that piece is for but also whether it is correctly implemented, broadly speaking.  Pieces that are well-defined and separate from each other are also easier to validate.

Group together similar pieces of code and keep different groups of pieces well-separated. Groups that are well-defined and separate from each other are easier to validate.

In the end, a programmer’s job is to manage complexity.  By keeping the design and implementation as simple as possible, the complexity is kept in check and managed.

Keep this question in mind while designing software:

  • How much of the application must a developer keep in their memory in order to make a change to a part of the application without unknowingly breaking some other part?

The more complex the design, the more that has to be memorized to avoid constantly looking up other pieces of code that might interact with the code being developed.

User Interface

There are three kinds of interfaces to a program:

  1. Graphical User Interface (GUI)
  2. Command Line Interface (CLI)
  3. Application Programming Interface (API)

All programs will have at least one of these; some programs will have two or even all three.

Which interface(s) to provide depends on who will use the program and how.

GUI

This interface uses graphical elements to represent options that can be selected by the user with a mouse pointer, touch screen, or keyboard shortcut.  The design of a GUI-based program should separate the GUI from the underlying code as much as possible so the GUI can be changed without affecting the main functionality of the program.  This is especially important if the program is to be moved to different operating systems, where the GUI elements tend to differ.

Note: A web browser-based application counts as a GUI application; the advantage is the browser itself mostly hides the differences of how GUIs are presented in the operating system, allowing for relatively seamless support across multiple operating systems. The tricky part is supporting all desired browsers because not all browsers are created equal.

CLI

This interface is exposed from a command line or terminal and consists of one or more arguments passed to the program when the program is launched. These arguments are expressed in terms of switches, options, and simple arguments.  There should always be an option to display all the possible arguments along with a description of each argument (typically --help or /?).

CLI-based applications are easy to run automatically and as such are easily testable in automated test environments.
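
As a minimal sketch (illustrative names only; Python is used since the document doesn’t prescribe a language), here is a CLI with one argument and one option; the standard argparse library generates the --help output automatically:

    import argparse

    def main():
        parser = argparse.ArgumentParser(
            description="Count the lines in a text file.")  # shown by --help
        parser.add_argument("filename", help="path of the file to examine")
        parser.add_argument("--verbose", action="store_true",
                            help="print extra detail while working")
        args = parser.parse_args()

        with open(args.filename) as f:
            count = sum(1 for _ in f)
        if args.verbose:
            print(f"Examined {args.filename}")
        print(count)

    if __name__ == "__main__":
        main()

Because the program runs, does its work, and exits with no interaction, a test harness can invoke it with known arguments and check the output.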

API

This interface allows other programs to use the application.  The API consists of a set of functions and/or classes defined in some kind of header file.  Another program can include the header file so it can call into the API.
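
In a compiled language the header file declares the functions; in an interpreted language such as Python, the module itself plays that role. A minimal sketch (the module and function names are made up for illustration):

    # temperature.py -- a tiny API exposed as a module that other programs import.

    def celsius_to_fahrenheit(celsius):
        """Convert a temperature from Celsius to Fahrenheit."""
        return celsius * 9.0 / 5.0 + 32.0

    # Another program calls into the API like this:
    #   import temperature
    #   print(temperature.celsius_to_fahrenheit(100))   # prints 212.0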

The best APIs are binary-compatible APIs called Application Binary Interfaces or ABIs. An ABI is an interface that changes in a very strict way so as to maintain backward compatibility with older programs — at a binary level; no recompiling needed.  ABIs are typically documented in an external document usually called a reference manual.

Services, daemons, and drivers are all API-based applications but are typically run in the context of the operating system, which facilitates exposing the API of the application to other software.

Services can also run on web servers, with a Web-based API.

If the API is network-based (which includes Web-based APIs), that implies the program is running on a different computer (the server) from the program(s) calling through the API. And that means the robustness of the server and its operating system must be taken into account, over and above the robustness of the program exposing the API.

Error Handling

Errors in an application are inevitable.  They come from bad input from the user, misconfiguration of the application, or bad input from another application or from hardware (possibly because that other application or hardware is not configured correctly or is simply not working).  Plus a thousand other sources of errors.

Errors can also come from bugs in the application.

The application’s response to errors provides a strong indicator to the user as to how robust the application is.  Can the application recover gracefully from even the most egregious errors?

For best results, error handling must be built into the design of the application from the beginning.  The result of handling an error is almost always a message to the user informing them of what has happened with sufficient detail so the user can correct the situation.  The application then needs to get back to a known good and stable state.

Internally, the application can use error codes or exceptions, but when an error is presented to the user, the error message must be meaningful and actionable.  An error message should specify what went wrong, the context in which the error occurred, and any pertinent information the user might need to correct the situation.  The context is also what the application developer needs when they get involved to troubleshoot the situation.
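
A minimal sketch in Python (the file format, function name, and messages are illustrative only) of turning an internal exception into a meaningful, actionable message while returning the application to a known good state:

    import json

    def load_settings(path):
        """Load application settings, returning built-in defaults on error."""
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:
            # What went wrong, the context, and what the user can do about it.
            print(f"Settings file '{path}' was not found; using default settings. "
                  "Check the path and restart the application to use your settings.")
        except json.JSONDecodeError as err:
            print(f"Settings file '{path}' is not valid JSON (line {err.lineno}): "
                  f"{err.msg}; using default settings.")
        return {}  # a known good, stable state: the built-in defaults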

Good error handling is a feature of the application, not an afterthought.

It is not uncommon for error handling to take up more than half the code in a robust application.

Logging

Logging is extremely useful for debugging failing applications from a distance.  Logging is also extremely useful for certain intrepid users who want to follow the flow of an application so they might be able to fix a failing application. Those intrepid users would include validation engineers who are testing the application.

Logging provides a history of what has been going on in the application since the application started.  By following the various markers in the log, a developer can, after the application has ended, follow the flow through the program, providing the ability to retrace the steps the application took to get to a particular point.

The more complex the application, the more useful logging becomes for debugging hard-to-reproduce problems.  However, logging must be designed into the application from the beginning.  To create useful and robust logging, logs should be used for all initial debugging efforts by the developer before stepping into a debugger.  That way, the logs will eventually contain sufficient information about the flow through the application so when a user sends a log to the developer, the developer has a very good chance of debugging the problem just from the logs alone.

From a user’s perspective, logging is an annoyance that slows the application down or is too hard to enable.  However, when the application is failing for an unknown reason (possibly because error handling is not being explicit enough), the user can turn to logging to figure out what’s going wrong.

If logging is available, logging:

  1. Must be easily turned on and off
  2. Should have different levels of detail, although users typically turn everything on, capture the log, then turn everything off (for performance reasons).

    Having a choice of verbosity levels does give users a way to balance always having logging enabled against maintaining the performance of the application.

  3. Should provide a sufficient level of detail that most debugging can be done with just the logs alone.  All errors must be logged with full context (error codes, messages, stack traces).

One place to be careful with logging is the exposure of proprietary or secret information.  For most debugging efforts using logs, proprietary/secret information does not need to be logged since the information itself usually isn’t at fault.  Program flow is not really a secret, since any decently-skilled developer can attach a debugger to the program and follow the execution.
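
A minimal sketch using Python’s standard logging module (the module, file, and message names are illustrative): one setting controls the level of detail, so logging can be turned up for a debugging session and back down afterward, and only file names and counts are logged, never the contents of the records themselves:

    import logging

    # One setting controls how much detail is captured (DEBUG, INFO, WARNING, ...).
    logging.basicConfig(
        filename="app.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s")

    log = logging.getLogger("app.importer")

    def import_records(path):
        log.info("Importing records from %s", path)
        try:
            with open(path) as f:
                records = f.readlines()
        except OSError:
            # Errors are logged with full context, including the stack trace.
            log.exception("Could not read %s", path)
            return []
        log.debug("Read %d records", len(records))  # only captured at DEBUG level
        return records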

Development Languages

Which language to use is affected by what the developers are familiar with, the nature of the application to create, and whether language preferences already exist in the current development environment.  Also, medium to large projects (3 million to 30+ million lines of code) frequently use more than one language.

There are thousands of different programming languages, and more are added each year.  However, only a handful have proven useful enough, across many problems and many decades, to still be in wide use today while also being powerful enough to create whole applications.  Note: Java, JavaScript, and ECMAScript are the youngest languages in the table below, at a bit over 25 years old(2).

In other words, don’t select a programming language because it’s the latest whiz-bang shiny toy to appear(3). Select a language that is currently supported, has a wealth of information about how to best use it, is supported across many or most operating systems, and has a large number of developers available who can make use of the language.

Language       Year Introduced
BASIC          1963
C              1973
C++            1985
C#             2000
COBOL          1960
ECMAScript     1997
Fortran        1957
Java           1995
JavaScript     1995
Lisp           1960
Python         1991

Version Control

Software changes over time; it’s the nature of the beast.  If a change is made to an application that causes the application to suddenly fail, how can this be corrected?

One obvious way is to diagnose the problem and fix it, thereby adding another change to the application.

Another way is to find the person who made the change and ask them for the details about the change.

But what if that person doesn’t remember because the change was made months ago?  Or perhaps the person has left the team?  How do you even determine who made the change in the first place?

This is where an external memory is introduced: a record of all changes made to the software over its lifetime.  That external memory is called version control.

Version control (also referred to as source control) is a database or repository of all changes made to an application over time.  As changes are made, the changes are submitted to the version control repository, along with information about when the change was made, who made the change, and what the change was for. In this way, a history is built up over time of the changes made.

With version control, it is possible to look at the history of a source file and determine when certain changes were made and who made the changes.  Version control can show the exact change at each step in the history of the file by comparing the file before the change to the file after the change.  Now it becomes very easy to determine what the change was and, from there, determine how best to fix any problems introduced by the change.

Anything that has to do with the code should be committed to version control, from code to documents to custom tools used with the code or program.  A developer should be able to check out the code from the repository and build it with minimal effort(4).

There are a few rules that should be followed when using version control:

  1. Keep individual commits focused on only one change
    • It becomes very difficult to back out a single change if it is mixed with other changes that have nothing to do with the problematic change.
    • It’s okay for multiple files to be changed in a single commit so long as those changes were made for the same reason.
  2. The description of the commit should describe what the change is and why in a couple of sentences.
    • Someone (such as yourself) can look at the description of each commit and quickly decide if that commit has anything to do with what that someone is looking for.
      Otherwise, that someone has to go into each commit and examine the actual file changes to determine what was changed.  That still won’t explain why the change was made.
    • Requires trust that others are honest and accurate in their commit messages.
  3. Ensure that each commit keeps the code healthy and functional
    • It is very useful for continuous integration systems to be able to build the head of the repository at any time and have a working application.
    • This is where pre-commit validation comes in to prevent the code from being broken by changes that are to be committed.

Validation

You have completed your new application (or new feature on an existing application).  You believe everything is working and you are ready to send it to your users.

But do you really know everything is working?  Especially if you have just added a thousand lines of code in five modules across a large application that thousands of customers depend on for supporting multi-million dollar projects?

Are you really, really sure?

There are many different kinds of tests for software, but they can all be broken down into three broad categories based on scope: Unit, Integration, and System.

  1. Unit testing verifies each function or class method works as expected
  2. Integration testing verifies functions or class methods can call other functions or class methods and work as expected
  3. System testing verifies the application as a whole behaves in the way a user would expect

Regression testing is a category of tests that verifies a bug does not re-occur. Regression tests can be implemented as unit, integration or system tests, depending on the nature of the bug and how it was found.

Validation must be built into the application’s design.  Unit and integration tests find problems in code much earlier than system tests; however, the application must be designed to allow for easy unit and integration testing.

For example, a function that does six different things based on the state of the application is hard to unit test.  However, if each of those six things is made into its own function that takes parameters to do its work, then those six new functions can be individually unit-tested without requiring any existing application state.  The original function then just calls the six new functions, passing in the application state to each one.  If the original function is passed the application state rather than reading it from globals, then an integration test can be written to exercise the original function as well.

Not only does this allow the functions to be individually tested, it allows the functions to be potentially used in other ways in the application, thereby reducing redundancies.
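
A minimal sketch of that idea in Python (the functions and the “state” are made up for illustration): each small piece takes what it needs as parameters, so it can be unit-tested on its own, and the original function becomes a thin coordinator that an integration test can exercise:

    from collections import namedtuple

    AppState = namedtuple("AppState", ["discount_rate", "tax_rate"])
    Item = namedtuple("Item", ["price"])

    def apply_discount(price, discount_rate):
        """One small, state-free piece: trivial to unit test."""
        return price * (1.0 - discount_rate)

    def add_sales_tax(price, tax_rate):
        """Another small, state-free piece: trivial to unit test."""
        return price * (1.0 + tax_rate)

    def final_price(state, item):
        """The original function, reduced to coordinating the pieces.

        Because the application state is passed in rather than read from
        globals, an integration test can call this with a test state.
        """
        price = apply_discount(item.price, state.discount_rate)
        return add_sales_tax(price, state.tax_rate)

    # A unit test needs nothing but the one function under test:
    assert apply_discount(100.0, 0.25) == 75.0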

Validation tests should be set up to run automatically as much as possible, or at least arranged so that all tests of a particular category can be run with a single command.  For example, unit tests should be run very frequently, preferably after each build.  That way, bugs that were just introduced can be found immediately.  When tests are arranged to be run with a single command, the tests can be easily automated, making continuous integration feasible.

Test-Driven Development

A typical approach for writing code is to write a function, then write a test to prove the function works, then move on.  This is fine if you remember to write the test.  Too often you won’t, and you will find yourself with a dozen new functions and no tests.  Now you’re faced with the tedious prospect of writing tests for those functions and occasionally wondering why you wrote a function the way you did.

Enter the practice of test-driven development(5).  Here is how to implement test-driven development:

For a new function:

  1. Write a test to call the expected function
  2. Write the new function as an empty function
  3. Build the code and tests
  4. Run the tests to show the new test is passing (it had better pass since the function is still empty and the test does nothing more than call it)
  5. The basic test framework for the function is now in place

For an existing function (or the new function that was just added):

  1. Write a new test to prove the function contains the functionality that will be added
  2. Build the code and tests
  3. Run the test to prove the new test fails because the function doesn’t have the new functionality
  4. Modify the function just enough(10) to get the new test to pass
  5. Build the code and tests
  6. Run the tests to prove the new test passes
  7. Refactor the code to eliminate redundancies (if any)
  8. Where necessary, remove tests that have been replaced by later tests
  9. Repeat steps 1 through 8 until the new functionality is complete
  10. Repeat steps 1 through 9 with new functionality
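
A minimal sketch of one turn of that cycle, using Python’s built-in unittest module (the function and test names are made up for illustration, and the test and the code are shown in one block for brevity; in practice they usually live in separate files):

    import unittest

    def slugify(text):
        # Step 4's "just enough" implementation, added only after the test
        # below was written and seen to fail against the original empty stub.
        return "-".join(text.lower().split())

    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_joins_words(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

    if __name__ == "__main__":
        unittest.main()   # run the tests by executing this file with Python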

With test-driven development, the tests are always there because they are written first.  And because the tests are written first, the function must be written in such a way as to be testable.  As functionality is added to the function, new tests are added to prove the functionality is working.  A suite of tests is built up in this way to catch any changes in behavior in the program as soon as the change is made.  These tests will live for as long as the function exists and are constantly run with all other tests.

With test-driven development, progress in implementing new functionality appears to be slower because tests are being written first.  However, that is not an accurate depiction of progress.  When tests are created after the code is implemented, very often the time and effort to create the tests is not counted as implementation time.  But if the time to create the tests is added in, test-driven development tends to be faster overall since bugs introduced during implementation are found much sooner.  When tests are developed after implementation, a lot of time is spent tracking down bugs found by the tests, effort that is made more complicated by the fact that there are many functions that could be the source of each bug.

For example, when writing tests after the implementation:

Programmer: “I implemented the new feature!  It has 24 functions and it only took me 8 days to complete!”

Quality Assurance Engineer: “It took me 7 days to write and debug the tests.  I found bugs in 22 of the 24 functions, which the developer spent 3 days fixing, taking away from development on another feature.”

Total time to develop and test one new feature: 15 days and the programmer is now 3 days behind schedule on the next feature.

Now an example where test-driven development was used:

Programmer: “I implemented the new feature with test-driven development!  It also has 24 functions and it took me 11 days to complete!”

Quality Assurance Engineer: “I spent a couple of hours reviewing and running the tests the programmer created.  Everything looks good.”

Total time to develop and test one new feature: 11 days + 2 hours and the programmer is on schedule for the next feature(6).

Continuous Integration

Continuous integration (CI) is a methodology where small changes are committed regularly to a version control repository and the repository is automatically built and tested.  If the build or tests fail, the developer is notified to take action.  Some CI systems are set up to automatically remove the commit if it causes a build or test to fail.  Still other approaches use a pre-commit build and test and the commit is prevented if the build or test fails (or, more likely, the commit is automatically submitted if the build and tests succeed).

The intent of CI is to always have working code that can be deployed at any time.

For CI to work, a version control repository is required from which to pull the latest code for the application.  This code and its related tests are built and the unit/integration tests are run.  If the build and the tests succeed, the installation package is built and system tests are run, with the result being an installation package tested and ready to go.

In more detail, a CI framework is set up on a server.  The CI framework watches for changes to the version control repository.  When a change is seen, the CI framework triggers the following (general) sequence of actions:

  1. Check out the latest code from version control repository onto a build machine
  2. Build the code and tests on the build machine
  3. If the build fails then report the failure and stop the actions
  4. Otherwise, run the unit tests on the built code
  5. If the unit tests fail then report the failure and stop the actions
  6. Otherwise, run integration tests on the built code
  7. If the integration tests fail then report the failure and stop the actions
  8. Create the installation package
  9. If the installation package creation fails then report the error and stop the actions
  10. Deploy the installation package to a test machine
  11. Get the system tests from the version control repository onto the test machine
  12. Run the system tests on the test machine
  13. If the system tests fail then report the error and stop the actions
  14. Otherwise, report success.

Of course there can be variations on this.  For example, if the code doesn’t need to be built, jump directly to the tests.
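
A minimal sketch in Python of the fail-fast shape of that sequence (the build and test commands are placeholders for illustration; a real CI framework supplies its own steps and machines):

    import subprocess
    import sys

    # Placeholder commands -- a real pipeline substitutes its own.
    STEPS = [
        ("build",             ["make", "all"]),
        ("unit tests",        ["make", "unit-test"]),
        ("integration tests", ["make", "integration-test"]),
        ("package",           ["make", "package"]),
        ("system tests",      ["make", "system-test"]),
    ]

    def run_pipeline():
        for name, command in STEPS:
            result = subprocess.run(command)
            if result.returncode != 0:
                print(f"FAILED at step: {name}")   # report the failure...
                sys.exit(1)                        # ...and stop the actions
        print("Success: installation package is built, tested, and ready to go.")

    if __name__ == "__main__":
        run_pipeline()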

For a pre-commit test, the flow looks almost exactly the same except the developer manually triggers the pre-commit check, and the workflow introduces one action after action 1:

  1. Check out the latest code from version control repository onto a build machine
  2. Get the local changes from the developer’s machine and overlay on top of the checked out code
  3. Build the (modified) code and tests on the build machine
  4. etc.

If this sequence succeeds, the commit can continue; otherwise the commit is prevented.

Continuous Delivery

Continuous Delivery or Continuous Deployment (CD) takes the results of Continuous Integration and automatically delivers the resulting package to where the user can access it, such as copying it to a network share or making it available for download from a web site.  In some cases, CD might mean automatically installing an updated web service on a server.

Combining CI and CD means changes made by a developer are automatically incorporated and made available as soon as all building and testing are completed.  CD makes the most sense when the resulting changes need to go “live” as soon as possible, typically as a service on a web server.

For applications that are updated and released on a regular schedule, CD is not needed.  Instead, the mechanism used to deliver the installation package can be automated as a script and then triggered every so often to deliver the new version as taken from the CI process.
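
A minimal sketch of such a delivery script in Python (the paths are placeholders for illustration): it copies the installer produced by CI to a location users can reach, and it can be run by hand or on a schedule:

    import shutil
    from pathlib import Path

    # Placeholder locations -- substitute the real CI output and release share.
    CI_OUTPUT = Path("ci-output/myapp-installer.exe")
    RELEASE_SHARE = Path("//fileserver/releases")

    def deliver():
        destination = RELEASE_SHARE / CI_OUTPUT.name
        shutil.copy2(CI_OUTPUT, destination)   # copy the package, keeping timestamps
        print(f"Delivered {CI_OUTPUT.name} to {RELEASE_SHARE}")

    if __name__ == "__main__":
        deliver()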

Deployment

Every application has at least one user; otherwise, there’s no reason to write the application in the first place.  To get the application into the hands of the user requires the application to be deployed.

There are two aspects of deployment that need to be considered:

  1. How is the application made available for use on a particular computer (that is, installed)?
  2. Where is the application obtained from?

An application is typically “installed” on a computer so that it is available for use on that computer.  An application might require other applications to be installed first.  Updates to an application can appear regularly and need to be applied to the application already “installed”.  Finally, removing an application from a computer should be straightforward and not break anything.

Ideally, the installation and removal of a program can be done automatically from a command line.  This allows the install process to be automated in a continuous integration environment to run tests on the application.  It also allows the application to be installed by a system manager: software that installs a whole bunch of applications at the same time.

There are several forms of deployment to consider, as detailed in the following sections.

Copy Install

A Copy-Install means copying the application to a directory on the drive of the computer that will run the application.  If that directory is listed in the search paths for finding programs, the application can be easily started from a command line.  Or a shortcut icon can be manually created on the desktop of the computer (assuming the computer is using a graphical desktop interface) so the application can be started by double-clicking the shortcut with a mouse.

To update the application, just download a new version and copy it over the top of the existing version or “install” it side-by-side with an existing application (just copy to a different directory).

To uninstall the copy-installed application, just delete the directory, remove the path from the search paths, and delete the desktop icon, if created.

Self-Running Installer

An installer is typically a self-contained file that, when executed, causes the application to be installed on the computer.  The installer takes care of creating the directory, copying the application to that directory, updating the search paths (if necessary), and creating the desktop icon (if using).  Any dependencies the application might have can be bundled in the installer and installed automatically as necessary before the application is installed.

The installer can also automatically handle upgrading an existing installation of the application.

The installer can also register the application with the operating system so that it can be uninstalled with a menu option presented by the operating system. The uninstall process removes all traces of the application from the computer.

Note: A self-running installer tends to be for the Windows operating system.

Installation Package

A third approach is to provide the application in an install package. This package is put in a known location and a tool known as a package manager is used to download and install that package, automatically taking care of downloading and installing any dependencies the package might have on other packages.

The same package manager can be used to upgrade or remove the package.

Containers

A fourth approach is to use a so-called container (such as a Docker container).  A container is essentially an image of a computer with the application and all dependencies already installed(7). Containers are useful when exact dependencies must be maintained for a particular version of the application.  Containers are immune to other installations that occur on the host computer since containers are, well, self-contained.

To upgrade an application, just get a new version of the application container and copy it over the old one — or leave it side-by-side with the old one if multiple versions are desired.

To uninstall the application, just delete the container.

Support

After an application is delivered to the user, support begins. If a user has a problem with the application or wants a new feature, or simply wants to know how to use the application, they expect some kind of helpful support.

For a company, this could be a dedicated support department full of energetic, people-friendly folks who love to help others make the best use of the applications provided by the company. They are also the wall that keeps the users and developers separate so the developers can concentrate on making more applications.

For a one-person operation, support is the developer, who may not be people-friendly because they poured all their efforts into learning how to be the best programmer and neglected those seemingly insignificant social skills all the cool kids were practicing.

To reduce the number of questions about how to use the application, make the application as easy to use as possible and then invest in writing a user guide for the application. Some users still won’t, to put it politely, Read The Fine Manual (RTFM); however, most users will read it if it’s available and reasonably well-written. Just be aware that writing a user manual requires different skills than writing code; code only needs to be understood by computers, which are dumb. Users are different.

A bug tracker and new feature tracker (often collectively known as an Issue Tracker) can go a long way towards helping support deal with users, by providing a structured means of communication that both users and support can view.

Service Level Agreements can help set expectations with users, to reduce unnecessary stress when the unexpected happens.

Finally, support for an application means easing it into retirement when it comes time to end the application. This includes plenty of warning to the users to give them time to move to a new or different application. It could also mean opening up the source for the application so other users can take on support of the application.

Bug Tracking

The application is done, tested, and deployed.  The user is using it.  Then it happens.

A bug.

The user is understandably annoyed.  However, as the application developer, you were smart enough to tell the user what to do when a bug occurs and you had arranged for a place the user could report the bug.

You were smart enough, weren’t you?

Of course you were.

There are a few ways in which a developer can hear about bugs in their code:

  1. You read about it in a newspaper when the bug causes the lights to go out in Dixie
  2. You receive an irate phone call at 2:34 in the morning from a user who found the bug
  3. You read a polite but clearly irritated e-mail from a user who found the bug
  4. You get an e-mail notification that a bug has been added to your bug tracker about the bug

Bug-tracking software not only allows bugs to be reported, it also tracks those bugs and sends out automatic notifications when a bug report is updated.  It can track the version of the application in which the bug was found, how to reproduce the bug, who reported the bug, the version of the application in which the bug was fixed, and more.

Of course, like any endeavor where a user is involved, the quality of the bug report often leaves a lot to be desired.  For example:

“IT STOP WORKING WHEN I PUSH BUTTON”

But at least you know who filed the bug and can try to follow up on the problem — assuming they left their contact information and that information is valid.

Bugs are a serious matter for most users and the more critical the application is, the more serious the matter becomes.  Those users who are serious about reporting bugs can provide a lot of information, including:

  1. Steps they were doing when the bug occurred (reproduction cases)
  2. Snapshots of the error message(8)
  3. Log files(9)

A bug report has the following minimum properties:

  1. Title (so the bug can be more easily found)
  2. Description of what went wrong
    • Which might include log files generated at the time the bug was found
  3. Reproduction case (how to (possibly) reproduce the problem)
  4. Severity (for example, from most severe to least):
    1. User can’t get anything done and big money is being wasted as a result
      • Fix it NOW!
    2. User has painful work-around that will only get more irritating as time goes on
      • Fix it as soon as possible
    3. User can live with it for the time being
      • Fix it soon
    4. User doesn’t care but was bothered just enough to report it
      • Fix it when you get around to it but “I’m tellin’ ya, that typo sure looks unprofessional”
  5. Who is submitting the bug
  6. How to contact the submitter
  7. What version of the application was being used
  8. Bug state

As suggested by that last item, the bug report will move through various states until the bug is fixed or the report is rejected.  For example:

  1. New (just filed)
  2. Acknowledged (the report has been read)
  3. Rejected (the report has been rejected)
  4. In Progress (the bug is being worked on)
  5. Complete (the bug is fixed and delivered)
  6. User Verified (the bug fix has been verified by the user)
  7. Closed (the bug report has been fully resolved)
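
As a sketch only (the field names are chosen for illustration; real bug trackers define their own schemas), the minimum bug report and its states map naturally onto a small data structure:

    from dataclasses import dataclass, field
    from enum import Enum

    class BugState(Enum):
        NEW = "New"
        ACKNOWLEDGED = "Acknowledged"
        REJECTED = "Rejected"
        IN_PROGRESS = "In Progress"
        COMPLETE = "Complete"
        USER_VERIFIED = "User Verified"
        CLOSED = "Closed"

    @dataclass
    class BugReport:
        title: str
        description: str
        reproduction_steps: str
        severity: int                 # 1 = fix it NOW ... 4 = when you get around to it
        submitter: str
        contact: str
        app_version: str
        state: BugState = BugState.NEW
        log_files: list = field(default_factory=list)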

At any time, the user and the developer (plus anyone else invited to watch) can look up the bug and see the progress.

Bug Service Level Agreement

When a bug is reported, a severity is assigned to it.  The user will tend to assign the highest level of severity while a developer will tend to view any bug that doesn’t crash the application with somewhat less urgency.  This leads to a mismatch in expectations between the user and the developer.  And that, in turn, can cause needless friction between the user and the developer.

The key here is to manage the user’s expectations.  This can be done through a Service Level Agreement or SLA.  An SLA dictates what the user can expect when they file a bug report.

At a minimum, the SLA specifies how quickly a bug report is responded to and how quickly a bug is fixed based on the severity.  If the user is willing to provide an honest severity rating, the developer promises to get to the bugs in a timely manner.  The SLA can also specify that the severity rating can be negotiated.

Here is an example of an SLA that might occur in a company with a dedicated software team (so there’s someone who can respond very quickly):

The Acme Company Bug Service Level Agreement

  1. Bug severity levels are:
    1. Showstopper
    2. High
    3. Medium
    4. Low
  2. New bugs are reviewed once a day throughout the work week.
  3. All open and recently closed bugs are reviewed once a week with users
  4. Showstopper bugs are prioritized above all other work except Showstoppers currently being worked on
  5. High bugs are prioritized above all other work except Showstoppers and High bugs currently being worked on
  6. Medium and Low bugs are prioritized with other work being done by the developer
  7. All bugs are acknowledged within 24 hours
  8. Showstopper bugs are to get at least a work-around within 8 hours of being acknowledged
  9. Showstopper bugs are expected to be completely fixed and the fix deployed within 2 work days
  10. High bugs are expected to be fixed within 5 work days
  11. Medium bugs are expected to be fixed within 20 work days, pending higher priority work
  12. Low bugs are expected to be fixed within 60 work days, pending higher priority work

For such an SLA to be workable, the application code must be in very good shape from the start, so that Showstoppers and Highs are very rare.  Of course, the SLA could have an item that says the application will not ship with any Showstoppers, but that requires the testing environment to be the same as the users’ and the tests to be at least as thorough as the users’ efforts in using the application.

Test early.  Test often.

New Feature Requests

Feature requests are what users ask for when the application can’t do what the user wants.  Feature requests typically don’t have severity levels but instead have a priority for when the features are desired. Priorities are negotiated with the user.

To this end, a feature request typically has the following minimum properties:

  1. Title (so the feature can be more easily found)
  2. Description of what the feature should do (this would include requirements)
  3. Who is submitting the new feature
  4. How to contact the submitter
  5. Version of application in which the new feature is to appear
  6. Feature state

The Feature state will be something like the following:

  • New
  • Assigned
  • In Progress
  • Complete
  • Rejected

As with bugs, the user and the developer can both view the new feature request so as to follow its progress.

New Feature Service Level Agreement

New features should also be mentioned in the Service Level Agreement.  For example:

The Acme Company New Feature Service Level Agreement

  1. Once a new feature is assigned, it cannot be changed without negotiation with the developer
  2. If a new feature is changed after it is assigned, the new feature’s state is changed back to New and reprioritized with other work
    • This means that any existing work on the feature will be stopped
  3. Once the feature has reached a state where it can be demonstrated and if the user agrees to the risk, the new feature can be provided to the user with an out-of-band release of the application for the sole purpose of verifying the feature works as expected.
    • At this point, minor changes can be negotiated so as to complete the new feature to the user’s satisfaction
    • If requested changes are too large in the developer’s estimation, the current work needs to be stopped and reprioritized (see step 2)

This sort of SLA for new features makes it clear that any new feature must be well thought out and be of real benefit to the user.  Prototypes can be used to demonstrate the feature to a user and this would be part of the estimate of work to be done by the developer.  Such prototyping should be done for major features just to make sure the user isn’t surprised.

Footnotes
1
A software application can be thought of as a feature of a work environment.  This kind of fractal thinking appears everywhere in software development.
2
Rust and Go are system-level languages similar to C (and fulfilling the same role), but they are only in their teens at this point, so it remains to be seen what kind of staying power they have.  The fact that Rust is now supported in the Linux kernel suggests it might have the legs to go the distance.
3
The latest whiz-bang shiny new language can be fun to play with and educational, but such languages are often not well-supported and/or come with only the most basic of tools, making it a chore to create decent-sized applications.
4
The developer may be required to install a tool or two beforehand in order to build the software, such as a compiler, mainly because such tools tend to be quite large and don’t change very often.
5
Sometimes known as behavior-driven development.  Behavior-driven development focuses more on proving the behavior of the function is correct as opposed to proving the implementation of the function is correct.  It’s a matter of perspective: If you focus on behavior instead of implementation, the function can be rewritten as long as the behavior is the same and a behavior-focused test will prove the behavior has not changed.
6
7
Containerization is actually way more complicated than that but from a user perspective, it works like a self-contained image that starts up really fast.
8
This is an excellent reason for having good error handling.
9
And an excellent reason for having easy-to-use log files.
10
“Just enough” means just that: Not the complete functionality but the minimum needed to pass the test. If the test for a function is to prove it returns True, then hardcode the function to always return True so the test passes. On the next iteration, add a slightly more complicated test and a little more functionality to satisfy the new test while keeping the old test. This is why test-driven development is sometimes considered tedious.