Test Like You Fly - Basics

Tim Chambers - July 22, 2021

In Test Like You Fly - Part 1 - Introduction we discussed the characteristics of TLYF as a practice. Let's dive right into the basics.

The examples will use Ruby as the language, but the concepts should be generic enough to apply to systems in most any modern language. Some of the technical issues arise from Ruby's duck-typing approach and are off the table for statically-typed or type-enforced languages. YMMV, but the concepts are the same.

Principles

All strategies are founded on principles. TLYF is no different.

  • Ensure the developer is not the only tester/reviewer
  • Measure code coverage - what is measured improves
    • Skip spec/factory directories when measuring coverage - they are either 100% covered or will bias the numbers
      • Including them artificially inflates coverage
    • Render views/pages in specs to maximize coverage potential
      • Sometimes developers introduce business logic in view code
      • Note: unfortunately many coverage measurement tools do not count lines in views/pages
  • Always match test environment versions to the production environment exactly
    • Never let developers code with versions you are not going to ship
  • Test the hard-to-test stuff
    • Documentation
    • Initializers (one-time and first-time code)
    • Deploy process code (the code that installs and upgrades your app in production)
    • Data migrations (both upgrading and downgrading, if that is a feature)
    • Error messaging (go out of your way to make sure errors in error handling do not occur)
  • Automate all tests - even the hard ones - so you keep yourself honest
    • CI/CD should test the deploy processing
  • Avoid negative testing (do not use .not_to do xyz) - the possible non-results of any system are infinite

Anything declared "cannot be tested" is a critical fault risk and should be treated as such - Tenet #4 - When you cannot test like you fly - worry (or do risk management)
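As a sketch of the negative-testing principle, consider asserting what a result is rather than what it is not. Plain Ruby assertions stand in for RSpec here, and normalize_name is a hypothetical helper invented for illustration:

```ruby
# Hypothetical helper: tidies whitespace and capitalizes each word of a name.
def normalize_name(name)
  name.strip.split.map(&:capitalize).join(" ")
end

result = normalize_name("  ada   lovelace ")

# Negative assertion: passes, yet tells us almost nothing - an infinite
# number of wrong results would also satisfy "not the raw input".
raise "unexpected" if result == "  ada   lovelace "

# Positive assertion: pins down the one correct result.
raise "unexpected" unless result == "Ada Lovelace"
```

The negative form would keep passing even if normalize_name started returning nil-ish garbage, as long as the garbage differed from the raw input.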

Coverage - "I got you covered"

What is measured improves. What is not measured leaves doubt. We have the tools to ensure we know what we have tested, and what we have not. Still, coverage is a vague term. Is the line xyzzy(123) if a == 1 covered if a is 2? Most coverage tools say YES it is. Let's examine some ways to improve the value of line-of-code coverage analysis.

  • Can't test code you don't have, but you can ensure code you do have is executed
  • Avoid complex single-line conditionals - more than 2 predicates is a caution
  • Break multi-part conditionals into well-named predicate methods
    • Naming methods gives you the opportunity to clearly state the one simple function the method provides
  • Avoid single in-line conditionals as much as reasonable (they count as covered even if false)
  • Prefer if/case/when statements over in-line conditionals
    • Multi-line conditionals don't obfuscate line-of-code coverage
    • They increase definition and comprehension
    • You can see which lines are "covered" and which are not
  • Keep methods short and single-responsibility (SRP)
    • 1-3 line methods are ideal - use complexity metrics as your guide
    • Keep your lines of code short - errors are easier to spot in easily-read code

A line of code that is not covered by specs/tests will run the first time in production (see Tenet #1)
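A minimal sketch of breaking a multi-part conditional into a well-named predicate method. The Order model, its fields, and the discount rule are all hypothetical:

```ruby
# Before: a trailing conditional with three predicates counts as "covered"
# even when the condition is false and the branch never runs:
#   apply_discount(order) if order.total > 100 && order.loyal_customer && !order.discounted

# After: a named predicate states the one thing it checks, and the
# multi-line form lets coverage show whether the branch actually executed.
Order = Struct.new(:total, :loyal_customer, :discounted) do
  def discount_eligible?
    total > 100 && loyal_customer && !discounted
  end
end

order = Order.new(150, true, false)
if order.discount_eligible?
  discounted_total = order.total * 0.9
end

puts discounted_total  # 135.0
```

With the multi-line form, a spec suite that never exercises a loyal customer over $100 will show the discount line as uncovered instead of quietly counting it.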

Mocking - "Let's pretend"

All developers have mocked or stubbed code outside the system-under-test to ensure clear context and to speed up tests. This is often necessary, but each time we do it we risk walling off misbehavior at those interfaces - and the boundaries are where many errors creep in.

  • Avoid mocking/stubbing whenever it is possible without negative impact
  • Mock only realistic input and responses
    • This includes data type matching of input and return values
  • Never mock entire operation sequences or message chains
  • Don't replace the code under test with mocked responses that may be correct today but are one code change away from hiding a failure
  • Never partially mock a response - in production you won't get half an answer
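A hand-rolled sketch of type-faithful stubbing (a real suite would likely use RSpec doubles; StubRateService, its method, and the 1.0842 rate are invented for illustration). The point is that the stub returns the same shape and types production would - here, a string-keyed Hash with a Float, as parsed JSON arrives - not a convenient Ruby object:

```ruby
# Stub for a hypothetical external exchange-rate service.
class StubRateService
  def fetch_rate(currency)
    # Production returns parsed JSON: string keys, float value.
    # Stubbing { currency: :eur, rate: 1 } instead would pass the wrong
    # types through the code under test.
    { "currency" => currency, "rate" => 1.0842 }
  end
end

# Code under test: consumes the response exactly as production would.
def converted_amount(service, amount, currency)
  rate = service.fetch_rate(currency).fetch("rate")
  (amount * rate).round(2)
end

puts converted_amount(StubRateService.new, 100.0, "EUR")  # 108.42
```

If the code under test ever starts expecting symbol keys or Integer rates, a type-faithful stub surfaces the mismatch in the spec rather than in production.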

Example #1 - SQL

  • Specs tested the content of SQL generated dynamically for user-expressed search options
  • Confirmed generated SQL was 'as expected' and specs ran fast (no actual DB queries were made) - it "looked" correct
  • Did NOT confirm SQL was executable by the DB
  • Application passed specs and failed to generate valid executable SQL syntax in production
  • Moral: Pass requests to the actual destination services using the same version levels and service apps used in production (as in flight)

Data - "The fabric of our lives"

Ok so data is not cotton. But it is fuel and as such can burn us. Treat data and especially changes in data with the reverence it deserves.

  • Treat immutable data as immutable in tests - don’t alter data that cannot be altered in production
  • Write-once data should have code to ensure it is immutable
    • Setter methods can prevent/announce attempts to change immutable data signalling a potential flaw
  • Ensure data represents real world data in the normal case - test edge cases to ensure they do not stretch the definition of “realistic”
    • Ex: Don’t use 1 as a common value if the normal data is in the range of 10,000..10,000,000
    • Test for negative values if unsigned
    • Use values relatively prime to each other in specs that count/sum/average
      • 1 and 1 are not the same as 0 and 2, but they often produce the same results
      • Values such as 2, 3, 5, 7, 11, 13 make differences visible
  • Select production-like data - the data should be something any typical user would recognize
    • Testing with fake data is fine, as long as the fake data simulates real-world data
    • E.g. if the data is for a name field, use a variety of names; if the data is numeric, include the proper number of decimals and large values
  • Ensure widest possible variety of test example data content. Per Tenet #1 do not present a certain class of data for the first time in production
  • Treat meta-data as data. Ex: HTTP params are STRINGS, not objects, as they are received from a browser
    • Ex: Don’t test with Date objects when your data is date strings
    • Don’t test with integers if the input is a numeric STRING
    • Use Float values WITH decimals if the numeric values can have decimals
    • If BigDecimal values are exchanged ensure they are tested for class
    • Use the proper number of decimal places. USD$ are in dollars and cents with 2 decimal places
  • Case-sensitivity is real. Your DB might or might not be case-sensitive, and your language may or may not have case-insensitive classes of functions/objects
    • Make sure you test for case matching at any data interface
  • Use save! or update! where applicable to ensure failed validations bubble up and exceptions are announced
  • If your data is an Array ensure the elements are of the type you are expecting
    • If you always expect an Array to have content, test for that
  • Use foreign keys in your database to keep any associated persisted data honest
  • If using default values for columns in a database, ensure that tests prove they are not the only possible values
  • Make sure critical columns are marked not null - especially simple booleans that must be true or false - not "falsey" if nil
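The point about relatively prime values can be shown with a few lines of arithmetic - with 1 and 1 (or 0 and 2), sums and averages collapse to the same answers, so a spec cannot tell a correct formula from a mistaken one:

```ruby
# 1 and 1 versus 0 and 2: the averages coincide, hiding a mix-up.
puts (1 + 1) / 2.0 == (0 + 2) / 2.0  # true - indistinguishable results

# Relatively prime values such as 3 and 5 give every combination a
# distinct result, so a wrong formula yields a visibly wrong answer.
a, b = 3, 5
sum     = a + b          # 8
average = (a + b) / 2.0  # 4.0
product = a * b          # 15
puts [sum, average, product].uniq.size  # 3 distinct values
```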

Example #2 - String Input

  • A certain controller expects date strings for starting and ending dates (e.g. "2021-01-03")
  • Used Date.parse() to parse and validate
  • Controller specs did not need to pass in dates because it substituted default dates when input was absent
  • Default dates were Date objects not String
  • When the default dates were substituted, Date.parse() failed because Date objects are not Strings
  • Moral: Always test with expected data types. That includes when they are mocked
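A minimal reproduction of this failure (parse_start_date is a hypothetical helper standing in for the controller code). Date.parse expects a String, so substituting a Date object as the "default" blows up exactly where real string params would have worked:

```ruby
require "date"

# Hypothetical controller helper: params arrive from the browser as Strings.
def parse_start_date(param)
  Date.parse(param)
end

puts parse_start_date("2021-01-03")  # fine: real params are Strings

begin
  parse_start_date(Date.today)  # a Date object, not the String params carry
rescue TypeError => e
  puts "failed as in production: #{e.class}"
end
```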

Example #3 - Coordinated Values

  • Parent/child database tables had a denormalized common "start date" value for performance. Start date is "immutable" for all intents and purposes after row is created in both tables
  • Specs use factories to generate child data which generates consistent parent data (exactly same value)
  • Specs then "adjust" the child's start date to test a particular condition
  • The specs pass even though the data is not consistent with real world immutability/equality between parent and child
  • Moral: Never adjust data in tests to create "impossible" combinations of real world data
  • Hint: Enforce data validations that ensure immutability of data once generated
  • Hint #2: Ensure tests fail if invalid combinations of data are encountered
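A plain-Ruby sketch of Hint #1 (the Enrollment model is hypothetical; a Rails app would more likely enforce this with an ActiveRecord validation or readonly attribute). Once a write-once value is set, any attempt to change it raises, so a spec that "adjusts" the child's start date fails loudly instead of silently passing:

```ruby
require "date"

# Hypothetical model with a write-once start_date.
class Enrollment
  class ImmutableFieldError < StandardError; end

  attr_reader :start_date

  def start_date=(value)
    if defined?(@start_date) && !@start_date.nil?
      raise ImmutableFieldError, "start_date is write-once"
    end
    @start_date = value
  end
end

e = Enrollment.new
e.start_date = Date.new(2021, 1, 3)    # first write: allowed
begin
  e.start_date = Date.new(2021, 2, 1)  # "adjustment": refused
rescue Enrollment::ImmutableFieldError => err
  puts "blocked: #{err.message}"
end
```

With this in place, a factory that generates consistent parent/child dates and a spec that later tweaks one of them cannot create the "impossible" combination in the first place.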

The above is just a starting point for you as you examine your development and testing practices through the TLYF lens. Our third article in this series will cover "the world around you". Looking forward to your feedback.

Tim Chambers

Tim has been developing code that empowers people for a very, very long time. When he is not developing, he and his wife rescue senior dogs and provide them forever homes.
