Property-based testing suggests a new way to test software, going beyond the example-based approach and stressing your code with random, realistic inputs. Follow this 3-issue mini series about property-based testing, based on the talk by Kenny Baas and João Rosa at Codemotion Rome 2019.
Property-based testing: quick recap
In this third and last issue about property-based testing (see the first issue here, and the second one here), we’ll see how this approach — and some interesting features provided by QuickCheck, the reference library for property-based testing — can help us set up meaningful and useful tests for our methods and functions. But before going any deeper, here is a quick summary of what we have learned about property-based testing so far.
Property-based tests are, in short, parameterized tests on steroids: they feed test methods with a sequence of random inputs. Each test method must be designed to check and assert the desired behavior (property) for any input, usually within a given range. This approach gives property-based testing the ability to “explore” what could happen in previously unforeseen conditions.
So, if we are writing a test for our super-fast method that reverses an array of strings, we should not just check that, given [“a”, “n”, “k”], the result is [“k”, “n”, “a”]. We could check, instead, that reversing any random array of strings twice gives back the initial array (the round-trip property). If or when the test fails, we have found an input that breaks our code.
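In junit-quickcheck, a round-trip property like this is a one-line @Property method. Stripped of the library, the idea can be sketched in plain Java (the reverse method and the number of runs are illustrative, not from the article):

```java
import java.util.Arrays;
import java.util.Random;

public class ReverseProperty {
    // The method under test: reverses an array of strings.
    public static String[] reverse(String[] xs) {
        String[] out = new String[xs.length];
        for (int i = 0; i < xs.length; i++) {
            out[i] = xs[xs.length - 1 - i];
        }
        return out;
    }

    // Hand-rolled property check: for many random arrays,
    // reversing twice must give back the original array.
    public static boolean reverseTwiceIsIdentity(int runs) {
        Random rnd = new Random();
        for (int run = 0; run < runs; run++) {
            String[] input = new String[rnd.nextInt(20)];
            for (int i = 0; i < input.length; i++) {
                input[i] = Integer.toString(rnd.nextInt(1000));
            }
            if (!Arrays.equals(input, reverse(reverse(input)))) {
                return false; // found a counterexample
            }
        }
        return true;
    }
}
```

With junit-quickcheck, the loop and the random generation disappear: the library drives the property method with random arrays for you.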
This example introduces the two main topics of this issue: how to generate random inputs for a non-basic data type/object, and how to reproduce a failing run/scenario.
Realistic Case Scenario – Generators
Let’s do it by building code and tests for a realistic scenario: redeeming a fidelity-points bonus when our customer buys 3 or more special products in a single purchase. How can we check this with property-based testing? How can property-based testing help us improve our code? Let’s see.
We can, as a first move, write our checks as “plain” example-based unit tests. We have two different cases to check: the first one is a purchase that can redeem fidelity points, the second one is a purchase that can’t. Our tests can be something like the following:
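A sketch of what these two checks could look like — written here as self-contained plain Java with hypothetical Product and FidelityPoints shapes; the real project hosted on GitHub uses JUnit assertions:

```java
import java.util.ArrayList;
import java.util.List;

public class FidelityPointsExampleTests {
    // Hypothetical product type: only the "special" flag matters here.
    public record Product(String name, boolean special) {}

    // Hypothetical implementation of the spec: the bonus is redeemed
    // when the cart contains 3 or more special products.
    public static class FidelityPoints {
        private final List<Product> cart;
        public FidelityPoints(List<Product> cart) { this.cart = cart; }
        public boolean redeemBonus() {
            return cart.stream().filter(Product::special).count() >= 3;
        }
    }

    // Helper described in the article: builds a cart with the
    // requested number of special products (plus a regular one).
    public static List<Product> generateCartWithSpecialProductCount(int specialCount) {
        List<Product> cart = new ArrayList<>();
        for (int i = 0; i < specialCount; i++) {
            cart.add(new Product("special-" + i, true));
        }
        cart.add(new Product("regular", false));
        return cart;
    }

    // Example-based check: 3 special products earn the bonus.
    public static boolean earnBonusWith3SpecialProducts() {
        return new FidelityPoints(generateCartWithSpecialProductCount(3)).redeemBonus();
    }

    // Example-based check: 2 special products do not.
    public static boolean noBonusWith2SpecialProducts() {
        return !new FidelityPoints(generateCartWithSpecialProductCount(2)).redeemBonus();
    }
}
```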
Here FidelityPoints
is the class that takes care of checking whether the current purchase can redeem the fidelity points (through the redeemBonus()
method), and generateCartWithSpecialProductCount()
is a helper method that provides a list of Products with the desired count of special products in it.
A simple way to generate such a list of products is something like this:
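For instance (a hypothetical Product type and fixed filler items; the real helper lives in the demo project):

```java
import java.util.ArrayList;
import java.util.List;

public class CartFactory {
    public record Product(String name, boolean special) {}

    // Builds a cart with exactly `specialCount` special products
    // followed by two ordinary ones. Note that the result is fully
    // deterministic — which is exactly why the article calls such
    // a list of products "unrealistic".
    public static List<Product> generateCartWithSpecialProductCount(int specialCount) {
        List<Product> cart = new ArrayList<>();
        for (int i = 0; i < specialCount; i++) {
            cart.add(new Product("special-" + i, true));
        }
        cart.add(new Product("bread", false));
        cart.add(new Product("milk", false));
        return cart;
    }
}
```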
Great! Now we have tests that actually check the desired behavior, but they basically verify a single list of products — and, to be honest, a really unrealistic one. As we saw in the previous issue, with property-based testing and QuickCheck we use basic types (int, double, …) as input for our unit tests, specifying the desired range. But QuickCheck also lets us pass any kind of object as input, by defining our own generators. This helps us to write something like:
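With junit-quickcheck this becomes a @Property method whose cart parameter is produced by a custom generator wired in with @From. Without reproducing the library API, the idea can be sketched in plain Java (names and counts are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class FidelityPointsProperty {
    public record Product(String name, boolean special) {}

    // System under test (hypothetical correct version):
    // bonus with 3 or more special products in the cart.
    public static boolean redeemBonus(List<Product> cart) {
        return cart.stream().filter(Product::special).count() >= 3;
    }

    // Random cart with a controlled number of special products
    // and a random number of regular ones.
    public static List<Product> randomCart(Random rnd, int minSpecial, int maxSpecial) {
        List<Product> cart = new ArrayList<>();
        int specials = minSpecial + rnd.nextInt(maxSpecial - minSpecial + 1);
        for (int i = 0; i < specials; i++) {
            cart.add(new Product("special-" + i, true));
        }
        int regulars = rnd.nextInt(10);
        for (int i = 0; i < regulars; i++) {
            cart.add(new Product("regular-" + i, false));
        }
        return cart;
    }

    // Property: any cart with at least 3 special products earns the bonus.
    public static boolean earnsBonusWhenAtLeast3Special(int runs) {
        Random rnd = new Random();
        for (int i = 0; i < runs; i++) {
            if (!redeemBonus(randomCart(rnd, 3, 8))) return false;
        }
        return true;
    }
}
```

In the junit-quickcheck version, the library replaces the hand-written loop and random cart construction with a generator-driven property method.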
Moreover, we can configure the generators to follow the behavior we need for our test cases. Of course, writing a generator requires some upfront investment, but it is rewarded by the effectiveness of each test run. A generator for FidelityPoints
can create a list of products of random length with a random number of special products inside it, with the ability to choose the minimum and maximum number of special items.
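In junit-quickcheck a generator extends the library's Generator&lt;T&gt; base class; leaving that glue aside, the core logic could be sketched as plain Java (the CartGenerator name and filler products are hypothetical):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class CartGenerator {
    public record Product(String name, boolean special) {}

    private final int minSpecial;
    private final int maxSpecial;

    // The min/max bounds are the knobs the article tunes via an
    // annotation interface at test run.
    public CartGenerator(int minSpecial, int maxSpecial) {
        this.minSpecial = minSpecial;
        this.maxSpecial = maxSpecial;
    }

    // Produces a cart of random length with a random number of
    // special products within [minSpecial, maxSpecial], shuffled so
    // special items are not always at the front.
    public List<Product> generate(Random rnd) {
        int specials = minSpecial + rnd.nextInt(maxSpecial - minSpecial + 1);
        int regulars = rnd.nextInt(10);
        List<Product> cart = new ArrayList<>();
        for (int i = 0; i < specials; i++) {
            cart.add(new Product("special-" + i, true));
        }
        for (int i = 0; i < regulars; i++) {
            cart.add(new Product("regular-" + i, false));
        }
        Collections.shuffle(cart, rnd);
        return cart;
    }
}
```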
The actual implementation requires some glue code, such as the definition of an annotation interface to tune the min/max range at test run. You can see and play with the code hosted on GitHub. Now, every time we run the tests, a new set of inputs generated from a random seed is provided to our checks and… surprise! Sometimes the earnFidelityPointsIfYouBoughtMoreThan3SpecialProducts()
test method fails.
Debugging a Disproven Property
In order to understand why a test failure happened, we can rely on two features provided by QuickCheck: shrinking and running with a seed.
When a property-based test fails, its output provides a message like the following one:
java.lang.AssertionError: Property named 'checkFailingEarnFidelityScenarioWithSeed' failed (
Expected: is <true>
but: was <false>)
With arguments: [me.elleuca.part3.FidelityPoints@5bda8e08]
Seeds for reproduction: [3260006997795642664]
It means that at least one of the inputs from the random sequence disproved the property. Running the test again in “shrink” mode helps to find “smaller” sets of input that also disprove the property.
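Conceptually, shrinking keeps simplifying a failing input as long as the property still fails, so the counterexample you debug is as small as possible. A hand-rolled sketch of list shrinking (not QuickCheck's actual algorithm, just the idea):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class Shrinker {
    // Repeatedly tries to drop one element from a failing input;
    // keeps any smaller input that still makes the property fail.
    public static <T> List<T> shrink(List<T> failing, Predicate<List<T>> propertyHolds) {
        List<T> current = new ArrayList<>(failing);
        boolean shrunk = true;
        while (shrunk) {
            shrunk = false;
            for (int i = 0; i < current.size(); i++) {
                List<T> candidate = new ArrayList<>(current);
                candidate.remove(i);
                if (!propertyHolds.test(candidate)) { // still a counterexample
                    current = candidate;
                    shrunk = true;
                    break;
                }
            }
        }
        return current;
    }
}
```

For example, if the property is “no element is greater than 10”, a failing list like [1, 99, 2, 50] shrinks down to a single offending element.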
Once we have shrunken values, we can use the seed — i.e. the seed of the source of randomness — to reproduce the same conditions and get the same input values:
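In junit-quickcheck the reported seed is pinned via the @When annotation, so the same pseudo-random inputs are regenerated on every run. The underlying mechanism is just a seeded PRNG: the same seed yields the same sequence of values. A minimal plain-Java illustration (the cartSizes helper is hypothetical):

```java
import java.util.Random;

public class SeededRun {
    // Draws a short sequence of "cart sizes" from a seeded PRNG.
    // Two runs with the same seed produce identical sequences,
    // which is what makes a failing scenario reproducible.
    public static int[] cartSizes(long seed, int n) {
        Random rnd = new Random(seed);
        int[] sizes = new int[n];
        for (int i = 0; i < n; i++) {
            sizes[i] = rnd.nextInt(10);
        }
        return sizes;
    }
}
```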
Please note that the @When
annotation used to reproduce a disproven property also supports constraint expressions that allow us to filter the values that reach a property parameter (for example @When(satisfies = "#_ >= 0 && #_ <= 9") int digit
).
For more info about property-based testing, you can start from the demo project implemented for this article and, of course, from any implementation of QuickCheck.
If you are curious about why the unit tests for the redeemBonus()
method in the demo project pass while the property-based tests fail, well, the answer is simple. Unit tests rely on examples of what we expect to be valid input (do you remember the spec? “Customer can redeem fidelity points when she buys 3 or more special products with a single purchase”). But the actual implementation of the redeemBonus()
method checks whether the customer buys 3 or more special products within a single purchase of at least 5 products.
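To make the mismatch concrete, here is a sketch of the two behaviors side by side (hypothetical code that assumes the bug described above):

```java
import java.util.List;

public class RedeemBonusBug {
    public record Product(String name, boolean special) {}

    // What the spec says: 3+ special products earn the bonus.
    public static boolean redeemBonusPerSpec(List<Product> cart) {
        return cart.stream().filter(Product::special).count() >= 3;
    }

    // What the code actually does: it also requires a cart of at
    // least 5 products, so a cart of exactly 3 special items and
    // nothing else fails — the case the random generator found.
    public static boolean redeemBonusBuggy(List<Product> cart) {
        return cart.size() >= 5
            && cart.stream().filter(Product::special).count() >= 3;
    }
}
```

The example-based tests never noticed, because their hand-built carts happened to contain enough filler products.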
As happens in every real project, code (and specs) change. So make your code more robust with property-based tests now 🙂