Testing and Metrics

•January 11, 2018

Executive Summary: testing provides confidence that requirements have been implemented (via acceptance criteria/tests) before delivery into production; metrics provide confidence that the product/service/application is delivering on its objectives once in production.


Writing code has become easier in recent years with the improvements in IDEs, build and deploy chain tooling, GitHub, and the tutorials and samples available on the web. In many cases it is possible to find code lying around the web which may solve part of the problem you are attempting to code. That said, writing good code is an art. Code that is well structured, tested, and that delivers a solution to the problem statement continues to be a high bar.


Testing is, and will continue to be, a debated topic, with multiple divergent approaches being common, for example:

  • Unit vs integration vs end-to-end
  • Code then test, Test-Driven Development (TDD), Behaviour Driven Development (BDD), or other (a TDD-flavoured sketch follows below).
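
As a flavour of the TDD end of that spectrum, here is a minimal sketch (my own illustration; the function and its rules are hypothetical): the test is written first and fails, and the function is then implemented to make it pass.

```python
# Minimal TDD-style sketch: the tests below would be written first and fail,
# then apply_discount is implemented to make them pass. The function name and
# discount rules are hypothetical examples.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rejecting out-of-range percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()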


Any testing is better than no testing; and in most cases, an individual’s viewpoint on testing is strongly influenced by their notion of what makes a quality product, and how quality is measured. Improving upon a viewpoint, however well informed, requires an articulated hypothesis and the validating data points found in collected metrics.


AWS’s recent blog posting on the roulette wheel reminds us of how a data-driven culture is driving productivity and innovation at Amazon. Amazon has embraced metrics in defining objectives, and in ensuring that products, services and processes meet expectations and can be optimised without reliance on “point of view”. Testing, backed by metrics, provides confidence that the delivered service will meet the expected criteria.


We can think of metrics and testing as providing a solid basis for confidence in application code as it progresses through the Software Development Life Cycle (SDLC); indeed, in the case of Amazon, Metrics Driven Development (MDD) is at the core of the SDLC. Metrics that validate the Definition of Done (of a story) not only validate software during test but provide real-time, data-driven evidence of performance to specification in production.
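
As a sketch of what this can look like in code (my illustration; Amazon’s internal tooling is not public, and the prometheus_client library, metric name, and checkout() function here are all assumptions), the same counter that evidences a story’s Definition of Done can be asserted on in a test and then watched in production:

```python
# Sketch of Metrics Driven Development: the metric that evidences a story's
# Definition of Done is emitted by the code and asserted on in a test; in
# production the same metric backs dashboards and alerts.
# prometheus_client, the metric name, and checkout() are illustrative choices.
from prometheus_client import Counter, REGISTRY

CHECKOUT_SUCCESS = Counter("checkout_success", "Successful checkouts")

def checkout(cart: list) -> bool:
    if not cart:
        return False
    # ... real order processing would happen here ...
    CHECKOUT_SUCCESS.inc()
    return True

def test_checkout_emits_done_metric():
    before = REGISTRY.get_sample_value("checkout_success_total") or 0
    assert checkout(["item-1"]) is True
    after = REGISTRY.get_sample_value("checkout_success_total")
    assert after == before + 1  # the metric itself evidences "done"
```

The appeal of this shape is that the test and the production dashboard share one source of truth: the metric itself.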


Returning to testing: software engineers who prefer not to follow the agile path of acceptance criteria/tests as a required part of a story risk failing to codify the story in a way that evidences how the requirement has been satisfied.
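
For illustration, an acceptance criterion such as “a registered user can log in with valid credentials” might be codified directly as a test. The given/when/then structure below follows the BDD shape; AuthService and its API are hypothetical:

```python
# Hypothetical acceptance test codifying a story's criterion in
# given/when/then form. AuthService and its API are illustrative only.
class AuthService:
    def __init__(self):
        self._users = {}

    def register(self, username: str, password: str) -> None:
        self._users[username] = password

    def login(self, username: str, password: str) -> bool:
        return self._users.get(username) == password

def test_registered_user_can_log_in():
    # Given a registered user
    auth = AuthService()
    auth.register("alice", "s3cret")
    # When they log in with valid credentials
    result = auth.login("alice", "s3cret")
    # Then access is granted
    assert result is True
```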


Mocking with Data in a GDPR World

•January 3, 2018

General Data Protection Regulation (GDPR) is going to impact a lot of software, not least test data. A few articles provide some guidance on this particularly thorny subject, and a small synthetic-data sketch follows the list:

  • How GDPR Impacts Test Data Management
  • Test data management and reactive automation
  • Test data management: the hidden GDPR challenge
  • How To Create Your GDPR Compliant Test Data Management Strategy
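
A common thread in those articles is replacing copies of production personal data with synthetic equivalents. As a minimal sketch (my own, not taken from the articles above), the third-party Faker library can generate realistic but fictitious records for test fixtures:

```python
# Sketch: generating synthetic personal data for tests instead of copying
# production records. Faker is a third-party library; the record shape here
# is an illustrative assumption.
from faker import Faker

fake = Faker()

def synthetic_customer() -> dict:
    return {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "date_of_birth": fake.date_of_birth().isoformat(),
    }

if __name__ == "__main__":
    print(synthetic_customer())  # no real person's data involved
```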

AI Driven Retrospectives

•December 3, 2017

With the recent drive around AI and chatbots, it’s no surprise to find ScrumBot. What would be more interesting, however, is to hook the bot into a wider array of data outside of the Slack/HipChat/Microsoft Teams world, perhaps leveraging product metrics as well as SDLC metrics. Standuply appears to hint at this via its third-party metrics integration. I suspect that if the bot could leverage appropriate metrics, it would be powerful not only for the retrospective but also for the pair-engineering experience.

Agile influences Behavior Change

•November 15, 2017

There is often debate around which Extreme Programming (XP) / agile practices work and which don’t – hotly debated are pair programming and test-driven development. What is often missed is that agile has an underpinning: the drive for behaviour change to deliver improvements (quality) in the delivery of software. This may seem obvious, but it’s worth calling out explicitly – see Connection and here.

Whichever agile techniques you find work for you and the team, just remember: the need for behaviour change is an ongoing process to drive quality improvements; it’s not that the adoption of any particular agile technique itself will improve quality.


Recurrent Neural Networks (RNNs)

•August 20, 2017

A while ago a data scientist colleague pointed me at “Practical Machine Learning With Event Streaming” over on the Monzo blog. The key takeaway, if you don’t have time to read the article in full, is:

RNNs specialise in making predictions based on a sequence of actions (known as an “event time series”), e.g. user logs in → taps on “money transfer” → encounters an error.

After reading the article, I got to wondering: could you take the event stream from the Software Development Life Cycle (SDLC) process and use RNNs to predict defects?
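
As a purely speculative sketch (not from the Monzo article; the event vocabulary, PyTorch architecture, and all names below are my assumptions), SDLC events could be encoded as a sequence and fed to an LSTM that outputs a defect probability:

```python
# Speculative sketch: an RNN over encoded SDLC events predicting whether a
# sequence ends in a defect. Event names, vocabulary size, and the PyTorch
# architecture are all illustrative assumptions.
import torch
import torch.nn as nn

class EventRNN(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, event_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(event_ids)                  # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.rnn(x)                  # final hidden state summarises the sequence
        return torch.sigmoid(self.head(h_n[-1]))   # (batch, 1) defect probability

# Hypothetical event vocabulary: 0=commit, 1=build, 2=test_fail, 3=hotfix ...
events = torch.tensor([[0, 1, 2, 3]])              # one encoded event sequence
model = EventRNN(vocab_size=10)
print(model(events))                                # defect probability for the sequence
```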

Bots As A Means To Motivate Behavior Change

•June 5, 2017

Often the underlying message around Extreme Programming (XP) and agile in delivering software is actually a behaviour change required of the person/team. Behaviour change is hard; we as individuals are often stuck in our ways 🙂 Hence I was interested to come across “Chatbots As A Mean To Motivate Behavior Change”, especially given the last few years’ buzz around AI and bots 🙂

“Pushy AI Bots Nudge Humans to Change Behavior” also provides some food for thought: “Researchers use artificially intelligent bot programs to stimulate collaboration and make people more effective”.

Could a chatbot, driven by a neural network or an appropriate algorithm, steer a software engineer towards improving the codification of application changes (story/requirement)?

Update: a few more recent articles provide food for thought on the above:

Evidence of Success

•April 11, 2017

A few thoughts if you are about to venture down the road of building a software application:

  • How will you know when you are DONE implementing the requirements (stories)?
  • How will you validate to the product owner that you are DONE? Maybe consider traceability and linkage of stories to acceptance criteria to acceptance tests?
  • Ticking a box to say you have implemented all the requirements is in many ways immediately stale as soon as the code is “enhanced” after the box-ticking exercise.
  • When in production, how will you know if the application is “broken”? Consider solutions for logging, monitoring, availability, etc. (a minimal sketch follows this list).
  • Logging is only as good as the data logged 🙂 Overuse of logging will mean a poor signal-to-noise ratio 🙂 Likewise for monitoring.
  • If the application is “highly available”, prove it, in production 🙂
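
As a minimal sketch of the “how will you know it’s broken” point above (my illustration; the Flask framework, endpoint, and service names are assumptions), a health-check endpoint with structured logging gives both a probe for availability monitoring and log lines that are easy to alert on:

```python
# Sketch: a health-check endpoint with structured logging, using Flask.
# The dependency checks are hard-coded placeholders for illustration.
import json
import logging
from flask import Flask

app = Flask(__name__)
logger = logging.getLogger("app")
logging.basicConfig(level=logging.INFO)

@app.route("/health")
def health():
    # Check real dependencies (database, queues) here; hard-coded in this sketch.
    status = {"service": "orders", "db": "ok", "queue": "ok"}
    healthy = all(v == "ok" for k, v in status.items() if k != "service")
    # Log structured data, not free text: easier to keep signal above noise.
    logger.info(json.dumps({"event": "health_check", **status}))
    return status, (200 if healthy else 503)

if __name__ == "__main__":
    app.run(port=8080)
```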

You could think of this as Test-Driven Development → Behaviour-Driven Development → Evidence-Driven Development, or even Evidence-Driven Engineering.