Tonight begins the second round of the 2017 NHL Stanley Cup playoffs for our hometown Washington Capitals. They have no doubt spent the past few days furiously practicing in preparation for facing the reigning champs, the Pittsburgh Penguins. It occurs to me that as they practiced, their coaches didn’t just set up orange cones on the ice to represent players. They probably didn’t strap to the goal frame the kind of target screen kids use for pickup street hockey games in suburban cul-de-sacs.
Instead, the ‘Red’ line went out against the ‘Blue’ line, and their incredibly talented goalie, Braden Holtby, manned one of the nets. Why practice this way? There are a number of benefits. It lets each player practice his specific role. More importantly, it makes for more effective practice because the players aren’t simply rehearsing what is ‘supposed’ or ‘expected’ to happen; they get to practice within a state of chaos. Each player does things the others don’t necessarily expect, so every other player has to adjust and react, and each comes out as prepared as possible for facing Sidney Crosby and the Penguins over the course of this best-of-seven series. So why am I writing about this today (well, besides using it as an opportunity to bleed off some anticipatory excitement for tonight’s game)?
This is what AEGIS means when we talk about negative testing and the need to Test Like You Mean It. When testing is mostly limited to the things that are supposed to happen, the resulting fielded system carries a much higher risk of running into trouble in production. That’s because the software hasn’t had a chance to “practice” against the unexpected. Some have argued that it’s simply a matter of developing better test cases. That’s certainly part of it. But in an environment where interoperability is a central concern, having the opportunity to test in an ecosystem that truly represents what production will be like, including real, honest-to-goodness interaction among components of systems built by different parties, is the only reliable way to Test Like You Mean It.
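To make the contrast concrete, here is a minimal sketch of the difference (purely illustrative code, not from any AEGIS test suite). The `parse_birth_date` function and its rules are hypothetical; the point is that the happy-path test covers only the input that is ‘supposed’ to arrive, while the negative tests throw chaos at the same function and check that it fails loudly and gracefully rather than letting bad data slide through.

```python
# Illustrative sketch: happy-path vs. negative tests for a hypothetical parser.
import unittest
from datetime import date


def parse_birth_date(value: str) -> date:
    """Hypothetical parser: accepts ISO 8601 dates (YYYY-MM-DD) only."""
    if not isinstance(value, str) or not value.strip():
        raise ValueError("birth date must be a non-empty string")
    parsed = date.fromisoformat(value.strip())
    if parsed > date.today():
        raise ValueError("birth date cannot be in the future")
    return parsed


class HappyPathTests(unittest.TestCase):
    def test_valid_iso_date(self):
        # The input that is 'supposed' to happen.
        self.assertEqual(parse_birth_date("1980-07-14"), date(1980, 7, 14))


class NegativeTests(unittest.TestCase):
    """The 'practice against chaos' cases: inputs that are not supposed to happen."""

    def test_blank_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_birth_date("   ")

    def test_wrong_format_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_birth_date("14/07/1980")  # not ISO 8601; must not silently pass

    def test_future_date_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_birth_date("2999-01-01")


if __name__ == "__main__":
    unittest.main()
```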
The target screen strapped to the goal frame allows for happy-path testing of the shooters. Go for the four corners and the five-hole; those are the highest-percentage shots. Now consider what happens “in production”: it’s Game 7 in overtime, meaning the next goal wins. It turns out Marc-André Fleury of the Penguins is doing an amazing job tonight covering those usual weak spots. If the shooters have only been practicing for the happy path, they are now in trouble when it matters most.
In health IT, when it comes to exchanging data, systems are similarly tested most often for the kinds of things that are supposed to happen in the course of such information sharing. The problem is, like the overtime hockey game, production is subject to chaos. Some of the largest IT enterprises in the world understand that. Netflix famously introduced into its environment something called the Chaos Monkey, a tool that deliberately causes problems, such as taking servers offline or interrupting network traffic, to force the overall system to deal with that chaos. This lets the software engineers observe whether each situation is handled gracefully, and if not, improve it. The AEGIS Touchstone Project for FHIR, and even our last-generation Developers Integration Lab (DIL), provide the kind of ecosystem and the kind of detailed, data-generating test harness necessary for exercising healthcare information exchange in a way that lets you Test Like You Mean It. In our view, patient safety, for ourselves and our loved ones, is too important to do anything less.
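In the same spirit, here is a hedged sketch of what one negative test against a FHIR server might look like. This is illustrative only, not the Touchstone API; the base URL is hypothetical, and the test simply posts a deliberately malformed Patient resource and checks that the server rejects it gracefully with an OperationOutcome rather than crashing or quietly storing bad data.

```python
# Illustrative negative test against a hypothetical FHIR R4 test endpoint.
import requests

FHIR_BASE = "https://fhir.example.org/baseR4"  # hypothetical; point at your own test server

# Malformed on purpose: 'gender' is outside the required value set and
# 'birthDate' is not a valid FHIR date.
bad_patient = {
    "resourceType": "Patient",
    "gender": "not-a-valid-code",
    "birthDate": "07/14/1980",
}

response = requests.post(
    f"{FHIR_BASE}/Patient",
    json=bad_patient,
    headers={"Content-Type": "application/fhir+json"},
    timeout=30,
)

# A well-behaved server should reject the resource (4xx), not crash (5xx) and
# not accept it (2xx). The error body is typically an OperationOutcome.
assert 400 <= response.status_code < 500, (
    f"expected graceful rejection, got HTTP {response.status_code}"
)
outcome = response.json()
assert outcome.get("resourceType") == "OperationOutcome", (
    "expected an OperationOutcome describing the validation failure"
)
print("Server rejected the malformed resource gracefully:")
for issue in outcome.get("issue", []):
    print("-", issue.get("severity"), issue.get("diagnostics") or issue.get("code"))
```

Multiply that one unexpected input by the thousands of ways real systems from different vendors can surprise each other, and you have the case for testing in a production-like ecosystem rather than against a fixed target screen.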
As for the Caps, I’m afraid all the live-fire practice they have done this week will be for naught. You see, I was born and raised in western Pennsylvania. Let’s go Pens!