Friday, 31 July 2015

Contemplations from 'Common' Events

[This blog was originally published as a Dutch article in TestNet Nieuws (http://nieuws.testnet.org/vak/overpeinzingen-uit-alledaagse-dingen/)]

Two weeks ago I experienced a disruption in production, one that was, for me at least, very serious. I was able to navigate to a safe point, and that was it. Frustrated, I called the helpdesk and started explaining what I had been doing up to the moment the disruption occurred, what I did that triggered it, and what the impact was for me. While telling the story, I noticed I was thinking about the signals I had been ignoring up until the disruption and all the workarounds I had been applying, and wondering whether I should mention them to the support desk. Were they related to this problem, had they perhaps contributed to it, or were they unrelated altogether? Had the problem become worse over time? Had my actions made it worse, or harder to solve, or even unsolvable? It occurred to me that if I went through this, the many other users in the organisation who file incident reports probably go through the same thing. What about "my" testers, who had been working on a project for a long time? Or any tester in any other organisation, for that matter?...

My contemplations were interrupted by a voice on the other side of the line: "I'll transfer you to TechSupport". Then it went quiet. It struck me that I had cursed the dull, corny hold music hundreds of times before, yet now that it was absent I doubted whether I was still connected. I wondered whether the same holds for users' requests. They throw things over the wall to the IT department all the time, and now that people are 'scrumming', requests are realised almost immediately... we now have features in the software that people wanted really badly, and now that they are there, those features have exposed even worse problems, or created situations in which users still aren't served in a need. The silence on the other side of the line is deafening, but the clock on my phone showing the connection time is still ticking, so apparently I am still connected.

I'm hesitating whether to call again, and just before the 'moment suprême' a voice sounds on the other side of the line. I start explaining the disruption again from beginning to end, and this time I decide to dwell a bit longer on my doubts and to mention that I have been ignoring signals and using workarounds. While I'm telling this, I hear the guy on the other side typing frantically, and I realise I have seen the 'history field' or 'description field' of a bug being adjusted on several occasions after it was first logged. I smirk a little: this principle doesn't apply only to 'us testers'. The tester's conscience in action.

I restart, and everything seems to be heading in the right direction. I'm still getting a message, but I'm helped for now. I'm getting along nicely when suddenly the whole thing stops abruptly; nothing responds as it should. I call the support desk again, tell the story, get forwarded to TechSupport, and now physical support is on its way as well. They take a look, even use a special diagnostic device, reach a conclusion, and present me with a description of the solution.
 
I'm now at the party that is going to fix my disruption. Now that I have the solution, I hear myself skip the problem history altogether and simply state: "that's the solution, you fix it". I have a diagnostic report, after all, and I know exactly what the cause of the problem is. I'm flabbergasted when I'm called a few hours later and told that an investigation has been done, that the cause has been found, and that they are going to fix it; exactly as I had stated earlier. I wonder whether "my" testers have the same knack: do they redo the whole diagnosis when work is handed over from another tester, or do they trust the work of the tester before them? Do developers ask the new tester on their project to redo all the test work that has already been done, just to make a fresh diagnosis?

I nearly get a heart attack when the guy on the other side of the line mentions the amount to be paid for the solution. I'm quiet for a bit. I have done some investigating 'on the internet' myself into the different ways to fix the problem, and I have seen (exactly the same) solution at a fraction of the amount this guy is quoting. The only catch is that I would have to get my hands dirty. In an impulsive moment I blurt out that I will therefore fix the problem myself.

There's silence on the other side of the line (no, I'm not expecting hold music this time), and then the voice says that I still have to pay the diagnostic fee. Clearly annoyed now, I state that I will not pay this fee, since I didn't ask for it. Even more so: I had already presented them the solution in a report; did I charge them for my diagnosis? Again my thoughts wandered off to my work situation: isn't this exactly what we do as testers? Redoing the work of our predecessor because we want our own view on the problem, or because we don't trust the data of the one who tested before us, and then charging the cost (time, money, et cetera...) to our clients. I mumble something about 'service' being a virtue and end the call after some grumbling and discussion.

In the aftermath, my thoughts return to the situation at work: many disruptions, issues and bugs are raised too easily by users, because they have no idea what a solution costs, especially since it's not their own money they're spending. I wonder whether, even for problems that are somewhat more complicated by nature, people would solve them themselves if they were rewarded for it. Solving things yourself would be cheaper than having them solved by the (more) 'expensive' IT department. Would we solve problems more quickly, instead of spending time on workarounds that might worsen the problem or make it unsolvable? What would that mean for 'us testers'? Should we trust the 'results from the past'...

And now? For a fraction of the cost, I fixed the problem myself. What? A tester isn't supposed to fix a program? Says who? Is that even relevant?

Oh? Didn't I mention that this wasn't an IT problem? No... I had car trouble.
The car broke down on the highway while I was on my way to a hike on a nice, slightly chilly Sunday afternoon. I had the ANWB (the Dutch breakdown service) on the phone: first the regular helpdesk, then technical support. The tech guy said I could drive on with the problem after restarting the car, but when the problem worsened, the ANWB van with a mechanic came by.
The cause of the problem was a broken ABS ring (just google it), and it was repairable in a few easy steps. The dealer asked more than twenty times (!!!) that amount, because they couldn't order the ABS ring on its own, only together with the whole axle. In the end I did the repair myself, and I'm driving there and back again. I also got the invoice for the 'service'... 50 euros for plugging a device into the car, which the guy from the ANWB had already done on the side of the highway.

And so... the last lesson of this article is... only in their context do things really become clear.


Pictures: my own repair attempt and the Smart HobbyRepair day in Heemskerk (where I got some helping hands); own archive and Ricardo Vierwind
