Currently I'm working from home; the snow and ice made me think twice before commuting to Amsterdam again. Yesterday it took me two hours to get home, a journey that normally takes an hour. So I have (not very environmentally friendly, I know) set the heater a degree higher, made myself a nice cup of coffee and started working on the planning and preparation of the chain test that is to be executed next March, along with some testing on a system.
It's been two weeks since EuroSTAR 2009 ended, and my mind has been racing with ideas ever since. Not only that, I've also been busy implementing a lot of the material and tips I picked up at the conference in my current projects.
Since I'm testing and managing a whole program, I could really use the information provided in 'Program Test Management - a survival kit' by Graham Thomas. I went through the 'best practices' and 'anti-practices' lists in the slides and compared them with what I was already doing in my project.
One of the required skills was negotiating and influencing, and I noticed I hadn't been very strong in that department. I used what I learned in the workshop 'Chatterboxes and Cavedwellers' by Naomi Karten to pinpoint the problem: since I'm an introvert and my 'audience' is mostly extrovert, I suddenly understood why my message may not have been landing as well as I thought it did.
The 'survival kit' also stated that there should be a 'clear test organisation structure with matrixed relationships', a 'clear and agreed interface with stakeholders and sponsors', and a need 'to ensure that your stakeholders and sponsors clearly understand what testing is doing for them'. I realized that one part was missing: the program itself. I decided to combine all four (the three from the track plus my addition) in one large overview of the program, picturing the systems, their connections, who is responsible (owner, tester and users) and the important dates. I'll also use these as 'talk-images' to align all stakeholders (the first try-out was very positive!). To accompany these images, I made a scenario with activities, timing, required input, required output and who is responsible. Thirdly, I made a Gantt chart for the time overview. All three parts are aligned with each other through colours, so a given part of the process may be 'blue' in all three documents. Along the way I also used things I learned in Rik Teuben's track 'Many can quarrel, fewer can argue'.
My 'test case' said he really liked the overview: he was able to place himself in the bigger picture in relation to the other parts of the program (which were normally out of scope for him), and he also liked that he could easily identify the timelines and activities in the other documents because he only had to find 'his' colour.
(I've also been thinking about another overlay for the images, like Neil Pandit's visualized risk-based testing heatmaps.)
The second improvement I made in my working area, and this concerns the system I'm testing (not the program), is based on the material on exploratory testing. Michael Bolton's workshop was full of things I could use to stretch the abilities of the system under test. This did cause some stress in the organisation, but when I argued (;-) ) that this was all beneficial to getting a good look at the quality of the system and preventing more 'pain' in the future, they were (mostly) convinced. A secondary effect is that it made testing this (somewhat dull but complex) system fun again.
So, I guess there's still a lot to implement from the EuroSTAR 2009 conference. I haven't been able to work everything out yet, since I also had to catch up on the work I left behind when going to Stockholm, but I'm confident it will all have a place somewhere.