dinsdag 5 april 2016

We are the person of interest

Yesterday I read an article about the TV series 'Person of Interest'. It was about the making of the final season (5) and how we in the Netherlands are currently at season 3. Something in it triggered me to write this post. One paragraph said the series had become popular largely because this fiction seems to happen right before reality catches up (which you obviously don't notice when you are two seasons behind the current one). It was right before Snowden made his information public that the series already mentioned governments collecting data about everyone. It made me aware again, and I felt the urge to make others aware too.

I like watching 'Person of Interest'. For me it's like watching a sort of reality horror/thriller show. It occurred to me that when you are open to picking up the signs, you see that these things aren't that far-fetched at all. On the contrary: I find more and more of them becoming plausible every day, and I even notice some have already become reality.

Most testers like being involved in the state-of-the-art and designy side of testing: mobile, test automation, usability... I see testers specialize in 'performance' and 'automation'. I see -alas- still only a small number of testers who care about Business Intelligence, Big Data and Analytics, and I see a growing interest in security testing. But... is anybody giving any thought to WHAT data they are actually protecting with these security tests? I don't think so. I don't think that testers (in general) give a second thought to whether the data they are testing is 'proper data' in the ethical sense of the word. We test data for correctness, we test if data has been processed correctly by the ETL layer, we test if data is in the right format so our systems can use it, we test the readability and meaning of data to our business. BUT WE DON'T TEST THE ETHICAL USE OF DATA!!!

I think we should start caring about this! We live in a world where we become more and more dependent on information technology, where data AND predictive data increasingly factor into decisions by governments, society and companies to treat people in a certain way, even to include or exclude them. Think this is not going to happen because our societies won't allow it? Guess again, read it and weep: http://www.independent.co.uk/news/world/asia/girls-and-unmarried-women-in-india-forbidden-from-using-mobile-phones-to-prevent-disturbance-in-a6888911.html

We should, no, we MUST make a difference. We as testers are -I think- the most fit to check designs and data definitions for unethical use of data and information: we dare to ask questions, are skeptical by nature, are curious, and think like bad guys (girls) when we need to. We can make a difference when testing software and systems, particularly databases and data warehouses, data mining software and other data processing systems, by checking them for compliance with data protection acts and checking that only data is collected that is actually needed for providing the service. Which, I can tell you from experience, isn't the case. In each and every system being built right now, and built in the past, data is collected and stored that isn't necessary for the service being provided. The designers have just been THINKING LAZY at the expense of a bit of privacy loss. Ever wondered why a bank needs your gender to conduct business? They don't.
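To make the idea concrete: a data-minimization check can start out as simple as comparing what a system actually stores against what the service genuinely needs. A minimal sketch (all table and field names here are made up for illustration, not taken from any real system):

```python
# Hypothetical data-minimization check: which stored columns have no
# documented business need? Field names are invented for the example.

REQUIRED_FIELDS = {"customer_id", "name", "iban", "email"}

def unnecessary_fields(stored_columns):
    """Return the stored columns that serve no documented purpose."""
    return set(stored_columns) - REQUIRED_FIELDS

# A bank's customer table that also stores gender and religion:
stored = ["customer_id", "name", "iban", "email", "gender", "religion"]
print(sorted(unnecessary_fields(stored)))  # → ['gender', 'religion']
```

In practice the 'required fields' set would come from the data definitions and the purpose documented for the service, which is exactly where a tester can start asking questions.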

So back to Person of Interest. I know that more than one person sees this show and thinks it's science fiction, just like Star Wars is. But I'm telling you now: this is reality. Current reality. This machine has been built, and it's only a matter of time before the information collected is used in ways we as a society might benefit from, but might also find not so pleasant. The 'ordering pizza' example might be funny, but it's a genuine wake-up call. Time to act now! For sure: 'you are being watched'!

vrijdag 31 juli 2015

Contemplations from 'Common' Events

[This blog was originally published as Dutch article in TestNet Nieuws (http://nieuws.testnet.org/vak/overpeinzingen-uit-alledaagse-dingen/)]

Two weeks ago I experienced a disruption in production, a - especially for me - very serious one. I was able to navigate to a safe point and that was it. Frustrated, I called the helpdesk and started explaining what I was doing up to the moment the disruption occurred, what I did that triggered it and what the impact was for me. While I was telling the story, I noticed I was thinking about the signals I had been ignoring up until the disruption and all the workarounds I had been applying, and whether I should mention them to the support desk or not. Were they related to this problem, had they maybe contributed to it, or weren't they related at all? Had the problem become worse over time, had my actions made it worse, or had I maybe made the problem harder to solve or even unsolvable? I thought that if I had this experience, then maybe tons of other users filing incident reports in an organisation go through the same thing. What if "my" testers, who had been working on a project for a long time, had this problem? Or any tester, for that matter, in other organisations?...

My contemplations were interrupted by a voice on the other side of the line: "I'll transfer you to TechSupport". Then it went quiet. I had cursed the dull and corny waiting tunes hundreds of times before, but now that they were absent I doubted whether I was still connected. I wondered if that is the case with users' requests too. They tend to throw things over the wall to the IT department all the time, even more so now that people are 'scrumming' and requests are realised almost immediately... We now have features in the software that people wanted really badly, and now that they are there, those features have exposed even worse problems or created situations where users aren't served in their needs. The silence on the other side of the line is deafening, but the clock on my phone indicating the connection time is still ticking, so apparently I'm still connected.

I'm hesitating whether I should call again, and just before the 'moment suprême' a voice sounds on the other side of the line. I start explaining the disruption again from beginning to end, and this time I decide to mention that I had my doubts for a while and that I had been ignoring signals and using workarounds. While I'm telling this, I hear the guy on the other side typing frantically, and I realise that I have seen adjustments to the 'history field' or 'description field' itself on several occasions after the initial registration of a bug. I smirk a bit: this principle is not only applicable to 'us testers'. The tester's conscience in action.

I restart and everything seems to go in the right direction. I'm still getting a message, but I'm helped for now. I'm getting along nicely when suddenly the whole thing stops abruptly; nothing reacts as it should. I call the support desk again, tell the story, get forwarded to TechSupport, and now physical support is on its way too. They look at it, even using a special diagnostic device, reach a conclusion and present me with a description of the solution.
I'm now at the party that is going to solve my disruption. Now that I have the solution, I hear myself skipping the problem history altogether and stating "that's the solution, you fix it". I have a diagnostic report after all, and I know exactly what the cause of the problem is. I'm flabbergasted when I'm called a few hours later to hear that an investigation has been done, that the cause has been found and that they are going to fix it; exactly as I stated earlier. I ask myself whether "my" testers have this same knack: do they redo the whole diagnosis when they get work transferred from another tester, or do they trust the work of the tester before them? Do developers ask the new tester on their project to redo all the test work already done, to make a new diagnosis?

I nearly get a heart attack when I hear the guy on the other side of the line mention the amount to be paid for the solution. I'm quiet for a bit. I have done some investigation 'on the internet' myself into the different possibilities to fix the problem, and I have seen (exactly the same) solution at a fraction of the amount this guy is presenting. The only thing is that I'd have to get my hands dirty myself. In an impulsive moment I blurt out that I'm (thus) going to fix the problem myself.

There's silence on the other side of the line (no, I'm not expecting waiting music this time) and then the voice says that I still have to pay the diagnostic fee. Clearly annoyed now, I state that I will not pay this fee, since I didn't ask for it. Even more so: I had already presented the solution to them in a report; did I charge them my diagnostic fee? Again my thoughts wandered off to my work situation; isn't this exactly what we are doing as testers? Doing the work of our predecessor all over again because we want our own view on the problem, or because we don't trust the data of the one who tested before us, and then charging the costs to our clients (time, money, etcetera...). I mumble something about 'service' being a virtue and end the phone call after some grumbling and discussion.

In the aftermath my thoughts go to the situation at work: many disruptions, issues and bugs are raised too easily by users because they have no idea what a solution costs, especially since it's not their own money they're spending. I wonder, even if the problems are a bit more complicated by nature, whether people would solve them themselves if they were rewarded for it. Because solving things themselves would be cheaper than having them solved by the (more) 'expensive' IT department. Would one then solve problems more quickly, instead of spending time on workarounds that might worsen the problem or make it unsolvable? What would that mean for 'us testers'? Should we trust the 'results from the past'...

And now? For a fraction of the costs I have fixed the problem myself. What? A tester isn't supposed to fix a program? Says who? Is that relevant at all?

Oh? Didn't I mention that this wasn't an IT-problem? No... I had car trouble. 
It broke down on the highway while I was on my way to a hike on a nice, slightly chilly Sunday afternoon. I had the ANWB (Dutch breakdown service) on the phone. First the regular helpdesk, then the technical support. The tech guy said I could drive on with the problem after restarting the car, but when the problem worsened, the ANWB van with a mechanic came by.
The cause of the problem was a broken ABS ring (just Google it), repairable in a few easy steps. The dealer asked more than twentyfold (!!!) the amount, because they couldn't order the ABS ring on its own, only with the whole axle. In the end I did the repair myself and I'm driving again, there and back again. I also got the invoice for the 'service'... 50 euros for plugging a device into the car, which the guy from the ANWB had already done on the side of the highway.

And so... the last lesson of this article is... only in their context do things really become clear.

Pictures: My own repair attempt and Smart HobbyRepair day in Heemskerk (where I got some helping hands), own archive and Ricardo Vierwind

dinsdag 10 maart 2015

Let's blog about...Let's Test BeNeLux

Once a regular time to start the day... now an unholy moment to get up. I got on the bus at 05:42; the driver hadn't even bothered to turn on the lights yet. It was easy on the eyes, though. Traveling by train was quite fine today, unlike yesterday, when I had to arrange a car at the last minute because of 'actions by NS personnel'. At approximately 08:30 I stepped into 'Mezz' for Let's Test BeNeLux; a great venue when your tagline is 'For those about to Rock', since it's a smaller (music) stage/rock venue. At registration there were already some familiar and also loads of unfamiliar faces for me. It's always easy having the longest name on the registration list; easy and fast to find :-)

After some coffee I ran off to the main stage where James M. Bach was scheduled for the opening keynote about 'checking versus testing'. In style, the keynote starts with some rock music by AC/DC and James plays the part with a striking pose :-). Interactivity is encouraged and a 2D code is shown to download the deck on-site (saves note-taking), so I have an easy job: I only have to write down the keywords and scribble my doodles down.
My interpretation of this keynote is that checking seems to be the fetish of people like managers, who don't understand that testing is more than automatically running stuff and that checking is part of testing. Testing being 'evaluation by learning through experimentation and exploration, including questioning, modeling, observation, inference, etc.'. It's like morphine: something for professionals to use for a specific purpose, but not to be given to children.
When we look at testing there are four quadrants: spontaneous testing and checking, and deliberative testing and checking. All activities, no matter which quadrant they are in, are useful, but it takes people who understand the matter to really make them valuable. The key is 'making sense', which is the part that can't be automated (probably also the reason why 'sensemaking' has 'sense' or 'sentient' in it ;-))
As I see it, checking is something that can be defined, and when you have difficulty defining it as a specific criterion, you probably have something before you that falls in the category of sentient, non-checkable testing. Checking is something that is derived from algorithms.
In the Q&A I asked a question referring to something James called epistemic testability, which was explained as the things we already know. Together with the mention of the 'history oracle' (the things we see/find that we already know), I wondered how to cope with the things we think we know.
As I interpreted James' answer, this is the core of testing. He referred to the story of the 'Silver Bridge', which had a flaw in it from the beginning, but the problem only emerged after 40 years. He also mentioned having dinner: what are the acceptance criteria there, how are you going to define up front when you are done? It's all about discussion and conversation, but also about having an attitude of acceptance; acceptance that problems can and will be in the things we test. With this knowledge and mind-bender, I went for the coffee break.

After the coffee break James Lyndsay gave a very energetic session about 'A nest of test'. First time I had to take out my laptop in a non-testlab room and test during the track!! How cool is that. Check out the IP:
for some interesting test stuff. I really had a good time puzzling around and figuring out what would cause the things I encountered. It was cool to test with a room full of people, hypothesizing together about the things seen on the screen when changing the parameters. I felt like this is what 'Let's Test' is all about: learning and especially doing together. Sorry for being so short in this part, but being very busy with the tools reduced the time available for blogging...

.... To be continued... and continued it is...

What a fabulous lunch! Good food and a very sunny terrace outside with testing colleagues. It was almost too difficult to drag my ass into the venue again.

But I got myself up to listen to Jean-Paul van Varwijk about the challenges of implementing context-driven testing (at Rabobank International).
Jean-Paul talked about some Dutch context (the Dutch apparently have loads of publications about testing compared to other countries) and the steps that led to the implementation of context-driven testing. Rabobank, partly because of the crisis and the wish to become more agile, changed to an organisation with 'Domain based delivery teams'.
It's surprising to hear about 'thought leadership' in this particular case, since I have often heard the term dismissed as nonsense, because you can't give leadership to thoughts. My take was that a thought leader is someone who knows his (or her!!!) stuff and guides people to investigate new things, to learn, and stimulates education and development; but that notion was mostly honed away. So understand my surprise that the thought leader is described in this presentation exactly as such!
Jean-Paul tells about the uncertainty of not having guidance and direction. He tells about being a bit down about not knowing where the organisation is heading, but lately he is more enthusiastic because the direction is more outspoken, and he's even motivated to organise workshops again. I found this last part of the track the most valuable, since it (again) points out - to me - that having the organisation or management point in a direction, or provide leadership, especially in turbulent times or during change programs/organisational changes (and implementations), is essential to keep your people motivated and stimulated and to keep reminding them that they are invaluable to the organisation, even during these times of turmoil.

After Jean-Paul, Joep Schuurkes took the stage with a track called 'Helping the new tester to get a running start'. He made the analogy with learning to navigate a city to make the point that the 'usual suspects' such as plain documentation, a map, route descriptions, etc., won't make a newbie in the company a happy starter. He had lots of images of his home town of Rotterdam to explain the different aspects of introducing an employee to the company. For instance, showing a picture of Rotterdam right after WWII (flat), he explained that a historic view might not be that interesting for your new team member, since they have to work on the now and on future development; but then again, we (IT in general) are too history-unaware, and an overview is important to know how you got where you are. Slide by slide he adds and adds to the package, only to tell us that we need to become more abstract and take a more guideline-like approach with the following key areas: provide structure, model the application (SANFRANCISCODEPOT heuristic), model your approach to testing (mind the overhead hazard), guide interactions with the application and with the team, empower the new tester (mastery, autonomy, purpose) and last but not least: have fun!

I hoped to warm up in the sun during the afternoon break, the conference room being a fridge. But I ended up having a great conversation about conferences and about German literature being an inspiration for a workshop on reporting (looking forward to seeing it at one of the future conferences!).

Back to the stage in the fridge again. Andreas Faes starts his track, titled "Testing test automation model", by telling the story of the whale, experiencing different things in the "emptiness" of space and defining those things to create its model to understand them. I loved the story about counting: 1,2,3,4,5,6,7,8,9,10,11,12,13, €... 'euro' being a number in the model of his son, who has not grasped the concept of currency yet. By assimilation this model is correct in his son's mind, but anyone who understands currency knows € isn't a number, of course. It's all about understanding models and verifying them... :-). Making a bridge to models in test automation, Andreas explains his path to the present, explaining some historic concepts along the way and addressing what an implicit and an explicit model is, but specifically how to get from an implicit (test) model to an explicit (automated) one. The idea mentioned here, domain-specific language, sounds familiar to me and I can't help but think about 'Kenniskunde' (sorry for the international guys; it's a concept by Sjir Nijssen on the use of proper Dutch language, mathematics and logic in daily use) or 'Kennis Representatie Zinnen' (Google translates this to 'knowledge representation sentences', but I wonder if that carries the same meaning). Like the article, it seems a Dutch principle, but I'm sure there's a non-Dutch version as well. It triggers me to look into this matter more, and it disappoints me a bit that the track is suddenly over. It ended very abruptly and I would have loved to hear more about this, but I guess the fact that I am triggered is also valuable, so I have to be satisfied for now.

Instead of Jacky Franken, Pascal Dufour now takes the stage. Which I find a bit of a shame, since I skipped Jacky's track at an earlier conference knowing I would see it here. The topic of Pascal's track, 'Automation in DevOps and Continuous Delivery', is very relevant for me, so it makes up for the loss. From continuous integration, to continuous delivery, to continuous deployment: 'continuous' seems to me to be about ensuring a constant, fast feedback loop to development, team or customer, depending on which type of 'continuous...' is used. DevOps is then explained because, as I understand it, to be truly agile in development, whether this is XP or Scrum, development and operations should be 'in each other's lap', so to speak; hence DevOps. I got confused during the track about DevOps, as it seemed to be a line of tools for pushing through a development lifecycle, but checking the Wiki set me on track again. Getting back into the track, an example is shown of a check in Cucumber, and a summary of what is possible and what remains to be done. And then suddenly the presentation is over and slides into a discussion. It keeps me wondering: do continuous integration, continuous delivery and continuous deployment also need or imply continuous testing?... or is only checking possible then?...

After testlab rats James Lyndsay and Bart Knaack had finished the testlab report and Huib Schoots closed the official part of the day, the crowd went to the bar or to the hotdog stand by 'dokter Worst' outside, enjoying a hotdog, some fries and beer (or wine, or soda etc.) and some after-conference conversations. I called it a day when I had just finished my hotdog and (after all, it IS almost a summer day) a glass of rosé.
I had an excellent day with good tracks and talks, and I learned a lot. I think this Tasting Let's Test, or as it's called this year, 'Let's Test BeNeLux', is a nice opportunity for those who can't afford the 17,000 Swedish kronor (excluding 25% VAT!!) to attend the full edition. Hope to attend again next year.

maandag 22 september 2014

Sad but True...

As some of you might know, the past year I haven't been very active in blogging. First it was because I had a new job, then because I didn't feel like it (having more free time causes one to slack a bit :-) ), and after that, I felt too bullied, vulnerable and 'attacked' to blog anything. (I think 'Godtesen' with her blog wrote it down very well: http://godtesen-on-test.blogspot.nl/2013/11/being-pramatic-tester.html )

Lately there's been much fuss about the ISO 29119 standard. I've followed the different 'discussions', seen the rise of a petition, seen blogs being written, etc. etc.
I've observed, and my sadness has grown and grown. I'm deeply saddened that a group within the testers' community is perceived and treated as a lesser lifeform by people who think they have a right to do this. I'm saddened that although I have a right to learn, explore and experience things myself, I'm bullied into a certain thought process by fellow testers who deprive me of a learning process of my own, purely on their own false pretence of 'knowing what's good for me'.
I was astonished by one of the responses to a reply I wrote: 'I don't follow the same process as you'... I wasn't even aware I HAD a process, but apparently that stamp has been pressed on me.

I'm astonished by some blog posts which, judging by the content, are based on non-information or not (entirely) correct facts. I'm even more astonished by the number of people who, again judging by the replies, believe what's in there without questioning the content. It scares the s**t out of me that it's believed that easily; sometimes it seems that just because a certain person says something, 'it must be true'.

I once saw a reference to a quote on Wikipedia... it was on Wikipedia so it must have been true... only to find out this person had added the wiki article himself.
I've seen perfectly good replies being 'beaten to death' by responses that shout 'it doesn't matter what you say, it's wrong anyway' non-arguments.
Arguments are made that are of the 'pot calling the kettle black'  persuasion. 
Arguments are made, it seems, for the sake of it, not because they have any constructive value in the discussion.
People with the loudest shout, or with the gift of easy writing, are cutting down what people with small voices, or with difficulty writing, are saying; not on the arguments themselves, but on the way they use words.
It's not about the meaning but about the correct use of words. There just doesn't seem to be any tolerance anymore for hearing the message when a comma or a certain word is used wrongly; except when it's a message of one's own, because then you are supposed to 'get the overall message'.

Yes, I'm one of the 'ISO people', but I'm also a tester, thinker, questioner, learner, explorer and most of all... I'm a human being... sad but true.

dinsdag 26 november 2013

Irish Luck

Hope the 'luck of the Irish' will rub off on me this week, because so far it's been an unlucky week indeed.

I think Murphy's Law is with me currently and hope to leave it there...

Yesterday, during my two-weekly hospital drill, I hurt my big toe. At first, although it hurt a lot, I didn't think much of it, but soon it became really annoying and I called my doctor. He thought it couldn't be broken, given the cause, but since it hurt a lot, he sent me to the First Aid post in the hospital to take some X-rays, just to reassure me that it was fine... Well... it wasn't... it WAS (IS) broken. So they decided I had to get a special sort of shoe.

And I waited and waited... And after an hour my husband went to the desk to ask how long we had to wait, since he was getting very hungry... And they were surprised we were still there... They'd forgotten us, and after ten minutes I got my 'shoe' and was off home...

Today I had to fly to Ireland for SoftTest. When I got to check-in, I kept getting the message that the passenger couldn't be found. I tried several ways to check in, but nothing validated. So I checked my original ticket and went pale... It was booked for the 5th of November. Luckily they had booked my return flight correctly, but now I had a new problem: World Ticket Center doesn't have a desk at Schiphol, and frankly, although they sent me the wrong ticket, I should have checked it when it came in. So the money's gone, alas.

But I had to find out the hard way that there isn't a WTC desk at Schiphol, because different Schiphol personnel kept sending me all over the place. I hobbled from Departures 2, to 3, to 1, just to get back to KLM in 2, where I was advised to go to the service desk, and when I finally got to that desk, I heard I had to go to the Aer Lingus desk in Departures 1. I have never walked this much at Schiphol when I was healthy, and I can assure you, it's no picnic with a broken toe either.

But at the Aer Lingus desk I could still buy a ticket for the flight I had planned to take, and now I'm on the plane, on my way to the runway.

At least the Irish cabin crew already cheered me up a bit when I told them my story; I hope the Irish luck will rub off on me a bit too!

maandag 13 mei 2013

TestNet SpringEvent 2013 - Part 1 - The Tutorial

Today is the 13th of May, the SpringEvent of the Dutch TestNet. I started my journey early today to get to Nieuwegein's NBC, where it's held. I'm travelling by public transport, which goes perfectly well until I get to Utrecht Central Station, where the fast-tram stop has been moved from front to back. I didn't catch the signage (is there any in the station hall?), so I ended up looking for the changed location and missing the tram, resulting in delay... aargh... Well, I got to the venue eventually and on time; I guess that's the most important thing.

The day starts with the tutorials. Since it's my part-time day and I won't have to visit a client in the morning this time, I'm taking the opportunity to attend one. I chose 'Automating Production Simulations for Added Value' by Scott Barber (Twitter: @sbarber). Here are some blog snippets from that tutorial:
Scott starts by telling about the path he followed to get where he is now. I'm wondering where this is going: is there a message, or is it just getting to know him?... It takes a while, but the point is: he didn't follow any programming or testing education, yet he looks at things differently than 'the norm', and we're about to look at things differently during this tutorial. He shows the image about 'nothing can stop automation'.

There's loads of 'me' from Scott, but little 'what's in it for me' so far. But anticipation is building when he says that if you're taking your first steps in automation, he's going to 'melt the brain'; I sure hope so. The audience is really tame up to this moment; there's not THAT much interaction, although Scott is asking questions and relates to what's known in the audience. I guess it takes a while to pick up steam.

Next is the following comic
and the following mind-map, which we can add to (please also look at this map for tutorial content):
http://www.mindmeister.com/nl/291649893?t=WVVr4hE3PO (I also made a PDF of the 12.52 h. version which will be available on 'funtestic.nl' later on...)

What's the point: "I don't care what "framework" you use, they all miss something important!"
He's now mentioning the amazing: http://en.wikipedia.org/wiki/Specialisterne 

Tester's TIP: take a webpage that has loads of validations, save it to your desktop, open it and delete (most of) the JavaScript (it still has to submit though!), open it in your browser, type in everything you want and submit... and you bypass all the front-end / pre-commit data validations.
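The same trick works without editing the page at all: client-side (JavaScript) validation only runs in the browser, so a request built and sent directly to the form's endpoint skips it entirely. A sketch with Python's standard library; the URL and field names are hypothetical:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Build the POST the browser would have sent, minus every client-side check.
# Endpoint and form fields are invented for the example.
payload = {"email": "not-an-email", "age": "-42"}  # values the JS would reject
body = urlencode(payload).encode()
request = Request("https://example.com/signup", data=body, method="POST")

# urlopen(request) would actually submit it; the server's response then
# tells you what is validated server-side -- the validation that counts.
print(body)  # → b'email=not-an-email&age=-42'
```

If the server happily accepts what the front-end would have rejected, you've found the real bug.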

"Most test automation lacks narrowly defined oracles to detect almost anything of value!" is now shown on the sheet; there's also a discussion about all the shades of grey between YES and NO. (My thoughts on this: shades of grey are just very narrowly defined YESes or NOs... even if you allow a deviation of minus 10% or plus 10%, it is still a YES/NO question... it falls between these margins or not.)
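In code, such a 'grey' check collapses back into a yes/no exactly like that; the margin just widens the YES region. A tiny sketch (the 10% is an arbitrary example, not anything from the tutorial):

```python
def within_tolerance(actual, expected, pct=10):
    """Reduce a 'shade of grey' to a yes/no: is actual within ±pct% of expected?"""
    margin = abs(expected) * pct / 100
    return abs(actual - expected) <= margin

print(within_tolerance(95, 100))  # → True  (inside the ±10% band)
print(within_tolerance(89, 100))  # → False (outside it)
```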

 "An automated test's value is mostly unrelated to the specific purpose for which it was written. It's the accidental things that count: the untargeted bugs that it finds" 
Don't stop just because your automated checks work. Add more value with production simulations.
Wow, a nice/surprising fact: only 56% of traffic on websites is really human; the rest is bots and spiders.

It seems to me that automation is all about very nifty 'if-then-elses', the same as the very small yes/no's mentioned earlier; the smaller (more elaborate) those get, the more 'sentient' a test /might/ seem... It becomes cool when this is combined with random data input, so flows are followed differently every time.

It's now 12:44, which means I only have about a quarter of an hour left of this tutorial, and we're running through models (see the mind map for the types!). It's time for me to round up this blog post. The tutorial has been a bit disappointing for me in the sense that the lengthy introduction could have been used for content instead; the tutorial was little hands-on and mostly high-level. It feels like of the 4 hours, only 2 were used effectively on content, which is a pity, because the topic is interesting enough.
Well, I guess my expectations were a bit higher, and although I'm a bit disappointed, I'm inspired enough to go search for more info, which is something too! Now signing off at the close of this tutorial and making myself ready for the second part of the SpringEvent this afternoon and evening!

Thursday 18 April 2013

Client Based Testing ['golden' oldie]

This was my first start of an article that went with an abstract (for TestNet NL)... Because I was so wet behind the ears back then and a relatively new tester, the presentation became a fiasco (to put it mildly), with me more in tears than in a happy mood... It can happen, but I didn't give up! ~Remember that things don't always go perfectly the first time around...
Reading it now, I still stand behind the content and it's still very relevant. I think I was ahead of my time... enjoy... I'm curious what you think of it... (date of last revision: fourth of February 2008... the text below is UNchanged! The presentation was held at the Fall event of TestNet in September 2008.) It wasn't completely finished either; I guess I was too 'frightened' after the presentation fiasco to push ahead... still, it gives a good idea of what I had in mind back then...


Testing has been a hot item the last couple of years. More and more businesses are starting to understand the importance of testing to mitigate their risks and establish a certain level of quality in their product(s).
Over the years testing has evolved from ‘an activity done just before production’ to ‘a structured process of measuring characteristics of a process or system’.
This structured process is, for its part, based on risks (Risk Based Testing) or requirements (Requirement Based Testing). Methods have also been developed that involve 'the business' or 'the management' more, because one typically assumes that the prioritization of risks or requirements is best set by 'the business' or 'the management'.
Testers and test managers repeatedly seem to fail to involve the 'real' client when developing policies, strategies or plans. Not the one who pays the money, but the people who are meant to work with the product and/or processes should have an important contribution at this stage. This holds especially in companies where requirements are poor and there is no time or money to develop them (for example in Agile testing), or in companies where there are too few or too many stakeholders to determine the prioritization of risks (layering, budgets etc.).
Hence the introduction of Client Based Testing, or CBT for short. CBT should be approached in two ways: from the tester's view and from the client's view.

Firstly the tester's view, or in particular the test manager's view. When setting up the test plan, mostly risks or requirements are used to determine the test activities to be performed; when these cannot be produced by the organization, the manager will mostly look for specifications and/or use cases and build the tests on that basis, forgetting a very important and very accurate source: the end users or production unit of the company. Even though the British standard provides for this group of people as a possible test basis, most test managers 'forget' to involve them. In practice I've found that this has a couple of reasons.

  1. The 'old school' tester (now manager) has a natural preference for non-human or non-communicative input, having been a programmer in the past
  2. The test manager (formerly not a tester) has a natural tendency not to involve the 'common people' and to communicate only with people higher in the organizational hierarchy
  3. People 'on the floor' tend to have no time available (or, put differently: have a natural dislike of management people and/or of new software being developed (and tested), which implies the possibility of rendering them unnecessary) or do everything not to help (to which the test manager's reaction is to ignore them in the first place)

Client Based Testing obliges the tester to develop stronger communication skills, but it also requires qualities like empathy, flexibility and the ability to translate jargon.

Secondly the client's view. Some years ago I received a questionnaire called TUSK, which Isabel Evans had developed (and was still developing). The TUSK list is based on the SUMI list for software, but in this case the questions are translated to how the client or organization experiences the tester or test team, and which parts of the testing activities should be improved to the client's liking. I used this list (as a pilot for an article on TUSK usage) in different organizations and found that it was not the information the answers provided that helped most in improving the test process, but the time spent with the customer; listening to the client was what helped most. The client really felt understood and was more willing to participate and cooperate with the test team in improving their deliverables and processes.