Testing Standards

Mark Wieder mwieder at ahsoftware.net
Wed Nov 9 12:09:47 CST 2005


Tom-

Wednesday, November 9, 2005, 6:16:12 AM, you wrote:

> Thank you, this is exactly what I was looking for. I have spent the
> last week and a half going over the specifications and using our Rev
> project as proof of concept. Since we are using an outside company  
> for the coding, they have a person doing the use-case scenarios and
> specs. They will also have a person writing the QA tests when all of
> this is done. So far I think we are on track with what you are  
> describing here. Having the Rev demo really makes all of this easier
> (for me). I just don't have the working experience in this process  
> and am a little nervous. Things like what are the next steps and are
> we on track and is this enough to describe etc. are what is  
> concerning me.

Some answers in brief here, but I suggest we take this offline because
I think it is rapidly getting OT and I don't expect there's a lot of
interest in the subject on the list. Also, if you ask any three people
these questions you're likely to get eight conflicting answers.

1. Make sure that the person writing the QA tests isn't the same
person writing the code. That never works.

1A. What is the person who's writing the QA tests using as a reference
to write them from? As a first pass, looking at the code isn't a good
idea - that resolves to the same thing as point 1. As a second pass,
though, white-box testing is necessary, which leads to...

2. Code coverage is important: make sure that every line of code is
tested. I don't know what your budget is for QA, but there are various
tools out there for testing this. You want to make sure that each line
of code is hit at least once by some test case; every "if" conditional
path is tested; every "else" gets tested; every "if" has a matching
"else", etc. Code walkthroughs are very useful if you have the time
and budget to handle them.
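To make the branch-coverage idea concrete, here is a minimal sketch in
Python (clamp is a made-up function, not from the discussion above):
each test forces a different path through the if/elif/else, so together
they touch every line.

```python
def clamp(value, low, high):
    """Clamp value into the range [low, high]."""
    if value < low:
        return low
    elif value > high:
        return high
    else:
        return value

# one test case per branch - any one of these alone would leave
# some lines of clamp() untested
assert clamp(-5, 0, 100) == 0      # hits the "if" branch
assert clamp(250, 0, 100) == 100   # hits the "elif" branch
assert clamp(42, 0, 100) == 42     # hits the "else" branch
```

A coverage tool automates exactly this bookkeeping: it reports which
branches no test ever reached.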

Software testing is usually divided into: unit testing, functional and
boundary testing, integration testing, and regression testing.

Unit testing is what the developer does. Builds aren't handed off to
QA for further testing until all unit tests have passed. In Agile
(Extreme) Programming this process is handled by writing the tests
first and then writing the code to pass the tests. Xtalk example:

-- spec says passwords have to be greater than five chars
function CheckPasswordLength
  -- EnterPassword returns true if password accepted
  if EnterPassword("") then
    return false -- empty password has to be rejected
  end if
  if EnterPassword("hello") then
    return false -- exactly five chars - has to be greater than five
  end if
  if not EnterPassword("bonjour") then
    return false -- seven chars - should pass this one
  end if
  if not EnterPassword("hello1234") then
    return false -- are numbers OK? not clear from the spec - ask!
  end if
  return true
end CheckPasswordLength
-- now write the EnterPassword function so that CheckPasswordLength passes
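For anyone more comfortable outside xTalk, the same test-first idea
sketched in Python (enter_password and the greater-than-five rule are
just stand-ins for the spec above):

```python
# Test-first: the assertions below are written before the function
# exists; enter_password() is then written to make them pass.

def enter_password(password):
    # written *after* the tests, to satisfy them:
    # accept only passwords longer than five characters
    return len(password) > 5

assert not enter_password("")        # empty - must be rejected
assert not enter_password("hello")   # exactly five - still too short
assert enter_password("bonjour")     # seven chars - should pass
assert enter_password("hello1234")   # digits: spec is silent, assume OK
```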

Functional and boundary tests are written to make sure that the code does
what it's supposed to do and doesn't do what it's not supposed to do.
I'm reminded of Mel Gibson and Danny Glover in the Lethal Weapon
movies: "Is it 1-2-3 and we go on 4, or is it 1-2-3-4 and then we go?"
If the spec says that valid values are 0-100, you need to ensure that
negative values and values greater than 100 are rejected and you want
to make sure to test values of zero and 100 as well.
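A quick sketch of that 0-100 case in Python (in_valid_range is a
hypothetical validator, invented here for illustration):

```python
def in_valid_range(value):
    # spec: valid values are 0-100 inclusive
    return 0 <= value <= 100

# off-by-one errors live at the edges, so test both sides of each boundary
assert not in_valid_range(-1)    # just below the range
assert in_valid_range(0)         # lower boundary itself
assert in_valid_range(100)       # upper boundary itself
assert not in_valid_range(101)   # just above the range
```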

Integration testing is the next step - after the code has been tested
by itself, what happens when this part is put into the system? If this
is a module of a larger app that you've been testing, there may be new
use cases, and there may be interactions between the unit under test
and the larger environment.

Finally, regression testing is what happens when you get to version 1.1.
Running all the tests again will probably bring up some problems
because some things have changed since the last version. The existing
test suite will have to be archived and modified to adapt to the
changes, new tests added for additional features, and most
importantly, the new build shouldn't break any existing tests in
places that haven't been explicitly changed.
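One common regression-test pattern, sketched in Python (strip_comment
and its version-1.0 bug are invented for illustration): once a bug is
fixed, a test pinning the old failure joins the suite so the next
version can't silently reintroduce it.

```python
def strip_comment(line):
    """Remove a trailing "#" comment from a line of text."""
    # v1.0 bug: lines *starting* with "#" came back unchanged;
    # the find()-based fix below handles that case too
    pos = line.find("#")
    if pos == -1:
        return line
    return line[:pos].rstrip()

# the test that pinned the v1.0 bug stays in the suite forever
assert strip_comment("# full-line comment") == ""
# plus the tests that already passed, to prove the fix broke nothing
assert strip_comment("value = 3  # trailing comment") == "value = 3"
assert strip_comment("no comment here") == "no comment here"
```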

There are various tools available to automate these processes, but
they're pricey, have limitations in what they can and can't do, and
usually have steep learning curves.

> Do you recommend any 'good' books on this process? Maybe on what to
> expect in the specs? Maybe what is most often left out and or what  
> most often causes the most problems?

Again, this is probably a matter of personal taste. I'm fond of Cem
Kaner's books "Testing Computer Software" and "Lessons Learned in
Software Testing". The first is a classic, was out of print for some
time and is now back in a second edition with not much changed. This
means that the examples are all out of date (MSDOS and such), but if
you can ignore that the methodology is well thought out. The second I
enjoyed just as a fun read, but you have to be pretty weird to read a
book on software testing for fun. The lessons and examples in there
might be more in-depth than you want to get unless you're diving into
software testing in a big way. You can find these online at Amazon or
Powells if you have to, but see if your local independent bookstore
can order them first.

What's left in and out: I usually find that what's left out causes
more problems than what's put in. It's easy to write test cases and
very hard to write a complete set of test cases. When I'm developing
something (this goes back to point 1) I find that I think I'm doing a
good job testing what I've written, but as soon as I hand it to
someone else they try a use case that I never thought of and it all
falls apart rapidly. I hate to say it, but forethought pays off. So do
peer reviews for sanity checks - "is closing the window the same as
pressing the Done button?" "what happens if someone pastes text in
here instead of typing it?" "what happens if someone puts in quotes
and ">>>>" symbols in a url field?", etc.
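Those review questions translate directly into hostile-input tests.
A rough Python sketch (is_safe_url_input is invented here, and real
URL validation is much more involved than this):

```python
def is_safe_url_input(text):
    # reject characters that have no business in a URL field
    forbidden = set('"\'<> ')
    return len(text) > 0 and not any(ch in forbidden for ch in text)

assert is_safe_url_input("http://www.qalinks.com/")
assert not is_safe_url_input('http://example.com/">')   # stray quote
assert not is_safe_url_input(">>>>")                    # pasted junk
assert not is_safe_url_input("")                        # empty paste
```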

Some resources:

Check out:
http://www.qalinks.com/
http://www.opensourcetesting.org/testmgt.php

OK - got more long-winded than I thought. I'll duck out now.

-- 
-Mark Wieder
 mwieder at ahsoftware.net




More information about the use-livecode mailing list