Software

Automated Software QA/Testing?

nailbite writes "Designing and developing software has been my calling ever since I first used a computer. The countless hours/days/months spent going from imagining to actualizing are, to me, enjoyable and almost a form of art or meditation. However, one of the aspects of development that sometimes "kills" the fun is testing or QA. I don't mind standalone testing of components, since usually you create a separate program for that purpose, which is also fun. What is really annoying is testing an enterprise-size system from its UIs down to its data tier. Manually performing a complete test on a project of this size sucks the fun out of development. That's assuming all your developers consider development fun (most apparently don't). My question is: how do you or your company perform testing on large-scale projects? Do you extensively use automated testing tools, and if so, can you recommend any? Or do you still do it the old-fashioned way? (manually operating the UI, going through the data to check every transaction, etc.)"
  • by irokitt ( 663593 ) <[archimandrites-iaur] [at] [yahoo.com]> on Saturday July 31, 2004 @01:25PM (#9853399)
    However, one of the aspects of development that sometimes "kills" the fun is testing or QA.
    I'm sure that quite a few Microsoft employees agree wholeheartedly.

    Laugh, it's good for you.
  • Manual Testing (Score:4, Interesting)

    by Egonis ( 155154 ) on Saturday July 31, 2004 @01:26PM (#9853404)
    Although I haven't personally used many testing tools:

    I created an enterprise application composed of client/server apps -- the best test was a small deployment of the application to users who are apt to help you conduct the test. Over a few weeks, I found bugs I never caught with my own manual tests.

    Applications that test your code, etc. are great from what I have heard, but will not catch human-interface-related issues, e.g. GUI mess-ups, invisible buttons, etc.
    • Re:Manual Testing (Score:4, Insightful)

      by GlassHeart ( 579618 ) on Saturday July 31, 2004 @02:20PM (#9853675) Journal
      What you're referring to is called "beta testing", where a feature-complete product is released to a selected group of real users. This is a highly effective technique, because it's simply impossible to think of everything.

      However, if you go into beta testing too early, then major basic features will be broken from time to time, and you'll only irritate your testers and make them stop helping you. This is where automated tests shine, because they help you ensure that major changes to the code have not broken anything.

      Put another way, automated tests can verify compliance with a specification or design. User testing can verify compliance with actual needs. Neither can replace the other.

    • Here's a specific real-life question: how can one validate a touchscreen voting system? In a major metropolitan area, any given election might have 50 ballot designs (different legislative districts, languages, and multiple party primaries). Each ballot might have 80 choices to make, each of which might have 2 to 8 options. You're looking to eliminate errors that might happen less than one in a hundred times, and you have to use your fingers to test 1,000 terminals.

      Since in practice you won't be able to review

      • This doesn't sound too complicated.

        First, you have to come up with a general plan.
        - What exactly do you want to test, and to what degree of certainty?
        - What are your priorities? Here I would say that the error rate must be less than 1%.
        - What are your resources (i.e. how many testers, and their qualifications)?
        - What is the timeline?
        - What is the estimated cost (and ultimately, what is the real profit from this stuff)?

        Then you start to develop a small test plan. First, you need a precise behavioral m
      • Sure,

        Set up small-scale voting stations in department stores, youth centres, community centres, restaurants, and libraries for such things as:

        - New services: which are most popular?
        - Materials for libraries: what would be used most frequently?
        - Most popular items on a menu

        When done across the board, this is a nice way to get free testing, and to please many different agencies and companies by providing this free service, thus improving the image of the voting system.
  • by F2F ( 11474 ) on Saturday July 31, 2004 @01:26PM (#9853405)
    how about we go back to basics and read the proper books on computer science? no need for your shmancy-fancy-'voice debugged'-automagically-'quality assured' offerings, thanks.

    i'll stick with The Practice of Programming [bell-labs.com]. at the very least i trust the people who wrote it to have better judgement.
  • by dirk ( 87083 ) <dirk@one.net> on Saturday July 31, 2004 @01:26PM (#9853406) Homepage
    The first thing you need to learn is that you shouldn't be doing large scale testing on your own systems. That is just setting yourself up for failure, since the only real testing is independent testing. Preferably you should have full-time testers who not only design what needs to be tested, but also how the testing will be done and who will do the actual testing. Where I work, we have 2 testers who write up the test plans and then recruit actual users to do the testing (because the users can then not only get some exposure to the system, they can also suggest enhancements for the next version). Testing your own work is a huge no-no, as you are much more likely to let small things slide than an independent tester is.
    • by wkitchen ( 581276 ) on Saturday July 31, 2004 @01:45PM (#9853506)
      Testing your own work is a huge no-no, as you are much more likely to let small things slide than an independent tester is.
      Yes. And also because you can make the same mental mistakes when testing that you did when developing.

      I once worked with a dyslexic drafer. He generally did very good work, except that his drawings often had spelling errors. When he looked at his own drawings, the errors were not obvious to him like they were to everyone else. Most people don't have such a pronounced systematic tendency to make some particular error. But we all occasionally make mistakes in which we were just thinking about something wrong. And those mistakes are invisible to us because we're still thinking the same wrong way when we review it. So having work checked by someone other than the one who created it is a good practice for just about any endeavor, not just software.
      • I once worked with a dyslexic drafer. He generally did very good work, except that his drawings often had spelling errors.

        Imagine that. A "drafer" who makes spelling errors.

      • More importantly, you tend to excuse and work around your own quirks and bugs. If you know the application will crash if you do certain things, you tend to avoid that - often using clunky workarounds instead.

        This may sound very strange to some; why not fix the problem instead of working around it all the time? But the reality is that this behaviour is really, really common. I've seen it so many times when someone is sitting down to test something and the developer is standing over the shoulder saying "no,
        • Anything you do more than twice should be automated


          Interesting point. That's one of my favorite arguments in the CLI vs. GUI debates. Writing instructions on how to do something in a console environment takes about the same work as writing a script that performs the job. After you write the script, nobody needs to do the work by hand anymore.

    • you shouldn't be doing large scale testing on your own systems

      I would qualify that statement by saying that you shouldn't be the only one to test your own system. Development should do unit testing. QA should do functional testing.

      The sooner you test, the sooner you find bugs. The more experience you have (with testing and with the product), the more productive you can be. End user testing is great, but it's only one part of the process.

    • To clarify: We do have a separate QA team that performs large-scale testing. I was hoping to help them by providing software to at least partially automate their processes. Testing is not a job that exercises a lot of creativity, unlike development. There should always be an element of fun in any kind of job/work; perhaps automating their menial tasks would lift their spirits a bit. At the very least they get a new toy to play with. I agree that the dev team should not perform QA, mainly becau
      • Automation is only good for stable applications.
        Elements that are going to be, or may be, further developed will negate any benefits achievable from test automation.
        The "industry standard" tools, such as Mercury's, Compuware's or IBM Rational's test automation suites, require a significant amount of coding effort themselves.
        "Simple" design changes can blow away weeks of scripting.

        Automating stable components is the best way to go. As each component is completed and stable, it can be automated for regression testin
        • Automation is also good for complex embedded devices that are designed to an open specification. Along the same lines, it's also good for testing servers written to the specifications.

          What black box automation needs (black box: where the testers are not contaminated by the code, and cannot contaminate the code) is a way to inject events into the device under test (DUT) and a way to observe and record repeatable results.

          For complex embedded devices that communicate with other things (for example, cellphones, or
      • Testing is not a job that exercises a lot of creativity, unlike development.

        I have to take exception to this statement. I am a full-time tester, but I am also a very good programmer. Testing of large complex systems absolutely requires creativity. It may be that my problem domain is different from that of most other testers. I work with complex telecom equipment (e.g. systems with 1200+ separate CPUs, with full redundancy, carrying terabytes of traffic). Most of our developers don't even understand how the sy
      • Testing is not a job that exercises a lot of creativity, unlike development.

        Thanks for the laugh.

        I guess it depends on what you're testing, and whether you hire QA people or testers.

        Testers generally run other people's test plans, but who wrote that test plan, and did they have a spec, or just a few conversations with the developers? I've only had one contract where the test plans I wrote were based on a complete specification. I've never had a job where we got even 80% of a spec before the test plan
    • You can avoid a lot, though, by writing libraries for most of the common functionality of your application, isolating functions that don't need to be exposed to the world, and writing unit tests for each function in the library. Most programmers won't devote the time to do this, though. In fact, most of the programmers I've met wouldn't even know how.

      The only company I've ever run across that did testing well was Data General. I was on their team that was doing an audit of the entire C standard library

    • I often say that our testing people are very special people. That's because I'm a programmer. I write code because it's fun, and I consider testing to be a completely boring, mind-numbing, horrible way to spend a day. I don't say that they are special people to denigrate them, however. They are special people because through whatever quirk of environment and genetics, they actually love that kind of work, and they truly excel at it. It's a thing that completely mystifies me, but I won't question it. Because
  • by eddison_carter ( 165441 ) on Saturday July 31, 2004 @01:28PM (#9853418)
    Nothing can compare to having a dedicated test staff. At the last software place I worked (part-time, in school, while getting my engineering degree), we had 3-6 college students working on testing most of the time (we would also be given some small projects to work on).

    Testing goes far beyond what any automated system can test, if you have a user in there somewhere. You also need to check things like "How easy is it to use?" and "Does this feature make sense?". We also suggested features that the program did not have but that, from our experience using it, we thought it should have.
    • by DragonHawk ( 21256 ) on Saturday July 31, 2004 @02:54PM (#9853871) Homepage Journal
      When Fred Brooks published his book, The Mythical Man-Month, one of the things he noted was that testing should account for *more than half* of the budget of a software project. Actual design and coding should be the minority. This is because software is complex, inter-related, easy to get wrong, and not obvious when it is done wrong.

      Of course, nobody wants to do that, because it's expensive and/or boring. Thus we have the state of software today. Just like we had the state of software back in 1975 when he wrote the book.

      It never ceases to amaze me that we're still making the exact same mistakes, three decades later. If you work in software engineering and you haven't read The Mythical Man-Month, you *need* to. Period. Go do it right now, before you write another line of code.
      • I agree with most of what you say, except for the "boring" part. The Mythical Man-Month is still relevant today.
        Just as there is a creative rush in building a working software system out of the ether, there is an equal rush and creative element in software testing.

        Testers and developers think differently but have the same purpose in mind. At the end of the day, both want the best possible product to be delivered.

        I suggest signing up to StickyMinds [stickyminds.com] as a good place to start.
      • Based on my experience, I have to agree with this. I think the proportion has declined a bit with time, but it is still close to half the time.

        Of course, this usually isn't on the schedule. Management's view is that if you spent 6 weeks testing and few bugs were found, then the time was wasted and the product could have shipped out earlier.

        But regardless of the schedule, the test time that Brooks states will get spent. Often that time is spent on repeated testing as a result of bug fixes. Last minute
    • Automated testing tools are best suited for regression testing. Regression testing is the set of test cases that are performed over and over again with each release. Its main function is to make sure that the new features did not break anything that was not supposed to change.

      Our test group uses a product called Hammer (sorry, but I don't know where they got it or how much they paid for it) for their regression testing. Hammer has its own scripting language (maybe VB-based) and its own database that is used

  • I agree about programming. I prefer the design phase. I like to design a system to the point that programming it is a cinch. What really sucks about software development is not testing; it is meetings. Meetings suck the fun out of programming. Stupid, senseless, time-wasting meetings. Scott Adams hits the nail on the head about meetings every time.
  • by drgroove ( 631550 ) on Saturday July 31, 2004 @01:29PM (#9853422)
    Outside of unit testing and limited functional testing, developers should be doing QA on their own code. That's a bit like a farmer certifying his own produce as organic, or a college student awarding themselves a diploma. It misses the point. QA function, automated, regression et al testing is the responsibility of a QA department. If your employer is forcing you to perform QA's functions, then they obviously don't "get it".
    • Don't you mean "should not be doing QA on their own code.".... ?
      • Don't you mean "should not be doing QA on their own code.".... ?

        In actual practice, he got it right by forgetting the "not". ;-)

        I usually present the QA folks with lots of test code. This makes them my friends, since they are usually under pressure to get the product out the door and don't have time to think up all the tests themselves. I don't either, of course, and I can't ever give them something complete. And I have the usual developer's problem of not being permitted any contact with actual user
        • Quite often, I get a bit of flak from management for being too friendly with the QA people. They usually have this silly "clean-room" concept for how it should be done.

          Your management has their hearts in the right place. The problem with the developer providing the QC of their own code is that they may miss the same problems in their test code as they did in development.

          I think of the system testing at my company as comprised of two main activities: integration testing and feature testing. I can write

    • I do a fair bit of QA and testing of our software product -- and if I had a nickel for every time it's been apparent that a developer has checked in code they clearly NEVER tested...

      A developer has some responsibility to ensure their code at least functions in the context of the overall application before making it my problem. Just because it compiles does not mean it is done.
  • Testing (Score:4, Informative)

    by BigHungryJoe ( 737554 ) on Saturday July 31, 2004 @01:29PM (#9853423) Homepage
    We use certification analysts. They handle the testing. Unfortunately, the functional analysts that are supposed to write the tests rarely know the software well enough to do that, so some of my time is spent helping to write the black box tests. Writing good tests has become especially important since most of the cert analyst jobs are being sent to India and aren't on site anymore.

    bhj
    • Re:Testing (Score:2, Insightful)

      by SoSueMe ( 263478 )
      Certified analysts should be able to work from well-written specifications.
      You do provide complete and accurate TDS (Technical Design Specifications) for architectural details and FDS (Functional Design Specifications) for system operation, don't you?
  • Test Matrix (Score:5, Interesting)

    by Pedro Picasso ( 1727 ) on Saturday July 31, 2004 @01:30PM (#9853428) Homepage Journal
    I've used auto-test thingies, ones that I've written, and packaged ones. Some situations call for them. Most of the time, though, it's just a matter of doing it by hand. Here's what I do.

    Create a list of inputs that includes two or three normal cases as well as the least input and the most input (boundaries). Then make a list of states the application can be in when you put these values into it. Then create a graph with inputs as X and states as Y. Print your graph and have it handy as you run through the tests. As each test works, pencil in a check mark in the appropriate place.
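
    That grid is easy to generate; a minimal sketch in Python (the inputs and states below are illustrative, not from the post):

    import itertools

    # Illustrative boundary values and application states; substitute your own.
    inputs = ["empty", "typical-1", "typical-2", "minimum", "maximum"]
    states = ["logged out", "logged in", "record open", "record locked"]

    # Print a blank checklist: one row per (input, state) cell to pencil off.
    for test_input, state in itertools.product(inputs, states):
        print(f"[ ] input={test_input:12} state={state}")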

    Now that you've automated the system to the point where you don't need higher brain functions for it, get an account on http://www.audible.com, buy an audio book, and listen to it while you run through your grid. It still takes a long time, but your brain doesn't have to be around for it.

    This is going to sound incredibly elementary to people who already have test methodologies in place, but when you need to be thorough, nothing beats an old-fashioned test matrix. And audiobooks are a gift from God.

    (I'm not affiliated with Audible, I just like their service. I'm currently listening to _Stranger in a Strange Land_ unabridged. Fun stuff.)
  • by TheSacrificialFly ( 124096 ) on Saturday July 31, 2004 @01:31PM (#9853433)
    It's extremely difficult to develop and maintain automation on any enterprise-size system. One of the big problems management has with automation, I've found, is that once they've put the money into initially developing the automation, they think it will run completely automatically forever more.

    From my personal experience at one of the world's largest software companies, automation maintenance for even a small suite (200-300 tests, 10 dedicated machines) is a full-time job. That means one person's entire responsibility is making sure the machines are running, making sure the tests aren't returning passes and fails for reasons other than actually exercising the product, and changing the automation whenever either the hardware or the software changes. This person must know the automation suite, as well as the tests being performed, intimately, and must also be willing to spend his days being a lab jockey. It's usually pretty difficult to find these people.

    My point here is that even after spending many dev or test hours developing automation, in no way is it suddenly complete. There is no magic bullet to replace a human tester; the only thing you can do is try to improve his productivity by giving him better tools.

    -tsf
    • by Bozdune ( 68800 ) on Saturday July 31, 2004 @01:54PM (#9853546)
      Parent has it right. Most automation efforts fail because the test writers can't keep up with the code changes, and not many organizations can pay QA people what you need to pay them if you expect them to be programmers (which is what they need to be to use a decent automation tool). Plus, one refactoring of the code base, or redesign of the UI without any refactoring of the underlying code, and the testers have to throw out weeks of work. Very frustrating.

      Even in the best case, automation scripts go out of date very quickly. And, running old scripts over and over again seldom finds any bugs. Usually nobody is touching the old functions anyway, so regression testing is largely pointless (take a lude, of course there are exceptions).

      I think the most promising idea on improving reliability I've seen in recent years is the XP approach. At least there are usually four eyes on the code, and at least some effort is being put into writing unit test routines up front.

      I think the least promising approach to reliability is taken by OO types who build so many accessors that you can't understand what the hell is really going on. It's "correctness through obfuscation." Reminds me of the idiots who rename all the registers when programming in assembly.
      • This is one part of extreme programming I like very much. The idea is to write the test cases before you write the software. That way, you're testing to specification, not implementation.

        Even in the best case, automation scripts go out of date very quickly. And, running old scripts over and over again seldom finds any bugs.

        To this I must respectfully disagree. In small(er) projects, it might be closer to the truth, but from my experience regression testing is vital. Regression testing is mainly useful

    • automation maintenance for even a small suite ... is a full time job

      Excellent point. I am fortunate to be able to work on test automation full time, and to have at least one other person to do most of the "lab jockey" chores. My day is divided between improving the tools, writing new tests, and analyzing results (which often requires revising a test).

      There are times when I resist adding new tests (usually a big pile of stored procedures that a developer or another tester wrote), because I don't have the

  • is an oxymoron.
    Difficult if there is no logic and no interactions.

    Scripts will be of some help.
    Probably best will be long, complicated sequences of operations whose net effect must logically be precisely nothing.

    If you're lazy like me, integrate first, and make every module responsible for not going berserk in the presence of buggy neighbors. Too much software seems designed to assume that everything else is perfect. The old game of "gossip" shows that even if everything is perfect, your idea of perfect and m
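
    The "net effect must be precisely nothing" idea above translates directly into an automated check. A toy sketch in Python, with a made-up Account class (nothing here is from the post):

    class Account:
        def __init__(self, balance=0):
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    def test_round_trip():
        acct = Account(balance=100)
        # A long sequence of operations that should cancel out exactly.
        for amount in (7, 13, 50, 1):
            acct.deposit(amount)
            acct.withdraw(amount)
        assert acct.balance == 100, "net effect was supposed to be nothing"

    test_round_trip()
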
  • Here is the standard management response to automating anything:

    "We don't have time for that. Just get the QA testing complete so we can start the layoffs."

    This basically makes the entire question of automating processes academic. Now, if automating processes can lead to massive job loss, salary savings and bonuses, it might actually be approved.

    Long-term value is never EVER approved instead of short-term pocket-stuffing, EVEN IF a business case can be made for it. I've seen near-perfect business cas
    • While it is true that companies are starting to realize that automation can help them with the quality of their software, companies that lay off testers or don't perform manual ad-hoc testing on top of what's automated are most likely poorly managed.

      The thing is, once you automate something, your automation will walk the same code path that was fixed after you logged bugs the first time you ran that test case. It is very likely that the code paths you're testing will never be touched again and therefor
  • by cemaco ( 665884 ) on Saturday July 31, 2004 @01:39PM (#9853485)
    I worked 6 years as a Quality Assurance Specialist. You cannot avoid manual testing of a product. Standard practice is to manually test any new software and automate as you go along, to avoid having to go over the same territory each time there is a new build. You also automate specific tests for bugs after they are fixed, to make sure they don't get broken again. My shop used Rational Robot from IBM. There are a number of others; Silk is one I have heard of but never used.

    Developers often have an attitude that Q.A. is only a necessary evil. I think part of it is because it means admitting that they can't write perfect code. The only people I have seen treated worse are the help desk crowd (another job I have done in the past). The workload was terrible, and when layoff time came, who do you think got the axe first?

    As for developers doing their own testing? That would help some, but not all that much. You need people with a different perspective.
  • TDD (Score:4, Insightful)

    by Renegade Lisp ( 315687 ) * on Saturday July 31, 2004 @01:41PM (#9853490)
    I think one answer may be Test Driven Development (TDD). This means developers are supposed to create tests as they code -- prior to coding a new feature, a test is written that exercises the feature. Initially, the test is supposed to fail. Add the feature, and the test passes. This can be done on any level, given appropriate tools: GUI, End-to-End, Unit Testing, etc. Oh, did I mention JUnit? The tiniest piece of code with the most impact in recent years.
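
    The cycle is easy to sketch with Python's JUnit-style unittest module (the shopping-cart feature is invented for illustration): write CartTest first, watch it fail because Cart doesn't exist yet, then write just enough Cart to make it pass.

    import unittest

    class Cart:
        """The feature: written only after the test below had failed."""
        def __init__(self):
            self._items = []

        def add(self, item):
            self._items.append(item)

        def count(self):
            return len(self._items)

    class CartTest(unittest.TestCase):
        def test_added_item_is_counted(self):
            cart = Cart()
            cart.add("book")
            self.assertEqual(cart.count(), 1)

    if __name__ == "__main__":
        unittest.main()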

    I came across this when I recently read the book by Erich Gamma and Kent Beck, Contributing to Eclipse. They do TDD in this book all the time, and it sounds like it's actually fun.

    Not that I have done it myself yet! It sounds like a case where you have to go through some initial inconvenience just to get into the habit, but I imagine that once you've done that, development and testing can be much more fun altogether.

  • by kafka47 ( 801886 ) on Saturday July 31, 2004 @01:46PM (#9853511) Homepage
    At my company, we have a small QA group that tests several enterprise client-server applications, including consumer-level applications on multiple platforms. To exhaustively test all of the permutations and platforms is literally impossible, so we turn to automation for many of the trivial tasks. We've developed several of our own automation harnesses for UI testing and for API and data verification testing. The technologies that we've used:
    - Segue's SilkTest [segue.com]
    - WinRunner [wilsonmar.com]
    - WebLoad [radview.com]
    - Tcl/Expect [nist.gov]

    There are *many many* problems with large-scale automation, because once you develop scripts around a particular user interface, you've essentially tied that script to that version of your application. So this becomes a maintenance problem as you go forward.

    One very useful paradigm we've employed in automation is to use it to *prep* the system under test. Many times it's absolutely impossible to create 50,000 users, or 1,000 data elements, without using automation in some form. We automate the creation of users, we automate the API calls that put the users into a particular state, and then we use our brains to do the more "exotic" manual testing that stems from the more complex system states we've created. If you are to embark on automating your software, this is a great place to start.
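
    A sketch of that prep step in Python, assuming a hypothetical HTTP API for user creation (the endpoint and field names are invented):

    import json
    import urllib.request

    BASE_URL = "http://test-system.example.com/api"  # hypothetical endpoint

    def create_user(n):
        # Drive the same API the product uses, not the GUI: faster and repeatable.
        payload = json.dumps({"login": "loadtest%05d" % n, "role": "member"}).encode()
        req = urllib.request.Request(BASE_URL + "/users", data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # Prep 50,000 users so testers can start from an interesting system state.
    for n in range(50000):
        create_user(n)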

    Hope this helps.
  • Design into the app, up front, the ability for every function and transaction to be scripted. Have diagnostic hooks that can interrogate and verify the state of the app, verify the integrity of database transactions, etc. Then all of the testing can be automated except for the GUI itself.

    As a bonus you'll end up with a scriptable app that advanced users will love.
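
    One way to sketch that design in Python: route every user action through a single command layer that test scripts can call too, and expose a diagnostic hook (all names here are invented):

    class OrderApp:
        """Every GUI action funnels through execute(), so scripts can drive it."""

        def __init__(self):
            self.orders = {}
            self._next_id = 1

        def execute(self, command, **args):
            # Single scripting entry point, shared by GUI handlers and tests.
            return getattr(self, "cmd_" + command)(**args)

        def cmd_create_order(self, customer):
            order_id = self._next_id
            self._next_id += 1
            self.orders[order_id] = {"customer": customer, "lines": []}
            return order_id

        def diagnostics(self):
            # Hook for tests to interrogate and verify internal state.
            return {"order_count": len(self.orders), "next_id": self._next_id}

    app = OrderApp()
    app.execute("create_order", customer="ACME")
    assert app.diagnostics()["order_count"] == 1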

  • by Anonymous Coward on Saturday July 31, 2004 @01:47PM (#9853517)
    I was part of a team at a major global company that looked into automated testing tools for testing GUI front-end web applications. These were both staff-facing and customer-facing applications.

    We evaluated a bunch of tools for testing the front-end systems, and after a year-long study of what's in the marketplace, we settled on the Mercury Interactive [note: I do not work for them] tools: QuickTest Professional for regression testing, and LoadRunner for stress testing.

    No one product will test both the front and back ends, so you will need to use a mixture of commercial tools, open-source tools, system logs, manual work, and some black magic.

    Our applications are global in scope. We test against Windows clients, mainframes, UNIX boxes and mid-range machines.

    The automated tools are definitely a blessing, but they are not an end-all-be-all. Many of the testing tool companies just do not understand "global" enterprises, and working to integrate all the tools amongst all your developers, testers, operations staff, etc. can be difficult.

    Getting people on board with the whole idea of automated tools is yet another big challenge. Once you have determined which toolset to use, you have to do a very serious "sell" job on your developers, and most notably, your operations staff, who have "always done their testing using Excel spreadsheets".

    Another big problem is that no product out there will test both the front end and the back end. You will have to purchase tools from more than one vendor to do that. A tool for the back-end iSeries, for example, is Original Software's TestBench/400. Again, this does not integrate with Mercury's tools, so manual correlation between the two products will be needed.

    You can only go so far with these tools; they will not help you to determine the "look and feel" across various browsers - that you will have to do yourself.

    QuickTest Professional does have a Terminal Emulator add-in (at additional charge) that allows you to automate mouse/keystrokes via Client Access 5250 and other protocols.

    The best way to determine your needs is to call up the big companies (CA, IBM, Mercury) and have them do demos for your staff. Determine your requirements. Set up a global team to evaluate the products. Get demo copies and a technical sales rep to help you evaluate in your environment. Compare all the products, looking at the global capability of the product, as well as 24/7 support.

    No tool is perfect, but it is better than manual testing. Once everybody is on board, and has been "sold" on the idea of the tools, you won't know how you lived without them.

    Also, make sure that you have a global test management tool to manage all your requirements, defects and test cases. Make sure it is web-based.

    Expect to spend over a year on this sort of project for a global company. Make sure you document everything, and come up with best practice documentation and workflows.

    Good luck!

  • 1. Write code
    2. Post to windowsupdate.com
    3. ....
    4. Profit

  • I suggest you beta test it with the end user and see what they have to say. Remember that THEY will be the ones that will have to use it on a day-to-day basis.

    It seems to be a fait accompli that most businesses will accept IT bullshit, but the engineering companies (umm, people that build bridges and refineries, not engineering wannabes) are a lot less malleable when some computer specialist comes calling.
  • I have been using the Mercury Interactive suite of tools in recent times. They work very nicely for applications with a Windows front-end. Unfortunately, TD works using an ActiveX front-end and requires IE (the plugin on Mozilla will not work).

    To be honest, many of the things they do could easily be done by something else, but QA/testing may not seem to be the most interesting area for open-source developers.

    1. Finish last line of code 30 minutes before deadline
    2. Set email autoresponder to pick a helpdesk response from:
      • You probably have a virus, please reinstall Windows
      • This is a known problem with the latest Windows Update, please reinstall Windows
      • The software will only crash if you have been downloading porn on work time, please leave your extension number if you wish us to investigate your hard disc in detail
    3. Send memo to line manager explaining that there is currently no technical documentation for the pr
  • by alanw ( 1822 ) *
    Shortly before it went tits-up in the aftermath of Y2K (lots of testing in 1999, not so much afterwards) and the bursting of the dot-com bubble, one of my previous employers decided to release the software testing application they had developed under the GPL. It's called OpenSTA [opensta.org] and it's available at SourceForge [sourceforge.net].

    It's designed to stress test web pages, and analyse the load on web servers, database servers and operating systems.

    There is also a new company - Cyrano [cyrano.com] that has risen from the ashes of the old

  • I've had some exposure lately to a system where automated testing seems barely imaginable. I wonder if and how anyone could test a system like this automatically:
    • Inputs Come From: web browser, green screens via curses wrapped by proprietary mumbo, EDI, phone system, email, scanners, tablets to get handwritten input, etc
    • Outputs go to: web browser, green screens via curses wrapped by proprietary mumbo, EDI, email, phone system, proprietary 3rd-party document storage
    • Database: Mostly proprietary storage
    • You need to slowly begin rewriting it into something more sane. Like Java.
  • Once you separate out the business logic and other standalone components, just do unit testing on them.
    Your UI/network code should not contain wiring-logic spaghetti! Separate out the validation and similar logic so that it can also be unit-tested. If a button handler has 50 lines of code... you have much bigger problems in the first place; solve them first.
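
    A sketch of that separation in Python (the form field is invented): the validation rule is a plain function with its own unit-testable contract, and the button handler stays a couple of lines.

    def validate_age(raw):
        """Pure validation logic: no UI wiring, so it can be unit-tested directly."""
        try:
            age = int(raw)
        except ValueError:
            return None, "age must be a number"
        if not 0 < age < 150:
            return None, "age out of range"
        return age, None

    def on_submit_clicked(form):
        # Thin button handler: delegate, then display. Nothing here needs a GUI test.
        age, error = validate_age(form["age"])
        return "error: " + error if error else "saved age %d" % age

    assert validate_age("42") == (42, None)
    assert validate_age("abc") == (None, "age must be a number")
    assert validate_age("200") == (None, "age out of range")
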
  • Integrated Testing (Score:2, Insightful)

    by vingufta ( 548409 )
    I work for a mid-sized systems company with about 2K employees, and we've struggled with testing our systems. There are a few reasons for this:
    • The system has grown overly complicated over the years, with tens of subsystems. There is some notion of ownership of the individual subsystems in both dev and QA, but no such thing exists when it comes to interoperability of these subsystems. Dev engineers are free to make changes to any subsystem they want, while QA does not feel that they should test anything out
  • by William Tanksley ( 1752 ) on Saturday July 31, 2004 @02:09PM (#9853629)
    You need two things: first, people who are dedicated to testing and aren't merely concerned with upholding pride in the code they wrote (this is a long way of saying that you need a dedicated testing team that doesn't report to the coding team); and second, testable code. The best way to get the second item, in my experience, is to have your developers write their automated unit tests BEFORE they write the unit they're developing.

    This is generally called "test-first" development. If you follow it, you'll find some nice characteristics:

    1. Each unit will be easily testable.
    2. Each unit will be coherent, since it's easier to test something that only does one thing.
    3. Units will have light coupling, since it's easier to express a test for something that depends only lightly on everything else.
    4. User interface layers will be thin, since it's hard to automatically test a UI.
    5. Programmers will tend to enjoy writing tests a bit more, since the tests now tell them when they're done with their code, rather than merely telling them that their code is still wrong.

    You can go a step further than this: in addition to writing your tests before you write your code, you can even write your tests as you write your design. If you do this, your design will mutate to meet the needs of testing, which means all of the above advantages will apply to your large-scale design as well as your small-scale units. But in order to do this you have to be willing and able to code while you're designing, and many developers seem unwilling to combine the two activities in spite of the advantages.

    -Billy
  • SilkTest (Score:2, Informative)

    by siberian ( 14177 )
    We use SilkTest for automated testing as well as monitoring. Our QA guys love it, and it frees them up from doing regular system testing to focus on devising new tests to confound the engineering teams!

    Downside? It's Windows stuff AND it's hellaciously expensive...

    • I've used SilkTest for the last five years, and I'm working on getting rid of it. I have not been able to develop an effective strategy for automated testing of GUIs, and it is not ideal for the rest of the automation I do (mostly CHUIs and APIs).

      Good:

      • the 4Test scripting language
      • easy to call out to some Windows API functions

      Mixed:

      • the outline-based test selection (I'd rather query a database)
      • the results view (I'd really rather put the results in a database)

      Bad:

      • the cost (~$5,000 / seat)
      • the ma
  • We've had good luck with Jameleon [sourceforge.net]. We use JUnit for the low level stuff and Jameleon to test the web front end. Of course, it's probably a good idea to use human testers before you go to production, but automated testing can cut down on the bugs that make it to the human testers.
  • You need to buy a copy of the Pragmatic Programmer's starter kit [pragmaticprogrammer.com]

    The third book in the series is about project automation, where they teach you how to repeat in a controlled manner all the stuff you learned in the first two books. The basics:

    1) Write unit tests before writing your methods
    2) Your unit tests should follow the inheritance tree of the classes under test to avoid duplicate test code.
    3) Write a vertical "slice" of your application first (all the layers, none of the functionality). This wi
  • the users did the QA testing, just like Microsoft has the users do. We had QA people, but I am not sure what exactly they were doing, because many flaws and mistakes got past them. Just not from my programs, but from the ones my coworkers wrote. I did my own QA testing, and took longer to develop code; hence I was let go.
  • Testing (Score:5, Interesting)

    by Jerf ( 17166 ) on Saturday July 31, 2004 @02:16PM (#9853661) Journal
    Lemma One: It is impossible to comprehensively test the entire system manually.

    Proof: For any significantly sized system, take a look at all the independent axes it has. For instance: the set of actions the user can take, the types of nouns the user can manipulate, the types of permissions the user can have, the number of environments the user may be in, etc. Even for a really simple program, that is typically at least 5 actions, 20 nouns, (let's estimate a minimal) 3 permission sets (no permission for the data, read-only, read & write), and well in excess of 5 different environments (you need only count relevant differences, but this includes missing library A, missing library B, etc.). Even for this simple, simple program, that's 5*20*3*5, which is 1,500 scenarios, and you can never be sure that none of those will fail in a bad way.

    Even at one minute a test, that's 25 hours, which is most of a person-week.

    Thus, if you tested an enterprise-class system for three days, you did little more than scratch the surface. Now, the "light at the end of the tunnel" is that most systems are not used equally across all of their theoretical capabilities, so you may well have covered 90%+ of the use, but for the system itself, a vanishing fraction of the use cases. Nevertheless, as the system grows, it rapidly becomes harder to test even that 90%.

    (The most common error here is probably missing an environment change, since almost by definition you tested with only one environment.)

    Bear in mind that such testing is still useful as a final sanity check, but it is not sufficient. (I've seen a couple of comments saying that part of the value of such testing is getting usability feedback; that really ought to be a separate case, both because the tests you ought to design for usability are separate, and because once someone has functionally tested the system they have become spoiled with pre-conceived notions. But it is better than nothing.)

    How do you attack this problem? (Bear in mind almost nobody is doing this right today.)
    1. Design for testability, generally via unit tests. There are architectures that make such testing easy, and some that make it impossible. It takes experience to know which is which. Generally, nearly everything can be tested, with the possible exception of GUIs, which actually provide a great example of my "architecture" point.

      Why can't you test GUIs? In my experience, it boils down to two major failings shared by nearly all toolkits:

      1. You cannot insert an event, such as "pressing the letter X", into the toolkit programmatically and have it behave exactly as it would if the user did it. (Of the two, this is the more important.)
      2. You cannot drive the GUI programmatically without its event loop running, which is what you really want for testing. (You need to call the event engine as needed to process your inserted events, but you want your code to be in control, not the GUI framework.) One notable counterexample here is Tk, which, at least in the guise of Python/Tk, I've gotten to work for testing without the event loop running, which has been great. (This could be hacked around with some clever threading work, but without the first condition it isn't much worth it.)

      The GUIs have chosen an architecture that is not conducive to testing; they require their own loop to be running, they don't allow you to drive them programmatically, they are designed for use, not testing. When you find a GUI that has an architecture at least partially conducive to testing, suddenly, lo, you can do some automated tests.
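
      The Python/Tk exception mentioned above can be sketched like this: pump the event queue yourself with update() instead of entering mainloop(), and inject a key with event_generate(). (Minimal illustration only; it needs a display, and focus behavior can vary by window manager.)

      import tkinter as tk

      root = tk.Tk()
      pressed = []
      entry = tk.Entry(root)
      entry.pack()
      entry.bind("<KeyPress>", lambda e: pressed.append(e.keysym))

      # Drive the GUI without mainloop(): the test stays in control.
      entry.focus_force()
      root.update()                         # let the widget become ready
      entry.event_generate("<KeyPress-x>")  # inject the event programmatically
      root.update()                         # deliver the synthetic keypress

      assert pressed == ["x"], pressed
      root.destroy()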

      And in my case, I am talking serious testing that concerns things central to the use of my program. I counted 112 distinct programmatic paths that can be taken when the user presses the "down" key in my outliner. I was able to write a relatively concise test to cover all cases. Yes, code I thought was pretty good turned out to fail two specific cases (

  • In our 'enterprise environment' type company (3000+ users), we only do manual testing. We gather up some real techies and some real non-tech people. It's amazing how much better the non-techs are at testing and letting you know of various bugs.
    They will let you know of flaws that you never imagined were possible! All in all, nothing beats manual testing. We don't have any dedicated testing staff, so we just gather up a few end users.
  • Anything you can test with a command line, do. Text in, text out; use 'diff result.log official.log' to find out if you broke anything. Anything that requires a mouse, though -- I hear there are products for that, but I've never used them.
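
    That text-in, text-out loop is easy to wrap; a sketch in Python (the program name and log names are illustrative):

    import difflib
    import subprocess

    def regression_check(cmd, input_path, golden_path):
        # Text in, text out: any difference from the blessed log is a failure.
        with open(input_path) as f:
            result = subprocess.run(cmd, stdin=f, capture_output=True, text=True)
        golden = open(golden_path).read()
        if result.stdout == golden:
            return True
        diff = difflib.unified_diff(golden.splitlines(), result.stdout.splitlines(),
                                    "official.log", "result.log", lineterm="")
        print("\n".join(diff))
        return False

    # e.g. regression_check(["./report", "--summary"], "input.txt", "official.log")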

    I have a pet project (jenny [burtleburtle.net]) that generates testcases for cross-functional testing, where the number of testcases grows with the log of the number of interacting features rather than exponentially. It often allows you to do a couple dozen testcases instead of thousands, and it
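
    jenny itself is a small C program; purely to illustrate the pairwise idea (this is not jenny's algorithm), here is a toy greedy generator in Python that covers every pair of feature values in far fewer tests than the full cross product:

    import itertools
    import random

    def all_pairs(factors):
        # Every (factor-i value, factor-j value) pair that must appear somewhere.
        uncovered = set()
        for (i, fi), (j, fj) in itertools.combinations(enumerate(factors), 2):
            uncovered.update((i, j, vi, vj) for vi in fi for vj in fj)
        tests = []
        while uncovered:
            best, best_cov = None, -1
            for _ in range(50):  # try random candidates, keep the best
                cand = [random.choice(f) for f in factors]
                cov = sum(cand[i] == vi and cand[j] == vj
                          for (i, j, vi, vj) in uncovered)
                if cov > best_cov:
                    best, best_cov = cand, cov
            if best_cov == 0:  # force progress: satisfy one remaining pair
                i, j, vi, vj = next(iter(uncovered))
                best[i], best[j] = vi, vj
            tests.append(best)
            uncovered = {(i, j, vi, vj) for (i, j, vi, vj) in uncovered
                         if not (best[i] == vi and best[j] == vj)}
        return tests

    browsers = ["IE", "Mozilla"]
    platforms = ["Win", "Mac", "Linux"]
    locales = ["en", "fr", "de", "ja"]
    print(len(all_pairs([browsers, platforms, locales])), "tests, not", 2 * 3 * 4)
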
    • Expect, autoexpect, Tcl/Tk and Perl. With that combination you can even test GUIs, mouse and touchscreen applications, but yes, it is a pain in the derrière...
  • All of it. My beef with the unit testing craze is the suggestion that only individual "units" (whatever the hell a unit is) need to be tested automatically. If you design your application well, you should be able to simulate everything up to the multi-step transactions a user would go through. Instead of your code being 75% testable, make it 95% testable. You'll always need people to test that last bit that can't be automated (since the only true test is end-to-end), but the more bugs you can find befor
  • Creating the infrastructure for automated testing is a big investment, one that generally only pays off in the long run. If your company decides to take the plunge here are a couple of suggestions:

    Don't let developers check in code until it has passed the entire regression suite. Otherwise, SQA (the test maintainers) will be in a constant catch-up mode and ultimately the disconnect will grow until the whole test suite is abandoned.

    Make tests easy to create. This is harder than it sounds. Tools that simp

    1. A dedicated QA staff. You should have as many testers as you have developers.
    2. Tools for the QA staff to create their own automation. They don't like doing manual testing much, either, so they'll have incentive to use the tools. :-) I'll talk about the tools we use in a bit.
    3. Training for the QA staff on the tools. Hire QA people capable of at least a little shell programming. And the tools they use should be not much harder than writing shell scripts.
    4. A good SCM (source code management) system that pr
  • I've been working on a simple idea for a while: a programming language that *provides* the testing capability, implementing the testing story. It's not a horribly difficult idea, just something I've been playing with. My goal is to create a test language you can then compile against various languages, to see if the implementation in any given language behaves the same way... and also to make unit tests a little easier to read. You can read about it on the project blog: http://www.bigattichouse.com/thoughtbrew.php? [bigattichouse.com]
  • follow up (Score:2, Informative)

    by nailbite ( 801870 )
    To clarify: The dev team only tests on the code level (memory leaks, code fault tolerance, etc.). We have a separate QA team that performs customer-level testing. Some of them are technically inclined, some are not. All of them, however, understand the mechanisms of the system on a user level. Currently, they manually test each operation of each component in the system using test procedure lists and guidelines. I was thinking there might be a way to, even partially, automate their processes using softwa
  • automated testing (Score:2, Informative)

    by dannannan ( 470647 )

    Seems to me that there are really two levels of automated testing: there's automatically generating and running test cases, and there's having the ability to automatically repeat test cases you've manually created and run in the past.

    Just being able to repeat manually created test cases is a big help. It sounds really simple -- create a harness that you can plug test cases into and start writing test cases -- but scheduling and coordinating that sort of thing starts to get really difficult on large proj

  • Autoexpect (Score:3, Informative)

    by HermanAB ( 661181 ) on Saturday July 31, 2004 @03:08PM (#9853963)
    Expect and Perl.

    Expect is a Tcl thing that is also available as a module in Perl.

    Autoexpect is a script generator of sorts for Expect.

    Using that combination, you can create a test harness that can also test things that were not designed to be tested, such as GUIs, browsers, SSH and password sessions.

    However, it is not easy, and yes, it works on Windoze too...
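
    Python has a similar third-party module, pexpect, if Perl isn't your thing. A minimal sketch of driving a password login with it (the host, prompts and commands are illustrative):

    import pexpect  # Python's take on Tcl's Expect

    child = pexpect.spawn("ssh testuser@testbox.example.com")
    child.expect("password:")
    child.sendline("secret")      # real scripts should not hard-code secrets
    child.expect(r"\$ ")          # wait for the shell prompt
    child.sendline("run-selftest --quick")
    child.expect(r"\$ ")
    print(child.before.decode())  # capture the command's output for the log
    child.sendline("exit")
    child.close()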

  • by A.S. ( 122423 )
    (And I don't mean developers.)

    The solution to your problem is trained monkeys. It has been shown in trained monkeys that wide-dynamic-range neurons in the cerebral cortex participate in the encoding process by which monkeys perceive (Kenshalo et al., 1988). There has not been so large a demand for the little guys since mustachioed Italians wandered the streets of New York with them almost a century ago.

    I find that ring-tailed monkeys (from South America, principally Brazil) are the best for software testin

  • I have worked as an "automated testing specialist" for the last 7 years.

    In my experience, automated functional testing is good for one situation: functional GUI regression testing on systems using a classic "waterfall model" setup, where the GUI doesn't change much and there are more than about 3-4 releases per year.

    In any other situation, you usually don't get the payback. The software is expensive (I use Mercury Interactive's WinRunner), often running into the $100k range. The skill set required is quite spec

  • I have worked at several companies that had licenses for WinRunner and LoadRunner, but because the software was so expensive, they wouldn't let anyone who was uncertified touch it, and the cost of the software blew the testing budget for well over a year. End result: 6-digit-price-tag software sits on a shelf for several years.

    During the dotBoom, I coughed up $500 out of my own pocket for Visual Test. I thought it would help me with the hard parts of testing my code: the NT services and the database c

  • A lot of this issue is resolved when you simply code your tests before you design your code. By having the tests coded, you have a functional spec that is rather hard to squirm out of, and since it is pre-coded you can run it automatically!
  • I've been in QA for a number of companies and have found one consistent problem... re-invention of the wheel...
    I attribute this to a few things

    1) Commercial tools are over complicated and not very good
    2) There's no tool that exists to do X Y Z
    3) Our software can't be automatically tested !
    4) A lot of time is taken up with reporting !

    1).. Totally agree... have you ever seen Test Director? It's a nightmare to use, takes ages to import test cases, and its automation interface sucks... plus it costs a FORTUNE.
  • We do automated testing where we can, to make sure everything's working properly after every build.

    At the same time, we also utilize full-time testers when we hit milestones, to make sure that items related to look, feel and IA are caught and dealt with.

    So, to sum up:

    Automated testing to sanity-check builds

    Manual testing to ensure milestone quality
  • I don't understand how QA could happen in software. QA is a manufacturing spot-check to make sure you're not turning out duds. How does that translate to software? Is click-testing a website in a bunch of different browsers QA? Please explain.
  • Although this is not quite specifically what you asked for, I can attest that using valgrind definitely helps find many bad errors which would invariably send your coders insane.

    I run a small open/commercial hybrid development company here - valgrind has been one of the most impressive bug prevention tools we've ever added to our arsenal.

    One very important area of testing/QA is regression testing. Always make sure you have a record of what worked in the /previous/ versions so that when you make a new version you
  • I have had many years of experience in QA departments over my career. My observation is that it is difficult to attract good talent to a QA department. Many developers and technically inclined folks see QA as menial labor. This mentality misses the value-add and complexities of a true QA department/function. Ideally, you would hire dedicated and technically experienced individuals that: + Can analyze requirements into test plans (not by following the programming logic, but by following the business log
  • by Chacham ( 981 ) *
    Programming done "properly" is both design and coding. Ideally (and crucially, on large projects), there is a design for all functions, including their "in and out". Anyone who's coded in assembly knows this is a must.

    If properly spec'd, bounds checking and other tests are very simple to write, and either QA can do it, or a day can be set aside where you listen to your favorite music, book on tape, or movie, and test each module. Testing is simple, and many suites make it even easier.

    Ultimately, the problem with
  • 10 years of QA... (Score:2, Interesting)

    by drdreff ( 715277 )
    ...has shown me that most of the commercial tools available are of extremely limited use. If you're working in any cutting-edge environment or pushing existing technology to its limits, the tools will not be able to help you.

    Home-grown test suites built to suit the job at hand have always turned out to be the best testing solution. Automation is the only way to keep up with the test effort required for a large project, especially when QA is seen as an afterthought or a barrier to release. Automation will ke
  • by SKarg ( 517167 )
    Software Development Magazine [sdmagazine.com] had an article about test-first programming [sdmagazine.com] of a GUI application that describes a method of incorporating automated testing into the application.

    The example GUI application is a simple game, but the methodology could be used for any GUI application.

    From the article:

    The big question was how to handle all the stuff that would normally be considered GUI code -- specifically, the hit-testing: where do users click, and what should happen when they click there?

    I suspected th

  • The best defense against a client-side blowup is to have as many layers as possible helping out, because they all have their various strengths and vulnerabilities.

    In development:

    • Good design: The larger the project, the more cowboy coding and migrated-to-production prototype code can hurt the process. Many bugs come in the form of workarounds, repeated code, or unclear code responsibilities, all of which can be mitigated by good design.
    • Unit test: Developers have a better idea of the contract the rest
  • These things have proved true to me:

    (o) Just like coding, testing is engineering: you have limited resources, and have to allocate them to get the maximum benefit.

    (o) Some small enterprises cannot afford full-time software testers. Some have huge existing code bases, and small or non-existent QA teams. And that just has to do.

    (o) Black-box testing for regressions is often where the max. bang for your buck is. Consider: your unit tests may cover 99% of the code, and 100% of the unit tests may pass. But when one

"To take a significant step forward, you must make a series of finite improvements." -- Donald J. Atwood, General Motors

Working...