I recently dropped my 2 cents into a discussion between Ludovic and SpriteCloud about QA 101. I am now working at Longtail Video, performing a bunch of roles, including sales, support and QA (Quality Assurance) of our Bits on the Run online video platform. I knew a lot about online video from my previous job running transcoding operations at Joost, but I was only aware of some common best practices; I didn’t know much about the day-to-day business of running a QA process.

Getting started

For the first couple of releases, testing was mostly a matter of familiarizing myself with the software and trying to click everything on each page, hoping that would catch all the bugs. A lot of time was spent thinking up as much invalid input as possible to see if it would break anything. Of course, a sane development/staging/production environment was already set up, so software could be tested safely before being deployed to production.

Initial signs of organization

The easiest way to get started organizing software quality assurance is by compiling checklists. For each page, define all operations that should succeed. Also remember that invalid input should be rejected, and (this is a lesson I learnt the hard way) realize that it is quite possible for software to break badly when you feed it UTF-8 characters. I keep all of my checklists in Google Spreadsheets, as that makes it very easy to share how the QA process is coming along. Conditional formatting also makes all failed test cases nice and red.
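
To give an idea of the kind of input I mean, the snippet below lists a few classics (illustrative examples only, not our actual checklist):

    <?php
    // Illustrative only: the kind of awkward input I paste into every
    // text field on a page to see whether validation holds up.
    $awkwardInputs = array(
        '',                               // empty string
        str_repeat('a', 10000),           // absurdly long input
        "Robert'); DROP TABLE videos;--", // SQL-injection style string
        '<script>alert(1)</script>',      // HTML/JS injection
        '../../etc/passwd',               // path traversal attempt
        'Iñtërnâtiônàlizætiøn',           // accented UTF-8 characters
        '日本語のタイトル',                // multi-byte UTF-8 characters
    );

    // Print them so they can be copy-pasted into the form under test.
    foreach ($awkwardInputs as $i => $input) {
        echo ($i + 1) . ': ' . $input . "\n";
    }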

This will probably not work for everyone, but during a test run we do not ticket every bug we find. Instead, we track all bugs found during the QA process in a shared Google spreadsheet. Most bugs (especially typos and smaller logic bugs) are fixed straight away and retested once fixed. Only issues that are tagged for a future release, or bugs that would require major rework, get a ticket.

Also, at a fairly early stage we split the test process into two parts, as we have two systems that work largely independently of each other:

  1. The base software (content server, API, databases, encoders)
  2. The dashboard (which runs completely on top of the API)

Most of our releases are therefore staggered: the base system is released and sanity-checked first, and after that we release and check the dashboard.
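
As a rough illustration of what that first sanity check could look like (the URLs below are made-up placeholders, not our real endpoints), it boils down to confirming the base components respond before anyone looks at the dashboard:

    <?php
    // Rough sketch: ping the base system before the dashboard is tested.
    // All URLs are placeholders.
    $components = array(
        'API'            => 'https://api.example.com/status',
        'Content server' => 'https://content.example.com/ping',
    );

    foreach ($components as $name => $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 5);
        curl_exec($ch);
        $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        echo str_pad($name, 16) . ($code == 200 ? "OK\n" : "FAILED ($code)\n");
    }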

Automation

We don’t have a lot of automation, but I have done a little bit of programming to make the tests of the content server go quicker. Essentially, I have built a bunch of PHP pages that I walk through before each release. Each release requires the video key of a newly uploaded video, so there is a single config file that needs to be updated before each test run; after that I can click through the pages relatively quickly. Switching between staging and production is a matter of commenting out a couple of lines in the config file. The pages are really simple and look more or less like the image below.
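
The actual pages are internal, but the plumbing behind them is roughly this (a minimal sketch; every file name, URL and variable below is a made-up placeholder, not the real code):

    <?php
    // config.php -- updated once per test run.
    $environment = 'staging';
    $contentUrl  = 'https://content-staging.example.com';

    // Switching to production is a matter of commenting/uncommenting:
    // $environment = 'production';
    // $contentUrl  = 'https://content.example.com';

    // Key of the freshly uploaded test video, pasted in before each run.
    $videoKey = 'PASTE_NEW_VIDEO_KEY_HERE';


    <?php
    // embed_test.php -- one of the pages clicked through before a release.
    require 'config.php';

    $videoUrl = $contentUrl . '/videos/' . $videoKey . '.mp4';
    ?>
    <h1>Content server test (<?php echo htmlspecialchars($environment); ?>)</h1>
    <p>Direct video URL:
      <a href="<?php echo htmlspecialchars($videoUrl); ?>">
        <?php echo htmlspecialchars($videoUrl); ?>
      </a>
    </p>
    <video src="<?php echo htmlspecialchars($videoUrl); ?>" controls></video>

With a setup like this, preparing a test run is little more than pasting the new video key into the config file and opening each page in the browser.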

At some point it might be nice to automate the dashboard tests too, with a Behavior Driven Development (BDD) solution like Cucumber with Selenium WebDriver, but so far we have not deemed it worth the development time.

Test process

For a typical release, I go through a couple of steps to test and verify the software:

  • I start by gathering all the tickets for this release and make a spreadsheet that contains all changes/tickets and how they should be tested. This also doubles as the test plan. For each ticket there are a couple of possible test approaches:
    • Add this to one of the existing test plans
    • Nothing fundamental changes, the existing test plans should cover this
    • This is covered by unit tests, just run the normal test plans
  • For any new feature or radical change, I spend a lot of time just using it, so I understand how it really works and how it is supposed to work. I also feed it as much invalid data as possible to see if that breaks anything.
  • When this is done I execute the (updated) test plans.
  • When everything looks sane, I give the green light for deployment.
  • After release I do a sanity check with a limited test plan (that can be executed quickly) to verify that nothing essential broke in production.

Making things work for you

In the end I do not think there is one correct way to test software. Especially if you’re working in a smaller team, you will just have to figure out a process that works for you. No matter how much you test, there will always be bugs that slip through, but there are two things that are key to an effective QA process:

  1. Catch all major bugs that break the core functionality of your product.
  2. Whenever you receive a bug report from end users, check whether it warrants a new test case in your test plan and figure out what you can do to catch similar bugs in the future (a.k.a. learn from your mistakes).