Posts Tagged ‘test automation’

Last time I wrote about test data in manual testing. This blog entry says a few words about test data in test automation.

Now the computer takes care of submitting and checking the data, so we can use different kinds of data. We don't have to keep the data simple, and we can use more data than during manual testing. For example, you don't want to type 100 megabytes of data by hand, but test automation can do that. Test automation can also detect small changes which are difficult for a human to notice. For example, telling apart 'I' (capital i), 'l' (lower L) and '1' (number one) can be difficult for a human. But test automation usually compares at the binary level, and all of those have different ASCII codes, so it can detect them.
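As a quick illustration (a Python sketch; the three lookalike characters are the ones from the paragraph above), a byte-level comparison separates the glyphs immediately:

```python
# 'I' (capital i), 'l' (lower L) and '1' (number one) look alike on screen,
# but each has a distinct character code, so a binary comparison tells them apart.
lookalikes = ['I', 'l', '1']
codes = {ch: ord(ch) for ch in lookalikes}
print(codes)  # {'I': 73, 'l': 108, '1': 49}
assert len(set(codes.values())) == len(lookalikes)  # all three codes differ
```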

On a discussion forum, checking that a thread of 100 messages pages correctly would be nearly impossible for a manual tester. Or at least the result would be poor, because he would most likely just put a few characters in each message. But with test automation we can create longer and shorter messages with realistic-looking data. What is realistic in this case? And how do we know? The Internet is full of forums, so we can pick almost any of them to analyze. We could measure, for example, how many words each message has on average, and what the standard deviation is. Then the test data can be constructed based on that information.
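A minimal sketch of that idea in Python, assuming a made-up mean of 12 words and standard deviation of 6 (real values would come from analyzing an actual forum):

```python
import random

def generate_message(rng, mean_words=12.0, stddev_words=6.0):
    """Create a message whose word count follows the distribution
    measured from a real forum (the mean and stddev here are invented)."""
    count = max(1, int(rng.gauss(mean_words, stddev_words)))
    return " ".join("word%d" % i for i in range(count))

rng = random.Random(42)          # fixed seed -> the same test data every run
messages = [generate_message(rng) for _ in range(100)]
lengths = [len(m.split()) for m in messages]
print("shortest:", min(lengths), "longest:", max(lengths))
```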

Test automation can use random data, created at run time by the computer. There are some issues we have to remember: the test must be reproducible. The random generator should not be cryptographically strong; it should be such that when we set its seed to a specific value, it always produces the same sequence. The initial seed can be derived from the time, but then every subsequent value must be based on the previous one. The logging should also make sure that all steps and initial states are logged.
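With Python's `random` module, for instance, reproducibility falls out of seeding (the seed value below is an arbitrary example; the important part is writing it to the test log):

```python
import random

seed = 1337                      # log this value so the run can be replayed
rng_a = random.Random(seed)
rng_b = random.Random(seed)      # same seed -> identical sequence
run_a = [rng_a.randint(0, 999) for _ in range(5)]
run_b = [rng_b.randint(0, 999) for _ in range(5)]
assert run_a == run_b            # the failing run can be repeated exactly
print("reproducible:", run_a)
```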

One good way to cheat at randomness is to create a predefined list of data. You can reproduce it every time, and it is "random enough" if you trust that a different order doesn't change the result. In one project I wanted to test how an application reacted to broken input files. Manually it would have been impossible to test, but with a computer it was quite simple. I just had to create the inputs, and I used Radamsa for that. The seeds were a couple of working inputs. During testing I found one major crash which could also have been a security issue. Without test automation and the possibility to test a huge mass of data, we could have left a major security hole in the application.
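Radamsa itself is an external fuzzer, but the core idea can be sketched in a few lines of Python: take a known-good input and flip a few random bytes, using a seeded generator so any crash can be reproduced (the sample input and seed below are made up):

```python
import random

def mutate(sample: bytes, rng: random.Random, flips: int = 3) -> bytes:
    """Produce a broken variant of a known-good input by flipping a few bytes."""
    data = bytearray(sample)
    for _ in range(flips):
        pos = rng.randrange(len(data))
        data[pos] ^= rng.randrange(1, 256)   # xor with a non-zero value changes the byte
    return bytes(data)

rng = random.Random(7)                       # seed goes to the log -> crashes reproducible
good = b'{"state": "OK", "count": 42}'
broken_inputs = [mutate(good, rng) for _ in range(1000)]
# feed broken_inputs to the application under test and watch for crashes
```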

It seems that I should have structured my series a bit differently. There are still plenty of important things to describe. They touch test automation as well as manual testing, but also security and performance testing. And then there are plenty of miscellaneous issues, like using production data.


Test automation and testability belong in the same sentence. If the application is not testable, it will be hard to automate. This short blog entry lists a few things which benefit the testability of web applications.

What kind of application is testable? The primary point is that it is easy to test and doesn't require plenty of guessing. Test automation should be able to easily find the components it clicks or edits. There should also be some easy way to check whether the results are as expected. A testable application helps create test automation which tolerates changes in the application.

Most web applications have forms. They are the most difficult part of test automation. Many modern web application frameworks generate their own strange names and ids for fields. When the application is modified, the field names change too, or they can be more or less random. For example, a SharePoint-based web application has field names like 'ctl00_m_g_18447b1a_f553_3331_d034_7f143559a4fe_ctl01_stateSelection_input'. In the worst case the field name is tied to the session, or changes dynamically when the page is revisited later in the test. A testable application should have a simple and descriptive id or name parameter for input fields and links. In this case it could be 'stateSelection'. In most test automation frameworks it is much easier to refer to the name parameter than to some overly cryptic XPath. It is also more debuggable than e.g. //div[@id='mainpart']/table[1]/tr[2]/input[contains(@id,'stateSelection')].
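The difference can be sketched with Python's xml.etree on a simplified page fragment (the markup below is invented; only the generated id string comes from the SharePoint example above):

```python
import xml.etree.ElementTree as ET

# Simplified page fragment: one input field with both a generated id and a plain name.
page = ET.fromstring(
    "<div id='mainpart'>"
    "<table><tr><td/></tr><tr>"
    "<input name='stateSelection' "
    "id='ctl00_m_g_18447b1a_f553_3331_d034_7f143559a4fe_ctl01_stateSelection_input'/>"
    "</tr></table></div>"
)

# Stable, readable locator: look the field up by its name attribute.
by_name = page.find(".//input[@name='stateSelection']")

# Brittle positional locator: breaks as soon as a row is added above the field.
by_position = page.find("./table/tr[2]/input")

assert by_name is not None and by_name is by_position
```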

The test should be able to assert whether the response was correct or not. This is usually done by checking whether some specific text or component is on the web page. To simplify testing, the HTML code should be as simple as possible to avoid misinterpretations. It could contain e.g. HTML comments describing what the web application itself thought about the result. But no matter what the application does, it should never spit out a stack trace or some other error message which reveals the internals of the application. That information belongs only in the logs. In an error case the content should contain something which can easily be linked to the error log. That helps not only test automation, but also debugging production-time problems.

Proper server-side logging, and simple access to the logs, also increases testability. In case of failure the logs should have stack traces, exception information and other debugging information. It is very useful if test automation can retrieve these on error and add them to its own logs.

Ajax is a real pain for test automation, performance testing, manual testing and security testing. But developers and users love it! One major problem with Ajax is that the web page load is marked done before all content has been loaded, so it is very difficult for the automation to tell whether loading has succeeded. There are a couple of things which should be done: first, an agreement between testers and other stakeholders on the maximum time the content may take to load. Second, some clear mark in the dynamic content when it has been loaded and rendered to the screen. This can be e.g. a small comment in the HTML page. That way the test automation can have good timeouts, and it can detect the finished page.
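The timeout-plus-marker approach can be sketched generically (the `ajax-load-complete` comment is a hypothetical marker; in Selenium the same role is played by WebDriverWait):

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or the agreed timeout expires.
    The timeout is the loading budget agreed with the stakeholders."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False                                 # content never finished loading

# Hypothetical marker the application appends once dynamic content is rendered:
page_source = "<div>...content...</div><!-- ajax-load-complete -->"
assert wait_until(lambda: "ajax-load-complete" in page_source, timeout=2.0)
```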

This short blog entry describes just a small part of web application testability. But in my experience these are the most common problems.

Many times I hear that exploratory testing (ET from now on) is pure manual testing. But that's not true. You can use any tool you like to help ET. This is part one of a series of articles where I present tools which you can use to assist ET and other manual testing.

What is the purpose of a tool? Its main purpose is to free the tester to do meaningful tasks. If initializing a test takes more than 1 second and needs to be repeated over and over again, it prevents good testing. Tools should be used to remove that kind of obstacle.

Unfortunately the tool itself often becomes the "purpose of testing". I know that; I usually do test automation. Very often the tool itself becomes the obstacle, because someone thinks it is the silver bullet for all testing problems. At that point the tool turns into a testing problem.

After this short introduction, let's start with a very simple case. Let's imagine we are testing an application where, every time the tester wants to do something, he has to log in first. For the tester that is a really boring case. The easiest way around the problem is to get a tool to do the login. If the case is this simple, I'd take Selenium IDE. It is a Firefox plugin which records the test case. After recording, the case can be played over and over again to get the test to a specific point. The screenshot below shows the whole test for the login. (The credentials are not real ones…)

[Screenshot: Selenium IDE recording of the login test]
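A recording like the one in the screenshot is essentially a table of Selenese commands, each a (command, target, value) row. A sketch of what the rows would look like (the field names and credentials below are placeholders, not the real ones):

```python
# Selenium IDE stores a recording as a list of (command, target, value) rows.
# A login recording, with placeholder locators and credentials, would look roughly like:
login_steps = [
    ("open", "/login", ""),
    ("type", "name=username", "testuser"),            # hypothetical field names
    ("type", "name=password", "not-a-real-password"),
    ("clickAndWait", "css=button[type=submit]", ""),
]
for command, target, value in login_steps:
    print(command, target, value)                     # replayed by the IDE, not by us
```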

Selenium IDE is good for small tasks, but I would not recommend any recording tool for large-scale test automation or complex tasks. For small tasks, though, its simplicity justifies its use.

I will write more later about tools which can help exploratory and other manual testing styles.