A Scripting Framework for Automation
An automation framework is a common set of tools, guidelines, and principles for your tests. Having a framework helps to minimize test script maintenance. This article is not about any specific automation tool; instead, it puts into words the thought process and guidelines that go into creating an automated test case. As always, we hope this will be useful to you all.
At MeU Solutions, we have defined guidelines for the phases that any automation framework runs through. Below are the process and guidelines for those phases, using current automation methodologies that combine data-driven and keyword-driven testing (see http://en.wikipedia.org/wiki/Keyword-driven_testing for an overview or http://eliga.fi/Thesis-Pekka-Laukkanen.pdf for more details).
I. Prepare test cases for automation
- Work with manual testers to define how many test cases need to be automated. Because keyword-based tests must be maintained and updated over time, you have to be wise about choosing which test cases to automate. Good candidates include tests that are repetitive, tests with large data sets that are time-consuming, high-risk tests, tests that must run on different browsers or environments, and tests that are hard to perform manually.
- The automation tester evaluates the list of manual test cases to determine which of them can be automated.
- Import the selected manual test cases into the list of automatable test cases.
- You can start small by attacking your smoke tests first, then cover your build acceptance tests, then move on to your frequently performed tests, and then to your most time-consuming tests. But make sure that every test you automate frees up time for a manual tester to focus on more important things.
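The selection criteria above can be turned into a rough prioritisation score to rank candidates for automation. A minimal sketch; the criteria names, weights, and threshold are illustrative assumptions, not part of any standard:

```python
# Hypothetical scoring helper: rank manual test cases for automation.
# Criteria and weights below are illustrative assumptions, not a fixed rule.
WEIGHTS = {
    "repetitive": 3,        # runs in every cycle
    "large_data_set": 2,    # time-consuming when done by hand
    "high_risk": 3,         # failures here are costly
    "cross_environment": 2, # must run on several browsers/OSes
    "hard_manually": 1,     # awkward or error-prone by hand
}

def automation_score(test_case: dict) -> int:
    """Sum the weights of every criterion the test case satisfies."""
    return sum(w for name, w in WEIGHTS.items() if test_case.get(name))

def pick_candidates(test_cases: list, threshold: int = 4) -> list:
    """Return test cases scoring at or above the threshold, best first."""
    scored = [(automation_score(tc), tc) for tc in test_cases]
    return [tc for score, tc in sorted(scored, key=lambda p: -p[0])
            if score >= threshold]
```

A scheme like this keeps the triage discussion with manual testers concrete: a repetitive, high-risk login test outranks a one-off layout check.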
II. Develop automation test scripts
In this phase, the automation tester needs to follow the steps below to develop a new automation script:
- Firstly, read the manual test case and understand each test step in it: the functional testing objectives, what test data is needed, and what needs to be verified. If anything is unclear, contact the manual tester immediately for clarification and refer to any available documents (requirements, user stories, help files, guidelines, …) to make things clearer.
- Secondly, develop the keywords. After reviewing the test cases in the test suite, break each test case down into keywords (different types of actions get different keywords). Make sure keywords can be inherited, keep the number of keywords to a minimum, and comment each keyword for readability, because later there will be so many keywords to remember and organize that this itself becomes a cumbersome task. As long as each keyword corresponds to a known command, it can be as simple as “login” or “logout”; keywords should be task-oriented, not focused on details. During keyword development, always follow the automation development standard (a document containing standards that make a keyword easy to understand and debug). After a new keyword is created, commit it to source control and broadcast it to the team by email or any other channel.
- Thirdly, prepare the test data. Test data is the data a keyword processes in order to complete a step of a test case. A good investment of time and effort should go into creating well-structured test data during the early stages of the automation project; this makes writing automated tests easier in the later stages of the project life-cycle. The quality of the test data also determines its reusability and maintainability.
- If the data is for one-time use (private data), you can hard-code the string into the keyword, although this is not advisable.
- If the data is reusable, you must create it in and reuse it from the testbed (the testbed is a data repository that contains all reusable variables).
- And finally, create the automation test script: combine multiple keywords with test data to create a new test script. We recommend using a “Cleanup” action in each script, capturing screenshots for failed steps, and creating a cumulative pass/fail report at the end of test execution, saved to a local drive. We should also define a common configuration file containing all the environment settings, such as the application URL, browser-specific information, login credentials, …
III. Review test scripts
- After an automation tester completes a test script, the script is assigned to another tester for review.
- Test scripts need to be reviewed in four aspects:
- Review all the test steps of the script: make sure they follow and cover the manual test correctly.
- Review the keywords to make sure they follow the standard.
- Review the test data to make sure it is correct and valid.
- Run and re-run the automation test script and make sure it can run on other platforms/environments without any error.
- The reviewer gives feedback to the author of the test script if any error is detected.
IV. Execute scripts, analyze results, and report
When an automation tester has just finished developing a new test script, it passes at that moment. But nothing guarantees it will pass again when we re-execute it later, combine it with other scripts in a test suite, or even when the test environment has not changed at all. So, to increase the fault tolerance of a test script, we need to run test scripts automatically as many times as possible on all environments to make sure they work smoothly. For that purpose, we have three types of automation runs.
- Nightly run: all test scripts developed during the day are collected and run at the end of the working day.
- Weekly run: all test scripts developed during the week are collected and run at the weekend.
- Integration run: even when test scripts have passed the two runs above, that is not enough to be confident they will pass again once they are imported into the integrated test suite, which is run against every released sprint of the AUT (Application Under Test).
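The three run types above amount to filtering the script inventory by when each script was added. A sketch, assuming (our assumption, not a standard field) that each script record carries the date it was committed:

```python
# Hypothetical scheduler helper: pick which scripts belong in the nightly,
# weekly, or integration run. The "committed" field name is an assumption.
from datetime import date, timedelta

def select_scripts(scripts: list, run_type: str, today: date) -> list:
    """Return the scripts that belong in the given run type."""
    if run_type == "nightly":      # everything committed today
        return [s for s in scripts if s["committed"] == today]
    if run_type == "weekly":       # everything committed in the last 7 days
        start = today - timedelta(days=7)
        return [s for s in scripts if start <= s["committed"] <= today]
    if run_type == "integration":  # the full suite, run per released sprint
        return list(scripts)
    raise ValueError(f"unknown run type: {run_type}")
```

In practice the same selection is usually wired into a scheduler or CI job; the logic stays this simple regardless of the tool.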
- Analysis and Debug
You can perform debugging and analysis every time you are scripting to ensure that your logic is coded properly. Try to use message boxes frequently to output various values at various stages of test execution; this gives you visibility into the test like nothing else. Any failed run needs to be analyzed by:
- Checking the logs to detect the reason the test script failed.
- Rerunning the test script and manually observing the console to detect issues.
- If the test script still fails, running it manually for a deeper analysis of the failure. If the script failed because of a bug, collect the logs and report the bug; if it failed because of the script itself, update the script and rerun.
- Report: all test script results are collected and reported in an appropriate format. We can create reports at the end of each execution in the form of charts and tables if management needs them. Management should always be informed about test case coverage, that is, which manual test cases are covered by automation and which of them remain.
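The triage flow above (rerun, check the logs, then decide whether the product or the script is at fault) can be sketched as a small classifier, together with the cumulative pass/fail tally the report is built from. The log marker and outcome names are illustrative assumptions:

```python
# Hypothetical triage helper for failed runs: allow one automatic rerun,
# inspect the log, then classify the failure. Marker strings are assumptions.
def triage(run_script, log_lines: list) -> str:
    """Classify a failed run as 'passed-on-rerun', 'product-bug', or 'script-defect'."""
    if run_script():                       # step 1: rerun and observe
        return "passed-on-rerun"
    # step 2: look for an application-side error marker in the log
    if any("APP ERROR" in line for line in log_lines):
        return "product-bug"               # collect the log and report the bug
    return "script-defect"                 # update the script and rerun

def summarize(outcomes: list) -> dict:
    """Cumulative report: count how often each outcome occurred."""
    counts = {}
    for outcome in outcomes:
        counts[outcome] = counts.get(outcome, 0) + 1
    return counts
```

The summary dictionary is the raw material for the charts and tables mentioned above; what matters is that every failure ends up labelled as either a product bug or a script defect.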
V. Maintain test scripts
Maintenance usually occurs when there is a change request for the application. The scripts should be updated immediately to cope with that change and ensure flawless execution. For example, if the script writes some text into a textbox and that textbox becomes a drop-down list, we should immediately update the script.
- Any test script that fails because of a defect in the script itself is maintained as part of the ongoing run.
- Any test script that fails because of a GUI change or product change needs maintenance time planned officially.
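The textbox-to-drop-down example above is exactly where keyword abstraction pays off: only the keyword body changes, and every script that calls it stays untouched. A sketch with an illustrative `select_country` keyword (the keyword name and option list are assumptions for this example):

```python
# Illustrative maintenance example: the UI control changed from a free-text
# box to a drop-down list, so only this keyword's body is updated.
# Scripts calling select_country() do not change at all.

DROPDOWN_OPTIONS = ["Vietnam", "USA", "Japan"]  # assumed options after the change

def select_country(country: str) -> str:
    """Keyword: choose a country. Was 'type text into box'; now 'pick from list'."""
    # Old body (before the change request) would have been roughly:
    #     return f"typed '{country}' into the country textbox"
    if country not in DROPDOWN_OPTIONS:
        raise ValueError(f"'{country}' is not an option in the drop-down")
    return f"selected '{country}' from the country drop-down"

def test_register_address() -> str:
    """Test script: unchanged by the UI change, because it only uses the keyword."""
    return select_country("Vietnam")
```

This is the maintenance argument for keeping keywords task-oriented: the cost of a GUI change is contained in one place instead of rippling through every script.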