QA Design Gurus: June 2015

Jun 30, 2015

Load Testing with Telerik Test Studio

Nowadays most web applications invoke external web services or RESTful services to fetch dynamic data. If those external services respond slowly, the web application's performance degrades, which leads to dissatisfied users.

Load testing is therefore becoming a must for web applications. Telerik Test Studio makes it simple to develop and run a load test.

Load testing with Telerik Test Studio involves a few simple steps:

1) Record the test
2) Modify the dynamic parameters
3) Set the number of users & time
4) Run




Key Features

  • Multiple Channels to Create Load Tests
  • Crafting Complex Load Testing Scenarios Is a Snap
  • Load Testing of Web Services
  • Load Testing Traffic from Mobile Devices
  • Easy to Setup, Easy to Run
  • Test Lists Scheduling and Distributed Execution
  • Smart Diagnosis of Issues

You can also create a load test from a regular Test Studio functional test or from a Fiddler trace.

Jun 29, 2015

Internationalization Startup Parameters of OpenEdge Database

Here is a list of useful OpenEdge internationalization startup parameters:

Conversion Map:

-convmap <filepath>
Pathname of the CONVMAP file.

Use -convmap to identify the CONVMAP file to use for code page conversions, collation orders, and case conversions. By default, Progress uses the convmap.cp file in the DLC directory.

Case Table:

-cpcase <case table name>
Name of a case table within the convmap.cp file for the current -cpinternal code page.

Use -cpcase to specify the case table. This table establishes the case rules, which are used by the CAPS and LC (lowercase) functions. In a character field format, use an exclamation point (!) to tell Progress to convert all characters to uppercase during input. To retrieve the value of this startup parameter at runtime, use:

SESSION:CPCASE.

Collation Table:

-cpcoll <collation table name>
Name of a collation table within the convmap.cp file and for the current -cpinternal code page.

Use -cpcoll to identify a collation table for Progress to use. Progress uses collation rules to compare characters and to sort records if a BY clause cannot be satisfied by an index. To retrieve the value of this startup parameter at runtime, use:

SESSION:CPCOLL.
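These parameters are passed on the client or server startup command line. For example, a character client session might be started with something like the line below. This is purely illustrative – the command, database name, code page and table names are placeholders and must match the entries in your own convmap.cp:

pro sports2000 -cpinternal ISO8859-1 -convmap /usr/dlc/convmap.cp -cpcase Basic -cpcoll Swedish

In such a session, SESSION:CPCASE and SESSION:CPCOLL would then report Basic and Swedish respectively.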

Jun 26, 2015

Want to do a load test of a simple Web UI? JWebUnit is sufficient

Concurrent user scenarios are very common when you work on a web application that is used by multiple users across the globe. There are some challenges in testing these kinds of scenarios.

If you want to do a full load test of your website, you can go for a commercial tool such as Telerik Test Studio, NeoLoad, LoadRunner, etc.

There are also scenarios where you only need to validate a defect (a few small web UI steps) that requires some concurrent user logins, or to load test just the login UI. In these cases, you can use JWebUnit.

JWebUnit is a Java-based testing framework for web applications. It wraps existing testing frameworks such as HtmlUnit and Selenium with a unified, simple testing interface to allow you to quickly test the correctness of your web applications.

Setup:

1) Download the latest version of the JWebUnit jars here; the distribution contains a lib folder.
2) Write a Java program.
3) Add all the jars from the lib folder to the classpath.

Sample program:

import static net.sourceforge.jwebunit.junit.JWebUnit.beginAt;
import static net.sourceforge.jwebunit.junit.JWebUnit.setBaseUrl;
import static net.sourceforge.jwebunit.junit.JWebUnit.setTextField;
import static net.sourceforge.jwebunit.junit.JWebUnit.submit;

import org.junit.Test;

public class ExecuteWebUI {

    @Test
    public void test() {
        // Point JWebUnit at the application under test and open the login page
        setBaseUrl("http://xxxxx/router/login");
        beginAt("/login.jsp");

        // Fill in the login form and submit it
        setTextField("loginName", "xxxxx");
        setTextField("password", "xxxx");
        submit();
    }
}

Using Eclipse:

1) Create a Java project.
2) Copy the downloaded lib folder into this project.
3) Add all the lib folder jars to the project classpath.
4) Create a JUnit test case.
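The sample above drives a single user. For a quick concurrency check, one simple approach – a sketch only, where the URL, credentials and thread count are placeholders, and which assumes the HtmlUnit plugin jars from the lib folder are on the classpath – is to run the same login flow from several threads, giving each thread its own WebTester instance:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import net.sourceforge.jwebunit.junit.WebTester;

public class ConcurrentLoginTest {

    public static void main(String[] args) throws InterruptedException {
        int users = 10; // hypothetical number of concurrent users
        ExecutorService pool = Executors.newFixedThreadPool(users);

        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                // Each thread gets its own WebTester so the sessions stay independent
                WebTester tester = new WebTester();
                tester.setBaseUrl("http://xxxxx/router/login");
                tester.beginAt("/login.jsp");
                tester.setTextField("loginName", "xxxxx");
                tester.setTextField("password", "xxxx");
                tester.submit();
            });
        }

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}

Each submitted task simulates one concurrent login; wrap the task body with timing code if you also want rough response-time numbers.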

Jun 25, 2015

Telerik Test Studio Runtime setup


Test Studio has many features, and recently we have used its scheduling service for running our automated test lists. Although there is detailed documentation here on how to set up the runtime, many issues came up while doing it manually. Here are a few things that will help in understanding the feature better.


This is how the scheduling service works:
  • When a test list is created and scheduled, only the tests present in that list are uploaded to the storage DB – TestStudioStorage in the SQL Server 2008 instance that is installed (via a customized selection) while installing Test Studio. The DB details can be seen in the file at "C:\Program Files (x86)\Telerik\Test Studio\StorageService\CloudStorageSelfHost.exe.config".
  • The DB then copies all the files to a Temp folder (%Temp%\Projects\), and the file locations are picked up from this location while scheduling.
  • Once a list has been uploaded to the DB, any changes made to the same list later are not updated in the DB – an existing issue in TTS. This requires a manual clean-up of the files in the DB using a tool called DBBrowser.exe, which lets us delete the desired files. Scheduling the list again will then update the DB with the new changes.
  • We can try out the dynamic list option for creating a test list and see whether that updates the project files in the DB automatically.
  • Also, there is an option where you can right-click on the schedule you've created and upload the latest files to the DB. Please see the steps here.

Below are the things to keep in mind while using the Test Studio Scheduling Server:

  • Ensure that the Telerik Scheduling and Storage services are started and running.
  • The port for connections should be correct while installing the Scheduling Server. More info here.
  • The connection to the Execution Server should be up, and it should be available in the list of running services.
  • All the references used at the solution level should be included inside the project folder so that the tests can recognize and map them accordingly.
  • If any relative test paths are used in coded steps (i.e. Test as Step – coded step), those tests must also be uploaded to the storage DB, because, as mentioned in the second point above, tests referenced by relative paths are not copied to the Temp folder. This can be achieved by creating a dummy test list and simply running it – this uploads the test files to the DB (the result of this dummy list can be ignored; it exists only to upload the files). Once these tests are in the DB, they can be called via relative paths from the tests in the test list. Otherwise, we will encounter a "FileNotFoundException" error for the tests in the relative path.

However, there is a minor glitch in the scheduler wherein the first test in the test list is not executed; from the second test onward, the list runs fine. As a workaround, we can add a dummy test as the first test in the list and schedule it so as not to disturb the desired batch of tests.

An alternative – stop using the scheduler, run the test lists locally, and trigger emails after each run of the test list – can be achieved with the following steps:
  • Create a class library with a class that extends IExtensionExecution.
  • Override methods such as OnAfterTestCompleted and OnAfterTestListCompleted, and write custom code to trigger emails, write logs, etc.
  • Build the project and add the resulting DLL to the references of the project which contains the tests/test lists (CSTestUI in our case).
  • When individual tests are run, OnAfterTestCompleted is invoked, and when test lists are executed, OnAfterTestListCompleted is supposed to be invoked.

However, I have checked this behavior and observed that the OnAfterTestCompleted method is invoked at the end of each test and generates the email, but the OnAfterTestListCompleted method is not invoked.
There is a post in the Telerik Test Studio forums (link) which provides a workaround – available in the documentation link. But even the workaround does not seem to work, and the support team did not provide any solution. Even if we prefer to run locally, we can still navigate to the test lists on the local machine through the UI. So, we can continue to use the scheduler to get the result reports.

There could be other workarounds / solutions for these issues which can be updated in this post at a later point of time when found! :)



Jun 16, 2015

Are you storing test results in a database or in flat files? Try Rollbase. You can get better reports and save a lot of time


After executing each test case or suite, we need to store the test results somewhere. Generally we store them in a database or in flat files, or send them out by email.

Higher management generally needs test result statistics such as daily, weekly and monthly test reports. To provide these reports we need another application (maybe a web application) which can read the test results from the database, flat files or emails and display them in a nice format. Sometimes we also need to send test result statistics in an email to higher management.

To develop such an application, QA needs to spend a lot of time, and the team requires some development skills.

In this case, you can try Rollbase.

Rollbase is a platform as a service (PaaS) software solution. You can get rapid application development in the cloud, with minimal coding. Build, deploy and manage cloud-based applications with a single tool designed to get you up and productive fast.

Rollbase is a cloud application and provides a REST API to create records.
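For example, an automated test harness could push each result into a Rollbase object over HTTP as soon as a test finishes. The sketch below only illustrates the idea – the host, session id, object name, field names and the exact REST operation are placeholders/assumptions, so check the Rollbase REST API documentation for the real endpoint and parameters:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class RollbaseResultPublisher {

    // Pushes one test result into a (hypothetical) Rollbase object named TestResult
    public static void publish(String testName, String status) throws IOException {
        String url = "https://your-rollbase-host/rest/api/create"   // placeholder endpoint
                + "?sessionId=" + URLEncoder.encode("YOUR_SESSION_ID", "UTF-8")
                + "&objDefName=" + URLEncoder.encode("TestResult", "UTF-8")
                + "&test_name=" + URLEncoder.encode(testName, "UTF-8")
                + "&status=" + URLEncoder.encode(status, "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");
        // The response code/body tells us whether the record was created
        System.out.println("Rollbase responded with HTTP " + conn.getResponseCode());
        conn.disconnect();
    }

    public static void main(String[] args) throws IOException {
        publish("LoginTest", "PASSED");
    }
}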



Rollbase has many features such as charts, gauges, etc. Using these, you can display test results in a much better format.

Rollbase also has features such as batch jobs and triggers, which you can use to automate the process of generating test result analytics.

Rollbase supports sending emails and SMS and offers a mobile app, so you can get the report anytime and anywhere.

You can also build a dashboard to see the live results on a single page.

Importance of automated tests in Continuous Integration


In today's business world, it has become the norm to work as a distributed team. Take the example of OpenEdge development at Progress Software (the company I work for): the OpenEdge product contains multiple components such as a language, a database, servers and an IDE. These components are developed using multiple technologies like C, C++, Java, ABL and .NET. On top of that, development of the product is distributed among different teams across the world. So an application development environment may involve multiple teams and multiple technologies, with development happening in one place and testing in another, and it is always difficult to catch the right person at the right time.

In addition to distributed teams, with agile development everywhere nowadays, applications are released with a very short time to market. Development is done iteratively, and for each iteration the whole software development lifecycle needs to be completed within that short cycle.

QA faces difficulties keeping up with these fast releases, since the whole regression test of existing functionality needs to be completed along with testing of the new features. On top of the functional testing, they may also need to do load, performance and system testing.

Continuous integration is a good way to minimize regression-testing effort. For continuous integration to succeed, QA relies heavily on individuals writing automated test scripts. The scripts need to be written not only to test the feature being implemented, but also to verify that the feature delivered a year ago still works.

Continuously integrated builds undergo automated testing whenever a developer pushes code to a repository, but running larger tests at specific times is still valuable. During a nightly build, run a full site or application load test for whatever we expect the user base to be at any given time. Then, toward the end of an iteration or sprint, stress the application to its breaking point to set a new bar for how many concurrent users it can handle.

So, including all our automated test suites in continuous integration increases QA's confidence.

Jun 12, 2015

BLUE GREEN DEPLOYMENT (HOT UPGRADE)



In general, deployment requires making changes to the production or testing environments. Deployment techniques differ from product to product; some are simple and some are complex, requiring downtime that can impact customers. Blue-green deployment is a deployment technique that is very simple and low risk: it does not require downtime, and customers are not impacted during the deployment.

In blue-green deployment, we maintain two sets of deployments: a blue deployment and a green deployment. This is not just at the application level; the deployment runs two different versions of the application on two different sets of instances. Here, a different version of the application means incremental changes such as small features, bug fixes or enhancements rolled out to customers. The changes should not require hardware changes, so that recovery is easy and fast if there are issues with either the application or the deployment.


Blue Green Deployment Process:

  1. Bring up the auxiliary resources required to run your application
    • These are things like load balancers and security groups.
  2. Deploy version 1 of your application 
    • All of the traffic from the load balancer goes to this version.
  3. Create version 2 of your application  
    • This usually is different code (new feature, bug fix, etc), but could also be a different instance type or key name.
  4. Switch traffic from version 1 to version 2
    • This is your blue/green deployment (see the sketch after this list). The traffic going through the load balancer no longer goes to Version 1; it goes to Version 2 instead.
  5. Delete Version 1
    •  Once you are happy with how Version 2 is performing, you can delete the resources (instances, etc) that Version 1 was using.
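To make the traffic-switch step concrete, here is a minimal sketch of the idea in plain Java. The class and pool names are hypothetical, and in a real setup the switch happens in the load balancer or router configuration rather than in application code:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class BlueGreenRouter {

    // The pool of backend instances that currently receives live traffic
    private final AtomicReference<List<String>> livePool = new AtomicReference<>();

    public BlueGreenRouter(List<String> initialPool) {
        livePool.set(initialPool);
    }

    // Every request is routed to a backend from whichever pool is live right now
    public String pickBackend() {
        List<String> pool = livePool.get();
        return pool.get((int) (Math.random() * pool.size()));
    }

    // The blue/green switch: a single atomic change, so there is no downtime
    public void switchTo(List<String> newPool) {
        livePool.set(newPool);
    }

    public static void main(String[] args) {
        List<String> blue = Arrays.asList("blue-1:8080", "blue-2:8080");    // Version 1
        List<String> green = Arrays.asList("green-1:8080", "green-2:8080"); // Version 2

        BlueGreenRouter router = new BlueGreenRouter(blue);
        System.out.println("Serving from " + router.pickBackend());  // traffic goes to blue

        router.switchTo(green);                                      // step 4: switch traffic
        System.out.println("Serving from " + router.pickBackend());  // traffic now goes to green
        // Step 5: once green looks healthy, the blue instances can be retired
    }
}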





     

Jun 3, 2015

Automate your daily tasks

Things that you do daily, like installing the latest build or creating the basic environment with which you can start working on your tasks – all such things should be considered for automation.

For me, Apache Ant, shell and batch scripting turned out to be good tools and technologies with which I could automate things quickly.

These tools don't need any extra effort to get in place. For example, if you are using batch scripting, you don't have to download any software to work with it. Similarly, Apache Ant is readily available with most of the products that we work on.

Automating small and simple things speeds up your daily tasks, and you learn more along the way.

Re-use

In our daily work, we often hear or think about re-use. However, we don't always spot the opportunities for re-use, or the time and effort savings that can come from it.

Re-use comes in many forms, from a program created to make future similar programs easier to write, to an automated test created in a way that can be easily reused time and time again.

Re-use can be a powerful and useful thing for everyone. However, we should not get stuck trying to make everything re-usable; recognizing when and where to apply it comes from trial and error, and we don't always get it right. If it's not something you've thought about before, then start looking for opportunities.

The trick, for me at least, is recognizing when the opportunity is there. With a little thought about the future usage of something, you can tell whether it's worth putting in a little extra effort upfront to take away effort later; sometimes it's worth it, other times it simply isn't.