QA Design Gurus: March 2015

Mar 30, 2015

Cross-Site Request Forgery Attack

What is CSRF Attack?
Cross-Site Request Forgery (CSRF) is an attack that forces an end user to execute unwanted actions on a web application in which they're currently authenticated. A malicious request is sent from a different website to a web application that the user is already authenticated against. This way an attacker can access functionality in the target web application via the victim's already authenticated browser. Targets include web applications like social media, in-browser email clients, online banking, and web interfaces for network devices.
Basically, an attacker will use CSRF to trick a victim into accessing a website (planting an exploit URL or script on pages that are likely to be visited by the victim while using the web application) or clicking a URL link that contains malicious or unauthorized requests (sending an unsolicited email with HTML content).
It is called ‘malicious’ since the CSRF attack uses the identity and privileges of the victim, impersonating them in order to perform any actions desired by the attacker, such as changing form submission details or launching purchases or payments to the attacker's or a third party's account.

CSRF Attack using GET Request
Consider a bank application which uses GET requests to transfer money.
If the application was designed to primarily use GET requests to transfer parameters and execute actions, the money transfer operation might be reduced to a request like (transfer 100Rs to User1):
GET http://bank.com/transfer.do?acct=User1&amount=100 HTTP/1.1
Now, an exploit URL could be built using the original URL as (Transfer 100000Rs to User2):
http://bank.com/transfer.do?acct=User2&amount=100000

Now, the attacker disguises the exploit URL as an ordinary link, encouraging the victim to click it:
<a href=" http://bank.com/transfer.do?acct=User2&amount=100000">View my Pictures!</a>
Or as a 0x0 fake image:
<img src=" http://bank.com/transfer.do?acct=User2&amount=100000" width="0" height="0" border="0">
If this image tag were included in an email, the victim wouldn’t see anything. However, the browser will still submit the request to bank.com without any visual indication that the transfer has taken place.

CSRF Attack using POST Request
The only difference between GET and POST attacks is how the attack is executed by the victim's browser. POST requests can be forged using <form> tags, since the request parameters must be supplied as input fields.
Suppose an attacker wants to log in to a web application but does not know any credentials. If he knows the details of the registration page, he can create a user by himself.
Here is how the attacker can craft a POST request to create a user:
    <form action="http://host:port/security/createUser.jsp?action=create" method="POST">
      <input type="hidden" name="submit" value="&#32;&#32;Save&#32;&#32;" />
      <input type="hidden" name="name" value="hacker" />
     <input type="hidden" name="role" value="Administrator" />
      <input type="hidden" name="password" value="test123" />
      <input type="hidden" name="confirmPassword" value="test123" />
      <input type="hidden" name="action" value="create" />
      <input type="submit" value="Submit request" />
    </form>
    <script>
      document.forms[0].submit();
    </script>

This form would normally require the user to click the submit button, but the submission can also be triggered automatically with JavaScript, as in the <script> block above, or with an onload handler:


<body onload="document.forms[0].submit()">

Remedy for CSRF vulnerability
The easiest way to check whether an application is vulnerable is to see if each link and form contains an unpredictable token for each user. Without such an unpredictable token, attackers can forge malicious requests. Focus on the links and forms that invoke state-changing functions, since those are the most important CSRF targets.

“IoT” a challenge for Security Testing

Most of you might have seen the movie Die Hard 4.0. The plot is about a villain who commits a cyber-crime (a “fire sale”) by hacking major government applications and taking control of them; over the course of the movie the protagonist fights him and sets everything right. It is a bit extreme to imagine this in real life, but I am often reminded of it when I look at “IoT”.  Yes, the buzzword we keep hearing, linked to every technology to give it a new face and dimension. To put it simply, IoT is all about getting the current state and behavior of a physical object.  It seems like a new game, but what if it catches on very quickly?

What is it all about?

Let’s try to put it in the form of real-world use-cases and see how it generally works and, more importantly, why it is a challenge for security testing.

Simple use-case

Know if the light in my bedroom is on or off, and if it is on, turn it off.
Devices – one light with a sensor to access it

Little complex use-case

When my car hits someone, the car should send a message for an ambulance, my insurance company should be notified of any damage, the traffic control signals should know there was an accident and provide alerts/instructions to all the vehicles, the ambulance should be able to look at the medical history of the person who was hit and provide the right first aid, and on and on and on...

Devices – car sensors, traffic devices to multicast messages, wearables, etc.


Now something more complex


Let me not explain this one and leave the use-case to your imagination :)

Well, current applications are already quite huge and talk among heterogeneous platforms/systems, but IoT is a little different. You program the device to react to certain events or actions. In the chaos of data transfer across multiple applications, what if someone breaches in? This lays a huge responsibility on testers to certify IoT applications. It is an altogether new area for testers too, and the reason is the number of new technologies:

                 devices, sensors, chips, network, signals, multicast, gateways

-        IoT is all about devices, which are the physical objects, sensors and chips.
-        Data transmission will mostly be multicast (just like push notifications).
-        Gateways – the interface through which machines/smart machines/sensors/devices communicate.

Analytics and Big Data (this one is not new, but will be heavily used)

                Now, with the number of devices/wearables we might have with IoT applications, the amount of data that we need to store, monitor and access would be very large, in addition to the existing customer/business use-case data. Big data might be a key player here, to maintain and access such data across different structured/non-structured databases in a very short time across multiple devices. Data comes with a history, and one needs to monitor that history and get its statistics; that is where we see the role of “Analytics”.

Connection protocols

                HTTP has been there for a very long time, and it might be supplanted by newer connection protocols like CoAP and MQTT.

      Above all, Security

                What kind of protocols are going to be used, if not HTTP, to make connections secure? There might be WPA-level security, but we are talking about security while accessing the data. Are there any new, safer authentication mechanisms for M2M or M2D communications? And if not, what kind of SLAs would be followed across the industry for security?


In future we will most likely see cloud platform offerings for building IoT applications, but validating an IoT application, or certifying that it is not vulnerable to attacks, will be a serious concern. If we look at IoT merely as the adoption of a new technology, then it is as good as testing a new mobile feature; but when we look at it as an integration of technologies, a little ink, a white-board, techies (like you) and a word document might give us a wonderful strategy for security testing of IoT.

Mar 29, 2015

Let's hack JavaScript

Well yes, most applications are HTML5/CSS/JS based, and it’s quite easy to build them too with the kind of libraries/frameworks available in the market. When we talk about security testing of these kinds of applications, we mostly come across common security breaches like SQL Injection, Cross-Site Scripting and Cross-Site Request Forgery, which can be detected using “Fiddler”, a Progress Telerik tool; but a safe/secure application might have many layers of security, where these major attacks are not the only good candidates for hacking the system.

Something about JavaScript Obfuscation

HTML5/CSS/JS makes our job much easier: we can develop a web application and a mobile application at the same time with minimal changes.  But as soon as the application is deployed and made publicly accessible, everyone out there can access your JavaScript code from the browser, look at the logic and try to understand (if they can) what it does. So how far is the code secure?
That is where code obfuscation comes in. Code obfuscation changes the code (while the functionality still runs) and makes it non-readable/understandable.

For example, here is a simple addition code in JavaScript

Before Obfuscation

 var z = 2 + 3;
document.write(z);

After Obfuscation

eval(function(p,a,c,k,e,d){e=function(c){return c};if(!''.replace(/^/,String)){while(c--){d[c]=k[c]||c}k=[function(e){return d[e]}];e=function(){return'\\w+'};c=1};while(c--){if(k[c]){p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c])}}return p}('1 0=2+3;4.5(0);',6,6,'z|var|||document|write'.split('|'),0,{}))

So if a simple two-line statement looks this complex, imagine how a 100-line JavaScript file would look. Code obfuscation simply follows a mechanism to encode your code and make it non-understandable.
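As a toy illustration of one step an obfuscator performs, here is a naive identifier-renaming sketch; a real obfuscator parses the code properly, so this regex approach and the `_0x` naming scheme are assumptions for illustration only:

```javascript
// Naive sketch of a single obfuscation step: renaming identifiers.
// Real obfuscators parse the source into an AST before rewriting it.
function renameIdentifiers(source, names) {
  let out = source;
  names.forEach((name, i) => {
    // \b ensures we only replace whole identifiers, not substrings.
    out = out.replace(new RegExp('\\b' + name + '\\b', 'g'), '_0x' + i);
  });
  return out;
}

const original = 'var total = price + tax; document.write(total);';
const obfuscated = renameIdentifiers(original, ['total', 'price', 'tax']);
// obfuscated: 'var _0x0 = _0x1 + _0x2; document.write(_0x0);'
```

The functionality is unchanged, but meaningful names are gone, which is exactly what makes reading obfuscated code so hard.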
Well, how is this whole write-up related to testing? JavaScript obfuscation was explained to highlight the “malicious obfuscated JavaScript” attack, which is considered one of the top internet security threats. Some smart hackers can send malicious obfuscated JavaScript which cannot be detected at our presentation layer or by available anti-virus software. By doing this they can inject the data they want, access the UI elements and divert traffic to their own domains.

Access/Policies/Controls inside JavaScript

The above section was about malicious data injection, but now let’s talk about how we can stop unwanted users from accessing our application.

Content-Security-Policy (CSP) – This policy defines a set of rules on what kind of request, with a specific URL/domain/HTTP type, can access the application. If you are going to execute JavaScript inside the page, you can define the allowed script sources inside the CSP, which will not allow hackers to execute their scripts. In addition, we can also be notified if any unwanted request (from an unwanted domain) tries to access the application. CSP really helps in stopping Cross-Site Scripting attacks.

Accessing different-origin resources safely using CORS – One important concept of web application security is the “same-origin policy”, which means a hosted web page can access the resources of its own domain but not the resources of another domain; if you try, the browser will not allow it. In the cloud world, applications access resources from different domains, hence this policy has to be relaxed securely. To achieve this, JSONP (JSON with Padding) was introduced to call services in another domain: from JavaScript the user invokes an HTTP request and gets back the response from the domain to which the request was sent. But JSONP has a security vulnerability; it just passes the response to the requesting page without doing any validation of the data (which might inject malicious JavaScript). Hence JSONP is a security vulnerability and should not be used for accessing resources from a different domain.

As a better alternative, Cross-Origin Resource Sharing (CORS) was introduced, which can access resources from a different domain for all HTTP verbs (JSONP could only do GET). In addition, the exchange with the other domain is validated against the declared policies before the response is accepted. In CORS terminology the first request is termed the “pre-flight” request, which performs the validation and then, based on the rules, allows the actual request. You can have a look at Progress OpenEdge Mobile to see how CORS has been used to access resources from another domain.
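The server-side half of this handshake can be sketched as follows; the allowed origin list and the helper name are assumptions for illustration, not a full CORS implementation:

```javascript
// Origins this hypothetical API trusts.
const ALLOWED_ORIGINS = new Set(['https://app.example.com']);

// Compute the CORS response headers for a request, or reject it.
function corsHeaders(origin, method) {
  if (!ALLOWED_ORIGINS.has(origin)) return null; // unknown origin: no CORS headers
  const headers = {
    'Access-Control-Allow-Origin': origin,
    'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE',
  };
  // A pre-flight (OPTIONS) request is answered with the policy alone,
  // before the browser sends the actual request.
  if (method === 'OPTIONS') headers['Access-Control-Max-Age'] = '600';
  return headers;
}
```

The browser only hands the response to the calling page when these headers approve the origin, which is the validation step JSONP never had.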


The above concepts help us understand the best practices that need to be followed while developing JavaScript applications to keep them safe from security attacks; hence, while testing JavaScript applications, we can look through these concepts and validate the level of vulnerability.

Soft Assertions with TestNG



All of us write automated tests and do assertions as per our test cases. Each test case might have one or more assertions. Assume you have automated a test case that has more than one assertion. We all know that if the first assertion in the test case fails, the test case fails and the remaining assertions are not validated.
For example, say you automated a test case which validates UI verification in the webpage, DB verification in the back-end system, etc. Now you want your test case to validate both the UI and DB verifications even if one of them fails. That is not possible with a hard assertion, as the test case fails immediately when an assertion condition fails.
TestNG supports the following assertions.
Hard Assertions:
Tests fail immediately and stop executing the moment an assertion fails. You may want to use a hard assertion when a precondition of the test case fails and there is no point in executing the test case further.
Soft Assertion:
Tests don’t stop running even if an assertion condition fails, but the test itself is marked as failed to indicate the right result. This is useful if you are doing multiple validations, like verifying multiple UI page elements, DB verification, etc., and you want to assert the DB even if one of the UI assertions fails, failing the test case once all the validations are complete (or passing it if there were no failures).
Example:
package automation.tests;

import org.testng.annotations.Test;
import org.testng.asserts.Assertion;
import org.testng.asserts.SoftAssert;

public class Sample {
  private Assertion hardAssert = new Assertion();
  private SoftAssert softAssert = new SoftAssert();

  @Test
  public void testForHardAssert() {
    // Fails and aborts this test immediately.
    hardAssert.assertTrue(false);
  }

  @Test
  public void testForSoftAssertWithNoFailure() {
    // The failure is recorded, but without assertAll() the test still passes.
    softAssert.assertTrue(false);
  }

  @Test
  public void testForSoftAssertionFailure() {
    softAssert.assertTrue(false);
    softAssert.assertEquals(1, 2);
    // Collates all recorded failures and fails the test if there are any.
    softAssert.assertAll();
  }
}
If you look at the test case (testForSoftAssertionFailure), softAssert.assertAll() does the trick instead of you writing your own custom logic. This method collates all the failures and decides whether to fail the test or not. So instead of writing custom logic, the TestNG library itself offers the facility to perform soft assertions in your tests.

Mar 28, 2015

Do you care about your application code (Code Coverage)? Have your tests covered 100% of Code (Test Coverage)?




Code coverage refers to the number of lines of code exercised when the application is running. Test coverage refers to the test cases written against the requirements document. Both are important for quality assurance personnel to get an indication of how thoroughly the product has been tested.
Measuring code coverage is a technique for understanding exactly which application code is being exercised. There are tools that indicate which code is exercised when running through test cases. For example, suppose there is a code path that executes only under an error condition. We should write a test case to simulate the error condition and then verify that the error message is displayed. This type of testing is white-box testing: you need to see the application code in order to assess the code coverage.
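To make the error-condition example concrete, here is a small JavaScript sketch; the `withdraw` function and its error message are hypothetical, chosen only to illustrate a branch that stays unexercised until a test deliberately triggers it:

```javascript
// Hypothetical function with an error-only branch. Without a test that
// passes an invalid amount, a line-coverage tool would report the throw
// statement as never exercised.
function withdraw(balance, amount) {
  if (amount <= 0 || amount > balance) {
    throw new Error('invalid withdrawal'); // error path: needs its own test
  }
  return balance - amount;
}

// Test simulating the error condition, covering the branch above.
let covered = false;
try {
  withdraw(100, -5);
} catch (e) {
  covered = e.message === 'invalid withdrawal';
}
```

A coverage report run before this test would flag the `throw` line as uncovered; after it, the function reaches 100% line coverage.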
In my opinion, code coverage metrics give us important insight into how effective our tests are and which parts of the source code are thoroughly exercised. You can look at a code coverage report and find specific code areas which are not exercised by your tests. Based on that, you can add new test cases or modify existing ones to cover all areas of your application.
Test coverage, on the other hand, refers to the set of test cases written against the requirements specification document. This is black-box testing, and you are not required to go through the code to write the test cases.
Benefits of code coverage measurement:
·         To identify areas in the source code where test cases need to be added so that coverage can improve.
·         It helps in determining the quality of the product by measuring code coverage.
·         Ensuring that testing completely exercises the application with planned and exploratory tests.
·         To cover all the workflows in terms of decision trees in the code.
·         Increased confidence for product release.

Understanding Encryption Systems



Nowadays most applications are moving to the cloud, and while organizations are positive about the cloud, they are also concerned about security. In short, security is about protecting data from unauthorized use. Protocols such as HTTPS help, but they are not always enough, so the next question is: what more can we do? Here the concept of encryption becomes prominent.
Encryption is another way to enhance the security of a message or file by scrambling the contents so that it can be read only by someone who has the right encryption key to unscramble it.

For example, given a single application collecting an account number from each customer, you could encrypt it in any of several different places: the application, the database, or storage — or use tokenization instead. The data is encrypted (or substituted), but each place you might encrypt raises different concerns. What threats are you protecting against? What is the performance overhead? How are keys managed? Does it all meet compliance requirements? And today we see encryption growing at an accelerating rate in data centers.
Today we help you pick the best encryption options for your projects. Our focus is on encrypting in the data center: applications, servers, databases, and storage. We will also cover tokenization and discuss its relationship with encryption.
Understanding Encryption Systems
Let’s begin with the basics, which are still important to put into practice.
Three major components are involved in building an encryption system:
Data: The object or objects to encrypt. It might seem silly to break this out, but the security and complexity of the system depend on the nature of the payload, as well as where it is located or collected.
Encryption Engine: This component handles actual encryption (and decryption) operations.
Key Manager: This handles keys and passes them to the encryption engine.

In a basic encryption system all three components are likely located on the same system. As an example take personal full disk encryption (the built-in tools you might use on your home Windows PC or Mac): the encryption key, data, and engine are all stored and used on the same hardware. Lose that hardware and you lose the key and data — and the engine, but that isn’t generally a concern for FDE. (Neither is the key, usually, because it is protected with another key, or passphrase, that is not stored on the system — but if the system is lost while running, with the key in memory, that becomes a problem.)

For data centers these components are likely to reside on different systems, increasing complexity and security concerns over how they work together.
Building an Encryption System
In general we split the application into components, for example the encryption engine in an application server, the data in a database, and key management in an external service or appliance.
-       Application, which collects the data.
-       Database, which holds the data.
-       Files, where the data is stored.
-       Storage volume, where the files reside.
Encrypting the data at the application level protects it all the way down the stack, but it adds complexity to the system and is not always possible.
Here is an example. Let’s say someone tells you to “encrypt all the account numbers” in a particular application. We will further say the reason is to prevent loss of data in case a database administrator account is compromised.
The data isn’t necessarily moving, but we want separation of duties to protect the database even if someone steals administrator credentials. Encrypting at the storage volume layer wouldn’t help, because a compromised administrative account still has access within the database. Encrypting the database files alone wouldn’t help either, because for the database to work, authorized users operate inside the database, where the files are no longer encrypted.
Encrypting within the database is an option, depending on where the keys are stored (they must be outside the database) and some other details which we will get to later. Encrypting in the application definitely helps because that is completely outside the database. But in either case you still need to know when and where an administrator could potentially access decrypted data.
In summary, it all ties together: know why you are encrypting, where you can encrypt, and how to position the components to achieve the required security.
That covers the basics of encryption systems. The next section goes into the layers of data encryption.
Encryption Layers
You can picture enterprise applications as a layer cake: applications sit on databases, databases on files, and files are mapped onto storage volumes. You can use encryption at each layer in your application stack: within the application, in the database, on files, or on storage volumes. Where you use an encryption engine dominates security and performance.
Higher up the stack can offer stronger security, with higher complexity and performance cost.
There is a similar tradeoff with encryption engine and key manager deployments: more tightly coupled systems offer less complexity, but also weaker security and reliability. Building an encryption system requires a balance between security, complexity, and performance. Let’s take a closer look at each layer and the various tradeoffs.

Application Encryption
One of the more secure ways to encrypt application data is to collect it in the application, send it to an encryption server or appliance (or an encryption library embedded in the application), and then store the encrypted data in a separate database. The application has full control over who sees what so it can secure data without depending on the security of the underlying database, file system, or storage volumes. The keys themselves might be on the encryption server or could be stored in yet another system. The separate key store increases security, simplifies management of multiple encryption appliances, and helps keep keys safe for data movement: backup, restore, and migration/synchronization to other data centers.

Database Encryption
Relational database management systems (RDBMS) typically offer two encryption options: transparent and column. In the layer cake above columnar encryption occurs as applications insert data into the database, whereas transparent encryption occurs as the database writes data out. Transparent encryption is applied automatically to data before it is stored at the file or disk layer. In this model encryption and key management happen behind the scenes, without the user’s knowledge, and without requiring application programming. The database management system handles encryption and decryption operations as data is read (or written), ensuring all data is secured, and offering very good performance. When you need finer control over data access you can encrypt single columns, or tables, within the database. This approach offers the advantage that only authenticated users of encrypted data are able to gain access, but it requires changing database or application code to manage encryption operations. With either approach there is less burden on application developers to build a crypto system, but slightly less control over who can access sensitive data. Some third-party tools also offer transparent database encryption by automatically encrypting data as it is stored in files.
These tools aren’t part of the database management system itself, so they can work with databases that don’t support TDE directly; they provide greater separation of duties for database administrators, as well as better protection for file based output like reports and logs.
File Encryption
Some applications, such as payment systems and web applications do not use databases; instead they store sensitive data in files. Encryption can be applied transparently as the data is written to files. This type of encryption is either offered as a third-party add-on to the file system, or embedded within the operating system. Encryption and decryption are transparent to both users and applications. Data is decrypted when a user requests a file, after they have authenticated to the system. If the user does not have permission to read the file, or has not provided proper credentials, they only get back useless encrypted data. File encryption is commonly used to protect “data at rest” in applications that do not include encryption capabilities — including legacy enterprise applications and many big data platforms.

Disk/Volume Encryption
Many off-the-shelf disk drives and Storage Area Network (SAN) arrays include automatic data encryption. Encryption is applied as data is written to disk, and decrypted for authenticated users/applications when requested. Most enterprise class systems hold encryption keys locally to support encryption operations, but rely on external key management services to manage keys and provide advanced key services such as key rotation. Volume encryption protects data in case drives are physically stolen. Authenticated users and applications have access to unencrypted data.

Tradeoffs
In general, the further “up the stack” you deploy encryption, the more secure your data is. The price of that extra security is more difficult integration, usually in the form of application code changes. Ideally we would encrypt all data at the application layer and fully leverage user authentication, authorization, and business context to determine who can see sensitive data. In the real world, the code changes required for this level of precise control often pose insurmountable engineering challenges or are cost-prohibitive. Surprisingly, transparent encryption often performs faster than application-layer encryption, even with larger data sets. The tradeoff is moving high enough “up the stack” to address relevant threats while minimizing the pain of integration and management. Later in this series we will walk you through the selection process in detail.

So far we have covered the components involved in encryption, but that alone is not sufficient to build a secure application. Stay tuned to learn about one of the most important factors in security, key management options, and then how to implement it in the cloud.