Tag Archives: testing

Java testing using XStream, AspectJ, Vsdf

Problem
A unit or integration test may require the creation of complex object structures. This is even more problematic in legacy systems that were not created with testability in mind. Faced with this, it may seem better in the short term to avoid programmatic tests and continue to rely on traditional test techniques.

Solution
A possible short term approach is to use object serialization to capture the required objects as they are used in the actual system. Then a test can rebuild the objects and use them in various tests. We will not elaborate on the shortcomings of this approach.

XStream
There are many approaches to object serialization in Java: JAXB, JavaBean serialization, and so forth. All of them have issues. For example, many require that the object to be serialized conform to certain requirements, like the JavaBean specification. If the objects don’t, there are ways around it, but this quickly becomes complex: not only must the top-level object be ‘handled’, but so must its nested object graph. The XStream library does not have these requirements and so can probably handle a large percentage of the use cases.

Example Test
Listing 1 below is a unit test using XStream. A class called XApp contains a scenario1 method that we wish to test. To invoke scenario1, a lot of data must be created and structured into a required object hierarchy; here we just use a simple map to represent that. Thus, the test creates an object from an XStream XML serialization and then invokes the scenario1() method of the system under test.

Listing 1, test
package com.octodecillion.testing;

import static org.hamcrest.core.Is.is;
import static org.junit.Assert.assertThat;

import java.io.File;
import java.io.IOException;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;

import com.octodecillion.testing.XApp.Inventory;
import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.xml.StaxDriver;

/**
 * @author jbetancourt
 *
 */
@RunWith(JUnit4.class)
public class InventoryTest {
	/**
	 * @throws IOException 
	 * 
	 */
	@SuppressWarnings("boxing")
	@Test
	public void scenario1_test() throws IOException {
		Inventory inventory = (Inventory) new XStream(new StaxDriver())
				.fromXML(new File(
				"inventory-scenario1.xml"));

		XApp app = new XApp();
		app.setInventory(inventory).scenario1();
		assertThat(app.getInventory().stock.size(), is(1));
	}

}

AOP
Since there are no mappings or modifications to objects, serializing with XStream takes a couple of lines of code. We can insert this code (directly or via a utility) wherever we need it in the system under test to capture a state for future testing. However, this litters the code with serialization concerns. We can remove them after we capture the object, of course.
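For illustration, a minimal sketch of such an inline capture, assuming an Inventory instance named inventory is in scope at the capture point (plus the XStream imports from Listing 1 and java.io.FileWriter/Writer):

Writer writer = new FileWriter("inventory-scenario1.xml");
new XStream(new StaxDriver()).toXML(inventory, writer);
writer.close();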

An alternative is to use Aspect Oriented Programming. With AOP we can ‘advise’ at ‘joinpoints’ to capture objects. This can be done with Load-Time Weaving. The original source is unmodified and to recapture the same objects we just rerun the system and reapply AOP LTW.

In listing 2, the AspectJ AOP Java framework is used to create an aspect to capture the data object at the setScenarioData() method in the class under test, XApp.

Listing 2, object capture
package com.octodecillion.testing;

import java.io.File;
import java.io.FileWriter;

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.xml.StaxDriver;

/**
 * Aspect to allow XStream streaming of application data object.
 * 
 * @author jbetancourt
 *
 */
public aspect XStreamObject {
	/**
	 * Capture the App inventory object during creation of data 
	 * and stream to file.
	 * 
	 */
	@SuppressWarnings("javadoc")
	after(final XApp app) : 
		execution( protected void setScenarioData()) && 
		!within(XStreamObject) && target(app) {
		
		try {
			FileWriter writer = new FileWriter(
					new File("inventory-scenario1.xml"));
			new XStream(new StaxDriver()).toXML(app.getInventory(), writer);			
			writer.close();			
		} catch (Exception e) {
			e.printStackTrace();
		}		
	}	
}

Storing using Vsdf
One problem with using serialization for tests is where to store the serializations. If we have a complex app (which we do, or else why go through all this), there can be many objects, and these can be in various states depending on the test scenario requirements.

In the prior posts Simple Java Data File and very simple data file I presented a concept for just this scenario. With Vsdf, multiple serializations can be stored in one file. Thus we can store all streamed objects in one file, or the various serialized states of an object type can be stored per Vsdf file. Some advantages of this approach are the reduction in test files and the ability to change or update individual serializations.
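As a rough illustration of the idea only (this is a hypothetical sketch, not the actual Vsdf format or API; see the linked posts for those), several named serializations could be kept together and written out as one file:

import java.io.FileWriter;
import java.io.Writer;
import java.util.LinkedHashMap;
import java.util.Map;

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.xml.StaxDriver;

public class MultiStateStoreSketch {
	public static void main(final String[] args) throws Exception {
		XStream xstream = new XStream(new StaxDriver());
		// Several named serializations in one map; the map itself is then
		// serialized, giving a single file holding multiple object states.
		Map<String, String> store = new LinkedHashMap<String, String>();
		store.put("inventory-scenario1", xstream.toXML(new XApp().getInventory()));
		Writer writer = new FileWriter("inventory-states.xml");
		xstream.toXML(store, writer);
		writer.close();
	}
}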

Example Class to test
In listing 3 below is the example class to test. It is more of a “Hello World” type of app.

Listing 3, target class to test
package com.octodecillion.testing;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

/**
 * Simple XStream example.
 * 
 * @author jbetancourt
 *
 */
public class XApp {

	private Inventory inventory;
	
	/**
	 * The scenario that needs testing.
	 */
	public void scenario1() {
		// do the business case here ...
	}

	/** default constructor */
	public XApp() {
		setScenarioData();
	}

	/**
	 * The Application entry point.
	 * @param args
	 * @throws IOException 
	 */
	public static void main(final String[] args) throws IOException {
		new XApp();
	}

	/**
	 * set up the data for scenario1
	 */
	protected void setScenarioData() {
		inventory = new Inventory();
		inventory.add("322", new Vehicle() {
			@Override
			public int getNumWheels() {
				return 4;
			}

		});
	}

	/**
	 * @author jbetancourt
	 *
	 */
	public class Inventory {
		/**		 */
		Map<String, Vehicle> stock = new HashMap<String, Vehicle>();

		/**
		 * @param id
		 * @param veh
		 */
		public void add(final String id, final Vehicle veh) {
			stock.put(id, veh);
		}

	} // end class Inventory

	/**
	 * @author jbetancourt
	 *
	 */
	public interface Vehicle {

		/**
		 * @return num of wheels
		 */
		public int getNumWheels();

	} // end Vehicle

	/**
	 * @param inventory
	 * @return the app object
	 */
	public XApp setInventory(final Inventory inventory) {
		this.inventory = inventory;
		return this;
	}

	/**
	 * @return the inventory
	 */
	public Inventory getInventory() {
		return inventory;
	}

} // end class XApp

Issues
Sounds great, but what happens when a class that was serialized is modified, for example, gains new fields? How do we handle versioning and so forth?
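One partial mitigation, assuming a reasonably recent XStream version, is to skip elements in old serializations that no longer map to a field; fields added to the class since the capture are simply left at their JVM default values:

XStream xstream = new XStream(new StaxDriver());
xstream.ignoreUnknownElements(); // old XML elements with no matching field are skipped
Inventory inventory = (Inventory) xstream.fromXML(new File("inventory-scenario1.xml"));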


Java JMockIt mocks via Spring DI

How to use Dependency Injected mock objects to allow Integration Testing of a Java application.

When working with some legacy code bases, the introduction of Dependency Injection has limitations. Thus, various forms of ‘instrumentation’ will be required to reach the “last mile”. A major difficulty with legacy code is instantiation of objects with the “new” operator.

One form of instrumentation is a modern mocking framework, like JMockIt; another is the use of AOP with, for example, AspectJ.

As part of an evaluation of a possible approach I looked into using DI for data driven Integration Testing. Is it possible to use declarative specification of JMockIt mock objects? That is, can they be specified in a Spring XML bean configuration file and loaded via Spring Framework DI? This is in lieu of Using AspectJ to dependency inject domain objects with Spring or some other framework.

Short answer, yes. Useful technique? Probably not. But …

I was able to solve this by looking at the JMockIt source code for the mockit.internal.startup.JMockitInitialization class.

From JMockIt source repository:

final class JMockitInitialization
{
  ... snipped ...
  private void setUpStartupMocksIfAny()
   {
      for (String mockClassName : config.mockClasses) {
         Class<?> mockClass = ClassLoad.loadClass(mockClassName);

         //noinspection UnnecessaryFullyQualifiedName
         if (mockit.MockUp.class.isAssignableFrom(mockClass)) {
            ConstructorReflection.newInstanceUsingDefaultConstructor(mockClass);
         }
         else {
            new MockClassSetup(mockClass).setUpStartupMock();
         }
      }
   }
  ... snipped ...
}

See the “Using mocks and stubs over entire test classes and suites” in the tutorial for further info.

Turned out to be easy to create a Spring XML configuration and a simple mocked “hello world” proof of concept. The web server was instrumented dynamically!
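For flavor, a minimal sketch of what such a startup mock could look like. The HelloService class and greet() method are illustrative, not from the original experiment; the key point is that constructing a MockUp subclass applies the mock, so an eagerly instantiated Spring singleton bean definition, e.g. <bean class="com.example.HelloServiceMock"/>, installs it when the context loads.

import mockit.Mock;
import mockit.MockUp;

/** Hypothetical startup mock; instantiating it applies the mock. */
public class HelloServiceMock extends MockUp<HelloService> {
	@Mock
	public String greet() {
		return "hello from the mock";
	}
}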

Updates
One thing I could not figure out was how to destroy existing mocks that were created in the Tomcat app server. Calling Mockit.tearDownMocks() had no effect. Admittedly, JMockIt is meant for use in JUnit-based tests, so this may be pushing it.

Further reading

  1. Hermetic Servers. This shows that the above idea is useful and not unique.

Behavior counters for improved JUnit tests

A weak unit test may give false assurance if it is not responsive to API and test data changes.

Introduction
Recently I came across, and even contributed, some unit tests that were of low quality. Not that they did not test what we wanted; rather, when certain classes were changed the tests did not break, they only seemed to still be testing the original API. Since I can’t duplicate the actual tests in this blog post, I’ll make up something very simple to illustrate the issue.

Example
In listing one below, a Revenue class is designed to merely illustrate the concept. The class contains a simple list of revenue per salesperson, for example.

Listing 1, example SUT

package com.octodecillion.junit;

import java.util.ArrayList;
import java.util.List;

public class Revenue {
   private List<Integer> users;

   public List<Integer> getUsers() {
     return users;
     // future bad change:
     //return new ArrayList<Integer>();
   }

   public void setUsers(List<Integer> users) {
    this.users = users;
   }
}

In listing 2 below we access this list and assert that each revenue is positive. Yeah, doesn’t make any business sense, bear with me.

Listing 2, A JUnit test of the Revenue class

package com.octodecillion.junit;

import static org.junit.Assert.*;
import java.util.ArrayList;
import org.junit.Before;
import org.junit.Test;

/**  */
public class RevenueTest {
	private Revenue revenue;
	private ArrayList<Integer> list;

	@Before
	public void setUp() throws Exception {
	 list = new ArrayList<Integer>();
	 revenue = new Revenue();
	}

	@Test
	public void should_have_positive_revenue() {
	  list.add(25);
	  list.add(99);
	  revenue.setUsers(list);
	  	
	  for (Integer tax : revenue.getUsers()) {
	    assertTrue(tax > 0);
	  }
	}
}

Problem
Listing 2 above will correctly test the Revenue class. But what would happen if the Revenue class’s getUsers() method is accidentally changed to return a new list that has no entries? The test as written will still pass! Since the list has no entries, the for loop is skipped and the assertion is never executed. We can protect against this by testing the size of the list, like so:

assertTrue(revenue.getUsers().size() > 0);

But in a complex test there may be a limit on how extensive such proactive testing can be. Or, through lack of skill or simple error, the test could have been written incorrectly. Worst case, the unit or integration tests are bogus and the real problems will be found in production. What can be done?

Writing tests for the tests would be a “turtles all the way down” scenario. Instead, we need to put inline behavioral coverage in the tests themselves. By contrast, in normal behavioral testing with Mock Objects, it is the system under test (SUT) that gets the behavioral verification, for example, Behavior-based testing with JMockit.

Solution
Listing 3 below shows an alternative: we use an execution counter. For each block that must be executed within the unit test, not within the SUT, we increment a counter. At the end of the test we assert that the counter has a specific value. This ensures that no block is inadvertently skipped due to SUT changes or test data values.

Listing 3, JUnit test using execution counter

package com.octodecillion.junit;

import static org.junit.Assert.*;
import java.util.ArrayList;
import org.junit.Before;
import org.junit.Test;

public class RevenueTest {
	private Revenue revenue;
	private ArrayList<Integer> list;
	private InvokeCounter invokeCounter = new InvokeCounter();

	@Before
	public void setUp() throws Exception {
	  list = new ArrayList<Integer>();
	  revenue = new Revenue();
	}

	@Test
	public void should_have_positive_revenue() {
	  list.add(25);
	  list.add(99);
	  revenue.setUsers(list);
		
	  for (Integer tax : revenue.getUsers()) {
	    invokeCounter.increment();
	    assertTrue(tax > 0);
	  }	
		
	  invokeCounter.assertCounterMin(1);
	}

}

Implementation
Listing 4 below is a possible implementation of an invocation counter class. It was updated (2012-11-19) to allow named counters.

Listing 4, Invocation counter implementation Gist

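The Gist source is not reproduced in this archive, so here is a minimal sketch of such a counter, consistent with the increment() and assertCounterMin() calls used in Listing 3, and supporting named counters:

package com.octodecillion.junit;

import static org.junit.Assert.assertTrue;

import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of an invocation counter for JUnit tests. */
public class InvokeCounter {
	private static final String DEFAULT = "__default__";
	private final Map<String, Integer> counters = new HashMap<String, Integer>();

	/** Increment the default counter. */
	public void increment() {
		increment(DEFAULT);
	}

	/** Increment a named counter. */
	public void increment(final String name) {
		Integer count = counters.get(name);
		counters.put(name, (count == null) ? 1 : count + 1);
	}

	/** Assert that the default counter reached at least min. */
	public void assertCounterMin(final int min) {
		assertCounterMin(DEFAULT, min);
	}

	/** Assert that a named counter reached at least min. */
	public void assertCounterMin(final String name, final int min) {
		Integer count = counters.get(name);
		int actual = (count == null) ? 0 : count;
		assertTrue("counter '" + name + "' was " + actual
				+ ", expected at least " + min, actual >= min);
	}
}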
Alternatives
The main alternative to the above technique is, of course, writing better tests. Another alternative is to use code coverage reporting for the JUnit tests. In a ‘test infected’ organization this would work: the reports are scrutinized, thresholds are set, and so forth. Alas, this is a passive approach and relies too much on manual intervention.

Summary
Discussed was a possible weakness of unit tests. An approach using invocation counters to assert block coverage of a test was presented.

Links

  1. Code Coverage
  2. JUnit
  3. junit.org seems to be down
  4. JUnit wikipedia entry
  5. JMockit
  6. Unit Testing

Test Coverage Using JMockit

The JMockit Unit Testing library continues to astound. One new thing I discovered is its Coverage reporting.

Code Coverage
Code coverage is simply a measurement of what code has been actually run when tests are executed. There are many such measures, ramifications, and tools. Like testing itself, code coverage measurement is probably not done enough, or misused.

“Code coverage tells you what you definitely haven’t tested, not what you have.” — Mark Simpson in comment

Path Coverage
There are plenty of coverage reporting tools out there. What this one also includes is Path coverage, which is different from branch or line coverage. Paths are the possible execution paths from entry points to exit points: if you visualize a method’s statements as a directed graph, the paths are an enumeration of the possible edge sequences traversed when that method is invoked. So Path coverage is inclusive of Branch coverage. Well, I’m not a testing expert, so this may be way off.
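A tiny made-up example of the difference: two independent if statements give four possible paths. Two tests (true/true and false/false) already yield 100% branch coverage, yet they execute only two of the four paths.

public int classify(final boolean a, final boolean b) {
	int result = 0;
	if (a) { // branch point 1
		result += 1;
	}
	if (b) { // branch point 2
		result += 2;
	}
	return result; // possible paths: TT, TF, FT, FF
}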

The results can be very surprising. For example, you run a coverage report with a tool such as Cobertura or Emma and feel very happy that you exercised every line and branch with your tests. Then you run the same tests with JMockit Coverage and discover your tests didn’t cover all the paths! Not only that, your line coverage wasn’t so great either.

Report
JMockit explicitly gives you a report showing:

Path
Measures how many of the possible execution paths through method/constructor bodies were actually executed by tests.
The percentages are calculated as 100*NPE/NP, where NP is the number of possible paths and NPE the number of fully executed paths.

Line
Measures how much of the executable production code was exercised by tests. An executable line of code contains one or more executable segments.
The percentages are calculated as 100*NE/NS, where NS is the number of segments and NE the number of executed segments.

Data
Measures how many of the instance and static non-final fields were fully exercised by the test run. To be fully exercised, a field must have the last value assigned to it read by at least one test. The percentages are calculated as 100*NFE/NF, where NF is the number of non-final fields and NFE the number of fully exercised fields.

— from the JMockit coverage report HTML page

Other information is found by using the full HTML output option.

Example
A sample JMockit coverage report is here. Of course, you can drill down into various parts of the HTML page. For example, when you click on an exercised line you get a list of what invoked that line.

Worth it?
Are the metrics such as Path coverage that this tool generates accurate? Is JMockit Coverage a replacement for other tools such as Cobertura? I don’t know. For most projects, resource constraints would probably make generating coverage with multiple tools prohibitive.

Evaluation
One possible approach to evaluating coverage tools is to use actual results from the target application. Take the list of bugs and correlate them to each coverage tool’s report. Where were the bugs? Which tool gave the lowest measure for those locations? True, a ‘bug’ is not always a code problem, nor is it limited to one ‘unit’, which is what a unit test targets.


Unit Testing what will never happen?

Some developers refuse or are slow to test units for certain values or situations because, they claim, those values or situations will not occur.

Some reasons given are that the presentation tier or UI will prevent certain values; that other modules in the call chain already have error handling and validation. So why test something that won’t happen?

Let’s take an example. In the method/function below, the type will never be blank or null. Looking at the actual source code for the application will show this. So should a unit test be written that will invoke this method with a null, “”, or ” “?

public boolean service(final String type){
    boolean result = false;
    // do stuff
    return result;
}

Yes!
1. Things change.
2. Bad stuff happens.
3. The development process will use invalid values.
4. More complete testing: regression testing, path coverage, etc.
5. The method could later be used in a different call chain.
6. It’s a public method; anything can invoke it, even a business partner via some remoting technology.
7. When invoked as part of other unit tests, it could receive invalid values.

#3 is, I think, the most important. Did you ever do development, have something not work, and then find it was existing code that did not handle arguments correctly? That is wasted time and an aggravation. Sure, in the current production code a blank string won’t be used, but during development, especially TDD, you scaffold code. Sometimes you don’t have values yet, so you just use a blank string.

Just the other day I tested a deployed production method that correctly checked the argument for an empty string, “”, and did the right thing. However, the code did not check for a blank string, ” “, and threw an exception. Unit testing would have shown this.
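As a sketch of the kind of test meant here (the expected outcome is an assumption, and SomeService is a hypothetical class holding the service(String) method from above; the point is only that null, “”, and ” ” each get exercised):

@Test
public void service_should_reject_null_empty_and_blank_type() {
	for (String type : new String[] { null, "", " " }) {
		// Hypothetical SUT; the expected false result is an assumption.
		assertFalse("should reject: '" + type + "'",
				new SomeService().service(type));
	}
}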

And, yes, this will never happen in our application. 🙂

Ok, you still don’t want to test something for null or invalid values? At least put this in writing in the Javadoc: “I (insert your name and address) am a great developer and stake my professional career that this method will never be invoked with invalid values.” The address is needed so us mediocre developers can hunt you down when the server crashes.

Further Reading

  1. Testing getter/setter using JUnit
  2. FindBugs and JSR-305

Off topic, some music …
If ” – Oregon (2009), from “Prime” CD.


Testing getter/setter using JUnit

So, I was thinking of testing my getters and setters in a JUnit test. Yeah, I know it is not recommended, but I thought that I could write a simple reflective iterator that could do it without much fuss.

What I show below is that maybe getters and setters should be tested, especially in Java, which does not really have “real” properties. Further, by investigating a potential solution, I show that naive testing will not work, and the reason is that testing IS hard to do.

Opponents of unit testing getter/setters are making a category error. They are equating the getter/setter idiom with the non-existent support for language level Properties in some languages, like Java.

A method named getFoo() provides no guarantees on how it is getting the result; it could read a field, could call a web service, etc., and thus can fail like anything else.

 

Anyway, just to save time, and not have to code this, I did a quick search and found that this has been done many times before. So I looked at one of the solutions, Scott’s. Very nicely done! With one line you can test a class’s property getters and setters. Then I noticed a problem.

In his approach he invokes the setter and then the getter, finally comparing that the values are the same. For example, if the property is a boolean, the test class will set it true, then read it, and the value had better be true. However, what if the value prior to your set was already true, that is, the object was initialized with true for the field value?

For example, I took a class and set field x to true via the default constructor, then I modified the setX method to not set the field value.

public class Fubar {
  private boolean launch = true;
  
  public boolean getLaunch(){ 
      return this.launch;
  }

  public void setLaunch(boolean launch){  
      /* this.launch = launch; */ // broken!
  }
}

Code at: git clone git://gist.github.com/1408493.git gist-1408493

In the unit test the getLaunch method still returned true. The getter/setter test did not fail; the test was bogus. Not only that, the fact that the setLaunch method did not work illustrates that sometimes testing setters is warranted.

Thus, the version of the program that controls the sports strategy will ship and the boolean that aborts a play cannot be set to false! (Edited, removed joke about war stuff; you can’t be too careful with all the craziness in the world).

Updates
March 9 2014: I forked the original code with minor changes to a GitHub repository at: https://github.com/josefbetancourt/property-asserter.git
March 4, 2014: Just today a co-worker had an issue with a class. The getter was trying to return a value that was defined as transient. Thus, on deserialization the field was returning a null. The unit test could have provided a test for this. If a class implements Serializable, then it should include tests using serialization, and so forth.
March 2, 2014: I had a “heated” discussion with coworkers on this. I attempted to defend this post’s ideas. They asserted that this is a bogus example and that the issue is only applicable to booleans or primitive types. I contend that in fact it is very applicable where:

  • There are defaults of any kind.
  • There are methods that change object state. (Note: XUnit approach cannot test side effects unless they are part of the public interface)
  • The class will be maintained in the future.

That pretty much includes all non-trivial classes.

Feb 11, 2012: I’m using Scott’s solution in our test suite. I did modify the code a bit. Turns out that a JavaBean property is not so clear cut. If you have a getter and setter method pair but no actual target field, are those JavaBean methods? Looks like the JavaBean introspector will report these as bean accessors. Hmm.
June 7, 2013: There are two more issues with the above algorithm for testing getters/setters. First, since the approach is built for JavaBeans, there are issues with non-JavaBeans:
    1. If the object doesn’t have a default constructor, one can’t automatically instantiate an object.
These are not insurmountable. The PropertyAsserter class has methods that take a class argument to use for testing a getter/setter.

 

In a recent presentation I gave on unit testing, I said that testing could be difficult. This is a great example of that. In this case, one should have followed analogous patterns in other fields. In semiconductor testing, for example, you don’t test a gate by writing just a 1 bit; you have to write both 1 and 0 to make sure the gate is working, not stuck high or low.
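Applied to the broken Fubar class above, a sketch of that “write both values” idea:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class FubarTest {
	/** Writing both values catches a setter that is a no-op. */
	@Test
	public void launch_setter_should_store_both_values() {
		Fubar fubar = new Fubar();
		fubar.setLaunch(false);
		assertFalse(fubar.getLaunch()); // fails above: the default true was never overwritten
		fubar.setLaunch(true);
		assertTrue(fubar.getLaunch());
	}
}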

The bullet points

  1. Java Getter/Setters are a programmer’s agreement, easily broken.
  2. A getter/setter could be collaborating with external resources or services.
  3. A getter/setter could be part of a JavaBeans property event broadcasting requirement.
  4. If it can be automated, what’s the fuss?
  5. Never underestimate what could go wrong.
  6. Improves test coverage levels and gets you a pat on the head from management.
  7. Would you not test a five cent metal washer on an interplanetary star ship?

Bill Evans Trio – Nardis – 19 Mar 65 (7 of 11)

Article on Bill Evans: http://www.chuckisraels.com/articleevans.htm


JMockit

Yesterday at work I gave a presentation on Unit Testing. It went well. 160 slides! And, no one passed out and hit the floor.

One thing I mentioned was mocking frameworks and how JMockit is very useful. Perhaps JMockIt represents the state of the art in Java based Mocking tools.

There are plenty of good reasons for using mocks:

“JMockit allows developers to write unit/integration tests without the testability issues typically found with other mocking APIs. Tests can easily be written that will mock final classes, static methods, constructors, and so on. There are no limitations.” — JMockit

JMockit is “a collection of tools and APIs for use in developer testing, that is, tests written by developers using a testing framework such as JUnit or TestNG.”

I’ve used it for some tests. Since it uses Java instrumentation it can mock almost anything, especially those “great” untestable legacy OO classes. Best of all, it has a very good tutorial.
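As a small taste, a sketch using the MockUp API (the System example is a common one in the JMockit documentation; details vary by JMockit version) to fake a static JDK method from within a test:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

import mockit.Mock;
import mockit.MockUp;

public class ClockMockTest {
	@Test
	public void can_fake_a_static_method() {
		// Instantiating the MockUp applies the fake.
		new MockUp<System>() {
			@Mock
			long currentTimeMillis() {
				return 42L;
			}
		};
		assertEquals(42L, System.currentTimeMillis());
	}
}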

The only ‘negative’, so far, is that JMockit does not, afaik, have many developers working on the project. That could also be a plus, of course.

Another mock tool is PowerMock.

Seems to me there are too many mock frameworks and they do pretty much the same things. Time for consolidation so that an API and a body of practice can shake out?

Further reading

  1. Mock object
  2. MockingToolkitComparisonMatrix
  3. Beyond EasyMock and JMock, try JMockIt !
  4. The Difference Between Mocks and Stubs
  5. The JMockit Testing Toolkit
  6. The Concept of Mocking
  7. PowerMock
  8. Unit Testing Using Mocks – Testing Techniques 5
  9. Making a mockery of CQ5 with JMockit


Off topic, some music …

Stefano Cantini – Blowin in the wind

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

Exception verification using fault injection via AOP

A simple example is used to show how to use Aspect Oriented Programming to provide fault injection.

Intro

A few months ago I was looking at Java code and thinking there must be a better way to see if this will really work, whether there are any issues in how the program would react to exceptional conditions. I could write some unit tests or force some problems to see if the exception handling is correct and the log file output is really useful. But is there another way?

After a while of thinking I got a solution: insert faults. Now, how to do that in an easy and maintainable way? Not a new idea, of course. After doing a web search I found some interesting references.

Funny, today at work I was extending an application and one of the support objects was failing. It turned out to be an index-out-of-bounds problem. The method had adequate exception handling, but, of course, an index-out-of-bounds scenario was not tested. In this case the problem would not occur in production (hmmm), but it definitely occurred during development, and it was not handled correctly.

Exception Handling

There are many references on correct exception handling strategies and best practices. However, when creating applications, the developer must still make critical creative decisions about how to handle an exception. That decision may not be a wise one. And even if it is, it may later prove to have been very unwise, since the system requirements and implementation may have changed due to maintenance or requirement evolution.

Testing Approaches

Using various testing methodologies, a subsystem can be exhaustively tested for exception handling. Yet most testing methods require that the subsystems be isolated in some manner. This, though very valuable, can give fatal false confidence that the system will behave in a predictable manner when responding to exceptional conditions.

The prime example of this is within the Unit Testing methodologies. Since unit tests are tests of isolated components, they do not test actual exception handling in the live system. A perfect component can still contribute to system failure when it is used incorrectly or uses other components incorrectly. For example, a unit test can show that class X’s methods correctly handle and, when necessary, throw the perfect exception. That means nothing if the components that interact with X don’t correctly use those exceptions: they swallow, mask, or mistakenly catch them when there is nothing they can do.

Thus, one needs to use Integration Testing to test assemblages of components. But we are still left with the same shortcoming as with Unit Testing, so we next need various forms of system testing. And that system testing must be a White Box test. A Black Box test, wherein an interface is exercised with various inputs (such as in web test systems or by QA personnel), will not necessarily invoke the full internal API/SPI between components that could be part of a complicated call chain. Why? Because the interface, such as a browser client, will have (we hope) the validation, security, and logging layers implemented, so that by the time a system test workflow enters the deep system components within the client or middleware, there is no way to influence the target component state into a programmatic, intentional failure mode.

If there is no way to exercise a run-time trajectory, then why bother? The reason is that exceptional conditions are exceptional. Resources can be exhausted, systems become unavailable, and so forth. The rule of Fail Fast may not be enough if collaborating components are not exceptionally responsive.

Fault Injection

To see if a system is responding to exceptional conditions, one must wait for those conditions or create them. Analogously to Fuzz Testing, we can dynamically and randomly insert faults. In Fuzz Testing, the inputs to a system under test are manipulated. In Fault Injection we manipulate inputs inside the real system, within the components themselves. And as in Fuzz Testing, we can employ various strategies to do so, such as randomized faults. Of course, this system is not the deployed production system; it is the real system in a test environment, an internal full deployment.

Some approaches are: source modification, source macros, annotations, dependency injection (IoC), and Aspect Oriented Programming (AOP). Of course, there are many other techniques, as found in open and closed source projects, commercial products, and the research community.

Fault Injection using AOP

Since using the real system is required, a viable approach is to use the instrumentation already available in the Java platform: we can dynamically insert code that forces exceptions. AOP, as implemented by the AspectJ language, can already do this.

The major advantage of using AOP is that the source is not changed in any way. AspectJ is the state of the art in the Java ecosystem for using AOP.

Other approaches

Monitoring

BTrace (a Java oriented approach comparable to DTrace) is very interesting. However, since, like DTrace, it is meant to be safe, defect injection may not be doable. But, see this post on an unsafe mode for BTrace.

Source modification

One way of injecting failure is simply to edit the source code and add strategic code that will cause the instigating failure. An advanced form of this is Mutation Testing. Code changes could be: arguments with wrong values, nulls, deliberately thrown exceptions, etc. For example, one can simply create a method that throws a runtime exception:


     *** DON'T DO THIS ***

    public static void REMOVE_XXX_FROM_XXX_SOURCE(){
       if(true){
          throw new NullPointerException(
             "\n\t****** DELIBERATE RUNTIME EXCEPTION *****\n");
       }
    }
    

Then insert invocations to this method at strategic points in the code, deploy to a test environment, and see if the expected results are obtained, such as logging output that can identify the cause, or that no side effects are recorded, such as incorrect data storage.

This is a low-tech and error-prone approach. Most importantly, it is dangerous. In the mad development rush to meet deadlines and also eat lunch, this code could make it into production! Even with that horrific method name, it will wind up in production! One would have to create deployment filters that stop the process if any “fault” inducing code is included. Another reason why this is not a good approach is that it doesn’t scale very well. Forcing exceptions is just one type of verification; another is setting values outside of expected ranges or states. Thus, one would need many different kinds of source code insertions.

Of course, unit and behavior-based testing tool sets can supply these required verifications if they are included in the system as Built-In Self-Tests (BIST).

Built-In Self-Test

In the hardware realm, BIST as found in, for example, IEEE 1149.1 JTAG, has been very successful. On the software side, there is ongoing research on how to implement BIST-like capability. This would make Brad Cox’s concept of the “Software IC” even more powerful.

Macros

A somewhat more viable approach to source code fault insertion is, instead of inserting the faulting code directly, to insert “include” macros in the original source that indicate what fault insertion should be done at a location. A fault injection preprocessor can then scan the source and insert the fault framework’s invocations to accomplish the requirement. The build process can simply enable or disable the use of the preprocessor. This would, however, still require source code modification and adds an extra maintenance nightmare: when the code changes, the macros may also require change.
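A hypothetical illustration of such a marker, using the BankService example from the demonstration below; the special comment is what a fault-injection preprocessor would expand into faulting code (this is not an existing tool):

public void withdraw(final Long amount) {
	/*%FAULT null-arg withdrawCut %*/ // expanded by the preprocessor when enabled
	balance = balance.subtract(new BigDecimal(amount));
}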

Annotations

Instead of macros, we could also use Annotations. Annotations could explicitly state the runtime exceptional-behavior “service level agreements” at the component level. These SLAs could then be systematically manipulated to test whether they really hold at the system level.
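A hypothetical sketch of such an annotation (illustrative only, not an existing library):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/** Marks a method as a candidate fault-injection point. */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface FaultPoint {
	/** Name used in the fault configuration, e.g. "withdrawCut". */
	String value();
}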

Dependency Injection

One can also inject exceptional behavior via dependency injection: using programmatic, declarative, or annotation-based configuration, faulted components could be inserted (or created via Aspect Oriented Programming) into the target system.

AOP Demonstration

In listing one below, an ATM class uses a service to do some banking.

/**
 * Driver class for example. 
 * For a real system see:  https://bitbucket.org/aragost/javahg/
 * @author jbetancourt
 */
public class ATM {
	private BankService service = new BankService(1000);

	/** 
	 * Perform a banking action.
	 */
	public void transaction(){	
		{
			service.deposit(100L);
			service.withdraw(200L);
		}
		
		service.statement();		
	}
	
	/** Application entry point	 */
	public static void main(String[] args) {
		new ATM().transaction();
	}
}

Listing two is the BankService being used. Of course, a real service would be more complex.

import java.math.BigDecimal;

/**
 * Example service.
 */
public class BankService {
	// Long (not long) arguments allow the example's
	// null failure injection.

	private BigDecimal balance;
	
	public BankService(long savings){
		this.balance = new BigDecimal(savings);
	}
	
	/** */
	public void deposit(Long amount){
		balance = balance.add(new BigDecimal(amount));
		System.out.println("Deposit:  " + amount);
	}
	
	/** */
	public void withdraw(Long amount){
		balance = balance.subtract(new BigDecimal(amount));
		System.out.println("Withdraw: " + amount);
	}
	
	/** */
	public void statement() {
		System.out.println("Balance:  " + balance);		
	}	
	
} // end BankService

An example non-faulted execution result is:

Report: pointcut usage
  withdrawCut:   true ->  false
  depositCut:  false ->  false
       Deposit: 100
      Withdraw: 200
       Balance:  900

When we run the demo program we get:

Exception at the withdraw method.

Report: pointcut usage
withdrawCut:   true ->   true
depositCut:  false ->  false
Deposit: 100
Exception in thread "main" java.lang.NullPointerException
   at BankService.withdraw_aroundBody2(BankService.java:15)
   at BankService.withdraw_aroundBody3$advice(BankService.java:81)
   at BankService.withdraw(BankService.java:1)
   at Main.main(Main.java:16)

Exception at the deposit method.

Report: 
pointcut usage
withdrawCut:   true ->  false
depositCut:  false ->   true
Exception in thread "main" java.lang.NullPointerException
   at BankService.deposit_aroundBody0(BankService.java:9)
   at BankService.deposit_aroundBody1$advice(BankService.java:81)
   at BankService.deposit(BankService.java:1)
   at Main.main(Main.java:15)

All pointcuts enabled.

Report: 
Report: 
pointcut usage
withdrawCut:   true ->  false
depositCut:  false ->   true
Exception in thread "main" java.lang.NullPointerException
   at BankService.deposit_aroundBody0(BankService.java:9)
   at BankService.deposit_aroundBody1$advice(BankService.java:81)
   at BankService.deposit(BankService.java:1)
   at Main.main(Main.java:15)

Implementation

Now to test the exceptional behavior we want to fault the deposit and withdraw methods.
First we have a way of specifying what we want to fault by using a JSON configuration file:

{
	"__comments":[
		"File: properties.json"
	],
	"settings":{
		"random":true
	},
	
	"cuts":{
		"depositCut":false,
		"withdrawCut":true
	}	
}

The “cuts” settings indicate which AspectJ “pointcut” to turn on. The “random” setting indicates whether we want the exceptions to be randomly inserted into the code base at the enabled pointcuts.

Next is the abstract failure injection aspect. Subclasses (concrete aspects) will supply the actual pointcuts that specify where to ‘do’ the injection.

/**
 * FailureInjectionAspect.aj 
 * @author jbetancourt
 * 
 */

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

/**
 * Abstract Aspect that determines if failure should occur.
 * @author jbetancourt
 * 
 */
public abstract aspect FailureInjectionAspect {
	private Map<String, Boolean> pointcutFlags = new HashMap<String, Boolean>();
	private volatile boolean initialized =false;
	
	/** 
	 * Initialize Failure injection aspect.
	 * @throws Exception 
	 * 
	 */
	protected void init() throws Exception {
		Config config = new Config();
		config.configure();
		pointcutFlags = config.getPointcutFlags();		
		initialized = true;
	}	

	/**
	 * Get boolean value for pointcut name.
	 * 
	 * @param name pointcut name.
	 * @return true if pointcut enabled.
	 * @throws IOException 
	 */
	protected boolean isPointcutEnabled(String name) {
		if(!initialized){
			try {
				init();
			} catch (Exception e) {
				throw new IllegalStateException(
						"Could not initialize object",e);
			}
		}
		
		boolean f = false;	
		Object val = pointcutFlags.get(name);
		if (null != val) {
			f = ((Boolean) val).booleanValue();
		}	

		return f;
	}
	
} // end FailureInjectionAspect.aj

Here is an aspect that uses a nulling type of injection.

/**
 * 
 * Example of an aspect for nulling an argument to a method.
 * @author jbetancourt
 *
 */
public aspect AmountFailureAspect extends FailureInjectionAspect {

	/** fault the deposit */
	private pointcut depositCut(Long amount) :
		execution(public void BankService.deposit(Long)) 
		&& if(AmountFailureAspect.aspectOf().isPointcutEnabled("depositCut"))
		&& args(amount) 
		&& !within(FailureInjectionAspect)
	;

	/** fault the withdrawal */
	private pointcut withdrawCut(Long amount) :
		execution(public void BankService.withdraw(Long)) 
		&& if(AmountFailureAspect.aspectOf().isPointcutEnabled("withdrawCut"))
		&& args(amount) 
		&& !within(FailureInjectionAspect)
	;

	/** Null the amount arg */
	void around(Long amount) : depositCut(amount) || withdrawCut(amount) {
		amount = null;
		proceed(amount);
	}
}

Here is how the configuration JSON file is read. I use the JSON-Simple library.

import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;

import org.json.simple.JSONObject;
import org.json.simple.JSONValue;

/**
 * 
 * @author jbetancourt
 *
 */
public class Config {
	private static final Logger logger = 
          Logger.getLogger(Config.class.getName());
	private String configFile = "/src/properties.json";
	private Map<String, Boolean> pointcutFlags = new HashMap<String, Boolean>();
	private volatile boolean initialized = false;
	private String basePath;

	/**
	 * Load the JSON configuration and set the pointcut flags.
	 * @throws Exception
	 */
	@SuppressWarnings("unchecked")
	public void configure() throws Exception {
		basePath = new File(".").getAbsolutePath();

		String path = basePath + configFile;

		Object obj = (JSONObject) JSONValue
				.parse(new FileReader(new File(path)));
		
		Map<String, Map<String, ?>> map = (Map<String, Map<String, ?>>) obj;
		pointcutFlags = (Map<String, Boolean>) map.get("cuts");
		Map<String, Object> settings = (Map<String, Object>) map
				.get("settings");
		
		Object r = settings.get("random");
		boolean randomize = (r != null && ((Boolean) r));
		
		if(randomize){
			println(String.format(
			    "Pointcut usage\n%16s %6s    %6s", "Name        ",
			    "Prior", "Current"));
			
			for(Map.Entry<String, Boolean>entry : pointcutFlags.entrySet()){
				Boolean prior = entry.getValue();
				Boolean f = Math.random() > 0.65;
				entry.setValue(f);
				println(String.format(
					"%15s: %6s -> %6s", entry.getKey(),prior,f));
			}
			println("n");		
		}		
		
		saveConfig();		
		initialized = true;		
		logger.log(Level.INFO,"Initialized: " + initialized);
	}
	
	/**
	 * Write the config settings to external JSON file.
	 * 
	 * Format is: { cutname : boolean , ... }
	 *   Example: {"withdrawCut":true,"depositCut":true}
	 * Will be used to set up behavior analysis service to 
	 * monitor response.
	 * 
	 * @throws IOException
	 */
	protected void saveConfig() throws IOException{
		String json = JSONValue.toJSONString(pointcutFlags);
		File file = new File(basePath + "/runtimeProperties.json");
		FileWriter writer = new FileWriter(file);		
		writer.append(json);		
		writer.close();		
	}	

	/** getter */
	public Map<String, Boolean> getPointcutFlags() {
		return pointcutFlags;
	}

	/** setter */
	public void setPointcutFlags(Map<String, Boolean> cutFlags) {
		this.pointcutFlags = cutFlags;
	}

	/**
	 * Just a shortcut to System.out.println(String).
	 * @param s
	 */
	private void println(String s){
		System.out.println(s);
	}
}

Running at the command line

Using AspectJ is much easier in a supporting IDE like Eclipse. Below is the “mess” of compiling and running in a command shell (unless you love the CLI). It could be made cleaner by creating aliases, scripts, etc.

cd src
src>javac -d ..\bin -cp ..\jars\json-simple-1.1.jar  *.java
src>java -cp ..\bin Main
Jun 9, 2011 3:18:24 PM Main main
INFO: Starting Main.....
   Deposit:  100
   Withdraw: 200
   Balance:  900

Now we change the flag from false to true in the runtimeInjection.json file:

src>type runtimeInjection.json | sed "s/false/true/" > temp
src>copy /Y temp runtimeInjection.json
src>del temp

Compile the aspects using the AspectJ “ajc” compiler. Here we use the 1.6 compliance level, direct output to the ..\bin folder, and give the classpath.

c:\Users\jbetancourt\Documents\projects\dev\AspectsForNullTesting\src>c:\java\aspectj1.6\bin\ajc -1.6 -d ..\bin -cp "c:\java\aspectj1.6\lib\aspectjrt.jar;c:\java\aspectj1.6\lib\aspectjtools.jar;c:\java\aspectj1.6\lib\aspectjweaver.jar;..\jars\json-simple-1.1.jar" -sourceroots .

Now we run the same Main program. Since the pointcut flag is true, the advice is invoked and the Long amount argument to the advised service method is set to null.

c:\Users\jbetancourt\Documents\projects\dev\AspectsForNullTesting>java -cp "c:\java\aspectj1.6\lib\aspectjrt.jar;c:\java\aspectj1.6\lib\aspectjtools.jar;c:\java\aspectj1.6\lib\aspectjweaver.jar;jars\json-simple-1.1.jar;bin;." Main
Jun 9, 2011 3:25:04 PM Main main
INFO: Starting Main.....
Pointcut usage
Name          Prior    Current
withdrawCut:   true ->  false
depositCut:  false ->  false
c:\Users\jbetancourt\Documents\projects\dev\AspectsForNullTesting>java -cp "c:\java\aspectj1.6\lib\aspectjrt.jar;c:\java\aspectj1.6\lib\aspectjtools.jar;c:\java\aspectj1.6\lib\aspectjweaver.jar;jars\json-simple-1.1.jar;bin;." Main
Jun 9, 2011 3:25:10 PM Main main
INFO: Starting Main.....
Pointcut usage
Name          Prior    Current
withdrawCut:   true ->   true
depositCut:  false ->   true
Jun 9, 2011 3:25:10 PM Config exec
INFO: Initialized: true
Exception in thread "main" java.lang.NullPointerException
at BankService.deposit_aroundBody0(BankService.java:19)
at BankService.deposit_aroundBody1$advice(BankService.java:26)
at BankService.deposit(BankService.java:1)
at Main.main(Main.java:16)

c:\Users\jbetancourt\Documents\projects\dev\AspectsForNullTesting>java -cp "c:\java\aspectj1.6\lib\aspectjrt.jar;c:\java\aspectj1.6\lib\aspectjtools.jar;c:\java\aspectj1.6\lib\aspectjweaver.jar;jars\json-simple-1.1.jar;bin;." Main
Jun 9, 2011 3:25:13 PM Main main
INFO: Starting Main.....
Pointcut usage
Name          Prior    Current
withdrawCut:   true ->   true
depositCut:  false ->  false
Jun 9, 2011 3:25:13 PM Config exec
INFO: Initialized: true
Deposit:  100
Exception in thread "main" java.lang.NullPointerException
at BankService.withdraw_aroundBody2(BankService.java:25)
at BankService.withdraw_aroundBody3$advice(BankService.java:26)
at BankService.withdraw(BankService.java:1)
at Main.main(Main.java:17)

Updates

  • Feb 1, 2012: Just learned about Byteman.
  • April 25, 2013: A new article on using Byteman: http://aredko.blogspot.com/2013/04/fault-injection-with-byteman-and-junit.html


Nicolas Lens – Sumus Vicinae (Flamma Flamma)
