How to unit test Java servlets

The question of how to unit test Servlets comes up a lot. How can it be done? Should it be done? What are the options?

A unit test, in the realm of xUnit semantics, is an isolated test of the smallest testable subset of a program. Usually this translates to a test of a single method in an Object. When this object itself is part of a framework or container, such tests border on becoming Integration Tests. How could these types of objects still be ‘unit tested’?

Written by: Josef Betancourt, Date: 2015-09-17, Subject: Servlet testing

Options

Here are a few options.

POJO

When you write a servlet, ultimately the servlet object is instantiated by the server container. These objects do a lot behind the scenes that may prevent invoking methods on them when not attached to an actual container.

A servlet, or any other server-based object such as an EJB, provides access to problem-domain services or functionality. The easiest way to test these objects is to refactor that service into plain old Java objects (POJOs).

Jakob Jenkov writes: “… push the main business logic in the servlet into a separate class which has no dependencies on the Servlet API’s, if possible”.

If you're working with a framework, that is likely the design approach anyway.
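
For example, a hypothetical lookup that a servlet's doGet needs can live in a plain class and be tested with no container at all (all names here are illustrative, not from the article's code):

```java
// Hypothetical sketch: the lookup logic the servlet needs, moved into a
// plain class with no Servlet API dependency.
class ProductFinder {

    /** Returns the product name for an id, or null when the id is missing. */
    String findName(String id) {
        if (id == null || id.isEmpty()) {
            return null;
        }
        return "product-" + id; // stand-in for a real lookup
    }
}
```

The servlet's doGet then only translates request parameters into a findName call and writes the result, leaving very little untested glue behind.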

Servlet stub library

A library that allows creation of “server” objects can make creating stubs for testing very easy. Again, a framework should provide such a feature.

Mocking

Mocking, using modern libraries like Mockito, PowerMock, and JMockit, provides a very powerful approach.

In listing 1 below, a test is created for the doGet method of a target SutServlet class. This method sets the response status to 404 if the "id" request parameter is null.

Using JMockit, proxies of HttpServletRequest and HttpServletResponse are created. The request's getParameter and the response's sendError methods are mocked. The actual unit test assertion is done in the mocked sendError method.

Listing 1, JMockit use

@RunWith(JMockit.class)
public class SutServletTest_JMockit {
    
    @Test
    public void should_Set_ResourceNotFound_If_Id_Is_Null() throws Exception {
        new SutServlet().doGet(
            new MockUp<HttpServletRequest>() {
                @Mock
                public String getParameter(String name) {
                    return name.compareToIgnoreCase("id") == 0 ? null : "don't care";
                }
            }.getMockInstance(),
            new MockUp<HttpServletResponse>() {
                @Mock
                public void sendError(int num) {
                    Assert.assertThat(num, IsEqual.equalTo(HttpServletResponse.SC_NOT_FOUND));
                }
            }.getMockInstance());
    }
     
}

JDK Dynamic Proxies

The mock approach can also be duplicated using dynamic proxies; the JDK's dynamic proxy support is usable here. JDK proxies have one limitation: they can only proxy interfaces, not concrete classes. (Still true in Java 9?) HttpServletRequest and HttpServletResponse are interfaces, so we can use the proxy support in the JDK.

Listing 2, using JDK proxies

public class SutServletTest_using_jdk_proxy {
    
    private static final String DON_T_CARE = "don't care";
    private static final String SEND_ERROR = "sendError";
    private static final String GET_PARAMETER = "getParameter";

    /**  @throws Exception  */
    @Test
    public void should_Set_ResourceNotFound_If_Id_Is_Null() throws Exception {
        
        // request object that returns null for getParameter("id") method.
        HttpServletRequest request  = (HttpServletRequest)Proxy.newProxyInstance(this.getClass().getClassLoader(),
            new Class[]{HttpServletRequest.class},
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        if(method.getName().compareToIgnoreCase(GET_PARAMETER) ==0){
                            return ((String)args[0]).compareToIgnoreCase("id") == 0 ? null : "oops";
                        }
                        return DON_T_CARE;
                    }
                }
        );
        
        // Response object that asserts that sendError arg is resource not found: 404.
        HttpServletResponse response  = (HttpServletResponse)Proxy.newProxyInstance(this.getClass().getClassLoader(),
            new Class[]{HttpServletResponse.class}, 
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        if(method.getName().compareTo(SEND_ERROR) == 0){
                            Assert.assertThat((Integer) args[0], IsEqual.equalTo(HttpServletResponse.SC_NOT_FOUND));
                        }
                        return DON_T_CARE;
                    }
                }
        );
         
        new SutServlet().doGet(request,response);
    }
}
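
The proxy mechanics in listing 2 can be exercised on their own. The sketch below uses a tiny hand-declared interface as a stand-in for HttpServletRequest, so it needs nothing but the JDK (the interface and all names are illustrative):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// A tiny stand-in for HttpServletRequest, just to show the proxy mechanics.
interface Request {
    String getParameter(String name);
}

class RequestProxyDemo {

    /** Builds a request whose getParameter("id") returns null, as in listing 2. */
    static Request nullIdRequest() {
        return (Request) Proxy.newProxyInstance(
            RequestProxyDemo.class.getClassLoader(),
            new Class<?>[]{Request.class},
            new InvocationHandler() {
                public Object invoke(Object proxy, Method method, Object[] args) {
                    if ("getParameter".equals(method.getName())) {
                        return "id".equalsIgnoreCase((String) args[0]) ? null : "don't care";
                    }
                    return null; // everything else is unused by the test
                }
            });
    }
}
```

The same pattern scales to the real servlet interfaces once the Servlet API jar is on the test classpath.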

Javassist Proxies

Just for completeness, in listing 3, we use the Javassist library.

Listing 3, using Javassist proxy

public class SutServletTest_using_assist {
    
    private static final String DON_T_CARE = "don't care";
    private static final String SEND_ERROR = "sendError";
    private static final String GET_PARAMETER = "getParameter";

    /**  @throws Exception  */
    @Test
    public void should_Set_ResourceNotFound_If_Id_Is_Null() throws Exception {
        
        // request object that returns null for getParameter("id") method.
        HttpServletRequest request  = (HttpServletRequest)createObject(new Class[]{HttpServletRequest.class},
            new MethodHandler() {
                public Object invoke(Object self, Method thisMethod, Method proceed, Object[] args) throws Throwable {
                    if(thisMethod.getName().compareToIgnoreCase(GET_PARAMETER) == 0){
                        return ((String)args[0]).compareToIgnoreCase("id") == 0 ? null : "oops";
                    }
                    return DON_T_CARE;
                }
            }
        ); 
        
        // Response object that asserts that sendError arg is resource not found: 404.
        HttpServletResponse response  = (HttpServletResponse)createObject(new Class[]{HttpServletResponse.class},
            new MethodHandler() {
                public Object invoke(Object self, Method thisMethod, Method proceed, Object[] args) throws Throwable {
                    if(thisMethod.getName().compareTo(SEND_ERROR) == 0){
                        Assert.assertThat((Integer) args[0], IsEqual.equalTo(HttpServletResponse.SC_NOT_FOUND));
                    }
                    return DON_T_CARE;
                }
            }
        );
         
        new SutServlet().doGet(request,response);
    }
    
    /**
     * Create an object that implements the given interfaces.
     * <p>
     * Just to remove duplicate code in the should_Set_ResourceNotFound_If_Id_Is_Null test.
     * @param interfaces array of interfaces to implement
     * @param mh MethodHandler
     * @return Object
     * @throws Exception
     */
    private Object createObject(Class<?>[] interfaces, MethodHandler mh) throws Exception {
        ProxyFactory factory = new ProxyFactory();
        factory.setInterfaces(interfaces);
        return factory.create(new Class[0], new Object[0], mh);
    }
}

Embedded server

It's also possible to start an embedded server, deploy the servlets, and then run the tests. Various Java app servers (like Tomcat and Jetty) support this, and the approach is well documented. The complexity comes when only partial integration is required. For example, we may want a real app server running the tests, but do we really need a database server too? Thus, we also have to deploy stubs or mocks to this embedded server. There are many resources on the web for this approach, for example, "Integration Testing a Spring Boot Application".

Another approach is the concept of Hermetic Servers.

AOP

AOP can be used on an embedded server, and this would allow "easy" mocking of integration endpoints. Such an approach was shown in "Unit test Struts applications with mock objects and AOP".


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

Continuous Testing while developing, CDT?

I previously wrote about Continuous Testing here. Strange, at the time a web search turned up very little about the concept. What was found was the use of the term in the sphere of Continuous Integration systems and processes. Today there are more relevant hits.

Terminology
On Wikipedia the term CT, "Continuous testing", redirects to Test Automation. I don't know the arcana of Wikipedia, but "Continuous testing" is hidden somewhere, as you can see if you visit the redirection link. The edit history of the redirection page shows that some editors were getting into the details of CT.

One editor mentioned the problem of dependency detection: if you edit source Foo and source Fee depends on it, should the tests for Fee be rerun too?

Wikipedia also has an article on Continuous test-driven development (CTDD). That seems relevant. However, it implies that the original Test-Driven Development (TDD) practices are being used. From what I read, TDD is not that popular; plain unit testing is more popular. So, if a tool automatically runs unit or functional tests on local code changes, that has nothing to do with how those tests were written, TDD or not. The tests could have been written years later for some legacy system that is now being maintained with appropriate tests.

Continuous Testing
We don't edit code and then invoke a compile step anymore; our IDEs do that automatically. Then why do we have to invoke our unit tests manually? This "Continuous Testing" (CT) approach enables a smoother Test-Driven Development (TDD), maintenance, or refactoring workflow.

This type of CT, Continuous Developer Tests (CDT), is in contrast to tests run on a Continuous Integration server. Dev tests (unit, functional, integration, …) are run on the developer workstation in response to source changes. Nothing new, of course: IDEs have always rebuilt on such events, but the tests have not been run at this fine-grained level.

Is there any evidence of this? Some papers on CT are found here.

Great videos on CT as implemented in Mighty Moose, a CT product for Microsoft Visual Studio, are found at continuoustests.

Cons?
Mentioning this to any developer will get you immediate "buts": but my tests take too long; it would be distracting; I change code constantly; … I sometimes think developers are driven by a little motor in them: but … but … but … buuuut.

Implementations?
Why isn't automatic running of tests supported in IDEs like Eclipse? Build systems, like Maven, have always supported test goals. Now Gradle supports continuous builds.

Is there a direct way to invoke the JUnit plugin by adding a new custom "builder" to Eclipse? A builder in Eclipse is triggered by resource changes. So, on a source code change, this builder would have to run an associated JUnit run configuration, which in turn could run the GUI test runner or the build system that invokes the tests.



In dev, a missing test always passes

What if you lose an automated test? What if it was testing a critical functional area? This article discusses this and, for unit tests, implements a Java @RequiresTest annotation.

How to lose a test

Can't lose a test? Sure you can. Test reports just give you counts and changes, and who looks at those? No one. Over a long time span, test maintenance can develop warts, and tests are silently deleted because they fail and are too hard to fix, or there is no time available. The original developers of a component may have moved on to other things, and that tender loving care is nowhere to be found.

Coverage reports don't help much with this. Unless there are drastic changes in test results or coverage levels, no one looks at them; they become just another management spreadsheet number or developer hipster feel-good schtick.

Who loses tests

Sure, for tools, utilities and highly focused systems, especially FOSS, this is not likely. The rapid change and larger development teams ensure full use of test tools. In these projects there is more likely to be a level of “test-infected” developers.

For other kinds of systems, like IT projects, testing will be forgotten when the scat hits the fan, or when the test evangelist moves on or gives up. Testing will just rely on manually repeated testing of a local facsimile of the target system and waterfail test department testing.

Quoting Fowler here: "Imperfect tests, run frequently, are much better than perfect tests that are never written at all." I would add: or tests that are never run.

Does it matter

For real-world large applications that quickly become legacy, any missing tests can prove disastrous. A missing test would make any potential defect show up much later. Later is too late and costs more to fix.

Ironically, the best example of the loss of tests is legacy systems that have no automated tests, a de-testable system. In such a system, defects are found in late-stage waterfall phases, or worse, in production.

What should be tested

Ideally everything would have a valid unit/functional/integration test. In reality this is not cost effective and some would argue that some things should not be tested. For example, it is claimed that getter/setters do not need tests. (Clearly in a language with true properties, this is true. Java, not.)

So if some things should not be tested, what should be? And if those things that should be tested are not tested?

Options

If missing tests are a concern, what can be done? As in many system decisions, it depends: What kind of tests, when are the tests run, who manages the tests, what kind of test monitoring, and so forth.
The following are just a few options that could be considered.

Monitoring of missing tests

The Continuous Integration system, or the build tools it invokes, can present and track missing tests. Missing tests are considered a failure and must be acted on: confirming them or removing them from consideration.
There is no need to create this missing-test list by hand; the build system adds to it as each build is performed.

Required tests database

For critical systems or subsystems, a required-test database could be used. This would be more of a management and tooling issue, since an ongoing project may change many tests during its duration.
Required-test specification is not a new concept. Hardware systems have always had such a concept and even go further by having Built-In Self Tests.
Note that one argument against recent standards, and by extension a 'required test database', is that they are not congruent with modern agile processes.

Requires test annotation

For "dev tests" using xUnit frameworks, it is much easier to indicate what should be tested. This can easily be made part of the build configuration and run as part of the test phase. To make this more resilient to bit rot, the source code itself should store the information. In the Java ecosystem this can be done for unit tests using annotations.

Example

In listing 1, a developer has decided that two methods should have tests. So the methods are annotated with @RequiresTest.
Listing 1, an application class with annotations

package com.anywhere.app;

import com.octodecillion.test.RequiresTest;

/**
  */
public class Foo1 {	
	@RequiresTest
	public void fum1(){
        // stuff		
	}

	@RequiresTest
	public void fum12(){		
        // stuff
	}
}

Below in listing 2, a Java implementation of a RequiresTest annotation is shown. It uses the same approach I used in "Search Java classpath for JUnit tests using Spring", except now the filter checks for a different annotation. The two could be combined into one implementation that searches for either the test or the requires annotations.

Funny, I did not annotate the RequiresTestAnnotationScanner with @RequiresTest.

Listing 2. A requires test annotation
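
A minimal sketch of what such an annotation could look like (the details here are assumed, not the article's exact source; the key point is runtime retention so a classpath scanner can see it):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker for elements that must have a corresponding test.
// RUNTIME retention so a scanner can find it via reflection or class metadata.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
@interface RequiresTest {
    /** Optional note, e.g. why this element must have a test. */
    String value() default "";
}

// Example usage, as in listing 1 above.
class Foo1 {
    @RequiresTest("critical path")
    void fum1() { /* stuff */ }
}
```

A scanner can then treat any @RequiresTest element without a matching test as a build failure.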



Search Java classpath for JUnit tests using Spring

A simple classpath scanner implementation that searches for classes having any method annotated with @Test or @RunWith. Spring Framework’s scanner utility is used to do this.

Many tools and dev environments support the search and invocation of JUnit tests. Some just search for Java files that end in ‘Test’, others search for files that actually contain tests. I think the latter is more accurate and useful.

Use case
One obvious one is programmatically creating test suites. The JUnit 4 approach of an annotation with a list of test classes is just plain wrong. With the code below, a JUnit 3 style of test suite can be created.

Implementation
In the code below, Spring’s ClassPathScanningCandidateComponentProvider is used to scan the classpath. A custom TypeFilter is used to test each method of found classes for the annotations, and if found, the class is added to a list. This is available as a Gist.

I got the idea for this from Classpath Scanning: Hidden Spring Gems.

As I mentioned, this is a simple approach. If you look at others', such as Eclipse's, the scanners are more robust.

How would this be done without using Spring’s scanner support?
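
One answer: the annotation check itself is plain reflection; what Spring's scanner really contributes is locating candidate classes on the classpath. The filtering half can be sketched with the JDK alone (the stand-in annotation and class names are illustrative, used so the sketch is self-contained):

```java
import java.lang.annotation.Annotation;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Given candidate classes (finding them on the classpath is the part Spring's
// scanner does for us), keep those that have at least one annotated method.
class TestClassFilter {

    static List<Class<?>> withAnnotatedMethod(
            List<Class<?>> candidates, Class<? extends Annotation> marker) {
        List<Class<?>> result = new ArrayList<>();
        for (Class<?> c : candidates) {
            for (Method m : c.getDeclaredMethods()) {
                if (m.isAnnotationPresent(marker)) {
                    result.add(c);
                    break; // one hit is enough; move to the next class
                }
            }
        }
        return result;
    }
}

// Stand-in for JUnit's @Test, just to keep the sketch self-contained.
@java.lang.annotation.Retention(java.lang.annotation.RetentionPolicy.RUNTIME)
@interface FakeTest {}

class HasTest { @FakeTest void t() {} }
class NoTest { void t() {} }
```

Note this loads every candidate class; Spring's scanner avoids that by reading class-file metadata instead, which is one reason it is the more robust choice.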



Unit testing Java exception handling using JMockIt

How do you test that a method caught an exception? What if that catch had no side effect, just logged output, or simply swallowed the exception?

Context
We have a method in a class that was written with a try-catch, and we need to unit test it. There are many ways this task can arise, such as the need to test legacy code or to write a test before refactoring such code.

We won’t go into the anti-pattern aspects or what constitutes proper handling of an exception here.

Error hiding is an anti-pattern in computer programming. Due to the pervasive use of checked exceptions in Java, we must always address what to do with exceptions. Error hiding is when a catch clause does not properly handle an exception.

In a catch block there are three common ways of handling an exception:

try{
   ... stuff ...
}catch(X){
   // 1. do something here
   // 2. maybe throw X or something else
   // 3. skip the above and do nothing
}

How are these tested?

Thrown exception
When the method reacts by throwing an exception, we can test using the standard JUnit @Test(expected=SomeException.class) or, for fine-grained verification, a try-catch block in the test itself where we assert the exception details.
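
The fine-grained variant needs no framework support: call the method, catch, and assert on the details. A sketch with illustrative names:

```java
// A method under test that reacts to bad input by throwing.
class Parser {
    int parsePositive(String s) {
        int n = Integer.parseInt(s);
        if (n <= 0) {
            throw new IllegalArgumentException("not positive: " + s);
        }
        return n;
    }
}

class ThrownExceptionCheck {
    // Returns the caught exception so the caller can assert on its details.
    static IllegalArgumentException expectFailure(String input) {
        try {
            new Parser().parsePositive(input);
        } catch (IllegalArgumentException e) {
            return e; // expected path
        }
        throw new AssertionError("expected IllegalArgumentException");
    }
}
```

Unlike the expected= attribute, this style lets the test assert on the exception's message and any attached state.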

Swallowed exception
If a method does nothing in the catch block, which is also called "swallowing" the exception, should it even be a test issue? Yes. The method is just tested normally; one of the tests must force the exception, of course. We do this since in the future the method may be changed to handle the exception differently. One assertion that can be made is that the forced exception is not thrown from the method.
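
A sketch of this, with a hand-rolled collaborator forced to fail (names are illustrative): the assertions are that the forced exception does not escape and that the fallback result is returned.

```java
// A method that swallows the failure of a collaborator and falls back.
class NameLookup {
    interface Source { String fetch(int id) throws Exception; }

    private final Source source;
    NameLookup(Source source) { this.source = source; }

    String nameFor(int id) {
        try {
            return source.fetch(id);
        } catch (Exception e) {
            // swallowed: no rethrow, no side effect
            return "";
        }
    }
}
```

If the catch block is later changed to rethrow or to log, the forcing test is already in place to catch the behavior change.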

Exception handler pointcut
Testing that an exception in the method under test was actually caught is possible, but only with Aspect-Oriented Programming (AOP). One example is AspectJ, which supports the handler pointcut:

handler(TypePattern) — picks out each exception handler join point whose signature matches TypePattern.

Behavior in catch
It gets more interesting when the catch block has 'behavior'. This behavior has side effects. If the side effects are only local to the method, such as setting a flag to false, then normal testing is adequate. If the behavior has side effects on the class or on collaborating objects, then more complex testing is required.

It can get murky with this kind of testing. What is important is that one does not test the implementation (though sometimes that is crucial), only the interactions and requirements of the target "unit" under test. What constitutes a "unit" is very important.

“Typically, a unit of behavior is embodied in a single class, but it’s also fine to consider a whole set of strongly-related classes as a single unit for the purposes of unit testing (as is usually the case when we have a central public class with one or more helper classes, possibly package-private); in general, individual methods should not be regarded as separate units on their own.” — Rogerio in JMockit Tutorial


Example
The method being tested invokes a method on a collaborating object, and that object throws an exception. In the catch block, the exception is logged using the logging utility collaborator. Though not part of an explicit API, that logging may be critical to the use of a system; for example, an enterprise log monitoring system may expect this logging for support or security concerns. A simple class Shop is shown in listing 1.

Listing 1, the class to test

public class Shop {
    private ShoppingSvc svc;
    
    /**
     * Get product name.
     * @param id the unique product id
     * @return the product name
     */
    public String getProduct(int id){
        String name = "";
        try {
            name = svc.getProductName(id);
        } catch (Exception e) {
            Logger.getAnonymousLogger()
                .log(Level.SEVERE,
                    "{result:\"failure\",id:\"" + id + "\"}");
        }
        
        return name;
    }
    
}

JMockit Use
JMockit supports two types of testing: behavior-based and state-based (or "faking").
Using the state-based approach, we create a mock for the getProductName(int) method of the collaborating (or dependent) class, ShoppingSvc. With JMockit this is easily done as an inline MockUp object with the target method mocked to throw an exception.

Listing 2, mocking

new MockUp<ShoppingSvc>() {
    @Mock
    public String getProductName(int id) throws IOException{
		throw new IOException("Forced exception for testing");
    }
};

JMockit's behavior-based support is then used to test the catch clause handling. As in other mocking frameworks, record-replay-verify phases are used. Since the side effect of the exception handler here is the use of the logging dependency, and we are not testing the logger, we 'behaviorally' mock the Logger class.

We can do this in the test method signature: @Mocked final Logger mockLogger. This mocks every method in the Logger class. Then we set an expectation on the log method used in the exception handler, and finally verify that the method was actually invoked.

The full test class is shown in listing 3 below, and the sample code is in a repo on GitHub: https://github.com/josefbetancourt/examples-jmockit-exceptions.

An alternative to using both state and behavior mocking is to just specify the exception throwing with the expectations. The article “Mocking exception using JMockit” shows how to do this. Of course, the JMockit Tutorial has all the details.

Listing 3, the full test class

@RunWith(JMockit.class)
public class ShopTest{
    /**
     * 
     * @param mockLogger Logger object that will be behaviorally mocked.
     */
    @Test
    public void shouldLogAtLevelSevere(@Mocked final Logger mockLogger)
    {
        /**
         * state-based mock of collaborator ShoppingSvc
         */
        new MockUp<ShoppingSvc>() {
            @Mock
            public String getProductName(int id) throws IOException{
                throw new IOException("Forced exception for testing");
            }
            
        };
        
        // the SUT  
        final Shop shop = new Shop();

        // what we expect to be invoked
        new Expectations() {{
            mockLogger.log(Level.SEVERE,anyString); 
        }};
        
        shop.getProduct(123); // actual invocation
        
        // verify that we did invoke the expected method of collaborator
        new Verifications(){{
            mockLogger.log(Level.SEVERE, anyString);  // we logged at level SEVERE
        }};
    }
}

Alternatives?
Just write better code so that you don't need unit tests? This is mentioned in "Functional Tests over Unit Tests".

Test using a scripting language like Groovy? See "Short on Time? Switch to Groovy for Unit Testing".

Software
- JUnit: 4.12
- JMockit: 1.18
- JDK: 1.8
- Eclipse: Mars
- Maven: 3



Continuous Integration (CI) misconception

In some online resources the term Continuous Integration (CI) is always used in the broadest sense: on some schedule or event, the outputs of every ongoing project or separate team are obtained, put together somehow, and a test system is updated so that various tests can be invoked. No wonder some test and management professionals are wary of the concept.

The problem here is the "other" usage. More correctly, CI can apply even to one team on one project. One distinguishing feature of CI is that there are multiple developers*. Thus, as these developers complete various tasks and commit or push to a shared repository, a build and deploy process is run to create testable systems.

The term "integration" in CI applies along a continuum, from one project and one team to combinations of these. Thus, some processes are CI to a certain degree, or worse, CI anti-patterns to a certain degree.

In modern CI best practice, CI is done via various build and deployment servers that automate some or all of the pipeline. In the past, at some companies, the designated build person was doing manual Continuous Integration.

Sure, in CI there will be episodes of actual integration with other current projects, teams, or externally generated artifacts. If this is automated, then we have full CI.

* Even a single developer who uses various branching strategies on one code base may use CI practices.



Ant hooks using Groovy via XML transform

This time the Ant hook scripts using Groovy are implemented by transforming the target Ant build script. (Old post in draft status.)

In the post "Ant Hooks using Groovy Script via Scriptdef", we used the Ant BuildListener interface to add a hooks feature that invokes a Groovy script mapped to build events. Then, in the last post, "Ant hooks using Groovy, continued", we added the ability to skip a target execution.

The problem with the former implementations is that the target Ant script must be modified by hand to take advantage of hooks. Using XMLTask, we can instead transform the Ant script automatically. The InsertHooks.groovy script reads the hooks.inix file and transforms build.xml into build-hooked.xml. The build-hooked.xml file will have an Ant build listener set to the Hook.groovy script.

The scripts are not general purpose, of course. Just a proof of concept thing.

Approach

hooks.inix:

[>hook/root/compile?when=after,skip=false]
	println " hook: root,{target=${event.target.name},when=post,event=$event}"
[<]

[>hook/demo1/deploy?when=before,skip=true]
	println "  hook: {project=${event.project.name},target=${event.target.name},when=pre,event=$event}"
[<]

[>hook/demo1/compile?when=before,skip=false]
	println "  hook: {project=${event.project.name},target=${event.target.name},when=pre,event=$event}"
[<]

[>fragment]
	<path id="libs">    
		<fileset dir="lib">
            <include 
                name="groovy-all-2.2.1.jar" />
        </fileset> 
    	
    	<pathelement location="src/main/groovy"/>
	</path>    
     
    <!-- Groovy library -->
    <taskdef name="groovy"
       classname="org.codehaus.groovy.ant.Groovy"
       classpathref="libs"/> 
    
    <!-- sets a BuildListener to the project -->
    <scriptdef name="set-listener" 
        language="Groovy"
        classpathref="libs" 
        src="src/main/groovy/com/octodecillion/util/ant/Hook.groovy"> 
    	<attribute name="path"/>
    </scriptdef>
     
    <!-- install the listener -->
    <set-listener path="hooks.inix"/>   
     
[<fragment]

InsertHooks.groovy

package com.octodecillion.util.ant

import static com.octodecillion.util.inix.Inix.EventType.END

import com.octodecillion.util.inix.Inix

import org.apache.tools.ant.*

import static groovy.io.FileType.FILES

/**
 * Insert the XML into Ant script to enable hooks.
 *   
 * @author josef betancourt
 */
class InsertHooks{

	def ant
	def DEBUG = false
	static final String srcFilePath='build.xml'
	static final String destFilePath="build-hooked.xml"
	static final String INIXFILE = 'hooks.inix'	
	static final String XMLTASK = 'com.oopsconsultancy.xmltask.ant.XmlTask'
	
	static main(args){
		new InsertHooks().execute()
	}
	
	/** An Ant task entry point */
	public void execute() throws BuildException{
		def ant = new AntBuilder()
		
		try {
			
			def fragment = loadFragment(INIXFILE)
			if(!fragment){
				throw new BuildException("'fragment' from $INIXFILE is invalid")
			}		
			
			def engine = new groovy.text.SimpleTemplateEngine()
			def template = engine.createTemplate(fragment)			 
			def xml = template.make([hookFilePath:INIXFILE])
			  
			ant.path(id: "path") {
				fileset(dir: 'lib') {
				   include(name: "**/xml*.jar")
				}
			}
	 
			ant.taskdef(name:'xmltask',classname:
				XMLTASK,
				classpathref: 'path')
			 
			def xpath = '(//target)[1]' 
			ant.xmltask(source:srcFilePath,dest:destFilePath, 
				expandEntityReferences:false,report:false){
				insert(position:"before",path:xpath,xml:xml)				 
			}
				 
			new File(destFilePath).eachLine{
				println it
			}
			
		} catch (Exception e) {
			e.printStackTrace()
			throw new BuildException(e.getMessage(), e)
		}
	}
	
	def loadFragment(String path){
		def text = ''
		def inix = new Inix(path)
		def theEvent = inix.next()
		 
		while(theEvent && theEvent != Inix.EventType.END ){
			Inix.Event event = inix.getEvent()

			if(event.isSection("fragment")){
				text = event.text
				break
			}
			
			theEvent = inix.next()
		}
		
		return text
	}	
	
}

// end Script

Hook.groovy

package com.octodecillion.util.ant

import groovy.transform.TypeChecked;
import groovy.transform.TypeCheckingMode;

import java.util.List;
import java.util.Map;
import java.util.regex.Pattern

import com.octodecillion.util.inix.Inix

import org.apache.tools.ant.BuildEvent
import org.apache.tools.ant.BuildException
import org.apache.tools.ant.Project
import org.apache.tools.ant.SubBuildListener;

import static groovy.io.FileType.FILES

// wire in the listener
def path = binding.attributes.get('path')
if(!path){
	throw new BuildException("'path' to hook inix not set")
}

def listener = new HookListener(project,path)
listener.project = project
project.addBuildListener(listener)

// end wiring

/**
 * Ant build listener that invokes groovy hook scripts.
 *  
 * @author josef betancourt
 *
 */
//@TypeChecked
class HookListener implements SubBuildListener {
	Project project
	boolean DEBUG = false
	/** project name -> (target/when -> hook node); used by loadInix and invokeTargetHook */
	def hooks = [:]
	/** when true, a project-level hook suppresses the root hook */
	boolean override = false
	
	/**                          */
    def HookListener(Project project, String path){
		this.project = project
		loadInix(path)				
    }    
    
	/** load scripts in inix file */
	def loadInix(String path){
		debugln("load inix")
		def inix = new Inix()
		inix.reader = new BufferedReader(
			new FileReader(new File(path)))
		 
		def theEvent = inix.next()
		def found = false
		 
		while(theEvent && theEvent != Inix.EventType.END ){
			def event = inix.getEvent()

			if(isHook(event)){
				found = true
				def key = [event.path[2],((String)(event.params['when'])).
					toUpperCase()].join('/')
					
				String txt = event.text
				String skString = event.params['skip']
				boolean sk = (skString.compareTo('true')==0 ? true : false)
				debugln "key=$key, ${event.params['skip']}, skip=$sk"
					
				def node = new HookNode(txt, sk)
			
				def prj = event.path[1]				
				if(!hooks[prj]){
					hooks[prj] = [:]
				}	
				
				hooks[prj].put(key,node);			
			}
			
			theEvent = inix.next()
		}
		
		dumpHooks()		
		
	}

    /** invoked by Ant build */
	@Override
    public void targetStarted(BuildEvent event) {
		die("targetStarted invoked with null event", !event)
		invokeTargetHook(event, When.BEFORE)
    }
    
	/** invoked by Ant build */
    @Override
    public void targetFinished(BuildEvent event) {
		die("targetFinished invoked with null event", !event)
		invokeTargetHook(event, When.AFTER)
    }

    /** Invoke the target's hook script */
    def invokeTargetHook(BuildEvent event, When when){
        def b = new Binding()
        b.setProperty("event",event)
		b.setProperty("hook",this)
		
        def shell = new GroovyShell(b)
		
		def hookName = "${event.target.name}/$when"
		def pHook = hooks[event.project.name]?.get(hookName)
		def rHook = hooks['root']?.get(hookName)
		debugln("invokeTargetHook: $hookName\npHook:  $pHook\nrHook:  $rHook")
			
		boolean skipSet = false
		
		if(pHook){
			skipSet = pHook.skip	
			debugln("skipSet=$skipSet")
			shell.evaluate(pHook.text)
			
			if(!override && rHook){
				skipSet = skipSet ? skipSet : rHook.skip
				shell.evaluate(rHook.text)
			}			
			
		}else if(rHook){
			skipSet = rHook.skip			
			shell.evaluate(rHook.text)		
		}
		
		if( skipSet && (pHook || rHook) && (when == When.BEFORE) ){
			createSkipforTarget(event)
		}		
    } 

	/** Set a skip property and the target's 'unless' condition so the target body is skipped. */
	private createSkipforTarget(BuildEvent event) {
		debugln "setting skip: ${event.target.name}_skipTarget"
		event.project.setProperty("${event.target.name}_skipTarget", "true")
		event.target.setUnless("${event.target.name}_skipTarget")
	}
	
	/** throw exception if flg is true */
	private die(Object msgObject, boolean flg){
		if(flg){
			throw new IllegalArgumentException(String.valueOf(msgObject))			
		}		
	}	
	
	@TypeChecked(TypeCheckingMode.SKIP)
	private isHook(ev){		
		ev.path && ev.path[0] == 'hook'		
	}
    
	private dumpHooks() {
		if(!DEBUG){
			return			
		}
		
		hooks.each{ prj, targets ->
			targets.each{ key, node ->
				debugln("$prj/$key -> $node")
			}
		}
	}
	
	private debugln(Object msg){
		if(DEBUG){
			println(msg)
		}
	}
	
	String TARGETHOOK = "target"
	def override = true;
	
	enum When{
		BEFORE('before'),AFTER('after')
		String name
		
		When(s) {this.name = s}
	}
	
	private class HookNode {
		String text
		boolean skip
		public HookNode(String text, boolean skip){
			this.text = text
			this.skip = skip
		}
		
		String toString() { "s:$skip" }
	}
	
	Map<String, Map<String,HookNode>> hooks = [:]
	
    //@formatter:off
    @Override
    public void subBuildFinished(BuildEvent event) {}
    @Override
    public void subBuildStarted(BuildEvent event) {}
    @Override
    public void buildFinished(BuildEvent event) {}
    @Override
    public void buildStarted(BuildEvent event) {}
    @Override
    public void messageLogged(BuildEvent event) {}
    @Override
    public void taskFinished(BuildEvent event) {}
    @Override
    public void taskStarted(BuildEvent event) {}
	//@formatter:on
} // end class HookListener

// end Script


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

Adaptive log level based on error context

The other day I was thinking about my company’s application logging system and wondered: why don’t we automatically increase the log output when there is a problem?

Problem
If a method in a class is logging at the ERROR level and an exception occurs, the output at that level is useful; it contains a stack trace, for example. However, the levels below ERROR capture much more information that could help in maintaining the system. Levels are how logging systems control how much output is produced. For example, java.util.logging has seven default levels: SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST.
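As a concrete illustration (using java.util.logging directly; the logger name here is arbitrary), a logger's level acts as a threshold: messages below it are filtered out, and lowering the level lets the detailed messages through.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LevelDemo {
    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo"); // arbitrary logger name

        // At SEVERE, only the most severe messages pass the filter.
        log.setLevel(Level.SEVERE);
        System.out.println(log.isLoggable(Level.SEVERE)); // true
        System.out.println(log.isLoggable(Level.FINE));   // false

        // Lowering the threshold to FINE lets the detail through.
        log.setLevel(Level.FINE);
        System.out.println(log.isLoggable(Level.FINE));   // true
    }
}
```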

Solution
One way to capture this extra information is to have the class that detects the exception raise the log level for the problem method or function to a more verbose setting. The algorithm would probably be similar to the Circuit Breaker design pattern.

Like “First Failure Data Capture”, this approach could be called Nth Failure Data Capture.
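A minimal sketch of the idea in Java, assuming java.util.logging; the class name, thresholds, and reset policy are invented for illustration. Like a circuit breaker, the wrapper escalates the logger's level after N failures and restores it after a run of successes:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * Hypothetical sketch of "Nth Failure Data Capture": after a threshold of
 * errors, temporarily raise a logger's verbosity; restore it once enough
 * successful calls have passed. Not an existing API.
 */
public class AdaptiveLogLevel {
    private final Logger logger;
    private final Level quietLevel;     // normal level, e.g. SEVERE
    private final Level verboseLevel;   // escalated level, e.g. FINE
    private final int errorThreshold;   // errors before escalating
    private final int successThreshold; // successes before resetting
    private int errorCount;
    private int successCount;
    private boolean escalated;

    public AdaptiveLogLevel(Logger logger, Level quiet, Level verbose,
                            int errorThreshold, int successThreshold) {
        this.logger = logger;
        this.quietLevel = quiet;
        this.verboseLevel = verbose;
        this.errorThreshold = errorThreshold;
        this.successThreshold = successThreshold;
        logger.setLevel(quiet);
    }

    /** Call when the guarded method throws. */
    public synchronized void onError() {
        successCount = 0;
        if (!escalated && ++errorCount >= errorThreshold) {
            escalated = true;
            logger.setLevel(verboseLevel); // capture detail for the next failures
        }
    }

    /** Call when the guarded method completes normally. */
    public synchronized void onSuccess() {
        if (escalated && ++successCount >= successThreshold) {
            escalated = false;
            errorCount = 0;
            successCount = 0;
            logger.setLevel(quietLevel); // circuit "closes" again
        }
    }

    public boolean isEscalated() { return escalated; }
}
```

The thresholds make the policy tunable: a single transient error need not trigger the change, which is one of the open questions raised below.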

Issues
Of course, while this may be easy to do programmatically, in practice it is not a simple approach. Many questions remain: performance, resources, whether one error occurrence is enough to trigger a change, whether all threads are affected, which target level to use, how much logging to allow, how to reset the level, and so forth.

Funny, I’m sure this is not a new idea. A quick search makes it look like it is not a well-known approach.

Alternatives

  • Record everything instead of logging just some things. This is possible with some systems, for example Chronon?



How to easily siphon water from pool cover

Time to open up the pool. This time I’ll use my brain and figure out how to do this better.

Yucky Method
The cheapest way to do this is to get a short length of hose, put one end in the pool, put the other in your mouth, and suck the air out. Once that is done, if you lower that free end below the end in the pool, the laws of physics take over: the planet tries to equalize the water pressure at the two ends of the hose, and the water starts draining out.

But that is yucky. You have to put your arms into that dirty water, and you may get some of it in your mouth when you suck out the air. I see little wiggly worms in there.

My Method
Get one of those large plastic water jugs, like the ones used in water dispensers.


  1. Put a hole in the cap so that you can push the hose through.
  2. Fill the jug with water.
  3. Put the cover back on the jug.
  4. Now one end of the hose is in the jug. Take the other end and stick it in the pool.
  5. Carefully, move the jug closer to the pool and upend the jug.
  6. Water will start draining from the jug into the pool. This will remove the air in the hose!
  7. Pull on the hose so that its end inside the jug reaches the bottom of the jug. This lets you flip the jug over again without letting air into the hose.
  8. Bring the jug below the height of the hose’s other end in the water.
  9. Now when you flip the jug over, the pool water will start draining out.

Writing down the steps makes it seem complicated. All you’re trying to do is remove the air from the hose you’re using to siphon out the water. It’s just like that gas-siphoning technique used on motor vehicles.

An even better way?
This video shows an alternative method. I didn’t try it, but the video shows it working. If you have a long enough hose, connect it to a faucet and put the other end in the pool. Turn on the water. When the bubbles stop coming from the end in the pool, turn off the faucet and disconnect the hose at the faucet end. If the pool is higher than that free end of the hose, water should start draining from the pool.

Or you can buy a pump. I once bought a cheap pump and it didn’t last one day of use.



Do tablets have a black screen of death problem?

This just happened to my tablet, a Samsung Tab Pro 10.1. If you search online for this you find many discussions and pleas for help. Does this happen to other brands of tablets?

BTW, there is also a White Screen of Death associated with iPod, iPad, or iPhone.

On restart, the screen would not come on. The sound was OK and the buttons seemed functional. A restart or reset using button combinations did not fix this.

The Fix
Luckily I found some instructions on how to fix this. Remove the back cover, disconnect the LCD cable, wait for a few minutes, then reconnect.

Note: Now my Wi-Fi signal level is very low. Yikes! I took it apart to see if there is some kind of antenna connection to the case or cover. Don’t see anything. waaaaaa. (;゚︵゚;)

Update: June 18, 2015 – Changed the channel my Wi-Fi router was using. Fixed! But now if I hold the tablet at the edge, I get a low Wi-Fi level. Arrrrr. >:(

 

One person wrote about a Galaxy Tablet Reboot Trick. Too bad I did not try that first.

Notes

  1. Doing this may void your warranty.
  2. Don’t use a metal tool to pry the back cover off. Get one of the plastic prying tools that are sold in kits for this kind of thing, or use a guitar pick.
  3. Getting the cover off takes a lot of careful prying.
  4. Some people recommend you disconnect the battery connector before you disconnect the LCD connector.
  5. Getting the cover back on is just as hard. I still don’t have it seating well.

Background
Note that all (?) electronic devices that have multiple connected parts will have issues. When I worked with metrology components and electrochemical control devices, sometimes the only thing that would fix them was to disconnect and reconnect some device or subsystem, wait a while, then turn the unit back on. I just read that this is one technique to ‘fix’ ECU units on some automobiles.

