
Global Data Consistency, Transactions, Microservices and Spring Boot / Tomcat / Jetty

We often build applications which need to do several of the following things together: call backend (micro-) services, write to a database, send a JMS message, etc. But what happens if there is an error during a call to one of these remote resources, for example if a database insert fails after you have called a web service? If a remote service call writes data, you could end up in a globally inconsistent state because the service has committed its data, but the call to the database has not been committed. In such cases you will need to compensate for the error, and typically managing that compensation is complex, hand-written work.

Arun Gupta of Red Hat writes about different microservice patterns in the DZone Getting Started with Microservices Refcard. Indeed the majority of those patterns show a microservice calling multiple other microservices. In all these cases global data consistency becomes relevant, i.e. ensuring that a failure in one of the later calls to a microservice is either compensated for, or the call is retried until it succeeds, so that the data in all the microservices eventually becomes consistent again. Other articles about microservices often make little or no mention of data consistency across remote boundaries. For example, the otherwise good article titled "Microservices are not a free lunch" only touches on the problem with the statement "when things have to happen ... transactionally ...things get complex with us needing to manage ... distributed transactions to tie various actions together". Indeed we do, but such articles never mention how to actually do it.

The traditional way to manage consistency in distributed environments is to make use of distributed transactions. A transaction manager is put in place to oversee that the global system remains consistent. Protocols like two phase commit have been developed to standardise the process. JTA, JDBC and JMS are specifications which enable application developers to keep multiple databases and message servers consistent. JCA is a specification which allows developers to write wrappers around Enterprise Information Systems (EISs). And in a recent article I wrote about how I have built a generic JCA connector which allows you to bind things like calls to microservices into these global distributed transactions, precisely so that you don't have to write your own framework code for handling failures during distributed transactions. The connector takes care of ensuring that your data is eventually consistent.

But you won't always have access to a full Java EE application server which supports JCA, especially in a microservice environment, and so I have now extended the library to include automatic handling of commit / rollback / recovery in the following environments:
  • Spring Boot
  • Spring + Tomcat / Jetty
  • Servlets + Tomcat / Jetty
  • Spring Batch
  • Standalone Java applications
In order to be able to do this, the applications need to make use of a JTA compatible transaction manager, namely one of Atomikos or Bitronix.

The following description assumes that you have read the earlier blog article.

The process of setting up a remote call so that it is enlisted in the transaction is similar to when using the JCA adapter presented in the earlier blog article. There are two steps: 1) calling the remote service inside a callback passed to a TransactionAssistant object retrieved from the BasicTransactionAssistanceFactory class, and 2) setting up a central commit / rollback handler.

The first step, namely the code belonging to the execution stage (see the earlier blog article), looks as follows (when using Spring):
@Service
@Transactional
public class SomeService {

    @Autowired @Qualifier("xa/bookingService")
    BasicTransactionAssistanceFactory bookingServiceFactory;

    public String doSomethingInAGlobalTransactionWithARemoteService(String username) throws Exception {
        //write to say a local database...

        //call a remote service
        String msResponse = null;
        try(TransactionAssistant transactionAssistant = bookingServiceFactory.getTransactionAssistant()){
            msResponse = transactionAssistant.executeInActiveTransaction(txid->{
                BookingSystem service = new BookingSystemWebServiceService().getBookingSystemPort();
                return service.reserveTickets(txid, username);
            });
        }
        return msResponse;
    }
}
Listing 1: Calling a web service inside a transaction
Lines 5-6 provide an instance of the factory used on line 13 to get a TransactionAssistant. Note that you must ensure that the name used here is the same as the one used during the setup in Listing 3, below. This is because when the transaction is committed or rolled back, the transaction manager needs to find the relevant callback used to commit or compensate the call made on line 16. It is more than likely that you will have multiple remote calls like this in your application, and for each remote service that you integrate, you must write code like that shown in Listing 1. Notice how this code is not that different from using JDBC to call a database (a JDBC sketch follows the list below, for comparison). For each database that you enlist into the transaction, you need to:
  • inject a data source (analogous to lines 5-6)
  • get a connection from the data source (line 13)
  • create a statement (line 14)
  • execute the statement (lines 15-16)
  • close the connection (line 13, when the try block calls the close method of the auto-closeable resource). It is very important to close the transaction assistant after it has been used, before the transaction is completed.
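For comparison, the JDBC equivalent of Listing 1 might look roughly like the following sketch. The data source name "xa/oracle" and the sales table are made up for illustration; only the shape of the code matters here.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;

public class SalesDao {

    @Autowired @Qualifier("xa/oracle")    //analogous to injecting the factory (lines 5-6)
    DataSource dataSource;

    public void writeSale(String username) throws SQLException {
        try(Connection connection = dataSource.getConnection();          //analogous to line 13
            PreparedStatement stmt = connection.prepareStatement(
                    "insert into sales (username) values (?)")){         //analogous to line 14
            stmt.setString(1, username);
            stmt.executeUpdate();                                        //analogous to lines 15-16
        }                                                                //connection closed, like the assistant
    }
}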
In order to create an instance of the BasicTransactionAssistanceFactory (lines 5-6 in Listing 1), we use a Spring @Configuration:
public class Config {

    public BasicTransactionAssistanceFactory bookingSystemFactory() throws NamingException {
        Context ctx = new BitronixContext();
        BasicTransactionAssistanceFactory microserviceFactory = 
                          (BasicTransactionAssistanceFactory) ctx.lookup("xa/bookingService");
        return microserviceFactory;
Listing 2: Spring's @Configuration, used to create a factory
Line 4 of Listing 2 uses the same name as is found in the @Qualifier on line 5 of Listing 1. The method on line 5 of Listing 2 creates a factory by looking it up in JNDI, in this example using Bitronix. The code looks slightly different when using Atomikos - see the demo/genericconnector-demo-springboot-atomikos project for details.

The second step mentioned above is to set up a commit / rollback callback. This will be used by the transaction manager when the transaction around lines 8-20 of Listing 1 is committed or rolled back. Note that there is a transaction because of the @Transactional annotation on line 2 of Listing 1. This setup is shown in Listing 3:
CommitRollbackCallback bookingCommitRollbackCallback = new CommitRollbackCallback() {
    private static final long serialVersionUID = 1L;
    @Override
    public void rollback(String txid) throws Exception {
        new BookingSystemWebServiceService().getBookingSystemPort().cancelTickets(txid);
    }
    @Override
    public void commit(String txid) throws Exception {
        new BookingSystemWebServiceService().getBookingSystemPort().bookTickets(txid);
    }
};
TransactionConfigurator.setup("xa/bookingService", bookingCommitRollbackCallback);
Listing 3: Setting up a commit / rollback handler
Line 12 passes the callback to the configurator together with the same unique name that was used in listings 1 and 2.

The commit on line 9 may well be empty, if the service you are integrating only offers an execution method and a compensatory method for that execution. This commit callback comes from two phase commit where the aim is to keep the amount of time that distributed systems are inconsistent to an absolute minimum. See the discussion towards the end of this article.

Lines 5 and 9 instantiate a new web service client. Note that the callback handler should be stateless! It is serializable because on some platforms, e.g. Atomikos, it will be serialized together with transactional information so that it can be called during recovery if necessary. I suppose you could make it stateful so long as it remained serializable, but I recommend leaving it stateless.

The transaction ID (the String named txid) passed to the callback on lines 4 and 8 is passed to the web service in this example. In a more realistic example you would use that ID to lookup contextual information that you saved during the execution stage (see lines 15 and 16 of Listing 1). You would then use that contextual information, for example a reference number that came from an earlier call to the web service, to make the call to commit or rollback the web service call made in Listing 1.
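As a concrete illustration, that contextual information could simply be stored keyed by txid. The sketch below uses an in-memory map and a hypothetical cancelReservation operation purely for illustration; in a real application you would persist the context, for example in your local database, inside the same transaction as the rest of your work.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class BookingContext {

    //reservation references returned by the execution stage, keyed by transaction ID
    private static final ConcurrentMap<String, String> REFS = new ConcurrentHashMap<>();

    public static void remember(String txid, String reservationRef) { REFS.put(txid, reservationRef); }
    public static String find(String txid) { return REFS.get(txid); }
    public static void forget(String txid) { REFS.remove(txid); }
}

Inside the lambda in Listing 1 you would call BookingContext.remember(txid, response) after the remote call returns, and inside the rollback callback of Listing 3 you could then call something like service.cancelReservation(BookingContext.find(txid)), where cancelReservation is a hypothetical operation that takes the reservation reference rather than the transaction ID.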

The standalone variations of these listings, for example to use this library outside of a Spring environment, are almost identical with the exception that you need to manage the transaction manually. See the demo folder on Github for examples of code in several of the supported environments.
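To give a feel for the standalone case, the following rough sketch shows manual transaction demarcation with Bitronix. It reuses the callback from Listing 3 and the name from Listing 2; treat it as an outline only and refer to the demo projects for the exact setup (and for the Atomikos variant).

import bitronix.tm.BitronixTransactionManager;
import bitronix.tm.TransactionManagerServices;
import bitronix.tm.jndi.BitronixContext;
//genericconnector imports omitted

public class StandaloneExample {
    public static void main(String[] args) throws Exception {
        TransactionConfigurator.setup("xa/bookingService", bookingCommitRollbackCallback); //see Listing 3

        BitronixTransactionManager tm = TransactionManagerServices.getTransactionManager();
        try {
            tm.begin();                                        //manual demarcation instead of @Transactional

            BasicTransactionAssistanceFactory factory = (BasicTransactionAssistanceFactory)
                    new BitronixContext().lookup("xa/bookingService");
            try (TransactionAssistant assistant = factory.getTransactionAssistant()) {
                assistant.executeInActiveTransaction(txid ->
                        new BookingSystemWebServiceService().getBookingSystemPort()
                                                            .reserveTickets(txid, "ant"));
            }

            tm.commit();
        } catch (Exception e) {
            tm.rollback();
            throw e;
        } finally {
            tm.shutdown();
        }
    }
}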

Note that in the JCA version of the generic connector, you can configure whether or not the generic connector handles recovery internally. If it doesn't, you have to provide a callback which the transaction manager can call to find transactions which you believe are not yet completed. In the non-JCA implementation discussed in this article, this is always handled internally by the generic connector. The generic connector writes contextual information to a directory and uses that during recovery to tell the transaction manager what needs to be cleaned up. Strictly speaking, this is not quite right, because if your hard disk fails, all the information about incomplete transactions will be lost. In strict two phase commit, this is why the transaction manager is allowed to call through to the resource to get a list of incomplete transactions requiring recovery. In today's world of RAID controllers there is no reason why a production machine should ever lose data due to a hard disk failure, and for that reason there is currently no option of providing a callback to the generic connector which can tell it what transactions are in a state that needs recovery.

In the event of a catastrophic hardware failure of a node, where it was not possible to get the node up and running again, you would need to physically copy all the files which the generic connector writes from the old hard disk over to a second node. The transaction manager and generic connector running on the second node would then work in harmony to complete all the hung transactions, by either committing them or rolling them back, whichever was relevant at the time of the crash. This process is no different to copying transaction manager logs during disaster recovery, depending on which transaction manager you are using. The chances that you will ever need to do this are very small - in my career I have never known a production machine from a project/product that I have worked on to fail in such a way.

You can configure where this contextual information is written using the second parameter shown in Listing 4:
MicroserviceXAResource.configure(30000L, new File("."));
Listing 4: Configuring the generic connector. The values shown are also the default values.
Listing 4 sets the minimum age of a transaction before it becomes relevant to recovery. In this case, the transaction will only be considered relevant for cleanup via recovery when it is more than 30 seconds old. You may need to tune this value depending upon the time it takes your business process to execute, which may depend on the sum of the timeout periods configured for each back-end service that you call. There is a trade-off between a low value and a high value: the lower the value, the less time it takes the background task running in the transaction manager to clean up during recovery after a failure. That means the smaller the value is, the smaller the window of inconsistency is. Be careful though: if the value is too low, the recovery task will attempt to roll back transactions which are actually still active. You can normally configure the transaction manager's timeout period, and the value set in Listing 4 should be greater than or equal to the transaction manager's timeout period. Additionally, the directory where contextual data is stored is configured in Listing 4 to be the local directory. You can specify any directory, but please make sure the directory exists because the generic connector will not attempt to create it.

If you are using Bitronix in a Tomcat environment, you may find that there isn't much information available on how to configure the environment. It used to be documented very well, before Bitronix was moved from codehaus.org over to Github. I have created an issue with Bitronix to improve the documentation. The source code and readme file in the demo/genericconnector-demo-tomcat-bitronix folder contains hints and links.

A final thing to note about using the generic connector is how the commit and rollback work. All the connector is doing is piggy-backing on top of a JTA transaction so that when something needs to be rolled back, it is notified via a callback. The generic connector then passes this information over to your code via the callback that is registered in Listing 3. The actual rolling back of the data in the back end is not something that the generic connector does - it simply calls your callback so that you can tell the back-end system to roll back the data. Normally you won't roll back as such; rather you will mark the data that was written as no longer valid, typically using states. It can be very hard to properly roll back all traces of data that have already been written during the execution stage. In a strict two phase commit setup, e.g. using two databases, the data written in each resource remains in a locked state, untouchable by third-party transactions, between execution and commit/rollback. Indeed that is one of the drawbacks of two phase commit, because locking resources reduces scalability. Typically the back-end system that you integrate won't lock data between the execution phase and the commit phase, and indeed the commit callback will remain empty because it has nothing to do - the data is typically already committed in the back end when line 16 of Listing 1 returns during the execution stage. However, if you want to build a stricter system, and you can influence the implementation of the back end which you are integrating, then the data in the back-end system can be "locked" between the execution and commit stages, typically by using states, for example "ticket reserved" after execution and "ticket booked" after the commit. Third-party transactions would not be allowed to access resources / tickets in the "reserved" state.
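A back end built in that stricter style might look roughly like the following sketch, where the "lock" is nothing more than a state per transaction ID. The class and map are made up for illustration; only the state transitions matter.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TicketService {

    enum State { RESERVED, BOOKED, CANCELLED }

    private final Map<String, State> ticketsByTxId = new ConcurrentHashMap<>();

    /** execution stage: the ticket is taken out of the pool, but not yet definitively sold */
    public void reserveTickets(String txid) {
        ticketsByTxId.put(txid, State.RESERVED);
    }

    /** commit stage: the reservation becomes a definite booking; calling it twice is harmless */
    public void bookTickets(String txid) {
        ticketsByTxId.replace(txid, State.BOOKED);
    }

    /** rollback stage: nothing is deleted, the ticket simply becomes available to others again */
    public void cancelTickets(String txid) {
        ticketsByTxId.replace(txid, State.CANCELLED);
    }
}

Queries from other transactions would simply ignore anything still in the RESERVED state.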

The generic connector and a number of demo projects are available at https://github.com/maxant/genericconnector/ and the binaries and sources are available from Maven.

Copyright ©2015, Ant Kutschera.

Rules Engine 2.2.0, now with JavaScript (Nashorn) Support

A new version of the Simple Rule Engine is available, so that you can now use JavaScript (Nashorn) for writing your rules (MVEL is still supported because it is so fast!).

New Features:
  • JavaScript based Rule Engine - Use the JavascriptEngine constructor to create a subclass of Engine which is capable of interpreting JavaScript rules. It uses Nashorn (Java 8) as a JavaScript engine for evaluating the textual rules (a rough illustration of what that means follows this list). Additionally, you can load scripts, for example lodash, so that your rules can be very complex. See the testRuleWithIterationUsingLibrary(), testComplexRuleInLibrary() and testLoadScriptRatherThanFile() tests for examples. Nashorn isn't threadsafe, but the rule engine is! Internally it uses a pool of Nashorn engines. You can also override the pool configuration if you need to. See the testMultithreadingAndPerformance_NoProblemsExpectedBecauseScriptsAreStateless() and testMultithreadingStatefulRules_NoProblemsExpectedBecauseOfEnginePool() tests for examples. If required, you can get the engine to preload the pool, or let it fill the pool lazily (the default). Please note, the engine is not completely Rhino (Java 6 / Java 7) compatible - the multithreaded tests do not work as expected for stateful scripts, but the performance of Rhino is so bad that you won't want to use it anyway.
  • You can now override the name of the input parameter - previous versions required that the rules refer to the input as "input", for example "input.people[0].name == 'Jane'". You can now provide the engine with the name which should be used, so that you can create rules like "company.people[0].name == 'Jane'".
  • Java 8 Javascript Rule Engine - If you want to use Java 8 lambdas, then you instantiate a Java8JavascriptEngine rather than the more plain JavascriptEngine.
  • For your convenience, there are now builders for the JavascriptEngine and Java8JavascriptEngine, because their constructors have so many parameters. See the testBuilder() test for an example.
  • Javascript rules can refer to input using bean notation (e.g. "input.people[0].name") or Java notation (e.g. "input.getPeople().get(0).getName()").
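Under the hood, evaluating a textual rule against an input object amounts to something like the following sketch, which uses the standard javax.script API directly rather than the rule engine itself. The Person bean and the rule text are made up for illustration.

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class NashornRuleSketch {

    public static class Person {
        private final String name;
        public Person(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static void main(String[] args) throws Exception {
        ScriptEngine nashorn = new ScriptEngineManager().getEngineByName("nashorn");
        nashorn.put("input", new Person("Jane"));               //"input" is the default parameter name
        Object result = nashorn.eval("input.name == 'Jane'");   //bean notation, as described above
        System.out.println(result);                             //prints: true
    }
}

The rule engine adds pooling, rule management and outcome selection on top of this basic evaluation.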
The library is available from Maven Central:


Have fun!

Copyright ©2015, Ant Kutschera

Several Patterns for Binding Non-transactional Resources into JTA Transactions

I recently published an article about how to bind non-transactional resources like web services / microservices into global distributed transactions so that recovery is handled automatically. Over the years I have often had to integrate "non-transactional" systems into Java EE application servers and data consistency was often a topic of discussion or even a non-functional requirement. I've put "non-transactional" into quotes because often the systems contain ways of ensuring data consistency, for example using calls to compensate, but the systems aren't what you might traditionally call transactional. There is certainly no way of configuring a Java EE application server to automatically handle recovery for such resources.

The following is a list of patterns that we compiled, showing different ways to maintain consistency when faced with the task of integrating a non-transactional system.
  1. Write job to database - The common scenario whereby you want to send say an email confirmation after a sale is made. You cannot send the email and then attempt to commit the sales transaction to your database, because if the commit fails, the customer receives an email stating that they have bought something and you have no record of it. You cannot send the email after the sales transaction is committed to your database, because if the sending of the email fails (e.g. the mail server is temporarily down), the customer won't get their confirmation, perhaps with a link to the tickets that they bought. One solution is to write the fact that an email needs to be sent into the database, in the same transaction that persists the sale. A batch or @Scheduled EJB can then periodically check to see if it should send an email. Once it successfully sends an email it changes the state of the record so that the email is not sent again. The same problem exists here, namely that you might manage to send the email but then fail to update the database. But if you were able to read the database, you are likely to be able to update it, and sending the same email twice because of a database failure isn't as bad as never sending it, as could be the case if you didn't handle sending email asynchronously. One disadvantage of integrating like this is that you cannot use it for a system from which you need the result in order to continue processing your business logic before replying to the user. You must handle the integration asynchronously.
  2. JMS - In a similar scenario to the previous solution, instead of writing a job to the database, you can send a JMS message containing the job. JMS is transactional, but asynchronous so this solution suffers from the same disadvantages as the solution above. Instead of changing the state of the work to be done, if you cannot process the work at that time, you send the message back into the queue with a property so that it is only processed after a certain amount of time, or you send the message to a dead letter queue for manual handling.
  3. Generic Connector (JCA Adapter) - I recently published a blog article describing a generic JCA resource adapter that I have created which lets you bind typically non-transactional resources like web services into JTA transactions. See the blog article for more details. Using the generic connector means that the transaction manager will execute callbacks when the transaction needs to be committed, rolled back or recovered, so that you only need to write application code which responds to these events.
  4. CDI Events - Inject an Event<T> (optionally with a @Qualifier) into a field and call field.fire(t) when you want to fire an event; then annotate a method parameter of type T (with the same qualifier) with @Observes(during=TransactionPhase.AFTER_FAILURE) and that method will be called for each fired event after the transaction fails. This way you can implement some compensation for when the transaction fails. Equally, you can use different transaction phases to do different things, like AFTER_SUCCESS to perform a call to confirm an initial reservation. We have even used these mechanisms to delay the call to the remote system, for example to post work to a workflow engine just before the commit, so that we are sure that all validation logic in the complex process has completed before the remote system call is made. See number 12 below, and the sketch after this list.
  5. Custom Solution - If you can really, really justify it, then you could build complex code with timeouts etc., involving batches and scripts which handle committing, rolling back and recovering transactions using the remote resource. The question you need to ask yourself is whether you are an expert in writing business code, or an expert in effectively writing transaction managers.
  6. Business Process Engine - Modern engines can integrate all kinds of remote resources into business processes and they tend to handle things like failure recovery. They typically retry failed calls and they can durably handle process state during the time it takes for remote systems to become online again so that the process can be resumed. Rather than commit and rollback, BPEL supports compensation to guarantee consistency across the entire landscape.
  7. Atomikos & TCC - A product which is capable of binding web services into JTA transactions. As far as I can tell, it is a standalone transaction manager which can run outside of a Java EE application server, but I have no experience with this product.
  8. WS-AT - Using proprietary configuration (and/or annotations) you can set up two application servers to do their work within a global transaction. While this sounds promising, I have yet to come across a productive system which implements WS-AT. It really only supports SOAP web services, although JBoss has something in the pipeline for supporting REST services.
  9. EJB - Remote EJBs: Java EE application servers have been able to propagate transaction contexts from one server to another for a relatively long time. If you need to call a service that happens to be implemented using the Java EE stack, why not call it using remote EJB rather than calling it say over a web service, so that you get the service bound into a global transaction for free?
        - Local EJBs: If the service you are calling happens to be written in Java using say EJB technology, why not just deploy it locally instead of going to the extra effort of calling it remotely, say via a SOAP web service? You might get brownie points with the enterprise architects, but have scalability and composability been weighed against performance, consistency and simplicity? Sure, modern architectures with trends like microservices mean that deploying lots of remote services is good, but there's always a trade-off being made and you need to really understand it when making the decision about which parts of the landscape need to be accessed remotely.
  10. Transaction Callbacks - like solution 4 but using the transaction synchronisation API to register callbacks which are called at the relevant stage of the transaction. The problem here, unlike with CDI events, is that you don't know the context in which the transaction is being committed or rolled back, because the callback is not passed the relevant data unlike the object which is passed into an observing method in CDI. So if you need to compensate the transaction and call say a web service to cancel what you did during the transaction, where do you get the data that you need to do so?
  11. Enlist XA Resource into Transaction - add a custom implementation of the XAResource interface, which you enlist into the transaction using the enlistResource method. Unfortunately the commit/rollback methods are only called once and if they should fail, they won't be called again during recovery.
  12. Non-transactional resource last - If no other pattern can be implemented, and you don't need to call the resource at a specific time during the process (e.g. you need to send an email as part of the transaction, but it doesn't matter if you do it as the first or last process step), then always call it right at the end of the process, shortly before the transaction is committed. The chances of the transaction not being able to commit are relatively small (especially if all the SQL has been flushed to the database), compared to the chances of your remote system call failing. If the call fails, then roll back the transaction. If the call succeeds, then commit the transaction. If the transaction then fails during commit, and it is important to you to compensate the non-transactional resource, you will need to use one of the patterns described above to add some compensation to the system.
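To make pattern 4 (CDI events) more concrete, here is a minimal sketch. The event payload class BookingMade and the business method are made up for illustration; the important parts are the injected Event and the transactional observer.

import javax.enterprise.event.Event;
import javax.enterprise.event.Observes;
import javax.enterprise.event.TransactionPhase;
import javax.inject.Inject;

public class BookingEvents {

    //hypothetical event payload carrying whatever is needed to compensate later
    public static class BookingMade {
        public final String refNumber;
        public BookingMade(String refNumber) { this.refNumber = refNumber; }
    }

    @Inject
    private Event<BookingMade> bookingMade;

    public void reserve(String refNumber) {
        //...call the remote booking system inside the JTA transaction...
        bookingMade.fire(new BookingMade(refNumber)); //transactional observers are only notified in the matching phase
    }

    //called by the container only after the surrounding transaction has failed
    public void compensate(@Observes(during = TransactionPhase.AFTER_FAILURE) BookingMade booking) {
        //cancel the reservation in the remote system using booking.refNumber
    }
}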
The following table sums up the solutions. The recovery column indicates the level of automated recovery which this solution supports. The synchronicity column indicates whether you can use the solution if you need the response in order to continue processing, in which case you need a synchronous solution. Synchronicity here has nothing to do with blocking vs. non-blocking, rather it has to do with timing and whether you need a response in order to finish processing an activity.

Solution | Synchronicity | Recovery
1) Write job to database | Asynchronous | Manual [1]
2) JMS | Asynchronous | Semi-automatic [2]
3) Generic Connector (JCA Adapter) | Synchronous | Automatic [3]
4) CDI Events | Asynchronous | Not supported [4]
5) Custom Solution | Depends on your implementation | Depends on your implementation
6) Business Process Engine | Synchronous | Supported [5]
7) Atomikos & TCC | No experience, presumably synchronous | No experience, presumably supported
8) WS-AT (Configuration) | No experience, presumably synchronous | No experience, presumably supported
9) EJB | Synchronous | Automatic [6]
10) Transaction Callbacks | Synchronous | Not supported [4]
11) Enlist XA Resource into Transaction | Synchronous | Not supported [4]
12) Non-transactional resource last | Asynchronous because it must be called last | Not supported
  1. Manual Recovery - you need to program what to do if handling fails, i.e. how often a retry should be attempted before putting work on a "dead letter queue".
  2. JMS will automatically attempt to resend messages if you configure the queue to be durable. But what you do with a failed attempt to handle a message is up to you, the programmer.
  3. The transaction manager will continuously attempt to commit/rollback incomplete transactions until an administrator steps in to handle long running failures.
  4. Callbacks are only called once, so you have just one shot.
  5. A business process engine will repeatedly attempt to re-call failed web service calls. The same is true for compensation. The behaviour is typically configurable.
  6. Remote EJBs: The JTA transaction is propagated across to other app servers and as such the coordinating transaction manager will propagate transaction recovery to the other app servers bound into the transaction.
    Local EJBs: Using local EJBs means that any calls that they make to the database will be handled in the same transaction as your application code. If the local EJB is using a different database, you should use XA drivers for all databases, message queues, etc., so that the transaction manager can use two phase commit to ensure system-wide consistency.
Of all of these, my current favourite is the generic connector. It supports calls from which a response is required, and recovery is fully automatic. That means that I can concentrate on writing business code, rather than boilerplate code that really belongs in a framework.

If you know of further ways, please contact me or leave a comment so that I can add them to the list.

Copyright ©2015, Ant Kutschera

MySQL versions prior to 5.7 do not fully support two phase commit

While doing some tests for the recently released generic JCA adapter, which is capable of binding remote calls to microservices (as well as other things) into JTA transactions, I discovered a bug in MySQL 5.6 which has been around for nearly ten years.

The test scenario was a crash after the "prepare" phase of the XA transaction, after both the database and the generic connector vote to commit the transaction. After crashing the database or the application server, the transaction manager will try to recover and commit the transaction in the database. But while testing the crashes with MySQL 5.6 rather than MySQL 5.7, I had the problem that the database never actually committed the transaction, meaning that the system as a whole was left in an inconsistent state. There were absolutely no problems with the generic connector, just the database. The application server continuously logged that there was an incomplete transaction but was unable to complete the transaction in the database. During the simulated database crash, the application returned a HeuristicMixedException to indicate to the user that something was not, and would never be, consistent. The error logged by JBoss was:

...WARN [com.arjuna.ats.jta] (Periodic Recovery) ARJUNA016037: Could not find new XAResource to use for recovering non-serializable XAResource XAResourceRecord < resource:null, txid: <...eis_name=unknown eis name >, heuristic: TwoPhaseOutcome.FINISH_OK ...>

I spent time debugging in the MySQL driver code but eventually came across MySQL Bug #12161, which has only just been closed after being open for nearly 10 years! It is clearly a point where the database is not two phase commit compatible, because remember what Wikipedia states: "... Log records, which are typically slow to generate but survive failures, are used by the protocol's recovery procedures." In the case of MySQL 5.6 and previous versions, its log records do not survive failures, as documented in the bug report. Additionally, the JCA 1.6 specification says that knowledge of the transaction branch must not be erased until commit or rollback is called, which, again, MySQL does not adhere to.

Unfortunately the fix is not yet in the 5.6 GA version of MySQL; it is only available in the 5.7 DMR version. But that is due to become a GA release soon, and other databases like Postgres and Oracle do not have this issue. The problem is also clearly described in the JBoss manual (RTFM!), which also provides tips on getting XA recovery working with Oracle, where special access needs to be granted to the appropriate user (something which has caught us out in the past). More information on this problem is located here. Further tests with the H2 database sadly showed that it too does not support recovery of XA transactions.

Copyright ©2015, Ant Kutschera


Global Data Consistency in Distributed (Microservice) Architectures

UPDATE: Now supports Spring and Spring Boot outside of a full Java EE server. See the new article for more details!

I've published a generic JCA resource adapter on Github, available from Maven (ch.maxant:genericconnector-rar) with an Apache 2.0 licence. It lets you bind things like REST and SOAP web services into JTA transactions which are under the control of Java EE application servers. That makes it possible to build systems which guarantee data consistency very easily, with as little boilerplate code as possible. Be sure to read the FAQ.

Imagine the following scenario...

Functional Requirements

  • ... many pages of sleep inducing requirements...
  • FR-4053: When the user clicks the "buy" button, the system should book the tickets, make a confirmed payment transaction with the acquirer and send a letter to the customer including their printed tickets and a receipt.
  • ... many more requirements...

Selected Non-Functional Requirements

  • NFR-08: The tickets must be booked using NBS (the corporation's "Nouvelle Booking System"), an HTTP SOAP Web Service deployed within the intranet.
  • NFR-19: Output Management (printing and sending letters) must be done using COMS, a JSON/REST Service also deployed within the intranet.
  • NFR-22: Payment must be done using our Partner's MMF (Make Money Fast) system, deployed in the internet and connected to using a VPN.
  • NFR-34: The system must write sales order records to its own Oracle database.
  • NFR-45: The data related to a single sales order must be consistent across the system, NBS, COMS and MMF.
  • NFR-84: The system must be implemented using Java EE 6 so that it can be deployed to our clustered application server environment.
  • NFR-99: Due to the MIGRATION'16 project, the system must be built to be portable to a different application server.


NFR-45 is interesting. We need to ensure that the data across multiple systems remains consistent, i.e. even during software/hardware crashes. Yet NFR-08, NFR-19, NFR-22 and NFR-34 make things a little harder. SOAP and REST don't support transactions! - No, that isn't entirely true. We could very easily use something like the Arjuna transaction manager from JBoss which supports WS-AT. See for example this project (or its source on Github) or this JBoss example or indeed this Metro example. There are several problems with those solutions though: NFR-99 (the APIs used are not portable); NFR-19 (REST doesn't support WS-AT, although there is something in the pipeline at JBoss); and the fact that the web services we are integrating might not even support WS-AT. I have integrated many internal and external web services in the past but never come across one which supports WS-AT.

Over the years I have worked on projects which have had similar requirements but which have produced different solutions. I've seen and heard of companies who end up effectively building their own transaction managers, which bind web services into transactions. I've also come across companies who don't worry about consistency and ignore NFR-45. I like the idea of consistency, but I don't like the idea of a business project writing a framework which tracks the state of transactions and manually commits or rolls them back trying to stay synchronised with a Java EE transaction manager. So a few years ago I had an idea of how to fulfil all of those requirements yet avoid such a complex solution that it was akin to building a transaction manager. NFR-84 almost comes to the rescue because Java EE application servers support distributed transactions. I wrote "almost" because what is missing is some form of adapter for binding non-standard resources like web services into such transactions. But the Java EE specifications also contain JSR-112, the JCA specification for building resource adapters, that can be bound into distributed transactions. My idea was to build a generic resource adapter that could be used to bind web services and other things into the transaction under the control of the application server, with as little configuration as necessary and with as simple an API as I could design.

Background to Distributed Transactions

To understand the idea better, let's take a look at distributed transactions and two phase commit which can be used to bind calls made to a database into an XA transaction using SQL. Listing 1 shows a list of statements which are needed to commit data in an XA transaction:

mysql> XA START 'someTxId';

mysql> insert into person values (null, 'ant');

mysql> XA END 'someTxId';

mysql> XA PREPARE 'someTxId';

mysql> XA COMMIT 'someTxId';

mysql> select * from person;
| id  | name                          |
| 771 | ant                           |

Listing 1: An XA transaction in SQL

The branch of a global transaction is started within the database (a resource manager) on line 1. Any arbitrary transaction ID can be used, and typically the global transaction manager inside the application server generates this ID. Line 3 is where the "business code" is executed, i.e. all the statements relating to why we are using the database, namely to insert data and run queries. Once all that business stuff is finished - and between lines 1 and 5 you could be calling other remote resources - the transaction is ended using line 5. Note however that the transaction isn't yet complete; it just moves to a state where the global transaction manager can start to query each resource manager as to whether it should go ahead and commit the transaction. If just one resource manager decides it does not want to commit the data, then the transaction manager will tell all the others to roll back their transactions. If however all of the resource managers report that they are happy to commit the transaction, and they do so via their response to line 7, then the transaction manager will tell all the resource managers to commit their local transaction using a command like that on line 9. After line 9 runs, the data is available to everyone, as the select statement on line 11 demonstrates.

Two phase commit is about consistency in a distributed environment. Rather than just looking at the happy flow, we also need to understand what happens during failure after each of the above commands. If any of the statements up to and including the prepare statement fail, then the global transaction will be rolled back. The resource managers and the transaction manager should all be writing their state to persistent durable logs so that in the event of them being restarted they can continue the process and ensure consistency. Up to and including the prepare statement, the resource managers would rollback the transactions if they failed and were restarted.

If some resource managers report that they are prepared to commit but others report they want to rollback, or indeed others don't answer, then the transaction will be rolled back. It might take time, if resource managers have crashed and become unavailable, but the global transaction manager will ensure that all resource managers rollback.

Once however all resource managers have successfully reported that they want to commit, there is no going back. The transaction manager will attempt to commit the transaction on all resource managers, even if they temporarily become unavailable. The result is that there may temporarily be inconsistencies in the data which other transactions can view, because, say, one resource manager that crashed has not yet committed its branch, even though it has been restarted and is available again; eventually though, the data will become consistent. This is an important point, because I have often heard, and even used to claim, that the two phase commit protocol guarantees ACID consistency. It doesn't - it guarantees eventual consistency - only the local transactions viewed as individuals have ACID properties.

There is one more important step in the two phase commit protocol, namely recovery, which must be implemented for failure cases. When either the transaction manager or a resource manager becomes unavailable, the transaction manager's job is to keep trying until eventually the entire system again becomes consistent. In order to do this it can query the resource manager to find transactions which the resource manager believes to be incomplete. In MySQL the relevant command is shown in listing 2 together with its result, namely that a transaction is incomplete. I ran this command before the commit command in listing 1. After the commit, the result set is empty, since the resource manager cleans up after itself and removes the successful transaction.

mysql> XA RECOVER ;
| formatID | gtrid_length | bqual_length | data     |
|        1 |            8 |            0 | someTxId |

Listing 2: The XA recover command in SQL


The JCA spec includes the ability for the transaction manager to retrieve an XA Resource from the adapter, which represents a resource which understands commands like start, end, prepare, commit, rollback and recover. The challenge is to use that fact to create a resource adapter that can call services which have the ability to be committed and rolled back, in a similar fashion to a remote database engine. If we make a few assumptions we can define a simplified contract which such services need to implement, so that we can bind them into distributed transactions.

Consider for example NFR-22 and MMF, the acquirer system. Typically payment systems let you reserve money and then shortly afterwards book the reservation. The idea is that you call their service to ensure there are funds available and you reserve some money, you then complete all your business transactions on your side, and then definitively book the reserved money once your data is committed (see the FAQ for an alternative). Reserving and then definitively booking should take no more than a few seconds. Reserving and releasing the reservation in the event of a crash should take no more than a few minutes. The same often goes for ticket booking systems in my experience, where a ticket can be booked, and shortly afterwards confirmed, at a time when you are willing to take responsibility for its cost. I will refer to the initial stage as execution and the latter stage as commit. Of course if you cannot complete business on your side, an alternative to stage two, namely rollback, can be run in order to cancel the booking. If you aren't friendly enough to roll back and you just leave the reservation open, the provider should eventually let the reservation time out, so that the reserved "resource" (money or tickets in this example) can be used elsewhere, for example so that the customer can go shopping or the ticket can be bought by someone else. What is the motivation for this timeout? Partly three phase commit, and partly the author's experience that back-end systems like ticket booking systems and payment systems only reserve resources like tickets and money for a limited amount of time.

It is that kind of system that we want to bind into our transaction. The contract for such services looks as follows (a sketch of the contract as a Java interface follows the footnotes below):

  1. The provider should make three operations available: execution, commit and rollback (although commit is actually optional - see footnote 1),
  2. The provider may let non-committed and non-rolled-back executions time out, after which any reserved resources may be used in other transactions,
  3. A successful execution guarantees that the transaction manager is allowed to commit or roll back the reserved resources, as long as no timeout has occurred (see footnote 2),
  4. A call to commit or rollback the reservation can be done multiple times without side effects (think about idempotency here), so that the transaction manager may finish the transaction if an initial attempt failed.

Footnote #1: Sometimes web services offer an execution operation and an operation for cancelling the call, e.g. so that money is indeed not taken from the customer's account, but they don't offer an operation for committing the execution. If we go back to the discussion around listing 2, where I stated that the transactions are eventually consistent rather than immediately consistent, it becomes clear that it doesn't matter if a system in a global transaction definitively books resources during the execution stage rather than waiting until the commit stage. Eventually, either all systems will also commit, or all will roll back and the money transaction will be cancelled, freeing up reserved funds on the customer's account. Note however that a service offering all three operations is cleaner, and if it is possible to influence the system design, it is recommended to ensure the services being integrated offer all three operations: execute, commit and rollback.

Footnote #2: After a successful call to the execute operation, a web service may not refuse to commit or roll back the transaction due to, say, business rules. It may only temporarily fail due to technical problems, in which case the transaction manager may attempt completion again shortly afterwards. It is not acceptable to build business rules into the commit or rollback operations. All validation must be completed during the execution, i.e. before commit or rollback time. The same is true in the database world - during XA transactions the database must check all constraints at the latest during the prepare stage, i.e. definitely before the commit or rollback stage.
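Expressed as a Java interface, the contract above might look roughly like the following sketch. The interface name is made up for illustration; the real services are of course SOAP or REST operations, not Java types.

public interface BookableResource {

    /** execution stage: reserve the resource and return any reference needed later */
    String execute(String txid) throws Exception;

    /** optional (see footnote 1): definitively confirm the reservation; must be idempotent */
    void commit(String txid) throws Exception;

    /** release the reservation; must be idempotent and may only fail for technical reasons (see footnote 2) */
    void rollback(String txid) throws Exception;
}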

Let's compare using a contract like this to using a database. Take the acquirer web service: the money that is reserved during execution is really put to one side and is no longer available to other entities trying to create a credit card transaction. But the money also hasn't been transferred to our account. There are three states: i) the money is in the customer's credit; ii) the money is reserved and may not be used by other transactions; iii) the money is booked and is no longer available to the customer. This is analogous to a database transaction: i) a row has not yet been inserted; ii) the row is inserted, but not currently visible to other transactions (although that depends on the transaction isolation level); iii) finally the row is committed and visible to everyone. Although this is similar, in the acquirer example the reservation made in the web service is immediately visible to the entire world once the execution stage is committed - it does not remain invisible until after the commit stage as is the case with the database. The isolation level is different, but of course the web service can be built to hide such information, for example based on the state, if the requirements need it to be so.

With a contract like this, there are several fundamental differences to the way in which WS-AT and two phase commit are designed. Firstly transactions encapsulated inside a web service are NOT kept open between execution and commit/rollback. Secondly, because the transaction isn't kept open, resources are not locked, as they might be when using databases. And these two differences lead to a third: rolling back a web service call is normally not about undoing what it did, rather about changing the state of what it did so that from a business point of view, resources again become available.

These differences are what give the generic connector the advantage over traditional two phase commit. What is really going on in this connector is that we are piggy-backing onto the distributed transaction, cherry-picking the best parts, namely execution, commit, rollback and recovery. By doing this in a Java EE application server, we get transaction management for free!

A final stage, namely recovery (see listings 1 & 2) is required, but it does not necessarily need to be implemented by the web service because the adapter can handle that part internally - after all it knows about the state of the transactions since it has been making calls to the web service.

So, with the above assumptions, we can build a generic JCA resource adapter which tracks transaction state and calls the commit/rollback operations on web services at the correct time, when the transaction manager tells the XA resource to do things like start a transaction, execute some business code and commit or rollback the transaction.

Applicability to Microservice Architectures

Microservice architectures, or indeed SOA, have one noteworthy problem when compared to monolithic applications, namely that it is hard to keep data consistent in a distributed system. A microservice will typically provide operations for doing work, but should also offer operations to cancel that work. The work doesn't need to be made invisible, but it does need to be cancelled as far as the business is concerned, so that no more resources (money, time, human effort, etc.) are invested in the work. The adapter presented here can be used inside an "application layer", i.e. a service which your clients (mobile, rich web clients, etc.) call. That layer should be deployed in a Java EE application server and make use of the generic connector each time one of the microservices in your landscape is called. That way, if something fails, all the microservice calls can be "rolled back" by the transaction manager. The point of the application layer is to control the global transaction so that anything which needs to be done consistently can be monitored and coordinated by the transaction manager, rather than say calling each microservice directly from the client and then having to write code which cleans up and restores consistency.

Using the Adapter

The first requirement I gave myself was to build an API which allows you to add business calls to a web service, inside an existing transaction. Listing 3 shows an example of how to bind the web service call into a transaction using Java 8 lambdas (even though the API is compatible with Java 1.6 - see Github for an example).

@Stateless
public class SomeServiceThatBindsResourcesIntoTransaction {

  @Resource(lookup = "java:/maxant/BookingSystem")
  private TransactionAssistanceFactory bookingFactory;

  public String doSomethingInvolvingSeveralResources(String refNumber) {

    BookingSystem bookingSystem = new BookingSystemWebServiceService()
                                        .getBookingSystemPort();

    try ( ...
      TransactionAssistant bookingTransactionAssistant = 
                                bookingFactory.getTransactionAssistant();
    ... ) {

      //NFR-34 write sales data to Oracle using JDBC and XA-Driver

      //NFR-08 book tickets
      String bookingResponse = 
          bookingTransactionAssistant.executeInActiveTransaction(txid -> {

        return bookingSystem.reserveTickets(txid, refNumber);
      });

      //NFR-19 + NFR-22: call COMS and MMF with their own transaction assistants, build the response... (elided)

      return response;
    } catch (...) {...}
  }
}
Listing 3: Binding a web service call into a transaction

Line 1 designates the class as an EJB which by default uses container managed transactions and requires a transaction to be present on each method call, starting one if none exists. Lines 4-5 ensure that an instance of the relevant class of the resource adapter is injected into the service. Line 9 creates a new instance of a web service client. This client code was generated using wsimport and the WSDL service definition. Lines 13-14 create the "transaction assistant" which the resource adapter makes available. The assistant is then used on line 21 to run line 23 within the transaction. Under the hood, this sets up the XA resource which the transaction manager uses to commit or rollback the connection. Line 23 returns a String which sets the String on line 20 synchronously.

Compare this code to writing to a database: lines 4 and 5 are like injecting a DataSource or EntityManager; lines 9 and 13-14 are similar to opening a connection to the database; finally lines 21-23 are like making a call to execute some SQL.

Line 23 doesn't do any error handling. If the web service throws an exception it leads to the transaction being rolled back. If you decide to catch such an exception you need to remember to either throw another exception such that the container rolls back the transaction, or you need to set the transaction to roll back by calling setRollbackOnly() on the session context (the demo code on Github shows an example where it catches an SQLException).
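A minimal sketch of that second option follows; the class and method names are made up, and the transaction assistant and web service client are assumed to have been obtained as in Listing 3.

@Stateless
public class SomeServiceWithErrorHandling {

    @Resource
    private SessionContext ctx;

    //bookingTransactionAssistant and bookingSystem obtained as shown in Listing 3...

    public String reserveOrFail(TransactionAssistant bookingTransactionAssistant,
                                BookingSystem bookingSystem, String refNumber) {
        try {
            return bookingTransactionAssistant.executeInActiveTransaction(
                    txid -> bookingSystem.reserveTickets(txid, refNumber));
        } catch (Exception e) {
            ctx.setRollbackOnly();   //ensure the container managed transaction is rolled back
            return "booking-failed"; //map the failure to a business response instead of propagating it
        }
    }
}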

So, the overhead of binding a web service call into a transaction is very small and similar to executing some SQL on a database. Importantly the commit or rollback is not visible in the application code above. However we do still need to show the application server how to commit and rollback. This is done just once per web service, as shown in listing 4.

@Startup
@Singleton
public class TransactionAssistanceSetup {

  @Resource(lookup = "java:/maxant/BookingSystem")
  private TransactionAssistanceFactory bookingFactory;

  @PostConstruct
  public void init() {
    bookingFactory
      .registerCommitRollbackRecovery(new Builder()
      .withCommit( txid -> {
        new BookingSystemWebServiceService()
            .getBookingSystemPort().bookTickets(txid);
      })
      .withRollback( txid -> {
        new BookingSystemWebServiceService()
            .getBookingSystemPort().cancelTickets(txid);
      })
      .build());
  }

  @PreDestroy
  public void shutdown(){
    bookingFactory
      .unregisterCommitRollbackRecovery();
  }
}
Listing 4: One time registration of callbacks to handle commit and rollback

Here, lines 1-2 tell the application server to create a singleton and to do it as soon as the application starts. This is important so that if the resource adapter needs to recover potentially incomplete transactions, it can do so as soon as it is ready. Lines 5-6 are like those in listing 3. Line 11 is where we register a callback with the resource adapter, so that it gains knowledge of how to commit and rollback transactions in the web service. I have used Java 8 lambdas here also, but if you are using Java 6/7 you can use an anonymous inner class instead of the new builder on line 11. Lines 13-14 simply call the web service to book the tickets which were previously reserved, on line 23 of listing 3. Lines 17-18 cancel the reserved tickets, should the transaction manager decide to rollback the global transaction. Very importantly, line 26 unregisters the callback for the adapter instance when the application is shutdown. This is necessary because the adapter only allows you to register one callback per JNDI name (web service) and if the application were restarted without unregistering the callback, line 11 would fail with an exception the second time that the callback is registered.

As you can see, binding a web service, or indeed anything which does not naturally support transactions, into a JTA global transaction is very easy using the generic adapter that I have created. The only thing left is to configure the adapter so that it can be deployed together with your application.

Adapter Configuration

The adapter needs to be configured once per web service which it should bind into transactions. To make that a little clearer, consider the code in listing 4 for registering callbacks for commit and rollback. Only one callback can be registered per adapter instance i.e. JNDI name. Configuring the adapter is application server specific, but only because of where you put the following XML. In Jboss EAP 6 / Wildfly 8 upwards, it is put into <jboss-install-folder>/standalone/configuration/standalone.xml, between the XML tags similar to <subsystem xmlns="urn:jboss:domain:resource-adapters:...>

  <resource-adapter id="GenericConnector.rar">
    <archive>yourapp.ear#genericconnector-rar-<version>.rar</archive>
    <transaction-support>XATransaction</transaction-support>
    <connection-definitions>
      <connection-definition class-name="ch.maxant.generic_jca_adapter.ManagedTransactionAssistanceFactory" jndi-name="java:/maxant/BookingSystem" pool-name="BookingSystemPool">
        <config-property name="id">BookingSystem</config-property>
        <config-property name="handleRecoveryInternally">true</config-property>
        <config-property name="recoveryStatePersistenceDirectory">../standalone/data/booking-tx-store</config-property>
        <recovery no-recovery="false"/>
      </connection-definition>
      <!-- ...one connection-definition per registered commit/rollback callback... -->
    </connection-definitions>
  </resource-adapter>

Listing 5: Configuring the generic resource adapter

Listing 5 starts with the definition of a resource adapter. The archive is defined in the <archive> element - note the hash symbol between the EAR file name and the RAR file name. Note that you may also need to include the Maven version number in the RAR file name; it depends upon the physical file in your EAR, and application servers other than JBoss may use different conventions. The <transaction-support> element tells the application server to use the XAResource from the adapter so that it is bound into XA transactions. The <connection-definition> element then needs to be repeated for each web service which you want to integrate. Its class-name attribute names the factory which the resource adapter provides, and this value should always be ch.maxant.generic_jca_adapter.ManagedTransactionAssistanceFactory. The jndi-name attribute defines the JNDI name used to look up the resource in your EJB, and the pool-name attribute names the pool used for the connection definition; it is recommended to use a unique pool name per connection definition. The id config property is the ID of the connection definition; you must use a unique value per connection definition. The handleRecoveryInternally config property tells the resource adapter to track transaction state internally so that it can handle recovery without help from the web service which is being integrated. Its default value is false, in which case you must register a recovery callback in listing 4 - see listing 6 below. The recoveryStatePersistenceDirectory config property is required if the resource adapter is configured to handle recovery internally - you must provide the path to a directory where it should write the transaction state which it needs to track. It is recommended to use a directory on the local machine where the application server is running, rather than one located on the network. The <recovery> element and its contents are required for JBoss so that it really does use the XAResource and bind calls into the global transaction. It is possible that other application servers only require the <transaction-support> element - deployment to other application servers has not yet been fully tested (more details...).

Recovery

Until now I haven't said much about recovery. Indeed, the handleRecoveryInternally property in the XML in listing 5 means that the application developer doesn't really need to think about recovery. Yet if we return to listing 2, recovery is a clear part of the two phase commit protocol. Indeed Wikipedia states that "To accommodate recovery from failure (automatic in most cases) the protocol's participants use logging of the protocol's states. Log records, which are typically slow to generate but survive failures, are used by the protocol's recovery procedures." The participants are the resource managers, e.g. the database or web service, or perhaps the resource adapter if you wish to interpret it so. So that the resource adapter is more flexible, but also in case you are not allowed to let the adapter write to the file system (operations management departments in big corporations tend to be strict like this), it is also possible to provide the resource adapter with a callback, so that it can ask the web service for an array of transaction IDs which the web service believes are in an incomplete state. Note that if the adapter is configured as above, then it tracks the state of the calls to the web service itself: the information that the web service's commit or rollback method was called is saved to disk after a successful response is received. If the application server crashes before the information can be written, it isn't so tragic, since the adapter will tell the transaction manager that the transaction is incomplete, and the transaction manager will attempt to commit/rollback using the web service once again. Since the web service contract defined above requires that the commit and rollback methods may be called multiple times without causing problems, it is absolutely no problem when the transaction manager then attempts to re-commit or re-rollback the transaction. That leads me to state that the only reason you would want to register a recovery callback is that you are not allowed to let the resource adapter write to disk. I have to be honest, though, that I do not fully understand why XA requires the resource manager to provide a list of potentially incomplete transactions, when surely the transaction manager is able to track this state itself.
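To make that contract a little more concrete, the following is a minimal sketch of the state a web service behind the booking system might track, assuming for simplicity that it keeps the state in memory. All names here are illustrative and this is not the demo's actual code; the important points are only that commit and rollback tolerate being called more than once, and that incomplete transactions can be listed for recovery.

import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

public class BookingResource { //illustrative, not the demo's implementation

  enum State { EXECUTED, COMMITTED, ROLLED_BACK }

  //in a real service this would be a database table, not an in-memory map
  private final ConcurrentHashMap<String, State> transactions = new ConcurrentHashMap<>();

  public void reserveTickets(String txid) {
    transactions.put(txid, State.EXECUTED);
    // ... reserve the tickets ...
  }

  public void bookTickets(String txid) {
    //idempotent: calling commit twice has the same effect as calling it once
    transactions.put(txid, State.COMMITTED);
    // ... turn the reservation into a booking ...
  }

  public void cancelTickets(String txid) {
    //idempotent, and must also tolerate a rollback for an unknown txid
    transactions.put(txid, State.ROLLED_BACK);
    // ... release the reservation, if there is one ...
  }

  public List<String> findUnfinishedTransactions() {
    //everything executed but neither committed nor rolled back is a recovery candidate
    return transactions.entrySet().stream()
        .filter(e -> e.getValue() == State.EXECUTED)
        .map(e -> e.getKey())
        .collect(Collectors.toList());
  }
}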

Setting up recovery so that the adapter uses the web service to query transactions which it thinks are incomplete involves, first, setting the handleRecoveryInternally config property in the deployment descriptor to false (after which you do not need to supply the recoveryStatePersistenceDirectory property) and, second, adding a recovery callback, as shown in listing 6.

@Startup
@Singleton
public class TransactionAssistanceSetup {

  @Resource(lookup = "java:/maxant/Acquirer")
  private TransactionAssistanceFactory acquirerFactory;

  @PostConstruct
  public void init() {
    acquirerFactory
      .registerCommitRollbackRecovery(new Builder()
        .withCommit( txid -> {
          //commit the acquirer, analogous to listing 4
        })
        .withRollback( txid -> {
          //rollback the acquirer, analogous to listing 4
        })
        .withRecovery( () -> {
          try {
            //ask the web service which transactions it believes are incomplete
            List<String> txids = new AcquirerWebServiceService().getAcquirerPort().findUnfinishedTransactions();
            txids = txids == null ? new ArrayList<>() : txids;
            return txids.toArray(new String[0]);
          } catch (Exception e) {
            log.log(Level.WARNING, "Failed to find transactions requiring recovery for acquirer!", e);
            return null;
          }
        })
        .build());
  }
  //shutdown() unregisters the callback, exactly as in listing 4
}

Listing 6: Defining a recovery callback

Registering a recovery callback is done next to the registration of the commit and rollback callbacks that were set up in listing 4. The withRecovery(...) block in listing 6 adds the recovery callback. Here, we simply create a web service client and call the web service to get the list of transaction IDs which should be completed. Any errors are simply logged, as the transaction manager will soon come by and ask again. A more robust solution might choose to inform an administrator if there is an error here, firstly because errors shouldn't occur here, and secondly because the transaction manager calls this callback from a background task, where no user will ever be shown an error. Of course, if the web service is currently unavailable, the transaction manager will receive no transaction IDs, but the hope is that the next time it tries (roughly every two minutes in JBoss WildFly 8), the web service will again be available.

Testing

To test the adapter, a demo application based on the scenario described at the start of this article was built (also available on Github) which calls three web services and writes to the database twice, all during the same transaction. The acquirer supports execution, commit, rollback and recovery; the booking system supports execution, commit and rollback; the letter writer only supports execution and rollback. In the test, the process first writes to the database, then calls the acquirer, then the booking system, then the letter writer and finally it updates the database. This way, failure at several points in the process can be tested. The adapter was tested using the following test cases:

  • Positive test case - here everything is allowed to pass. Afterwards the logs, database and web services are checked to ensure that indeed everything is committed.
  • Failure at the end of the process due to a database foreign key constraint violation - here the web services have all executed their business logic, and the test ensures that after the database failure, the transaction manager rolls back the web service calls.
  • Failure during execution of the acquirer web service - here the failure occurs after an initial database insert, to check that the insert is rolled back.
  • Failure during execution of the booking web service - here the failure occurs after an initial database insert and the web service call to the acquirer, to check that both are rolled back.
  • Failure during execution of the letter writer web service - here the failure occurs after an initial database insert and two web service calls, to check that all three are rolled back.
  • During commit, the web services are shut down - by setting breakpoints in the commit callbacks, we can undeploy the web services and then let the process continue. The initial commit on the web services fails, but the database commits fine and its data is available. Once the web services are redeployed and up and running, the transaction manager again attempts to carry out the commit, which should then be successful.
  • During commit, the database is shut down - also using breakpoints, the database is shut down just before the commit. The commit works on the web services but fails on the database. Upon restarting the database, the next time the transaction manager runs a recovery it should commit the database.
  • Kill the application server during prepare, before commit - here we check that nothing is ever committed.
  • Kill the application server during commit - here we check that after the server restarts and recovery runs, consistency across all systems is restored, i.e. that everything is committed.


The demo application and resource adapter log everything they do, so the first port of call is to read the logs during each test. Additionally, the database writes to disk, so we can use a database client to query the database state, for example: select * from person p inner join address a on a.person_FK = p.id;. The acquirer writes to the folder ~/temp/xa-transactions-state-acquirer/. There, a file named exec*txt exists if the transaction is incomplete; otherwise it is named commit*txt or rollback*txt if it was committed or rolled back, respectively. The booking system writes to the folder <jboss-install>/standalone/data/bookingsystem-tx-object-store/. The letter writer writes to the folder <jboss-install>/standalone/data/letterwriter-tx-object-store/. The adapter removes the temporary file named exec*txt once the transaction is committed or rolled back, so the only way to verify completion is to read the adapter logs, although checking that the files are removed makes sense, albeit it doesn't tell you whether there was a commit or a rollback.

The results were all positive and as expected, although an ancient bug in MySQL provided a nice little challenge to overcome, which I will write about in a separate article. If you have difficulty with your database, take a look at the JBoss manual, which provides tips on getting XA recovery working with different databases.

FAQ

  • The service I am integrating only offers an operation to execute and an operation to cancel. There is no commit operation. No worries - this is acceptable; see the discussion above of the contract that web services should fulfil. Basically, call the execute operation during normal business processing and the cancel operation only if there is a rollback. During the commit stage, don't do anything, since the data was already committed during the call to the execute operation.
  • What happens if a web service takes a long time to come back online, after a business operation is executed but before the commit/rollback operation has been called? Transactions that require recovery may end up in trouble if the web service takes a long time to come back online, because it is recommended that the systems behind the web service implement a timeout, after which they clean up reserved but not booked (committed) resources. Take the example where a seat is reserved in a theatre during the execution but the final booking of the seat is delayed due to a system failure. It is entirely possible that the seat will be released after, say, half an hour, so that it can be sold to other potential customers. If the seat is released and some time later the application server which reserved it attempts to book the seat, there could be an inconsistency in the system as a whole: the other participants in the global transaction could be committed, indicating that the seat was sold and that, for example, money was taken for it, yet the seat has been sold to another customer. This case can occur in normal two phase commit processes too. Imagine a database transaction that creates a foreign key reference to a record, but that record is deleted in a different transaction. Normally the solution is to lock resources, which the reservation of the seat is actually doing. But indefinite locking of resources can cause problems like deadlocks. This problem is not unique to the solution presented here.
  • Why don't you recommend WS-AT? Mostly because the world is full of services which don't offer WS-AT support. And the adapter I have written here is generic enough that you could be integrating non-web service resources. But also because of the locking and temporal issues which can occur, related to keeping the transaction open between the execution and commit stages.
  • Why not just create an implementation of XAResource and enlist it into the transaction using the enlistResource method? Because doing so doesn't handle recovery. The generic connector presented here also handles recovery when either the resource or the application server crashes during commit/rollback.
  • This is crazy - I don't want to implement commit and rollback operations on my web services! WS-AT is for you! Or an inconsistent landscape...
  • I'm in a microservice landscape - can you help me? Yes! Rather than letting your client call multiple microservices and then having to worry about global data consistency itself, say in the case where one service call fails, make the client call an "application layer", i.e. a service which is running in a Java EE application server. That service should make calls to the back end using the generic connector, and that way the complex logic required to guarantee global data consistency is handled by the transaction manager, rather than by code which you would otherwise have to write.
  • The system I am integrating requires me to call its commit and rollback methods with more than just the transaction ID. You need to persist the contextual data that you use during the execution stage, keyed by the transaction ID, so that you can look that data up again during commit, rollback or recovery. Persist the data using an inner transaction (@RequiresNew) so that the data is definitely persisted before commit/rollback/recovery commences - this way it is failure resistant (a minimal sketch follows this list).
  • The system I am integrating dictates a session ID and does not take a transaction ID. See the previous answer - map the transaction ID to the session ID of the system you are integrating. Ensure that you do it in a persistent manner so that your application can survive crashes.
  • The payment system I am integrating executes the payment on their own website, but the "commit" occurs over an HTTP call. Can I integrate this? Yes! Redirect to their site to do the payment; when they call back to your site, run your business logic in a transaction and, using the transaction assistant, execute a no-op method in the execution stage, which will cause the commit callback to be called at commit time; in the commit callback, make the HTTP call to the payment system to confirm the payment.
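Regarding the two questions above about contextual data and session IDs, here is a minimal sketch of the idea: persist a mapping from the transaction ID to whatever context the remote system needs, in a transaction of its own, so that the commit, rollback and recovery callbacks can look it up later. All class, method and entity names here are illustrative assumptions and are not part of the generic connector or the demo.

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

@Entity
class RemoteSystemContext { //illustrative entity mapping txid -> remote session ID
  @Id String txid;
  String sessionId;
  RemoteSystemContext() { }
  RemoteSystemContext(String txid, String sessionId) { this.txid = txid; this.sessionId = sessionId; }
}

@Stateless
public class TransactionContextStore { //illustrative helper, not part of the connector

  @PersistenceContext
  private EntityManager em;

  //REQUIRES_NEW: committed immediately in its own transaction, so the mapping survives
  //even if the surrounding global transaction is later rolled back
  @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
  public void store(String txid, String remoteSessionId) {
    em.persist(new RemoteSystemContext(txid, remoteSessionId));
  }

  //called from the commit/rollback/recovery callbacks to find the remote session ID again
  public String findRemoteSessionId(String txid) {
    RemoteSystemContext ctx = em.find(RemoteSystemContext.class, txid);
    return ctx == null ? null : ctx.sessionId;
  }
}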

Using the generic adapter in your project

To use the adapter in your application you need to do the following things:

  • Create a dependency on the ch.maxant:genericconnector-api Maven module,
  • Write code as shown in listing 3 to execute business operations on the web services that your application integrates,
  • Set up commit and rollback callbacks as shown in listing 4, and optionally a recovery callback as shown in listing 6,
  • Configure the resource adapter as shown in listing 5, and
  • Deploy the resource adapter in an EAR by adding a dependency to the Maven module ch.maxant:genericconnector-rar and referencing it as a connector module in the application.xml deployment descriptor (a sketch follows this list).
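By way of illustration, the last two points might look roughly like the following in the EAR's pom.xml and application.xml. The version and the RAR file name are placeholders which must match your build, so treat this as a sketch rather than a definitive configuration.

<!-- pom.xml of the EAR: the version is a placeholder for the release you use -->
<dependency>
    <groupId>ch.maxant</groupId>
    <artifactId>genericconnector-api</artifactId>
    <version><!-- current release --></version>
</dependency>
<dependency>
    <groupId>ch.maxant</groupId>
    <artifactId>genericconnector-rar</artifactId>
    <version><!-- current release --></version>
    <type>rar</type>
</dependency>

<!-- application.xml: the RAR file name is illustrative and must match the packaged file -->
<module>
    <connector>genericconnector-rar.rar</connector>
</module>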

For more information, see the demo application.

Conclusions

The idea that I had, namely to bind web service calls into JTA transactions using a generic JCA resource adapter, does work. It eliminates the need to build your own transaction management logic, and it ensures that there is consistency across the entire landscape, regardless of whether a transaction is committed or rolled back by the application code running in the Java EE application server.

Further Reading

A plain english introduction to CAP Theorem
Eventual consistency and the trade-offs required by distributed development
The hidden costs of microservices
Microservice Trade-Offs
Starbucks Does Not Use Two-Phase Commit

Copyright ©2015, Ant Kutschera, with thanks to Claude Gex for his review.
