Wednesday, May 9, 2012

[Error ORABPEL-10902]: compilation failed, XML parsing failed because "".

Almost everybody working with BPEL will at some point run into this error:

Error:
[Error ORABPEL-10902]: compilation failed
[Description]: in "C:\projects\vijfhuizen.com\trunk\src\soa\bpel\HelloWorld\bpel\
HelloWorld",
XML parsing failed because "".
[Potential fix]: n/a.


The JDeveloper error message tells you very little, especially the last line, "Potential fix: n/a". However, it is possible to get more in-depth log information. This is done via the build.xml file that JDeveloper creates for your BPEL process.

Do the following steps:

Change the 'verbose' setting to true in the build.properties file.

# Set verbose to true if you want to
# see verbose output from deployment tasks
verbose = true


Right-click on build.xml and run the ant task 'Compile'.
The detailed result is now shown in the log.

########

Writing a 'Singleton' BPEL Process

A typical use case in BPEL is connecting to an external resource or system which only supports a single connection at a time. In such a case we need to protect against parallel BPEL instances submitting concurrent requests to that resource, which implies that we need some way to serialize requests for that resource (probably on a first in, first out basis). A common design pattern for achieving this is the Singleton, first documented in the Gang of Four's patterns book (where they use a print spooler as an example).

Now BPEL doesn't explicitly support the notion of a Singleton; however, it does allow you to simulate one, which for the purpose of what we are trying to achieve is good enough. The basic idea is to have an explicit initiateSingleton operation which is used to instantiate a single process instance, and then a separate invokeSingleton operation which is used to submit each actual request. The invoke operation is placed in a loop, so as soon as it's processed one request it loops back round for the next one.

For this example, I've just created a simple process which keeps track of the number of times it's been called. It takes a single parameter, name, and returns a string of the following format: "Hello <name>. You are caller number <count> of this singleton." Pretty simple, but it illustrates the point. The basic process is shown in the screenshot below.

The first activity in the process is a receive activity, InitiateSingleton, which is used to instantiate the process. We then have a simple assign which just sets the count to zero. The remainder of the process, which handles the invokeSingleton operation, is contained within the ProcessRequest scope, which itself is placed inside a while loop that loops forever. This contains the following activities:
  • InvokeSingleton – This receives the next request to be processed by the singleton.
  • Wait – This waits for a period of 5 seconds. It is just to simulate going out to an external system; it also enables us to see how multiple requests for the singleton are queued up by the Oracle BPEL PM server.
  • SetResult – Here we just set the result and increment the count by one.
  • CallbackClient – Finally we send the result back to the client, before looping back round to process the next request.

Message Correlation

At first glance this all looks deceptively simple; the complexity comes when you try to call the invokeSingleton operation. The reason is that when we invoke an operation on a BPEL process, the message is received by the Oracle BPEL PM message handler; depending on the operation, it will either create a new process instance to handle it (as with the initiateSingleton operation) or route it through to the appropriate instance of an already running process. This is where it gets a bit convoluted: when we receive the message for the invokeSingleton operation, even though we only have a single instance of the BPEL process running, how does BPEL PM know that it is the correct instance to route the message to? The answer is: it doesn't.

Under "normal" circumstances, where we have an asynchronous interaction between two processes, Oracle BPEL PM defaults to using WS-Addressing to ensure that messages are successfully routed between the two process instances. In these cases, it's typical that the interaction between the two processes (say A and B) began by process A initiating process B and passing it details (using WS-Addressing) about itself, so that when process B sends back a response it contains sufficient information for it to be routed to the appropriate instance of process A. However, with our Singleton process, any process calling the invokeSingleton operation will have no prior knowledge of that process, so for our purposes we can't use WS-Addressing to correlate the message. Fortunately, for situations where WS-Addressing isn't appropriate or available, BPEL provides the concept of correlation sets.

Essentially, correlation sets allow you to use a unique value present in the message body of all exchanged messages (e.g. orderId) to link that exchange of related messages together. You do this in BPEL by first defining a property which corresponds to the unique value, and then defining a property alias for each message in the exchange, which defines where that property is located within the message. Next you define a correlation set, which can consist of one or more properties (see the product documentation or the Oracle SOA Suite Developer's Guide for more details on how to create correlation sets).

For our purpose, I've defined a simple element called singletonId which is contained in both the initiateSingleton and invokeSingleton operations. Next we have defined a correlation set, SingletonCS, which is used in the InitiateSingleton and InvokeSingleton operations. On the initiateSingleton operation I've defined this to initiate the correlation set (see screenshot below); this means whatever value is contained in the singletonId of the start operation must be present in the invokeSingleton operation for it to be routed through to this process instance.

The next part of the problem is to return the response back to the appropriate caller of the process. This at first may seem strange, but recall we may have several processes calling the singleton at the same time, and thus all waiting for a response; we need to ensure that each response is returned to the appropriate instance. You've probably already guessed that we need to use correlation sets again (as we have disabled WS-Addressing for this partner link). This time the calling process will need to generate a unique key that it passes in the request (requestId) to the singleton; the singleton will then return this in the response to the requester. If we look at the SingletonAccessor process, we use the XPath function generateGUID to generate this value.
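The listings for these definitions did not survive in the post, but they might look something like the following hedged sketch in BPEL 1.1 syntax (the namespaces, message names and variable names are invented for illustration):

```xml
<!-- WSDL: declare the correlation property and an alias locating it
     within each message exchanged with the singleton -->
<bpws:property name="singletonId" type="xsd:string"/>
<bpws:propertyAlias propertyName="tns:singletonId"
                    messageType="tns:InitiateSingletonMessage" part="payload"
                    query="/tns:initiateSingleton/tns:singletonId"/>
<bpws:propertyAlias propertyName="tns:singletonId"
                    messageType="tns:InvokeSingletonMessage" part="payload"
                    query="/tns:invokeSingleton/tns:singletonId"/>

<!-- BPEL: group the property into a correlation set, and initiate the
     set on the first receive so later invokeSingleton messages with the
     same singletonId are routed to this instance -->
<correlationSets>
  <correlationSet name="SingletonCS" properties="tns:singletonId"/>
</correlationSets>

<receive name="InitiateSingleton" partnerLink="client"
         operation="initiateSingleton" variable="initiateMsg"
         createInstance="yes">
  <correlations>
    <correlation set="SingletonCS" initiate="yes"/>
  </correlations>
</receive>
```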
Specifying the Reply To Address

So now everything looks fine and we can go ahead and run the process; well, not quite! If you do, you will notice that the request from the SingletonAccessor successfully reaches our Singleton process, but for some reason the response from the Singleton never reaches the SingletonAccessor. In fact, if you take a closer look at the Singleton audit trail for the callback activity, you will see that it actually skipped the callback!

It turns out that this is quite rational behaviour, for the simple reason that the Singleton doesn't actually know where to send its callback. This is because at design time the caller of an asynchronous process typically isn't known, and thus the callback address needs to be specified at run time. In fact, if you log into the BPEL Console and examine the WSDL for the callback operation, you will see that the soap:address location for the endpoint is defined as:

http://set.by.caller

By default Oracle BPEL PM uses WS-Addressing to handle this: when I invoke an asynchronous process, part of the SOAP payload contains a return address for the asynchronous callback. However, we've just disabled WS-Addressing for this message exchange as we are using correlation sets. So how do we provide the reply-to address? One way would be to pass this in as part of the request message and then use dynamic partner links to set the replyToAddress. However, a simpler way is to define the replyToAddress property on the partner link to point to the reply address. The address ends with:

/partnerLinkTypeName/[roleName]

So in our case:

http://server:port/orabpel/default/SingletonAccessor/1.0/Singleton/SingletonRequester

Now if we deploy this it should finally work. To run the example, first go into the BPEL Console and initiate the Singleton process (set Singleton Id to "Singleton/1.0"). Then, from the console, initiate the SingletonAccessor, entering whatever value you like for the name. You could kick off a number of these to see what happens.

Loose Ends

So that all works; however, there are two basic problems with this approach. Firstly, to invoke the Singleton you need to know the value of the SingletonId; in our case this is fixed within the SingletonAccessor process. Secondly, the Singleton process will only ever return a result to the SingletonAccessor process (as this is fixed by specifying the replyToAddress). What if we want to call it from another process or web service client?

Actually the solution to this is very straightforward: you use the SingletonAccessor as the entry point for accessing the Singleton, so the real client will always call the SingletonAccessor. Thus the client never needs to know the SingletonId; and secondly, as WS-Addressing remains turned on between the "real" client and the SingletonAccessor, the client doesn't need to worry about correlation sets or specifying a replyToAddress.

One final question you may have is: how do I prevent multiple instances of the Singleton process being kicked off by mistake? Well, why not try it? You will see that as long as you specify the same value for the SingletonId, BPEL PM will throw a "Conflicting Receive" fault, as an equivalent receive activity has already been enabled by the original Singleton.

Conclusion

As we can see, implementing a Singleton within BPEL is relatively straightforward. Whilst this is not a totally clean solution, it does achieve the desired effect.

########

Top 5 Insights for Maximizing Returns with SOA

Oracle are hosting a live webcast on some of the key insights gained by customers who have successfully implemented SOA based solutions within their organization.

They will be covering areas such as building a business case for SOA, strategies for adopting SOA, and critical success factors, as well as some of the efficiency drivers and cost savings achieved through the deployment of SOA-based solutions.

Participating executives include:

  • Job Simon, Senior Director, NetApp
  • Dan Goerdt, Director, Schneider National Inc.
  • Jennifer Briscoe, CTO and VP, Collect America
This is an excellent opportunity to gain some valuable insights into how you can leverage SOA within your own organization.

########

Downloading the SOA Design Time for JDeveloper 11G

Oracle Fusion Middleware 11G Release 1 was officially launched by Oracle on 1st July 2009; included in this release are the 11G versions of WebLogic, SOA Suite, WebCenter and Identity Management. You can download all of the required components from http://www.oracle.com/technology/software/products/middleware/index.html.

Installing the Oracle SOA Suite is pretty straightforward; however, the one gotcha is that the JDeveloper 11G install does NOT include the SOA Design Time. This is not really an issue, as JDeveloper automatically checks for updates (including the SOA Design Time) and will prompt you to download and install them (alternatively select Help->Check for Updates).

However, the SOA Design Time is 200+ MB in size, so it is not a trivial download, particularly if you are installing JDeveloper in multiple environments. Ideally you would be able to download a local copy of the SOA Design Time extension which could then be installed in each environment where it is required. Fortunately you can do this by going to the Oracle JDeveloper product update center shown below. From here, click on the download link for the Oracle SOA Composite Editor (circled below).



To install the design time, within JDeveloper select Help->Check for Updates. Click Next and Select Install from Local File and browse to where you saved soa-jdev-extension.zip. Click Open, click Next and then Finish. JDeveloper will then install the extension; finally restart JDeveloper when prompted.

Saturday, 13 June 2009

Arch2Arch Podcasts

The Oracle Technology Network Arch2Arch Podcast brings together architects and other experts from across the Oracle community and beyond to discuss the issues, tools, and technologies that are a daily part of the software architect's ever-changing world. I've always enjoyed these podcasts and find them an extremely useful and convenient source of information.

Their latest podcast is part one of a two-part interview by Bob Rhubart (the host of Arch2Arch) with Antony Reynolds and myself about our book, the Oracle SOA Suite Developer's Guide. The interview is truly international, with Bob carrying out the interview from California, Antony in the UK, and myself from a hotel room in Sydney, Australia (at about 8am).

It's quite strange hearing yourself speak, but what struck me is how English Antony sounded! Have a listen and see what you think (and whether I've picked up an Aussie twang), and whilst you're there check out some of the other excellent podcasts as well.

########

SOA Down Under

Well it's been a while since I posted my last blog, and since then a lot has happened!!

On a personal note, I have moved half way round the world, re-locating from the UK to Melbourne, Australia. As you can imagine the process of tearing down your life on one side of the world and then rebuilding it on the other side is quite a time consuming but strangely cathartic activity (as well as leaving little time for blogging).

I've now been in Melbourne for 4 months and am absolutely loving it. I'm still with Oracle, but now working in Product Management for Oracle Fusion Middleware.

From a middleware perspective, Oracle has announced the acquisition of BEA as well as released a technical preview of Oracle SOA Suite 11g, the next generation of the Oracle SOA Suite. No doubt I will be writing more about these in future blogs.

Well now that the dust has started to settle (in terms of my move to Australia), I plan to get back to blogging on a more regular basis, so if you have any requests then by all means let me know. In the meantime I'm currently writing one on how to call EJBs via WSIF from the ESB, so look out for that one soon!!

########

Invoking EJBs from Oracle ESB using WSIF

A common requirement with the Oracle ESB is to use it to invoke one or more EJBs; one way to do this is to create a standard SOAP service from an existing EJB using the wizards in JDeveloper.

Whilst this is straight forward, it has the disadvantage that you introduce the overhead of using SOAP/HTTP as well as losing the ability to invoke the EJB as part of a wider transaction.

An alternative to SOAP is to use WSIF (the Web Services Invocation Framework), which allows us to expose the EJB through a standard WSDL interface but bind to it at run time using native bindings (i.e. RMI) as opposed to SOAP. This gives us significantly improved performance and enables us to include the invocation of the EJB within a distributed transaction.

This document covers how to invoke an EJB 3.0 from the ESB via WSIF. For the purpose of this document, we will create a simple EJB (Greeting) with a single method, helloWorld. However, to make this example a bit more realistic, we will pass in a complex type, Person, which has the properties title, firstName and lastName.

Note: For the sake of this example, we have assumed you have created an application with JDeveloper containing a single project called Greeting.

Creating the WSIF WSDL File
Before you can invoke an EJB via WSIF you need to create a WSDL file which contains the appropriate binding information required by the WSIF framework to invoke the EJB.

There are two basic approaches you can take:
  • Contract First – With this approach we start by defining an abstract WSDL document which defines the service and its corresponding operations, and then write an EJB which implements this contract. This EJB will typically act as a wrapper to one or more existing EJBs.
  • EJB First – Here we start with one or more EJBs and create a WSDL interface based on the EJB and its operations, and then use the ESB or BPEL to assemble these operations into a meaningful service.
For the sake of this article we are going to take the contract first approach as this is generally accepted as best practice. In order to do this we will need to carry out the following steps:
  • Define the abstract WSDL for the required service
  • Implement and deploy the corresponding EJB
  • Add the WSIF bindings for the EJB to our abstract WSDL.
  • Implement and Deploy an ESB project to invoke the EJB
We cover each of these steps in detail below.

Define Abstract WSDL File
The first step is to define a simple abstract WSDL file to describe your service; for our purpose we have defined the following:
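The WSDL listing was lost from the original post; the following is a minimal hedged sketch of what it might look like. The firm details from the text are the Greeting service with its helloWorld operation and the schema imported from a separate file; the namespaces, message names and part names are invented for illustration:

```xml
<definitions name="Greeting"
             targetNamespace="http://bpelpeople.com/Greeting"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:tns="http://bpelpeople.com/Greeting"
             xmlns:per="http://bpelpeople.com/Person">
  <types>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <!-- Input/output elements live in a separate schema file -->
      <xsd:import namespace="http://bpelpeople.com/Person"
                  schemaLocation="Person.xsd"/>
    </xsd:schema>
  </types>
  <message name="GreetingRequestMessage">
    <part name="person" element="per:Person"/>
  </message>
  <message name="GreetingResponseMessage">
    <part name="result" element="per:Return"/>
  </message>
  <portType name="Greeting">
    <operation name="helloWorld">
      <input message="tns:GreetingRequestMessage"/>
      <output message="tns:GreetingResponseMessage"/>
    </operation>
  </portType>
</definitions>
```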


At first glance there is nothing here to indicate that the WSDL is for a service to be implemented using an EJB. The only thing worth mentioning is that we have defined our XML schema elements for our input and output parameters in a separate file, which we have imported into our WSDL document.

The reason for doing this is that at a later stage we will need to generate Java classes based on the XML Schema, and separating out the schema makes this process slightly simpler, especially if you need to share the schema between multiple EJBs.

The schema for Person.xsd is as follows:
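The schema listing did not survive in the post; a hedged sketch follows. The title, firstName and lastName properties come from the text; the namespace is invented, and the Return element is an assumption based on the type mappings discussed later:

```xml
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://bpelpeople.com/Person"
            xmlns:tns="http://bpelpeople.com/Person"
            elementFormDefault="qualified">
  <!-- Parameters are declared as elements, not named complexTypes -->
  <xsd:element name="Person">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="title" type="xsd:string"/>
        <xsd:element name="firstName" type="xsd:string"/>
        <xsd:element name="lastName" type="xsd:string"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="Return" type="xsd:string"/>
</xsd:schema>
```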


Note: We have defined all our parameters as elements and NOT complexTypes, since the Oracle ESB will complain if you try to use a WSDL which uses complexTypes in its message definitions.

Implement Session EJB
The next step is to implement a corresponding stateless session EJB to implement our Greeting service. The simplest way to do this in JDeveloper is right click your Greeting project and select New. This will bring up the ‘Gallery’ as shown below. Browse to the Business Tier-> EJB section and select Session Bean (as shown below) and click OK.



Figure 1 - Creating a Session Bean

This will launch the Create Session Bean Wizard. In Step 1, select ‘Enterprise Java Beans 3.0(J2EE 5.0)’ as the version of EJB you wish to use and click next.

In step 2, specify the EJB name (i.e. Greeting), keep the default settings for the Session EJB 3.0 options (as shown below) and click 'Next'.

Figure 2 – Specify EJB Name and Options

Next specify the bean class name. This will already have been defaulted based on the EJB name you specified in the previous step (i.e. it will have 'Bean' appended to it). Leave the class name as is, but specify the package name as appropriate. In our case we set the package name to com.bpelpeople.ejb and then click 'Next'.

Figure 3 – Specify Class Name

Finally specify that you want the EJB to implement a remote, local and web service endpoint interface (as shown below). Then hit ‘Finish’ to generate your EJB.

Figure 4 – Specify EJB Interfaces

For each operation defined in our WSDL file we should create the equivalent method in our EJB, defining its input and return parameters. For operations or methods that return simple XML types (e.g. xsd:string, xsd:integer) we can use the equivalent type in Java.

However for complex types, such as our Person element we need to implement the appropriate classes required for serialization/de-serialization between java and xml.

Using schemac to generate serialization/de-serialization classes
As part of the SOA Suite, Oracle provides the schemac utility (bundled with BPEL PM) which you can use to generate the serialization/de-serialization classes. To use this utility open the BPEL Developer Prompt and then run the following command:

schemac -noCompile -sourceOut <source directory> <schema file>

This will generate the Java source for the serialization/de-serialization classes for the specified schema file (e.g. Person.xsd in our case) and place it within the specified source directory. Within JDeveloper you will need to import these classes into the project containing your EJB.

You will notice that for each element, schemac will generate three classes (e.g. Person, IPerson and PersonFactory); we will use the actual concrete class (i.e. Person) for the input parameter to our method.

So for our Greeting EJB we have defined the following method:
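The method listing was lost from the original post; below is a hedged sketch of what the bean might look like. The accessor names on the Person class and the exact greeting string are assumptions, and the container details (the @Stateless annotation and the generated remote/local/web service interfaces the real bean implements) are noted only in comments:

```java
// Illustrative stand-in for the schemac-generated concrete Person class;
// the real class (along with IPerson and PersonFactory) is generated
// from Person.xsd and lives in its own package.
class Person {
    private String title;
    private String firstName;
    private String lastName;

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}

// In the real project this class carries the @Stateless annotation and
// implements the remote, local and web service endpoint interfaces
// generated by the wizard; those container details are omitted here.
class GreetingBean {
    public String helloWorld(Person person) {
        // Build the greeting from the complex type passed in
        return "Hello " + person.getTitle() + " "
                + person.getFirstName() + " " + person.getLastName();
    }
}
```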


Setting Project Classpath
Once you have imported the generated Java classes into your project, in order to compile them you will need to add the orabpel.jar file to your project classpath; this is located at:

$SOA_HOME/bpel/lib/orabpel.jar

Create Deployment Descriptor
Finally we need to create a deployment descriptor for our EJB, within the Application Navigator right click on our Greeting project and select ‘New’. From the gallery, browse to General -> Deployment Profiles and select ‘EJB Jar File’. This will bring up the Create Deployment Profile window.


Figure 7 – Specify Deployment Profile Name

Specify a profile name, i.e. ‘Greeting’ in our example and click ‘OK’. This will launch the ‘EJB JAR Deployment Profile Properties’ window, as shown below.


Figure 8 – EJB JAR Deployment Profile Properties

Specify a name for the application, in our case we have chosen ‘GreetingApp’ (note: you will need to make a note of this value as you will use it when defining the WSIF Binding for your service) and click ‘OK’.


Note: you will need to define an Application Server connection to the OC4J instance to which you want to deploy your EJB first.

Deploying the EJB

You can now use JDeveloper to deploy your EJB, however before doing this you should define an Application Server connection with JDeveloper to the oc4j instance on which the SOA Suite is deployed (e.g. oc4j_soa).

You are now ready to deploy your EJB, using the deployment profile you just created. Right-click on this file and select Deploy To -> oc4j_soa (where oc4j_soa is the name of your application server connection).

As part of the deployment process, JDeveloper will bring up the Configure Application window. However if you click ‘OK’ and follow the standard deployment process, you will need to include the orabpel.jar within your .ear file. This can be a bit cumbersome, particularly if you have several EAR files to deploy.

The other option is to deploy your application as a child of the orabpel application. By designating orabpel as the parent application, your EJB will inherit the set of shared libraries imported by the parent, including orabpel.jar. To do this, select 'GreetingApp' within the 'Configure Application' window and then select orabpel as the parentApp, as shown below in figure 9.


Figure 9 – Setting the Parent Application

Then click 'OK' and JDeveloper will complete the deployment of our EJB.

Adding WSIF Bindings

Now that we have written and deployed our EJB we are ready to add the WSIF bindings to our abstract WSDL file to enable it to be called from the ESB.

Modify Definitions Element
Within the <definitions> element of your WSDL file you need to add the following namespaces:
  • xmlns:ejb="http://schemas.xmlsoap.org/wsdl/ejb/"
  • xmlns:format="http://schemas.xmlsoap.org/wsdl/formatbinding/"
The ejb namespace allows the binding of WSDL operations to methods on an EJB class. The format namespace adds support for mapping Java types to XML Schema definitions.

Add Bindings Element
This is where we bind our service to an EJB rather than a standard SOAP service. For our example, the WSDL <binding> element is defined as follows:
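The binding listing is missing from the original post; the following hedged sketch is reconstructed from the description that follows. The part names (person, result), namespaces and the package of the schemac-generated Person class are illustrative assumptions:

```xml
<binding name="GreetingBinding" type="tns:Greeting">
  <!-- Marks this binding as an EJB invocation rather than SOAP -->
  <ejb:binding/>
  <!-- Map the schema elements to the Java types used at invocation time -->
  <format:typeMapping encoding="Java" style="Java">
    <format:typeMap typeName="per:Person"
                    formatType="com.bpelpeople.ejb.Person"/>
    <format:typeMap typeName="per:Return" formatType="java.lang.String"/>
  </format:typeMapping>
  <operation name="helloWorld">
    <!-- Bind the WSDL operation to the EJB's helloWorld method -->
    <ejb:operation methodName="helloWorld" interface="remote"
                   parameterOrder="person" returnPart="result"/>
    <input/>
    <output/>
  </operation>
</binding>
```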



Ejb Binding
<ejb:binding> should be the first element within our <binding> tag; it identifies that this service is bound to an EJB rather than a SOAP service.

Type Definitions
Next, you need to map the XML Schema elements used within the WSDL message definitions to the Java types used in the method invocations on your EJB.

The <format:typeMapping> element will contain one <format:typeMap> for each XML schema element that we need to map. It has two attributes, encoding and style, both of which should be set to 'Java' to indicate that we are mapping to Java classes.

The <format:typeMap> element has two attributes: typeName, which holds the name of the XML schema element that we are mapping, and formatType, which contains the class name of the Java class to which we are mapping it.

In our example, we have specified two type mappings: one between our Person element and the corresponding class that we generated using schemac, and the other between the Return element and the java.lang.String class.

Method Mapping
The final step is to map the EJB method calls onto the WSDL operations. This is done using the <ejb:operation> tag to identify which EJB method should be used to support a given operation.

This element has the following attributes:
  • methodName – which should be set to the equivalent method within the EJB.
  • interface – which should be set to ‘remote’.
  • parameterOrder – which should be set to the name of the <part> contained within the input message for the operation.
  • returnPart – which should be set to the name of the <part> contained within the output message for the operation.

Add Services Binding
Finally we have to add the <service> element to our WSDL to specify where to locate the service. This looks pretty normal, except that instead of a <soap:address> element, we need to specify an <ejb:address> element.

This contains one attribute, jndiName, which specifies the JNDI name of the deployed EJB. In our example the <service> element is defined as follows:
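The listing is missing from the original post; a minimal sketch, assuming the session bean was deployed under the JNDI name Greeting and reusing the illustrative binding and port names from earlier:

```xml
<service name="GreetingService">
  <port name="GreetingPort" binding="tns:GreetingBinding">
    <!-- JNDI name under which the deployed session bean is registered -->
    <ejb:address jndiName="Greeting"/>
  </port>
</service>
```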



Calling your EJB from ESB
We are now ready to invoke our EJB from within our ESB. To do this, create a SOAP Service based on our WSDL file within your ESB project in the normal way (note you will need to import the Schema into your ESB project first).

However before registering your ESB project you will need to define the following endpoint properties on your service:
  • java.naming.factory.initial - This is used to specify the initial context factory and should be set to com.evermind.server.rmi.RMIInitialContextFactory.
  • java.naming.provider.url - Used to specify the URL for the EJB provider, the structure of this is covered below.
  • java.naming.security.principal - Specifies the user id to be used to invoke the EJB and should be set to the appropriate value (e.g. oc4jadmin)
  • java.naming.security.credentials - Specifies the corresponding password to be used to invoke the EJB and should be set to the appropriate value (e.g. welcome1).

This is so that the ESB is able to locate and invoke the EJB at run time.

java.naming.provider.url
This specifies the URL for the provider (or application) which contains our EJB; it takes the form:

opmn:ormi://<hostname>:<opmn request port>:<oc4j container>/<application>

Where
  • <hostname> is the host on which the Oracle Application Server is deployed.
  • <opmn request port> is the runtime port for OPMN requests on the Oracle Application Server (e.g. 6003).
  • <oc4j container> is OC4J container to which we have deployed our EJB Application (e.g. oc4j_soa).
  • <application> is the name of the application in which our EJB is deployed, we specified this as part of our deployment descriptor (i.e. GreetingApp).
Unfortunately JDeveloper won't allow you to specify these particular properties, so you will need to edit the .esbsvc file outside of JDeveloper.

To do this, first close down JDeveloper (otherwise you can get sync issues), then open the appropriate .esbsvc file in your favourite text editor and specify the endpoint properties at the end of the file as follows:
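The listing did not survive in the original post; the endpoint properties might look something like the following sketch. The exact element names used by the .esbsvc format, along with the host name and credentials, are assumptions for illustration:

```xml
<endpointProperties>
  <property name="java.naming.factory.initial"
            value="com.evermind.server.rmi.RMIInitialContextFactory"/>
  <!-- opmn:ormi://<hostname>:<opmn request port>:<oc4j container>/<application> -->
  <property name="java.naming.provider.url"
            value="opmn:ormi://soahost:6003:oc4j_soa/GreetingApp"/>
  <property name="java.naming.security.principal" value="oc4jadmin"/>
  <property name="java.naming.security.credentials" value="welcome1"/>
</endpointProperties>
```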



Once done, you can open up JDeveloper and deploy your ESB project.

Deploy EJB Classes to ESB Engine
Before we can call the EJB from the ESB, we must first deploy the remote interface class of our EJB (Greeting.class) and our Java serialization/de-serialization classes to the ESB engine. The simplest way to do this is to copy the Java classes into the directory:

<soa_home>\bpel\system\classes

Once done you will need to re-start the SOA Suite so that it picks up the classes.

Deploy Patch to ESB
Finally, if you are using 10.1.3.3 of the Oracle SOA Suite you will need to install patch 6314009, which is available from metalink.oracle.com.

########

Monitoring SOAP Messages between BPEL Processes

When debugging BPEL processes it can sometimes be very useful to see the actual messages flowing between processes.

Often the audit trail in the BPEL Console provides sufficient information to see what's going on; by clicking on the appropriate invoke, receive or reply activity you can see the content of the payload that was sent or received.

However, this is only half the story, as it doesn't show details of the actual SOAP messages exchanged, in particular the SOAP headers used for WS-Security and WS-Addressing.

Now you may know that Oracle BPEL PM ships with obtunnel, a tool for monitoring SOAP messages exchanged between two parties, e.g. between BPEL and an external web service.

For those who aren’t familiar with obtunnel, it’s a TCP Monitor which works by listening on one port and then forwarding the message onto another (i.e. where the actual service is located).

The simplest way to configure a process to call a web service via obtunnel is to set the location property on the PartnerLink and re-deploy the process. The value specified here will override the web service endpoint specified in the WSDL document.

This works fine for individual service invocations. However, if we want to monitor messages between several different BPEL processes, having to modify multiple partner links across multiple processes can be quite tedious, and also requires you to go back afterwards and amend them all to their correct values.

In addition, for asynchronous processes it's more complicated, since the invoking process passes a callback location to the invoked process, which causes replies to bypass obtunnel, so you don't see the responses coming back.

Ideally what we want is a simple way to configure BPEL PM to always go via obtunnel when calling a BPEL process without the need to make any changes to the process definition. This is the subject of this technical note.

Configuring the BPEL Server to use obtunnel as a Proxy
Oracle BPEL PM can easily be configured to use a proxy; typically this is for scenarios such as placing an HTTP gateway in front of a BPEL server. However, we can use the same approach to set obtunnel up as the proxy.

To achieve this we need to set two properties, soapServerUrl and soapCallbackUrl, in the server configuration file.

soapServerUrl
This URL is published as part of the SOAP address of a process in the WSDL file.

The hostname and port for this URL should be customized to match the Host and the Listen Port of your instance of obtunnel. Assuming you are running obtunnel on the same host as your BPEL Server, you just need to change the port number.

soapCallbackUrl
This URL is sent by the process (using WS-Addressing) as part of the asynchronous callback address to tell the recipient of the request where to send the response to.

Again the hostname and port for this URL should be customized to match the hostname and listen port of your instance of obtunnel; so should have the same value as soapServerUrl.
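For example, if obtunnel is listening on port 1234 on the same host as the BPEL Server (the hostname here is illustrative), both properties would be set to the same value:

```
soapServerUrl   = http://soahost:1234
soapCallbackUrl = http://soahost:1234
```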

The simplest way to set these properties is on the Configuration tab in BPEL Admin (to access this, select 'Goto BPEL Admin' on the BPEL Console login screen). Once set, you will need to restart your BPEL server.

Note: You will need to re-deploy any processes currently deployed to your server in order for them to be re-compiled (e.g. to regenerate the WSDL) with the correct addresses.

Configuring the BPEL Domain
Once we have configured the server, any external caller of a process will now access both the WSDL and the service via the proxy URL.

However the default behaviour for a process is to bypass this proxy. Why? Really for reasons of performance: it simply doesn’t make sense for one process to call another process via SOAP, as the overhead would be too big; instead a process calls another process via a direct in-memory Java call, which is far more performant.

However for our purpose, we can turn this off by setting the property optSoapShortcut to false. The simplest way to set this is in the BPEL Console, click on Manage BPEL Domain (top right hand corner), and then update the property in the configuration tab.

Note: In version 10.1.3.0.1, this property is not actually specified in the domain configuration file, so you will need to add it manually to the domain configuration file, located at:

[bpel_home]/domains/[domain]/config/domain.xml
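As a sketch of what that manual addition might look like (the exact element structure varies between releases, so copy the shape of the existing property entries in your own domain.xml rather than relying on this fragment):

```xml
<property id="optSoapShortcut">
    <value>false</value>
</property>
```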

Once added you will need to re-start the engine for it to pick up the change (from then on you can modify it as normal in the domain configuration tab).

Running obtunnel
The simplest way to run obtunnel is to launch the BPEL Developer Prompt, which simply opens a Command Prompt with all the appropriate environment variables set. From here simply type the command obtunnel.

Once launched simply specify the port you want to listen on, and then the host name and port of your BPEL Server.

Summary
Using this approach you can easily monitor the various SOAP messages between processes, without the need to make any actual changes to the configuration of the process. All that is required is to deploy them to a version of the BPEL PM Server configured to use the proxy.

Note: If you develop your BPEL processes against the “Proxy BPEL PM Server”, then the WSDL locations in the PartnerLinks will contain the host and port number of the proxy rather than of the service itself. This is typically not a problem, as you re-configure these as part of the process of deploying the processes to a test / production BPEL PM Server.

########

Using nested Schemas within BPEL

When developing any BPEL based solution, you soon find that you are defining a common set of data objects that are used across multiple processes.

The most obvious place to define those data objects is in one or more XML Schemas which can then be referenced by each of your BPEL Processes.

Oracle BPEL PM 10.1.3 now provides the ability to import these Schemas as part of the BPEL Project Creation Wizard (in previous versions you had to import the Schema after the project was created – which you can of course still do in 10.1.3).

This all works very well; however there is a simple gotcha that I’ve seen catch out a number of people, and that’s when you import schemas which themselves import schemas.

Let’s take a simple example. A common scenario is to have a schema which defines common objects such as address, phoneNo, etc. This would be shared across multiple domain specific schemas such as customer (e.g. it imports the common schema to use the address, phoneNo type to hold the equivalent information for a customer).

Now, if we were to import the customer schema into our BPEL Process, by default all we are importing are the definitions contained in Customer.xsd. This causes problems when we attempt to parse the customer schema, as the parser can’t resolve the definitions in the common schema.

The obvious answer here is to simply import the common schema as well. However this doesn’t work. To understand why let’s look at the import statements created in the <types> element of the WSDL file for the BPEL process:
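To illustrate (the namespaces here are hypothetical), the generated <types> section looks something like this, with each import wrapped in its own schema element:

```xml
<types>
    <schema xmlns="http://www.w3.org/2001/XMLSchema">
        <import namespace="http://example.com/customer"
                schemaLocation="Customer.xsd"/>
    </schema>
    <schema xmlns="http://www.w3.org/2001/XMLSchema">
        <import namespace="http://example.com/common"
                schemaLocation="Common.xsd"/>
    </schema>
</types>
```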



The issue here is that each of the schemas has been imported into a “separate” schema, thus the common schema is not visible to the customer schema. However to fix this you simply edit the WSDL file to combine the imports into a single schema as illustrated below:
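As an illustrative sketch (the namespaces are hypothetical), the edited <types> section combines both imports into a single schema element, so the common definitions become visible to the customer schema:

```xml
<types>
    <schema xmlns="http://www.w3.org/2001/XMLSchema">
        <import namespace="http://example.com/customer"
                schemaLocation="Customer.xsd"/>
        <import namespace="http://example.com/common"
                schemaLocation="Common.xsd"/>
    </schema>
</types>
```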

########

Using Email to initiate a BPEL Process

The notification service in Oracle BPEL Process Manager allows you to send a notification by email (as well as voice message, fax, pager, or SMS) from a BPEL process.

However another requirement is to be able to use the receipt of an email to initiate a BPEL process. This is the subject of this blog; many thanks to Muruga Chinnananchi, on whose original example this is based.

Essentially we want to create a simple process, EMailActivation, which receives an email sent to a particular email address. To achieve this there are two basic steps:

  1. Configure the BPEL Server to be able to connect to the mail account.
  2. Define the BPEL Process to be initiated on receipt of an email.
Note: Download working example.

Configure Email Account
To configure the email account which the BPEL Server should connect to, we need to place a MailAccount xml configuration file (in our example BpelMailAccount.xml) into the following directory:

    <soa_home>\bpel\domains\default\metadata\MailService

Note: You will need to create the metadata and MailService directories.

The file itself can have any name (though it must end with .xml), as you can define multiple accounts. However, make a note of the name, as you will need it to link your BPEL Process to the actual mail account.

Here’s our sample file:
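A minimal sketch of such a file is shown below. The values match the local James accounts created later in this post, but the root element name and any outgoing-server settings are assumptions, so compare against a sample shipped with your release:

```xml
<!-- BpelMailAccount.xml - illustrative sketch only; the root element and
     outgoing-server configuration may differ in your release -->
<mailAccount>
    <incomingServer>
        <protocol>pop3</protocol>
        <host>localhost</host>
        <email>bpel</email>
        <password>welcome1</password>
    </incomingServer>
</mailAccount>
```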



The outgoing SMTP service doesn’t need to be configured (as we use the notification service to send outgoing emails). However the incoming account is defined by the following tags:

    <incomingServer>
        <protocol>[protocol pop3 or imap]</protocol>
        <host>[imap or pop3 server]</host>
        <email>[imap or pop3 account]</email>
        <password>[imap or pop3 password]</password>
        <folderName>[imap only, inbox folder ]</folderName>
    </incomingServer>

Note: When defining the account name, be careful to use the actual account name not the email address as they are not always the same.

Creating BPEL Process
The first step is to use JDeveloper to create an Asynchronous process initiated by a request message with a payload containing an element of type mailMessage (defined in Mail.xsd installed as part of BPEL PM).

To do this use the BPEL Project Creation wizard to create a BPEL Process in the normal way. After entering the process name and specifying the process template to be asynchronous, select "Next".

This will take you to the next step in the wizard where you specify the Input and Output Schema Elements. Click on the flashlight icon for the input schema and select Mail.xsd (located in <SOA_HOME>\bpel\system\xmllib) as shown in figure 1 below.


Figure 1 - Specify Input and Output Element

This will then open the type chooser window to select the element to use from the imported schema. Select the mailMessage element as shown in figure 2 below.


Figure 2 - Type Chooser

Once the process has been created you can remove the callBackClient activity as we won’t need this.

Import Common Schema
If you now try and compile your process, you will find it fails with an error message. This is because Mail.xsd itself imports a schema (common.xsd), so you need to import this schema as well.

To import the Mail Schema into your BPEL Process, ensure the diagram view for your BPEL process is open and selected in JDeveloper. Then within the BPEL Structure window, right click on the Project Schemas node and select "Import Schemas" (as shown in figure 3 below).


Figure 3 - Import Schema

Note: Once imported, manually update the WSDL file to ensure the import statements for both the Mail.xsd and common.xsd are contained within the same <schema> element or it will still fail to compile. See previous blog - Using Nested Schemas with BPEL for details.

Define Mail Activation Agent
The process itself is now ready for deployment. However we need to complete one final activity, which is to tie the BPEL Process to a mail activation agent for the Email account that we defined earlier.

The Activation Agent will poll the defined mail box for emails and then for each email it receives invoke an instance of the process to handle it.

To do this you need to add the following definition to the bpel.xml file, after the <partnerLinkBindings> element:

  <activationAgents>
    <activationAgent className="com.collaxa.cube.activation.mail.MailActivationAgent"
                     heartBeatInterval="60">
      <property name="accountName">BpelMailAccount</property>
    </activationAgent>
  </activationAgents>


Where heartBeatInterval is how often we want to poll the email account for new emails, and the accountName corresponds to the name of the account configuration file we defined earlier.

Finally deploy the process and send an email to the appropriate account.

Gotcha!! - If you modify the BPEL process in JDeveloper, the bpel.xml file may lose its changes (i.e. the activationAgent definition), and as a result the process will never get initiated - so always check the bpel.xml file is correctly defined just before deploying the process.

Email Server
To make testing easier, I installed my own local mail server. For this I used James (which is an Open Source Java Mail Server from Apache).

Installation of James is very straightforward: you just download it and unzip it to a convenient location. To start it, use the script run.bat or run.sh (depending on your operating system) in the james-2.3.0/bin directory.

To configure James, just open a telnet session on port 4555 to access the Remote Administration Tool, from which you can create the required accounts. For example, to create the accounts bpel and jsmith (where the password is welcome1) enter the following:

JAMES Remote Administration Tool 2.3.0
Please enter your login and password
Login id:
root
Password:
root
Welcome root. HELP for a list of commands
adduser bpel welcome1
User bpel added
adduser jsmith welcome1
User jsmith added
listusers
Existing accounts 2
user: bpel
user: jsmith
quit

########

Using Analytics to Modify In-Flight Processes

I recently had the pleasure of presenting the Oracle Key Note at the Butler Business Process Management & Integration symposium. The subject of the keynote was to look at how we can use business analytics to enable us modify processes already in-flight.

When you consider the traditional closed-loop BPM lifecycle, as illustrated below, the emphasis has always been very much on using analytics about how processes have performed in the past, so that we can modify the actual process definition to improve / optimize future versions of the process.



Whilst this approach has many benefits, the challenge was how we could modify processes already in-flight. In order to achieve this we need to overcome a number of challenges: firstly, we need to collect analytics in (near) real time in order to base our decisions on up-to-date information; secondly, we need a controlled way of modifying the in-flight process based on that data.

Collecting Real Time Business Analytics
Business Activity Monitoring (BAM) provides us with the tool to collect the near real-time analytics on which we can base our decision on how we wish to modify our in-flight processes.

For those of you who are not familiar with Oracle BAM, it enables the business to gain a real-time view of what’s happening with the business. To achieve this it provides the following key components:
  • Capture Real Time Data - Within a BPEL process you can place a sensor on any activity within the process; when triggered, this generates an event which is picked up by BAM in real time.

    Note: You can actually use events generated pretty much by any event based system (e.g. database triggers, messaging systems, etc), it’s just that BPEL makes it very easy.
  • Analyze Processes, Trends, and Context - Next, Oracle BAM is able to correlate all these events and synthesize them into meaningful data objects within its active data cache.
  • Interface for Business Users - The business can then build interactive real-time dashboards and proactive alerts on top of this active data to enable them to monitor the business processes.
For more information on BAM see http://otn.oracle.com/bam.

Modifying In-Flight Processes
Once we have the real time analytics, the next challenge here is to use the data to modify the processes already in flight.

Now when we talk about modifying in-flight processes, most people assume that this involves quickly writing a new version of the process, testing it, deploying it, and then migrating the existing in-flight processes to this new improved version. The reality is that this is rarely as quick as required!

Rather, what’s really required from a business perspective is the ability to modify the flow through a process (and its sub-processes) in order to obtain the desired business outcome. There are three basic patterns which provide a way of achieving this:
  • Modify Process Flow – This uses business rules, which can be modified externally to the process to change the flow through a process instance.
  • Exception Management – This uses BAM to launch a process to manage an exception which is (typically) occurring across multiple processes.
  • Dynamic Process Assembly – Here we use BAM as source of real time data against which to evaluate Business Rules and use the results to dynamically call sub processes and assemble the end to end process on the fly.
Modify Process Flow
With this approach we are not looking to modify the actual process, but rather the “path” through the process. If we look at any process, then at various points it will hit a decision point which determines which route it should take through the process. Rather than building these decision points directly into the process, we can externalise them in a rules engine such as Oracle Business Rules or ILOG JRules, as illustrated below.



This then enables the business to modify the rules based on what’s currently happening within the business (as indicated through BAM) and thus get the process to take a different route.

An excellent case study for this is Cattles (a UK company which lends money to the secondary market), which uses a combination of BPEL, BAM and Rules for this purpose.

Exception Management
BPEL already provides a comprehensive way to handle exceptions, within the context of a process. However on some occasions we want to take a more holistic view to managing exceptions. This is certainly the case where we are suddenly “hitting” a common exception across multiple processes.

For example, say we have a loan flow process that goes out to an external credit rating agency, and for some reason that credit rating check fails. Within the individual BPEL process, we could have built in a simple retry mechanism that simply waits a period of time before re-trying the service.
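Such a retry mechanism might be sketched in BPEL 1.1 as follows (the activity, variable and partner link names are all hypothetical): a scope that catches the fault from the invoke and waits five minutes, wrapped in a while loop that re-tries until the call succeeds.

```xml
<while condition="bpws:getVariableData('creditChecked') = 'false'">
    <scope name="CallCreditAgency">
        <faultHandlers>
            <catchAll>
                <!-- Credit check failed: wait 5 minutes, then loop round and re-try -->
                <wait for="'PT5M'"/>
            </catchAll>
        </faultHandlers>
        <sequence>
            <invoke name="InvokeCreditRating" partnerLink="CreditRatingService"
                    portType="svc:CreditRatingService" operation="process"
                    inputVariable="crInput" outputVariable="crOutput"/>
            <!-- Invoke succeeded, so flag success to exit the retry loop -->
            <assign name="FlagSuccess">
                <copy>
                    <from expression="'true'"/>
                    <to variable="creditChecked"/>
                </copy>
            </assign>
        </sequence>
    </scope>
</while>
```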

However, if we start seeing a high number of failures (e.g. we may have 1000s of these processes running at any time), rather than handle the same exception multiple times (at least for the same root cause) we can use BAM to detect that we have an issue and kick off a single process to handle that exception (as illustrated below).



Dynamic Process Assembly
Rather than use BAM to feed a dashboard, we can use it as a real time data source. A BPEL process can then query this real time data and pass it to the rules engine, enabling us to evaluate rules based on real time data.
At this point we could just use the result from the rules engine to dictate the flow through a process (as we did with the first example – Modify Process Flow).

However (as the title implies) we can take it a step further and dynamically assemble the process. The trick here is to have multiple “sub” processes all with the same WSDL definition. The rules engine rather than returning a “decision” now returns the end-point of the sub process to call, which the main BPEL process can then call dynamically (as illustrated below).



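As a sketch of the dynamic call (all variable, part and partner link names here are hypothetical), Oracle BPEL PM lets you copy a WS-Addressing EndpointReference onto a partner link before the invoke, so the endpoint returned by the rules engine determines which sub-process is called:

```xml
<assign name="SetSubProcessEndpoint">
    <!-- Build an EndpointReference skeleton in a variable of type
         wsa:EndpointReference -->
    <copy>
        <from>
            <wsa:EndpointReference
                xmlns:wsa="http://schemas.xmlsoap.org/ws/2003/03/addressing">
                <wsa:Address/>
            </wsa:EndpointReference>
        </from>
        <to variable="subProcessEPR"/>
    </copy>
    <!-- Set the address to the endpoint URL returned by the rules engine -->
    <copy>
        <from variable="ruleResult" part="payload"
              query="/tns:decision/tns:endpointURL"/>
        <to variable="subProcessEPR" query="/wsa:EndpointReference/wsa:Address"/>
    </copy>
    <!-- Point the partner link at the chosen sub-process before the invoke -->
    <copy>
        <from variable="subProcessEPR"/>
        <to partnerLink="SubProcessPL"/>
    </copy>
</assign>
```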
For more details on how to implement dynamic routing, see my previous blog; Using BPEL to Implement Dynamic Content Based Routing.

########

Writing a Recursive BPEL Process

Recently I was working with a client who wanted to implement a recursive process (i.e. one that calls itself). Now recursion is a classic programming pattern, and in theory it should be pretty straightforward for a process to call itself. However at first sight it’s not so obvious how.

The issue here is that the way a BPEL Process calls out to another process is to drag a Partner Link on to your BPEL Process and then use the Service Browser to select the deployed process that you wish to call.

Of course with our scenario, the first issue we hit is that we haven’t yet deployed the process, as we’re still writing it! The obvious thing to do here is create a stub process, with just the basic receive and reply operations defined, and then deploy the stub. This works great; we can now select the process within the ‘Service Browser’ and finish implementing the process.

However the next problem occurs when we try and re-deploy the completed process. Here the deployment fails with an error stating that it is unable to validate the WSDL for the Partner Link. The issue here is that as part of the deployment process we are “over-writing” the old stub version of the process with a new version (as we want to keep the version number the same).

The way BPEL PM achieves this, is to un-deploy the old version of the process, before deploying the new version. As part of the deployment, the BPEL engine validates the WSDL for all Partner Links including our recently un-deployed version of the process and as a result fails!

Fortunately there is a simple work around. First create and deploy the sub process as before, then once deployed go to the BPEL Console and select the process from the dashboard. Then select the WSDL tab and click on the URL for the WSDL Location.

This will open a browser containing the WSDL for the BPEL Process; save the file to your local file system (File -> Save As in IE) as a .wsdl file. Now when you create your Partner Link in the process, instead of using the ‘Service Browser’ use ‘Browse WSDL files from your Local File System’ and select the WSDL file you just saved.

Note: When prompted to make a local copy specify ‘yes’.

From here you will be able to implement and deploy your BPEL process without any problems.

Example
I’ve created a simple example process, based of course on the classic Factorial example. You can download this here.

Deployment Considerations
The one drawback with this approach is that the WSDL file will now contain the endpoint location for the service within it. Thus if you were to deploy the process on a different server it would fail at run-time.

So you need to modify the WSDL file at deployment time so that the endpoint reflects the hostname and port number of where the process is actually being deployed. The simplest way to do this is to update your build script so that Ant will automatically do this for you.
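A minimal sketch of such an Ant step is shown below (the file name, token and property names are hypothetical); it rewrites the endpoint in the saved WSDL before deployment:

```xml
<!-- Illustrative only: replace the stub endpoint in the saved WSDL with
     the host and port of the target server before deploying -->
<target name="fix-endpoint">
    <replace file="bpel/FactorialService.wsdl"
             token="http://localhost:9700"
             value="http://${deploy.host}:${deploy.port}"/>
</target>
```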

Final Thought
Now in theory you could have incredibly deeply nested processes using this approach; however I would advise against it, as it is bad practice and likely to have performance implications.

For example, if we ran the Factorial process to work out 50 factorial, that would result in 50 process instances. Now if we expected to handle 1,000 processes running in parallel, this would result in 50,000 process instances; so the actual number of process instances can increase very dramatically.

So I would recommend using this approach with caution to ensure that it doesn’t result in a dramatic increase in the overall number of process instances.

########

Querying BPEL Process Instances

The Oracle BPEL Console provides a great tool for monitoring the status of in-flight and completed processes. However it is often a requirement to present this information to a business user within a business specific view. One way of achieving this is to use the Oracle BPEL Portlets. This is fine where you want to give the user a specific list of process instances, for example a list of all currently running Purchase Order processes. But what about when you need to give them a list more filtered to their specific business requirements, for example:

  • How do I find all open purchase order processes for a specific customer?
  • How do I find all help desk processes being managed by a specific customer service representative?
  • How do I find all open expense processes that are waiting approval?

In addition, once you’ve located a process instance, how do I provide the user with access to relevant data contained within the process? Again the BPEL Console provides a mechanism for drilling into the process and looking at the audit trail for the data. But often we want to provide this data in a summarised business view designed specifically for the needs of the business user.

The Oracle BPEL PM Server provides a series of APIs that enable you to meet these requirements. In fact, what is not often realised is that the BPEL Console itself makes use of these APIs, giving you the flexibility to completely re-write the console if that is what’s required! In reality this is rarely, if ever, the case; typically the requirement is to provide business users with a simplified view tailored to their specific needs. A simple way of achieving this through the use of these APIs is the subject of this article.

Locating a Process Instance
The Oracle BPEL PM Client API provides a Locator class for enabling a client application to search for processes, instances and activities.
The Locator class provides a number of constructors, which enable you to connect to a BPEL domain hosted on either a local or remote J2EE Server. For the purpose of locating specific process instances it provides two very useful methods:

    listInstances(WhereCondition wc)
    listInstancesByIndex(WhereCondition wc)

Each method returns an array of objects of type IInstanceHandle, which can then be used to perform operations on the corresponding process instance. The key parameter for each of these methods is the WhereCondition, which is used to build up a query to restrict which instances are returned by the method.

Note: WhereCondition objects may be concatenated together to form a larger query. The methods append and prepend allow the user to add a clause (in String format) or even a whole WhereCondition object to the beginning or end of the current where condition.

The following code snippet shows how to construct a WhereCondition to return all running process instances for the “LoanFlowProcess” where the current status is “CheckingCredit”:

    String pProcessId = "LoanFlowProcess";
    String pStatus = "CheckingCredit";

    // Constructs a where condition that searches for open instances
    WhereCondition where = WhereConditionHelper.whereInstancesOpen();

    // Extend the where condition to filter on process id
    WhereCondition whereProcessId = new WhereCondition("process_id = ?");
    whereProcessId.setString(1, pProcessId);
    where.append(whereProcessId);

    // Extend the condition to filter on processes with the status 'CheckingCredit'
    WhereCondition whereStatus = new WhereCondition("status = ?");
    whereStatus.setString(1, pStatus);
    where.append("and").append(whereStatus);

    // Find Instances
    IInstanceHandle[] instanceHandles = locator.listInstances(where);

In the final step, the Locator class is performing a query against the BPEL dehydration store, similar to the one illustrated below:

    "select cikey from cube_instance where " + whereCondition.getClause();

We use the WhereCondition (which wraps a SQL prepared statement where condition) in order to restrict the result set returned by the query. It’s worth exploring the various parts of the WhereCondition in a bit more detail.

For the first, we use the WhereConditionHelper class to restrict the query to only currently running processes. This is a simple utility class which provides a variety of static methods for creating various query fragments (e.g. return process instances whose state is open, completed, aborted, stale, etc.) which can then be appended to additional where conditions to create the required query.

For our second condition we are literally adding the condition cube_instance.process_id = "LoanFlowProcess" to our prepared statement. Here you can specify pretty much any of the columns in the database table cube_instance (e.g. process_id, revision_tag, priority, status). In reality, rather than naming the column explicitly, we should use the appropriate constants defined in com.oracle.bpel.client.util.SQLDefs (e.g. SQLDefs.CI_process_id for our example).

The final condition is similar to the second in that we are querying on the process status. But what is the process status? Well it shouldn’t be confused with process state, which we queried on in the first WhereCondition.
Rather, the process status is a variable that keeps track of where in the process a particular process instance is. When the process is first initiated, this value is set to ‘initiated’. This value is then updated every time you enter a new scope within a process to contain the name of the scope.

Note: When you have nested scopes, it will contain the name of the lowermost nested scope that the process is in, i.e. it contains the last scope that was entered. Also, when a process leaves a scope the status value is NOT reset, i.e. it will still contain the name of the previous scope until it enters a new scope.

Process Indexes
The listInstances method is very useful, but it still doesn’t allow us to perform a query based on actual data held in the process instance, e.g. just return the loan flow processes for ‘Dave’. To solve this problem, BPEL allows a process to have up to 6 indexes, and lets you create a where condition across one or more of these indexes. Essentially there are two steps to this: first you set the index values on the actual process instance; secondly you use the index values in a query to pull back all processes for a particular index value in a similar fashion to above.

Setting the Index Value
The simplest way to achieve this is to embed a piece of Java (using the Java Embed activity) at the start of the process to call the setIndex API to set the index value based on a value in the initial message, as shown in the example below:

    // Set Index1 for Customer Name
    String customerName = ((com.collaxa.cube.xml.dom.CubeDOMText)
        getVariableData("input", "payload",
            "/auto:loanApplication/auto:customerName/text()")).getText();
    setIndex(1, customerName);

Note: The getVariableData method is used to retrieve the customerName from the “input” variable; the syntax of the parameters is similar to the “from” component within an assign construct.
Querying Processes
The following code snippet shows how to construct a WhereCondition to return all running process instances where index_1 is equal to ‘Dave’:

    String pCustomerName = "Dave";

    // Constructs a where condition that searches on index 1
    WhereCondition where = new WhereCondition(SQLDefs.CX_index_1 + " = ?");
    where.setString(1, pCustomerName);

    // Find Instances
    IInstanceHandle[] instanceHandles = locator.listInstancesByIndex(where);

However there is one minor issue with this; under the covers the listInstances method is performing a query on the cube_instance database table, whilst listInstancesByIndex is performing a query on the ci_indexes database table. The issue arises if we want to perform a query that is a join across these two tables, i.e. show me all LoanFlow processes for Dave. The WhereCondition API doesn’t (naturally) allow for joins; however there are two possible workarounds.

The first is to set the index values to hold the additional data required by the query, e.g. set index_1 to hold the process name and index_2 to hold the customer id.

The second is to extend the where condition passed to the listInstances method with an IN condition that queries against the ci_indexes database table, as shown below:

    // Extend the where condition to filter on index_1 in ci_indexes
    WhereCondition whereIndex = new WhereCondition(
        "cikey in (select cikey from ci_indexes where index_1 = ?)");
    whereIndex.setString(1, pCustomerName);
    where.append("and").append(whereIndex);

Note: I’ve used table and column names for clarity, but in reality you should use the constants defined by SQLDefs.

Using an Index to Set Status
Earlier in the article we looked at how we can use the process status to keep track of where we currently are in a process (remember, status contains the name of the last scope that we entered).
However, whilst this is useful, it has a couple of drawbacks. One is that if we use the listInstancesByIndex method to locate a process, we can’t actually filter on the state of the process. The other is that we rely on ensuring that the scopes are correctly named and designed to keep track of where we are in the process. In reality we may only have a few key milestones that we are interested in, and these may span several scopes, or we may have more than one milestone contained within the same scope. An alternative is to use an index to hold the status of the process, and just update it using the setIndex method at appropriate points within the process.

Accessing Process Data
Once we have our list of process instances, the final stage is to actually access the relevant data contained within each instance to display to the business user. This is simply a case of iterating through our array of instance handles.

Once you have the IInstanceHandle for a process, you can use this to access process variables contained within the process instance using the getField method. However you need to take care that whatever variable you are trying to access is currently visible within the active scope of the process. I find the simplest way to do this is to define a global variable (i.e. define the variable at the process level) and initialise it at the beginning of the process based upon the content of the initial message received by the process. Then during the lifetime of the process, update the variable as required to reflect the true state of the process.

The following code snippet shows how we can process each instance returned by the locator and access the variable “LoanApplicationSummary” defined in the BPEL process.
    // Find Instances
    IInstanceHandle[] instanceHandles = locator.listInstancesByIndex(where);

    // Process each instance
    for (int i = 0; i < instanceHandles.length; i++) {
        IInstanceHandle instanceHandle = instanceHandles[i];

        // Get Loan Application Summary Variable
        Element loanApplicationElement =
            (Element) instanceHandle.getField("LoanApplicationSummary");

        // Create Loan Application Bean
        LoanApplication loanApplication =
            LoanApplicationFactory.createFacade(loanApplicationElement);

        // Process Loan Application Bean as required
        ...
    }

To access the variable you use the getField method on your instance handle, as shown below:

    // Get Loan Application Summary Variable
    Element loanApplicationElement =
        (Element) instanceHandle.getField("LoanApplicationSummary");

This will return a DOM (Document Object Model) Element representing the XML contained within the process variable. You can use the DOM API to parse and manipulate the XML content, but for any complex structure this can be quite tricky. To make this simpler, Oracle BPEL Process Manager provides a lightweight JAXB-like Java object model on top of XML: a so-called XML façade. The façade provides a Java bean-like front end for an XML document/element. Façade classes have a corresponding factory class which parses the XML document/element to create the façade, as shown in the code snippet below:

    // Create Loan Application Bean
    LoanApplication loanApplication =
        LoanApplicationFactory.createFacade(loanApplicationElement);

Once the façade has been created, you can use its getter methods to access the required data.

Note: Façades are generated using the schemac tool shipped with Oracle BPEL Process Manager. You can use schemac to generate the façades from WSDL or XSD files (see the Oracle BPEL PM Developer Guide for further details).

Summary
As we have seen, Oracle BPEL PM provides a powerful Client API that makes it relatively simple to build a business specific “console” tailored to the needs of the user.
For further information on the API, see the Oracle BPEL Process Manager Client API Reference.

Posted by Matt Wright at 14:51

Wednesday, 14 February 2007

"Private" BPEL Processes

When developing any BPEL-based solution, good practice dictates that you take a modular approach to process design, which allows the sharing of sub-processes among higher-level processes. For example, a payment process may be used by both the expenses process and the order process. As a result you will often end up with BPEL processes that are only intended to be called by other BPEL processes, typically ones with at least some knowledge of the underlying process. So how do you prevent other 'clients' from directly invoking these processes?

Initially this may sound like a security issue, and Oracle BPEL Process Manager provides a number of ways of securing BPEL processes; in addition, Oracle Web Services Manager provides a comprehensive solution for adding policy-driven security to all Web services (not just BPEL processes). However, security is typically intended for enabling controlled, secured access to a BPEL process (or Web service) by authorized clients, whereas in this case we don't actually want any client directly accessing the process. Admittedly we could take a standards-based security approach to this, but is there a simpler way?

Many programming languages, such as Java, provide the concept of private or protected methods that control what has access to them. For example, in Java a class may declare some of its methods as protected, indicating that only other classes in the same package (or sub-classes) can access them. The great thing about this approach is that the developer is actually signalling a level of intent, i.e. this method should not be called directly except by related classes that can be trusted to use it correctly.
Ideally BPEL would provide similar functionality, but unfortunately it doesn't. So is there a way of achieving something similar? Well, it turns out there is a fairly straightforward way of getting close to the desired effect. The approach makes use of the Oracle HTTP Server embedded within the Oracle Application Server (as such this won't work for the developer install) to prevent access to a specific URL pattern, plus the use of domains within Oracle BPEL PM to give all "private" processes a simple, common URL pattern.

Configuring Oracle HTTP Server

Oracle HTTP Server is the Web server component of Oracle Application Server and is based on the Apache infrastructure. Anyone familiar with Apache administration will know that it provides Allow and Deny directives which let you allow or deny access to a particular URL (or pattern) based on a host name, IP address (or partial IP address), or a fully qualified domain name (or partial domain name). By adding a directive to the httpd.conf file we can allow access to any URL matching a specified pattern only from the host on which the Oracle Application Server itself is installed. For example, the following directive will only allow access from the local host to any URL which contains the string "/orabpel/private/":

    <LocationMatch "/orabpel/private/">
        Order deny,allow
        Deny from all
        Allow from localhost
    </LocationMatch>

Note: The httpd.conf file can be found in the directory: ${ORACLE_HOME}/Apache/Apache/conf

Configuring Oracle BPEL PM

The trick here is to provide a simple URL pattern on which to place the restriction. This is actually rather straightforward. Oracle BPEL PM has the concept of domains into which a BPEL process is deployed. A BPEL domain allows a single instance of Oracle BPEL Process Manager to be partitioned into multiple virtual BPEL sections (each identified by an ID and protected by a password). When Oracle BPEL Process Manager is installed, an initial domain named 'default' is created.
If you inspect the location of any deployed process, you will see that the domain makes up part of the URL, for example:

    http://[hostname]:[port]/orabpel/[domain]/[process name]/[process version]

Using this approach we can create a domain called "private". By doing this we can define a directive in Apache to prohibit access to any URL that contains the pattern "/orabpel/private/". You can create a new domain from the BPEL Console. From the initial login screen, instead of logging in to a particular domain, select the link 'Goto BPEL Admin' (the default password is oracle). From here select the 'BPEL Domains' tab and then 'Create New BPEL Domain'. Any process we want to make private we then simply deploy to this domain. This makes it very simple to make a process "private", with the added benefit that it also signals that the process is intended to be private.

Additional Considerations

This approach only works when you call the BPEL process via the SOAP stack. Oracle BPEL PM also provides a Java RMI interface, and requests made via that interface don't go through the Oracle HTTP Server. However, access via RMI tends to be more tightly controlled, so this should not present a serious problem. The approach also doesn't prevent other clients on the same box from calling the BPEL process; then again, if the client is hosted on the same box, you should have a reasonable handle on what it's doing. Finally, it doesn't prevent you from invoking the BPEL process from the BPEL Console, as that invocation is made from the BPEL server itself; however, each domain is password protected, so this shouldn't really be an issue. As mentioned, this approach doesn't work for the developer install of Oracle BPEL PM, as it doesn't embed the Oracle HTTP Server.
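The effect of the Deny/Allow restriction described above can be sketched in plain Java. This is purely a toy stand-in for Apache's request processing (nothing Oracle or Apache ships), just to make the decision logic concrete:

```java
// Toy stand-in for the "Order deny,allow / Deny from all / Allow from
// localhost" logic applied to the /orabpel/private/ URL pattern:
// private URLs are denied to everyone except the local host, while all
// other URLs remain unrestricted.
public class PrivateFilterSketch {
    public static boolean allowed(String url, String remoteHost) {
        if (!url.contains("/orabpel/private/")) {
            return true;                        // no restriction applies
        }
        return "localhost".equals(remoteHost);  // Allow from localhost
    }

    public static void main(String[] args) {
        System.out.println(allowed("/orabpel/private/Payment/1.0", "localhost"));
        System.out.println(allowed("/orabpel/private/Payment/1.0", "10.1.1.5"));
        System.out.println(allowed("/orabpel/default/Order/1.0", "10.1.1.5"));
    }
}
```

Running this prints true, false, true: only the call to the private domain from a remote host is rejected.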
By default this is probably a good thing: if the process were secured, the BPEL Designer in JDeveloper would be prevented from reading the WSDL, which is required in order to develop the BPEL processes that call the "private" processes. Of course you can come up with variations of this approach; for example, you could do development against a mid-tier install and configure the HTTP directive to only allow certain developers (or at least their machines) to access processes deployed to a particular domain.

Conclusion

On its own this isn't a perfect approach to securing a BPEL process, but then it's not intended to be. As a relatively simple way of creating "private" BPEL processes, where it is clear that this is the intention, I believe it is certainly one way of achieving that. I would be interested in hearing any comments or suggestions on this approach, or any alternatives.

########

Oracle BPEL and ESB

Hi,

The goal of this blog is to share my personal views and knowledge of the Oracle product named Oracle BPEL Process Manager.

As I see it, many customers are moving to a Service Oriented Architecture (SOA), and Oracle's BPEL product is often a good fit. With the upcoming release, which adds an Enterprise Service Bus (ESB), the product will be named Oracle SOA Suite and will cover the complete SOA functionality that customers expect.
Currently Oracle does not have a real ESB product. Oracle InterConnect can act as an ESB, but it is not integrated with Oracle BPEL PM.

Regards,

Manish

########

BPEL Correlation

Where Did I Put That Process?

A BPEL process is initiated and makes a call to an ERP system to raise a purchase order, generating a purchase order number. Later that purchase order causes another system to raise an invoice and send it to the BPEL process. How does the BPEL engine know which BPEL process instance should receive this invoice and process it? This is handled by something called correlation.

From the e-mails and phone calls that I receive, it appears a lot of people struggle with BPEL correlation. The questions seem to fall into two categories: why would I want it, and how do I do it?

What is Correlation?
Correlation is basically the process of matching an inbound message to the BPEL engine with a specific process instance. Normally this matching is hidden from us. Synchronous calls have no need of correlation because the conversation context is maintained on the stack or across a TCP connection. Consenting BPEL processes will usually correlate messages using WS-Addressing headers to pass around magic tokens that act like the session cookies in a web application.

Why Do I Need to Worry About It?
Well, most of the time you don't! As mentioned before, calling another BPEL process or using a synchronous call means you don't have to think about it. You do need to worry about it in the following situations, amongst others.

  • When using an asynchronous service that doesn't support WS-Addressing
  • When receiving unsolicited messages from another system
  • When communicating via files
In these cases we need to be able to tell BPEL to look at some content of the message in order to select the correct process instance to receive it.

How Do I Get the Right Process Instance?

BPEL provides a construct called a correlation set to allow for custom correlation. A correlation set is a collection of properties used by the BPEL engine to identify the correct process to receive a message. Each property in the correlation set may be mapped to an element in one or more message types through property aliases as shown below.

[Figure: correlation set properties mapped to message elements via property aliases]
Things to Remember About Correlation

I see some common misunderstandings about custom correlation, so let's knock them off now.
  • Only the process receiving the message needs to worry about correlation. As long as the sending service includes sufficient information in the message to correlate it with previous activities, there is no need for the sender to even be aware that correlation is occurring.
  • Correlation properties must be unique for the lifetime of the BPEL process that set them. Make sure that you can't have two processes working with the same correlation tokens; for example, using social security numbers to correlate an expense claims process would be a bad idea if an individual could kick off two separate instances of the process.
  • Properties can be made-up values or actual business identifiers such as purchase order numbers. They don't have to be strings; they can be any reasonable XML type.
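The points above can be illustrated with a small toy model. This is emphatically not the Oracle BPEL API, just a sketch of what an engine conceptually does with a correlation set: an initiating activity binds a property value to an instance, and a non-initiating receive uses the value extracted from the inbound message (via a property alias) to find that instance:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model (not the Oracle BPEL API) of engine-side correlation:
// a map from correlation property value to waiting process instance.
public class CorrelationSketch {
    private final Map<String, String> waiting = new HashMap<>();

    // initiate="yes": bind this instance to the property value.
    public void initiate(String propertyValue, String instanceId) {
        if (waiting.containsKey(propertyValue)) {
            // this is why correlation values must be unique per live process
            throw new IllegalStateException(
                "correlation value not unique: " + propertyValue);
        }
        waiting.put(propertyValue, instanceId);
    }

    // initiate="no": route an inbound message to the bound instance,
    // or null if no instance is waiting on this value.
    public String correlate(String propertyValue) {
        return waiting.get(propertyValue);
    }
}
```

For example, after initiate("PO-1001", "instance-42"), a later correlate("PO-1001") returns "instance-42", while a second initiate with "PO-1001" is rejected, mirroring the uniqueness rule above.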

A Quick Guide to Custom Correlation
Enough of the theory; how does it work in practice? Consider a process A that calls a process B, which calls a process C, which in turn calls back to process A. This is one of the scenarios (113) in the BPEL samples distributed with Oracle BPEL PM.
So we have a cycle A->B->C->A: three different asynchronous calls.

Note that only process A needs correlation, because only A receives more than one call. On the invoke from A to B we add a correlation on the correlation tab for the invoke in BPEL Designer. Here we create the correlation set and a property to correlate the exchange. We set the invoke to initiate the correlation, meaning that it will associate this process instance with the given value.

On the receive from C to A we add the same correlation set with its property, as we did for the invoke from A to B. This time, however, we mark the receive as not initiating the correlation, meaning that BPEL PM will use the property value to select the right process instance.

We now go to the BPEL structure pane in BPEL Designer and add the property aliases. We create two property aliases, mapping to appropriate elements in each message that will carry the same value in the message from A to B as in the message from C to A. Note that the elements can have different names and sit in different structures in the two messages, but they must contain the same value if correlation is to work.

At this point BPEL Designer has done almost everything. We need to manually edit the bpel.xml file and add an XML fragment to each partner link that will participate in the correlation.

Note that "correlationSet" is a fixed value. I have uploaded a sample of this process. Note that deploying it may be tricky due to the circular dependencies. How to deploy it is left as an exercise for the reader, but if the worst comes to the worst, deploy an empty version of process B, then deploy process A, then process C, and then the real process B.


########

Deploying BPEL Processes in a Highly Available Application Cluster

Introduction
Oracle's scalable application architecture, combined with the BPEL architecture, provides the elements needed to create a highly available, high-performance environment. This article describes a solution for deploying multiple BPEL processes into a highly available application cluster.

Overview

In general, the deployment of BPEL processes can be done as follows:

  • Using the Oracle BPEL Console.
  • Using Oracle JDeveloper 10g.
  • Using the obant tool.
  • Copying files into the domain's deployment area on the BPEL server.
The first three deployment mechanisms are based on deploying a single process to the BPEL server. The last mechanism can be used to deploy multiple files. While these solutions work well for most common environments, things become more complex when a clustered environment is in place: most of the previously mentioned deployments must be executed multiple times, which can lead to errors.

This article describes how to deploy BPEL processes to a clustered application server environment. In this scenario we have three (3) instances of Oracle Application Server 10g, each with Oracle BPEL PM installed. This is shown as an example in the next diagram.

[Diagram: three Oracle Application Server 10g instances, each running Oracle BPEL PM, forming a single cluster]
Each application server is added to a logical cluster. The cluster can be defined via Oracle Enterprise Manager 10g, as can adding the application server instances to it. It is assumed that the application servers use Oracle Database 10g for retrieving and storing their data, and that this database is made highly available via Oracle Real Application Clusters.

Solution
Define a logical application cluster and add each application server to this cluster. This can be done via Oracle Enterprise Manager 10g.
  • Create an application cluster, e.g. "BPEL_AS_CLUSTER".
  • Add each application server instance to this cluster.
After creating the cluster and assigning the application server instances, we have a cluster that contains three application server instances. Next, create an OC4J instance for this cluster, named for example "OC4J_BPEL_DEPLOY". This OC4J instance will be marked as not started. Creating the OC4J instance is done via Oracle Enterprise Manager 10g.

After creating the OC4J instance, each application server will have a local OC4J instance. The power of an OC4J instance, in other words a J2EE environment, is that it can deploy EAR and WAR files: creating an EAR or WAR file and deploying it to the application server results in the file being unpacked in each OC4J instance.

Using this mechanism for BPEL processes conforms to the industry standard of J2EE deployment.

Create an EAR file that contains all the BPEL jar files that must be deployed. For example, create an EAR file with the application name "bpel_deploy" containing a web application named "files". The names must conform to the EAR/WAR structure. In this example the EAR file contains a WAR file that in turn contains the BPEL jar files.

bpel_deploy.ear
  meta-inf/application.xml
  files.war
    web-inf/web.xml
    bpel_ProcessTaks_1.0.jar
    bpel_HelloWorld_1.0.jar
    bpel_CreditRatingService_1.0.jar
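The nested archive layout above can be sketched with the JDK's zip API. This is a hypothetical illustration only (entry contents are omitted); in practice you would build the EAR with Ant's war/ear tasks or a JDeveloper deployment profile:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Hypothetical sketch of assembling the bpel_deploy.ear layout using
// only the JDK's zip classes (an EAR/WAR is just a zip archive).
public class EarSketch {
    // Create an in-memory archive containing the given (empty) entries.
    public static byte[] archive(String... entryNames) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(bytes)) {
            for (String name : entryNames) {
                zip.putNextEntry(new ZipEntry(name)); // contents omitted
                zip.closeEntry();
            }
        }
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // files.war carries the BPEL suitcase jars
        byte[] war = archive("web-inf/web.xml", "bpel_HelloWorld_1.0.jar");
        // bpel_deploy.ear wraps the war
        byte[] ear = archive("meta-inf/application.xml", "files.war");
        System.out.println(war.length > 0 && ear.length > 0);
    }
}
```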

Deploy the EAR file to the OC4J_BPEL_DEPLOY instance of the cluster. The trick is that the EAR file will be deployed to each OC4J instance on each application server, which results in all the BPEL jar files being copied to all servers.

The deployment of the EAR file is done via Oracle Enterprise Manager 10g or via Oracle JDeveloper.
After this deployment, the BPEL servers must be made aware that BPEL jar files have been deployed. Making the BPEL servers aware of these jar files is done once, during the initial setup, by replacing the temporary directory for BPEL processes on each server with a symbolic link pointing to the directory where the BPEL jar files are extracted from the EAR file.

This can be done as follows (UNIX example):

    rm -rf ${ORACLE_HOME}/integration/orabpel/domains/default/tmp

    ln -s ${ORACLE_HOME}/j2ee/OC4J_BPEL_DEPLOY/applications/bpel_deploy/files \
          ${ORACLE_HOME}/integration/orabpel/domains/default/tmp
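The rm/ln trick above can be re-created in Java with java.nio, which makes the mechanism easy to see: delete the tmp directory and replace it with a symbolic link into the unpacked EAR. The directory names here are stand-ins for illustration, not real server paths:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Toy re-creation of the symlink trick: swap the domain's tmp directory
// for a symbolic link into the unpacked EAR, so the BPEL server reads
// its suitcase jars from the deployed location.
public class SymlinkSketch {
    public static Path swapTmpForLink(Path tmp, Path earFiles) throws Exception {
        Files.delete(tmp);                              // rm .../domains/default/tmp
        return Files.createSymbolicLink(tmp, earFiles); // ln -s .../bpel_deploy/files tmp
    }

    public static void main(String[] args) throws Exception {
        Path base = Files.createTempDirectory("bpel");
        // stands in for .../applications/bpel_deploy/files
        Path earFiles = Files.createDirectory(base.resolve("files"));
        Files.createFile(earFiles.resolve("bpel_HelloWorld_1.0.jar"));
        // stands in for .../domains/default/tmp
        Path tmp = Files.createDirectory(base.resolve("tmp"));

        swapTmpForLink(tmp, earFiles);
        // the jar is now visible through the old tmp path
        System.out.println(Files.exists(tmp.resolve("bpel_HelloWorld_1.0.jar")));
    }
}
```

After the swap, anything deployed into the EAR's files directory is immediately visible through the server's tmp path, which is the whole point of doing the link once at initial setup.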

Each time a new EAR file is deployed with all the BPEL jar files, the BPEL servers will see that new BPEL processes are ready to be applied in the BPEL server environment.

Conclusion
Using the EAR/WAR mechanism of J2EE for the deployment of BPEL jar files results in greater compliance with industry standards. The deployment is less complex to maintain in a highly available environment, and the risk of errors during deployment is reduced. Deploying the BPEL files is done in the same way as for a normal J2EE application, over the HTTP or HTTPS protocol, so no special network configuration is needed, and multiple BPEL processes are deployed at once.

########