Monday, 23 February 2015

BPM Express or Standard / IBM Process Designer / BPM Developer Interview Questions

Q: What is BPD?

To model a process, you must create a business process definition (BPD). A BPD is a reusable model of a process, defining what is common to all runtime instances of that process model.

A Business Process Definition (BPD) can include a lane for each system or group of users who participate in a process. A lane is the container for all the activities to be carried out by a specific group of users or by a system.

Q: What is a gateway? How do you converge or diverge process flows? What gateways are available and when do you use each?

Gateways control the divergence and convergence of a sequence flow, determining branching and merging of the paths that a runtime process can take.

You can model the following types of gateways in your process diagram:

Parallel (AND):
Use a parallel, diverging gateway when you want the process to follow all available paths.

Use a parallel, converging gateway when you want to converge all available paths.

Inclusive (OR):
Use an inclusive, diverging gateway when you want to follow one or more available paths based on conditions that you specify.

Use an inclusive, converging gateway downstream of an inclusive diverging gateway to converge multiple paths into a single path after all the active paths have completed their runtime execution. The inclusive join looks upstream at each path to determine whether the path is active; if it is, the join waits. Otherwise, it passes the token through without waiting.

Note: Inclusive gateways can follow a maximum of n–1 paths. So, if you model a conditional split with three paths, the process can follow no more than two of those paths.

Exclusive (XOR):
Use to model a point in the process execution where only one of several paths can be followed, depending on a condition, or a point where the token from one of several incoming paths is passed through the gateway.

Event:
Use to model a point in the process execution where only one of several paths can be followed, depending on events that occur. A specific event, such as the receipt of a message or a timer event, determines the path to be taken. An event gateway must be modeled in a certain way, as described in Modeling event gateways.

Be aware of the following when using gateways:

  • After you drag a gateway from the palette to your process diagram, you can choose any of the available gateway types.
  • When you model inclusive and exclusive gateways, if all conditions evaluate to false, the process follows the default sequence flow. The default sequence flow is the first sequence flow that you create from the gateway to a following activity, but you can change the default sequence flow at any time.
Q: What are the different task types?

User Task

  • User tasks must be completed by process participants and are associated with Human services by default. 
  • For cases where you want a user to start the service but no additional user involvement is required, you can also choose a user task type and associate a service with it, such as an Integration or Advanced Integration service.
  • Process Designer automatically creates the required user implementation when you drag process components onto a diagram. You can also choose User Task and an associated service for an activity implementation, as described in Implementing activities.
System Task
  • System tasks must be completed by an automated system or service and are automatically run without a need for user initiation regardless of the type of lane in which they are defined in a BPD diagram. 
  • When you drag an Ajax service, General System service, Integration service, or Advanced Integration service from the library to a BPD diagram, Process Designer automatically creates an activity with a System task type, regardless of whether the service is dragged to a system lane or to a participant lane. 
  • Dragging an activity from the palette to a system lane in a BPD diagram automatically creates an activity with a System task with the Default System service selected. System tasks that you place in a non-system lane are also run by the system. 
Decision Task
  • Decision tasks are useful when you want a decision or condition in a business rule to determine which process implementation is started. 
  • When you drag a Decision service from the library to a BPD diagram, Process Designer automatically creates an activity with a Decision task type.
Q: What is sub-process?

A subprocess represents a collection of logically related steps contained within a parent process. You can view a subprocess as a single activity, providing a simplified, high-level view of the parent process, or you can drill into the subprocess for a more detailed view of its contents.

Subprocesses can contain swimlanes that are distinct from the parent process. For example, activities in your subprocess can be carried out by a set of participants that is different from the set of participants that carry out the activities in the parent process.

Like other activities, subprocesses can be configured to run multiple times within the execution of the parent process by configuring looping behavior on the subprocess activity element in the parent process.

Q: What are the different subprocess types?

There are three types of subprocesses that you can model in a BPD. Their characteristics are described below.

1. Subprocess

A non-reusable subprocess that exists only within the parent process

Characteristics
  • Each subprocess must contain at least one start event with an implementation type of None.
  • Activity names must be unique with respect to the top-level process activities, and all other subprocesses and event subprocesses under the same top-level process.
Variable Scope
  • Inherits variables from the parent process and can contain local private variables visible only within the subprocess.
  • Variable names declared in a subprocess cannot be the same as variable names declared in any of its parent processes. If there are multiple layers of embedding, with subprocesses contained within other subprocesses, variable names must be unique throughout the entire subprocess hierarchy.
2. Linked process

A call to another reusable process.

Characteristics

The process called by the linked process activity can contain multiple start events, but must contain at least one start event with an implementation type of None.

Variable Scope

Variable data is local to each process, therefore data mapping is required to pass data into and out of the linked process.

3. Event subprocess

A specialized type of non-reusable subprocess that is not part of the normal sequence flow of its parent process, and which might occur zero or many times during the execution of the parent process.

Characteristics

Must contain a single start event, which can be one of:
  • Timer
  • Message
  • Error
Event subprocess execution can interrupt the parent process or run in parallel with it.

Activity names must be unique with respect to the top-level process activities, and all other subprocesses and event subprocesses under the same top-level process.

Boundary events are not supported on an event subprocess.

Variable Scope
  • Inherits variables from the parent process and can contain local private variables visible only within the subprocess.
  • Variable names declared in an event subprocess cannot be the same as variable names declared in any of its parent processes. If there are multiple layers of embedding, with event subprocesses contained within other subprocesses, variable names must be unique throughout the entire subprocess hierarchy.
Q: How do you assign activities to users?

For any activity with a BPM service implementation, you can designate the users who receive the runtime task by using the Assignments page in the properties for that activity.

  1. In the Designer view, click an activity in a BPD diagram to display its properties.
  2. Go to the Assignments page in the properties view.
  3. From the Assign To list, choose one of the following options:

Last User in Lane
  • Assigns the runtime task to the user who completed the activity that immediately precedes the selected activity in the swimlane. 
  • Do not select this option for the first activity in a lane unless the activity is a service in a top-level BPD and a Start Event is in the lane. In this case, the runtime task is routed to the user who started the BPD.
Lane Participant
Assigns the runtime task to the participant group associated with the swimlane in which the selected activity is located (the default selection).

Routing Policy
Assigns the runtime task according to the policy that you establish.

List of Users
Assigns the runtime task to an ad hoc list of users.

Custom
  • Assigns the runtime task according to the JavaScript expression that you provide in the corresponding field. To select one or more variables for your expression, click the variable selection icon next to the field. 
  • The JavaScript expression produces results such as USER:<user_name>, ROLE:<group_name>, or PG:<participant_group>, where user_name is the name of an IBM® BPM user (such as tw_author), group_name is the name of an IBM BPM security group (such as tw_authors), and participant_group is the name of a group of users in your enterprise.
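
For example, a minimal custom assignment expression might look like the following sketch. The variable tw.local.managerId and the fallback group tw_authors are hypothetical; substitute the names from your own process:

// Route to the manager captured earlier in the process, or fall back
// to a security group if no manager was recorded.
(tw.local.managerId != null && tw.local.managerId != "")
    ? "USER:" + tw.local.managerId
    : "ROLE:tw_authors";
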
Q: What are Coaches?

Coaches are the user interfaces for human services.

There are two types of user interfaces for human services: dashboards and task completion. To build either type of user interface for human services, you use Coaches.

When a Coach is a dashboard user interface, users can run it as a stand-alone user interface at any time. The users access it through the Process Portal.

When a Coach is a task completion user interface, it is part of the human service flow. At run time, when the flow enters the Coach, the user sees the user interface that is defined for that Coach. The user interface consists of HTML code that is displayed in a web browser. The flow leaves the Coach when a boundary event occurs. A Coach can have multiple exit flows with each one associated with a different boundary event.

Q: Explain the difference between Coaches and Coach Views.

Coaches contain one or more Coach Views. The Coach Views provide the user interface elements and layout for the Coach.

  • Each Coach View can contain one or more other Coach Views, which creates a parent-child relationship between these Coach Views. 
  • At run time, the parent Coach View is rendered as a <div></div> tag that contains a nested <div></div> tag for each child Coach View. 
  • Each Coach View can also have a binding to a business object, CSS code to control its visual layout, and JavaScript to define its behavior.

Coach Views are reusable so you can create a library of common user interfaces and behavior. You can combine these common user interfaces to rapidly develop new Coaches.

The Coaches toolkit that is included with IBM BPM contains a set of common user interfaces that are called stock controls. You can include these stock controls when you are creating your own Coach Views.

Q: What are differences between Coaches and Heritage Coaches?

1. Coaches can contain multiple Coach Views. Coach Views are reusable collections of user interface elements, can be bound to a data type, and can be shared between Coaches. In Heritage Coaches, by contrast, all UI elements need to be recreated.

2. Coaches have a Web 2.0 appearance and behavior and have a client-side data model, that is, data can be refreshed without a full page refresh. They use Dojo 1.7.3.

3. Instead of the one-button mechanism of Heritage Coaches, Coach Views use named boundary events. Programmers use boundary events for actions such as data updates with the server and transitions to other Coaches or services.

4. Coaches support collaboration while Heritage Coaches do not. More than one person can work on the same Coach instance at the same time in their own browsers.

5. The control ID of a view-based Coach is different from the control ID of a Heritage Coach. The control ID of a Heritage Coach is the div node ID. This is not the case in view-based Coaches because Coach Views are reusable and you can have multiple views in a Coach.

In view-based Coaches, the control ID is the value of the data-viewid attribute of a <div></div> tag. By using the data-viewid attribute, View developers can locate the nested View because data-viewid is unique within its parent or enclosing view.
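
For example, from a parent Coach View's event handler you could locate a nested view's DOM node by its control ID. This is only a sketch: the control ID "addressView" is hypothetical, and it assumes this.context.element (the root DOM node of the current view instance) from the Coach View context API:

// Find the child view's element by its data-viewid attribute, scoped to
// this view so other Coach View instances on the page are not matched.
var childNode = dojo.query('[data-viewid="addressView"]', this.context.element)[0];
if (childNode) {
    childNode.style.backgroundColor = "#ffffcc"; // e.g., highlight the nested view
}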

A Coach cannot contain Heritage Coach elements, and a Heritage Coach cannot contain Coach Views. That is, a user interface must be either a Coach or a Heritage Coach, not a mix of the two.

Q: How do you perform validation on Coach Views?

To validate the data that is in the Coach before the flow proceeds to the next step in the service flow, add a validation node to the flow. The validation node can be a nested service or a server script. The server script is the simpler implementation although the nested service provides greater flexibility.

Example server script:
tw.local.validate = new tw.object.CoachValidation();

if (tw.local.application.name == ""){
    tw.system.addCoachValidationError(tw.local.validate, "tw.local.application.name",
    "The name cannot be empty.");
}

Q: How do you enable JavaScript debugging for the Coaches?

For debugging purposes, you can set your Coaches and Coach Views to use the readable versions of Dojo and the Coach framework JavaScript.

  1. Open the administrative console and click Resources > Resource Environment > Resource environment providers.
  2. On the Resource environment providers page, click Mashups_ConfigService.
  3. Under Additional Properties, click Custom properties. The list of custom properties opens.
  4. Click isDebug, change the Value field to true, and then click OK.
  5. Save your changes to the master configuration.
  6. Restart the application server instance.
Q: How do you generate a unique ID for a Coach View at runtime?

In some situations you might want to use the ID attribute for your DOM elements within a Coach View. However, all DOM IDs must be globally unique. For example, during collaboration the default highlighting behavior is implemented based on a unique DOM ID. To ensure a unique ID, you can use the $$viewDOMID$$ placeholder keyword. At run time, this keyword is replaced by the Coach View DOM ID.
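
For example, in a custom Coach View's layout HTML you might write something like the following (the "_canvas" suffix is a hypothetical name for an element the view's JavaScript will look up later):

<!-- $$viewDOMID$$ is replaced at run time with the unique DOM ID of this
     Coach View instance, so the element ID stays globally unique even
     when the view is used several times on one page. -->
<div id="$$viewDOMID$$_canvas"></div>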

Q: How do you fire a boundary event programmatically?

this.context.trigger(callback);
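
A short sketch of calling this from a custom Coach View's JavaScript, for example inside a click handler registered in the view's load event (the callback and log message are illustrative):

var view = this;
view.context.trigger(function () {
    // Optional callback, invoked after the trigger completes.
    console.log("Boundary event fired");
});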

Q: How do you access a Child Coach view?

context.getSubview(viewId, requiredOrder)

Accesses a subview instance given the subview ID. This method is similar to context.subview[viewid] except that the return value is an array of subview instances.
  • viewId(String) - the view ID or control ID of the subview
  • requiredOrder (boolean) - (optional) indicates whether the array returned needs to maintain the same order as in the DOM tree. The default value is false.
The call this.context.getSubview("viewid") returns an unsorted array of subview objects, and this.context.getSubview("viewid", false) returns exactly the same array. Only this.context.getSubview("viewid", true) differs: it returns an array of subview objects whose order matches the order of the DOM nodes in the DOM tree.
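
For example (a sketch; the control ID "lineItem" is hypothetical):

// Get every instance of the repeating child view "lineItem",
// in the same order as their DOM nodes, and count them.
var rows = this.context.getSubview("lineItem", true);
console.log("lineItem instances in DOM order: " + rows.length);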

Q: How do you associate a Coach View with data and configuration options programmatically?

Please refer to http://www-01.ibm.com/support/knowledgecenter/SSNJFY_8.0.1/com.ibm.wbpm.wle.editor.doc/develop/topics/rbindingdata.html?lang=en 
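
In outline, a Coach View reads and writes its bound data through this.context.binding and reads its configuration options through this.context.options. A minimal sketch, assuming an option named maxLength has been declared on the view (the option name is hypothetical):

// Guard against the view having no binding, then read the bound value
// and write a modified value back; setting the binding also notifies
// other views bound to the same data.
if (this.context.binding) {
    var current = this.context.binding.get("value");
    this.context.binding.set("value", current + "!");
}
// Read a configuration option declared on the Coach View.
var max = this.context.options.maxLength.get("value");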


Q: What are undercover agents (UCAs)?

An undercover agent is started by an event. The event can be a message event, a content event, or a timer event that is the result of a specific schedule.
  • Message events can originate from a Business Process Diagram (BPD), from a web service that you create, or from a message that you post to the JMS listener.
  • When an undercover agent executes, it invokes an IBM Business Process Manager service or a BPD in response to the event.
  • When you include a message event or content event in a BPD, you must attach an undercover agent to the event. For example, when a message event is received from an external system, an undercover agent is needed to trigger the message event in the BPD in response to the message.

Q: How do you enable UCA to start a BPD?

If you want to run the startBpdWithName application programming interface (API) to start a BPD instance inside an undercover agent, set the <enable-start-bpd-from-uca> property to true in the 100Custom.xml file or another override file. Restart the product, and check the TeamworksConfiguration.running.xml file to ensure that the setting has the appropriate value. The property is set to false by default, and if you don't change it, you might have errors that prevent the BPD from starting.
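
A sketch of the override in 100Custom.xml. The enclosing element hierarchy below is an assumption; mirror the elements that surround this property in your own 99Local.xml:

<properties>
    <server merge="mergeChildren">
        <!-- Assumed location; copy the real parent elements from 99Local.xml -->
        <enable-start-bpd-from-uca merge="replace">true</enable-start-bpd-from-uca>
    </server>
</properties>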


Q: What is tracking? How do you enable it? What are the different ways of tracking? What happens when you enable tracking?

To create customized and third-party reports in IBM® BPM, you need to identify the data to track and send that data to the Performance Data Warehouse.

To track data in a business process definition (BPD), use autotracking, tracking groups, or both.
  • Autotracking 
    • automatically captures data from tracking points at the entry and exit of each item in a BPD (for example, services, activities, and gateways). 
    • To enable autotracking, make sure that Enable Auto Tracking is selected under the Tracking tab of the Business Process Diagram. (This is the default.)
  • Tracking groups 
    • provide more control over tracked data. For example, use tracking groups to track a selected group of process variables across multiple BPDs or process applications and to store tracking points for a timing interval. 
    • To enable tracking groups, make sure that Enable tracking is selected under the Overview tab of the Business Process Diagram. (By default, the checkbox is not checked.) 
    • Note that the Enable tracking setting does not apply to services with tracking points. Tracking data is always enabled when services contain tracking points.
You can take advantage of both tracking methods in a single BPD. If you use both autotracking and tracking groups, you can create a timing interval.

After you configure data tracking for your BPD, and each time you subsequently update your data tracking requirements, you must send the tracking definitions to the Business Performance Data Warehouse. 

When you send tracking definitions, either directly or as part of a snapshot deployment, the Business Performance Data Warehouse establishes the structure in its database to hold the data that is generated by the Process Server when you run instances of your processes. 

In IBM BPM, these tracking requirements are called definitions because they establish the database schema in the Business Performance Data Warehouse to accommodate the tracked data generated by the Process Server.

Q: How do you analyze the time elapsed between the activities in process?

If you want to analyze the amount of time that elapses between certain steps in your process, you can add tracking points to your BPD and then create a timing interval to capture the duration between defined start and end points. When you create a timing interval, you can create custom reports that enable you to calculate the duration of a process, or compare the duration of several processes.

Do the following tasks before creating a timing interval:
  • Enable autotracking
  • Add tracking points to the business process definition
  • Create a tracking group to hold the timing interval data (make sure to add each tracking point to the tracking group you created)
Q: What are tracks? How is it different from a versioning system like CVS?

Process Center tracks the changes in the process applications using Snapshots.

Snapshots: 
  • Record the state of the items within a process application or track at a specific point in time. 
  • From the Process Center console, you can create snapshots of your process applications. 
  • You can also deploy particular snapshots of your process applications on the Process Servers in staging, test, and production environments.
Tracks:
  • Optional subdivisions in a process application based on team tasks or process application versions. 
  • You can determine if additional tracks are necessary for each process application and, if so, enable them at any time
  • Typically, tracks are created from a production snapshot for maintenance purposes.
Difference from versioning systems like CVS.

Unlike typical versioning systems, tracks and snapshots cannot be merged at a later point in time, which makes parallel development challenging.



Sunday, 22 February 2015

BPM Advanced / Integration Designer / Process Server Interview Questions


Q: What are the differences between long-running processes and microflows? When do you use each of them?

Use a long-running process in the following conditions:

  • Your process requires more than one transaction.
  • Your process needs to stop at some point and wait for external input, either in the form of an event or a human task.
  • You have elements in your process that you would like to run in parallel.

Use a short-running process (microflow) in the following situations:

  • A microflow is a process that is contained within a single transaction. 
  • Because it runs automatically from start to finish, it is ideal for situations where the user expects an immediate response. 
  • An example of a microflow is a database query. 
  • Microflows cannot be used for processes involving human tasks.
  • If you have a short series of steps to model and want them executed very quickly in the runtime environment, use a microflow.

Performance: short-running processes offer much better performance, so use them wherever feasible.

Q: What is Human Task?
A human task is a unit of work that involves a person. Examples would be a review process, in which a manager must provide final approval, or a follow-up telephone call with a client.

The definition of a human task includes the following information:

  • who can perform the task
  • what needs to be done
  • what happens when the task takes too long
  • how the task will be done


Q: What are different implementation mechanisms or subcategories of Human Tasks?

Stand-alone: A stand-alone task exists independently of a process, and implements human interaction as a service that can be wired to any other component.

There are two instances in particular when you should model your human task as a stand-alone task:

  • The task provides just another service.
  • You intend to replace the stand-alone task at a later stage and do not want to change the component to which it is wired.

Inline: An inline human task is a piece of a larger BPEL process that must be performed by a person.

You should model your process with an inline task when:

  • You need information from the process logic to run the human task. Although information from the process can also be modeled into the input for a human task, the main reason to use an inline human task is that it has direct access to the process context without the need to explicitly model the required information into the input message.
  • You want to define authorization rights on specific activities.

Whether a task is inline or stand-alone is defined by the manner in which you connect it to your BPEL process.

Q: What are different types of Human Tasks?




  • To-do task - a service schedules a piece of work for a person to perform.
  • Invocation task - a person uses a service.
  • Collaboration task - one person assigns work to another person.
  • Administration task - a person is granted administrative powers over an activity or process.
Q: What is difference between the parallel flow and generalized flow activities in BPEL process?

Parallel Flow

  • A parallel process (or flow) is a collection of other process activities that are all nested within a parallel activity. 
  • The nested activities run sequentially in an order that is dictated by links and transition conditions (when no links are present, all activities are executed concurrently). 
  • Parallel activities are very versatile and can add depth to a long-running process.

Generalized Flow

  • A generalized flow activity is very similar to a parallel activity in that you can nest other process activities within it, and then control the execution order of those activities through links. 
  • The generalized flow activity differs in its ability to use conditional links to loop back to previous activities in the sequence.

Q: How do you assign people to a process (people assignment)? What are the pre-defined roles for human tasks?

When you define a human task, you must stipulate which people are eligible to view, initiate, perform, and administer the task. This step is known as people assignment. You can assign work to a named individual or to a member of a certain group.

There are pre-defined roles for human tasks which you can assign people or groups to. Members of a role share the same level of authority. The roles are:

  • Administrators - have the authority to perform high level duties like suspend, terminate, restart, force-retry, and force-complete.
  • Potential creators - can create an instance of the human task, but cannot start it.
  • Potential starters - have the authority to initiate an existing instance. The starter role is subtly different from that of creator, and although a creator can create a new instance, only a starter can start it.
  • Potential owners - can claim, work on and complete tasks.
  • Editors - can work with the content of a task, but cannot claim or complete it. For example, an editor can receive the work item to review a document and add comments, but an editor is not able to finish the task.
  • Readers - are allowed to view tasks, but cannot work on them. This role can be used in situations where someone needs to monitor a task without taking any action in it.

Q: How do you run different versions of the same process at run-time or versioning BPEL processes?

A version is a copy of an existing process that is slightly different from the original. To understand how this differentiation takes place, you must first understand what identifies a version of a business process.
If the module that contains the BPEL process is not associated with a process application or toolkit, then a version of a business process is identified by the following properties:

  • same process component name
  • same target namespace
  • different valid-from date

If the module that contains the BPEL process is associated with a process application or toolkit, then the following properties constitute a process version:

  • same process component name
  • same target namespace
  • different snapshot ID

In addition, it is important to note the following requirements:

  • Correlation set specifications of different process versions need to be the same.
  • Interface specifications of different process versions need to remain the same. If you change the WSDL definition in any way, you need to change the BPEL process to incorporate those changes. If the WSDL has changed, you will need to load the new WSDL into IBM® Integration Designer, because when you deploy the module to the runtime environment, it requires the service definitions in the consumer and provider to match.


Of critical importance, the two versions must have the same name and namespace, but different valid-from dates or snapshot IDs. In addition, where applicable, they must also have the same interface and correlation set specifications.

  • It is with different valid-from dates that multiple versions of the same BPEL process are distinguished.
  • In practice, at run time the process engine could use a new version of a process that is set to become valid today, even if an older version of that process is still being used.

Q: When do you decide to create a new version of BPEL process?
Here are some possible examples of when you would create a version of a BPEL process:

  • Your BPEL process is likely to be modified over time, but its callers will not change. In such a case, you want the existing callers to seamlessly pick up the newest version of the process the moment it becomes effective.
  • You have a solution where multiple versions of the same BPEL process must coexist. Although the solution as a whole cannot be uninstalled and reinstalled, you will need to be able to deploy new versions of the process in such a way as to ensure that new instances use the latest version and, wherever possible, existing instances also migrate to the latest version.


One important consideration is whether instances of the process that are already running should move to the new version. If you want to migrate running instances you need to create a Migration Specification. If you are content to allow existing instances to use the old version you do not need to create a migration specification.

Q: How do you handle faults or exceptions in BPEL process?

A fault is any exceptional condition that can change the normal processing of a BPEL process and, if not handled, can lead to undesirable conditions or results. A well-designed process is one that anticipates possible faults with fault handlers that are designed to lead to predictable outcomes.

Here are some of your options for dealing with faults that are available in the BPEL process editor:

  • Use a terminate activity to stop the execution of a process or an activity so someone can step in and make necessary repairs.
  • Use a reply activity with a fault name associated with it so it will respond with a fault.
  • Use a throw activity to signal an internal fault.
  • Use a fault handler to catch a fault and attempt to deal with it.
  • Use compensation to roll back or undo a process or an activity that failed after it was committed.

Q: What is Correlation or Correlation Sets? What is purpose of them?

  • Correlations are used in runtime environments where there are multiple instances of the same process running. 
  • A correlation set for a BPEL process consists of a name and one or more properties. Correlation sets are used to distinguish instances of the same process at run time.
  • Correlation sets allow two partners to initialize a BPEL process transaction, temporarily suspend activity, and then recognize each other again when that transaction resumes.
Q: What is Compensation?

Compensation is the means by which operations in a process that have successfully completed can be undone to return the system to a consistent state if an error occurs.

You can compensate a BPEL process in two ways:

  • Save the properties of the individual parts of a process so that they can be restored if the process cannot be committed and must be rolled back (compensation pairs).
  • Use a compensation handler to return a failed process to a balanced state after a fault is thrown when the parent activity has already been committed.

Q: How do you Compensate long running process and short running process (microflow)?

  • To set up compensation for a microflow, store the original properties for each of the invoke activities within the microflow so the original data can be restored if the process cannot be committed and must be rolled back.
  • To set up compensation for a long-running process, you need to specify how to compensate each transaction if the process fails.
Q: What are Escalations?

If a task is overdue, it needs to be escalated. Use escalation properties to define when a task must be completed and the actions to take if deadlines are missed.
There are three possible states for which an escalation can be configured:
  • Ready - when a human task is in a ready state, it is waiting to be claimed. You can configure an escalation to trigger if it sits unclaimed for too long.
  • Claimed - if a staff member has claimed a task, but takes longer than the specified period of time to complete it, an escalation is triggered and another staff member is notified, for example, the manager of the claimant.
  • Subtask started - a subtask is an additional unit of work that is split out from a parent task. If the subtask fails to complete within a specified period of time, the parent task is escalated and indicates that it is still waiting on the subtask.
Q: What are the transactional behavior options for the activities inside a BPEL process?
  • Commit before: preceding transaction commits before this activity starts
  • Commit after: transaction commits immediately after this activity is completed
  • Participates: activity runs in existing transaction
  • Requires Own: activity is isolated in its own transaction.
Q: How do you enter the input data required by the process?

Use the Business Process Choreography Explorer, which is accessed using a web browser.

Q: What is Business State Machine? and When do you use BSM vs BPEL process?

A business state machine (BSM) is an alternative to a BPEL process. If the process can be modeled in terms of a life cycle, that is, state transitions driven by events, and typically has loop-based logic (returning to previous states), then choose a BSM.

  • A BSM, like other components in Process Server, is an SCA component. Its interface is defined using WSDL. 
  • Any number of WSDLs can be predefined and associated with the BSM as it is developed, and all of the operations defined in these WSDLs must be used in the BSM.

Usage of BSM vs BPEL process:

  • In cases where the state machine is, and will remain, simple, it may be better to use a BPEL process. For example, a state machine without any loops (returning to an earlier state) would probably make more sense developed as a process. 
  • However, if there is a lot of looping, the state machine provides a much more natural way to develop, debug, and monitor.
  • If you have performance constraints, avoid a BSM, because it is internally converted into a BPEL process by the Process Server runtime.


Q: What is rule group? How do you implement rule groups?

A business rule is a condition that must be satisfied when a business activity is being performed. A rule can enforce a business policy, make a decision, or infer new data from existing data.

  • A rule group is the highest level implementation component of a business rule. The rule group acts as a gateway to the business rule. 
  • The rule group defines the interface and operation that the business rules will implement. 
Rule logic is implemented using rule sets and decision tables within rule groups.
  • Rule sets and decision tables cannot be invoked directly and may only be invoked through a rule group. 
  • One of the most important functions of the rule group is to define a date and time range during which a specific rule set or decision table will be used. An example would be a regularly scheduled July sales event.


Q: Compare rule sets and decision tables.

Rule sets model the typical if-then kind of business rules. If you need to model rules on values from multiple input combinations, then decision tables are preferred.

Q: What is use of templates in rule groups?

To create business rules that are dynamically modifiable at run time, the business rules must be based on templates. Decision tables can also be made dynamically modifiable at run time by basing the conditions or actions of the decision table on templates.

A template defines what parts of a deployed business rule can be modified by an authorized user. The template uses parameters and constraints to provide dynamicity. Parameters and constraints define which values can be modified and by how much.

Q: What do you use to modify the business rules dynamically?

The Business Rules Manager is a Web client that allows dynamic control of parameter values in template-based rules that are deployed to the IBM Process Server.

If you build your rules using templates you provide runtime control over aspects of the rule. You also provide constraints so that the rules cannot be misused. The runtime administrator uses the Business Rules manager to adjust the implementation of the rule.

You access the Business Rules manager using a web browser. The default URL for accessing the business rules manager is as follows. The URL may vary according to your environment.
http://hostname:port/br

Thursday, 11 December 2014

Integrating WebService with Attachments in IBM Integration Designer


First we will look at the different types of web service attachments, and later we will show how to work with the preferred way of handling them in IBM Integration Designer (formerly WebSphere Integration Developer).

Types of attachments

There are four types of attachments: 

  1. MTOM
  2. referenced
  3. swaRef
  4. unreferenced


MTOM attachments 

  • use the encoding specified by the SOAP Message Transmission Optimization Mechanism (MTOM) (http://www.w3.org/TR/soap12-mtom/). 
  • enabled through a configuration option in the import and export bindings as described in Enabling MTOM support in JAX-WS bindings 
  • should be used to encode attachments for new applications. 
  • MTOM optimization is only available for the xs:base64Binary data type.

Referenced attachments 

  • referenced from the SOAP body; i.e., the attachment is defined in the WSDL portType schema for the input or output message of the operation. 
  • the reference appears in the SOAP body as an element that references the attachment using the attachment's content-Id. 
  • To support SOAP messages with referenced attachments for exports, 
    • the interface operations must use the document literal non-wrapped binding style or the RPC literal binding style 
    • the input or output in the operation containing the reference must be binary

swaRef attachments

  • A SOAP with attachment (swaRef) type attachment uses the Web Services Interoperability Organization (WS-I) Attachments Profile. 
  • To pass an attachment as a swaRef type using the WS-I Attachments Profile, follow these steps:
    • Add the WS-I attachment profile to your module. Open Dependencies in the Business Integration view and in the Predefined Resources section select WS-I attachment profile 1.0 swaRef schema file. Save your work.
    • To add an attachment in a business object, create a business object and for the type select swaRef, which will be available since you added the schema previously.
    • To add an attachment as a type for an input or output to an operation, create the operation in the interface. 
      • Add an input or output to the operation. 
      • If using the business object created previously then select the business object as the type to your input or output. 
      • If you are not using the business object created earlier, add another input or output and select swaRef as the type.
    • Generate the binding, deploy your application, and run it.

Unreferenced attachments 

  • do not have a reference from the SOAP body to the attachment.
  • not modeled in the WSDL portType of messages and do not appear in the business object representation. 
  • can only be accessed through the Service Message Object (SMO). 
  • each attachment appears as a separate element in the attachments list of the SMO

Bindings and protocols that can be used with attachments


Only the Java API for XML Web Services (JAX-WS) based binding supports attachments in WID version 6.2.0.1 or higher. 
Only the SOAP 1.2/HTTP or SOAP 1.1/HTTP transport protocols can be used with attachments.

Working with MTOM Attachments in Integration Designer

Of the different ways to handle web services with attachments, the MTOM approach is the preferred one. This mechanism improves the transmission efficiency of large binary attachments in SOAP messages.

Enabling MTOM support in JAX-WS bindings:


  • SOAP messages that use the MTOM specification can be sent and received by enabling support in IBM Integration Designer using JAX-WS import and export binding properties.
  • On the Properties view for the export or import, click the Binding tab and select the MTOM check box. This enables the optimization of the transmission of binary data in the SOAP attachment.
  • Control the size of attachments to be sent using MTOM with the Threshold field: 
    • an optional integer value 
    • indicates the minimum size of binary data to be sent using MTOM 
    • attachment size >= threshold: the attachment is sent using MTOM
    • attachment size < threshold: the attachment is inlined in the XML document
    • 0 indicates all attachments are sent using MTOM


MTOM will not work in following scenarios:


  • when the business object parsing mode is set to eager parsing (support is limited to a JAX-WS web service using business object lazy parsing mode).
  • when using a JAX-WS handler (support is limited to a JAX-WS web service which does not use any JAX-WS handlers). The JAX-WS handlers specified on the web service should be removed.
  • when using a service gateway mediation module, the Data Handler primitive cannot be used with MTOM messages. If direct access to the MTOM attachment data is required within the module, then a non-service gateway module must be used.
  • when MTOM is enabled on a JAX-WS export binding, all responses will be sent using MTOM. If some clients do not support MTOM, use two JAX-WS exports - one with MTOM enabled and one with it disabled and ensure client applications use the correct endpoint address.
  • when using the JAX-RPC binding for SOAP/HTTP or SOAP/JMS.



Tracing and Logging Components in WebSphere ESB

Compare the Logging vs Tracing

Tracing components are usually used for short term and temporary debugging purposes with a limited amount of data processed.
Logging components are for long-term data collection and storage. Their design is more performance-oriented for production use.

Tracing Components:

Given below are different tracing component descriptions and when to use each of them.

  1. Cross component tracing (XCT): XCT lets you track messages through the SCA modules and components
    • completely enable or disable it at runtime without the involvement of the development environment. 
    • To enable, use the toolbar on the Server Logs view, or in the administration console under Troubleshooting > Cross-Component Trace State 
    • makes it applicable for test or production environments
  2. Integrated Test Client: Use this feature to test components or modules and view the data flow through mediation primitives. You can use it in two ways:
    • Test component/export/import: This invokes an operation using a message entered in the GUI editor.
    • Attach mode: This traces the execution of requests not initiated by the ITC. It is useful when a mediation is driven by an external client, such as SOAPUI, or through messaging, such as WebSphere MQ messages being processed by an MQ/JMS export.
    • Advantages:
      • tests (including input messages) can be stored and re-used
      • enter input as XML data (or from file) or graphically using BOs
  3. Trace primitive: Use this primitive to write trace messages to the server logs or to a file.
    • used for debugging mediation flows
    • trace messages contain SMO and other info
    • not typically expected to be used in a production system because the MessageLogger or EventEmitter primitives are more suitable.
    • following properties are configurable:
      • Enabled: Similar to other primitives, it enables or disables the primitive's functionality. It is a promotable property.
      • Destination: This specifies the location where the trace statement should be added, either Local Server Log (SystemOut.log), User Trace (UserTrace.log), or File (User-defined).
      • File path: This specifies the location of the trace file when the destination field is specified as File.
      • Message: This defines the message format that should be traced: {0} = time stamp, {1} = Message ID, {2} = Mediation name, {3} = Module name, {4} = Message defined by the root property, {5} = Version of the SMO. You can include additional explanatory text among the placeholders.
      • Root path: This is the XPath expression that identifies what section of the SMO should be serialized into the message.

Logging Components:

Given below are different logging component available in WebSphere ESB and when to use each of them.

  1. Message logger primitive: 
    • Used to store messages to a relational database
    • can write to other storage mediums, such as flat files, through the use of the custom logging facility
    • logged messages can be used for data mining or auditing purposes later
    • following properties are configurable: 
      • Enabled: This is similar to other primitives. It enables or disables the primitive's functionality. It is a promotable property.
      • Root: This is the XPath expression that identifies what section of the SMO is serialized into the message.
      • Transaction mode: "Same" causes the message to log under the current transaction. "New" causes the message to log under its own transaction.
        • Using the "New" transaction mode ensures that a successful message log to a database is always committed. That is, in "Same" mode, a message logged is not committed if the enclosing transaction is rolled-back. 
        • An example of the transaction not being committed is if the Fail mediation primitive is used, which triggers a ServiceRuntimeException.
      • Logging type: This specifies the location where the trace statement is added, either to a database by default, or use the user-defined handler described below.
      • Data source name: This specifies the preconfigured database, using the JNDI reference, when the logging type field is specified as “Database”.
      • Handler: This directs output somewhere other than the database, specifies a class that extends java.util.logging.Handler.
      • Formatter: This specifies the data formatter class, implementing java.util.logging.Formatter.
      • Filter: This specifies the data filtering handler class, implementing java.util.logging.Filter.
      • Literal: This defines the message format that is traced: {0} = time stamp, {1} = Message ID, {2} = Mediation name, {3} = Module name, {4} = Message defined by the root property, {5} = Version of the SMO. You can also include additional explanatory text.
      • Level: This specifies the trace level that is used to log the message. This is part of the log record passed to the handler.
    • To implement your own handler:
      • Extend java.util.logging.Handler and make your class available at runtime, for example, by including it directly in your project, or perhaps as a Jar in a library.
      • In your implementation, override publish(LogRecord) and use the getFilter() and getFormatter() methods to access the other classes specified.
      • Use getLevel() on the LogRecord argument to access the Level property from the mediation primitive.
      • If logging to a file, then use buffering, such as a new BufferedWriter(new FileWriter).
  2. Event Emitter primitive
    • defines the application specific event data that is placed into the extendedDataElements section of the common base event
    • when to use an event emitter
      • Think of an event emitter as a notification mechanism that is used to indicate an unusual event, such as a significant failure within a flow, or an unusual path in the flow
    • when not to use an event emitter
      • Avoid placing an event emitter in the normal run path of a flow because this affects performance by causing a large number of events to be generated.
  3. Custom mediation primitive
    • Can use the Standard Visual snippet or Java snippet
    • Below are some of the utility visual snippet functions that can be used for logging.
      • print to log: This prints any text to the SystemOut log.
      • print BO to log: This prints a Business Object or Service Message Object to the SystemOut log.
      • print to logger: This prints text to the java.util.logging logger.
      • BO print to logger: This prints a Business Object or Service Message Object to the java.util.logging logger.
      • to text: This prints the toString() of a Java Object to the log.
    • Use a custom snippet only if you have functionality that cannot be fulfilled by the other primitives.

Performance Considerations


  1. Outputting to log files can drastically reduce execution speed and become a performance bottleneck.
  2. If you use trace to log files for testing, remove it completely for performance testing or production, or devise a mechanism to enable or disable it. 
  3. The trace mediation primitive has a promotable property called “Enabled”. You can use this to have a trace property available for the administrator in the Integrated Solutions Console (under Enterprise Applications > SCA Modules > <moduleName> > Properties). 
  4. The cost of leaving a disabled primitive in the mediation flow is relatively insignificant.


Wednesday, 20 February 2013

Writing Performance Effective Java Code




Based on my experience and learnings, here are a few tips I suggest every Java programmer follow to write performance-effective Java code.

  1. Use ‘StringBuilder’ instead of String concatenation: Always prefer ‘StringBuilder’ when you need to concatenate a large number of string values. 
  2. Do not prefer immutable class constructors; instead use static factory methods if available. [e.g. avoid the String constructor for literals; prefer String literals.] 
i.e. Prefer String s1="abc";    instead of     String s1=new String("abc");
      Use      Integer.valueOf(String)    instead of     new Integer(String);
       
  3. Prefer the use of primitive types instead of boxed primitive or wrapper types: 
Eg: Integer sum=0;
       for(int i=0; i<1000; i++)
             sum+=i;

The above program with the wrapper type is much slower than this one:
int sum=0;
      for(int i=0; i<1000; i++)
             sum+=i; 

  4. Prefer the use of primitive data comparisons to String value comparisons.
    1. Among primitives, prefer comparisons on boolean, byte, char, short, int, and long over ‘float’ and ‘double’ comparisons. 
  5. Minimize the usage of synchronization and synchronized utility classes in Java APIs wherever possible. 
eg. Prefer use of ‘StringBuilder’ to ‘StringBuffer’, ‘ArrayList’ to ‘Vector’, ‘HashMap’ to ‘Hashtable’ etc.
  
  6. Minimize the mutability of classes: Always declare member variables and objects with the least permissive access specifier, and make them ‘final’ when there is a guarantee that they will not change. 
    1. Try to keep all member variables and objects ‘private’ unless another access specifier is required.
    2. Make all unchanged primitive variables ‘final’.
    3. Make ‘final’ all arrays or collections whose references should remain the same.
    4. Make methods ‘final’ when you are sure that the method definition should not be changed.

  7. Keep reusable code in static final methods: If there is business logic or functionality that does not belong to any of your classes or entities, move the methods into a utility class and make all of its methods static and final. 
  8. Minimize the scope of local variables: Declare local variables where they are first used. Do not declare them at the very beginning and use them somewhere in the middle; by then the reader may have forgotten the variable's initial value, if any was defined. 
  9. Use static initializer blocks to initialize or construct expensive objects, instead of constructing them in methods that are called many times. 
  10. Prefer the usage of interfaces to reflection: when you instantiate your class reflectively (i.e. with Class.forName(classname-to-load)), you 
    1. Lose all benefits of compile-time type checking.
    2. Suffer slow performance in method invocation.

Thanks for taking the time to read through these tips. If you would like to share your feedback or add another tip, feel free to comment on the post.

Tuesday, 12 February 2013

Displaying jGrowl message at custom location



If you are displaying jGrowl messages in your website, there are five default placement options available to you: top-left, top-right, bottom-left, bottom-right, and center. The position must be changed in the defaults before the startup method is called.

To do this, you'll have something like:
    <script type="text/javascript">
           $.jGrowl.defaults.position = 'top-right';
    </script>

However, jGrowl does not let you directly specify your own positioning of the message in the web page.

Solution approach: add your own position in the jGrowl CSS file.

To do this, add one CSS position entry, or override any of the default CSS definitions provided by jGrowl, in the 'jquery.jgrowl.css' file as shown below:
div.jGrowl.YourDisplayArea {
    position: absolute;
    left: 3%;
    top: 20%;
}

Similarly, you could also position the message with respect to the 'right' and 'bottom' properties available in CSS.

You can then use the new position in your jGrowl messages, which will bring up the message in the location you desire.


$.jGrowl("sample JGrowl message displayed in 'YourDisplayArea '",
{
sticky: true,
position: 'YourDisplayArea'
});


Monday, 11 February 2013

jGrowl messages at different positions


Displaying different jGrowl messages at different positions within the same page


In the normal scenario, if you want to display only one jGrowl message in any part of the web site, you simply create the message and specify the position accordingly, as in this example (with any of the position values: top-left, top-right, bottom-left, bottom-right, center):

eg: 

$.jGrowl('This is sticky message which appears on bottom right corner of your website page', 
{
    sticky: true,
    header: 'Simple jGrowl message',
    position: 'bottom-right'
});

This displays the simple sticky jGrowl message, as expected, in the 'bottom-right' of the page.

However, suppose you want one more message displayed, say in the 'top-left' corner. You might try to accomplish this by adding another similar jGrowl message with position 'top-left', like the one below:

$.jGrowl('This sticky message should appear on top left', 
{
    sticky: true,
    header: 'Simple jGrowl message',
    position: 'top-left'
});

But it doesn't work; instead, jGrowl displays this message just below the first one (kind of attached to it). This is because, by default, jGrowl displays all messages using its default container, that is, the default <div> section it creates, and hence it groups all of your messages together (if you have multiple), with a 'close-all' button to close all of the jGrowl messages at once.

Solution (Custom containers):

To display jGrowl messages at different locations, you need to create your own custom containers. This is as simple as creating your own HTML <div> section anywhere in your web page with an 'id' value. The <div> elements can go anywhere in the <body> section of the page.

Let's create two divs for our examples:

<div id="jgrowlMsg1" class="bottom-right"></div>
<div id="jgrowlMsg2" class="top-left"></div>

And now we just say (note the '#' ID selector, which jQuery needs in order to find the container elements by their id):

$('#jgrowlMsg1').jGrowl('This is sticky message which appears on bottom right corner of your website page', 
{
    sticky: true,
    header: 'Simple jGrowl message',
    position: 'bottom-right'
});

$('#jgrowlMsg2').jGrowl('This sticky message should appear on top left', 
{
    sticky: true,
    header: 'Simple jGrowl message',
    position: 'top-left'
});


This will bring up messages in different locations as desired in 'bottom-right' and 'top-left' of the webpage respectively.