Category Archives: Architecture

An architecture framework to handle triggers in the platform

Coding in Apex is similar to Java/C# in many ways, yet different from them in a few others. One thing that is common, though, is the application of proper design patterns to solve a problem, irrespective of platform or technology. This is particularly true for Apex triggers, because in a typical Salesforce application triggers play a pivotal role and a major chunk of the logic is driven through them. A proper design for triggers is therefore essential, not only for successful implementations, but also for laying a strong architectural foundation. Several design patterns have been proposed, like this, this and this, and some of them have gained mainstream support among developers and architects. This article series proposes one such design pattern for handling triggers. All of these design patterns share a common trait: ‘One Trigger to rule them all’, a phrase that Dan Appleman made famous in his ‘Advanced Apex Programming‘. In fact, the design pattern published in this article series is heavily influenced by Dan Appleman’s original design pattern – many thanks to you, Dan. Another design pattern that influenced this implementation was published by Tony Scott and can be found here; I would like to thank him as well for his wonderful contribution. I also borrowed a couple of ideas from Adam Purkiss, such as wrapping the trigger variables in a wrapper class, and thank him as well. His training video on design patterns (available at Pluralsight as a paid subscription) is a great inspiration, and anyone who is serious about taking their development skills on the platform to the next level MUST watch it. That said, let’s dive into the details.

Now, why do we need yet another design pattern for triggers when we already have a few? A couple of reasons. First, though the design patterns proposed by Dan and Tony provide a solid way of handling triggers, I feel there is still room to improve on them and provide a more elegant way of handling triggers. Second, both of those design patterns require touching the trigger framework code every time a new trigger is added; the method I propose eliminates that (to be fair, this could be replicated in their design patterns as well, as I simply leverage the standard API). Third, this trigger design pattern separates dispatching from handling and gives developers full flexibility in designing their trigger handling logic. Last, but not least, this new framework, as the name suggests, is not just a design pattern; it is an architecture framework that provides complete control over handling triggers in a predictable and uniform way and helps organize and structure the codebase in a very developer-friendly manner. Because the framework takes complete care of dispatching, developers need to focus only on building the logic that handles the events, not on dispatching.


The fundamental principles that this architecture framework promotes are:

  • Order of execution
  • Separation of concerns
  • Control over reentrant code
  • Clear organization and structure

Order of execution

In the traditional way of implementing triggers, a new trigger is defined (for the same object) as requirements come in. Unfortunately, the problem with this type of implementation is that there is no control over the order of execution. This may not be a big problem for smaller implementations, but it is definitely a nightmare for medium to large ones. In fact, this is the exact reason many people have come up with trigger design patterns that promote the idea of ‘One trigger to rule them all’. This architecture framework supports this principle as well, achieving it by introducing the concept of ‘dispatchers’. More on this later.

Separation of concerns

The other design patterns that I referenced above promote pretty much the same idea of ‘One trigger to rule them all’. The one thing I see missing, however, is the separation of concerns. What I mean is that the trigger factory/dispatcher calls a method in a class which handles all the trigger event related code. Once again, for smaller implementations this might not be a problem, but in medium to large implementations it very soon becomes difficult to maintain: as requirements change or new requirements come in, these classes grow bigger and bigger. The new framework alleviates this by introducing ‘handlers’ to address the separation of concerns.

Control over reentrant code

Many times there will be situations where the trigger code needs to perform a DML operation on the same object, which ends up invoking the same trigger once again. This can become recursive, and usually developers introduce a condition variable (typically a static variable) to prevent that. But this is not an elegant solution, because it doesn’t guarantee that the reentrant code runs in an orderly fashion. The new architecture gives developers complete control: they can either deny reentrancy, or allow both the first-time call and the reentrant call but in totally separate code paths, so the two don’t step on each other.

Clear organization and structure

As highlighted under the section ‘Separation of concerns’, with medium to larger implementations the codebase grows with no order or structure, and very soon developers may find it difficult to maintain. The new framework provides complete control over organizing and structuring the codebase based on the object and the event types.

UML Diagram

The following UML diagram captures all the pieces of this architecture framework.

Trigger Architecture Framework


Framework Components


The TriggerFactory is the entry point of this architecture framework and is the only line of code that resides in the trigger definition (you may have other code, such as logging, but as far as this architecture framework is concerned, this is the only code required). The TriggerFactory class, as the name indicates, is a factory class that creates an instance of the dispatcher object that the caller (the trigger) specifies and delegates the call to the appropriate event handler method (for the trigger event such as ‘before insert’, ‘before update’, etc.) that the dispatcher provides. The beauty of the TriggerFactory is that it automatically finds the correct dispatcher for the object that the trigger is associated with, as long as the dispatcher is named per the naming convention, which is very simple, as specified in the following table.

Object Type | Format | Example | Notes
Standard objects | <Object>TriggerDispatcher | AccountTriggerDispatcher |
Custom objects | <Object>TriggerDispatcher | MyProductTriggerDispatcher | Assuming MyProduct__c is the custom object, the dispatcher is named without the ‘__c’.

It accomplishes this by using the Type API. Using the Type API to construct instances of the dispatchers avoids touching the TriggerFactory class every time a new trigger dispatcher is added (ideally, only one trigger dispatcher class is needed per object).
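To make this concrete, here is a minimal sketch of what the trigger and the factory might look like. The class and method names (createTriggerDispatcher, the TriggerException type, the TriggerParameters constructor signature) are illustrative assumptions based on the description above, not necessarily the exact names in the published code.

```apex
// A one-line trigger definition: every event simply delegates to the factory.
trigger AccountTrigger on Account (before insert, before update, before delete,
        after insert, after update, after delete, after undelete) {
    TriggerFactory.createTriggerDispatcher(Account.sObjectType);
}

public with sharing class TriggerFactory {
    public class TriggerException extends Exception {}

    public static void createTriggerDispatcher(Schema.sObjectType soType) {
        // Derive the dispatcher name per the naming convention:
        // Account -> AccountTriggerDispatcher, MyProduct__c -> MyProductTriggerDispatcher
        String objectName = soType.getDescribe().getName().removeEnd('__c');
        Type dispatcherType = Type.forName(objectName + 'TriggerDispatcher');
        if (dispatcherType == null) {
            throw new TriggerException('No dispatcher found for ' + objectName);
        }
        dispatch((ITriggerDispatcher) dispatcherType.newInstance());
    }

    private static void dispatch(ITriggerDispatcher dispatcher) {
        TriggerParameters tp = new TriggerParameters(Trigger.old, Trigger.new,
                Trigger.oldMap, Trigger.newMap, Trigger.isBefore, Trigger.isAfter,
                Trigger.isDelete, Trigger.isInsert, Trigger.isUpdate, Trigger.isUndelete);
        // Route to the event handler method matching the current trigger context.
        if (Trigger.isBefore) {
            if (Trigger.isInsert) dispatcher.beforeInsert(tp);
            else if (Trigger.isUpdate) dispatcher.beforeUpdate(tp);
            else if (Trigger.isDelete) dispatcher.beforeDelete(tp);
        } else {
            if (Trigger.isInsert) dispatcher.afterInsert(tp);
            else if (Trigger.isUpdate) dispatcher.afterUpdate(tp);
            else if (Trigger.isDelete) dispatcher.afterDelete(tp);
            else if (Trigger.isUndelete) dispatcher.afterUnDelete(tp);
        }
    }
}
```

Because Type.forName resolves the dispatcher by name at runtime, adding a trigger for a new object requires only a new trigger and a new dispatcher class; the factory itself never changes.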


The dispatchers dispatch the trigger events to the appropriate event handlers. The framework provides the interface and a base class with virtual implementations of the interface methods, but developers need to provide their own dispatcher class, derived from either the interface or the virtual base class, for each object to which they want this framework applied. Ideally, developers want to inherit from TriggerDispatcherBase, as it not only provides the virtual methods, giving them the flexibility to implement only the event handlers they are interested in, but also the ability to make their logic reentrant.


As discussed above, the ITriggerDispatcher essentially contains the event handler method declarations. The trigger parameters are wrapped in a class named ‘TriggerParameters’.
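A sketch of what the TriggerParameters wrapper might look like follows; the field names are assumptions, though the TriggerEvent enum is referenced by the dispatcher code shown later in this article.

```apex
public class TriggerParameters {
    public enum TriggerEvent { beforeInsert, beforeUpdate, beforeDelete,
                               afterInsert, afterUpdate, afterDelete, afterUndelete }

    public List<sObject> oldList { get; private set; }
    public List<sObject> newList { get; private set; }
    public Map<Id, sObject> oldMap { get; private set; }
    public Map<Id, sObject> newMap { get; private set; }
    public TriggerEvent tEvent { get; private set; }

    public TriggerParameters(List<sObject> olist, List<sObject> nlist,
            Map<Id, sObject> omap, Map<Id, sObject> nmap,
            Boolean ib, Boolean ia, Boolean id, Boolean ii, Boolean iu, Boolean iud) {
        this.oldList = olist; this.newList = nlist;
        this.oldMap = omap;  this.newMap = nmap;
        // Collapse the platform's boolean flags into a single event value.
        if (ib && ii) tEvent = TriggerEvent.beforeInsert;
        else if (ib && iu) tEvent = TriggerEvent.beforeUpdate;
        else if (ib && id) tEvent = TriggerEvent.beforeDelete;
        else if (ia && ii) tEvent = TriggerEvent.afterInsert;
        else if (ia && iu) tEvent = TriggerEvent.afterUpdate;
        else if (ia && id) tEvent = TriggerEvent.afterDelete;
        else if (ia && iud) tEvent = TriggerEvent.afterUndelete;
    }
}
```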


The TriggerDispatcherBase class implements the ITriggerDispatcher interface, providing virtual implementations of those interface methods, so that developers need not implement the event handlers they do not wish to use. TriggerDispatcherBase also has one more important method, named ‘execute’, which controls whether a call is dispatched in a reentrant fashion or not. It has a separate member variable for each event to hold the instance of the trigger handler for that particular event, which the ‘execute’ method uses to control the reentrancy.


The trigger dispatcher classes contain the methods that handle the trigger events, and this is the place where developers instantiate the appropriate trigger event handler classes. At the heart of the dispatcher lies the ITriggerDispatcher interface, which developers implement to build the appropriate dispatcher for their objects. The interface declares methods for all trigger events, which means a dispatcher that implements it directly must implement methods for all the events. Since it may not be necessary to handle every trigger event, the framework provides a base class named ‘TriggerDispatcherBase’ with default (virtual) implementations for all events. This allows developers to implement methods only for the events they really have to, by deriving from TriggerDispatcherBase instead of implementing the ITriggerDispatcher interface directly, as TriggerDispatcherBase itself implements the interface. Another reason to derive from TriggerDispatcherBase is that its ‘execute’ method provides the reentrancy feature; developers cannot leverage this feature if their dispatchers do not derive from this class.
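The relationship between the interface and the base class can be sketched as follows. Only the afterUpdate handler slot is shown; the member names are illustrative, and the real class holds one such slot per event.

```apex
public interface ITriggerDispatcher {
    void beforeInsert(TriggerParameters tp);
    void beforeUpdate(TriggerParameters tp);
    void beforeDelete(TriggerParameters tp);
    void afterInsert(TriggerParameters tp);
    void afterUpdate(TriggerParameters tp);
    void afterDelete(TriggerParameters tp);
    void afterUnDelete(TriggerParameters tp);
}

public virtual class TriggerDispatcherBase implements ITriggerDispatcher {
    // One handler slot per event (only afterUpdate shown); execute() saves the
    // handler here so a reentrant call can be routed back to the same instance.
    private static ITriggerHandler afterUpdateHandler;

    public virtual void beforeInsert(TriggerParameters tp) {}
    public virtual void beforeUpdate(TriggerParameters tp) {}
    public virtual void beforeDelete(TriggerParameters tp) {}
    public virtual void afterInsert(TriggerParameters tp) {}
    public virtual void afterUpdate(TriggerParameters tp) {}
    public virtual void afterDelete(TriggerParameters tp) {}
    public virtual void afterUnDelete(TriggerParameters tp) {}

    protected void execute(ITriggerHandler handler, TriggerParameters tp,
                           TriggerParameters.TriggerEvent tEvent) {
        if (handler != null) {
            // Initial call: remember the handler, run mainEntry, flush updates.
            if (tEvent == TriggerParameters.TriggerEvent.afterUpdate)
                afterUpdateHandler = handler;
            handler.mainEntry(tp);
            handler.updateObjects();
        } else {
            // Reentrant call: route to the handler already in progress.
            if (tEvent == TriggerParameters.TriggerEvent.afterUpdate)
                afterUpdateHandler.inProgressEntry(tp);
        }
    }
}
```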

It is very important that the trigger dispatcher class be named per the naming convention described under the TriggerFactory section. If this naming convention is not followed, the framework will not be able to find the dispatcher and the trigger will throw an exception.

Understanding the dispatcher is critical to successfully implementing this framework, as this is where the developer can precisely control the reentrancy. This is achieved through the method named ‘execute’ in TriggerDispatcherBase, which the event handler methods call, passing an instance of the appropriate event handler class. Each event handler method sets a variable to track the reentrancy, and it is important to reset it after calling the ‘execute’ method. The following code shows a typical implementation of the event handler code for the ‘after update’ trigger event on the Account object.

private static Boolean isAfterUpdateProcessing = false;

public virtual override void afterUpdate(TriggerParameters tp) {
    if(!isAfterUpdateProcessing) {
        isAfterUpdateProcessing = true;
        execute(new AccountAfterUpdateTriggerHandler(), tp, TriggerParameters.TriggerEvent.afterUpdate);
        isAfterUpdateProcessing = false;
    }
    else execute(null, tp, TriggerParameters.TriggerEvent.afterUpdate);
}

In this code, the variable ‘isAfterUpdateProcessing’ is the state variable; it is initialized to false when the trigger dispatcher is instantiated. Inside the event handler, a check first ensures that this method is not already in progress; the variable is then set to true to indicate that a call to handle the after update event is in progress, the ‘execute’ method is called, and finally the state variable is reset to false. At the outset, resetting the state variable may not seem very important, but failing to do so will largely invalidate the framework, and in most cases you may not even be able to deploy the application to production. Let me explain. When a user does something with an object that has this framework implemented, for example saving a record, the trigger gets invoked, the before trigger handlers are executed, the record is saved, the after trigger event handlers are executed, and then the page is refreshed or redirected to another page depending on the logic. All of this happens in one single context, so it might look as if setting the state variable to true inside the if condition is all that is needed. But because the variable is static, it survives for the whole execution context: bulk operations are processed in chunks of up to 200 records with the trigger invoked once per chunk in the same context, and unit tests likewise run all their DML in a single context. If the variable is never reset, every invocation after the first takes the reentrant path, the main logic silently stops running, and the unit tests covering it fail, which in turn blocks deployment.


Handlers contain the actual business logic that needs to be executed for a particular trigger event. Ideally, every trigger event has an associated handler class that handles the logic for that particular event. This increases the number of classes to be written, but it gives the codebase a very clean organization and structure. The approach proves itself in the long run: maintenance and enhancements become much easier, as even a new developer knows exactly where to make changes once he or she understands how the framework works.

Another key aspect of the handlers is the flexibility they give developers to implement or ignore the reentrancy. The ‘mainEntry’ method is the gateway for the initial call. If this call makes a DML operation on the same object, it results in invoking the trigger again, but this time the framework knows that a call is already in progress; hence, instead of ‘mainEntry’, it calls the ‘inProgressEntry’ method. So if reentrancy is to be supported, the developer needs to place the code inside the ‘inProgressEntry’ method. The framework provides only the interface; developers need to implement it for each event of an object they care about, and can simply skip the handlers for events they are not going to handle.
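As an illustration, a hypothetical after-update handler for Account might look like this; the business logic shown is a placeholder, and the class it extends is described in the next sections.

```apex
public class AccountAfterUpdateTriggerHandler extends TriggerHandlerBase {
    public override void mainEntry(TriggerParameters tp) {
        // Business logic for the initial after-update invocation.
        for (Account acct : (List<Account>) tp.newList) {
            // ... derive fields, queue related record updates, etc. ...
        }
    }

    public override void inProgressEntry(TriggerParameters tp) {
        // Reached only when mainEntry's DML re-invoked this trigger.
        // Leave this empty to effectively deny reentrancy for this event.
    }
}
```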


The ITriggerHandler defines the interface for handling the trigger events in a reentrant or non-reentrant fashion.


The TriggerHandlerBase is an abstract class that implements the ITriggerHandler interface, providing virtual implementations of those interface methods, so that developers need not implement the methods they do not wish to use, specifically ‘inProgressEntry’ and ‘updateObjects’.
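Sketched out, the handler side of the framework might look like this; the map name sObjectsToUpdate is an assumption, while the method names come from the text.

```apex
public interface ITriggerHandler {
    void mainEntry(TriggerParameters tp);
    void inProgressEntry(TriggerParameters tp);
    void updateObjects();
}

public abstract class TriggerHandlerBase implements ITriggerHandler {
    // Records queued for update by the concrete handlers; flushed once
    // by updateObjects() at the end of the dispatch.
    protected Map<Id, sObject> sObjectsToUpdate = new Map<Id, sObject>();

    // Every handler must provide the initial-entry logic.
    public abstract void mainEntry(TriggerParameters tp);

    // Default: ignore reentrant calls unless a handler overrides this.
    public virtual void inProgressEntry(TriggerParameters tp) {}

    // Default update implementation described in the text.
    public virtual void updateObjects() {
        if (!sObjectsToUpdate.isEmpty()) {
            update sObjectsToUpdate.values();
        }
    }
}
```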


As discussed above, the developer needs to define one class per event per object, implementing the ITriggerHandler interface. While there is no strict naming requirement as there is for the dispatcher, naming the handlers per the following convention is suggested.

Object Type | Format | Example | Notes
Standard objects | <Object><Event>TriggerHandler | AccountAfterInsertTriggerHandler, AccountAfterUpdateTriggerHandler, etc. |
Custom objects | <Object><Event>TriggerHandler | MyProductAfterInsertTriggerHandler | Assuming MyProduct__c is the custom object, the handler is named without the ‘__c’.

So, if we take the Account object, the developer would implement the following event handler classes, mapping to the corresponding trigger events.

Trigger Event | Event Handler
Before Insert | AccountBeforeInsertTriggerHandler
Before Update | AccountBeforeUpdateTriggerHandler
Before Delete | AccountBeforeDeleteTriggerHandler
After Insert | AccountAfterInsertTriggerHandler
After Update | AccountAfterUpdateTriggerHandler
After Delete | AccountAfterDeleteTriggerHandler
After Undelete | AccountAfterUndeleteTriggerHandler

Note that NOT all the event handler classes defined in the above table need to be created. If you are not going to handle a certain event, such as ‘After Undelete’ on the Account object, then you do not need to define ‘AccountAfterUndeleteTriggerHandler’.


The TriggerParameters class encapsulates the trigger parameters that the platform provides during the invocation of a trigger. It is simply a convenience: it avoids repeatedly typing all those parameters in the event handler methods.


Often there will be situations where you want to reuse code across different event handlers, such as sending email notifications. After all, one of the fundamental principles of object-oriented programming is code reusability. To achieve that, this architecture framework proposes placing all the common code in a helper class, so that not only the event handlers, but also controllers, scheduled jobs, and batch Apex jobs can use the same methods if necessary. This approach is not without its caveats; for example, if the helper method varies slightly based on the event type, how would you handle that? Do you pass the event type to the helper method so it can branch on it? There is no right or wrong answer, but personally I think it is not a good idea to pass event types to helper methods; for this example, you can just pass a flag, and that solves the issue. For other situations a flag may not be enough; you need to think a little harder, and I will leave that to the reader, as such situations are unique to their requirements.

Using the framework to handle updates

The framework comes with a default implementation for handling updates. It does this through a Map variable that holds the objects to be updated; all the developer needs to do is add the objects to be updated to this map in their event handlers. The TriggerHandlerBase abstract class has a default implementation that updates the objects in this map, and it is called by the ‘execute’ method in TriggerDispatcherBase. Note that I chose to call the updateObjects method only for the ‘mainEntry’ and not for the ‘inProgressEntry’, simply because I didn’t have the time to test it.

Another thing to note: since the framework utilizes helper classes to get things done, sometimes the objects that you need to update may be produced inside these helper classes. How would you handle that? I suggest designing your helper methods to return those objects as a list and adding them to the map from your event handler code, instead of passing the map variable into the helper class.
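For example, inside a handler's mainEntry, a hypothetical helper (AccountHelper and its method are invented names for illustration) would return the records it touched rather than updating them itself:

```apex
public override void mainEntry(TriggerParameters tp) {
    // The helper computes the changes and returns them as a list...
    List<Contact> changed =
        AccountHelper.buildAddressUpdates((List<Account>) tp.newList);
    // ...and the handler queues them on the framework's pending-update map,
    // to be flushed once by updateObjects().
    for (Contact c : changed) {
        sObjectsToUpdate.put(c.Id, c);
    }
}
```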

This framework can easily be extended to handle inserts and deletes as well. To handle inserts, add a List variable to the TriggerHandlerBase, provide a virtual method named ‘insertObjects’ that inserts the list, and call it from the ‘execute’ method in the TriggerDispatcherBase in the same way as ‘updateObjects’. I’ll update the code when time permits; in the meanwhile, I’ll leave this to the reader to implement for their projects.
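A possible shape for that extension, under the same assumptions about member names used earlier in this article:

```apex
public abstract class TriggerHandlerBase implements ITriggerHandler {
    protected Map<Id, sObject> sObjectsToUpdate = new Map<Id, sObject>();
    // New: records queued for insert (they have no Ids yet, hence a List).
    protected List<sObject> sObjectsToInsert = new List<sObject>();

    public virtual void insertObjects() {
        if (!sObjectsToInsert.isEmpty()) {
            insert sObjectsToInsert;
        }
    }
    // ... mainEntry, inProgressEntry, updateObjects as before; the dispatcher's
    // execute method would call insertObjects() alongside updateObjects().
}
```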

Note that it is not possible to do an upsert this way, because the platform doesn’t support upserting generic sObjects, and since our map uses the generic sObject type, this is not possible. (Thanks to Adam Purkiss for pointing out this fact.)


To illustrate this design pattern, the following diagram depicts how this architecture framework applies to the Account object.

Trigger Architecture Framework


The trigger architecture framework project is available as open source and hosted at Google Code. The source code for the entire framework, along with a sample implementation that includes classes for handling trigger events on the Account object, is available as an AppExchange package. If you need it as a zip file, it can be downloaded from here. The code documentation is built using ApexDoc and can be downloaded from here.


The new trigger architecture framework provides a strong foundation for building applications on the platform, with the multiple benefits outlined previously. The framework does involve many parts, business solutions built on it need to follow certain rules and conventions, and the number of classes written will be a little high, but the benefits of this approach easily outweigh the effort and the investment. It may still be overkill for very small implementations, but it proves its usefulness for medium to larger ones.

Securing IBM WebSphere Cast Iron Appliance while integrating with Salesforce-Part IV


This article series discusses securing IBM WebSphere Cast Iron Appliance when integrating with Salesforce. In the previous articles, we discussed the security challenges and some of the methods to address them, including protecting the enterprise network using a firewall and SSL/certificate based authentication. In this article, we will see how cross-organization to cross-environment requests (from Salesforce to on-premise) can be prevented. This method also protects your organization’s web services from being accessed by other enterprises that are co-located with your organization’s orgs. Implementing this method along with the solutions described in the previous articles is a powerful way to secure your enterprise’s web services from almost any type of unauthorized access.


In a typical enterprise, multiple environments are created to promote development to production. The usual model is to have one environment for DEV, one for QA, one for Staging, and another for Production. The number of environments may vary depending on the organization’s needs, but the bottom line is that there will be multiple organizations (sandboxes) on the Salesforce side and matching environments on-premise, with a mapping from Salesforce orgs to these local environments. On-premise, each of these environments is either physically or logically separated using technologies such as NAT, and firewall rules are implemented to prevent cross-environment access, so that DEV servers can talk only to other DEV servers, QA servers only to other QA servers, and so on. The bottom line is that applications in one environment cannot call into the applications/databases/services of another environment. But this type of separation is not possible with Salesforce, because Salesforce doesn’t have the concept we just described. As we explained in part 2, Salesforce has a set of IP addresses for each region (such as North America, Asia Pacific, etc.), and the web service callouts made by Salesforce client applications carry one of these IP addresses as their source address. Salesforce doesn’t have the concept of different network segments for web service callouts; hence, by default, there is no way for enterprises to distinguish the requests to identify whether they came from a sandbox or production org, or even from their own orgs. But not all is lost. The solution proposed here addresses this problem by not only preventing cross-organization to cross-environment access, but also preventing access from other enterprises’ orgs.


The solution to this problem involves some custom coding along with a Salesforce-provided feature called the ‘Organization Id’. Every Salesforce organization has this unique identifier, which can be found under Your Name | Setup | Administration Setup | Company Profile | Company Information.


Figure 1: Getting the Organization Id

The ‘Organization Id’ is unique for each Salesforce org. It is not only unique within your enterprise’s Salesforce orgs, but universally unique across the orgs of all Salesforce customers. That means that if your web service client in Salesforce embeds this ‘Organization Id’ in the request, a web service can pick it up and prevent the call from executing if the ‘Organization Id’ is not in its access control list. That is exactly what we are going to do. Here is a simple flowchart that describes this technique.


Figure 2: Flow chart that depicts the solution

As shown in the flowchart above, the client (Salesforce) sends the ‘Organization Id’ along with the input payload when it makes a web service callout to Cast Iron. For Apex code, the UserInfo class provides the ‘Organization Id’; we get it from this class and embed it in the HTTP headers. For outbound SOAP messages, the outbound workflow action itself provides the ‘Organization Id’. Cast Iron retrieves the ‘Organization Id’ from the request and validates it against its configuration. If the ‘Organization Id’ matches, processing continues; otherwise, the web service exits immediately, returning an error code. The following sections provide step-by-step details along with the code that implements this pattern.
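For a raw (non-WSDL) Apex callout, embedding the ‘Organization Id’ in the headers looks roughly like this; the endpoint URL and request body are placeholders:

```apex
HttpRequest req = new HttpRequest();
req.setEndpoint('https://yourcastironserver:443/TestRequestService');
req.setMethod('POST');
// Cast Iron's validation orchestration reads this header to authorize the call.
req.setHeader('OrganizationId', UserInfo.getOrganizationId());
req.setBody('<TestRequest>...</TestRequest>');
HttpResponse res = new Http().send(req);
```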

Developing the OrgValidationService Cast Iron Orchestration

Since we want to validate all incoming requests, it is best to implement the validation as a separate web service, so that all other web services can call this one service to validate the ‘Organization Id’. We will call it the ‘OrgValidationService’. This web service will be consumed by all other Cast Iron web services that want to validate organization ids. The following image, extracted from Cast Iron Studio, depicts the flow.


Figure 3: OrgValidationService

Previously, we saw that the client has to send the ‘Organization Id’ along with the input payload. This can be done in a couple of ways. I preferred to embed the ‘Organization Id’ in the HTTP headers, because this is less intrusive. Had we chosen to embed it in the input request parameters instead, the XML schemas for all of these web services would need to be updated to include the ‘Organization Id’ parameter. Here is the logic:

  • The web service receives the input parameters and assigns them to an input variable named ‘objOrgInfoRequest’, filtering for the value ‘Organization Id’. Please note that apart from the HTTP headers, you will also notice another parameter named ‘OrganizationId’, which is used when the web service callout is made by an outbound SOAP message. The following screenshot shows how the ‘Organization Id’ is passed through the HTTP headers.


Figure 4: Filtering the input headers to get the ‘Organization Id’

  • The valid ‘Organization Id’ for that particular environment is configured in a variable named ‘OrgIdLookup’. This is retrieved and assigned to a variable.
  • The web service takes the ‘Organization Id’ that came in the input and compares it against this configured value.
  • If it matches, the status is set to ‘true’; otherwise, it is set to ‘false’.

The code for this orchestration can be downloaded from here.

Developing the Test Web Service Cast Iron Orchestration

The Test Web Service Cast Iron orchestration will serve as an example of how the regular Cast Iron orchestrations (that will be developed to support your organization’s business requirements) should utilize the OrgValidationService to validate the ‘Organization Id’. The following image is extracted from the Cast Iron studio.


Figure 5: TestRequestService Web Service to be consumed by Apex code.

This orchestration is exposed as a web service which will be consumed by your apex code. It copies the HTTP headers to a local variable and calls the OrgValidationService passing this variable. If the return value from the OrgValidationService is true, then the orchestration proceeds to execute, otherwise, it terminates immediately. The code for this orchestration can be downloaded from here.

Consuming the OrgValidationService in Salesforce

There are a couple of ways Salesforce can be programmed/configured to consume a web service hosted elsewhere. The right choice depends on your needs; the options are as follows:

  • Apex code
  • Outbound SOAP message
  • AppExchange App

Consuming the Test Web Service from Salesforce (through Apex code)

With the Apex code option, consuming the Test Web Service from Salesforce is done entirely by coding in Apex. Developers choose this option when they want to

  • pull data from multiple objects and pass them as input parameters
  • process the data and derive the input parameters for the web service

Here are the steps to consume the test web service from Salesforce using the apex code.

  • Generate the WSDL from the Cast Iron web service and save it to a folder
  • Login into your dev/sandbox org and go to the section ‘Apex Classes’ under Your Name | Setup | App Setup | Develop.
  • Click ‘Generate from WSDL’ and choose the file that you saved from the Cast Iron studio.
  • If you want to rename the default names provided by the import wizard, go ahead and change the file names. I renamed the files as follows:


Figure 6: Naming the system generated classes in Salesforce

  • Click ‘Generate Apex Code’ button and click ‘Done’.

Now we need to create a wrapper class that utilizes the stub generated by the import wizard. To do this, click ‘New’ under Your Name | Setup | Develop | Apex Classes. Copy the code given below. This code consumes the test web service by utilizing the stub that we just generated.

public class TestRequestServiceClient {
    @future (callout=true)
    public static void Test() {
        Boolean isSuccess = true;
        try {
            TestRequestServiceWsdl.Provide_ServicePort binding = buildService();
            binding.Provide_Service('1', 'Test');
            isSuccess = true;
        } catch (Exception e) {
            isSuccess = false;
        }
    }

    private static TestRequestServiceWsdl.Provide_ServicePort buildService() {
        TestRequestServiceWsdl.Provide_ServicePort updateStatusInstance = new TestRequestServiceWsdl.Provide_ServicePort();
        updateStatusInstance.endpoint_x         = 'https://yourcastironwebserver:443/TestRequestService';
        // Embed the Organization Id (and name) in the HTTP headers for Cast Iron to validate.
        updateStatusInstance.inputHttpHeaders_x = new Map<String, String>();
        updateStatusInstance.inputHttpHeaders_x.put('OrganizationId', UserInfo.getOrganizationId());
        updateStatusInstance.inputHttpHeaders_x.put('OrganizationName', UserInfo.getOrganizationName());
        System.debug('\nOrganization Id :' + UserInfo.getOrganizationId());
        updateStatusInstance.timeout_x          = 60000;
        return updateStatusInstance;
    }
}
Click Save.

Consuming the OrgValidationService in Salesforce (outbound SOAP message)

With the outbound SOAP message option, the outbound SOAP action sends the ‘Organization Id’ as part of the request payload. This is why we designed our OrgValidationService to accept both HTTP headers and a separate parameter named ‘OrganizationId’, supporting both scenarios (Apex code and outbound SOAP messages).

Here is the test web service built to be consumed by an outbound SOAP message. The outbound SOAP message definition provides you the WSDL, and the following screenshot shows the test web service implemented using this WSDL. The code for this web service can be found here.


Figure 7: ‘TestRequestServiceO’ web service to be consumed by Outbound SOAP message

Now we have completed all the development tasks. Before you start testing, perform the following tasks:

  • Deploy all the three Cast Iron orchestrations in your Cast Iron server (preferably in your development environment)
  • In the OrgValidationService, set the ‘OrganizationId’ configuration property to the organization id of the Salesforce org that you are going to test. You can get the organization id from the ‘Company Information’ link under Your Name | Setup | Company Profile, as shown in Figure 1 above. Note that the organization id displayed there is the 15-digit version; you need to use the 18-digit version, as otherwise your OrgValidationService will fail. (You can get the 18-digit id in many ways. One way is to run a request against your Cast Iron web service and look at the Cast Iron log; with the log level set to ‘All’, you will find the 18-digit id there.)
  • In Salesforce, update the ‘endpoint_x’ variable in the TestRequestServiceWsdl to point to your Cast Iron server and save it. (I renamed the system-generated file to ‘TestRequestServiceWsdl’. If you chose a different name, or if you didn’t rename it, then the file name will be different in your case.)
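The note above obtains the 18-digit id from the Cast Iron log, but the suffix can also be computed directly from the 15-digit id. Here is a small, hedged shell sketch (the function name and sample ids are made up): each 5-character chunk of the 15-character, case-sensitive id contributes one suffix character that encodes which positions are uppercase.

```shell
# to18 derives the 18-character Salesforce ID from the 15-character one.
# Each 5-char chunk maps to one suffix char from a fixed 32-char alphabet.
to18() {
  local id15="$1" alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ012345" suffix=""
  local chunk pos ch bits
  for chunk in 0 5 10; do
    bits=0
    for pos in 0 1 2 3 4; do
      ch="${id15:$((chunk + pos)):1}"
      case "$ch" in
        [[:upper:]]) bits=$((bits | (1 << pos))) ;;  # set bit for uppercase
      esac
    done
    suffix="$suffix${alphabet:$bits:1}"
  done
  printf '%s%s\n' "$id15" "$suffix"
}

to18 "00da0000000AbcD"   # prints: 00da0000000AbcDAAS
```

Because the suffix only encodes capitalization, an all-lowercase id always gets the suffix ‘AAA’.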

With this, we are all set to test it. Invoke the developer console from Your Name | Developer Console and execute the following: TestRequestServiceClient.Test(); Now if you open the debug logs, you will see that the call succeeded. If you check your Cast Iron logs, you will see a log message titled either ‘Request came from authorized organization’ or ‘Request came from unauthorized organization’, depending on what value you have configured for the ‘Organization Id’ configuration property in the OrgValidationService orchestration.


This article is the fourth in the five-part article series. The first part described the security challenges. The second part explained how the enterprise can use a firewall to filter unwanted traffic from outside the trusted networks. The third part explained transport-level encryption. The fifth part will explain authorizing the trusted users. This scenario is particularly important if you are going to use your Cast Iron server to integrate with other systems, including other cloud services such as Docusign, Amazon elastic cloud, Windows Azure, etc.

Making authenticated web service callouts from Salesforce to IBM Cast Iron using SSL/certificates–Part V


This article is the fifth and final article in the five-part series on making authenticated web service callouts from Salesforce to IBM WebSphere Cast Iron. In this article, we will cover some of the issues that you may come across in implementing SSL/certificate-based security and how to fix them. This is not an exhaustive list, but these are the most common problems one may face while implementing this type of security.

Certificate issues

PKIX path building failed

Exception message PKIX path building failed: unable to find valid certification path to requested target


This exception can happen due to various reasons. The following list of actions might solve this issue.

  • If you are using two-way certificate authentication, check whether you have included the client certificate when you make the web service callout.
  • Make sure the certificate is valid and not expired.

IO Exception: DER input, Integer tag error

Exception message

IO Exception: DER input, Integer tag error


This exception can happen for various reasons.

  • As explained previously, Salesforce accepts PKCS#12 certificates; if your certificate is in DER/PEM format, then you will receive this error. Once you use the PKCS#12 certificate, this error should go away.
  • When you embed a third-party certificate in the code and the certificate content is tampered with, or you paste the certificate content incorrectly, you will receive this error.

SSLPeerUnverifiedException: peer not authenticated

Exception message: peer not authenticated.


There are many situations where this error can happen. They can be summarized as follows:

  • If your certificate has a chain of trust (which means there are intermediate certificates between it and the root), then the order of the certificates in the chain has to be correct. The order is defined as follows:
    • The server certificate
    • The intermediate certificate that signed your server certificate (only if the server certificate is not signed directly by a root certificate)
    • The intermediate certificate that signed the above intermediate certificate, and so on up the chain, excluding the root CA (this is usually already available in your server’s trust store)
  • One or more of the certificates in the chain has expired or is not valid.
  • One or more of the intermediate certificates in the chain is missing.
  • One or more of the certificates in the chain is either a self-signed certificate or not trusted by Salesforce. The list of the CAs that Salesforce supports can be found here.
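The chain rules above can be exercised locally with Open SSL. The sketch below (all file and subject names are made up) creates a private root CA, signs a leaf certificate with it, and shows that verification succeeds when the issuing chain is supplied:

```shell
set -e
workdir="$(mktemp -d)" && cd "$workdir"

# 1. A self-signed root CA (private key + certificate).
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.pem \
  -days 730 -subj "/CN=Example Root CA"

# 2. A leaf key and CSR, then sign the CSR with the root CA.
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=www.example.com"
openssl x509 -req -in leaf.csr -CA root.pem -CAkey root.key \
  -CAcreateserial -days 365 -out leaf.pem

# 3. Verification walks the chain up to a trusted root.
openssl verify -CAfile root.pem leaf.pem   # prints: leaf.pem: OK

# 4. A server bundle lists the leaf first, then each intermediate,
#    and omits the root (the client's trust store already has it).
cat leaf.pem > chain.pem
```

If step 3 is run without `-CAfile root.pem`, the same “unable to get local issuer certificate” class of failure appears that the bullets above describe.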

IO Exception: Unable to tunnel through proxy.

Exception message:

IO Exception: Unable to tunnel through proxy. Proxy returns "HTTP/1.0 503 Service unavailable"


This error happens if your firewall doesn’t allow access to the server where the web service is hosted. Adjusting the firewall rules should fix this issue.

In general, when you see an issue, follow this check list to troubleshoot the issue:

  • Make sure the firewall settings in your enterprise are configured to allow the inbound web service call.
  • Check whether the certificate is expired or not.
  • Check if your chain of trust has valid certificates.
  • Make sure you embed the PKCS#12 certificate on the Salesforce side when you make a web service callout.

Most vendors provide tools to check your server/certificate. DigiCert’s tool can connect to your server, retrieve the certificate, and provide you a report. Similarly, Verisign has its own tool to validate certificates.


This article series provided an in-depth analysis of how to make authenticated web service callouts from Salesforce to IBM WebSphere Cast Iron using both one-way and two-way SSL/certificates. Though this series uses Salesforce and IBM WebSphere Cast Iron as examples, the concept applies elsewhere, whether you are making authenticated web service callouts from a Java client to .NET WCF web services or from Windows Azure .NET web services to SAP, albeit the implementation details will differ.

Making authenticated web service callouts from Salesforce to IBM Cast Iron using SSL/certificates–Part IV


This article is the fourth in the five-part article series on making authenticated web service callouts from Salesforce to IBM WebSphere Cast Iron using SSL/certificates. Web service callouts are a powerful feature, and IBM WebSphere Cast Iron provides great integration capabilities. We discussed the basics of authenticated callouts and the problem scenario in the first part and implemented a solution without SSL/certificates in the second part. We also discussed securing the web service with one-way SSL/certificate authentication in the third part. In this part, we will add two-way SSL/certificate authentication, in which both parties prove their identities.

Understanding the two-way SSL

As explained in the third part, one-way SSL/certificate authentication allows one of the parties (read ‘server’) to prove its identity to the client(s) through the PKI mechanism, so that the client and the server can communicate over a secure medium. This is the most common scenario and is widely implemented by public websites. Sometimes this arrangement is not enough and the server needs to know whom it is talking to. This is particularly relevant when you integrate cloud-based services with on-premise or other cloud-based systems.

Let’s consider a real scenario. You want to integrate the information from your salesforce platform to your on-premise system – assume you have the infrastructure described in the architecture diagram in Part-2. You can secure your IBM WebSphere Cast Iron server with certificates signed by a public CA. You may even put firewall rules in place to restrict access so that inbound calls are allowed only from Salesforce servers. (Adding a wildcard domain to the firewall will not work. Salesforce has a set of IP ranges and the request may arrive from any of these IP addresses. There is no distinction on the requests to indicate the salesforce organization from where the request originated [Salesforce does send the organization name if it is an outbound SOAP message based web service callout]. These IP address ranges have to be added to your firewall to restrict access to Salesforce servers.) But there is still a gaping hole. If someone knows your endpoint and is also a Salesforce user (from a different company; for that matter, it can be an individual with a Salesforce DE org), then they can very well invoke your web services, provided you do not have other forms of authentication built into your orchestration. You can always build such other forms of authentication into your orchestrations, but it may not be versatile.

There’s a better solution available, which is two-way SSL/certificate-based authentication. In this method, the client also presents a certificate to the server to prove its identity. This is much better because, in most cases, it is just a configuration task, especially on the IBM WebSphere Cast Iron side. On the client side, it depends on the platform you use – Salesforce provides both configuration-only and code-based approaches. This is definitely better than user name/password based authentication, because if the user name/password is compromised somehow, then the attacker can easily gain access to your systems. Sure, if the attacker gains access to your client’s private key, then it is still a high risk with the two-way SSL/certificates approach as well; but in practical terms, this is highly difficult, if not impossible. The same is not true with the user name/password based approach, e.g. if the attacker uses a brute-force method and your password is not strong enough to resist it. There’s much more to this, and it is out of scope for this article to discuss elaborately; I’ll write more about security in a future article series.

To summarize, in two-way SSL/certificate-based authentication, both the server and the client(s) prove their identities using SSL certificates. As explained in Part-3, Salesforce will accept only certificates signed by a public CA when it acts as a client and makes a callout to the server. On the contrary, for the client certificate you can use either a self-signed certificate, provided you import this certificate into the Key Store of your IBM WebSphere Cast Iron runtime appliance, or a certificate signed by a public CA (you don’t need to do anything special on the IBM WebSphere Cast Iron side for this). We are going to see this in action in the remainder of this article.

Tutorial-3: Setting up two-way SSL/certificate authentication

The basic concept here is to add SSL/certificate authentication on the client side. The concept of two-way SSL/certificate authentication is generic to all platforms and technologies, though the implementation may differ slightly. From a Salesforce perspective, there are two ways to accomplish this. They are as follows:

  • Certificate generated from non-Salesforce

  • Certificate generated from Salesforce

In both the scenarios, we can either use self-signed certificates or certificates signed by public CA. This article will explain both these approaches for the above two scenarios.

Certificate generated from non-Salesforce

This is also called the ‘Legacy’ method by Salesforce. In this scenario, we generate the CSR from non-Salesforce system, such as Windows Active Directory Services or Open SSL. This involves some coding work on the Salesforce side which will be explained later.

Self-Signed Certificates

With self-signed certificates, we can have our own CA installed in our infrastructure and can generate the CSR and sign it. Self-signed certificates may not work on all occasions; e.g. if the other end accepts only certificates signed by a public CA. Usually, this may happen with SaaS vendors or with any third-party services. For our tutorial, we will use Open SSL to produce the self-signed certificates.

Step # 1: Setting up Open SSL and generating root certificate

Download Open SSL for your platform; this tutorial uses the Windows distribution of Open SSL. Open a command prompt and execute the following commands:

set OPENSSL_CONF=E:\openssl-win32\bin\openssl.cfg
set RANDFILE=E:\openssl-win32\myca\.rnd

In my workstation, I have installed it under E:\OpenSSL-Win32\. Please replace the path with the location where you have installed it on your workstation.

Now that we have set up Open SSL, let’s generate the root certificate that will be used to sign the certificate that we will generate in the next step. Before proceeding, let’s cover some basics. A digital certificate is verified using a chain of trust, which forms a tree structure whose first node is issued by the Root Certificate Authority; the tree may also contain intermediate certificates. The private key of the root certificate is used to sign the other certificates. Each certificate in the certificate chain inherits the trustworthiness of its parent certificate, which goes all the way up to the root certificate. This is why we need a root certificate before creating the actual certificate, since the private key of the root certificate will be used to sign our certificate. IBM WebSphere Cast Iron, like most servers, normally doesn’t allow a certificate to be imported if the chain of trust is not present; it does provide an option to bypass this, but that is not a recommended practice.

To generate the root certificate, execute the following command:

openssl req -new -x509 -extensions v3_ca -keyout keys\cakey.pem -out certs\cacert.pem -days 730 -newkey rsa:2048

This command generates a new root certificate with a new 2048-bit key, valid for 730 days.


Figure 4a. Generate root certificate

Step # 2: Creating the CSR for client

The Certificate Signing Request (CSR) contains the information about your organization and the public key, which the CA validates before signing. In this case, we have our own CA, which will be used to sign this certificate.

Here is the command to generate the CSR.

openssl req -new -nodes -out certs\contoso.csr -key keys\cakey.pem


Figure 4b. Create CSR using Open SSL (for self-signed certificate)

This will generate the CSR file under certs\contoso.csr. An important thing to note is that the common name should either exactly match your domain name or match it with a wildcard. A wildcard is used if you want the same certificate to cover your sub domains; e.g. a common name of ‘*.yourdomain.com’ covers all sub domains of yourdomain.com. If you use an IP address, then note that your clients can access the web service only by that IP address, the exception being if the subjectAltName extension also carries your DNS name (same as the common name).
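Before submitting a CSR, it is worth confirming that the common name came out as intended. A quick, hedged example (the subject values and file names here are made up):

```shell
set -e
# Generate a throwaway key and CSR with a wildcard common name.
openssl req -newkey rsa:2048 -nodes -keyout wild.key -out wild.csr \
  -subj "/C=US/O=Contoso/CN=*.contoso.com"

# Print the subject to verify the CN before sending the CSR to the CA.
openssl req -in wild.csr -noout -subject
```

If the printed CN is wrong, it is far cheaper to regenerate the CSR now than after the CA has signed it.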

Step # 3: Sign the certificate

Now that we have the CSR generated, it needs to be signed which can be done by the following command.

openssl ca -policy policy_anything -cert certs/cacert.pem -in certs/contoso.csr -keyfile keys/cakey.pem -days 730 -out certs/contoso.cer


Figure 4c. Sign the certificate using Open SSL

The above command signs the CSR using the private key of the root certificate that we generated in step # 1 and sets the certificate to expire in 2 years (730 days).

IBM WebSphere Cast Iron can accept only PKCS#12 formatted certificates and hence we need to convert our PEM formatted certificate to PKCS#12 format. This can be accomplished by the following command:

openssl pkcs12 -export -out certs/contoso.p12 -in certs/contoso.cer -inkey keys/cakey.pem


Figure 4d. Convert the PEM encoded certificate to PKCS#12 format
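After the export, it is worth confirming that the PKCS#12 bundle parses and actually contains the certificate. A self-contained sketch (the file names and password below are made up; for the tutorial’s own files you would point at certs/contoso.p12):

```shell
set -e
# Create a throwaway self-signed certificate and bundle it as PKCS#12.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.cer \
  -days 365 -subj "/CN=contoso.com"
openssl pkcs12 -export -in demo.cer -inkey demo.key -out demo.p12 \
  -passout pass:secret

# List the certificate inside the bundle without extracting the private key.
openssl pkcs12 -info -in demo.p12 -passin pass:secret -nokeys | grep subject
```

A wrong password or a corrupted bundle fails here immediately, which is the same failure you would otherwise only see later in the callout.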

Step # 4: Import the root certificate and self-signed certificate into IBM WebSphere Cast Iron runtime appliance

Most servers, including IBM WebSphere Cast Iron, ship with root certificates for most of the major vendors such as Verisign, Thawte, etc. But since we are using a self-signed certificate signed by our own CA that we set up in step # 1, we first need to import the root certificate into the Trust Store.

To do this, click ‘Import’ under the ‘Trust Store’ section of Security->Certificates. This will open up a dialog box. You can either select the file or paste the content from the certificate file. Click ‘Browse’ and select the file and click ‘Open’. Click ‘Import’ and the certificate should be imported into your Trust Store now.

To import the self-signed certificate, click ‘Import’ under the ‘Key Store’ section of Security->Certificates. This will open up a dialog box. You can either select the file or paste the content from the certificate file. Click ‘Browse’ and select the file and click ‘Open’. Enter the password that you used when generating the self-signed certificate. Click ‘Import’ and the certificate should now be imported into your Key Store.

Step # 5: Update the code to embed the certificate in the web service callout

The web service callout code needs to be updated to include the certificate in the call for the two-way authentication. The certificate that we have now is in PKCS12 format which is a binary format and we need the text version (PEM) of it to embed it into the code. To get this, execute the following command:

openssl base64 -in certs/contoso.p12 -out certs/contoso.pem

Open the contoso.pem file in a text editor and copy the content. Now, update the UserStatusClient file on the Salesforce side by clicking ‘Edit’ against ‘UserStatusClient’ under Setup->Develop->Apex Classes.

string key = ''; // paste the content that you copied from the contoso.pem file.
updateStatusInstance.clientCert_x = key;
updateStatusInstance.clientCertPasswd_x = 'xxxxxx'; // enter the password that you put when you generated the certificate.

Test it by updating the test user’s status and you should see that it has made a web service callout with two-way authentication.

Public CA Signed Certificates

The biggest advantage of getting the certificate signed by a public CA is that there is no need to import the certificate into the target server, as in the case of self-signed certificates, because web/application servers already trust certificates signed by public CAs. This is really helpful where the user doesn’t have control or a way to import self-signed certificates into the target server’s Trust/Key Store.

In this scenario, we will see how to get the certificate signed by public CA using CSR generated from Open SSL.

Step # 1: Creating the CSR for client

To create the CSR, you can follow the same steps as described in the step # 2 of the previous section.

Step # 2: Get the certificate signed from public CA

Submit the CSR to your preferred public CA (should be supported by Salesforce) and get it signed.

Step # 3: Update the code to embed the certificate in the web service callout

Follow the same steps as described in the step # 5 of the previous section to update the wrapper class in Salesforce to include the certificate when making the web service callout.

Test it by updating the user’s status and you should see that it has made a web service callout with two-way authentication. And as described above, you don’t need to import the certificate into the IBM WebSphere Cast Iron runtime appliance, as the server can accept the certificate passed by Salesforce since it is signed by a public CA.

Certificate generated from Salesforce

Salesforce provides the option to generate the certificate from its platform, and this is the method Salesforce suggests. The biggest advantage is that the private key never leaves Salesforce; the caveat is that you will not be able to use this certificate from any system other than Salesforce, since the private key is not included when you download the certificate. This section will also cover both scenarios, self-signed certificates and certificates signed by a public CA, but this time the certificate is generated from Salesforce.

Self-Signed Certificates

Step # 1: Creating the self-signed certificate

To generate the certificate from Salesforce, log in to your Salesforce organization and click ‘Create Self-Signed Certificate’ under Setup->Security Controls (under Administration Setup)->Certificate and Key Management. Enter the information as shown below.


Figure 4e. Create self-signed certificate from Salesforce.

Click ‘Save’ and it should show you the screen as below.


Figure 4f. Self-signed certificate generated in Salesforce.

Step # 2: Update the code to reference the certificate in the web service callout

Since the certificate now resides within Salesforce, we don’t need to embed the certificate in the web service callout; instead, we can just reference the certificate name in the code and the Salesforce runtime automatically embeds the certificate in the callout. Update the UserStatusClient file on the Salesforce side by clicking ‘Edit’ against ‘UserStatusClient’ under Setup->Develop->Apex Classes:

updateStatusInstance.clientCertName_x = 'Contoso_SF';
updateStatusInstance.clientCertPasswd_x = 'test';

Public CA Signed Certificates

Step # 1: Creating the CSR for client

To get a certificate generated from Salesforce signed by a public CA, click ‘Create CA-Signed Certificate’ under Setup->Security Controls (under Administration Setup)->Certificate and Key Management. Enter the information as shown below.


Figure 4g. Create CSR from Salesforce.

Click ‘Save’.

Step # 2: Get the certificate signed from public CA and Upload to Salesforce

Download the CSR by clicking ‘Download Certificate Signing Request’ button and submit this CSR to your preferred public CA and get it signed. Once you receive the signed certificate, upload it by clicking ‘Upload Signed Certificate’ button.

Step # 3: Update the code to reference the certificate in the web service callout

This is the same as described in the previous section; i.e., since the certificate resides within Salesforce, we don’t need to embed the certificate in the web service callout; instead, we can just reference the certificate name in the code and the Salesforce runtime automatically embeds the certificate in the callout. Update the UserStatusClient file on the Salesforce side by clicking ‘Edit’ against ‘UserStatusClient’ under Setup->Develop->Apex Classes:

updateStatusInstance.clientCertName_x = 'Contoso_SF_PCA';
updateStatusInstance.clientCertPasswd_x = 'test';


In this article, we saw how to make two-way authenticated calls using SSL/certificates including some of the fundamentals about the certificates and how they work. In Part-5, which is also the final piece of this article series, we will see some common issues and how to fix them.

Making authenticated web service callouts from Salesforce to IBM Cast Iron using SSL/certificates–Part III


The first part of this article series laid out the foundation and the second part implemented the solution without security. This article will now focus on adding one-way SSL/Certificate authentication to the web service hosted by the IBM WebSphere Cast Iron runtime appliance.

Certificates and SSL: Basics

In the first part of this article series, we briefly touched on the topic of X.509 certificates. Let’s dig a little deeper to understand what a certificate is and how it is used to secure the data exchange between the server and the client.

The data that travels through the public internet can easily be snooped on. When a user visits a website that uses no security (read HTTP), the information is returned as clear text. This is fine for websites whose content is not sensitive (i.e., doesn’t reveal privacy, financial, or similar information). At the same time, if a user wants to shop from a website or access his health records from a health care service provider, then the privacy and financial information can easily be compromised if the data travels over HTTP. The data exchange between the client and the server needs to be secured in order to protect the financial/privacy information of the users. SSL was invented to address this issue; with SSL, the entire communication between the client and the server is encrypted using PKI, and only the intended parties can decrypt the information they exchange. But how does the user verify that the information did originate from the server that it thinks it is talking to?

Assume that an attacker is able to snoop in between and intercept all communications between the client and the server (a man-in-the-middle). If the client has no way to verify whose public key it received, the attacker can present its own public key to the client and act as the server, while acting as a client to the real server. The attacker can then decrypt, read, and re-encrypt everything that passes through, even though the data travels ‘over SSL’. How do we solve this problem?

This is where certificates come to the rescue. The SSL certificate, or simply a certificate, enables the server to prove its identity so that the client can trust the public key that the server exchanges with it. This trust is vouched for by a registered and trusted independent third party, called the ‘Certificate Authority’, which issues certificates certifying that the public key is owned by the entity named in the certificate. Once the client verifies the certificate, and hence that it is indeed talking to the server, it can safely use that public key: anything encrypted with it can be decrypted only by the server, which holds the corresponding private key, so the traffic between the client and the server can be protected end to end.
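The asymmetric primitive behind this can be seen with Open SSL in a few commands: data encrypted with the public key can be decrypted only with the matching private key. (File names and the message are made up; in real TLS, this primitive protects the key exchange rather than the bulk data.)

```shell
set -e
openssl genrsa -out priv.pem 2048                 # private key (server side)
openssl rsa -in priv.pem -pubout -out pub.pem     # public key (shared freely)

printf 'order #4711' > msg.txt
# Anyone can encrypt with the public key...
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in msg.txt -out msg.enc
# ...but only the private-key holder can decrypt.
openssl pkeyutl -decrypt -inkey priv.pem -in msg.enc   # prints: order #4711
```

What the certificate adds on top of this is the CA’s attestation that pub.pem really belongs to the server you think you are talking to.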

The process of establishing the secure communication between the server and the client is called SSL handshake. The process is as follows when a client tries to establish the secure connection with the server:

  • The client sends the list of cipher suites that it supports to the server.
  • The server chooses the best and strongest one from the list and sends back its digital certificate, which contains the public key.
  • The client may communicate with the CA which signed the certificate to verify its authenticity. Once the client is satisfied, it generates a premaster secret, encrypts it with the server’s public key, and sends it to the server so that a mutually agreeable session key can be generated.
  • The server decrypts the premaster secret with its private key, and both sides generate the session key. This session key is the encryption key which will be used to encrypt all subsequent communications till the end of the session.
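After the handshake, everything is symmetric. The session-key idea in the last step can be sketched with Open SSL (the key below is a stand-in for the negotiated session key; file names are made up):

```shell
set -e
key="$(openssl rand -hex 32)"            # stand-in for the negotiated session key
printf 'GET /orders HTTP/1.1' > req.txt

# Both sides hold the same key: one side encrypts, the other decrypts.
openssl enc -aes-256-cbc -pbkdf2 -k "$key" -in req.txt -out req.enc
openssl enc -d -aes-256-cbc -pbkdf2 -k "$key" -in req.enc   # recovers the request
```

This is why the expensive public-key operations happen only once per session: all subsequent traffic uses fast symmetric encryption under this shared key.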

Understanding the one-way SSL/certificate security

In a typical communication between a server and a client, it is the server which has to prove its identity to the client. For example, when a user visits an online store to buy something, the website has to prove its identity so that the user can safely browse through the website, add things to the shopping cart, provide a credit card and place the order – all with encrypted communication. (The client still needs to authenticate with a user name/password, but the client doesn’t need to prove its identity. In other words, the server doesn’t expect the client to prove its identity, and anybody who has a particular user’s user name/password can log in to the website.) This type of security is called one-way SSL/certificate authentication, since only one of the parties proves its identity and the other simply verifies it.

For most situations, one-way SSL/certificate security is enough. We will cover the basics of two-way SSL/certificate security and how to implement it in the next part. With the basics in place, the rest of this article will show the necessary steps to add one-way SSL/certificate security to the solution that we built in the previous part.

Tutorial 2: Setting up one-way SSL/certificate authentication

This tutorial will accomplish the following:

  • Generating the CSR.
  • Getting the signed certificate.
    • Getting certificate signed by public CA.
  • Installing the certificate in IBM WebSphere Cast Iron runtime appliance’s Key Store.
  • Configuring the certificate(s) in IBM WebSphere Cast Iron runtime appliance.
  • Securing the web service with HTTPS/SSL certificate in IBM WebSphere Cast Iron Studio.
  • Updating the wrapper class and the stub in Salesforce.
  • Updating the remote site settings.

Step # 1: Generating the CSR

The first step is to generate the Certificate Signing Request (CSR) that will be used to obtain the signed certificate. Most major platforms, such as Windows Active Directory Certificate Services (part of Windows Server 2003/2008) and Open SSL, support creating CSRs. For this tutorial, we will use the IBM WebSphere Cast Iron runtime appliance to generate the CSR.

Log in to the WMC and click ‘Generate’ under the Key Store panel of Security->Certificates. This will pop up a dialog box.


Figure 3a.Creating a CSR

The screenshot above contains data for a fictitious company. Update the data to reflect your needs. Here is the description of this data.

  • Alias – This will be the name of the certificate which will be referenced in the ‘Web Service’ endpoint.
  • Distinguished Name:
    • Common Name (CN): The Fully Qualified Domain Name (FQDN) of the entity to be secured. The common name can include ‘*’ to indicate a wildcard certificate.
    • Organization (O): The registered name of the organization.
    • Organizational Unit (OU): The department / organizational unit which this is being applied for.
    • Country (C): The two-letter code of the country where the organization is registered.
    • State (ST): State where the organization is registered. This should not be abbreviated.
    • Locale (L): City where the organization is registered. This should not be abbreviated.
    • Email (EMAILADDRESS): The email address of the person who administers the certificates.
  • Key Algorithm: The algorithm used to compute the key.
  • Key Length: The length of the key.
  • Valid for: The length of time, the certificate is valid.

Click ‘Generate’ and this will generate the CSR and open up a dialog box with content that looks like below:


Figure 3b.The Certificate Request

This is called the PEM formatted CSR; it is basically a base64-encoded text version of the binary (DER) CSR. Click ‘Download’ and save the file (it saves as a .pem file).
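The relationship between the two encodings is easy to see by round-tripping a CSR through the binary form with Open SSL (file names and the subject below are made up):

```shell
set -e
openssl req -newkey rsa:2048 -nodes -keyout rt.key -out rt.csr \
  -subj "/CN=demo.example.com"

openssl req -in rt.csr -outform DER -out rt.der          # PEM text -> binary DER
openssl req -in rt.der -inform DER -out roundtrip.csr    # binary DER -> PEM text

head -1 roundtrip.csr    # prints: -----BEGIN CERTIFICATE REQUEST-----
```

The BEGIN/END header lines are what distinguish a PEM file; strip them and decode the base64 and you are back at the DER bytes.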

Step # 2: Getting the signed certificate

Certificates can be signed either by a trusted public CA such as Verisign, Thawte, DigiCert, etc., or they can be self-signed. Salesforce will accept only certificates signed by a public CA for the one-way SSL/certificate authentication setup, where Salesforce is a client consuming web services hosted elsewhere. It does support self-signed certificates in the two-way SSL/certificate scenario, provided the self-signed certificate is installed in the Key Store of the corresponding web server. This tutorial will cover the process of getting a certificate signed by a public CA. Part-4 of this article series will cover the process of getting a self-signed certificate.

Getting certificate signed by public CA

The CSR can be submitted to the public CA either through email or uploading through their website and in most cases the signed certificate can be obtained in a day or two. Once the CA validates the identity of your corporation, they will issue the certificate.

Step # 3: Installing the certificate in IBM WebSphere Cast Iron runtime appliance

Once the signed certificate is received, it can be installed into the Key Store in the IBM WebSphere Cast Iron runtime appliance through the WMC. Generally, all web servers have two stores:

  • Trust Store – A Trust Store contains the certificates from public CA’s that your web server trusts. Most web servers ship with the trust certificates of the major CA’s. Naturally, the Trust Store contains only the public keys along with the certificates.
  • Key Store – A Key Store may or may not contain the certificates. If the web server provides SSL based security, then the signed certificate that you obtained (either self-signed or signed by public CA) will be stored in the Key Store. As you might have guessed, the Key Store contains both the public and private keys along with their certificates.

The following screenshot shows the certificates configuration page in IBM WebSphere Cast Iron runtime appliance.


Figure 3c. Certificate configuration page in IBM WebSphere Cast Iron runtime appliance.

Since our goal is to enable one-way SSL/certificate authentication, we have to import the signed certificate into the Key Store. To do this, click the ‘contoso wildcard’ link (or whatever name you entered when you created the CSR) under the ‘Key Store’ panel of Security->Certificates. This opens the following dialog box loaded with the certificate information.


Figure 3d. CSR information

Click ‘Upload’ and this should show the following dialog box.


Figure 3e. Signed certificate upload

Click ‘Browse’, select the signed certificate file that you received from your CA, and click ‘Open’. Alternatively, you can open the signed certificate file in Notepad and copy the contents in their entirety, which will look like the following:


Figure 3f. Pasting signed certificate content

Click ‘Import’. Your signed certificate should now be imported and ready to use.

Step # 4: Configuring the certificate(s) in IBM WebSphere Cast Iron runtime appliance

Most web servers come with the root certificates of the major public CAs pre-installed. As noted, IBM WebSphere Cast Iron ships with a default Trust Store loaded with the root certificates of the industry-leading public CAs. Make sure that the Trust Store contains the root certificate of the public CA from whom you bought your signed certificate(s). It is equally important that all the intermediate certificates that are part of your signed certificate’s chain are installed into the Trust Store. The public CA will provide all the intermediate certificates along with your signed certificate; these intermediate certificates also go into the Trust Store. You can validate the presence of the intermediate certificates in multiple ways: using online SSL utilities from your CA or from others such as DigiCert or SSL Labs, or using the OpenSSL tool.
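Once the https endpoint is up, another way to sanity-check the chain is a quick anonymous Apex callout from the Salesforce side: if the appliance presents an incomplete or untrusted chain, the callout fails with a System.CalloutException. This is only a sketch; <YourServerName> is the same placeholder used elsewhere in this series, and the https URL must already be registered under Remote Site Settings (covered later in Step # 7).

```apex
// Anonymous Apex sketch: checks that Salesforce trusts the certificate
// chain presented by the Cast Iron appliance. A SOAP endpoint will likely
// return an HTTP error for a plain GET, but the SSL handshake still has
// to succeed before any status code comes back.
HttpRequest req = new HttpRequest();
req.setEndpoint('https://<YourServerName>/UserManager/UpdateStatus'); // placeholder
req.setMethod('GET');
req.setTimeout(60000);
try {
    HttpResponse res = new Http().send(req);
    System.debug('SSL handshake OK, HTTP status: ' + res.getStatusCode());
} catch (System.CalloutException e) {
    // Typically reports an invalid certification path when the chain is broken
    System.debug('SSL/endpoint problem: ' + e.getMessage());
}
```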

Another important setting for one-way SSL/certificate authentication is the SSL Usage Type in the IBM WebSphere Cast Iron runtime appliance. To set this, click ‘Edit’ under the Settings panel of Security->Certificates.


Figure 3g. Client SSL Settings section.


Figure 3h. Update Client SSL Settings section.

Click ‘Save’. This step is what defines the one-way SSL/certificate mechanism: we are instructing the server that the client does not need to prove its identity with an SSL certificate.

Step # 5: Securing the web service with HTTPS/SSL certificate in IBM WebSphere Cast Iron Studio

The configuration on the server side is complete, and now it’s time to update the orchestration to use HTTPS and the newly installed signed certificate. In IBM WebSphere Cast Iron Studio, open the Web Service endpoint and select the ‘HTTPS’ option under ‘Security’. Check ‘Server Certificate Alias Name’ and replace the entry ‘Factory Supplied Identity’ with the certificate name that you entered while creating the CSR; in this case, it is ‘contoso wildcard’. The best practice is to create a variable for this so that it can be changed through the WMC. You can do this by clicking the little green dot in the bottom corner of the text box.


Figure 3i. Web Service endpoint with HTTPS/SSL configuration.

Update the ‘Port’ to match the SSL port that your runtime is configured to use. Go back to the WMC, stop the orchestration, and undeploy it. Back in the Studio, publish the project to the runtime appliance using File->Publish Project. Click ‘Save’ if prompted.

Step # 6: Updating the wrapper class and the stub in Salesforce.

Now that the web service is secured by an HTTPS/SSL certificate, it is time to update the code on the Salesforce side. Go back to your browser and log in to your Salesforce org. Click ‘Edit’ against the class ‘UserStatusClient’ under Setup->Develop->Apex Classes. This opens the class in the editor. Update the endpoint to use ‘https’ instead of ‘http’. The code should look like this:
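For reference, here is a sketch of how the buildService() method from the wrapper class (created in the previous article) looks after the change; only the protocol in the endpoint URL changes (plus the port, if your appliance serves SSL on a non-standard port):

```apex
private static UserStatusWsdl.Update_StatusPort buildService() {
    UserStatusWsdl.Update_StatusPort updateStatusInstance = new UserStatusWsdl.Update_StatusPort();

    // Changed from http:// to https:// so the callout is made over SSL
    updateStatusInstance.endpoint_x         = 'https://<YourServerName>/UserManager/UpdateStatus';
    updateStatusInstance.inputHttpHeaders_x = new Map<String, String>();
    updateStatusInstance.inputHttpHeaders_x.put('OrganizationId', UserInfo.getOrganizationId());
    updateStatusInstance.timeout_x          = 60000;

    return updateStatusInstance;
}
```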


Replace <YourServerName> with the actual server name and click ‘Save’. Then click ‘Edit’ against the class ‘UserStatusWsdl’ under Setup->Develop->Apex Classes, update the endpoint to ‘https’ in the same way, and click ‘Save’.

Step # 7: Update the remote site settings

As explained in the previous article, the external domain must be added to the remote site settings so that Salesforce allows outbound calls. We already did this in the previous article, but for the ‘http’ endpoint; here we will update that entry to use ‘https’ instead, as shown below:


Figure 3j. Remote site settings

Click ‘Save’ after updating the entry.

We have just completed setting up one-way SSL/certificate authentication. Go back to the ‘Users’ page under Setup->Manage Users and repeat the test that was done in Part 2. You should see the status updated in your database table. The communication between the server and the client now happens over HTTPS/SSL. You can verify this using Fiddler or HttpWatch.


This article covered the basics of certificates and SSL, and then walked through setting up one-way SSL/certificate authentication, from generating the CSR to installing the certificate into the Key Store. We also saw how to update the code and the settings on the Salesforce side to call the web service over HTTPS. The source for the updated Cast Iron Studio project and the Salesforce code can be downloaded from here. Part 4 of this article series will update this solution to include two-way SSL/certificate authentication. Stay tuned.

Making authenticated web service callouts from Salesforce to IBM Cast Iron using SSL/certificates–Part II


The first part of this article series explained the scenario and some of the basics of making authenticated web service callouts from Salesforce to IBM WebSphere Cast Iron. This article continues to build the solution for the use case that was explained in the previous article. The solution is decomposed into several tutorials, and this article lays out the first one.

Tutorial 1: Developing the Cast Iron Web Service and the Salesforce Trigger

The first tutorial will accomplish the following steps:

  • Develop the orchestration to update the ‘Users’ table and expose it as a web service over HTTP.
  • Consume the WSDL in Salesforce using WSDL2Apex (‘Generate from WSDL’).
  • Create the wrapper class in Apex that encapsulates the system-generated stub to make the callout.
  • Create the Apex trigger and make the web service callout using the class generated in the previous step.
  • Configure the Salesforce security settings to allow the outbound web service callout.


The web service is built as an orchestration in the IBM WebSphere Cast Iron runtime appliance that updates a row in a table in your database. This web service is called whenever a user’s status is updated. The following diagram shows a simplified architecture for this problem.

Figure 2a. Architecture

Step # 1: Develop the orchestration.

This Cast Iron orchestration is a simple one that receives the input, updates the ‘Users’ table, and returns the status. The following screenshot shows the orchestration and the configurations.


Figure 2b. Orchestration


Figure 2c. Web Service endpoint

Note that we are not configuring any security in the first tutorial. Hence the ‘None’ option is selected when defining the ‘Provide Service’ endpoint.

It’s a good practice to generate the WSDL of this web service and store it as part of the project. This can be accomplished easily by right-clicking the ‘Provide Service’ activity and selecting ‘Add generated WSDL to Studio Project’ from the context menu, as shown in the following screenshot.

Figure 2d. Generate WSDL

There are many ways to test the web service; the options include the Salesforce Developer Console, SoapUI, a custom test application, etc. Use your favorite tool to test the web service. Once the unit test is complete, the orchestration can be deployed to the runtime appliance. To do this, click File->Deploy Project, which prompts the following dialog box.


Figure 2e. Publishing the project to the WMC.

Enter the IP address/DNS name of the runtime appliance, the user name, and the password, and click OK. Browse through the WMC, verify the configuration parameters, and start the orchestration.

Step # 2: Consume the WSDL

Once the web service is ready and deployed, the next step is to consume the WSDL on the Salesforce side so that the web service can be called to push the status update. Sign in to your org and click ‘Generate from WSDL’ under Setup->Develop->Apex Classes. Choose the WSDL file (from the <cast iron project folder>->WSDL folder) and click the ‘Parse WSDL’ button. This tool generates three classes: one for the request, one for the response, and the actual stub. It names the files based on the namespaces defined in the WSDL file, as shown below.


Figure 2f. Before renaming the files

We will change these names as shown in the screenshot below.


Figure 2g. After renaming the files

Click the ‘Generate Apex code’ button; this creates the stub, the request, and the response classes. The ‘UserStatusWsdl’ class needs to be updated with the actual endpoint address, as Cast Iron embeds proprietary syntax when a configuration parameter is used in the endpoint definition. Click the ‘Edit’ button under Setup->Develop->Apex Classes and update the ‘endpoint_x’ variable with the endpoint URL where the Cast Iron web service is hosted.

Step # 3: Create the wrapper class

The wrapper class wraps the system-generated stub class shown in the previous step. It constructs the endpoint and the web service operation parameters and makes the call to the remote web service. As indicated previously, this tutorial calls the HTTP-based web service that is not secured with SSL/certificates. Click the ‘New’ button under Setup->Develop->Apex Classes and copy the following code snippet into the editor.

public class UserStatusClient {

    @future (callout=true)
    public static void updateUser(string userName, Boolean isActive) {
        Boolean isSuccess = true;
        try {
            UserStatusWsdl.Update_StatusPort binding = buildService();

            UserStatusRequest.statusChange_element element = new UserStatusRequest.statusChange_element();
            element.userName = userName;
            element.isActive = isActive;

            Boolean output = binding.Update_Status(element);
            System.debug('\nOutput is :' + output);
            isSuccess = true;

        } catch (Exception e) {
            isSuccess = false;
            // Log the failure; a real implementation could persist or retry
            System.debug('Callout failed: ' + e.getMessage());
        }
    }

    private static UserStatusWsdl.Update_StatusPort buildService() {
        UserStatusWsdl.Update_StatusPort updateStatusInstance = new UserStatusWsdl.Update_StatusPort();

        updateStatusInstance.endpoint_x         = 'http://<YourServerName>/UserManager/UpdateStatus';
        updateStatusInstance.inputHttpHeaders_x = new Map<String, String>();
        updateStatusInstance.inputHttpHeaders_x.put('OrganizationId', UserInfo.getOrganizationId());
        updateStatusInstance.timeout_x          = 60000;

        return updateStatusInstance;
    }
}

Make sure to update <YourServerName> with your actual server name (your Cast Iron runtime appliance).

Click ‘Save’.

Step # 4: Create the Apex trigger

An Apex trigger needs to be created on the ‘User’ object that is invoked every time a user’s status is updated. Click the ‘New’ button under Setup->Customize->Users->Triggers and copy the following code snippet into the editor.

trigger UserAfter on User (after insert, after update) {
    if (Trigger.isUpdate) {
        for (User u : Trigger.New) {
            // Call out only when the IsActive flag actually changed
            if (u.IsActive != Trigger.oldMap.get(u.Id).IsActive) {
                UserStatusClient.updateUser(u.Username, u.IsActive);
            }
        }
    }
}

Click ‘Save’. The trigger simply calls the web service using the wrapper class that was defined in the previous step.
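Salesforce requires test coverage before the trigger can be deployed to production, and real callouts are not allowed from tests, so the web service has to be mocked with Test.setMock. The sketch below is only a starting point under a couple of assumptions: the response element name (UserStatusResponse.statusChangeResponse_element) and the ‘response_x’ key are guesses based on the renamed classes from Step # 2; check your generated UserStatusResponse class for the actual names.

```apex
@isTest
private class UserStatusClientTest {

    // Fake the Cast Iron web service so no real callout is made.
    private class UpdateStatusMock implements WebServiceMock {
        public void doInvoke(Object stub, Object request,
                Map<String, Object> response, String endpoint,
                String soapAction, String requestName, String responseNS,
                String responseName, String responseType) {
            // Assumed response element name - verify against your generated code
            UserStatusResponse.statusChangeResponse_element element =
                new UserStatusResponse.statusChangeResponse_element();
            response.put('response_x', element);
        }
    }

    @isTest
    static void updateUserMakesCallout() {
        Test.setMock(WebServiceMock.class, new UpdateStatusMock());
        Test.startTest();
        UserStatusClient.updateUser('jdoe@contoso.com', false);
        Test.stopTest(); // the @future method executes here, hitting the mock
    }
}
```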

Step # 5: Configure the salesforce security settings

Salesforce needs to be configured to allow outbound HTTP/SOAP requests to the Cast Iron endpoint, and this can be accomplished as follows:

  • Click ‘New Remote Site’ under Setup->Security Controls->Remote site settings (under Administration Setup).
  • Enter a unique name for the ‘Remote Site Name’ label.
  • Enter the URL of the site for ‘Remote Site URL’.
  • Leave the ‘Disable Protocol Security’ unchecked.
  • Leave the ‘Active’ checked.
  • Click Save.

The following screenshot shows the Remote site settings screen.


Figure 2h. Remote site settings

We have completed setting up the tutorial, and it is now ready for testing. Create one or more test users and update their status by clicking the ‘Edit’ button under Setup->Manage Users->Users. Uncheck the ‘Active’ column and you will be prompted with the following dialog box.


Figure 2i. Warning dialog when trying to update the ‘Active’ column.

Click ‘OK’ and then ‘Save’. This fires the trigger, which invokes the web service that updates the user’s status in your database. To check whether the call was successful, you can either use the Developer Console or check the WMC.

With this, the first tutorial, setting up the use case with no security, is complete. Part 3 of this series will add one-way SSL/certificate authentication.

Note: The source for the Cast Iron Studio project, the Salesforce trigger, and the custom wrapper class can be downloaded from here.
