Dynamically populating custom HTML email template content with dynamic data in Force.com using Apex

The Force.com platform provides a rich set of features for sending emails through Apex code. It provides different kinds of templates, such as Visualforce templates, HTML email templates and text templates, so that developers can create rich email experiences. The beauty of these templates is that, as a developer, you can include fields from an object, so the emails are completely customized for the given scenario. For example, if you want to send a nicely formatted HTML email with opportunity information to your sales group, you can include opportunity-related fields such as the opportunity name, whether the opportunity is won or lost, and so on. There are several articles that cover the basics of email services, like this and this. By default, as a developer, you can only send the values of the object fields in the email and cannot include anything that is dynamically computed. This article will show you how to dynamically populate the content of a custom HTML email template with custom data, for example dynamically calculated data such as the number of days it took to win the opportunity, the total number of sales by a sales person in a given month, and so on. I’m also going to show you how to write generic code that can take varying dynamic parameters, populate the custom HTML email template with those dynamic parameters along with the standard/custom fields from standard/custom objects, and send an email to the recipients. What is unique in this approach is that it is not only possible to populate the email body text with custom dynamic data, but also the email subject.

For this demonstration, let us take the following use case. The requirement is to send an email notification for the Case object whenever a case is not closed and is still pending even after the set SLA period. For example, if a case has been pending for 5 days since it was opened and the SLA was set to 5 days, then on the 6th day an email notification needs to be sent. Here is a screenshot of how the email will look when it is sent. The values highlighted in red come from standard fields, while the values highlighted in blue are calculated dynamically.

[Screenshot: the case age notification email, with standard field values highlighted in red and dynamically calculated values in blue]

Here is the high level approach to accomplish this.

  • Create the custom HTML email template
  • Create the logic to populate the subject and the body text and send the email
  • Create an Apex scheduler to send the email

1. Create the custom HTML email template

A. The following screenshots show the steps in creating the custom HTML email template.

[Screenshot: creating the custom HTML email template]

B. In order to include the dynamically calculated data, follow the same convention used for merge fields. For example, to include the case status, include ‘{!Case.Status}’. Please note that, unlike the standard merge field feature in HTML templates, here you can use any format for the merge field syntax. For the custom data, I suggest using the variable name that you used to calculate the dynamic data. In the following example, ‘{!numDays}’ indicates the number of days the case has been open since it was created.

[Screenshot: the email template body with merge fields]
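For illustration, the body of such a template might contain something like the following; the exact wording here is made up, but the merge field tokens match the ones used in the Apex code later in this article:

    Case {!Case.CaseNumber} is still in ‘{!Case.Status}’ status. It was opened on {!Case.CreatedDate}
    and has been open for {!numDays} days, which exceeds the SLA of {!slaTime} {!slaTimeLabel}.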

C. Specify the text for the plain text email. You can click the ‘Copy text from HTML version’ button to copy the content from the HTML template, unless you want to specify different text for the plain text email.

[Screenshot: the plain text version of the email template]

Click ‘Save’.

2. Create the logic to populate the subject and the body text and send the email

In the above example, the subject has two dynamic parameters and the body text has six dynamic parameters. Each requirement is different, and we need a generic method to accommodate a different number of parameters for both the subject and the body text. To accomplish that, I have used two ‘maps’, one for the subject and another for the body text. This way, we can add any number of dynamic parameters and customize the content based on the requirements. I have created two Apex classes to accommodate this logic:

  • EmailMessageWrapper – Wraps an email message. Provides different constructors to capture the email information.
  • UtilityClass – Provides method(s) to send emails. One of the methods supports sending email using email templates, and this is the method that we will use in this demo.

The following code snippet from EmailMessageWrapper shows the constructor that we will use in this demo.

public EmailMessageWrapper(String fromAddr, Id toAddrId, String sub, Map<String, String> mapSubjectParams, Map<String, String> mapBodyParams) {
    this(fromAddr, null, toAddrId, null, sub, mapSubjectParams, null, mapBodyParams);
}

public EmailMessageWrapper(String fromAddr, String toAddr, Id toAddrId, String bccAddr, String sub, Map<String, String> mapSubjectParams, String body, Map<String, String> mapBodyParams) {
    this.FromAddress = fromAddr;
    this.ToAddress = toAddr;
    this.ToAddressId = toAddrId;
    this.BccAddress = bccAddr;
    this.Subject = sub;
    this.ParameterSubjectMap = mapSubjectParams;
    this.Body = body;
    this.ParameterBodyMap = mapBodyParams;
}

The mapSubjectParams and mapBodyParams parameters hold the dynamic data as key/value pairs. The ‘key’ corresponds to the merge field that is defined in the custom HTML email template, while the ‘value’ is the data that will be merged into that merge field.

The following code snippet from UtilityClass shows the sendEmail method that takes two parameters. The first parameter is a list of EmailMessageWrapper and the second parameter is the email template name.

public static void sendEmail(List<EmailMessageWrapper> listEmailMessageWrapper, String emailTemplateName) {
    List<Messaging.SendEmailResult> listEmailResult = null;
    List<Messaging.SingleEmailMessage> listSingleEmailMessages = new List<Messaging.SingleEmailMessage>();
    EmailTemplate emailTemplate = [SELECT Id, Subject, HtmlValue, Body FROM EmailTemplate WHERE Name = :emailTemplateName];
    for (EmailMessageWrapper emailMessageWrapper : listEmailMessageWrapper) {
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        mail.setSenderDisplayName('Case Age Notification');
        if (emailMessageWrapper.FromAddress != null && emailMessageWrapper.FromAddress.length() > 0)
            mail.setReplyTo(emailMessageWrapper.FromAddress);
        if (emailMessageWrapper.ToAddress != null && emailMessageWrapper.ToAddress.length() > 0)
            mail.setToAddresses(new String[] { emailMessageWrapper.ToAddress });
        else
            mail.setTargetObjectId(emailMessageWrapper.ToAddressId);
        if (emailMessageWrapper.BccAddress != null && emailMessageWrapper.BccAddress.length() > 0)
            mail.setBccAddresses(new String[] { emailMessageWrapper.BccAddress });
        // Start with the subject supplied in the wrapper, if any; otherwise use the template's subject.
        String subject = null;
        if (emailMessageWrapper.Subject != null && emailMessageWrapper.Subject.length() > 0) {
            mail.setSubject(emailMessageWrapper.Subject);
            subject = emailMessageWrapper.Subject;
        }
        else
            subject = emailTemplate.Subject;

        // Replace each merge field (map key) in the subject with its dynamic value.
        for (String key : emailMessageWrapper.ParameterSubjectMap.keySet())
            subject = subject.replace(key, (emailMessageWrapper.ParameterSubjectMap.get(key) == null ? '' : emailMessageWrapper.ParameterSubjectMap.get(key)));

        mail.setSubject(subject);
        // Replace each merge field (map key) in both the HTML and plain text bodies with its dynamic value.
        String htmlBody = emailTemplate.HtmlValue;
        String plainBody = emailTemplate.Body;
        for (String key : emailMessageWrapper.ParameterBodyMap.keySet()) {
            htmlBody = htmlBody.replace(key, (emailMessageWrapper.ParameterBodyMap.get(key) == null) ? '' : emailMessageWrapper.ParameterBodyMap.get(key));
            plainBody = plainBody.replace(key, (emailMessageWrapper.ParameterBodyMap.get(key) == null) ? '' : emailMessageWrapper.ParameterBodyMap.get(key));
        }

        mail.setHtmlBody(htmlBody);
        mail.setSaveAsActivity(false);
        mail.setPlainTextBody(plainBody);
        listSingleEmailMessages.add(mail);
    }
    if (!Test.isRunningTest())
        listEmailResult = Messaging.sendEmail(listSingleEmailMessages);
}

What you see in the above code is that we iterate through the maps to populate the dynamic data in the HTML email template content by replacing the merge field with the dynamic data from the map variable. Since it is a map, we can add any number of dynamic parameters as key/value pairs, where ‘key’ represents the merge field and the ‘value’ represents the data for the merge field.

3. Create an Apex scheduler to send the email

The final piece of this demo is to create the Apex scheduler to send the email. The following code snippet is from the CaseAgeNotificationExecute class, which retrieves the eligible cases and sends an email notification.

public static void processSchedules() {
    Integer slaTime = 3;
    List<EmailMessageWrapper> listEmailMessageWrapper = new List<EmailMessageWrapper>();

    String slaTimeLabel = 'Days';
    List<Case> listCases = [SELECT Id, CaseNumber, CreatedDate, OwnerId, Status FROM Case WHERE Status <> 'Closed'];
    for (Case c : listCases) {
        if (DateTime.now() > c.CreatedDate.addDays(slaTime)) {
            Long numDays = (DateTime.now().getTime() / 1000 / 60 / 1440) - (c.CreatedDate.getTime() / 1000 / 60 / 1440);
            Map<String, String> mapSubjectParams = new Map<String, String> {
                '{!Case.Status}' => c.Status,
                '{!numDays}' => String.valueOf(numDays)
            };

            Map<String, String> mapBodyParams = new Map<String, String> {
                '{!Case.CaseNumber}' => c.CaseNumber,
                '{!Case.Status}' => c.Status,
                '{!numDays}' => String.valueOf(numDays),
                '{!Case.CreatedDate}' => c.CreatedDate.date().format(),
                '{!slaTime}' => String.valueOf(slaTime),
                '{!slaTimeLabel}' => slaTimeLabel
            };
            listEmailMessageWrapper.add(new EmailMessageWrapper('admin@salesforce.com', c.OwnerId, null, mapSubjectParams, mapBodyParams));
        }
    }
    if (listEmailMessageWrapper.size() > 0)
        UtilityClass.sendEmail(listEmailMessageWrapper, 'Case Aging Notification Email Template');
}

To test this, you can call the CaseAgeNotificationExecute.processSchedules() method from either the Developer Console or an anonymous block in your Eclipse IDE.
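The downloadable code contains the scheduler class itself; for completeness, here is a minimal sketch of what a Schedulable wrapper and the anonymous Apex to schedule it might look like. The class name CaseAgeNotificationScheduler, the job name and the cron expression are illustrative, not taken from the download.

global class CaseAgeNotificationScheduler implements Schedulable {
    global void execute(SchedulableContext sc) {
        // Delegate to the class that selects the eligible cases and sends the notifications.
        CaseAgeNotificationExecute.processSchedules();
    }
}

// Anonymous Apex: run the logic once for a quick test...
CaseAgeNotificationExecute.processSchedules();

// ...or schedule it to run every day at 6 AM.
System.schedule('Case Age Notification', '0 0 6 * * ?', new CaseAgeNotificationScheduler());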

The code for the entire set of classes referenced in this article can be downloaded from here.

An architecture framework to handle triggers in the Force.com platform

Coding in Apex is similar to Java/C# in many ways, yet different from them in a few other ways. One thing that is common is the application of proper design patterns to solve the problem, irrespective of the platform or technology. This is particularly true for Apex triggers, because in a typical Salesforce application triggers play a pivotal role and a major chunk of the logic is driven through them. Therefore a proper design for triggers is essential, not only for successful implementations, but also for laying out a strong architectural foundation. There are several design patterns proposed, like this, this and this, and some of these patterns have gotten mainstream support among developers and architects. This article series proposes one such design pattern for handling triggers. All these design patterns share a common trait – ‘One Trigger to rule them all’ – a phrase that Dan Appleman made famous in his ‘Advanced Apex Programming’. In fact, the design pattern published in this article series is heavily influenced by Dan Appleman’s original design pattern – many thanks to you, Dan. Another design pattern that influenced this implementation was published by Tony Scott and can be found here, and I would like to thank him as well for his wonderful contribution. I also borrowed a couple of ideas from Adam Purkiss, such as wrapping the trigger variables in a wrapper class, and thank him as well. His training video on Force.com design patterns (available at Pluralsight as a paid subscription) is a great inspiration, and anyone who is serious about taking their development skills on the Force.com platform to the next level MUST watch it. That said, let’s dive into the details.

Now, why do we need yet another design pattern for triggers when we already have a few? A couple of reasons. First, though the design patterns proposed by Dan and Tony provide a solid way of handling triggers, I feel there is still room to do this better and provide a more elegant way of handling triggers. Second, both of those design patterns require touching the trigger framework code every time a new trigger is added – the method I propose eliminates that (sure, this can be replicated with their design patterns as well, as I leverage the standard Force.com API). Third, this trigger design pattern separates dispatching from handling and gives developers full flexibility in designing their trigger handling logic. Last, but not least, this new framework, as the name suggests, is not just a design pattern; it is an architecture framework that provides complete control over handling triggers in a more predictable and uniform way and helps to organize and structure the codebase in a very developer-friendly way. Further, this design pattern takes complete care of dispatching and provides a framework for handling the events; thus the developers need to focus only on building the logic to handle these events, not on dispatching.

Architecture

The fundamental principles that this architecture framework promotes are

  • Order of execution
  • Separation of concerns
  • Control over reentrant code
  • Clear organization and structure

Order of execution

In the traditional way of implementing triggers, a new trigger is defined (for the same object) as the requirements come in. Unfortunately, the problem with this type of implementation is that there is no control over the order of execution. This may not be a big problem for smaller implementations, but it is definitely a nightmare for medium to large implementations. In fact, this is the exact reason many people have come up with a trigger design pattern that promotes the idea of ‘One trigger to rule them all’. This architecture framework supports this and achieves this principle by introducing the concept of ‘dispatchers’. More on this later.

Separation of concerns

The other design patterns that I referenced above pretty much promote the same idea of ‘One trigger to rule them all’. But the one thing that I see missing is the separation of concerns. What I mean here is that the trigger factory/dispatcher calls a method in a class which handles all the trigger event related code. Once again, for smaller implementations this might not be a problem, but for medium to large implementations it will very soon become difficult to maintain. As the requirements change or new requirements come in, these classes grow bigger and bigger. The new framework alleviates this by implementing ‘handlers’ to address the separation of concerns.

Control over reentrant code

Many times there will be situations where the trigger code needs to perform a DML operation on the same object, which might end up invoking the same trigger once again. This can go recursive, and usually developers introduce a condition variable (typically a static variable) to prevent that. But this is not an elegant solution, because it does not handle the reentrant code in an orderly fashion. The new architecture provides complete control to the developers, such that they can either deny reentrancy or allow both the first-time call and the reentrant call, but in totally separate code paths, so they don’t step on each other.

Clear organization and structure

As highlighted under the section ‘Separation of concerns’, with medium to large implementations the codebase grows with no order or structure, and very soon the developers might find it difficult to maintain. The new framework provides complete control over organizing and structuring the codebase based on the object and the event types.

UML Diagram

The following UML diagram captures all the pieces of this architecture framework.

[UML diagram: Trigger Architecture Framework]

Framework Components

TriggerFactory

The TriggerFactory is the entry point of this architecture framework and is the only line of code that resides in the trigger definition (you may have other code such as logging, but as far as this architecture framework is concerned, this is the only code required). The TriggerFactory class, as the name indicates, is a factory class that creates an instance of the dispatcher object that the caller (the trigger) specifies and delegates the call to the appropriate event handler method (for the trigger event such as ‘before insert’, ‘before update’, …) that the dispatcher provides. The beauty of the TriggerFactory is that it automatically finds the correct dispatcher for the object that the trigger is associated with, as long as the dispatcher is named per the naming convention, and this convention is very simple, as specified in the following table.

Object Type | Format | Example | Notes
Standard objects | <Object>TriggerDispatcher | AccountTriggerDispatcher |
Custom objects | <Object>TriggerDispatcher | MyProductTriggerDispatcher | Assuming MyProduct__c is the custom object, the dispatcher is named without the ‘__c’ suffix.

It accomplishes this by using the Type API. Using the Type API to construct the instances of the dispatchers avoids touching the TriggerFactory class every time a new trigger dispatcher is added (ideally only one trigger dispatcher class is needed per object); a sketch follows.
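To make this concrete, here is a minimal sketch of what the trigger definition and the Type-based lookup inside the factory could look like. This is not the exact code from the framework package; the method name createAndExecuteHandler, the exception class and the TriggerParameters constructor signature are illustrative.

trigger AccountTrigger on Account (before insert, before update, before delete,
                                   after insert, after update, after delete, after undelete) {
    // The only line in the trigger: hand everything over to the framework.
    TriggerFactory.createAndExecuteHandler(Account.sObjectType);
}

public class TriggerFactory {
    public class TriggerFactoryException extends Exception {}

    public static void createAndExecuteHandler(Schema.SObjectType soType) {
        // Derive the dispatcher class name from the object name:
        // Account -> AccountTriggerDispatcher, MyProduct__c -> MyProductTriggerDispatcher.
        String objectName = soType.getDescribe().getName().replace('__c', '');
        Type dispatcherType = Type.forName(objectName + 'TriggerDispatcher');
        if (dispatcherType == null) {
            throw new TriggerFactoryException('No trigger dispatcher found for ' + objectName);
        }
        ITriggerDispatcher dispatcher = (ITriggerDispatcher) dispatcherType.newInstance();

        // Wrap the trigger context variables once and route the call to the matching event handler.
        TriggerParameters tp = new TriggerParameters(Trigger.old, Trigger.new, Trigger.oldMap, Trigger.newMap,
            Trigger.isBefore, Trigger.isAfter, Trigger.isDelete, Trigger.isInsert, Trigger.isUpdate, Trigger.isUndelete);
        if (Trigger.isBefore) {
            if (Trigger.isInsert) dispatcher.beforeInsert(tp);
            else if (Trigger.isUpdate) dispatcher.beforeUpdate(tp);
            else if (Trigger.isDelete) dispatcher.beforeDelete(tp);
        } else {
            if (Trigger.isInsert) dispatcher.afterInsert(tp);
            else if (Trigger.isUpdate) dispatcher.afterUpdate(tp);
            else if (Trigger.isDelete) dispatcher.afterDelete(tp);
            else if (Trigger.isUndelete) dispatcher.afterUndelete(tp);
        }
    }
}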

Dispatchers

The dispatchers dispatch the trigger events to the appropriate event handlers. The framework provides the interface and a base class with virtual implementations of the interface methods, but the developers need to provide their own dispatcher class, derived from either the interface or the virtual base class, for each object to which they want this framework applied. Ideally, the developers want to inherit from TriggerDispatcherBase, as it not only provides the virtual methods – giving the flexibility to implement only the event handlers they are interested in – but also provides the reentrancy feature for their logic.

ITriggerDispatcher

As discussed above, the ITriggerDispatcher essentially contains the event handler method declarations. The trigger parameters are wrapped in a class named ‘TriggerParameters’.
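For illustration, the interface could look like the following; the exact method signatures in the framework may differ slightly, but each trigger event gets one method that receives the wrapped trigger parameters.

public interface ITriggerDispatcher {
    void beforeInsert(TriggerParameters tp);
    void beforeUpdate(TriggerParameters tp);
    void beforeDelete(TriggerParameters tp);
    void afterInsert(TriggerParameters tp);
    void afterUpdate(TriggerParameters tp);
    void afterDelete(TriggerParameters tp);
    void afterUndelete(TriggerParameters tp);
}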

TriggerDispatcherBase

The TriggerDispatcherBase class implements the ITriggerDispatcher interface, providing virtual implementations for the interface methods, so that developers need not implement the event handlers they do not wish to use. TriggerDispatcherBase also has one more important method, named ‘execute’, which controls whether a call is dispatched in a reentrant fashion or not. It has a separate member variable for each event to hold the instance of the trigger handler for that particular event, which the ‘execute’ method uses to control the reentrancy feature.

<Object>TriggerDispatcher

The trigger dispatcher classes contain the methods to handle the trigger events, and this is the place where the developers have to instantiate the appropriate trigger event handler classes. At the heart of the dispatcher lies the ITriggerDispatcher interface, which developers implement to provide the appropriate dispatcher for their objects. The interface provides definitions for all trigger events, which means that a trigger dispatcher that implements it directly must implement methods for all the events. However, since it may not be necessary to provide implementations for all trigger events, the framework provides a base class named ‘TriggerDispatcherBase’ that provides default implementations (virtual methods) to handle all events. This allows developers to implement methods for only the events that they really have to, by deriving from TriggerDispatcherBase instead of implementing the ITriggerDispatcher interface directly, as TriggerDispatcherBase implements this interface. One more reason to derive from TriggerDispatcherBase instead of ITriggerDispatcher is that the TriggerDispatcherBase.execute method provides the reentrancy feature, and the developer will not be able to leverage this feature if the trigger dispatchers for the objects do not derive from this class.

It is very important that the trigger dispatcher class be named per the naming convention described under the TriggerFactory section. If this naming convention is not followed, the framework will not be able to find the dispatcher and the trigger will throw an exception.

Understanding the dispatcher is really critical to successfully implementing this framework, as this is where the developer can precisely control the reentrancy feature. This is achieved by the method named ‘execute’ in TriggerDispatcherBase, which the event handler methods call, passing an instance of the appropriate event handler class. The event handler methods set a variable to track reentrancy, and it is important to reset it after calling the ‘execute’ method. The following code shows a typical implementation of the event handler code for the ‘after update’ trigger event on the Account object.

private static Boolean isAfterUpdateProcessing = false;

public virtual override void afterUpdate(TriggerParameters tp) {
    if (!isAfterUpdateProcessing) {
        isAfterUpdateProcessing = true;
        execute(new AccountAfterUpdateTriggerHandler(), tp, TriggerParameters.TriggerEvent.afterUpdate);
        isAfterUpdateProcessing = false;
    }
    else execute(null, tp, TriggerParameters.TriggerEvent.afterUpdate);
}

In this code, the variable ‘isAfterUpdateProcessing’ is the state variable, and it is initialized to false when the trigger dispatcher is instantiated. Then, inside the event handler, a check is made to ensure that this method is not already in progress, the variable is set to true to indicate that a call to handle the after update event is in progress, the ‘execute’ method is called, and finally the state variable is reset. At the outset, resetting the state variable to false may not seem very important, but failure to do so will largely invalidate the framework, and in fact in most cases you may not be able to deploy the application to production. Let me explain. When a user does something with an object that has this framework implemented, for example saving a record, the trigger gets invoked, the before trigger handlers are executed, the record is saved, the after trigger event handlers are executed, and then either the page is refreshed or redirected to another page depending on the logic. All of this happens in one single execution context. So it might look like the state variable, such as ‘isAfterUpdateProcessing’, only needs to be set to true inside the if condition and never reset; but because it is a static variable it lives for the whole execution context, and if it is never reset, any later update on the same object in that context (for example, a second update performed in a unit test method) will be treated as a reentrant call and will skip the ‘mainEntry’ logic, which typically surfaces as failing tests that block deployment.

Handlers

Handlers contain the actual business logic that needs to be executed for a particular trigger event. Ideally, every trigger event will have an associated handler class to handle the logic for that particular event. This increases the number of classes to be written, but it provides a very clean organization and structure to the codebase. This approach proves itself in the long run: maintenance and enhancements are much easier, as even a new developer will know exactly where to make changes once he/she understands how this framework works.

Another key aspect of the handlers is the flexibility they give developers to implement or ignore the reentrancy functionality. The ‘mainEntry’ method is the gateway for the initial call. If this call makes a DML operation on the same object, it will result in invoking the same trigger again, but this time the framework knows that a call is already in progress – hence, instead of ‘mainEntry’, it calls the ‘inProgressEntry’ method. So if reentrant behavior is to be supported, the developer needs to place that code inside the ‘inProgressEntry’ method. The framework provides only the interface; the developers need to implement it for each event of an object that they want to handle, and they can choose not to implement handlers for events they are not going to handle.

ITriggerHandler

The ITriggerHandler defines the interface to handle the trigger events in reentrant fashion or non-reentrant fashion.

TriggerHandlerBase

The TriggerHandlerBase is an abstract class that implements the ITriggerHandler interface, providing virtual implementations for those interface methods, so that developers need not implement the methods they do not wish to use, specifically the ‘inProgressEntry’ and ‘updateObjects’ methods.

<Object><Event>TriggerHandler

As discussed above, the developer needs to define one class per event per object that implements the ITriggerHandler interface. While there is no strict naming requirement like there is for the dispatcher, it is suggested to name the handlers per the following convention.

Object Type | Format | Example | Notes
Standard objects | <Object><Event>TriggerHandler | AccountAfterInsertTriggerHandler, AccountAfterUpdateTriggerHandler, etc. |
Custom objects | <Object><Event>TriggerHandler | MyProductAfterInsertTriggerHandler | Assuming MyProduct__c is the custom object, the handler is named without the ‘__c’ suffix.

So, if we take the Account object, we will have the following event handler classes, to be implemented by the developer, that map to the corresponding trigger events.

Trigger Event Event Handler
Before Insert AccountBeforeInsertTriggerHandler
Before Update AccountBeforeUpdateTriggerHandler
Before Delete AccountBeforeDeleteTriggerHandler
After Insert AccountAfterInsertTriggerHandler
After Update AccountAfterUpdateTriggerHandler
After Delete AccountAfterDeleteTriggerHandler
After UnDelete AccountAfterUndeleteTriggerHandler

Note that NOT all the event handler classes defined in the above table need to be created. If you are not going to handle a certain event, such as ‘After Undelete’ for the Account object, then you do not need to define ‘AccountAfterUndeleteTriggerHandler’.
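As an illustration, an ‘after update’ handler for the Account object could be sketched as follows. The property names on TriggerParameters and the way records are queued for update are assumptions based on the descriptions in this article, not the exact code from the package.

public class AccountAfterUpdateTriggerHandler extends TriggerHandlerBase {
    public override void mainEntry(TriggerParameters tp) {
        // First, non-reentrant entry: the business logic for the after update event goes here.
        for (Account acct : (List<Account>) tp.newList) {
            // ... business logic, e.g. queue related records for update via the map
            //     described under 'Using the framework to handle updates' below ...
        }
    }

    public override void inProgressEntry(TriggerParameters tp) {
        // Reentrant entry: invoked when mainEntry performed DML on Account and the trigger fired again.
        // Leave empty to ignore reentrancy, or add the reentrant logic here.
    }
}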

TriggerParameters

The TriggerParameters class encapsulates the trigger parameters that the Force.com platform provides during the invocation of a trigger. It is simply a convenience, as it avoids typing all those parameters again and again in the event handler method signatures.
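A simplified sketch of such a wrapper is shown below; the actual class in the framework carries more information, and the member names here are illustrative (they match the ones used in the sketches above).

public class TriggerParameters {
    public enum TriggerEvent { beforeInsert, beforeUpdate, beforeDelete,
                               afterInsert, afterUpdate, afterDelete, afterUndelete }

    public List<SObject> oldList { get; private set; }
    public List<SObject> newList { get; private set; }
    public Map<Id, SObject> oldMap { get; private set; }
    public Map<Id, SObject> newMap { get; private set; }
    public TriggerEvent tEvent { get; private set; }

    // Captures the trigger context variables once, so every handler method needs only this one argument.
    public TriggerParameters(List<SObject> olist, List<SObject> nlist, Map<Id, SObject> omap, Map<Id, SObject> nmap,
                             Boolean isBefore, Boolean isAfter, Boolean isDelete,
                             Boolean isInsert, Boolean isUpdate, Boolean isUndelete) {
        this.oldList = olist;
        this.newList = nlist;
        this.oldMap = omap;
        this.newMap = nmap;
        if (isBefore) {
            if (isInsert) tEvent = TriggerEvent.beforeInsert;
            else if (isUpdate) tEvent = TriggerEvent.beforeUpdate;
            else if (isDelete) tEvent = TriggerEvent.beforeDelete;
        } else {
            if (isInsert) tEvent = TriggerEvent.afterInsert;
            else if (isUpdate) tEvent = TriggerEvent.afterUpdate;
            else if (isDelete) tEvent = TriggerEvent.afterDelete;
            else if (isUndelete) tEvent = TriggerEvent.afterUndelete;
        }
    }
}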

<Object>Helper

Often there will be situations where you want to reuse code across different event handlers, such as sending email notifications. After all, one of the fundamental principles of object-oriented programming is code reusability. To achieve that, this architecture framework proposes placing all the common code in a helper class, so that not only the event handlers, but also controllers, scheduled jobs and batch Apex jobs can use the same methods if necessary. But this approach is not without its caveats; for example, if the helper method varies slightly based on the event type, how would you handle that? Do you pass the event type to the helper method, so that the helper method uses conditional logic? There is no right or wrong answer, but personally I think it is not a good idea to pass the event types to the helper methods; for this example, you can just pass a flag and that solves the issue. But for other types of situations, passing a flag may not be enough – you need to think a little smarter, and I’ll leave it to the reader, as the situation is totally unique to their requirements.

Using the framework to handle updates

The framework comes with a default implementation to handle updates. It achieves this by adding a Map variable to hold the objects to be updated, and all the developer needs to do is add the objects to be updated to this map in their event handlers. The TriggerHandlerBase abstract class has the default implementation to update the objects in this map variable, which is called by the ‘execute’ method in TriggerDispatcherBase. Note that I chose to call the updateObjects method only for the ‘mainEntry’ and not for the ‘inProgressEntry’, simply because I didn’t have the time to test it.

Another thing to note is that, since the framework will utilize helper classes to get things done, sometimes the objects that you need to update may be produced in these helper classes. How would you handle that? I suggest designing your helper methods to return those objects as a list and adding them to the map from your event handler code, instead of passing the map variable to the helper class.

This framework can be easily extended to handle inserts and deletes as well. To handle inserts, add a List variable to TriggerHandlerBase and provide a virtual method named ‘insertObjects’ that inserts the list, and call it from the ‘execute’ method as is done for updates. I’ll update the code when time permits; in the meanwhile, I’ll leave this to the reader to implement for their projects.

Note that it is not possible to do an upsert in this way, because the Force.com platform doesn’t support upserting generic sObjects, and since our map uses the generic sObject type, this is not possible. (Thanks to Adam Purkiss for pointing out this fact.)

Example

To illustrate this design pattern, the following diagram depicts how this architecture framework applies to the Account object.

[Diagram: the trigger architecture framework applied to the Account object]

The trigger architecture framework project is available as open source and hosted at Google Code. The source code for this entire framework, along with a dummy implementation which includes classes for handling trigger events on the Account object, is available as an AppExchange package. If you need it as a zip file, it can be downloaded from here. The code documentation is built using ApexDoc and can be downloaded from here.

Summary

The new trigger architecture framework provides a strong foundation on the Force.com platform to build applications with the multiple benefits outlined previously. The framework does involve many parts, the business solutions built using it need to follow certain rules and conventions, and the number of classes written will be a little high – but the benefits from this approach will easily outlive the effort and the investment. It may still be overkill for very small implementations, but it will prove its usefulness for medium to large implementations.

Integrating with Force.com Platform using DbAmp: Part-03: Insert

Typically, when data is uploaded to Salesforce it will usually be an upsert operation, but it is not uncommon to use the insert operation. For example, if you are importing a fresh set of data or importing data for new object(s), there is no need to do an upsert, and insert is a perfect candidate for these kinds of scenarios.
The insert operation using DbAmp can be done either directly against a Salesforce org, or locally in a staged database and then pushed into the Salesforce org. The main advantage of using the staged database is that while the direct access method is very slow, especially for bulk uploads, the local access method (the staged database) is much faster. Also, the direct access method consumes the governor limits at a faster rate compared to the local access method: in the direct access method, each record inserted counts against the API limit, while in the local access method, even without using the BULKAPI switch, each batch of 200 record inserts counts as one API call. We will cover both scenarios here and demonstrate them with a couple of examples.
Direct Access Method:
It’s very simple to use the direct access method. The following statement inserts a new Contact record in the Contact object:
INSERT INTO SFORCE...Contact (LastName) VALUES ('TestLastName')
As you may guess, the auto-populated system fields such as Id, CreatedById, CreatedDate, LastModifiedById and LastModifiedDate should be left blank. These fields will be populated by Salesforce itself.
Staged (Local) Database Method:
In the staged database method, a database is created locally in the SQL Server and the data is inserted using a temporary table called a load table. Here are the brief steps to accomplish this:
  • Create a database in the SQL Server (one-time task)
  • Replicate/refresh the table into which you want to insert the record(s)
  • Create the load table. The load table is a physical table, but temporary in nature. It should include the following fields:
    • Id, Error and any other fields that are necessary or that you want to populate.
      • The ‘Id’ field should be left empty and should not be populated. When the insert succeeds, this field is populated with the Id returned by Salesforce.
      • The ‘Error’ field is populated by DbAmp with the status of the insert operation.
  • Populate the load table
  • Insert the data from the table into Salesforce using the SF_BULKOPS stored procedure.
Note that SF_BULKOPS is one of several stored procedures that ship with DbAmp. We will cover the options and nitty-gritty of this stored procedure in a separate blog post.
Now, let’s look at an example of doing an insert with the staged database method. The following script populates a list of contacts in the load table and then calls SF_BULKOPS to insert those contacts into Salesforce.
EXEC SF_REFRESH 'SFORCE', [Contact]

CREATE TABLE [dbo].[Contact_Load]
(
    [Id] [nvarchar](100) NULL,
    [Error] [nvarchar](4000) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [LastName] [nvarchar](100) NULL
)

INSERT INTO Contact_Load (LastName) VALUES ('Smith');
INSERT INTO Contact_Load (LastName) VALUES ('Adams');
INSERT INTO Contact_Load (LastName) VALUES ('Paul');

EXEC dbo.SF_BulkOps @operation = 'Insert', @table_server = 'SFORCE', @table_name = 'Contact_Load'
Let’s analyze what this script does.
  • The first line, ‘EXEC SF_REFRESH 'SFORCE', [Contact]’, refreshes the local Contact table from the Contact object in Salesforce. ‘SFORCE’ is the linked server name and ‘[Contact]’ is the Salesforce Contact object. The reason we always refresh before doing any CRUD operation is that there could be many changes made between the last time the Contact object was refreshed and now. SF_REFRESH brings the delta changes from the Salesforce Contact object to the local Contact table in the SQL Server.
  • The ‘CREATE TABLE [dbo].[Contact_Load]…’ statement creates the load table. As described previously, this table should have at least two fields: ‘Id’ and ‘Error’.
  • The ‘INSERT INTO Contact_Load…’ statements insert the records into this table. In a real-world job, the load table will typically be populated from a SELECT query against your own tables.
  • The line ‘EXEC dbo.SF_BulkOps…’ is what actually inserts the data into Salesforce.
When you execute the script and the execution completes successfully, the contacts will have been inserted into the Salesforce Contact object. The ‘Id’ field will be populated with the Id of each new record, and the ‘Error’ field will be populated with a status message. Older versions of DbAmp used to leave this field empty for successful rows, but the documentation indicates that the latest version of DbAmp populates this field even for successful operations. Note that if you accidentally populate the Id field in the Contact_Load table with an existing record’s Id, SF_BULKOPS will not detect that it is an existing record and will still insert it as a new record in the Contact object. In other words, whatever records you have in the Contact_Load table, the SF_BULKOPS procedure will treat them as new records to be inserted and will do so.
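Once the call returns, it is a good practice to inspect the ‘Error’ column before proceeding. The exact success text written by DbAmp depends on the version, so treat the filter in this sketch as illustrative and check your DbAmp documentation for the precise wording:

-- Rows that did not come back with a success status need attention.
-- (Older DbAmp versions leave Error NULL on success; adjust the filter accordingly.)
SELECT Id, LastName, Error
FROM Contact_Load
WHERE Error NOT LIKE '%Operation Successful%'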
Using Bulk API for INSERT
Salesforce.com provides two different types of APIs for sending (insert, upsert, update, delete) data to Salesforce.com. All the commercial tools such as Dell Boomi, Snaplogic, Cast Iron, Jitterbit, DbAmp, Pragmatic Works Task Factory, etc., the open source tools such as Mulesoft and dataloader.io, and even Salesforce.com’s own Data Loader use these same APIs to send data to Salesforce.com; there is no other back door into Salesforce.com. The first of them, the Web Services API, is the one used in most scenarios. The Web Services API is synchronous and sends records in batches of 200. That means if you have 1,000 records, it makes 5 API calls to send all 1,000 records. The Web Services API should cater to most integration needs, because in a typical integration scenario the data moved is usually a few hundred to a few thousand records. But there are situations where bulk data movement is necessary, which may run to a few thousand to a few hundred thousand records. For such scenarios, Salesforce has a different API called the Bulk API. The Bulk API can support up to 10,000 records in one single API call. It is also asynchronous in nature, which means that when the Bulk API is used, the call is queued and executed whenever the system has the resources, which is usually quite soon. It is the responsibility of the caller to query and get the status of the queued call. So, if you need to send 100,000 records, the Bulk API would take only 10 API calls, while the Web Services API would take 500 API calls.
DbAmp supports both APIs and uses the Web Services API by default. If you need to use the Bulk API, it can be specified as an option. For example:
EXEC dbo.SF_BulkOps @operation = N'Insert:bulkapi', @table_server = 'SFORCE', @table_name = 'Contact_Load'

One important thing to remember is that when you use the Bulk API, the ‘Error’ column will be updated with a token rather than the status, and you can use this token to track the status. To get the status, use the following command:
EXEC dbo.SF_BulkOps @operation = N'Status', @table_server = 'SFORCE', @table_name = 'Contact_Load'
Conclusion
This article explored the basics of using DbAmp to insert data from the local SQL database into Salesforce. There is much more that can be done with the insert operation, and complete details can be found in the DbAmp documentation. The next article will focus on using DbAmp to update existing data in Salesforce from the local SQL Server.

Integrating with Force.com Platform using DbAmp: Part-02: Basics

In the first part of the article series titled ‘Integrating with Force.com platform using DbAmp’, we got introduced to the DbAmp tool and its benefits. We also went through the fundamentals and learned how it works and what its components are. In this article, we’ll look at some basics of DbAmp, as these are essential for developing the DbAmp jobs that integrate with Salesforce. The official DbAmp documentation is an excellent source of information to learn the basics, and this article series will touch upon a few topics that you can find in the documentation, but will not go into them in detail, as they are already covered there.

Execution Types

When writing SQL queries to perform any CRUD operations against Salesforce, there are two ways a developer can choose to write them.

Direct Access:

The first method is to write the queries directly against Salesforce. In this method, the developers use a specific syntax to access the Salesforce objects, and the queries are executed against Salesforce in real time. The format is as follows:

LINKED_SERVER_NAME...OBJECT_NAME

For example, to select all records from the Account object, you simply execute the following command (assuming your linked server name is SFORCE):

SELECT * FROM SFORCE...Account

This has a few drawbacks. First, performance takes a big hit, as every statement executes directly against Salesforce. Second, it has a big impact on the API call limit: if you use INSERT or UPDATE in this manner, every record that gets inserted or updated counts as an API call.

Local Access:

With the local access method, you first replicate the required objects and store them in your local SQL Server database, and you write your SELECT queries against these local tables. For CUD queries, you create a load table (as explained in part 1). This method has a couple of advantages. First, performance is much improved, because your SELECT queries run against the local database and not against Salesforce. Second, as a developer, you have the option of using the Bulk API, with which up to 10,000 records can be sent in one single call. Also, when using SF_BULKOPS (without the bulkapi option), DbAmp batches up to 200 records in one single call.

Object-Table representation

DbAmp supports both child-to-parent relationship queries and parent-to-child queries. When replicating Salesforce objects from the Force.com platform to the local SQL Server database, DbAmp uses a special mechanism to flatten the Salesforce objects into local tables. Here is a list of things that happen when an object is replicated:

  • DbAmp creates every table with a column titled ‘Id’ which is mapped to the object Id. Note that Salesforce never displays the Id field in the object definition; you can check my earlier article on Object IDs to learn more about it. DbAmp also makes this field the primary key.
  • For lookup fields, the columns are created with the same name, suffixed by ‘Id’. For example, the ‘CreatedBy’ field becomes the ‘CreatedById’ column in the local table, and DbAmp stores the record Id of the record it points to. For example, when replicating the Case object, the field ‘CreatedBy’ is created as the ‘CreatedById’ column in the local Case table, and if the case was created by the user ‘John Adams’ and the Id of the User record for this user is ‘005014A7CA434AC’, then the ‘CreatedById’ column in the local Case table will have the value ‘005014A7CA434AC’.
  • When a parent object in a lookup or master-detail relationship is replicated, there will be one record for each record in the related list of the parent object. For example, if an Account has 3 contacts, there will be a total of three account records. If there are multiple related lists, there will be one record for each record in each related list. For example, if an Account has 2 contacts and 5 cases, there will be a total of seven account records.

Handling DateTime

The values of datetime types are always stored as UTC in Salesforce. When a user sees the value of a DateTime field, it is presented in the user’s timezone (configured in the user’s personal settings). When the DateTime fields in Salesforce are manipulated using DbAmp, they are converted to the local timezone. This is handled by DbAmp automatically, and users need not convert back and forth between UTC and local time. This is a very important point to remember when dealing with DateTime fields.

Using Column Subset Views

The Salesforce API has a limit of 10,000 characters for SELECT queries; hence objects with a huge number of columns that cross this limit cannot be replicated or refreshed in a single query. The DbAmp documentation mentions that if the field count is above 325, the API may return an error; in my experience, we got this error when the field count went above 400. To work around this, DbAmp suggests that the object can be split and then queried, and this is called Column Subset Views. In this method, the developer writes a SELECT query with a specific form of syntax to split the object. The syntax is as follows:

SELECT * FROM <LinkedServerName>...<ObjectName>_ColumnSubset<StartingAlphabetPrefix><EndingAlphabetPrefix>

For example, to split the Case object into two tables, the following queries can be issued.

SELECT * FROM SFORCE...[Case]_ColumnSubsetAM

SELECT * FROM SFORCE...[Case]_ColumnSubsetNZ

Note that the Case object name is enclosed in square brackets. This is because ‘Case’ is a reserved keyword, and any reserved keyword used in table/column names should be enclosed in square brackets. The above queries retrieve all the records from the Case object: the first returns the columns whose names start with A through M (both inclusive), and the second returns the columns whose names start with N through Z (both inclusive). The column subset view can be used with the SF_REPLICATE stored procedure as well, as follows:

EXEC SF_REPLICATE 'SFORCE', 'Case_ColumnSubsetAM'

EXEC SF_REPLICATE 'SFORCE', 'Case_ColumnSubsetNZ'

Governor Limits

In order to work with Salesforce data, every tool in the market has to use the Salesforce API; there is no shortcut or preferred treatment for certain vendors. To ensure fair usage, Salesforce restricts resource usage by putting a limit on API calls. The following list provides the high-level API cost of each operation:

  • SELECT / SF_REPLICATE / SF_REFRESH => 1 API call per 1,000 records
  • UPDATE and INSERT => 1 API call per record
  • SF_BULKOPS without the bulkapi switch => 1 API call for each batch of 200 records
  • SF_BULKOPS with the bulkapi switch => 1 API call for each batch of 10,000 records

Summary

Understanding the fundamentals of DbAmp helps developers write better queries against Salesforce. In this article we covered those fundamentals, and in the next article we will explore how records can be retrieved from Salesforce. For a list of the articles in this series, please check the first part of the article series.

Integrating with Force.com platform using DbAmp: Part-01: Introduction

Introduction
The Force.com platform is becoming a ubiquitous platform of choice in the cloud for many organizations to move their Sales, Service and custom cloud applications. These applications or solutions cannot exist in a vacuum, and in almost all cases they need to tightly integrate with the enterprise data that is housed in the local on-premise network, or even with other cloud services such as Windows Azure or AWS. For example, if an organization implements the sales cloud, they need user information from their internal Active Directory loaded into the Contact object, and customer lists from SAP or another system for the Account or Contact objects. Similarly, custom Force.com applications might require data from the organization's internal systems. Integration is a big concern for organizations as they grow their presence in the cloud, and numerous tools and services are flooding the market to integrate with all kinds of systems.
This article series will cover one such tool, called DbAmp, from ForceAmp.com. The goal here is to demonstrate the capabilities of this tool and show how common tasks such as insert, delete, upsert and update operations can be done using it to accomplish integration between a SQL Server deployed in the on-premise network and the Force.com platform. All of these topics are discussed elaborately in the following articles:
  • Integrating with Force.com Platform using DbAmp: Part-01: Introduction
  • Integrating with Force.com Platform using DbAmp: Part-02: Basics
  • Integrating with Force.com Platform using DbAmp: Part-03: Insert
  • Integrating with Force.com Platform using DbAmp: Part-04: Update
  • Integrating with Force.com Platform using DbAmp: Part-05: Upsert
  • Integrating with Force.com Platform using DbAmp: Part-06: Delete
  • Integrating with Force.com Platform using DbAmp: Part-07: Using SSIS
  • Integrating with Force.com Platform using DbAmp: Part-08: SF_REPLICATE
  • Integrating with Force.com Platform using DbAmp: Part-09: SF_REFRESH
  • Integrating with Force.com Platform using DbAmp: Part-10: SF_BULKOPS
  • Integrating with Force.com Platform using DbAmp: Part-11: Under the hood
  • Integrating with Force.com Platform using DbAmp: Part-12: Common integration techniques
DbAmp
DbAmp is a simple, yet very powerful, tool that exposes Force.com as another database to your SQL Server. It allows developers to use their familiar SQL (and SOQL via OPENQUERY) to do all the CRUD operations on the Salesforce objects. The data on the Force.com platform can be backed up completely to an on-premise SQL Server using DbAmp with very little programming.
Benefits
Complete backup of Force.com data
DbAmp can be used to back up the data from the Force.com platform completely to a SQL Server database located on-premise. It is not limited to any specific Salesforce edition, application or objects; it can back up standard and custom objects and even custom settings for all types of Force.com clouds, such as Sales, Service, Marketing and custom clouds. However, it is not a universal Force.com backup tool: it cannot back up other Force.com features such as profiles, permission sets, groups, roles, etc., because it is only a data backup tool. It does support both full and incremental backups of the data. Full backups completely back up all the objects with all the records, while incremental backups refresh only the delta changes from Force.com to the local on-premise database.
Build reports
DbAmp allows developers to build reports that can use data both from the local repository and from Salesforce, in both real-time and offline fashion, using simple SQL SELECT queries. For the offline case, DbAmp provides stored procedures (which will be covered shortly) to bring the data from the Force.com platform to on-premise.
Integrate local and cloud
DbAmp can be used to integrate local on-premise applications with Force.com data in both directions. That means data can be inserted, upserted or deleted in Force.com from the on-premise SQL database.
How it works?
As outlined previously, DbAmp exposes Force.com as another database to your SQL Server. It does this by providing an OLE DB provider for Force.com. As you might know, SQL Server can connect to other data sources that support OLE DB (such as another SQL Server or Oracle) and expose them as another database using a linked server. The following diagram (taken from the above link) illustrates this idea:

[Diagram: a SQL Server linked server connecting to an external data source through an OLE DB provider]
This allows the developers to use the DbAmp OLE DB provider to create a linked server that connects to Force.com. Once this linked server is created, developers can use regular SQL queries to manipulate the data on the Force.com platform. Developers can also use SOQL directly within SQL using OPENQUERY, but keep in mind that there are some limitations in using SOQL with OPENQUERY; check the DbAmp documentation for more details. One important thing to highlight here is that DbAmp takes care of the governor limits, so a developer need not worry about governor limit exceptions when he/she tries to upsert, say, 5,000 rows/records. DbAmp even allows the developer to specify the limit per operation, so that the developer can precisely control how the data is batched.
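For example, once the linked server (named SFORCE here) is in place, both plain SQL and pass-through SOQL can be used. This is just a sketch; the DbAmp documentation describes the exact OPENQUERY support and its limitations:

-- Plain SQL using the four-part linked server naming:
SELECT Id, LastName FROM SFORCE...Contact WHERE LastName = 'Smith'

-- SOQL passed through to Salesforce via OPENQUERY (the string contains SOQL, not SQL):
SELECT * FROM OPENQUERY(SFORCE, 'SELECT Id, Name FROM Account WHERE CreatedDate = THIS_MONTH')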
Components
Now that we have understood how DbAmp works, let’s see the components of the DbAmp to better understand about how DbAmp works.
DbAmp OLE DB Provider
The DbAmp OLE DB provider is used by the linked server configuration to connect to Force.com. This is the engine of DbAmp; it takes care of all the background work, such as converting SQL into the appropriate Force.com Web Services API calls, sending data from on-premise to Force.com, and bringing data back from Force.com to on-premise.
DbAmp System Stored Procedures
The DbAmp system stored procedures do the bulk of the work to back up data completely or incrementally. They also handle the bulk inserts, deletes, updates and upserts. Here are the most important stored procedures:
  • SF_REPLICATE – The SF_Replicate stored procedure creates a new table which is an exact replica of a Force.com object and imports every single record from that object into the local, similarly named table. If this table already exists, it takes a temporary backup and drops and recreates this table.
  • SF_REFRESH – The SF_Refresh stored procedure synchronizes the local table with its corresponding Salesforce object. It deletes records from the local table if they were deleted in Salesforce, pulls down any records that were added in Salesforce but do not exist locally, and refreshes records whose fields were updated in Salesforce.
  • SF_BULKOPS – The SF_Bulkops stored procedure is the key stored procedure used to send new or updated records to Force.com, and even to delete existing records.
There are a few other stored procedures that ship with DbAmp, but these are the most widely used. In a later article in this series, we will explore these three stored procedures in depth to gain a deep understanding of how they affect the way a developer designs and develops integration jobs.
Anatomy of a DbAmp Job
Using DbAmp to integrate on-premise data with Force.com requires SQL skills, as the developers write their integration entirely in SQL. The DbAmp integration can be done using either regular SQL jobs or SSIS packages. For the most part, this article series will use only SQL jobs to implement the integration.
SQL Jobs
The anatomy of a SQL job that uses DbAmp varies slightly depending on the task involved and when it runs. For uploads, we first refresh the local tables, populate a load table, and then perform the upload operation (CUD). The load table is a physical table, but temporary in nature, that mimics the Salesforce object. The load table name should start with the name of the object it deals with; for example, to perform an upload operation on the ‘Case’ object, the load table name can be ‘Case_Load’ or ‘Case_BillingIssues_Load’, but it can’t be ‘ABC_Load’. The load table need not have every column of the Salesforce object and can contain only the fields/columns that the job is going to affect. The only other column that is required is one named ‘Error’, which DbAmp uses to report the status information after the SF_BULKOPS call. The load table can be dropped or left as it is after the SF_BULKOPS call; in general, it’s a best practice to first check for its existence and create it if it doesn’t exist, and to drop it just before the end of the logic in the SQL job.
At a high level, the anatomy of a DbAmp SQL job looks as follows (a skeleton script is shown after these lists):
To upload data from the local on-premise SQL database to the Force.com platform for frequent jobs:
  • Refresh the local table(s) from the Salesforce objects using SF_REFRESH (or its variants)
  • Create the load table
  • Populate the load table with the data that needs to be sent to Salesforce
  • Use SF_BULKOPS to upload the data. The upload operation can be ‘Insert’, ‘Update’, ‘Upsert’ or ‘Delete’
  • Check the ‘Error’ column in the load table to manage the error handling scenarios
To upload data from the local on-premise SQL database to the Force.com platform for nightly jobs:
  • Replicate the local table(s) from the Salesforce objects using SF_REPLICATE (or its variants)
  • Create the load table
  • Populate the load table with the data that needs to be sent to Salesforce
  • Use SF_BULKOPS to upload the data. The upload operation can be ‘Insert’, ‘Update’, ‘Upsert’ or ‘Delete’
  • Check the ‘Error’ column in the load table to manage the error handling scenarios
To download data from the Force.com platform to the local on-premise SQL database for frequent jobs:
  • Refresh the local table(s) from the Salesforce objects using SF_REFRESH (or its variants)
  • Perform CUD (create, update, delete) operation(s) on your other databases, retrieving data from the refreshed table(s)
To upload data from local on-premise SQL database to Force.com platform for frequent jobs.
  • Replicate the local table(s) from the Salesforce objects using SF_REPLICATE (or it’s other variants)
  • Perform CUD operation(s) operation(s) on your other database retrieving data from the refreshed table(s).
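As a concrete illustration of the first pattern above, a simplified job body for a frequent upload on the Case object might look like the following sketch. The linked server name, the columns in the load table and the selection criteria are assumptions for illustration only.

-- 1. Refresh the local Case table from Salesforce
EXEC SF_Refresh 'SALESFORCE', 'Case'

-- 2. Create the load table if it does not exist ('Error' is required by SF_BulkOps)
IF OBJECT_ID('Case_Load') IS NULL
    CREATE TABLE Case_Load (
        Id      NCHAR(18)     NULL,
        Status  NVARCHAR(40)  NULL,
        Error   NVARCHAR(255) NULL
    );

-- 3. Populate the load table from the refreshed local copy
INSERT INTO Case_Load (Id, Status)
SELECT Id, 'Escalated'
FROM   [Case]                      -- local table maintained by SF_Replicate/SF_Refresh
WHERE  Status = 'New';             -- illustrative selection criteria

-- 4. Send the changes to Salesforce
EXEC SF_BulkOps 'Update', 'SALESFORCE', 'Case_Load'

-- 5. Check the Error column (DbAmp writes the per-row status here);
--    the exact success text depends on the DbAmp version, so verify against its documentation
SELECT * FROM Case_Load WHERE Error NOT LIKE '%success%'

-- 6. Drop the load table at the end of the job
DROP TABLE Case_Load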
Conclusion
This article introduced DbAmp, a tool to integrate local on-premise databases with Force.com data, and discussed the benefits and the underlying mechanism of how it works. It also discussed the basic constructs of the DbAmp tool and the anatomy of a DbAmp SQL job. In the next part, we will explore how data can be inserted from the local SQL database into Force.com using a DbAmp SQL job.
Stay tuned.

Demystifying Object IDs in Force.com platform

The demystifying series takes a single concept, idea, technology, standard or feature, dissects it and provides a thorough understanding of that particular thing.

In the Salesforce.com platform, the fundamental mechanism for storing data is objects. Objects define the schema for storing related information about a particular ‘thing’. This ‘thing’ can be either a standard object such as Accounts, Contacts, Cases, Leads, Opportunities, or a custom object that the users define to meet business requirements. Objects can contain standard fields such as CreatedBy and LastModifiedBy as well as custom fields that the users define. A record is an instance of an object, and every object can have zero or more records. Every record has a unique identifier represented by a standard field named ‘ID’. This blog post is all about this ‘ID’ and the important role it plays in handling objects and relationships.

Note: I’m going to use ‘ID’ (upper case) to denote the ID type, while ‘Id’ (proper case) to denote that it is a value.

ID

The ID field is defined automatically by the platform whenever an object is created. The ID field is indexed and is never shown in the field definition, hence it cannot be edited or deleted. The ID field acts like a primary key, so every record can be uniquely identified by its ID. The value of this field cannot be updated either through the UI or through the web services. IDs are unique within an organization and are represented as either 15 digits or 18 digits. The 15 digit ID is base-62 encoded, case-sensitive and contains alphanumeric characters (0-9, a-z, A-Z). The 18 digit ID is the 15 digit ID with a 3-character suffix appended, which makes it safe to use in case-insensitive contexts. The 18 digit IDs were introduced because some applications like Microsoft Access/Visual Basic are case-insensitive and do not distinguish between 00B30000007Mlka and 00B30000007MLKA.

The first 3 characters of an ID indicate what type of entity it references. For e.g. the following table lists the first 3 characters for various entities:

# Entity Type       Starts With
1 Groups/Queues     00G
2 Profiles/Roles    00E
3 Permission Sets   0PS
4 Object Records    An object-specific alphanumeric prefix such as 5003 or 0010

The Salesforce UI always uses 15 digit IDs when viewing/editing records of objects, users, roles, groups, profiles and permission sets, while the web services always use 18 digit IDs. To get the 15 digit ID from the 18 digit ID, simply take the first 15 digits and ignore the last 3. Salesforce recommends using 18 digit IDs, as the web service APIs always use 18 digits.
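Since the 18 digit Id simply carries 3 extra characters at the end, trimming it down to the 15 digit form is a plain substring operation, as in this small T-SQL sketch (the suffix ‘XYZ’ below is a placeholder, not a real checksum):

-- 18 digit Id as returned by the web services / DbAmp
DECLARE @id18 NCHAR(18) = '00B30000007MlkaXYZ';

-- The first 15 characters give the 15 digit Id shown in the Salesforce UI URLs
SELECT @id18 AS Id18, LEFT(@id18, 15) AS Id15;   -- 00B30000007Mlka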

IDs in Relationship

IDs are used as reference field types in relationship fields. This means that when a lookup or master-detail relationship field is defined, it uses the ID to reference the lookup or master object that it refers to. There are subtle differences in how Ids are used in the ID field and in the reference fields.

1. ID type field: The Id is generated by the platform and the user/developer cannot update it. Reference fields: They can be updated with Ids from other objects.
2. ID type field: It is not displayed in the field definitions. Reference fields: They are always displayed in the field definitions.
3. ID type field: The Id of a record is always unique within an object; for e.g. every case record has a unique Id. Reference fields: The Ids they hold are not unique; for e.g. the Last Modified By field can hold the same user Id on many records.
4. ID type field: A record cannot have a null Id. Reference fields: They can be null for lookup relationships, but in master-detail relationships the Id cannot be null.

Relationship fields, whether lookup or master-detail, are always defined on the child. When the user creates a new record, the platform automatically provides a lookup icon next to the field to choose the value from the related object. When the user selects a value from the lookup dialog window and saves it, the platform saves a ‘reference‘ in the child object to the related object. This ‘reference‘ is the ‘Id‘ of the related object. But when you view this record, you see the lookup value, which is the record name (defined by the ‘Record Name’ during object creation), and not the related object’s Id.

Let’s understand how this works with an example. I’m going to use DbAmp for this example. DbAmp is an integration tool that exposes Salesforce as another database in your SQL Server. The advantage of this approach is that you can use regular SQL (not SOQL) to do all your CRUD operations. I’m not going to show how to set up DbAmp as this is not in scope; the assumption here is that it is already set up and ready for use.

For this demonstration, I created a new object named ‘Position’. You can see that the Position object definition screen doesn’t show the ID field. Also note the standard fields that are created for this object; fields such as ‘Created By’ and ‘Last Modified By’ are lookup fields. We will cover these fields shortly.

A3-P2-PositionDefinition(1)

Image 1: Position object definition

The following screenshot shows what the position record looks like when it is created. You can see that the ID field is never displayed in the UI (because it is not part of the field definition), but the URL contains the Id, which uniquely identifies this particular record. In fact, Salesforce always uses the Id in the URLs to access records for both viewing and editing. The highlighted fields are the lookup fields; in this case they are standard fields which get created automatically when an object is created and are populated by the platform every time a record is created.

A3-P1-NewPositionRecord

Image 2: A position record seen through UI

Let’s switch back to SSMS to see what this record looks like.

A3-P1-PositionRecord

Image 3: A position record seen through DbAmp

As mentioned previously, DbAmp allows you to manipulate records using regular SQL, and the above command gets all records from the Position__c table (mapped to the Salesforce object Position__c). If you compare Images 1 and 3, you can find a couple of differences. First, there is a difference in the field names; for e.g. Image 1 shows the field names as ‘Created By’ and ‘Last Modified By’, while Image 3 shows them as ‘CreatedById’ and ‘LastModifiedById’. The reason is that every field has two names:

  • Field Label – used in the page layouts
  • Field Name/API Name – used by the web services.

The standard fields use the ‘Field Name’ convention, while the custom fields use the ‘API Name’ convention. Nevertheless, the reason Image 3 shows these fields as ‘CreatedById’ and ‘LastModifiedById’ is that DbAmp knows these fields are lookup fields, hence it appends ‘Id’ to the column name. Second, in the UI the Id is 15 digits (displayed in the URL), but in DbAmp the Id is 18 digits. The reason is that the web services always use 18 digits, and DbAmp uses the web services behind the scenes to convert the SQL into web service API calls to perform the CRUD operations. The point to note here is that relationship fields use Ids as references to link to other objects, and these Ids can be updated in the relationship fields, while the ID field itself cannot be updated.
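For reference, the kind of query shown in Image 3 would look something like the sketch below when issued against the DbAmp linked server (here assumed to be named SALESFORCE; a locally replicated copy of Position__c could be queried the same way without the linked server prefix):

-- Query the Position__c object directly through the DbAmp linked server
SELECT Id, Name, CreatedById, LastModifiedById, OwnerId
FROM   SALESFORCE...Position__c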

IDs in CRUD

IDs play a crucial role in the CRUD operations. In fact, most CRUD operations use IDs in one way or another. The following sections detail how the CRUD operations use IDs.

Create

As explained previously, whenever a record is created, the ‘ID’ field is auto-populated by the platform and the user has no way to update or delete it. A record in a Salesforce object cannot exist without an ID, and the ID is always unique within the organization.

  • Create through Web Services: When the Create() method in the web service is called, it creates a new record and returns the 18 digit ID that the platform creates as part of the record creation.
  • Create through UI: When a new record is created through the Salesforce UI, it always displays the 15 digit ID.

Upsert

Upsert either creates a new record if it doesn’t exist or updates it if it exists. The upsert() call can be used on custom objects only if the object has an external ID field. When the upsert() method is called, it uses the ID field (or the specified external ID field) to check whether a record already exists. If it exists, it updates the record; otherwise, it creates a new record and returns the 18 digit Id of the newly created record.

Update

Update can accept either the ID field or any other field in the condition list, but it cannot accept the ID field in the list of fields being updated. If one or more Ids are listed in the condition, the same number of records are updated; if a non-Id field is used in the condition list, it may update zero or more records depending on how many records match the condition.

  • Update through web services: Certain fields of an object (both in standard & custom objects), such as ‘Created By’, ‘Last Modified By’ and ‘Last Modified Date’, can never be updated; the ID field is one such field. However, the reference (lookup & master-detail relationship) fields, such as ‘OwnerId’, can be updated (see the sketch after this list).
  • Update through UI: Similar to web services, certain fields cannot be updated including the ID field, while the reference (lookup & master-detail relationship) fields can be updated.
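To tie this back to DbAmp, an update that changes a reference field (OwnerId) while using the ID field only to identify the rows might look like the sketch below. The linked server name, the user Id literal and the selection criteria are placeholders for illustration.

-- Build a load table containing the Id (to identify the records),
-- the reference field being changed, and the Error column required by SF_BulkOps
IF OBJECT_ID('Position__c_Load') IS NOT NULL DROP TABLE Position__c_Load;

SELECT Id,
       '005300000012345AAA'          AS OwnerId,   -- placeholder user Id
       CAST(NULL AS NVARCHAR(255))   AS Error
INTO   Position__c_Load
FROM   Position__c                                  -- locally refreshed copy
WHERE  Name LIKE 'Sr%';                             -- illustrative criteria

-- The Id column only identifies which records to update; the ID field itself is never changed
EXEC SF_BulkOps 'Update', 'SALESFORCE', 'Position__c_Load'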

Delete

Delete can accept either the ID field or any other field in the condition list. If one or more Ids are listed in the condition, the same number of records are deleted; if a non-Id field is used in the condition list, it may delete zero or more records depending on how many records match the condition.

Conclusion

The concept of ID is very simple, but it is often not well understood. Understanding how IDs are used in the Force.com platform helps developers design and write Force.com applications effectively.

Salesforce.com certified Force.com Developer Certification: Knowledge Check Questions-Part-IV

This blog post is all about the knowledge check questions from the premier training catalog. The questions are listed below, with the answers at the end.
Questions:
1. The Created Date can be updated for a record via the API as long as the Inserting System Fields feature is turned on.
 
A. True
B. False
2. What are the ways in which you can obtain a record’s ID? (Select all that apply.)
 
A. URL
B. Reports
C. API
D. On each record
E. Import Wizard
3. What is the advantage of performing upserts while migrating data?
 
A. Data migration to the Salesforce application is faster
B. It gives an error if there is a missing data record
C. It can use the Salesforce ID or an external ID to prevent data duplication and create missing records.
D. It duplicates a record using the external ID in Salesforce
4. The upsert function allows developers to use relationships defined in legacy systems while importing data into Force.com
 
A. True
B. False
5. Developers need to know the record IDs in Salesforce to use upsert with relationships.
 
A. True
B. False
6. For which of the following activities can developers use import wizards?
 
A. Load up to 50,000 records
B. Load any object supported by the API
C. Schedule regular data loads such as nightly feeds
D. Mass delete supported objects
7. The objects that developers want to load and the operation they want to perform are key factors in determining the right data management tool.
 
A. True
B. False
8. The Data Loader supports importing of data from CSV and exporting data to CSV.
 
A. True
B. False
9. The ________ operation uses a SOQL string to export a set of records from Salesforce.
 
A. Extract
B. Insert
C. Upsert
D. Update
10. To transfer a record that a user does not own, the user needs to have the required user permissions and read sharing access on the record.
 
A. True
B. False
 
11. The mass transfer tool can be used to transfer only accounts and custom objects from one user to another.
 
A. True
B. False
 
12. Which are the permissions that allow a system administrator to manage an application? (Select all that apply.)
 
A. View All Data
B. Read
C. Create
D. Modify All Data
E. Edit All Data
13. System Administrators can customize the permissions of both standard and custom profiles
A. True
B. False
14.  Identify all true statements
 
A. If you remove access to an app from a profile, the users in that profile will still be able to see the tabs in that application.
B. If you hide a tab from a profile, the users in that profile will not be able to see records for that object.
C. If you have two record types for an object, you need to have two page layouts for that object.
D. If a user does not have access to a specific record type, they will still be able to see the records with the record type.
15. A field hidden by field-level security is still visible through the API.
 
A. True
B. False
16. What will you use to limit the picklist options?
 
A. Page Layouts
B. Record Types
C. Field-Level Security
D. Profiles
 
Answers:
1. B
2. A,B,C
3. C
4. A
5. B
6. A
7. A
8. A
9. A
10. A
11. B
12. A,D
13. B
14.
15. B
16. C.

Salesforce.com certified Force.com Developer Certification: Knowledge Check Questions-Part-III

This blog post is all about the knowledge check questions from the premier training catalog. The questions are listed below, with the answers at the end.
Questions:
1. Once you create a report that utilizes a field for counting, you can use that field in additional calculations.
 
A. True
B. False
2. When creating a formula field to allow you to count unique records in a report, what formula should you use?
 
A. 1
B. X + 1
C. X.1
D. 2
3. Once you create a new formula field to group information, what else do you need to do to show the grouping on a dashboard? (Select all that apply).
 
A. Create a formula for each of the groups.
B. Run a report using the grouping formula field
C. Create/update a dashboard component using the updated report.
D. Create a dashboard component for the non-grouped information.
4. You need to create a new formula field when you want to do which of the following? (Select all that apply).
 
A. Group scores into categories
B. Create a custom report
C. Count the number of records
D. Schedule to run a report at a later date.
5. You want to count the number of interviews conducted for each position. Once you create a formula field to count the number of interviews, what is your next step?
 
A. Run a report using the new field
B. Create a “Number of Interviews” component on the dashboard.
C. Create a report to group interviews by positions
D. Calculate the ratio between positions and interviews
6. A formula field can be used to “bucket” or group together similar values to make a report or dashboard easier to read
A. True
B. False
7. Analytic snapshots map the fields on the source report to fields on a custom object
 
A. True
B. False
8. Analytic snapshots always run weekly
 
A. True
B. False
9. Analytic snapshots allow you to view historical data and analyze trends.
 
A. True
B. False
10. Setting up the analytic snapshot includes which of these steps? (Select all that apply)
 
A. Selecting the source report
B. Selecting the target object
C. Mapping the fields on the report to the target object
D. Running the report to analyze historical data
E. Scheduling the frequency for taking the snapshot
11. Analytic snapshots can be scheduled daily, weekly, or monthly
 
A. True
B. False
12. You are running a report that utilizes data captured in an Analytic Snapshot to analyze historical trends. What should you include in the report?
 
A. The source report
B. The target object
C. Both the source report and the target object.
 
13. The more time exists between the Start and End Dates, the more snapshots will be listed in the report.
 
A. True
B. False
 
14.  How do analytic snapshots facilitate analyzing trends?
 
A. They automatically run reports at regular intervals so users can compare the reports side by side
B. They allow users to capture the highest and lowest points of data on a given time so that users can analyze the range of performance
C. They automatically store data at regular intervals so that users can report on and analyze that data later.
D. They provide a comprehensive analysis of past performance compared to current performance.
15. To set up the analytic snapshot, you must map ______
 
A. Fields on the resource report to the snapshots
B. Fields on the source report to the custom object
C. Fields on the custom object to fields on the source report
D. Fields on the custom object to the scheduled snapshots
16. Which path would you follow to create a custom object as the target object for an analytic snapshot?
 
A. Your Name  | Setup | Customize | Objects
B. Your Name | Setup | Create | Objects
C. Your Name | Setup | Data Management | Analytic Snapshots
D. Your Name | Setup | Data Management | Objects
17. In which of these reporting scenarios would you need a partner’s help?
 
A. You want to show who are the employees who interviewed each candidate
B. You want to Create a report that lists all contacts that have no cases associated with them.
C. You want to use different colors to highlight certain text values on a report.
D. You want the names of highly qualified candidates to feed into a job offer form, which you’ll mail to these candidates
E. You want to create a Candidate Competency report that will tell how qualified the candidates that your teams are interviewing are.
18. Some challenges are best solved using a RaaS application; others are best solved using a BIaaS application; and yet others – using a Data Warehousing application
 
A. True
B. False
19. To find partner applications that extend reporting capabilities, go to
 
 
Answers:
1. A
2. A
3. B,C
4. A,C
5. A
6. A
7. A
8. B
9. A
10. A,B,C,E
11. A
12. B
13. A
14. C
15. B
16. B
17. B,C,D
18. A
19. B

Salesforce.com certified Force.com Developer Certification: Knowledge Check Questions-Part-II

This blog post is all about the knowledge check questions from the premier training catalog. The questions are listed below, with the answers at the end.

Questions:

1. All users who access the same report at the same time will see the same data.
A. True
B. False
2. Access to a report is determined by the folder in which a report is stored.
A. True
B. False
3. What is the purpose of a summary report?
A. Provides a simple listing of your data
B. Provides a listing of data with sorting and subtotaling
C. Helps compare related totals by summarizing data in a grid
D. Allows application of custom filters to show limited data.
4. Which of the following tasks can an Admin User perform? (Select all that apply.)
A. Run a report on Monday, and then schedule the same report to run every Monday for the next four weeks.
B. Schedule reports to run in the time zone of the report recipient.
C. Email a report in a Public folder to Salesforce users in different time zones.
D. Schedule a report to run only once, a week from today.
5. Saving a custom report will save a snapshot of that data at that moment in time
A. True
B. False
6. You can use filter options with “AND” or “OR” relationships.
 
A. True
B. False
7. Which of the following statements about reports is accurate? (Select all that apply.)
A. Reports show only the data and fields that are visible to the user running the report.
B. New objects and fields are immediately available for reporting.
C. Running a saved report shows the data in real-time based on the saved parameters.
D. Reports can include standard objects and/or custom objects.
8. To email a report to other users, the report must be in a public folder.
A. True
B. False
9. To export the data to Excel but still keep the report formatting, click Export Details.
 
A. True
B. False
10. You can select the columns you want to appear on your report, and their order as well.
 
A. True
B. False
11. To list your data including sorting and subtotaling of the data, select a Tabular Report
 
A. True
B. False
12. What is the first step when creating a custom report?
 
A. Choose the fields by which to group and subtotal the data.
B. Select the objects to include in the report.
C. Select the columns to include and order them
D. Select the type of report
13. Match each component type on the left with the appropriate requirements on the right?
 
A. Chart : Show the average time for a Temp Contract.
B. Table: Show recruiting statistics from an external system.
C. Gauge: Compare the number of temporary positions across departments.
D. Metric: Show the top five teams by number of open positions
E. Visualforce Page: Show progress towards filling 100% of open positions
14. Match each item on the left with the way it manages security on the right?
 
A. Dashboard Folder: Determines what data is displayed on the dashboard
B. Running User: Control who sees a dashboard
15. If you have access to a dashboard component, then you can see its underlying report.
 
A. True
B. False
16. You can schedule a dashboard refresh every other Wednesday.
 
A. True
B. False
17. If you check the option to email the dashboard ‘To me’, what will be emailed to you?
 
A. A notification of the refresh schedule that you set up
B. A notification when the dashboard has refreshed
C. The report used as a source for the dashboard
D. The refreshed dashboard components
18. A dashboard can have up to how many components? __________
 
19. Which of these are types of dashboard components? (Select all that apply.)
 
A. Visualforce page
B. Graph
C. Metric
D. Table
E. Gauge
F. Chart
20. What does a running user determine?
 
A. The data displayed on a dashboard
B. The users who can see a dashboard
C. The frequency of dashboard auto refresh
D. The users who receive email notification of dashboard refreshes
21. Once you create a dashboard, the first component is added by default.
 
A. True
B. False
22. Which of the following dashboard layout styles are available? (Select all that apply.)
 
A. One Column
B. Two Columns
C. Three Columns
D. Four Columns
23. You can specify the column under which to add a new component.
 
A. True
B. False
24. Components can be moved around after they are created.
 
A. True
B. False
25. Which report types are NOT created automatically? (Select all that apply.)
 
A. Positions with Offers
B. Candidates with Reviews
C. Position
D. Candidates with Interviewers
26. Report Types define the objects that are included on a report.
 
A. True
B. False
27. Which statements about report types are true? (Select all that apply.)
 
A. Standard report types are predefined.
B. Reports based off a standard report type cannot be customized
C. A standard report type is created upon creation of a custom object.
D. A standard report type is similar to an outer join.
28. To incorporate fields from other objects in a CRT, you can include the object relationship or include fields related via lookup.
 
A. True
B. False
29. “With or Without” reports shows all “A” records that have a related “B” record.
 
A. True
B. False
30. How many objects can you choose when defining which related records from different objects are returned in the report results?
 
A. 4
B. 6
C. 8
D. 10
31. System Administrators and users with the “Manage Custom Report Types” permission can create CRTs.
 
A. True
B. False
32. CRTs allow you to remove fields from the report builder and choose which fields should be selected by default.
 
A. True
B. False
Answers:
1. B
2. A
3. B
4. A,C,D
5. B
6. A
7. A,B,C,D
8. A
9. B
10. A
11. B
12. B
13.
A. Chart : Compare the number of temporary positions across departments.
B. Table: Show the top five teams by number of open positions
C. Gauge: Show progress towards filling 100% of open positions
D. Metric: Show the average time for a Temp Contract.
E. Visualforce Page: Show recruiting statistics from an external system.
14.
A. Dashboard Folder: Control who sees a dashboard
B. Running User: Determines what data is displayed on the dashboard
15. B
16. B
17. D
18. 20
19. A,C,D,E,F
20. A
21. B
22. B,D
23. A
24. A
25. A,B,D
26. A
27. A,C
28. A
29. B.
30. A
31. A
32. A

Salesforce.com certified Force.com Developer Certification: Knowledge Check Questions-Part-I

This blog post is all about the knowledge check questions from the premier training catalog. The questions are listed below, with the answers at the end.

Questions:

1. In a custom application, you can include only a landing page.
A. True
B. False
2. Which of the following statements are true about custom objects? (select all that apply)
A. Salesforce provides a set of custom objects that you can use to store data
B. After you create a custom object, you need to add the user interface
C. Custom objects come with an infrastructure include reporting, auditing, and access control.
D. When you create a custom object, you get a direct access to the database.
E. Custom objects are reportable and searchable.
3. Which of the following statements is true about custom tabs?
A. A custom tab is a user interface component you create to display custom object data or other web content embedded in the application
B. There are two types of custom tabs
C. Custom object tabs display any external web-based application or web page in a user interface tab.
D. Web tabs display the data of your custom object in a user interface tab.
4. Identify the correct statements about dependent picklists. (Select all that apply)
A. Standard picklists can be controlling fields but not dependent fields.
B. The maximum number of values allowed in a controlling field is 400.
C. Before defining a dependency, you should ensure that your picklist has at least one value.
D. A custom multi-select picklist can be set as the controlling field for a dependent field.
E. If a field represents both a controlling field and a dependent field, it cannot contain more than 300 values.
5. When customizing a page layout, you can change the field locations, page setting customizations, and related list customizations.
A. True
B. False
6. Match the given relationships with their features.
A. Lookup relationship :  A junction object is used to connect the two objects
B. Many-to-many relationship :  Relationship can be only one layer deep.
C. Master-detail relationship : Security, access and deletion of child records are independent of the parent.
7. Match the following features of the Enhanced Page Layout Editor with their descriptions
A. Quick Find : Switch to other layouts for the same object
B. Section : Can search for an element in palette.
C. Blank Spaces: Adds section anywhere above the related lists on the page layout.
D. Page Layout : Helps visually align and distinguish elements on the page.
8. Which of the following statements are true about a lookup relationship? (Select all that apply)
A. A maximum of 2 relationships is allowed per object
B. A lookup relationship can span to multiple layers.
C. A parent record is required for each child.
D. A lookup field is not a required field.
E. Access to parent determines access to children.
9. Custom formula fields are smart custom fields that can be used to build business-specific calculations using simple wizards and Excel-like formulas.
A. True
B. False
10. Which of these statements are true for cross-object formula fields? (Select all that apply)
A. You can reference cross-object formulas in roll-up summary fields.
B. You cannot reference merge fields for objects related to activities
C. You can use cross-object formula fields to reference record owner merge fields for any object.
D. The limit for cross-object formulas is 10 unique relationships per object across all formulas and rules
11. Which statements about roll-up summary formulas are correct? (Select all that apply)
A. They calculate values from a set of related records.
B. They are read/write formula fields
C. They can be created to display a value on a master record based on the values of records in a detail record.
D. They can be added for all lookup relationships.
12. If all error conditions are false, the corresponding error message is displayed and the save is aborted.
A. True
B. False
13. A change set can be used to deploy metadata only between __________ orgs
14. What happens if one component of a change set fails to deploy?
A. The entire change set fails to deploy.
B. The entire change set gets deployed.
C. Except the failed component, all other components of change sets get deployed.
D. The deployment time increases.
15. Change sets can be used to move data and metadata from one organization to another.
A. True
B. False

Answers

1. B
2. C, E
3. A
4. A,C,E
5. A
6.
A. Lookup relationship :  Security, access and deletion of child records are independent of the parent.
B. Many-to-many relationship :  A junction object is used to connect the two objects
C. Master-detail relationship : Relationship can be only one layer deep.
7.
A. Quick Find : Can search for an element in palette.
B. Section : Adds section anywhere above the related lists on the page layout.
C. Blank Spaces: Helps visually align and distinguish elements on the page.
D. Page Layout : Switch to other layouts for the same object
8. B, D
9. A
10. C,D
11. A,C
12. B
13. related
14. A
15. B